What is the reasoning behind the "anthropic shadow" effect?
By tobycrisford 🔸 @ 2019-09-03T13:21 (+4)
Suppose that every million years on the dot, some catastrophic event either happens or does not happen with probability P or (1-P) respectively. Suppose that if the event happens at one of these times, it destroys all life, permanently, with probability Q. Suppose that Q is known, but P is not, and we initially adopt a prior for it which is uniform between 0 and 1.
Given a perfect historical record of when the event has or has not occurred, we could update our prior for P based on this evidence to obtain a posterior for P which will be sharply peaked at (# of times event has occurred) / (# of times event could have occurred). I will refer to this as the "naive estimate".
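To make this concrete, here is a minimal sketch of that update (the numbers m and k below are made up purely for illustration): with a uniform prior and k occurrences out of m opportunities, the posterior is Beta(k+1, m-k+1), whose mode is the naive estimate k/m.

```python
# A minimal sketch (numbers made up for illustration): Bayesian update for P
# given a perfect historical record and a uniform prior on [0, 1].
# With k occurrences out of m opportunities, the posterior is Beta(k+1, m-k+1),
# whose mode is k/m - the "naive estimate".
from scipy.stats import beta

m = 10   # number of million-year checkpoints so far (hypothetical)
k = 3    # number of times the event actually occurred (hypothetical)

posterior = beta(k + 1, m - k + 1)
naive_estimate = k / m              # mode of the posterior
posterior_mean = posterior.mean()   # (k + 1) / (m + 2)

print(f"naive estimate (posterior mode): {naive_estimate:.3f}")
print(f"posterior mean:                  {posterior_mean:.3f}")
```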
In this paper, the naive estimate is argued to be wrong because of an effect called "anthropic shadow". In particular, it is supposed to be an underestimate. My understanding of the argument is the following: if you pick a fixed value of P and simulate history a large number of times, then in the cases where an observer like us evolves, the observer's calculation of (# of times event has occurred) / (# of times event could have occurred) will on average be significantly below the true value of P. This is because observers are more likely to evolve after periods of unusually low catastrophic activity. In making this argument, they take a frequentist approach to estimating P (P is taken to be a fixed unknown parameter rather than a random variable with some prior distribution), but my understanding is that a fully Bayesian approach is also supposed to differ from the naive estimate of the previous paragraph.
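As a sanity check, a rough simulation of that frequentist argument as I understand it (the parameter values below are made up, not taken from the paper) does show the effect: among the simulated histories in which observers survive to look back, the average naive estimate falls below the true P.

```python
# A rough simulation of the anthropic-shadow argument as I understand it
# (parameters made up, not taken from the paper). At fixed P and Q, condition
# on survival and compare the average naive estimate with the true value of P.
import numpy as np

rng = np.random.default_rng(0)
P, Q = 0.3, 0.5      # hypothetical catastrophe and lethality probabilities
checkpoints = 20     # million-year checkpoints before observers look back
trials = 200_000

events = rng.random((trials, checkpoints)) < P              # catastrophe occurs?
fatal = events & (rng.random((trials, checkpoints)) < Q)    # ...and is it fatal?
survived = ~fatal.any(axis=1)                               # observers only exist here

naive = events[survived].mean(axis=1)   # each surviving observer's estimate of P
print(f"true P: {P}")
print(f"average naive estimate among survivors: {naive.mean():.3f}")  # noticeably below P
```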
But consider an analogous non-anthropic scenario. Suppose we flip a biased coin a hundred times, which lands heads with probability P (unknown). Whenever this coin lands heads, we immediately flip a second biased coin which lands heads with probability Q (known). If we ever get two heads, one from each coin, we paint a blue state marker red, and it remains red from then on. After the hundred tosses of Coin #1, we find that the state marker is blue, and Coin #1 has landed heads N times. How should we estimate P?
In this scenario, it is true that if you run a large number of simulations at fixed P and look at the naive estimate (N/100) in the cases which end blue, it will on average be below the true value of P, for the same reason as in the previous scenario. Nevertheless, in this scenario, I think the naive estimate is still correct. If N is already given, then the colour of the state marker gives you no additional evidence about the value of P, because the colour depends on P only through N. What the simulation argument misses, by working within the blue-state outcomes at fixed P, is that you are more likely to finish in a blue state when P is smaller.
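A quick way to see this numerically (my own check, with a made-up value of Q and N): the probability of ending blue given N heads of Coin #1 is (1-Q)^N, which does not depend on P, so with a uniform prior the posterior for P given (N, blue) is identical to the posterior given N alone.

```python
# My own numerical check: with a uniform prior, the posterior for P given
# (N heads, marker still blue) equals the posterior given N alone, because
# P(blue | N heads, P) = (1 - Q)**N does not depend on P.
import numpy as np

Q = 0.4                              # hypothetical known value of Q
N, tosses = 30, 100                  # hypothetical observed data
p_grid = np.linspace(0.001, 0.999, 999)

binom_lik = p_grid**N * (1 - p_grid)**(tosses - N)   # P(N heads | P), up to a constant
blue_lik = binom_lik * (1 - Q)**N                    # P(N heads and blue | P), up to a constant

post_given_N = binom_lik / binom_lik.sum()
post_given_N_and_blue = blue_lik / blue_lik.sum()

print(np.allclose(post_given_N, post_given_N_and_blue))   # True: blue adds no evidence about P
```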
So the first part of my question is: What is the difference between the existence/non-existence distinction, and the red/blue distinction, which makes anthropic shadow happen in the former case but not the latter?
And the second part is: How can the anthropic shadow argument be phrased in a fully Bayesian way? How should I obtain a posterior for P given some prior, the historical record, and the fact of my existence?
Ramiro @ 2019-09-05T19:10 (+2)
I'm no expert in the field, but this problem really bothers me, too - so perhaps you should read my remarks as additional questions.
So the first part of my question is:
"Anthropic shadow" is an observation bias / selection effect concerning the data-generating process. I don't see such bias in your red/blue example, where (CMIW) you have both perfect info on Q, N and the final state of the marker. For this to be analogous to anthropic bias regarding x-risks, you should add a new feature - like someone erasing your memory and records with probability P* whenever Coin#1 lands heads.
(My "personal" toy model of anthropic shadow problems is someone trying to estimate the probability of heads for the next coin toss, after a sequence TTTT..., knowing that whenever the coin lands heads, the memory of previous tosses is erased. It's tempting to just apply Laplace's Rule of Succession here - but that would mean that knowing about the amnesia possibility gives you no information.
I don't think that's an exact representation of our anthropic bias over x-risks, but it does highlight a problem that's easy to underestimate.)
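In case it helps, here's a rough sketch of the kind of process I have in mind (this is just my reading of the toy model, and the parameter values are made up): the only record the observer ever sees is the run of tails since the last head, and the question is what that surviving record should be taken to say about P.

```python
# A rough sketch of the toy model (my own reading, parameters made up):
# whenever the coin lands heads, the record is erased, so the observer only
# ever sees the run of tails since the last head. Applying Laplace's Rule of
# Succession to that surviving record, as if it were the whole history,
# ignores the erasure process that produced it.
import numpy as np

rng = np.random.default_rng(1)
P = 0.3              # hypothetical true heads probability
tosses = 50
trials = 100_000

flips = rng.random((trials, tosses)) < P     # True = heads (record erased)

# Length of the surviving record: tails since the last head (or since the start).
any_heads = flips.any(axis=1)
last_head = np.where(any_heads, tosses - 1 - np.argmax(flips[:, ::-1], axis=1), -1)
record_len = tosses - 1 - last_head

# Laplace's rule applied naively to a record of all tails: (0 + 1) / (record_len + 2).
laplace = 1 / (record_len + 2)
print(f"true P: {P}")
print(f"average naive Laplace estimate from surviving records: {laplace.mean():.3f}")
```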
And the second part is: How can the anthropic shadow argument be phrased in a fully bayesian way?
I guess that's the jackpot, right? idk. But one of the best attacks on this problem I've seen so far is the Snyder-Beattie, Ord & Bonsall Nature paper.
tobycrisford @ 2019-09-06T10:39 (+1)
Thank you for your answer!
I think I agree that there is a difference between the extinction example and the coin example, to do with the observer bias, which seems important. I'm still not sure how to articulate this difference properly though, and why it should make the conclusion different. It is true that you have perfect knowledge of Q, N, and the final state of the marker in the coin example, but you do in the (idealized) extinction scenario that I described as well. In the extinction case I supposed that we knew Q, N, and the fact that we haven't yet gone extinct (which is the analogue of a blue marker).
The real difference I suppose is that in the extinction scenario we could never have seen the analogue of the red marker, because we would never have existed if that had been the outcome. But why does this change anything?
I think you're right that we could modify the coin example to make it closer to the extinction example, by introducing amnesia, or even just saying that you are killed if both coins ever land heads together. But to sum up why I started talking about a coin example with no observer selection effects present:
In the absence of a complete consistent formalism for dealing with observer effects, the argument of the 'anthropic shadow' paper still appears to carry some force, when it says that the naive estimates of observers will be underestimates on average, and that therefore, as observers, we should revise our naive estimates up by an appropriate amount. However, an argument with identical structure gives the wrong answer in the coin example, where everything is understood and we can clearly see what the right answer actually is. The naive estimates of people who see blue will be underestimates on average, but that does not mean, in this case, that if we see blue we should revise our naive estimates up. In this case the naive estimate is the correct bayesian one. This should cast doubt on arguments which take this form, including the anthropic shadow argument, unless we can properly explain why they apply in one case but not the other, and that's what I am uncertain how to do.
Thank you for sharing the Nature paper. I will check it out!