Sleeping Beauty

For the last couple of weeks, I have fallen asleep thinking about Sleeping Beauty. Not the heroine of the Charles Perrault fairy tale, or her Disney descendant, but the subject of a thought experiment first described in print by philosopher Adam Elga as follows:

Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Elga, A. “Self‐locating belief and the Sleeping Beauty problem”, Analysis 60, 143–147 (2000)

It has become traditional to add that Sleeping Beauty is initially put to sleep on Sunday and is either woken up on Monday (Heads) or Monday and Tuesday (Tails). Then on Wednesday she is woken for the final time and the experiment is over. She knows in advance exactly what is going to take place, believes the experimenters and trusts that the coin is fair.

Much like the Monty Hall problem, Sleeping Beauty has stirred enormous controversy. There are two primary schools of thought on this problem: the thirders and the halfers. Both sides have a broad range of arguments, but put simply they are as follows.

Halfers argue that the answer is 1/2. On Sunday Sleeping Beauty believed that the chance of Heads was 1/2; she has learned nothing new on waking, so the chances are still 1/2.

Thirders argue that the answer is 1/3. If the experiment is repeated over and over again, approximately 1/3 of her awakenings will follow Heads and 2/3 of them will follow Tails.
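
To make the thirder's frequency claim concrete, here is a minimal simulation sketch that repeats the experiment many times and tallies awakenings by the result of the coin toss (the trial count and function name are arbitrary):

```python
import random

def awakening_frequencies(n_experiments=100_000, seed=1):
    """Repeat the Sleeping Beauty experiment and tally awakenings by coin result."""
    random.seed(seed)
    heads_awakenings = 0
    tails_awakenings = 0
    for _ in range(n_experiments):
        if random.random() < 0.5:
            heads_awakenings += 1   # Heads: woken once (Monday)
        else:
            tails_awakenings += 2   # Tails: woken twice (Monday and Tuesday)
    total = heads_awakenings + tails_awakenings
    return heads_awakenings / total, tails_awakenings / total

print(awakening_frequencies())  # roughly (0.333, 0.667)
```

Whether those awakening frequencies are the right thing for Beauty's credence to track is, of course, exactly what the two camps dispute.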

I first came across this problem myself when a friend alerted me to a blog post by my former supervisor Bob Walters, who describes the thirder position as an “egregious error”. But as Bob notes, there are many in the thirder camp, including Adam Elga himself, physicist Sean Carroll and statistician Jeffrey Rosenthal.

As for my own view, I will leave you in suspense for now, mainly because I’m still thinking it through. Although superficially similar, I believe it is a far more subtle problem than Monty Hall, and it poses challenges for what it means to take the pure mathematical theory of probability into a real-world setting. Philosophers distinguish between the mathematical concept of “probability” and real-world “credence”, a Bayesian-style application of probability to real-world beliefs. I used to think that this was a bit fanciful on the part of philosophers. Now I am not so sure: applying probability is harder than it looks.

Let me know what you think!

Image Credit: Serena-Kenobi


29 thoughts on “Sleeping Beauty”

  1. Zebra

    Suppose Beauty has a betting website called ULittleBeauty.com. Each time she wakes up she takes bets on Heads at odds she sets.
    If she is a halfer she will offer even odds, i.e. if it is Heads she will pay back the $1 bet + $1 profit. If she is a thirder she will offer 2:1 odds on Heads, so if it is Heads she will pay out $1 + $2 profit. We agree that 1/3 of the times when she wakes up it is Heads. Halfer Beauty’s expected return then is $1 - 1/3*$2 = 33 cents. Thirder Beauty’s return is $1 - 1/3*$3 = $0. So while Halfer Beauty makes a profit on average, it is only because she misquotes the odds. Nobody who knows the setup will bet with her because they will expect to lose on average 33 cents. Thirder Beauty breaks even because the odds she quotes of it being Heads are based on her believing that the probability of it being Heads, given she is awake, is 1/3. If Beauty is a bettor she should also be a thirder.
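
    A minimal simulation sketch of this betting set-up (assuming $1 stakes and the two quoted odds) reproduces those numbers:

    ```python
    import random

    def beauty_profit_per_bet(odds_on_heads, n_experiments=100_000, seed=2):
        """Beauty takes a $1 bet on Heads at every awakening, paying the given odds."""
        random.seed(seed)
        profit, n_bets = 0.0, 0
        for _ in range(n_experiments):
            heads = random.random() < 0.5
            for _ in range(1 if heads else 2):   # one awakening on Heads, two on Tails
                n_bets += 1
                profit += 1                      # she collects the $1 stake
                if heads:
                    profit -= 1 + odds_on_heads  # returns the stake plus winnings
        return profit / n_bets

    print(beauty_profit_per_bet(1))  # even odds (halfer): about +$0.33 per bet
    print(beauty_profit_per_bet(2))  # 2:1 odds (thirder): about $0.00 per bet
    ```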

  2. Stubborn Mule Post author

    @Zebra – the betting argument is certainly a common one from thirders. But there is something a little odd with the whole set-up. If we consider Sunday to Wednesday an “experiment”, with a single coin flip, then even if we imagine performing the experiment repeatedly, in an experiment where Tails is tossed Sleeping Beauty bets twice, but with Heads on the coin she only bets once. That feels (almost) like skewing the payout. If I tossed a coin and offered you a bet that cost you $2 on Tails and paid you only $1 on Heads, you’d only pay $0.33 for it (assuming you’re happy to take a fair bet). But that doesn’t mean that the odds of Heads are only 1/3. Is the Sleeping Beauty double bet the same? Not quite, but almost.

  3. Zebra

    Well the other argument is more technical, using Bayes theorem:

    P(H|awake) = P(awake|Heads)*P(Heads)/P(awake) = (1/2)*(1/2)/(3/4) = 1/3

    I’ll look back at your reply later. I am sure it’s not clear cut, as otherwise it would all have been sorted by now.

    I wonder if it’s more clear if she was asked “what do you think I think the odds of Heads are?”.

  4. Stubborn Mule Post author

    @Zebra – a halfer could also apply Bayes theorem, but argue that one thing Beauty knows is that regardless of the outcome of the coin toss, she will be woken (either once or twice) and therefore P(awake) = P(awake | Heads) = 1. Then

    P(H | awake) = 1 x (1/2) / 1 = 1/2.

    Explaining whether P(awake | Heads) should be 1 or 1/2 starts to get into the contentious zone of this problem!

  5. RSM

    I posted a few comments to Bob Walters’ article but the last one is still awaiting moderation (after about two weeks) so I have not continued posting there.

    I wouldn’t put too much credence in Walters’ analysis. He begins with a straw man (claiming that thirders hold that “p_3,4 = 2p_1”, in the terms defined in his article). In fact, a thirder would claim that p_3,4 = p_3 = p_4 and it would not affect her conclusion. Note that he defines p_3,4 not as the probability of being in one of states 3 or 4. It is defined as the probability of reaching 3 in one step, or reaching 4 in two steps. Both are along the same non-branching path, so they are not mutually exclusive. That fact is not lost on thirders.

    He also makes a few technical errors when discussing what he calls p_one(y), p_two(y), and p_one,two(y) — the probability of reaching state y in one step, in two steps, or in either one or two steps, respectively. First, he sets forth a false equivalence between p_3,4 and p_one,two(y), now claiming that thirders are holding that “p_one,two(y) = p_one(y) + p_two(y)”. Whether you set y to 3 or 4, the probability of reaching 3 in one step or 4 in two steps is not generally equivalent to the probability of reaching y in one or two steps (though they may have the same value in some cases).

    There is another technical error here: “Hence taking p_one,two(y) = p_one(y) + p_two(y) as the physicists/philosophers do would yield ‘sum over all y’ of p_one,two(y) = 2 whereas the total probability should be one.” But, for all y, p_one,two(y) >= p_one(y) and p_one,two(y) >= p_two(y). If, for any y, either relationship is strictly greater, then you cannot have the sum over all y of p_one,two(y) = 1. In the SB case, p_two(1) = 0 and p_one,two(1) = 1/2, so the statement does not hold. In fact, you cannot sum p_one,two(y) over all y to get a “total probability”. The summation is meaningless because the components are not mutually exclusive.

    Then he states that a “reasonable definition” for p_one,two(y) is the average of the probabilities p_one(y) and p_two(y). Again, this assertion fails. Consider as a counterexample a Markov chain such that:

    1. State 0 goes to state 1 with probability a, or to state 2 with probability b (where a + b = 1).
    2. State 1 goes to state 2 with probability c, or elsewhere with probability 1 – c.
    3. State 2 stays in state 2 with probability d, or goes elsewhere with probability 1 – d.

    Then p_one(2) = b, and p_two(2) = ac + bd. But p_one,two(2) = b + ac, which, in general, does not equal the average of p_one(2) and p_two(2).
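
    A quick numerical check of this counterexample, with arbitrarily chosen values satisfying a + b = 1:

    ```python
    from fractions import Fraction as F

    # Arbitrary values for the counterexample (a + b = 1).
    a, b, c, d = F(3, 10), F(7, 10), F(1, 2), F(2, 5)

    p_one  = b              # reach state 2 in one step
    p_two  = a*c + b*d      # reach state 2 in exactly two steps
    p_1or2 = b + a*c        # reach state 2 within the first two steps

    print(p_one, p_two, p_1or2, (p_one + p_two) / 2)
    # 7/10 43/100 17/20 113/200 -> p_1or2 is not the average of p_one and p_two
    ```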

    In his final paragraph, the false equivalence is used again, this time to sneak in the “reasonable definition” (which is incorrect anyway) of p_one,two(y) as a substitution for p_3,4.

    Walters posted a followup article, too (http://rfcwalters.blogspot.com/2014/08/the-sleeping-beauty-problem-how-some.html). Here he makes additional technical errors:

    “she must assign 1/4 to the probability of Monday after tails, and 1/4 to the probability of Tuesday after tails, so that the probability of Monday or Tuesday after tails is 1/2.”

    The language is ambiguous: “probability of Monday after tails” could be the joint probability that a given sample (from the sample space of the experiment) is both Monday and tails (p(M and T)), or the conditional probability that a given interview falls on Monday given that the toss was tails (p(M|T)). The former is 1/4 and the latter is 1/2, so it is evident that he means the joint probability. Incidentally, because tails implies that SB is awake both days, conditioning on SB being awake does not change these two probabilities: p(M and T and awake) = 1/4 and p(M|T and awake) = 1/2.

    Then:

    “Hence she should estimate the probability of Monday as 1/2 after heads.”

    But to say that “the probability of Monday…after heads” is 1/2 is problematic, because 1/2 is not the joint probability of Monday and heads (p(M and H) = 1/4). If it were 1/2, then, if SB were to learn that it is Monday (still not knowing the toss outcome), she would have to revise her probability that the toss was heads to 2/3. It’s true that p(M|H) = 1/2, but that is before introducing the condition that “SB is awake”. In this case, conditioning on SB being awake does change the conditional probability, p(M|H and awake) = 1, but it does not change the joint probability, p(M and H and awake) = 1/4.

    This is mixing apples and oranges — the phrase “probability of Monday after tails” is referring to a joint probability, and “probability of Monday after heads” is referring to a conditional probability.

    Walters attributes an error to thirders of calculating probabilities “in two different probability spaces”. Thirders don’t do this. Conditionalization is simply the replacement of one probability space with a subset of that probability space. The subset is determined by the information available to the observer, and the measure is simply the renormalized measure of the original. Walters created the two different spaces artificially, a posteriori. Any information that could be used to create such a partition (result of the coin toss, or day of the week) is not available to SB.

    There is in fact a single original sample space defined by the terms of the experiment, consisting of (M,T,awake), (M,H,awake), (Tu,T,awake), and (Tu,H,asleep), each with (by reasonable assumptions) a probability of 1/4. Walters stated that SB has “no new information”, when in fact she can eliminate (Tu,H,asleep), knowing that she is not asleep, creating the smaller probability space defined by the conditional clause “given that SB is awake”.

    The remaining probabilities add up to 3/4. Renormalization sets the probability of each of the three remaining options to 1/3.

    Halfers seem to fall into two different camps. One camp would assign unequal probabilities to the original four states, and then conditionalize in the normal Bayesian fashion. There is another camp that seems to reject conditionalization altogether (the “double halfers”). I haven’t seen a satisfactory justification for either approach. Walters seems to be advocating a hybrid of the two, and I still don’t see justification.

  6. Plunko

    For each person the coin is tossed once. Since it is a fair coin, the chances of “heads” are 50%. Any event happening after the coin toss will not change the result of the coin toss. That’s true regardless of how many times SB is woken up. It doesn’t matter when SB gets woken, or whether she believes this is her first awakening, and she has no way of being sure anyhow.
    Never heard “creative accountants” called “thirders” before…

  7. RSM

    Plunko, that is the first time I have heard “following the laws of probability” called “creative accounting”.

    I think you are confusing two different things. Changing “the result of the coin toss” is not the same as changing the likelihood of a hypothesis about the past based on observation. Much science, from forensics to paleontology to cosmology, relies on this kind of conditionalization.

  8. Zebra

    I have another version of this problem. Suppose if it is heads she is not woken twice but a million times and if it is tails she is only woken once. Also if it is heads she is woken in a blue room whereas if it is tails she is woken in a pink room. She doesn’t know which room is associated with which colour. When she is woken up each time she is asked if the colour of the room she is in is associated with heads. If she gets one wrong answer she is forced to eat a poisoned apple at the end and will die. What answer should she pick?

  9. Stubborn Mule Post author

    @Zebra – does she die immediately? If so, how can she be woken up a million times?

    I have another version too. Instead of flipping the coin once, the experimenters flip the coin 19 times. If there are 19 tails in a row (which has a probability of 1 in 524,288), Sleeping Beauty will be woken 1 million times. Otherwise (i.e. if there was at least one Heads tossed), she will only be woken once.

    Following the standard argument of the thirders, when Sleeping Beauty is awoken and asked the chances that the coin tosses showed at least one Heads, she should say approximately 1/3 (or more precisely, 524287/1524287).

    This feels rather odd. Notwithstanding the potential for 1 million awakenings, I would find it hard to bet against something that started off as a 524287/524288 chance.
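
    For what it’s worth, here is a minimal check of the 524287/1524287 figure quoted above, using the awakening-count weighting that thirders rely on:

    ```python
    from fractions import Fraction

    p_all_tails = Fraction(1, 2) ** 19        # 1/524288
    p_some_heads = 1 - p_all_tails            # 524287/524288

    # Weight each outcome by its number of awakenings (thirder-style).
    w_heads = p_some_heads * 1                # one awakening if at least one Heads
    w_tails = p_all_tails * 1_000_000         # a million awakenings if 19 Tails

    print(w_heads / (w_heads + w_tails))      # 524287/1524287, roughly 0.34
    ```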

  10. Zebra

    This is what I wrote: If she gets one wrong answer she is forced to eat a poisoned apple *at the end* and will die.

    I added more to this but it didn’t appear. She will wake up in a blue room 1m times in 1m+1 awakenings so it looks like she should pick heads. So she should be a thirder. However regardless of this she will still only be right 50% of the time and so she will die 50% of the time whether she picks heads or tails. So she should be a halfer.

    In other words there is no unequivocal meaning of “how certain she is”.

  11. RSM

    Mule,

    Since you put it in betting terms… If you were an outside observer of a single awakening, and didn’t know if it was one out of a million awakenings (the 19 tails case), or the sole awakening (every other case) — i.e., if you had the exact same knowledge that Sleeping Beauty has — what would you consider to be fair odds if you were to offer her the bet?

  12. Stubborn Mule Post author

    Been a bit slow to respond…

    @Zebra – sorry, I didn’t read your comment properly first time around. Reading again, I see you were clear. I tend to agree that there is no unequivocal meaning for the probability here. One way to think about it is to ask about the reference class for the probability. A “30% chance of rain” could mean that it will rain for 30% of the day (reference class is the span of time during the day), in 30% of the regions the forecast is broadcast to (reference class is geographical) or that on 30% of days like today (in some sense) it will rain (reference class is days like today). In the Sleeping Beauty problem the reference class could be awakenings (thirders) or experiments, which run Sunday to Wednesday (halfers). Even so, this doesn’t seem to fully capture the slipperiness of the problem. So I’m interested in why exactly this is equivocal when it seems like a relatively simple problem.

    @RSM – the challenge with the betting arguments can also be thought of in terms of reference classes. Halfers (reference class = experiment) would argue that allowing a bet to be made on every one of the awakenings skews the payoff of the bet. Under one possible experimental outcome (19 tails) there are a million payments, but under all others there is only one. They would argue that to assess the probability in terms of fair bets there should only be a single payoff for each unique run of the experiment. Thirders, on the other hand (reference class = awakening), would be perfectly comfortable with a bet for each awakening.

  13. RSM

    Interesting.

    The point of my question is that I want to know if halfers believe that two people with identical information about a problem, and with an identical set of priors, should assign identical probabilities to a hypothesis. I see the following possibilities:

    1. The answer is no –> could be a halfer (but not necessarily).
    2. The answer is yes, but the person holds that conditionalization is not a valid procedure –> could be a halfer.
    3. The answer is yes and the person accepts conditionalization, but does not accept that the priors for the four possibilities in the Sleeping Beauty puzzle should be equal –> could be a halfer.
    4. Otherwise, must be a thirder.

    The thought about reference classes is interesting. However, it seems to me that the outside bettor in my question has an unequivocal position. The only information he/she has to base odds on is the background information on how the experiment is conducted, plus the immediate observation that “Sleeping Beauty is awake”. I hold that the bettor should set the odds the way a thirder would (unless, for whatever reason, he/she has an asymmetric set of priors or does not accept conditionalization), and that this is independent of reference class as long as his or her information does not change. (Choices of reference class that don’t affect the relevant information are: Choose the outsider only from the town the experiment is conducted in; choose only women from anywhere on the planet; choose only men or women between the ages of 30 and 40; choose from the set of all sapient beings in the Milky Way; etc.)

    Your distinction of reference classes (choose an observer for each trial of the experiment, vs. choose an observer for each awakening) works as follows: Once an observer, chosen per experiment, *sees* Sleeping Beauty awake, that observer joins the reference class “observers chosen per experiment who happen to observe while she is awake”, which is equivalent (from an odds-setting standpoint) to the per-awakening reference class. And that is why I say that the bettor’s position, as I described it, is unequivocal.

    One possibility of a reference class change that would cause the bettor to assign 1:1 odds on heads is that the experimenters allow observation by an outsider on the first awakening only. But if he or she is aware of that condition (reference class = “observers who know that this is the first awakening”), this would introduce information that Beauty does not possess. I haven’t been able to think of a way to change the bettor’s reference class to allow halfer-style odds, without modifying the bettor’s information to be different from Sleeping Beauty’s.

  14. Zebra

    Regarding the slipperiness of such a simple problem, I think it has something to do with filtrations, or the lack thereof due to forgetfulness. But I haven’t had time to look into it. If there is no filtration then this is outside normal probability theory, so it may not be surprising that there is no simple answer. Certainly it appears straightforward that Beauty and an outside observer have different filtrations. But it’s just a guess at this stage. One similar example I can think of is if you have a martingale and 2 observers who know that. O1 only knows the value at time t=0 while O2 knows the value at t=1. If you ask them for the probability distribution at t=2 they will give 2 different answers because they have 2 different filtrations.

  15. Stubborn Mule Post author

    @Zebra – I agree that something like this is at the heart of the problem. In the usual set-up with filtrations, probability distributions are indexed by time, but time is a parameter: it’s not part of the probability space, not something you are uncertain about. Here, uncertainty about what time it is (echoes of “I’ll tell you what time it is…”) is crucial. How then to explain the issue to a non-mathematical audience without using the word “filtration”?

  16. Stubborn Mule Post author

    @RSM – that’s a fair set of alternatives. In practice, I would say there would be halfers spread throughout the options! Personally, 2 seems the most reasonable – there is something about the wiping of memory that messes with conditionalisation.

    But to tease out the point about identical information a bit further, here’s an additional scenario I’ve been reflecting on. Let’s say that the Sleeping Beauty experiment is conducted in a lab along with a variety of different experiments. The director of the lab comes by on Tuesday to inspect the activity. He is familiar with the set-up of all the experiments, including Sleeping Beauty, but doesn’t know whether today is day one or day two of the experiment (this can be made more rigorous, for example the experimenters could toss a second coin to determine whether the first day of awakening would be Monday or Tuesday). The director peeks in the window and sees Sleeping Beauty awake chatting to the experimenters. To what degree should the director now believe that the coin toss was Heads?

    I think that Thirders and Halfers alike would say that the answer is 1/3. This is a straightforward conditioning on seeing Sleeping Beauty awake. The question then is: do the director and Sleeping Beauty at this moment have identical information? I suspect Thirders would say yes and thus conclude Sleeping Beauty should also assign a probability of 1/3 to Heads. But a Halfer would not be so sure.

    Consider this: Sleeping Beauty glances up and sees the director peering in. He sees her and she sees him. Let us further assume that Sleeping Beauty and the experimenters all knew that the director would be passing by, but neither the director nor Sleeping Beauty knew whether this would be on day one or day two of the experiment (it’s ok for the experimenters to know). As the director and Sleeping Beauty look at one another, surely they both now have the same information and should both assign 1/3 to Heads. Seems reasonable. But does seeing the director give Sleeping Beauty more information? She knew that if the coin toss was Tails she would certainly see him, but if the coin toss was Heads there was a 1/2 chance she would sleep through his visit. This suggests that there is additional information she gets from seeing him. Now, conditional on seeing the director, she should assign 1/3 to Heads. Applying Bayes theorem then allows you to conclude that Sleeping Beauty’s unconditional probability for Heads should be 1/2!
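
    Taking the stated likelihoods at face value, the Bayes step in that last sentence checks out numerically:

    ```python
    from fractions import Fraction

    # Likelihoods as stated above: P(sees director | Heads) = 1/2, P(sees director | Tails) = 1.
    p_heads = Fraction(1, 2)                  # candidate unconditional probability of Heads
    p_see_h = Fraction(1, 2)
    p_see_t = Fraction(1, 1)

    posterior = (p_see_h * p_heads) / (p_see_h * p_heads + p_see_t * (1 - p_heads))
    print(posterior)                          # 1/3: an unconditional 1/2 is consistent
                                              # with assigning 1/3 on seeing the director
    ```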

  17. RSM

    “there is something about the wiping of memory that messes with conditionalisation.” That sounds like an additional axiom because I don’t see it following from the axioms of probability.

    I will have to work through the math of your director story before I can comment on it, and I will do that when I get some spare time, but I suspect a flaw. But for now, when you wrote “conclude that the Sleeping Beauty’s unconditional probability for Heads should be 1/2”, did you mean “conclude that Sleeping Beauty’s probability conditional on being awake but not (yet) seeing the director should be 1/2”? I assume that halfers and thirders all agree that the probability prior to any conditioning should be 1/2, so the former statement is unremarkable.

  18. Stubborn Mule Post author

    @RSM – I’m not suggesting that there should be an additional probability axiom (Kolmogorov’s axioms are simply P(whole) = 1, P(empty set) = 0 and P(A v B) = P(A) + P(B) for A, B disjoint – conditional probabilities are defined rather than being part of the axioms). What’s at issue here is the application of probability to the “real” world (noting of course that the scenario here is somewhat artificial). Philosophers use the term “credence” to refer to numerical measures of degrees of belief and then make arguments as to why, under certain conditions, credence, to be coherent, must be a probability measure. My suspicion here is that the memory wiping messes so much with credence that a pure probability measure is no longer useful.

  19. Stubborn Mule Post author

    @RSM – I should add that, upon reflection, my director argument may not work. My logic was something along these lines: if the coin is Heads, SB may not see the director at all, so P(seeing the director | Heads) = 1/2, but if it’s Tails, she will see the director, so P(seeing the director | Tails) = 1. I then proceeded to calculate P(Heads | seeing the director) in terms of P(Heads). However, all of these should be Sleeping Beauty’s probabilities and, although to the outside observer P(seeing the director | Tails) = 1, on day two Sleeping Beauty will not remember whether she’d seen the director on day one or not, so P(seeing the director | Tails) = 1/2. So perhaps seeing the director gives her no additional information after all.

  20. RSM

    Halfers are fond of saying that SB has gained no new knowledge when she is awakened during the experiment. That is literally true, but it sweeps aside the important fact that she loses information: On Sunday, she knows where in time she is located. But when she awakes on Monday (and perhaps Tuesday), she no longer knows.

    In fact, not knowing anything relevant to the question she is going to be asked, other than the background information about the experiment and that she is awake in the present moment, whatever probabilities she will assign to Heads or Tails will be considered prior probabilities in the event that she learns something she doesn’t know (such as “today is Monday”). Certainly on Sunday night, there is not even a possibility of learning that “today is Monday” (at least, not of so being informed truthfully), so on Sunday night she is already conditioning on every bit of information that is available to her, and she arrives at 1:1 odds for Heads, as anybody else would. If on Monday, after stating odds of 2:1 for Tails in the interview, she is then told that it is Monday, she can adjust her odds by the likelihood ratio p(Monday|Tails) / p(Monday|Heads) = 0.5, which brings the odds back to 1:1. But if she starts off with halfer odds, then she can no longer validly condition upon acquiring the news that it is Monday. Halfer odds, if valid, would imply that she knows as much on Monday as she did on Sunday. Thus, learning her position in time couldn’t help her improve her estimate — she already knew her position in time on Sunday.
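
    In code, that odds update is a one-liner (odds written as Tails : Heads):

    ```python
    from fractions import Fraction

    odds_tails = Fraction(2, 1)                          # thirder odds on waking: 2:1 for Tails
    likelihood_ratio = Fraction(1, 2) / Fraction(1, 1)   # p(Monday|Tails) / p(Monday|Heads)
    posterior_odds = odds_tails * likelihood_ratio
    print(posterior_odds)                                # 1 -> even odds once she learns it is Monday
    print(1 / (1 + posterior_odds))                      # P(Heads | Monday, awake) = 1/2
    ```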

    Alternately, she might be told that the toss was heads and then asked what the probability is that it is Monday. Starting with thirder odds as a prior, and multiplying by the appropriate likelihood ratio, again gives the expected answer.

    So I don’t think “wiping of memory messes with conditionalisation”. It simply causes conditionalization to proceed as usual, but in the opposite direction — from more complete knowledge to less complete.

    You might be aware of Bostrom’s Doomsday Argument based on the anthropic principle, and perhaps Gott’s similar argument based on (his version of) the temporal Copernican principle (and deconstructed masterfully by Caves). These arguments are based on a point of view very similar to the halfer viewpoint: that we can’t assign higher prior probability to the hypothesis that we inhabit a timeline that has a higher population. But without the population bias, we must then assign equal probabilities to high- and low-population timelines, and then condition on our birth number (or on the current age of the human species) to conclude that it is unlikely that many more humans will be born relative to the number born to date, or that the human race will be long-lived relative to its current age (whether or not those outcomes are unlikely for reasons unrelated to the anthropic principle has no bearing on the DA’s validity). I thought the DA was silly when I first read about it 10 or 15 years ago. I thought the halfer position was equally silly when I encountered the SB problem more recently, and for the same reason. The SB argument simply substitutes SB’s awakenings for members of the human species as population elements.

  21. Stubborn Mule Post author

    @RSM – I agree that SB loses information, but I don’t really understand the notion of conditionalization proceeding “in the opposite direction — from more complete knowledge to less complete”. Both halfers and thirders tend to talk in terms of process (learning, forgetting, etc.), but most of the analysis I have seen works in terms of a probability space, which does not really have any notion of process or progression through time. Propositions such as “it is Monday” are included in the probability space, unlike, for example, the framework of stochastic processes, which does have a notion of progression through time. Unfortunately, stochastic processes are not much use here, as time is a parameter, not an element of the probability space.

    At the moment, I cannot convincingly characterise myself as either a thirder or a halfer. The mathematics “works” better for the thirder position, but it worries me that that could be because it is really the mathematics of the outside observer of a single awakening. There does seem to be a difference between this observer and Sleeping Beauty herself. I am not convinced they both have the same “information”. Sleeping Beauty does know that she will definitely observe herself on Monday (even if she forgets the experience), which the outside observer does not know. For me the 19 coin toss really brings this concern into focus. For the outside observer, I am happy to say that (conditional on seeing Sleeping Beauty awake) the chance of an all Tails sequence should be around 2/3, but this is because by far the most likely outcome is that the observer will see Sleeping Beauty asleep, and we are now conditioning on a remote event. However, Sleeping Beauty will definitely awake at least once on Monday. With the chances of at least one Heads being 524287/524288, it really doesn’t seem right on awakening for Sleeping Beauty to think it’s more likely that 19 Tails were tossed in a row. Are you comfortable with that position?

  22. Pingback: Sleeping Beauty – a “halfer” approach

  23. Stubborn Mule Post author

    Thanks for the link, I’ll take a look. If I do end up with a halfer position (which would resolve my concern about the 19 coin tosses), then I would certainly also want the probability conditional on being informed it’s Monday to remain a half. The single-halfer position, with P(Heads | Monday) = 2/3, is, in my view, even more troubling than the thirder position!

  24. RSM

    Conditionalization is reversible; if you gain information and conditionalize by multiplying the odds by a likelihood ratio, you can conditionalize in the other direction by multiplying the odds (before loss of information) by the inverse of that ratio to get the odds after the information is lost. You might do this in real life if you had factored some background information, consciously or not, into your priors, and later learned that your assumption lacked any basis.

    I did not specifically mean a process in terms of a temporal progression, but rather a mathematical process; e.g., one could say that SB has a prior of 1/2 and, conditionalizing on “I am awake”, can arrive at a probability of 1/3 through that operation; or one could say that SB has a prior of 1/3 representing the information state when she is awake but “lost in time” and can arrive at a probability of 1/2 by conditionalizing on “It is Sunday” or “It is Monday”, or a probability of 0 by conditionalizing on “It is Tuesday”. There is no temporal progression any more than there is when we take the integral of a function, for example.

  25. JeffJo

    Imagine a slight twist to the problem: SB is wakened both days, usually in a red room. It is different only in the case of Tuesday+Heads, and then it is a blue room.

    When she goes to sleep on Sunday, P(Heads)=1/2. If she wakes in a blue room, she can confidently state that the probability of Heads has increased to 1. But this increase must (Law of Total Probability) be balanced by a decrease if something other than “Blue” occurs. If she wakes in a red room, the only other possibility, the probability must decrease from 1/2. A simple calculation shows it to be 1/3, since there are three equally-likely red-room situations and only one of them includes Heads.

    What halfers overlook is this: The result P(Heads|not Tuesday+Heads)=1/3 does not depend on how – or if – SB would know it was Tuesday+Heads if it was. Sleeping through that contingency does not mean it is impossible, just that it does not correspond to her situation when she is awake. This result depends only on her knowing that her current situation is not Tuesday+Heads. The original SB knows this because she is awake. She has “new information” and can update her estimates of probability. The answer is 1/3.
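
    A brute-force enumeration of the four equally likely (day, coin) situations in this red/blue variant gives the same number:

    ```python
    from fractions import Fraction

    situations = [("Mon", "Heads"), ("Mon", "Tails"), ("Tue", "Heads"), ("Tue", "Tails")]
    room = {s: ("blue" if s == ("Tue", "Heads") else "red") for s in situations}

    red = [s for s in situations if room[s] == "red"]
    print(Fraction(sum(1 for day, coin in red if coin == "Heads"), len(red)))  # 1/3
    ```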

  26. Pingback: Bob

  27. JeffJo

    Here’s another variation that proves the answer must be 1/3: Use four volunteers, one coin flip, and the same drugs applied over the same two days. All four SB’s will be wakened at least once, and maybe twice; but each will be left asleep under different circumstances. Three will be wakened each day. SB1 will be the one left asleep on Monday, if Heads flipped. SB2 will be the one left asleep on Monday, if Tails flipped. SB3 will be the one left asleep on Tuesday, if Heads flipped. SB4 will be the one left asleep on Tuesday, if Tails flipped.

    They will be kept in separate rooms, knowing about the others’ existence but not who is awake. Each will be asked for her “degree of belief” for the proposition that she will be wakened exactly once during the experiment.

    A simple examination shows (1) that SB3’s schedule exactly matches the original SB’s, (2) that while her question looks like it is based on a different event, the event happens, or does not happen, under the same sets of circumstances, and (3) that each other SB is operating under a functionally equivalent schedule and question.

    When SB3 finds herself awake, she knows that exactly three of the volunteers – herself, and two others – are awake. She also knows that the proposition “I will be awakened exactly once during the experiment” applies to exactly one of them. Her belief that the proposition applies to herself can only be 1/3.

    What this variation does is change the random variable from one that varies in an unconventional way over time, to one that varies in a completely conventional way by subject. But it is exactly the same problem, and the answer is 1/3.
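
    A short enumeration of the four schedules confirms the claim (my own encoding of the set-up described above):

    ```python
    # Volunteer left asleep on each day, for each coin result.
    asleep = {"Mon": {"Heads": "SB1", "Tails": "SB2"},
              "Tue": {"Heads": "SB3", "Tails": "SB4"}}
    volunteers = ["SB1", "SB2", "SB3", "SB4"]

    for coin in ("Heads", "Tails"):
        days_awake = {v: sum(asleep[d][coin] != v for d in ("Mon", "Tue")) for v in volunteers}
        woken_once = {v for v, n in days_awake.items() if n == 1}
        for day in ("Mon", "Tue"):
            awake = [v for v in volunteers if asleep[day][coin] != v]
            if "SB3" in awake:
                # Whenever SB3 is awake: three volunteers are awake, and exactly
                # one of those three will be wakened exactly once in the experiment.
                print(coin, day, len(awake), len(woken_once & set(awake)))
    ```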
