If you read the last post on the Sleeping Beauty problem, you may recall I did not pledge allegiance to either the “halfer” or the “thirder” camp, because I was still thinking my position through. More than a month later, I still can’t say I am satisfied. Mathematically, the thirder position seems to be the most coherent, but intuitively, it doesn’t seem quite right.

Mathematically, the thirder position works well because it is equivalent to a simpler problem. Imagine the director of the research lab drops in to see how things are going. The director knows all of the details of the Sleeping Beauty experiment, but does not know whether today is day one or day two of the experiment. Looking in, she sees Sleeping Beauty awake. To what degree should she believe that the coin toss was Heads? Here there is no memory-wiping, the problem fits neatly into standard applications of probability, and the answer is 1/3.
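The director's 1/3 can be checked with a quick Monte Carlo sketch (the function name is mine, and I assume the director is equally likely to drop in on day one or day two):

```python
# Monte Carlo sketch of the director's observation (assumed setup:
# the director visits on Monday or Tuesday with equal probability).
import random

def director_estimate(trials=200_000, seed=1):
    random.seed(seed)
    heads_seen = awake_seen = 0
    for _ in range(trials):
        heads = random.random() < 0.5          # the coin toss
        day = random.choice(["Mon", "Tue"])    # director's random visit day
        awake = (day == "Mon") or not heads    # SB sleeps only on (Tue, Heads)
        if awake:
            awake_seen += 1
            heads_seen += heads
    return heads_seen / awake_seen

print(director_estimate())  # close to 1/3
```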

My intuitive difficulty with the thirder position is better expressed with a more extreme version of the Sleeping Beauty problem. Instead of flipping the coin once, the experimenters flip the coin 19 times. If there are 19 tails in a row (which has a probability of 1 in 524,288), Sleeping Beauty will be woken 1 million times. Otherwise (i.e. if at least one Heads was tossed), she will only be woken once. Following the standard argument of the thirders, when Sleeping Beauty is awoken and asked for her degree of belief that the coin tosses turned up at least one Heads, she should say approximately 1/3 (or, more precisely, 524287/1524287). Intuitively, this doesn’t seem right. Notwithstanding the potential for 1 million awakenings, I would find it hard to bet against something that started off as a 524287/524288 chance. Surely when Sleeping Beauty wakes up, she would be quite confident that at least one Heads came up and that she is in the single-awakening scenario.
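The exact thirder figure quoted above follows from weighting each scenario by its expected number of awakenings; a sketch with exact fractions (variable names are mine):

```python
# Checking the extreme-version arithmetic: per-awakening (thirder) weight
# of "at least one Heads" versus 19 tails in a row.
from fractions import Fraction

p_19_tails = Fraction(1, 2) ** 19            # 1/524288
p_some_heads = 1 - p_19_tails                # 524287/524288

# Expected awakenings per run of the experiment
awakenings_heads = p_some_heads * 1          # a single awakening
awakenings_tails = p_19_tails * 1_000_000    # a million awakenings

thirder_credence = awakenings_heads / (awakenings_heads + awakenings_tails)
print(thirder_credence)  # 524287/1524287, roughly 0.344
```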

Despite the concerns my intuition throws up, the typical thirder argues that Sleeping Beauty should assign 1/3 to Heads on the basis that she and the director have identical information. For example, here is an excerpt from a comment by RSM on the original post:

I want to know if halfers believe that two people with identical information about a problem, and with an identical set of priors, should assign identical probabilities to a hypothesis. I see the following possibilities:

- The answer is no → could be a halfer (but not necessarily).
- The answer is yes, but the person holds that conditionalization is not a valid procedure → could be a halfer.
- The answer is yes and the person accepts conditionalization, but does not accept that the priors for the four possibilities in the Sleeping Beauty puzzle should be equal → could be a halfer.
- Otherwise, must be a thirder.

My intuition suggests, in a way I struggle to make precise, that Sleeping Beauty and the director do not in fact have identical information. All I can say is that Sleeping Beauty knows she will be awake on Monday (even if she subsequently forgets the experience), but the director may not observe Sleeping Beauty on Monday at all.

Nevertheless, option 2 raises interesting possibilities, ones that have been explored in a number of papers. For example, in D.J. Bradley’s “Self-location is no problem for conditionalization”, *Synthese* **182**, 393–411 (2011), it is argued that learning temporal information involves “belief mutation”, which requires a different approach to updating beliefs than “discovery” of non-temporal information, which makes use of conditionalization.

All of this serves as a somewhat lengthy introduction to an interesting approach to the problem developed by Giulio Katis, who first introduced me to the problem. The Stubborn Mule may not be a well-known mathematical imprint, but I am pleased to be able to publish his paper, *Sleeping Beauty, the probability of an experiment being in a state, and composing experiments*, here on this site. In this post I will include excerpts from the paper, but encourage those interested in a mathematical framing of a halfer’s approach to read the paper in full. I am sure that Giulio will welcome comments on the paper.

Giulio begins:

The view taken in this note is that the contention between halfers and thirders over the Sleeping Beauty (SB) problem arises primarily for two reasons. The first reason relates to exactly what experiment or frame of reference is being considered: the perspective of SB inside the experiment, or the perspective of an external observer who chooses to randomly inspect the state of the experiment. The second reason is that confusion persists because most thirders and halfers have not explicitly described their approach in terms of generally defining a concept such as “the probability of an experiment being in a state satisfying a property *P* conditional on the state satisfying property *C*”.

Here Giulio harks back to Bob Walters’ distinction between experiments and states. In the context of the Sleeping Beauty problem, the “experiment” is a full run from coin toss through Monday and Tuesday, while a “state” is a particular point in the experiment. As an example, *P* could be the property that the coin toss was Heads and *C* the property that Sleeping Beauty is awake.

From here, Giulio goes on to describe two possible “probability” calculations. The first would be familiar to thirders and Giulio notes:

What thirders appear to be calculating is the probability that an external observer randomly inspecting the state of an experiment finds the state to be satisfying P. Indeed, someone coming to randomly inspect this modified SB problem (not knowing on what day it started) is twice as likely to find the experiment in the case where tails was tossed. This reflects the fact that the reference frame or ‘timeframe’ of this external observer is different to that of (or, shall we say, to that ‘inside’) the experiment they have come to observe. To formally model this situation would seem to require modelling an experiment being run within another experiment.

The halfer approach is then characterised as follows:

The halfers are effectively calculating as follows: first calculate for each complete behaviour of the experiment the probability that the behaviour is in a state satisfying property *P*; and then take the expected value of this quantity with respect to the probability measure on the space of behaviours of the experiment. Denote this quantity by Π_X(P).

An interesting observation about this definition follows:

Note that even though at the level of each behaviour the ‘probability of being in a state satisfying *P*’ is a genuine probability measure, the quantity Π_X(P) is not in general a probability measure on the set of states of X. Rather, it is an expected value of such probabilities. Mathematically, it fails in general to be a probability measure because the normalization denominators n(p) may vary for each path. Even though this is technically not a probability measure, I will, perhaps wrongly, continue to call Π_X(P) a probability.

I think that this is an important observation. As I noted at the outset, the mathematics of the thirder position “works”, but halfers typically end up facing all sorts of nasty side-effects. For example, an incautious halfer may be forced to conclude that, if the experimenters tell Sleeping Beauty that today is Monday, then she should update her degree of belief that the coin toss came up Heads to 2/3. In the literature there are some highly inelegant attempts to avoid these kinds of conclusions. Giulio avoids these issues by embracing the idea that, for the Sleeping Beauty problem, something other than a probability measure may be more appropriate for modelling “credence”:

I should say at this point that, even though Π_X(P) is not technically a probability, I am a halfer in that I believe it is the right quantity SB needs to calculate to inform her degree of ‘credence’ in being in a state where heads had been tossed. It does not seem Ξ_X(P) [the thirders’ probability] reflects the temporal or behavioural properties of the experiment. To see this, imagine a mild modification of the SB experiment (one where the institute in which the experiment is carried out is under cost pressures): if Heads is tossed then the experiment ends after the Monday (so the bed may now be used for some other experiment on the Tuesday). This experiment now runs for one day less if Heads was tossed. There are two behaviours of the experiment: one we denote by p_Tails, which involves passing through two states S_1 = (Mon, Tails), S_2 = (Tue, Tails); and the other we denote by p_Heads, which involves passing through one state S_3 = (Mon, Heads). Let P = {S_3}, which corresponds to the behaviour p_Heads. That is, to say the experiment is in P is the same as saying it is in the behaviour p_Heads. Note π(p_Heads) = 1/2, but Ξ_X(P) = 1/3. So the thirders’ view is that the probability of the experiment being in the state corresponding to the behaviour p_Heads (i.e. the probability of the experiment being in the behaviour p_Heads) is actually different to the probability of p_Heads occurring!
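Giulio's two quantities for this cost-pressures variant can be sketched in a few lines (the function and variable names are mine, not notation from the paper; the halfer quantity is computed as a per-behaviour expectation, the thirder quantity as a ratio of expected state counts):

```python
# A sketch of the two calculations for the cost-pressures variant.
# Each behaviour is (probability, list of states it passes through).
from fractions import Fraction as F

behaviours = [
    (F(1, 2), [("Mon", "Tails"), ("Tue", "Tails")]),  # p_Tails
    (F(1, 2), [("Mon", "Heads")]),                    # p_Heads
]
P = {("Mon", "Heads")}

def halfer_Pi(behaviours, P):
    # Expected value, over behaviours, of each behaviour's fraction of states in P
    return sum(prob * F(sum(s in P for s in states), len(states))
               for prob, states in behaviours)

def thirder_Xi(behaviours, P):
    # Expected number of states in P divided by expected total number of states
    num = sum(prob * sum(s in P for s in states) for prob, states in behaviours)
    den = sum(prob * len(states) for prob, states in behaviours)
    return num / den

print(halfer_Pi(behaviours, P))   # 1/2
print(thirder_Xi(behaviours, P))  # 1/3
```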

This halfer “probability” has some interesting characteristics:

There are some consequences of the definition for Π_X(P) above that relate to what some thirders claim are inconsistencies in the halfers’ position (to do with conditioning). In fact, in the context of calculating such probabilities, a form of ‘interference’ can arise for the series composite of two experiments (i.e. the experiment constructed as ‘first do experiment 1, then do experiment 2’), which does not arise for the probabilistic join of two experiments (i.e. the experiment constructed as ‘with probability p do experiment 1, with probability 1-p do experiment 2’). …

In a purely formal manner (and, of course, not in a deeper physical sense) this ‘nonlocality’, and the importance of defining the starting and ending states of an experiment when calculating probabilities, reminds me of the interference of quantum mechanical experiments (as, say, described by Feynman in the gem of a book QED). I have no idea if this formal similarity has any significance at all or is completely superficial.

Giulio goes on to make an interesting conjecture about composition of Sleeping Beauty experiments:

We could describe this limiting case of a composite experiment as follows. You wake up in a room with a white glow. A voice speaks to you. “You have died, and you are now in eternity. Since you spent so much of your life thinking about probability puzzles, I have decided you will spend eternity mostly asleep and only be awoken in the following situations. Every Sunday I will toss a fair coin. If the toss is tails, I will wake you only on Monday and on Tuesday that week. If the toss is heads, I will only wake you on Monday that week. When you are awoken, I will say exactly the same words to you, namely what I am saying now. Shortly after I have finished speaking to you, I will put you back to sleep and erase the memory of your waking time.” The voice stops. Despite your sins, you can’t help yourself, and in the few moments you have before being put back to sleep you try to work out the probability that the last toss was heads. What do you decide it is?

In this limit, Giulio argues that a halfer progresses to the thirder position, assigning 1/3 to the probability that the last toss was heads!
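One way to see why the limit pushes a halfer towards 1/3 is to look at long-run awakening frequencies; a small simulation sketch (names and setup are mine):

```python
# Long-run frequency sketch for the limiting composite experiment:
# over many weeks, what fraction of awakenings follow a Heads toss?
import random

def heads_awakening_fraction(weeks=100_000, seed=7):
    random.seed(seed)
    heads_awakenings = total_awakenings = 0
    for _ in range(weeks):
        if random.random() < 0.5:   # Heads: one awakening this week
            heads_awakenings += 1
            total_awakenings += 1
        else:                       # Tails: two awakenings this week
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(heads_awakening_fraction())  # close to 1/3
```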

These brief excerpts don’t do full justice to the framework Giulio has developed, but I do consider it a serious attempt to encompass all of the temporal/non-temporal, in-experiment/out-of-experiment subtleties that the Sleeping Beauty problem throws up. The paper is only for the mathematically inclined and, like so much written on this subject, I doubt it will convince many thirders, but if nothing else I hope it will put Giulio’s mind at rest to have the paper published here on the Mule. Over recent weeks, his thoughts have been as plagued by this problem as mine have.

**Update:** Giulio has now posted a thoroughly revised version of his paper.


### Comments


@ Stubborn Mule

It is not clear how the day of the director’s visit is selected. If it is completely random, it could fall on a Heads/Tuesday and SB would be asleep. Another scenario is that she is invited only when SB is awakened: she is always invited on Monday if the coin has been tossed Heads, and on either Monday or Tuesday, using some sort of random selection, if the coin has been tossed Tails. In the first case (assuming equal probabilities on the selection of Monday or Tuesday) the director should assign P(Heads|SB awake)=1/3. However, in the second case, the director should assign P(Heads|SB awake)=1/2.

There is no need to invent a director; in fact, you can only get more confused by doing that. What you should do is explicitly define the Random Experiments whose outcomes form the Sample Space you are using for calculating the involved probabilities. If you only need to calculate the probability of Heads upon awakening, a simple random experiment consisting of only the coin toss is adequate and it yields P(Heads)=1/2. Using the sample space of this random experiment you also get P(Monday)=1 and P(Tuesday)=1/2, i.e. it is certain that during a trial of the random experiment SB will wake on Monday, and there is a 1/2 chance that SB will also wake on Tuesday. Notice that upon awakening SB can still use this random experiment, since she still remembers the setup of the original experiment.

However, if SB wants to calculate probabilities for the day of the week upon awakening, she should define a new random experiment, in which P(Monday) is not equal to 1 and P(Tuesday) is not equal to 1/2. This new random experiment assumes that in case of Tails a Monday or a Tuesday is randomly selected as SB’s current day. Notice that this is an assumed random experiment that SB uses to model her uncertainty about the day of the week. For details check my paper at:

http://arxiv.org/ftp/arxiv/papers/1409/1409.3803.pdf

The second random experiment gives P(Monday)=3/4 and P(Tuesday)=1/4. Notice that in the new random experiment Monday and Tuesday are mutually exclusive and collectively exhaustive events (that is why they should add up to 1). P(Heads)=P(Heads,Monday) also equals 1/2 according to the new calculations, consistent with the previous results. However, there is a subtle point: it is clear that P(Heads|Monday)=P(Heads,Monday)/P(Monday)=(1/2)/(3/4)=2/3. Thus, it seems that if SB learns it is Monday she should update to P(Heads|Monday)=2/3. However, in this case the “Monday” event means “Monday is randomly selected as your current day” and SB knows that the new random experiment is just a model. She knows that awakening on Monday is not due to any random selection. Thus, she knows she doesn’t have evidence of a Monday event as it is defined by the new random experiment. Therefore, she knows she cannot update and that she still has to assign P(Heads)=1/2.
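The SBRE numbers in the comment above can be checked by direct enumeration (the outcome labels are mine): Heads forces Monday, while Tails selects Monday or Tuesday with equal probability.

```python
# Enumerating the assumed random experiment (SBRE) described above:
# the coin is tossed, and in the Tails case the "current day" is
# randomly selected between Monday and Tuesday.
from fractions import Fraction as F

p = {("H", "M"): F(1, 2),    # Heads: Monday is certain
     ("T", "M"): F(1, 4),    # Tails: Monday selected with probability 1/2
     ("T", "Tu"): F(1, 4)}   # Tails: Tuesday selected with probability 1/2

p_monday = p[("H", "M")] + p[("T", "M")]
p_heads_given_monday = p[("H", "M")] / p_monday

print(p_monday)              # 3/4
print(p_heads_given_monday)  # 2/3
```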

I would caution against favoring one’s intuition when intuition and math seem to be at odds, unless a specific error can be identified in the mathematical reasoning. Here is an example where intuition seems to favor the thirder case.

Suppose, in the 19-toss scenario, that there are two lights in SB’s room, one red and one green. In the 19-tails case, where she is awoken 1 million times, only the green light is illuminated the first 999995 times, and only the red light is illuminated the last 5 times. In all other cases, only the red light is illuminated.

If SB is a thirder, she awakes and, before she opens her eyes, will assume the odds are about 2:1 favoring the 19-tails case. If she is a halfer, she will assume the odds are 1:524287.

If SB then opens her eyes and sees a green light, she can reason as follows: The likelihood ratio, p(G|19 tails)/p(G|<19 tails), is infinitely large, so the 19-tails case becomes a certainty, no matter what her disposition between thirder and halfer is.

If SB opens her eyes and sees a red light, what then? The likelihood ratio p(R|19 tails)/p(R|<19 tails) is 5/1000000 or 1/200000.

Thirder SB will multiply her original 2:1 (approximate) odds by that ratio to get 1:100000 for the new (approximate) odds.

Halfer SB will multiply her original 1:524287 odds by the ratio to get odds on the order of 1:(2^37), astronomical odds against the 19-tails case. Does this satisfy intuition?

Double-halfer SB would simply refuse to update and leave her odds at 1:524287. This certainly sounds like a more reasonable value than the "ordinary" halfer's result. Yet, if we aren't allowed to update based on information, the probability of which is a function of time (or more specifically, a function of the age of a process whose current age is unknown), then how can we test any hypothesis about the age of the Earth or of the Universe?
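The odds arithmetic in the red/green light scenario can be verified with exact fractions, writing odds as 19-tails : otherwise and using the exact thirder prior 1000000:524287 (variable names are mine):

```python
# Reproducing the red-light odds updates from the comment above.
from fractions import Fraction as F

# Likelihood ratio p(R | 19 tails) / p(R | <19 tails) = (5/1000000) / 1
lr_red = F(5, 1_000_000)

thirder_prior = F(1_000_000, 524_287)   # per-awakening odds, roughly 2:1
halfer_prior = F(1, 524_287)

thirder_post = thirder_prior * lr_red   # 5/524287, roughly 1:100000
halfer_post = halfer_prior * lr_red     # 1/104857400000, on the order of 1:2^37

print(thirder_post, float(1 / thirder_post))
print(halfer_post, float(1 / halfer_post))
```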

@RSM

The problem with SB updating using the information on the light’s colour is not that its probability is a function of time. The problem is that “red light” provides evidence for the predicament “it is either ‘day 1 and not 19 Tails’ or ‘day 999996 and 19 Tails’ or ‘day 999997 and 19 Tails’ … or ‘day 1000000 and 19 Tails’”, but it doesn’t provide evidence for the (probability) event “my current state has been randomly selected to be either ‘day 1 and not 19 Tails’ or ‘day 999996 and 19 Tails’ or ‘day 999997 and 19 Tails’ … or ‘day 1000000 and 19 Tails’”. This is because SB knows the setup of the experiment, i.e. that her current awakening has been predetermined. (In order to correctly interpret the above, please consider an extension of the assumed random experiment I described in my previous post for the original SB problem. According to this assumed random experiment SB’s current state is randomly selected, depending of course on the coin toss results: in case there are not 19 Tails SB is awakened on day 1; otherwise one of the 1000000 days is randomly selected.) However, this doesn’t mean that if SB gets evidence about an event of the assumed Random Experiment she is using to model her uncertainty she should not update. Consider the case where no lights are lit yet and SB is told that there have been 19 Tails. Then SB has evidence for the event “19 Tails” and she can update from P(Red)=524287.000005/524288 to P(Red|19 Tails)=P(Red,19 Tails)/P(19 Tails)=0.000005.

Reading the above comments, I now see two potential sources of confusion in the SB problem. Unfortunately, these two sources of confusion are fundamentally related, which makes them hard to tease out.

One regards the likelihood of an experiment being in a state from the reference frame of being in one of its own behaviours (with specified begin and end states, where each behaviour may have a different length, and where normalization is required at the level of each behaviour) vs the likelihood of finding an experiment in a state from an external reference frame. By the latter I imagine someone like Sean’s Director who randomly comes to inspect the state of the experiment (but who may do so after a behaviour has ended and find nothing). Many Thirders seem to be effectively calculating this (via comparing the expected number of times the experiment will be in one state versus another). You can better understand this difference with simpler experiments where conditioning is not really required (e.g. my example where the experiment is conducted in an institute under cost pressures so that in the case of Heads the experiment goes only for one day instead of two; it has been pointed out to me that Bob Walters has posted on his blog an example based on his experience on a train which serves to demonstrate the same conceptual point).

There is another confusion which specifically relates to some of the previous comments on this blog. It relates to the difference between conditioning on an experiment *being* in a type of state, and conditioning on *knowing* an experiment is in a type of state. For example, consider a modification to the standard Sleeping Beauty experiment where she is awoken on both Mon and Tue regardless of the toss, but with Tails she is given OJ on both days when awoken, and with Heads she is given OJ on Mon and Apple Juice (AJ) on Tue. Now suppose SB is awoken and given OJ – what is her credence the toss was Heads vs it having been Tails? Some Thirders seem to believe this is the same as the situation in the standard SB experiment. But I (and Ioannis and many others, I assume) think it is different. In this version SB has information, since she could have been in a state where she would be pondering the question and served AJ, while in the standard version there is no sense in which she could be pondering the question and be asleep. In the original SB problem we need to condition on the experiment *being* in a state where SB is awake – the “halfer” definition I give in my paper expresses generally what I mean by this (I am not claiming this is original). In the OJ/AJ version, however, we need to condition on *knowing* that the experiment is in a state where OJ has been served. I didn’t explicitly define such a calculation in the paper, but on reflection should have. In the machinery of the paper, to calculate her credence the toss was Heads rather than Tails in the OJ/AJ version, SB would calculate both Π(Heads and OJ) and Π(Tails and OJ), note that the former is half the latter, and therefore conclude that the likelihood the toss was Heads is half the likelihood it was Tails.

Ioannis, I am having trouble following you. I think you are saying that SB should consider an alternative experiment you are calling a Random Experiment and base her probability calculation on that somehow, but it isn’t clear what this entails.
Maybe you can walk me through a simpler problem, one that (I think) will not be as controversial as SB, but is in many aspects nevertheless very similar.

So, let’s say there are two towns, Oneville and Twoville. The folks in Oneville love parades, so much so that they hold one every Monday. The folks in Twoville *really* love parades, so much so that they hold one every Monday and another every Tuesday.

One day you are driving from Oneville to Twoville, and halfway between, you are in an accident which renders you unconscious. You regain consciousness in a hospital, not knowing how many days or weeks have passed. You reason that it is equally likely that you are at Oneville Hospital or Twoville Hospital, but then moments later you hear the sound of a parade outside your window.

On hearing the parade, do you revise your 50-50 estimate of the odds of being in Oneville? If so, what do you think the odds are now?

How does your Random Experiment methodology help to arrive at an answer in this case?

Giulio, I started reading your paper today but haven’t read enough to comment on it.

@ RSM

A brief description of the assumed Random Experiment I am using is in my post of September 29, 2014 at 9:25 pm. You can also find a link there to a paper of mine that 1) analyses in detail this assumed Random Experiment, 2) explains why SB should use it to model her situation upon awakening, and 3) explains why SB cannot use Monday as evidence to update P(Heads|Monday) according to the assumed Random Experiment’s probability space.

Regarding the scenario with Oneville (1v) and Twoville (2v), I would model the situation as such:

A random experiment is conducted and a) 1v or 2v is randomly selected as your current location, b) a day of the week is randomly selected for regaining your consciousness. Notice that in this case the random experiment is not hypothetical: you have reasons to believe that in similar conditions you could wake up in the opposite town on another day of the week. The sample space of the random experiment is S={(1v,M),(1v,T),…,(1v,S),(2v,M),(2v,T),…,(2v,S)}. Notice that axiomatic probability theory does not provide any clues on how to assign probabilities to the outcomes of the sample space. However, based on a principle of indifference we could assign equal probabilities (1/14) to all outcomes (if we were closer to 1v than 2v at the time of the accident we could favour 1v outcomes, or if we had statistics on patients in similar conditions we could favour certain days, etc.). Now we can apply axiomatic probability theory and compute: P(1v)=P(2v)=1/2, P(M)=P(T)=…=P(S)=1/7, P(Parade)=P((1v,M) or (2v,M) or (2v,T))=3/14. If you hear the parade you should update [because in this case hearing the parade is a random outcome of your situation, you have evidence of an “(1v,M) or (2v,M) or (2v,T)” event] to P(1v|Parade)=P(1v,Parade)/P(Parade)=P(1v,M)/P(Parade)=(1/14)/(3/14)=1/3. Similarly, you should update to P(2v|Parade)=2/3.
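The enumeration above is easy to check mechanically; a sketch of the 14-outcome sample space (the labels are mine):

```python
# Enumerating the Oneville/Twoville parade puzzle's sample space.
from fractions import Fraction as F

days = ["M", "T", "W", "Th", "F", "Sa", "Su"]
outcomes = [(town, day) for town in ("1v", "2v") for day in days]
p = {o: F(1, 14) for o in outcomes}   # principle of indifference

# Parades: Oneville on Monday; Twoville on Monday and Tuesday
parade = {("1v", "M"), ("2v", "M"), ("2v", "T")}
p_parade = sum(p[o] for o in parade)
p_1v_given_parade = p[("1v", "M")] / p_parade

print(p_parade)            # 3/14
print(p_1v_given_parade)   # 1/3
```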

@Giulio – your contrast with the juice scenario is, I think, an important point. I see an analogy with the Monty Hall problem which, despite the controversy it generated in its day, is (I think) a more straightforward problem. In the Monty Hall problem, if Monty knows where the car is (plus the other standard protocol requirements), then conditional on showing a goat you have a 2/3 chance of winning if you switch; if Monty doesn’t know where it is (plus, etc.), then conditional on showing a goat you only have a 1/2 chance of winning if you switch. Isn’t Sleeping Beauty’s situation a little like the first (a goat will always be shown; Sleeping Beauty will always be interviewed and observing her own state) and the director’s situation a bit like the second (a car could have been shown; the director could have seen Sleeping Beauty asleep; she could have been offered apple juice, but it turned out none of these things happened)? If this distinction is important for Monty Hall, it should be important for Sleeping Beauty too, as you argue.
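The Monty Hall contrast invoked here is itself easy to simulate; a sketch distinguishing a knowing host from an ignorant one, conditioning in both cases on a goat being shown (function names are mine):

```python
# Monty Hall sketch: knowing host vs ignorant host, conditioned on a goat shown.
import random

def switch_win_rate(host_knows, trials=200_000, seed=3):
    random.seed(seed)
    wins = valid = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        others = [d for d in range(3) if d != pick]
        if host_knows:
            # Knowing host always reveals a goat (choice is deterministic here,
            # which doesn't affect the switching statistics)
            opened = next(d for d in others if d != car)
        else:
            opened = random.choice(others)   # ignorant host may reveal the car
            if opened == car:
                continue                     # condition on a goat being shown
        valid += 1
        switched = next(d for d in range(3) if d not in (pick, opened))
        wins += (switched == car)
    return wins / valid

print(switch_win_rate(True))   # close to 2/3
print(switch_win_rate(False))  # close to 1/2
```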

Ioannis,

Yes, I read your paper before but it seemed unclear. Your explanation of 1v/2v, while not identical to mine, is mathematically equivalent. I just don’t see how you can apply the same approach and get a halfer result in the SBP.

In effect, you should get (just doing a paraphrase of your derivation above):

P(H)=P(T)=1/2

P(M)=P(Tu)=1/2 (experiment precludes waking on other days)

P(Awake)=P((H,M) or (T,M) or (T,Tu))=3/4

If SB is awake she should update [because being awake gives evidence of a “(H,M) or (T,M) or (T,Tu)” event] to P(H|Awake)=P(H,Awake)/P(Awake)= P(H,M)/P(Awake)=1/4 / 3/4=1/3.

It isn’t clear from your article why you think we should proceed otherwise.

I found this review of a book by Michael Titelbaum interesting. I am sure the book is even more so:

http://bjps.oxfordjournals.org/content/early/2014/05/26/bjps.axt056.full.pdf?keytype=ref&ijkey=kqywNo63qoqSlVt

@ RSM

The SB scenario and the 1v/2v scenario are not equivalent. It is erroneous to paraphrase the results of the latter to get answers for the former. The key difference is that only one awakening occurs during 1v/2v, whereas in case of Tails two awakenings occur during SB scenario.

Thus, in SB case

P(H)=P(T)=1/2, but

P(M)=3/4 P(Tu)=1/4 (The experiment precludes waking on other days, but that doesn’t mean the two events are equiprobable; see the analysis in my paper explaining that P(M|Tails)=P(Tu|Tails). Even Elga’s paper uses this. However, he then goes on to argue that P(M,Tails)=P(Tu,Tails)=P(M,Heads)=1/3, and therefore computes P(M)=2/3 and P(Tu)=1/3. No-one argues that P(M)=P(Tu)=1/2.)

Finally, P(Awake)=P((H,M) or (T,M) or (T,Tu))=1, because as these events are defined by the assumed Random Experiment they are mutually exclusive and collectively exhaustive.

Thus, if SB is awake she shouldn’t update [because being awake gives no evidence of a “(H,M) or (T,M) or (T,Tu)” event, since she knows that her current state has not been randomly selected as it is required by the definition of these events] to P(H|Awake)=P(H,Awake)/P(Awake)= P(H,M)/P(Awake)=1/2 / 1=1/2, even though if she did so she would still get the correct probability value.

It is very important to explicitly define the random experiment you are using. Then the sample space is defined and you don’t mix events from different sample spaces, or, in the case of SB, update on non-existent evidence. Notice that if SB learns that the coin has been tossed Tails she has evidence of the Tails event and she can update to P(M|Tails)=P(Tu|Tails)=1/2.

@ Giulio Katis

I have read your note and found it very interesting. The results you present are (in almost all cases) in accordance with those I am getting after applying axiomatic probability theory. For instance, in the SB case I also compute P(M)=3/4 and P(Tu)=1/4, and in the case of SB2 I also compute that P(last toss Heads)=5/12. However, as you also notice, taking the n(P,p)/n(p) ratio is arbitrary (it assumes that for each path there is an equal probability of being in each state). In the case of SB a principle of indifference can be applied to justify this, but it can’t be expected that this should generally be the case. There is also the problem of the paths that don’t visit any state with the conditional property, where the denominator becomes zero (you say that you deal with it in the note, but I couldn’t find where). The only disagreement in the results is in the case of the Director, where you compute that after seeing the Director SB should update to Π_sbd(H | Aw ∩ Dir)=1/3. You argue that it is straightforward to get this result. However, there is a path with zero “Aw ∩ Dir” states. Even if we ignore this path during computations I get Π_sbd(H | Aw ∩ Dir)=1/4·1/1 + 1/4·0 + 1/4·0 = 1/4 and Π_sbd(T | Aw ∩ Dir)=1/4·0 + 1/4·1/1 + 1/4·1/1 = 1/2. Am I missing something? Can you present the computations that lead to the 1/3 answer? If I use the axiomatic probability theory approach I get P(H | Aw ∩ Dir)=P(T | Aw ∩ Dir)=1/2.

Ioannis, thanks for your comment. If you look at the definition of Π on page 9, you will see that the sum is only over the paths that pass through the conditioning states. Furthermore, outside the sum there is a corresponding renormalization factor. You can interpret this as conditioning on the space of behaviours that pass through the conditioning states. (The definition of Π can in fact be extended to generally condition on the space of behaviours, as is traditionally done; but I didn’t put this in as a general construct, other than to deal with behaviours that don’t pass through conditioning states, as I thought it would overload the reader.) In any case, applying the definition to the Director example gives: 1/(3/4) × (1/4·1 + 1/4·0 + 1/4·0) = 1/3. Here, the sum is over 3 behaviours, and the 3/4 is the corresponding renormalization factor referred to above.
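The renormalized calculation can be reproduced directly (the encoding of behaviours is mine: four equiprobable (coin, director's day) behaviours, with the (Heads, Tue) behaviour dropped because the director finds SB asleep):

```python
# Giulio's conditional Π for the Director example: sum only over
# behaviours that pass through an "awake and director present" state,
# and renormalize by their total probability.
from fractions import Fraction as F

# For each behaviour: (probability,
#                      fraction of its Aw∩Dir states that are Heads,
#                      whether it passes through any Aw∩Dir state at all)
behaviours = [
    (F(1, 4), F(1), True),    # (Heads, director on Mon): SB awake, Heads
    (F(1, 4), None, False),   # (Heads, director on Tue): SB asleep, dropped
    (F(1, 4), F(0), True),    # (Tails, director on Mon)
    (F(1, 4), F(0), True),    # (Tails, director on Tue)
]

norm = sum(p for p, _, hits in behaviours if hits)                    # 3/4
pi_heads = sum(p * frac for p, frac, hits in behaviours if hits) / norm
print(pi_heads)  # 1/3
```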

A minor technical point on the paper: in the definition of an ESP I omitted to add that End States are required to have the property that they have no transitions out of them.

@ Giulio Katis

Thanks Giulio, it is clear now. I thought that the definition on page 9 applied only in the case of multiple begin states. However, I am still troubled about the 1/3 result. My calculations imply that even after seeing the Director SB should still assign P(Heads|Dir)=1/2. I am looking into this discrepancy to find how it can be resolved. Until then I would like to present another objection. More specifically, you argue that “What thirders appear to be calculating is the probability that an external observer randomly inspecting the state of an experiment finds the state to be satisfying P”. I believe that, according to your suggestion of what a halfer computes, it is actually the halfers who appear to be calculating the probability that an external observer randomly inspecting the state of an experiment finds the state to be satisfying P, and that the thirders appear to be trying to calculate that same probability but use an absurd formula. For clarity, assume only equiprobable paths. Then the thirders’ approach results in Ξ_X(P)=n(P)/N, where N is the total number of states, which makes no sense at all. On the other hand, the halfers’ approach results in (1/m)·Σ n(P,p)/n(p), where m is the total number of paths, which is exactly what you should do in order to “calculate the probability that an external observer randomly inspecting the state of an experiment finds the state to be satisfying P”. This is why your approach also allows the halfers’ formula to compute Π_sb(H ∩ Tue)=(1/2)(1/2)+(1/2)(0)=1/4, which only an external observer can inspect.

Sorry, Ioannis, I never claimed they were equivalent. That is why it was a paraphrase; some elements were changed. The derivation is still valid because of the Bayesian evidence of SB being awake.

Ioannis, apologies for splitting my reply into two. I tried to edit my first post but somehow that got screwed up and now the editing window is closed.

You also wrote: “Thus, in SB case

P(H)=P(T)=1/2, but

P(M)=3/4 P(Tu)=1/4”

but the latter violates the principle of indifference (which might be okay; it is just a principle, not an axiom or theorem, and we can violate it if there is sufficient justification). Perhaps you meant:

P(M|Awake)=3/4

P(Tu|Awake)=1/4

but even that is incorrect; your analysis leading to it must be flawed.

Monday awakenings occur with twice the probability of Tuesday awakenings, so

P(M,Awake) = 2 * P(Tu,Awake)

P(M,Awake)/P(Awake) = 2 * P(Tu,Awake)/P(Awake)

P(M|Awake) = 2 * P(Tu|Awake)

If you meant something other than P(M), P(Tu) or P(M|Awake), P(Tu|Awake), please clarify exactly what that is.

@ RSM

Everything I use is explicitly defined in the paper I cited in my initial post. In that paper I define two random experiments. The first one is ERE, which involves only the coin toss and is used to answer the original question about SB’s credence in Heads upon awakening, whereas the other one is SBRE and is used to answer SB’s question about what day it is upon awakening (it also results in P(Heads)=1/2). In both random experiments SB is awake in all outcomes of the sample space, i.e. P_ere(Awake)=P_sbre(Awake)=1. Thus, you can drop the Awake notation, since P(M,Awake)=P(M|Awake)=P(M). However, P(M) corresponds to different things in each experiment. In ERE, P_ere(M)=1 is a certain event (and P_ere(Tu)=1/2), whereas in SBRE P_sbre(M)=3/4 and P_sbre(Tu)=1/4 (you can see the analysis leading to these values in my paper). You cannot be indifferent in SBRE about whether Monday or Tuesday is randomly selected as your current state. In 2/3 of randomly selected Monday awakenings the coin would have been tossed Heads, whereas in only 1/3 of randomly selected Tuesday awakenings would the coin have been tossed Heads.
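To make the SBRE numbers concrete, here is a small sketch of that experiment as described (toss a fair coin; on Tails, select Monday or Tuesday uniformly as the current state; on Heads, the current state is Monday). The variable names are illustrative only, not taken from the paper.

```python
# SBRE marginals: Heads -> current state is Monday with certainty;
# Tails -> Monday or Tuesday each selected with probability 1/2.
from fractions import Fraction

half = Fraction(1, 2)
p_monday = half * 1 + half * half      # 1/2 + 1/4 = 3/4
p_tuesday = half * half                # 1/4
p_heads_given_monday = (half * 1) / p_monday

print(p_monday, p_tuesday, p_heads_given_monday)   # 3/4 1/4 2/3
```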

Thus, it is clear that “Monday awakenings occur with twice the probability of Tuesday” in ERE (but not, as I believe you assume, with P(M)=2/3 and P(Tu)=1/3, since in ERE these events are not mutually exclusive). However, in SBRE P_sbre(M)=3*P_sbre(Tu) (in SBRE a Monday awakening randomly selected as the current state is mutually exclusive with a Tuesday awakening selected as the current state).

Notice that all of the above are direct results of axiomatic probability theory and cannot be rebutted. The only question is whether SBRE is accepted as a valid model of SB’s situation upon awakening. But before anyone dismisses it, I challenge them to define a different random experiment in which the three predicaments used in Elga’s argument in favor of 1/3 (namely, H1=”Heads and it is Monday”, T1=”Tails and it is Monday”, T2=”Tails and it is Tuesday”) are mutually exclusive and collectively exhaustive, as he claims them to be (a property these three events have in SBRE).

Bottom line is that Elga defines the predicaments H1, T1 and T2 that he uses, as mutually exclusive and collectively exhaustive events of a sample space that is never explicitly defined or linked to a random experiment. You cannot have both, either you assume that Monday or Tuesday is randomly selected as your current state or these events are not mutually exclusive. This is why I introduce SBRE, and I am using it to continue Elga’s analysis on the correct grounds (in my paper I pinpoint the error in Elga’s analysis).

@Stubborn Mule, I am not sure where you stand on the problem at the moment. But perhaps Nicole Kidman has resolved it for you. From Before I Go To Sleep: “A woman wakes up every day, remembering nothing as a result of a traumatic accident in her past. One day, new terrifying truths emerge that force her to question everyone around her.”

Giulio and Ioannis and Stubborn Mule,

The double-halfer (aka invariantist) position has flaws that are exposed in this paper (Manley) – http://www-personal.umich.edu/~dmanley/Site/Papers,_etc._files/SEEU.pdf – so I won’t repeat his analysis here, but I hope you will review the paper for yourself. I also hope you will find it as interesting as I did. Giulio, Manley uses formal definitions similar to yours for the invariant and frequentist approaches, and also gives a formal definition for the proportionalist (halfer) approach. But note what conclusions the invariant position leads to in Manley’s Chances and Chances2 scenarios.

I’m not really surprised. To me, the double-halfer position has always had a feeling of special pleading to it.

Ioannis, the proof regarding the probabilities of Monday vs. Tuesday is as follows — feel free to state what assumptions you don’t agree with. I note to start with that M (“It is Monday”) and Tu (“It is Tuesday”) are mutually exclusive, as are H (“the toss was heads”) and T (“the toss was tails”). Defining M and Tu so that they are not mutually exclusive, besides being arbitrary, is useless for the purpose of defining a sample space (unless your sample space is divided into atomic terms containing “M and not Tu” and the like, which, if followed, also leads to the thirder conclusion).

1. P(M,A) = P(M,A|T) * P(T) + P(M,A|H) * P(H) by the law of total probability

2. P(M,A|T) = P(M,A|H) by the principle of indifference

3. P(M,A) = P(M,A|T) by substitution of (2) into (1) and noting that P(T) + P(H) = 1

4. P(Tu,A) = P(Tu,A|T) * P(T) + P(Tu,A|H) * P(H) = P(Tu,A|T) * 1/2 by the law of total probability and noting that P(Tu,A|H) = 0

5. P(M,A) = P(M,A|T) = P(Tu,A|T) by (3) and the principle of indifference

6. P(Tu,A) = P(M,A)/2 by combining (4) and (5)
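Steps (1)-(6) can also be checked by simulation. The sketch below assumes the probability space implicit in the proof: a fair coin, a uniformly random day inspected by an external observer, and Beauty asleep only in the (Heads, Tuesday) case.

```python
# Monte Carlo check that P(Monday, Awake) is twice P(Tuesday, Awake):
# draw a fair coin and a uniformly random day; Beauty is awake except
# on (Heads, Tuesday).
import random

random.seed(0)
n = 200_000
mon_awake = tue_awake = 0
for _ in range(n):
    coin = random.choice("HT")
    day = random.choice(["Mon", "Tue"])
    awake = not (coin == "H" and day == "Tue")
    if awake and day == "Mon":
        mon_awake += 1
    elif awake and day == "Tue":
        tue_awake += 1

print(mon_awake / n, tue_awake / n)    # roughly 0.5 and 0.25
```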

@ RSM

in proposition 2 you are applying the principle of indifference incorrectly. You cannot be indifferent about an event that is conditioned on two different events (H and T in your case). You can only be indifferent between different events that are conditioned on the same event. Quoting Elga on his application of the principle of indifference, where T1 is the event “Tails and Monday” and T2 is the event “Tails and Tuesday”: “If (upon first awakening) you were to learn that the toss outcome is Tails, that would amount to your learning that you are in either T1 or T2. Since being in T1 is subjectively just like being in T2, and since exactly the same propositions are true whether you are in T1 or T2, even a highly restricted principle of indifference yields that you ought then to have equal credence in each. But your credence that you are in T1, after learning that the toss outcome is Tails, ought to be the same as the conditional credence P(T1|T1 or T2), and likewise for T2. So P(T1|T1 or T2) = P(T2|T1 or T2), and hence P(T1) = P(T2).”

However, this is not the main problem with your approach. The problem is that in order for any of your calculations to become meaningful and unambiguous you have to construct a probability space. Quoting Wikipedia: “In probability theory, a probability space or a probability triple is a mathematical construct that models a real-world process (or ‘experiment’) consisting of states that occur randomly. A probability space is constructed with a specific kind of situation or experiment in mind. One proposes that each time a situation of that kind arises, the set of possible outcomes is the same and the probabilities are also the same.

A probability space consists of :

1. A sample space, Ω, which is the set of all possible outcomes.

2. A set of events F, where each event is a set containing zero or more outcomes.

3. The assignment of probabilities to the events; that is, a function P from events to probabilities.”

Unless you do so, you are computing ill-defined quantities. How is “Awake” defined, and what value is assigned to P(A) in your model?

It is how you define the experiment that determines what the outcomes are and whether they are mutually exclusive or not. This is why, in ERE as it is defined, P(M)=1 and P(Tu)=1/2, whereas in SBRE P(M)=3/4 and P(Tu)=1/4.

The arguments you are presenting correspond to an ill-defined probability space that is clearly different from SBRE. Thus, they cannot help prove my approach wrong.

Once SBRE’s probability space is defined, the computation of probabilities is a straightforward application of axiomatic probability theory and cannot be rebutted.

I would appreciate some thoughts and arguments on why SBRE should not be used by SB upon awakening. This is the crucial part of my approach. I remind you, though, that if SBRE is dismissed I am expecting the definition of a different random experiment that can model SB’s situation upon awakening and in which “Heads and Monday”, “Tails and Monday”, and “Tails and Tuesday” are mutually exclusive and collectively exhaustive events.

Thanks @RSM for forwarding this. I don’t believe there is one definition that is appropriate for all problems. We would be better served, I believe, by identifying the concepts the different definitions are trying to capture, so we can decide which should be used in particular contexts. For example, see my comments above contrasting the OJ/AJ variant with the standard SB problem, and Stubborn Mule’s subsequent comments. As I state above, I don’t think the halfer (invariance, if you like) definition is appropriate to use in the OJ/AJ context. I will read the paper in more detail, but looking at the section you directed me to (with the examples involving multiple subjects), I don’t see that this type of distinction (which to me is what makes the SB problem interesting) has been drawn by the author. A blunt question for you: do you see the OJ/AJ problem as equivalent to the SB problem, or different?

Ioannis,

Thanks for catching the error in my post. You are correct, (2) does not follow from indifference alone. It follows from indifference and the presumption of a fair coin. I should have written this out more explicitly:

2a. P(M,A,T) = P(M,A,H) by the principle of indifference

2b. P(T) = P(H) by fair coin

2c. P(M,A,T)/P(T) = P(M,A,H)/P(H) by (2a) and (2b)

2. P(M,A|T) = P(M,A|H) by definition and (2c)

And thanks but no thanks for the condescending lecture on how to construct a probability space, as if I had not already explored this space for the SB experiment. If you had bothered to do the same with the correct assumptions, you would see that the answer “1/3” results.

You seem confused about Awake and how to calculate P(A). Awake is the set containing the following atomic elements of the probability space: (A,M,T), (A,Tu,T), (A,M,H), (A,Tu,H). P(A) is the sum of the individual probabilities of these four elements: 1/4 + 1/4 + 1/4 + 0 = 3/4.
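Spelled out in code, that space looks like this (a sketch of the four awake atoms just listed; the tuple labels are mine):

```python
# RSM's probability space: awake atoms (awake, day, coin), each with
# weight 1/4 except the impossible (Awake, Tuesday, Heads) atom; the
# remaining 1/4 of mass sits on the asleep (Tuesday, Heads) state.
from fractions import Fraction

q = Fraction(1, 4)
atoms = {("A", "M", "T"): q, ("A", "Tu", "T"): q,
         ("A", "M", "H"): q, ("A", "Tu", "H"): Fraction(0)}

p_awake = sum(atoms.values())                                     # 3/4
p_heads_and_awake = sum(p for (a, d, c), p in atoms.items() if c == "H")
print(p_awake, p_heads_and_awake / p_awake)                       # 3/4 1/3
```

Conditioning on the Awake set in this space is what produces the thirder answer P(Heads|Awake) = (1/4)/(3/4) = 1/3.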

I am not interested in pursuing the red herring of “SBRE” or other random experiments. The experiment that matters is the one being conducted, it has a well-defined probability space, and there is no need to go through all the contortions and obfuscations halfers and double-halfers are prone to.

Giulio, one minor correction: The invariant position in Manley’s paper is actually the double-halfer position, not the halfer position. The proportional position is the single-halfer, and the frequentist position is the thirder. Manley critiques all three; it is the invariant position he finds leads to the most counter-intuitive results, followed by the proportional, followed by the frequentist. Of course I respect that not everyone will have the same opinion about which results are most troubling, and as far as I am aware, there are no logical inconsistencies in any of the three. Of course, one may ask which one(s) actually represent(s) a true probability calculation. I have not addressed that question and unless I missed it, neither does Manley.

I find the thirder position to be the most parsimonious, and with no good reason to reject it, I therefore prefer it.

As for the OJ/AJ variation, it could be stated as equivalent if we see the Juice variable replacing the Wakefulness variable (since SB is awake in all four situations, we don’t need the latter) — that is, it is just a matter of substitution and unless you see a reason to modify priors in this variation (I don’t), the same answer falls out. One could also build a probability space based on the four variables (Coin, Day, Juice, Wakefulness) in which case I would say that it is not strictly equivalent to, but is reducible to, the original problem.

@ RSM

the principle of indifference requires that we have no reason to believe one outcome will occur preferentially compared to another, i.e. in (2a), no reason to believe that (M,A,H) will occur preferentially compared to (M,A,T). However, in order to decide if this is the case we have to examine how these events are defined. In ERE, P(M)=1, P(A)=1, P(M,A,H)=P(H)=P(T)=P(M,A,T)=1/2. Thus, these events (as defined in ERE) are indistinguishable and we have no reason to prefer one over the other. However, in SBRE, P(M)=3/4, P(A)=1, P(M,A,H)=P(H)=1/2, P(M,A,T)=1/4, P(Tu,A,T)=1/4. In the SBRE case, if the coin is tossed Tails a Monday awakening is randomly selected as SB’s current state only 1/2 of the time, whereas in case of Heads it is selected every time. Thus, in SBRE we have reason to believe that (M,A,H) will occur preferentially compared to (M,A,T).

You are arguing that “The experiment that matters is the one being conducted, it has a well-defined probability space, and there is no need to go through all the contortions and obfuscations halfers and double-halfers are prone to”.

However, in case of Tails the conducted experiment results (in the same trial) in both a Monday and a Tuesday awakening. Thus, how can you justify, in the probability space you are using, that the events (M,A,T) and (Tu,A,T) are considered mutually exclusive? If (M,A,T) occurs in the conducted experiment, it is certain that (Tu,A,T) also occurs, and vice versa. They are just two events that are equivalent to the outcome Tails.

In my paper I have defined the probability space of the random experiment that corresponds to the conducted experiment. It is ERE, and it has only two outcomes, Heads or Tails. I also argue that SB can use it to answer the question about her credence in Heads upon awakening, since the memory loss is not about the experiment’s setup (she still knows that it is valid).

The “red herring” of SBRE, as you called it, is introduced only to address the other uncertainties SB faces upon awakening, such as what day it is. These uncertainties cannot be addressed by the random experiment that corresponds to the conducted experiment. Notice that according to the conducted experiment a Monday awakening is certain! Elga, in his line of argument, uses conditional probabilities on different day awakenings, mixing two probability spaces. I am using SBRE to explicitly define the second probability space Elga is using to produce his mixed results. Even if you disagree with the relevance of SBRE, the problem still remains: what is the random experiment you can use in which (M,A,T) and (Tu,A,T) are mutually exclusive, since it cannot be the one corresponding to the conducted experiment?

@RSM: The fact that you don’t see a fundamental difference between (what needs to be calculated in) the traditional SB problem and the Juice variant reflects in my opinion the central point of contention between thirders and halfers (at least halfers like myself).

Halfers see the two problems as different because in the traditional SB problem (conditioning on being Awake) she could never not be Awake and be asked the question; while in the Juice variant (conditioning on being served OJ) she could be in a situation where she was not being served OJ and be asked the question.

This exchange has helped me find a better way to express this in words: in the standard SB problem she needs to “condition on only ever being Awake”, while in the Juice variant she needs to “condition on being served OJ instead of something else”. These are different types of conditioning operations (part of the point of my note was to put this in mathematical terms).

@ Giulio & RSM

Well posed, Giulio. This is exactly the case. The two problems are different for the reasons you present. In a previous post RSM wrote: “(A,M,T), (A,Tu,T), (A,M,H), (A,Tu,H). P(A) is the sum of the individual probabilities of these four elements: 1/4 + 1/4 + 1/4 + 0 = 3/4”. If I understand correctly, the above probabilities imply that in the probability space RSM is using there is also the (As,Tu,H) event with probability 1/4. Apparently, the (As,Tu,H) event corresponds to the (AJ,A,Tu,H) event of the variant problem.

I argue that it is always safer to explicitly define the random experiment you are using to derive the employed probability space. In the OJ/AJ variant, you again need to model your situation upon awakening as a random selection between a Monday and a Tuesday awakening (in order to get “Monday” and “Tuesday” events that are mutually exclusive). Again P(A)=1. Also, in this case (unlike in the original SB problem) a Tuesday awakening is independent of the coin toss result (remember that in the original problem a Tuesday awakening is conditioned only on Tails). However, in this case P(OJ)=3/4 (as P(A) is in the original problem according to RSM), whereas according to SBRE in the original problem P(A)=1. Thus, SBRE, which is rigorously defined in my paper, models what Giulio describes. I agree with him that this is the central point of contention between halfers and thirders. Thus, I would like to pose the question: “What is the situation thirders propose, such that each time it arises the set of possible outcomes is the same, the probabilities are also the same, and it results in SB being awake with probability 3/4?” It cannot be the conducted experiment, since each time it arises SB is awakened at least once.

In most cases, once you know the conducted experiment it is straightforward to define the sample space and assign probabilities. However, there is no guarantee that the conducted experiment (a) has the properties to qualify as a random experiment, and (b) is the suitable random experiment with the expressive power to model the uncertainties you are interested in. Thus, I suggest always playing it safe and checking (a) and (b) before launching into computations that may turn out to be irrelevant. In the SB case, the approach I propose directly exposes the subtle differences between the conducted experiment, the random experiment SB should employ to model her uncertainty about the day of the week upon awakening, and the OJ/AJ experiment.

Giulio,

Perhaps the distinction you are looking for is the same as Lewis’s de se vs. de dicto/de re evidence, or at least along very similar lines. At any rate, you do see a distinction between SB and SB/Juice.

Does the following scenario match SB or SB/Juice? Or is it graded, somewhere between the two? If it is graded, how do you calculate its probability?

SB/Drunk: The experiment is the same as SB, except for the following condition: If heads, instead of remaining asleep on Tuesday, she will first be given an infusion of ethanol, enough to make her quite drunk — then she will be awakened. In the other three circumstances (M/T, Tu/T, M/H) she will wake up sober.

Keep in mind that different versions of this experiment may use different dosages of ethanol: enough for her to feel a light buzz, enough to make her very drunk but capable of reasoning, enough to render her incapable of reasoning (about probability, at least) but still somewhat responsive, or enough to put her into a stupor while remaining barely conscious.

At which point in that spectrum, if any, does this experiment stop being like the one and become like the other?

I will check back for your answer because I’d like to know what you think. I probably won’t respond after that, as it’s time I got on to other things, although I may check back later to see if Sean posts another follow-up article.

RSM, assuming (as usual) SB has full information on how the experiment is conducted, your Sober/Drunk problem (which reminds me of times in my own life) involves the same type of conditioning as the OJ/AJ example. I.e. if asked to guess the toss when sober, she should “condition on being sober instead of not sober”. So she should calculate that heads is half as likely as tails.

I understand however you are asking what happens in the case where she can’t be sure if when drunk she will register being asked a question at all. Suppose we assume when drunk she will register being asked the question (let’s call this being effectively conscious) with probability p, and not be able to register that she was asked a question (so effectively unconscious) with probability 1-p.

This experiment has 3 complete behaviours: one where tails was tossed (which occurs with probability ½); one where heads was tossed and she is effectively conscious on Tuesday (probability p/2); and one where heads was tossed and she is effectively unconscious on Tuesday (probability (1-p)/2).

How should she calculate the likelihood that heads was tossed if she awakes and finds she is sober? First she should “condition on only ever being effectively conscious”, and then with respect to that “condition on being sober instead of not sober”.

In the notation of the paper, calculating \Pi (Heads and Sober | EffectivelyConscious) gives ½*(0) + (p/2)*(½) + ((1-p)/2)*(1) = (2-p)/4.

Calculating \Pi (Tails and Sober | EffectivelyConscious) gives ½*(1) + (p/2)*(0) + ((1-p)/2)*(0) = 1/2.

So the toss being heads is (2-p)/2 times as likely as it being tails; i.e. the “probability” it was heads is (2-p)/(4-p).

When p = 0, we get the halfer result for the standard SB problem; when p = 1, we get the OJ/AJ result of 1/3.
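The interpolation can be checked numerically. The sketch below recomputes the two \Pi values from the behaviour weights above and verifies that the ratio reduces to (2-p)/(4-p).

```python
# Check of the Sober/Drunk interpolation: P(Heads | awake and sober)
# from the three behaviour weights 1/2, p/2 and (1-p)/2, with Sober
# values 0, 1/2 and 1 for Heads and 1, 0, 0 for Tails.
from fractions import Fraction

def p_heads_given_sober(p):
    heads = Fraction(1, 2) * 0 + (p / 2) * Fraction(1, 2) + ((1 - p) / 2) * 1
    tails = Fraction(1, 2) * 1
    return heads / (heads + tails)

for p in [Fraction(0), Fraction(1, 2), Fraction(1)]:
    assert p_heads_given_sober(p) == (2 - p) / (4 - p)

print(p_heads_given_sober(Fraction(0)), p_heads_given_sober(Fraction(1)))  # 1/2 1/3
```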

Thanks for your comments and references. My own point of view has been clarified through the process – I realize there are some omissions in the paper I wrote, and probably the best formulation of “conditioning on only ever being in a state satisfying property C” is in terms of an operation on experiments (you create a new Markov Process which removes the states that don’t satisfy C, and where you need to add the composite transitions that pass through the states you remove).
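One standard way to realize this operation is the “censored” (watched) Markov chain: the removed states are marginalized out, and the composite transitions passing through them are added back as direct transitions, P' = P_CC + P_CD (I - P_DD)^{-1} P_DC. This is a sketch under that assumption, not a construction taken from the paper, and the 3-state chain is hypothetical.

```python
# Censored Markov chain: condition on only ever being in the kept
# states by composing transitions that pass through removed states.
import numpy as np

P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.2, 0.4, 0.4]])
keep = [0, 1]          # C: the states satisfying the property
drop = [2]             # states removed from the experiment

Pcc = P[np.ix_(keep, keep)]
Pcd = P[np.ix_(keep, drop)]
Pdd = P[np.ix_(drop, drop)]
Pdc = P[np.ix_(drop, keep)]

# New chain on C alone; each row still sums to 1.
P_new = Pcc + Pcd @ np.linalg.inv(np.eye(len(drop)) - Pdd) @ Pdc
print(P_new)
```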

@ Giulio

very concise analysis. I agree with your approach and results. I also agree with your suggestion to create a new Markov Process which removes the states that don’t satisfy C in order to “condition on only ever being in a state satisfying property C”.

Is it safe to conclude that if we define a Markov chain and assign zero transition probability from the begin state to itself, unit transition probability from the end state to itself, unit prior probability for the begin state, and zero prior probability for the remaining states, we will end up with a setup similar to the one you propose in your paper?

I would also appreciate any thoughts on SBRE and my proposal that even though you can use an assumed random experiment to model your uncertainty upon awakening, once you learn that it is Monday you cannot consider it evidence that the event “Monday*” has occurred.

Thanks, Giulio. I didn’t really have in mind a probabilistic interpretation of the Tuesday+Drunk state, but rather a case where she knew in advance exactly how “out of it” she would be. So I expected more of a gradation between various Tuesday states of awareness. Even if I believed that a different probability measure were warranted, I really don’t think we lose our selfhood enough when sleeping to warrant a full shift to halferism — I would reserve that for more extreme states of unconsciousness — for example, SB/Chair, where if the toss is heads, then on Tuesday, she is turned into a chair. Then standard SB would fall somewhere in a spectrum between SB/Juice and SB/Chair, with a result between 1/2 and 1/3.

Though I said I was done with this thread, a few additional thoughts came to me in the last 24 hours that I’d like to share.

First is that I note that, for the invariant measure you are proposing, the law of total probability (as conventionally stated) does not hold. That is, m(P) is not, in general, equal to the sum of m(P|Ci)*m(Ci) for all Ci in a given set of mutually exclusive and exhaustive conditions. So either a new form of the law must be found that holds for the invariant measure, or the law must be forfeited.

Second is that I was curious to note that in the paper, you explained the Director case by proposing that it is best modeled as an experiment in which none of the four states which the director could observe are temporally linked. But you grant that the SB/Juice case produces the 1/3 answer, even though those states are temporally linked. If the latter is allowed, then there is no need for the parallel model used in the Director case.

Third is a general comment. All of the deviations from standard probability that I have seen proposed in support of halfer or double-halfer positions (such as invariant measures, or Ioannis’s insistence on modeling with an SBRE that is not equivalent to the real Sleeping Beauty experiment, RSBE?) seem to lack any motivation from logical or mathematical necessity (such as an inconsistency resulting from the thirder answer), or even from utilitarian needs (such as setting odds for a fair bet). Instead, the motive seems to come from a need to avoid what is perceived as a counterintuitive result. I would caution that if we abandoned any mathematical theory that is merely counterintuitive, rather than contradictory, then mathematics would have stalled long ago; imagine the Axiom of Choice being abandoned because of the Banach-Tarski paradox. While I don’t find the halfer position inconsistent in any mathematical sense, it seems a less than parsimonious position that needs more than a psychological justification.

RSM, yes, the key point is that what Manley calls invariance is not a probability measure on the space of states, but in fact an expected value of such probabilities over the space of states (where the expectation is over the space of behaviours). As I pointed out in my introduction (with the example of the variant of the SB problem where the experiment ends a day early if Heads was tossed), the problem with what Manley calls the frequentist approach is that in cases like this it implies that the likelihood of being in the subset of states that completely characterizes a behaviour is different from the likelihood of that behaviour occurring (which doesn’t make sense from the reference frame of the experiment). But at this point we are going round in circles; and yes, Stubborn Mule, it looks like you were right.

Well, I am getting closer to distilling my thoughts and the exchanges here have been helpful in getting there, so thanks to all. So, I think that a third post on the subject is not far away…following in Bob’s footsteps in that respect too.

In the meantime, I do have a question for Ioannis in relation to the SBRE version of the experiment. You distinguish Monday* (SBRE) and Monday (ERE), but you also assume that, because the coin is fair, P(Heads) = P(Tails), which seems to be inherited from ERE. Given the different spaces you are working with, wouldn’t it also make sense to distinguish Heads* (SBRE) and Heads (ERE), and likewise for tails?

As this post is coming to an end, I should clarify my current position. (This may answer some of your questions Ioannis.) There are two sources of confusion, which when I wrote the paper I hadn’t fully resolved; and as a result I made the inaccurate statement that \Pi is not a probability measure (it is, but the type of conditioning required to solve the SB problem isn’t classical conditioning).

The first cause of confusion relates to how to define the probability that an experiment is in a subset P of its states. There are two obvious ways to do this. One is to define what I referred to as \Xi, which is characterized by \Xi(P) <= \Xi(Q) iff the expected number of times the experiment is in P <= the expected number of times it is in Q. The other is to define what I referred to as \Pi, which is characterized by \Pi(P) <= \Pi(Q) iff the expected percentage of times the experiment is in P <= the expected percentage of times it is in Q. I have claimed that \Xi is an appropriate measure for a reference frame external to the experiment, while \Pi is consistent with the internal reference frame of the experiment. The only way I can justify this is by appeal to examples such as the one where the SB experiment ends a day early when Heads is tossed (or Bob Walters' example based on his train experience).

The second source of confusion relates to conditioning. As both \Pi and \Xi are probability measures, they each admit their own standard form of conditioning. E.g. to solve the OJ/AJ variant of the SB problem I described in the comments above, I would use \Pi with its standard conditioning (specifically, conditioning on SB being served OJ). But one can also consider the construct of 'conditioning on the experiment only ever being in a subset C of the states', which I now believe is best treated as an operation that defines a new experiment that only has states C (though to treat it formally in this way I probably need to deal with the start and end states of an experiment a little differently to how I did in the paper). The standard SB problem, I claim, should be treated by first applying this type of conditioning to the experiment (condition on SB only ever being awake) and then applying \Pi. This construct does not satisfy the usual laws of conditioning (related to the 'interference' effects I describe in the paper when, say, considering series composites of experiments).

@ Stubborn Mule

technically we could distinguish Heads* (SBRE) and Heads (ERE), since they are events of different sample spaces. However, there is absolute correlation (do not interpret correlation in a strict mathematical sense here) between the two, since in the definitions of ERE and SBRE it is identified that they both correspond to the experimenter’s coin toss. Thus, it would only add unnecessary notation to use Heads* instead of Heads. The same does not apply to “Monday” and “Monday*”. They correspond to different things. “Monday” corresponds to a Monday awakening occurring during a trial of the conducted experiment. Even if the coin is tossed Tails, SB is awake, and it is now Tuesday, she knows that a “Monday” event has occurred (after all, it is a certain event in ERE). Hence, it is clear that according to ERE (which is the “real Sleeping Beauty experiment”, to use RSM’s phrasing), the “Monday” and “Tuesday” events are not mutually exclusive (this is why we have to define SBRE in order to use probabilities conditioned on Monday or Tuesday as Elga does). On the other hand, “Monday*” corresponds to an assumed situation, where in case the coin is tossed Tails either Monday or Tuesday is randomly selected as your current state. Notice that in this case Monday* and Tuesday* are mutually exclusive events, but there is no direct correspondence to a real-life event, since SB is always aware that such a selection never actually occurs, i.e. even if she learns that it is Tuesday she does not have evidence of a Tuesday* event.

In case there is confusion on using an assumed random experiment to model the uncertainty one has, I would like to present a toy example.

Imagine a game where you are presented with 3 doors. Behind one door there is a car, whereas behind the other two there are goats. The host of the game tells you that he always puts the car in door number 2. He then gives you a drug that makes you forget the last number you heard. What is the probability you should assign for the car being behind door number 2?

You know that there is no random selection; you know the host always picks the same door to hide the car, but you don’t remember which. Thus, you can model your situation as a random experiment where (on a principle of indifference) you assign probability 1/3 to the car being behind each door. You know that your experiment is just a model: if the conducted setup is repeated many times the car will always be behind the same door, whereas your assumed experiment expects roughly 1/3 of many repetitions to result in each door. Nevertheless, it is what you should use to compute the probability in question and to decide your betting strategy if any additional bets are offered. E.g. suppose the host offers you an alternative: he will toss a fair coin and, in case of Heads, give you the car; otherwise you get a goat. This is a real random experiment that will actually take place, like ERE. In this case it is better to choose the coin toss, since it gives you a 1/2 chance to win the car instead of the 1/3 you have on selecting one of the doors.

@ Giulio Katis

I find your approach very interesting and promising. However, the external versus internal reference frames you mention are not explicitly defined. Thus, I believe your work may benefit if a rigorous definition of these frames is provided. You argue that “The first cause of confusion relates to how to define the probability an experiment is in a subset P of its states.” I believe some clarifications are needed here. When is the experiment examined, by whom (what information does the examiner have) and how (by which procedure)? I don’t believe there is a unique probability for all possible cases. Thus, relating this to the initial discussion on the reference frames, I suggest defining the two frames by answering the above questions for each.

I had planned to get a final post up tonight, but time has got away from me, so I will post some preliminary thoughts here…good opportunity to get some further critiquing.

I am increasingly convinced that the central, if somewhat disappointing, issue here is ambiguity. The problem as posed seems to make it clear that it is a question that should have a single right answer, but there be dragons. Although the ambiguity is of a very different nature, there is something of a parallel with a poorly posed Monty Hall problem. If all you are told is that you have picked a door (say door A) and Monty has opened another to reveal a goat (say door B) and you are asked whether or not you should switch, you don’t have enough information to calculate the probability that the car is behind door C. If you assume a protocol for Monty whereby he (a) knows where the car is, (b) will always open a door to reveal a goat, and therefore opens B if the car is behind C and vice versa, and (c) chooses randomly between B and C if you have chosen correctly (i.e. the car is behind door A), then you can conclude that the probability of the car being behind door C is 2/3. If, however, Monty’s protocol is that he (a) doesn’t know where the car is himself and (b) chooses randomly between B and C (i.e. the two doors you didn’t initially select), then the probability of the car being behind door C, given that Monty just happened to show a goat behind door B, is only 1/2. So, if you are not told the protocol, what probability should you assign to the car being behind door C? There’s just not enough information to say. While you might get all Bayesian and assign a prior to these two protocols, I should note that there are a whole lot of other possible protocols too…
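The difference between the two protocols can be checked by simulation. A sketch of my own in Python (the door labels follow the text; everything else is illustrative):

```python
import random

DOORS = ("A", "B", "C")

def p_car_behind_C(rng, monty_knows, n=200_000):
    """Estimate P(car behind C | we picked A, Monty opened B, goat shown)
    under the two protocols described above."""
    hits = total = 0
    for _ in range(n):
        car = rng.choice(DOORS)
        if monty_knows:
            # Protocol 1: Monty always reveals a goat, choosing at
            # random when both unpicked doors hide goats (car behind A)
            opened = rng.choice([d for d in DOORS if d != "A" and d != car])
            goat_shown = True
        else:
            # Protocol 2: Monty opens B or C blindly; he may expose the car
            opened = rng.choice(("B", "C"))
            goat_shown = opened != car
        if opened == "B" and goat_shown:
            total += 1
            hits += car == "C"
    return hits / total

rng = random.Random(1)
print(p_car_behind_C(rng, True))   # roughly 2/3
print(p_car_behind_C(rng, False))  # roughly 1/2
```

Conditioning on exactly the same observation (a goat behind door B) gives different answers under the two protocols, which is the whole point: without the protocol the question is underdetermined.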

So, the careful posers of the Monty Hall problem will spell out the protocol precisely (usually the first of the above, thereby concluding you should switch doors).

What then of Sleeping Beauty? I think that here the question of what applied probability we are trying to calculate (or, in philosophers’ parlance, Sleeping Beauty’s “credence”) is not well defined. There are a number of things it could reasonably be. One of them would lead you down a halfer route, one would lead you down a thirder route (and perhaps another down a double-halfer route). Rather than any one of these being correct, I think that halfers or thirders happen to be drawn to initially seeing one or the other interpretation as more natural, and then calculate that probability. This is not a mathematics or pure probability question, it is a (rather artificial) question of applied probability.

I imagine few are convinced at this point, but bear with me. There are a few ways in to this way of thinking. Let’s start with betting. A very standard way of meshing the calculus of probability with the application of credence is to consider the price of fair bets (a classic reference here is van Fraassen, C., “Belief and the Will”, The Journal of Philosophy 81, 235 (1984)). So how would that work here? A thirder would offer Sleeping Beauty the bet on every awakening that pays $1 if the coin is heads then argue that the fair price of the wager is $1/3. QED. For the halfer this feels like a cheat: if the coin comes up tails, the bet is placed twice, if it comes up heads it’s only placed once. The halfer may counter with an alternative bet. Again, Sleeping Beauty is offered a payout of $1 if the coin is heads, but has to place her bet in a locked ballot box. On Wednesday, after the experiment is over, the bet will be settled, but only one bet: if she woke twice as a result of a tails toss, there will be two bets in the box but she will only be made to pay the price on one (or, if the wagers were different amounts, the average of the two). I suspect this feels like a more legitimate bet for a halfer, but thirders will cry foul.

Another way in, rather than betting, is through simulation. In these days of easy computing, if in doubt fire up Excel, R or your calculator of choice, and simulate away. Certainly simulations have proved to be an effective way to convince doubters of the traditional Monty Hall solution. But how do we simulate it?

A thirder may proceed like this (I’ve seen it done!). First, we don’t know whether the coin is Heads or Tails, so we simulate that with a 50% probability each way, then we simulate Monday/Tuesday as 50% each way as well. Here are some possible runs:

1. Tails Mon

2. Heads Tues

3. Tails Mon

4. Heads Tues

5. Tails Tues

6. Tails Mon

7. Tails Mon

8. Heads Mon

9. Tails Tues

10. Tails Tues

We would then scratch runs 2 and 4 because Sleeping Beauty sleeps through the day. This should seem fairly natural to RSM and, with enough simulations, this will give you the thirder position.
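This scratch-the-impossible-runs procedure might look like the following in Python (my own sketch, not code from any commenter):

```python
import random

def thirder_runs(rng, n):
    """Simulate coin and day as independent 50/50 draws, then scratch
    the (Heads, Tues) runs, since Beauty sleeps through that day."""
    runs = []
    for _ in range(n):
        coin = rng.choice(("Heads", "Tails"))
        day = rng.choice(("Mon", "Tues"))
        if not (coin == "Heads" and day == "Tues"):
            runs.append((coin, day))
    return runs

rng = random.Random(0)
runs = thirder_runs(rng, 300_000)
p_heads = sum(coin == "Heads" for coin, day in runs) / len(runs)
print(p_heads)  # roughly 1/3: the thirder position
```

Each of the three surviving combinations ends up with roughly equal frequency, which is exactly the thirder's probability space.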

The halfer would recoil in horror. After all, every time the coin is Tails, there should be an awakening on both Monday and Tuesday, but the simulations above have four (Tails, Mon) but only three (Tails, Tues). A halfer would probably prefer the following approach:

1. Tails – Mon then Tues

2. Heads – Mon

3. Tails – Mon then Tues

4. Tails – Mon then Tues

5. Tails – Mon then Tues

6. Tails – Mon then Tues

7. Heads – Mon

8. Heads – Mon

9. Tails – Mon then Tues

10. Tails – Mon then Tues
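In Python, the halfer's preferred simulation might be sketched like this (again my own illustration; one record per experiment, with Tails producing both awakenings):

```python
import random

def halfer_runs(rng, n):
    """One record per experiment: a Tails toss produces awakenings on
    both Monday and Tuesday, a Heads toss on Monday only."""
    runs = []
    for _ in range(n):
        if rng.random() < 0.5:
            runs.append(("Heads", ("Mon",)))
        else:
            runs.append(("Tails", ("Mon", "Tues")))
    return runs

rng = random.Random(0)
runs = halfer_runs(rng, 200_000)
p_heads = sum(coin == "Heads" for coin, days in runs) / len(runs)
print(p_heads)  # roughly 1/2: the halfer position
```

Counting one outcome per experiment, rather than one per awakening, is what drives the answer back to 1/2.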

These two interpretations have a nice parallel in a paper referred to by Ioannis in his own paper: Groisman, B., “The end of Sleeping Beauty’s nightmare”, arXiv (2008). In this paper, Sleeping Beauty’s awakenings are replaced with a robot that either places a single green ball in a box if a coin toss is Heads, or two red balls if the toss is Tails. You could then ask: what is the probability that a green ball was placed in the box (1/2)? Or: what is the probability that a ball randomly drawn from the box is green (1/3)? Halfers interpret the Sleeping Beauty problem as the first question, thirders as the second.

Is one interpretation more correct or more natural? I don’t think so. Ultimately I fear that it is as ambiguous as the Monty Hall problem without the protocol.
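Groisman's ball model makes the two readings easy to compute side by side. A sketch (assuming Python; the setup follows the description above):

```python
import random

def fill_box(rng, n_tosses):
    """Repeat the robot's procedure: one green ball on Heads,
    two red balls on Tails."""
    balls = []
    heads_count = 0
    for _ in range(n_tosses):
        if rng.random() < 0.5:
            heads_count += 1
            balls.append("green")
        else:
            balls.extend(("red", "red"))
    return heads_count, balls

rng = random.Random(0)
n = 200_000
heads_count, balls = fill_box(rng, n)
p_green_placed = heads_count / n                   # "was a green ball placed?"
p_green_drawn = balls.count("green") / len(balls)  # a ball drawn at random
print(p_green_placed, p_green_drawn)  # roughly 1/2 and 1/3
```

Both numbers come out of the very same simulated tosses; only the thing being counted (tosses versus balls) differs.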

You have probably noticed that I’ve really only talked about two interpretations (halfer and thirder), not double-halfer, which is the position Ioannis takes. This involves yet another probability distribution, but somehow it doesn’t seem quite as natural to me as the other two. But, since I’m arguing the question is ambiguous, I can’t rule it out!

I should add that it is only through the unusual device of the amnesia drug that the whole problem arises. Without it, you can’t get the multiple outcomes evident in the halfer simulations.

@ Stubborn Mule

I agree with you that “This is not a mathematics or pure probability question, it is a (rather artificial) question of applied probability”. It all comes down to how you should apply probability theory in case you are SB and you are asked upon awakening what is your credence that the coin was tossed Heads.

In my previous posts I have argued that we should explicitly define the random experiment we are using to construct our probability space. The simulation approach requires exactly that. There is nothing wrong with the halfer’s simulation. It is consistent with SB’s situation upon awakening and it can be used to compute P(Heads)=1/2. However, it cannot be used to calculate the probability that “today is Monday” (as there are only two outcomes, “Heads-Mon” and “Tails-Mon then Tues”).

What about the thirder’s simulation? Is it valid to assume such a setup? Obviously not. In the proposed setup, the day selection is modeled as independent of the coin toss. This is clearly not the case, since when the coin is tossed Heads the day selection (SB’s current state) is fixed to Monday. Scratching out runs doesn’t compensate for treating dependent events as independent. Notice that it is not a matter of phrasing or interpretation; the thirder’s simulation is based on an invalid assumption.

This is exactly why I have introduced SBRE, which takes this dependency into account. Thus, in the case of SBRE the simulation will produce something like this:

1. Heads-Mon

2. Tails-Mon

3. Heads-Mon

4. Heads-Mon

5. Tails-Tues

6. Tails-Mon

7. Tails-Tues

8. Heads-Mon

and again P(Heads)=1/2. However, in this case (unlike the halfer’s simulation) the model allows for the computation of P(Mon)=3/4.
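A quick sketch of an SBRE-style simulation (my own Python illustration of the setup described above) reproduces these numbers, including the P(Heads|Mon)=2/3 conditional:

```python
import random

def sbre_trial(rng):
    """SBRE: toss the coin; Heads fixes the day to Monday, while Tails
    selects Monday or Tuesday at random (the assumed selection)."""
    if rng.random() < 0.5:
        return ("Heads", "Mon")
    return ("Tails", rng.choice(("Mon", "Tues")))

rng = random.Random(0)
n = 200_000
trials = [sbre_trial(rng) for _ in range(n)]
p_heads = sum(coin == "Heads" for coin, day in trials) / n
p_mon = sum(day == "Mon" for coin, day in trials) / n
mondays = [coin for coin, day in trials if day == "Mon"]
p_heads_given_mon = sum(coin == "Heads" for coin in mondays) / len(mondays)
print(p_heads, p_mon, p_heads_given_mon)  # roughly 1/2, 3/4, 2/3
```

Unlike the thirder's simulation, here the day is drawn conditionally on the coin, so the dependency between toss and day is respected.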

The only bizarre result is that once you learn that it is Monday you cannot use this model to update P(Heads|Mon)=2/3. This is because the random selection of the day is only assumed (as it is assumed in the thirder’s simulation), it never occurs during the conducted experiment.

In my paper I also address the betting arguments on the SB problem. I will present my analysis on that in a following post. Briefly, my conclusion on that is that a correct betting strategy should be based on the P(Heads)=1/2 probability even in case of multiple bets per trial.

@ Stubborn Mule

Regarding the betting strategy SB should use upon awakening…

You have argued that: “A thirder would offer Sleeping Beauty the bet on every awakening that pays $1 if the coin is heads then argue that the fair price of the wager is $1/3. QED. For the halfer this feels like a cheat: if the coin comes up tails, the bet is placed twice, if it comes up heads it’s only placed once.”. The important thing is which betting strategy SB should apply once a specific betting setup is established. Thus, let’s examine the case where SB knows that she will be offered the bet (Heads pays $1) each time she is awakened, i.e. twice in case of Tails. It is also explained that in case of Tails she would also pay the wager twice. It is clear that the fair price of the wager is $1/3. Notice that even a halfer arrives at the same amount. Let me present the two sets of calculations:

a. Thirder’s approach:

Expected Gain=P(Heads)*(1-wager)$+P(Tails)*(-wager$)=0 –>wager=P(Heads)=1/3$ (correct result, total disrespect to probability theory)

b. Halfer’s approach:

Expected Gain=P(Heads)*(1-wager)$+P(Tails)*(-2*wager$)=0 –> wager=P(Heads)/(P(Heads)+2*P(Tails))=(1/2)/(1/2+1)=(1/2)/(3/2)=1/3 (correct result respecting probability theory)

Why do these totally different approaches produce the same result?

Coincidentally, P_half(Heads)/(2*P_half(Tails))=1/2= P_third(Heads)/P_third(Tails), and since we calculate the fair wager and the Expected Gain is set to zero it is only these ratios that matter.

If we set a different wager, e.g. 1/2$, we get:

a. Thirder’s approach:

Expected Gain=P(Heads)*(1-wager)$+P(Tails)*(-wager$)=

1/3*(1-1/2)$+2/3*(-1/2$)=1/6$-2/6$=-1/6$, whereas

b. Halfer’s approach:

Expected Gain=P(Heads)*(1-wager)$+P(Tails)*(-2*wager$)=

1/2*(1-1/2)$+1/2*(-2*1/2$)=1/4$-1/2$=-1/4$

Thus, in case of a 1/2$ wager SB should actually expect a loss of 1/4$ and not the 1/6$ that the thirder’s approach predicts. To make this example more practical, let’s assume that SB is told that if she accepts the bet she will get an additional 1/5$ once at the end of the experiment, i.e. in case of Tails she would pay 1$ for the two wagers and get a refund of 1/5$ (a total loss of 4/5$), whereas in case of Heads SB pays 1/2$ and gets 6/5$.

According to thirder’s approach SB is expecting a 1/6$ loss and a 1/5$ constant refund resulting to 1/30$ gain. However, according to halfer’s approach, SB is expecting a 1/4$ loss and a 1/5$ constant refund resulting to 1/20$ loss. Thus, SB should not accept the bet described in this new setup.
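The refund example can be checked by simulating the conducted experiment directly (my own Python sketch, following the payoffs described above):

```python
import random

def experiment_gain(rng, wager=0.5, refund=0.2):
    """Net gain for one conducted experiment under the refund setup:
    Heads pays $1 on a single wager, plus the refund; Tails means the
    wager is paid on both awakenings, then the refund is received."""
    if rng.random() < 0.5:
        return (1 - wager) + refund  # Heads: +1/2 + 1/5 = +7/10
    return -2 * wager + refund       # Tails: -1 + 1/5 = -4/5

rng = random.Random(0)
n = 200_000
avg_gain = sum(experiment_gain(rng) for _ in range(n)) / n
print(avg_gain)  # roughly -0.05, i.e. the 1/20$ loss per experiment
```

The long-run average per experiment matches the halfer's calculation of a 1/20$ loss, not the 1/30$ gain from the thirder's formula.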

I wonder what a thirder would do if he is repeatedly offered this bet and after 1000 trials realizes that he is losing money…

The bottom line is that offering a bet twice in case of Tails (with P(Heads)=1/2=P(Tails)) is not equivalent to offering a bet once with P(Heads)=1/3 and P(Tails)=2/3.

@Ioannis I think you just proved my point! “For the halfer this feels like a cheat: if the coin comes up tails, the bet is placed twice, if it comes up heads it’s only placed once”. A committed thirder would disagree with you, arguing that by posing the question to the freshly awoken Beauty, the reference class of the question must be awakenings not experiments and the bet is only offered once per experiment.

I don’t think it is correct to say that the thirder approach has total disrespect to probability theory. There is a mathematically consistent probability space they are using: Ω = {(Heads, Mon), (Tails, Mon), (Tails, Tues)}, with a probability of 1/3 attached to each. This also happens to be the appropriate probability space for the director who turns up to observe the experiment, knowing the protocol but not knowing whether it’s day one or day two of the experiment, and sees Beauty awake. I’d be interested in RSM’s view on this, but I would say that most thirders would say that the probabilities should be the same for Beauty, the director and Giulio’s AJ/OJ scenario. There’s nothing wrong with this mathematically, it’s a question of applied probability rather than probability. A halfer would claim that there is something different between the director’s perspective and Beauty’s and so the same probability space does not seem appropriate.

My own view is that the halfer and the thirder position are answers to two different questions: (1) what would be the long run frequency of heads per experiment and (2) what would be the long run frequency of heads per awakening. My contention is that Sleeping Beauty question as posed is ambiguous. It is not clear which of those two questions is the most natural or correct interpretation of the question being asked.
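These two long-run frequencies can be computed from the very same simulated experiments (a Python sketch of my own, to illustrate that it is the reference class, not the simulation, that differs):

```python
import random

def long_run_frequencies(rng, n):
    """Run n experiments; count Heads per experiment and per awakening."""
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n):
        heads = rng.random() < 0.5
        if heads:
            heads_experiments += 1
            heads_awakenings += 1   # the single Heads-Monday awakening
            total_awakenings += 1
        else:
            total_awakenings += 2   # Tails-Monday and Tails-Tuesday
    return heads_experiments / n, heads_awakenings / total_awakenings

rng = random.Random(0)
per_experiment, per_awakening = long_run_frequencies(rng, 200_000)
print(per_experiment, per_awakening)  # roughly 1/2 and 1/3
```

Same coin tosses, two denominators: experiments give the halfer's 1/2, awakenings give the thirder's 1/3.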

@ Stubborn Mule

The analysis in my previous post is based on the betting setup you have suggested. Quoting you: “A thirder would offer Sleeping Beauty the bet on every awakening that pays $1 if the coin is heads then argue that the fair price of the wager is $1/3. “. Thus, it is an undeniable fact that the bet is offered twice in case of Tails, for both thirders and halfers (and SB). Consequently, it is a total disrespect of probability theory to confuse a bet offered twice in case of a fair coin tossed Tails with a bet offered once, but with P(Tails)=2/3. However, my analysis demonstrates why both approaches result in the same correct fair price of 1/3$ for the wager. It also explains that for any other computation of the expected gain the two approaches produce different results, and that the halfers’ approach should be used by SB to determine her betting strategy upon awakening.

I never argued that “placing the bet twice feels like a cheat”. I simply point out that SB is aware of that fact and should take it into account in her calculations of the expected gain. This is why in case of Tails the wager is multiplied by a factor of 2 in the halfers’ formula.

You also argue that:

“A committed thirder would disagree with you, arguing that by posing the question to the freshly awoken Beauty, the reference class of the question must be awakenings not experiments and the bet is only offered once per experiment.”. I have already quoted you saying that a thirder would offer Sleeping Beauty the bet on every awakening. Thus, in case of Tails the bet is offered twice per experiment (once per awakening). If a thirder chooses to ignore this and insists on counting as different outcomes awakenings that occur during the same toss, he is doing exactly what I argue he cannot do, i.e. he models a situation where a bet is offered twice in case of a fair coin tossed Tails, as one where a bet is offered once, but with P(Tails)=2/3.

You also argue that:”My own view is that the halfer and the thirder position are answers to two different questions: (1) what would be the long run frequency of heads per experiment and (2) what would be the long run frequency of heads per awakening.”

Exactly! However, (1) corresponds to the probability of Heads upon awakening, whereas (2) corresponds to the probability of Heads if an awakening is randomly selected among “Heads-Monday”, “Tails-Monday”, and “Tails-Tuesday” in a uniform way, i.e. P(Heads-Monday)=P(Tails-Monday)=P(Tails-Tuesday)=1/3. My point with SBRE is that we have no reason to assign equal probabilities to these outcomes, since in case the coin is tossed Heads, Monday is automatically selected, whereas in case of Tails either Monday or Tuesday is selected.

A general comment I would like to make (prompted by StM’s characterization “committed thirder”, and some advice from RSM to halfers not to use intuition) is that I typically avoid assigning labels to people. I only use the “thirders, halfers, double-halfers” labels to denote the results of someone’s approach and nothing more, i.e. for me a thirder is one whose approach results in P(Heads)=1/3, a halfer is one whose approach results in P(Heads)=1/2, and a double-halfer is one whose approach, on top of P(Heads)=1/2, also results in P(Heads|Monday)=1/2. In the published literature we can find great differences between “halfers”, and the same applies for the other two cases. Thus, even if I don’t explicitly state it, my arguments against thirder positions are against specific thirder positions I have read in the published literature or on the internet. In my paper I have focused on arguments against Elga’s position. However, I have found fallacies in all of the works I have cited there (including halfers and double-halfers).

The result of my approach is that of a “double-halfer” (hence I am a double-halfer in that sense). However, it could as easily have been that of a thirder if the model I used produced such results. What I mean to say is that I didn’t follow any intuition and I had not decided beforehand that only the double-halfer approach makes sense. In my discipline (research on machine learning, artificial intelligence, image analysis) we often have to model certain situations using probabilistic models. Thus, I have a good idea of how easy it is to produce an absurd model that doesn’t correspond to anything meaningful and gives useless results. This is even easier when you follow intuition instead of careful planning. For constructing SBRE I didn’t rely on any intuition. I carefully answered the question of how I should model my situation if I were in the place of SB and wanted to compute the probability of Monday upon awakening (if you want to compute the probability of Heads upon awakening you can use ERE, which is straightforward but does not address the problem of conditioning on Monday that Elga poses). I would like to remind you that Elga does not explicitly define how he models the awakenings; he only states the outcomes (predicaments, in his terminology) and uses conditioning on Monday to conclude that all three (“Heads-Monday”, “Tails-Monday”, and “Tails-Tuesday”) are equiprobable. However, if one should be very careful when constructing a model, he should be ten times more careful when calculating probabilities without an explicitly defined model.

Speaking of explicit models, I would like to return to the analysis of simulations, where the models are necessarily well defined. Thus, Stubborn Mule, what is your response to my arguments on why the thirder’s simulation is absurd? (You have commented on my arguments regarding betting, but not on those in my other post regarding your suggestion of a thirder’s simulation.)

Sean, you are correct. I think Ioannis’s insistence that the thirder probability space is undefined or poorly defined is unfounded, and also, if I may say so, rather disrespectful. I also agree that there is some ambiguity with whether the “credence” posed by the question represents probability or something else (expectation of probability measure, per Giulio?). It is not at all correct to claim that thirders are not calculating a probability. Halfers are also calculating a probability, using different priors. Double-halfers seem to be calculating something else, as explained by Giulio. To me, it is as if the double-halfer is starting with a well-defined probability space and applying a distortion to it. It may not be invalid from a mathematical perspective, and it might represent something that can be called “credence” just as properly as a regular probability could be called “credence” — it seems to be a matter of definition.

I came across another paper, “When Beauties Disagree” (Pittard, http://www.johnpittard.com/John_Pittard/Research_files/Pittard.%20When%20Beauties%20Disagree.pdf), that reinforces my view that halfing destroys the notion that agents with identical knowledge and identical priors should have identical credences, while thirding preserves that notion. Pittard, however, rejects the notion (which he calls “modest proportionality”) in favor of one he calls “robust perspectivism”, at least for cases similar to SB. So on a purely philosophical (not mathematical nor utilitarian) motive, he embraces halferism (or double-halferism, to be exact). See if you agree with that motivation.

I have just started reading another paper on the subject, “Putting a Value on Beauty” (Briggs, http://joelvelasco.net/teaching/3865/briggs10-puttingavalueonbeauty.pdf), which is also promising. Based on the introduction, I think this one is leaning towards a thirder perspective.

Interesting to read and compare.

@RSM – thanks for the links. I’ve just started reading the Pittard paper. It’s a diabolical variation. Without having the benefit yet of the full argument, my initial reaction is to take a similar line to the one I have taken for SB, namely that the credence question is ambiguous. There is an “outside” view (where the reference class is entire experiments) in which Claire should continue to think her chances of being the victim are 1/4, as should Dillon. Then there is the “inside” view (reference class is awakenings) in which both Claire and Dillon should update their probabilities to 1/2. There is an additional subtlety here in the “outside” view. While Claire can assign 1/4 to her credence that she is the victim, even in this outside view, she should assign 1/2 to her credence that she is the victim conditional on seeing Dillon rather than one of the others.

@RSM – I’m not sure what the double-halfer interpretation of this variant would be.

Just when you think you’re out, you get pulled back in… This is indeed an interesting problem. There’s a sense in which upon seeing Dillon, Claire gets information (she now knows she is in one of two of the possible four behaviours of the experiment); but there is also a sense in which she doesn’t (she would be getting a similar type of information whenever she wakes up).

In the language I’ve been using, if you condition the experiment on only being in states where Claire is awake (i.e. take Claire’s reference frame), and then apply the probability measure Pi you calculate 1/4; while if you condition the experiment on only being in states where both Claire and Dillon are awake (i.e. take their joint reference frame), and then apply Pi you calculate 1/2. I’ll have to think about what this means.

What if we change this experiment a little. Suppose Claire is the only conscious entity, and the other three ‘beauties’ are inanimate dolls that have a waking and a sleeping state; and when Claire awakes she sees the doll Dillon is in its awake state. Does this change anything about the way Claire should reason?

My thirder/Bayesian reaction is that if she sees Dillon, she should reason: “I am more likely to encounter Dillon (specifically) if I am the victim, than if I am not. Therefore, seeing Dillon (and not one of the others) is evidence in favor of the proposition that I am the victim.” On the other hand, there is that nagging question of “What’s so special about Dillon? Wouldn’t seeing any of the others lead me to the same conclusion?” So I feel tugged both ways in this example, more so than in original SB.

But I keep coming back to that notion about equal credences for identical information and priors, and that grounds me to the thirder viewpoint, because the halfer viewpoint results in each thinking the other is more likely to be the victim.

Dear RSM,

I would like to assure you that I mean no disrespect to anyone, no matter what his/her position on the SB problem is. I also don’t argue that the thirders’ position is wrong because they use an ill-defined probability space. After all, so do the halfers and the double-halfers whose works I have read. Let me remind you that Elga talks about credence and predicaments and subjective probability. I don’t expect him or anyone else to rigorously define a probability space and base his analysis on that. It is I who am suggesting that a straightforward approach to the SB problem, using axiomatic probability theory, can be extremely useful, and I have presented such an approach in the paper I mentioned in my initial post. I argue that the subtle differences between halfers’ and thirders’ approaches are easier to resolve when such a rigorous framework is employed. Very sincerely, I would appreciate any specific comments on this approach. For instance, why do you think ERE cannot be used by SB when she is awakened to compute P(Heads), or why can SBRE not be used by SB to model her uncertainty about the day of the week upon awakening? Or what do you think about the bizarre conclusion that SBRE implies, i.e. that even if SB learns that it is Monday she cannot use SBRE to update P(Heads|Monday*)=2/3?

I would also like to thank you for the interesting links you’ve posted. I will post back my thoughts on these works once I have finished reading them.

Ioannis,

Thank you for addressing my concern. However, it’s statements like this that give that impression:

“thirders appear trying to calculate the probability that an external observer randomly inspecting the state of an experiment finds the state to be satisfying P, but use an absurd formula”

“However, this is not the main problem with your approach. The problem is that in order for any of your calculations to become meaningful and unambiguous you have to construct a Probability Space, which (quoting wikipedia)…[unsolicited lecture ensues]” — in spite of my having done just that (constructed a probability space).

“The arguments you are presenting correspond to an ill-defined probability space that is clearly different from SBRE. Thus, they can not help proving my approach wrong. Once SBRE’s probability space is defined the computations of probabilities are based on straightforward application of axiomatic probability theory and cannot be rebooted [sic].”

I still find no foundation for the claim that the thirder probability space is ill-defined and therefore reject your claim. As I noted before, it does not seem that SBRE can be relied on to resolve the question, because it is not equivalent to SB.

Dear RSM,

Taking the above parts of my previous posts out of context gives a wrong impression of my positions. I do not imply it has been done on purpose, yet I have to clarify some things:

When I said “thirders appear trying to calculate the probability that an external observer randomly inspecting the state of an experiment finds the state to be satisfying P, but use an absurd formula”, I was referring to Giulio Katis’ interpretation of what a formula might be that thirders use to calculate, within the framework he is proposing, the probability that an external observer randomly inspecting the state of an experiment finds the state to be satisfying P. Thus, all I am saying is that, in my opinion, if thirders (or anyone else) wish to calculate that probability within Giulio Katis’ framework, they should not use this formula.

Regarding the second post part:

“However, this is not the main problem with your approach. The problem is that in order for any of your calculations to become meaningful and unambiguous you have to construct a Probability Space, which (quoting wikipedia)…[unsolicited lecture ensues]” — in spite of my having done just that (constructed a probability space).”

Let’s agree to disagree. You believe that by referring to the conducted experiment and stating

” I note to start with that M (“It is Monday”) and Tu (“It is Tuesday”) are mutually exclusive, as are H (“the toss was heads”) and T (“the toss was tails”). Defining M and Tu so that they are not mutually exclusive, besides being arbitrary, is useless for the purpose of defining a sample space (unless your sample space is divided into atomic terms containing “M and not Tu” and the like, which, if followed, also leads to the thirder conclusion).”,

you have defined a probability space. In the wikipedia link I provided (as part of my argument, and not as a lecture) it is stated that “A probability space is constructed with a specific kind of situation or experiment in mind. One proposes that each time a situation of that kind arises, the set of possible outcomes is the same and the probabilities are also the same”. If, as you imply, the probability space you propose is associated with the conducted experiment, we cannot go ahead and define a Monday awakening and a Tuesday awakening as mutually exclusive events, since they can both occur in the same trial of the conducted experiment. Thus, in order for your analysis to be complete, a situation where a random selection of either a Monday or a Tuesday awakening occurs must be specified. It is then that the associated probability space can be defined. This is what I do with SBRE. I first describe the situation in which, when it arises, either a Monday or a Tuesday awakening occurs, and then I proceed with the definition of the associated probability space. It is exactly because I believe this is crucial for addressing the contention between halfers and thirders that I insist on being rigorous about the definitions of the probability spaces we are using, and not because I want to express any disrespect. My insistence is argumentative and nothing more.

As for the last quote:

“The arguments you are presenting correspond to an ill-defined probability space that is clearly different from SBRE. Thus, they can not help proving my approach wrong. Once SBRE’s probability space is defined the computations of probabilities are based on straightforward application of axiomatic probability theory and cannot be rebooted .”

It is like a summary of the previous quote. I am arguing that in the analysis you presented, before starting your computations you did not describe a situation in which, when it arises, either a Monday or a Tuesday awakening occurs. Thus, I argue that you are using an ill-defined probability space. The rest of the text in the quote is there to stress that it is not the computations (mine or yours) that we do once we start applying axiomatic probability theory that matter (neither can be rebutted), but the situation that is associated with them. This is why I go on and ask you for arguments on dismissing SBRE. I think it would not be fair to accuse you of being disrespectful for dismissing SBRE automatically, without providing any arguments on why we should do that. Nevertheless, you insist on accusing me of disrespect because I insist on my arguments that you (not thirders in general; as I said, I don’t expect everyone to use such an approach to tackle the SB problem) have not used a fully defined probability space to base your computations on in your previous posts. I am insisting because I have not yet gotten a response to these arguments. You just state that you have defined a well-posed probability space, despite the contradiction I pinpointed: in that space you define a Monday awakening and a Tuesday awakening as mutually exclusive events, whereas in the experimental setup your space is associated with, these events can occur in the same trial.

When you say:

“I still find no foundation for the claim that the thirder probability space is ill-defined and therefore reject your claim. ”

I have to clarify that I am not making a general claim that “the thirder probability space is ill-defined”. I am only referring to the space you are using in your analysis. Moreover, you argue that there is no foundation for the claim I make about your probability space. However, I have introduced a concise argument (i.e. that in that space you define a Monday awakening and a Tuesday awakening as mutually exclusive events, whereas in the experimental setup your space is associated with, they can occur in the same trial), which you have not addressed.

Finally you say:”As I noted before, it does not seem that SBRE can be relied on to resolve the question, because it is not equivalent to SB.”

I think by SB you are denoting the conducted experiment. In that case, I have to say that this is exactly the point. SB is not equivalent to SBRE, and this is not only desired but also necessary, since in SB a Monday awakening and a Tuesday awakening are not mutually exclusive events. The question is not if SBRE is equivalent to SB, but whether it can be employed by Sleeping Beauty upon awakening to model her uncertainty on the day of the week.

If by SB you mean something else please define it, in order for me to respond accordingly.

Best Regards,

Yannis