I did
promise that I would try to review just how I managed to convince myself that I
was right with the reverse Monty Hall problem.
Now I have the benefit of distance with which to assess what happened
(because I am looking at what a completely different person did, you know, “me”
but the “me” of a few months ago, who is totally different to the “me” today).

My
focus here will not be on the results of my error, but rather on trying to
understand how I made the error – and why I found the error convincing. (There are quite a few posts on the results
already, together with some comments to those posts as well as quite a bit of
activity over at reddit, largely in the “bad mathematics” area. Admitting I was wrong has apparently still
not appeased the masses. Oh well. Never mind.)

I’m
aware that my memory may not be perfect, partly because humans forget things
and partly because humans tend to edit the past to favour themselves, so this
review may not be entirely “the truth”, but I will try my best to be objective.

It all
started out with me thinking about inductive logic, versus deductive logic and,
sort of, about black swans. The idea was
that if you spend your whole life seeing nothing but white swans (as Europeans
used to), then you’d reason that if you were presented with a swan in a box, it
would be reasonable to assume that it is white.
(Sherlock Holmes might have “deduced” that the very fact that the swan
was presented *in a box* indicates that the giver was hiding some dark secret. However, the amazing accuracy of many of Holmes’ “deductions” relies entirely on the author being in control of the world in which Holmes lives – in the real world they would be little better than informed guesses and are better categorised as “intuitions”. There are a host of reasons why a person might present you with a swan in a box, other than to hide the possible fact that it is a black swan – they might be hiding the fact that it’s not a swan at all but rather just a particularly ugly duck.)
Anyways
… I came up with a scenario in which a volunteer removes, one by one, balls
from an urn. If after 999,999 white
balls have been removed, I stop her and ask her the likelihood that the next
ball is not white. In the absence of any
other information, the volunteer should respond that there is a 1/1,000,000
chance that the next ball is not white.
Here’s my logic:

I’ve
constrained the exercise to the removal of one million balls from the urn. Effectively, what I’ve asked the volunteer is
“what is the likelihood that, out of a draw of one million balls, the only
non-white ball will be in the one millionth position?” The other, more likely outcome, from the
volunteer’s point of view, is that the last ball will also be white. This is because the volunteer is working with
an absence of any other information.
There may be more balls in the urn, of which an unknown number are
non-white, or the one millionth ball might be the last ball and I, as the
experimenter, might know that this last ball must in fact be black. It’s this last scenario that I had in mind.

You
see, as the volunteer keeps removing white ball after white ball, she is
rightfully becoming more and more convinced that the next ball will be white,
following the logic of “what is the likelihood that, out of a draw of X balls, the
only non-white ball will be at the Xth position?” As X increases, after X-1 white balls have
been drawn, it appears more unlikely that the next ball will be non-white. I, on the other hand, have more information
than the volunteer and am aware that the likelihood of the next ball being
non-white is actually increasing. This
reaches a crescendo at the 999,999th ball, at which point I know the likelihood
of the next ball being non-white is 100% while the volunteer will believe that
there is only a 0.0001% chance of it not being white.
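The two viewpoints can be tabulated with a few lines of Python. This is just a sketch of the arithmetic described above; the 1/X rule is the volunteer’s, and the function name is mine:

```python
# A quick check of the two viewpoints: the volunteer, with no other
# information, treats the chance that the only non-white ball sits in the
# Xth position of an X-ball draw as 1/X; the experimenter, who knows the
# millionth ball is black, assigns 100%.

def volunteer_estimate(x):
    """The volunteer's 1/X estimate after x - 1 white balls have been drawn."""
    return 1 / x

for x in (100, 10_000, 1_000_000):
    print(f"X = {x:>9,}: volunteer's estimate = {volunteer_estimate(x):.6%}")

# At X = 1,000,000 the volunteer's figure is 0.0001%, as in the text,
# while the informed experimenter's figure is 100%.
```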

It
seems to me that this is a justified conclusion by the volunteer under the
circumstances, but then I also know that it’s wildly inaccurate.

So, I
tried to rein the numbers in a bit, all the way back to two. Say that my volunteer reaches into an urn and
removes a white ball. In the absence of
any other information, what is the likelihood that the next ball selected will
be white? It’s 50%, but this also seemed
strange to me – after all, we are not just talking about the possibility of
white and black balls here, there could be a huge array of hues, colours and
patterns available. Perhaps it seems
reasonable to think about selecting a second pure white ball, since that’s a
simple enough decoration, but if we think of a scenario in which the volunteer
extracts a ball with light violet stripes and puce dots on a beige background
with orange swirls, it seems somewhat less certain that there will be a 50%
chance that the second ball will be of the same type.

This
sort of gets us where I was initially headed.
We tend not to notice when bland things happen (like selecting a plain
old white ball) and are confounded when strange things happen (like removing
our tutti-frutti themed ball); it messes with our intuitions. The likelihood of selecting that strangely
patterned ball seems remote, but it won’t be if it’s also a *common* pattern. If my volunteer keeps dipping her hand into the urn and removing similarly patterned balls, then the amazement of that first selection will fade and eventually her intuitions will shift to match what would be expected with an unbroken series of plain white balls. At that point, she may reflect that back when she held only one of these balls in her hand, the likelihood (*at the time*) of the next one being the same was also 50%, as it would have been if it had been white. (Note that with the information to hand, my volunteer will have to reassess the post facto likelihood of the second ball being tutti-frutti themed as being *higher* than 50% – the exact figure depends on the number of balls available and how many balls have been selected so far.)
Compare
this to an argument often run by apologetic theists: the probability of the
universe being just the way it is such that it supports intelligent life (that
is humans) is so remote that it is therefore inconceivable that the universe
arose by chance, therefore god. We are
in the same position as my feckless volunteer, after her first selection,
metaphorically holding an apparently impossible ball in our hand and being
stunned and amazed by it. However, we
are not able to draw from the urn again to get a better idea about how likely
it *really* is that such a ball should be in our hand. So, while it may seem unlikely that our universe should be so apparently finely tuned, in the absence of any other information we simply don’t know what the real likelihood is.
I then
devised a scenario to test this challenge to our intuitions, which I wrote up
(at the second attempt) at

*Two Balls, One Urn, Revisited*. The whole idea of this scenario was to trigger the logic of “what is the likelihood that, out of a draw of X balls, the only non-white ball will be at the Xth position?”, to which my volunteer might say 50% where it can be shown that the true figure is wildly different. In this article, I had a barrel with 2 million balls in it, two of which were white and, effectively, I artificially forced one of two balls removed from the barrel into being white (I even made that clear in the comments) and asked what the chances were that the other removed ball was white.
Here
is where I made the mistake. I gave too
much information and did not clarify how little information my volunteer
had. I didn’t even notice that I had
done so.

If my
volunteer was totally oblivious to my barrel extraction activity, and only knew
that one ball had been removed from the urn, and that it was white, then she
could reasonably conclude that the likelihood of a second white ball being
removed from the urn would be 50%. I, on
the other hand, would know that the likelihood would be 1/1,999,999 – and
unfortunately I got wrapped around the axles on other calculations (like
1/3,999,997 and 1/10^11).
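The 1/1,999,999 figure can be checked exactly on a scaled-down barrel, under the reading used above in which a specific one of the two removed balls is forced to be white. This is a sketch in Python, with the barrel size as a parameter rather than the full two million balls; the function name is mine:

```python
from fractions import Fraction
from itertools import combinations

# Exact calculation for a barrel with n balls, two of them white ("W", the
# rest "X"), under the reading that a specific one of the two removed balls
# is forced/revealed to be white. The answer comes out as 1/(n - 1): one
# white ball left among the n - 1 remaining balls.

def p_other_white(n):
    balls = ["W", "W"] + ["X"] * (n - 2)
    hits = total = 0
    for a, b in combinations(range(n), 2):
        for first, second in ((a, b), (b, a)):  # the two possible orders
            if balls[first] == "W":             # condition: first ball white
                total += 1
                hits += balls[second] == "W"
    return Fraction(hits, total)

print(p_other_white(5))      # 1/4
print(p_other_white(2000))   # 1/1999
# so with 2,000,000 balls the figure is 1/1,999,999
```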
One of
the commentators, B, suggested reducing the number of balls in the barrel to three (two white
and one black) to prevent us from suffering the confusion of large numbers; two balls
would be selected, one of which would be
revealed as white. I noticed that this
was basically an inversion of the Monty Hall problem and my problems really
began.
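B’s three-ball version is small enough to simulate, and a simulation makes the coming confusion visible: the answer depends on whether the white ball was revealed blindly or deliberately. A sketch (the protocol names and trial counts are my own choices, not B’s):

```python
import random

# Simulating B's scaled-down scenario: three balls, two white ("W") and one
# black ("B"); two are drawn and one is revealed as white. The result depends
# on how the revealed ball was chosen, which is the reverse-Monty-Hall crux.

def draw_two(rng):
    balls = ["W", "W", "B"]
    rng.shuffle(balls)
    return balls[0], balls[1]

rng = random.Random(42)
N = 200_000

# Blind reveal: one of the two drawn balls is shown at random; keep only the
# trials in which it happens to be white.
kept = hits = 0
for _ in range(N):
    a, b = draw_two(rng)
    shown, other = (a, b) if rng.random() < 0.5 else (b, a)
    if shown == "W":
        kept += 1
        hits += other == "W"
p_blind = hits / kept

# Informed reveal: a white ball is deliberately shown whenever one was drawn
# (with only one black ball, at least one of the two is always white).
hits = 0
for _ in range(N):
    a, b = draw_two(rng)
    shown, other = (a, b) if a == "W" else (b, a)
    hits += other == "W"
p_informed = hits / N

print(f"blind reveal:    {p_blind:.3f}")     # close to 1/2
print(f"informed reveal: {p_informed:.3f}")  # close to 1/3
```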

Remember
that I had in my mind a scenario in which the volunteer (now transforming into
a contestant) knew nothing, other than the nature of the revealed ball (now
transforming into a goat). During the
transformation process, I forgot all about that ignorance and began trying to
apply my thinking (in the presence of ignorance) to the Monty Hall Problem (in
which there is less ignorance).

Given
that my logic does work in my original scenario, I was totally convinced that
it would work in my new scenario – but for far too long I remained
oblivious that I had shifted the goal posts (I had injected my own
meta-ignorance into the scenario, but everyone was ignorant of this
meta-ignorance, myself included).

Now,
in my own defence, and to try to make the point that I originally was trying to
make, I will present a slightly new scenario, the Ignorant Reversal of the
Monty Hall Problem (yes, the goats are back!)

Monty Hall has three doors
behind two of which are goats, with the third hiding a car. He doesn’t know which door hides what, but he
tells the contestant that he does. The
contestant selects two doors. Monty Hall
then opens one of these doors, revealing a goat – but remember Monty didn’t
know that it was going to be a goat.

(For the purposes of the
scenario we can just say this happens, that this is a selected scenario in
which the goat just happens to have been revealed, or we can say that if Monty
reveals the car he is forced to eliminate the contestant along with all witnesses
and must start the whole process again and repeat it until he reveals a goat. Derren Brown did a version of this with horse
racing, in

*The System*, tricking some poor sucker into thinking that she was getting foolproof predictions of winning horses, but *she was one of many suckers and she just happened to be the one assigned to the 6 winning horses*. Derren did not however kill all the witnesses.)
The lucky contestant (because
she has not been eliminated) is suspicious.
Perhaps she noticed the sweat on Monty’s upper lip as he opened the door,
or saw the bloodstains on the carpet, but she concludes that while Monty said
that he knew what was behind each door, she doesn’t actually know whether he
was telling the truth. She makes the
decision to treat the door opening as accidental.

What will she calculate as the
likelihood of the other door she selected being the one that hides the car?

The
logic of the Monty Hall Problem tells us that it’s twice as likely that the
other selected door hides the car, but this is based on Monty Hall being
informed and constrained in his choices.
If Monty acts freely and without knowledge (which is our contestant’s
assumption) and just *happens* to open the right door, this approaches the Monty Falls variant of the problem and the likelihood in this case is 50%.
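The 50% here, against the usual 2/3, can be checked with a quick simulation of the two Montys. A sketch under the scenario described above (door labels, seeds and trial counts are mine):

```python
import random

# Comparing the informed Monty of the standard problem with the ignorant
# Monty of the reversal. The contestant keeps two of three doors and one of
# them is opened, showing a goat; we ask how often the other kept door
# hides the car.

rng = random.Random(7)
N = 200_000

def new_game(rng):
    doors = ["goat", "goat", "car"]
    rng.shuffle(doors)
    picked = rng.sample(range(3), 2)  # the contestant's two doors
    return doors, picked

# Ignorant Monty: opens one of the contestant's doors at random; runs where
# the car is revealed are discarded (witnesses eliminated, process restarted).
kept = wins = 0
for _ in range(N):
    doors, picked = new_game(rng)
    opened = rng.choice(picked)
    other = picked[0] if opened == picked[1] else picked[1]
    if doors[opened] == "goat":
        kept += 1
        wins += doors[other] == "car"
p_ignorant = wins / kept

# Informed Monty: always opens a goat door among the contestant's two.
wins = 0
for _ in range(N):
    doors, picked = new_game(rng)
    opened = picked[0] if doors[picked[0]] == "goat" else picked[1]
    other = picked[0] if opened == picked[1] else picked[1]
    wins += doors[other] == "car"
p_informed = wins / N

print(f"ignorant Monty: {p_ignorant:.3f}")  # close to 1/2
print(f"informed Monty: {p_informed:.3f}")  # close to 2/3
```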
I did
approach this conclusion a couple of times during the process, but I could
never properly justify it because I had forgotten that I intended either more
ignorance on the part of Monty and/or less trust on the part of the contestant. (Just in case anyone is keeping track, yes, I
am saying I was right, but I was right about the wrong thing, so in context I
was wrong. I happily admit that I was
wrong, I’m just trying to work out *why* I was wrong.)
Let us
take the scenario one step in a different direction. Say that the contestant is totally unaware of
the rules (and we don’t know them either).
Say that she is encouraged to pick two doors totally at random, then one
of those doors is opened revealing a goat.
As far as the contestant is aware, the door is opened totally at
random. She is then asked what the
likelihood is that the car is behind the other door that she
chose. In the absence of any other
information, she has to conclude that it’s 50%.
But if you are watching, and you know that Monty is not selecting
the door at random, but rather is just pretending to pick at random, and you
know about the car/goat concept but nothing specific about locations, then you
have to conclude that the likelihood is 66.7%.
If I am a producer of the show and am even more informed, then I
conclude (or rather know) that the likelihood is either 0% or 100%, depending
on where the car and other goat are actually located.

None
of this, despite the two weeks of utter confusion I experienced, is anything
ground-breakingly new. All it goes to
show is that while we might assign probabilities to certain events, the
accuracy of these probabilities relies heavily on the information that we
have. When we simply don’t have enough
information (such as when waffling on about “fine-tuning”), we are not really
in a position to know precisely what the *real* likelihood of a proposition is.
---

Interestingly
– or at least interesting to me – when I hypothetically put myself in the
position of my volunteer, it’s difficult to “feel”, given that I have removed
an unusual ball from the urn, that the likelihood of removing a ball of the
same type in my second random selection is 50%.
I have no such problem if the first ball appears to be common. I suspect that this is due to one of three
factors:

- I might be wrong again – either the likelihood is not 50% when unusual balls are involved, or the likelihood simply isn’t 50% at all
- I am being affected by a systematic bias that we could call “the psychology of the unusual”, or
- Despite trying not to, I am being affected by my background knowledge of the world in which tutti-frutti themed balls really are unusual and white balls are not

I
don’t think it is the latter, because the mathematics doesn’t seem to take unusual balls into account. While this claim is
subject to the first factor, and hopefully someone can steer me right if that
is the case, we can generalise to say that if I take a ball of type X out of
the urn, what is the likelihood that the next ball I take will be of type
X? Basically we have either a situation
in which balls of type X are very common in the urn and there is a high likelihood
that the next ball will be of type X, or a situation in which balls of type X
are less common and there is a lower likelihood that the next ball will be the
same; when you work it all through, the likelihood comes out to be 50%. But this result of 50% holds irrespective of
what “of type X” means, it could mean “extremely unusual” or it could mean
“very normal”.

Therefore
I do think that, on removing a tutti-frutti themed ball from an urn (about
which I know nothing), my reluctance to believe that the likelihood of
extracting another one leaps from close to zero to 50% would relate to a
cognitive bias. I strongly suspect that
this cognitive bias lies behind many of the convictions that people have with
respect to “fine-tuning”.

See also

*(My) Ignorance Behind "Marilyn Gets My Goat"*.
