## Tuesday, 26 September 2017

### cnearing's argument, subjective probability and the whole Reverse Monty debacle

Over at Craig-Land, in his thread on what constitutes a "good argument", a forum member called cnearing wrote this:

> I would point out that I take a strictly subjectivist approach to probability. Probability is not an objective feature of reality (probably--set aside potential quantum weirdness for the moment) but rather a representation of the uncertainty one has about the way things behave.
>
> Probability comes from models. Models are built through inference from experience. Both are subjective.

This reminded me of the terrible trouble I got myself into with respect to the "Reverse Monty Hall Problem".  I went into it with an assumption which, for one reason or another, had become implicit rather than explicit and which then became forgotten, hidden and/or overlooked.

There are a lot of articles connected with the one I've provided a link to, so I'll try to distil it down to essentials.

If we don't know, and have no way to know, the probability that a particular claim or premise is true or false, we are required by the principle of indifference to assign it a notional probability of 1/2.  If we know more, for example that there are more options, say:

- A is true, B and C are false.
- B is true, A and C are false.
- C is true, A and B are false.
- One of A, B and C is true.

Then, knowing nothing else, we are required to assign a notional probability of 1/3 to each of A, B and C: we know nothing that makes any one of them more or less likely than the others.

Say then, that we have an urn containing some unknown number of balls, identical in size, and that is all we know.  We draw out all but one of them (say 999, leaving a thousandth ball in the urn), and they are all white.

What is the likelihood that the last one is not white?  If we know nothing else, then the answer is one in a thousand: the same as the likelihood of a single non-white ball occupying any specific position in a sequence of one thousand extractions.

What was the likelihood, after having drawn 99 balls, that the 100th would be non-white?  The same logic applies: it was one in one hundred.  As more white balls were drawn, the likelihood we assigned to drawing a non-white ball went down.
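That positional argument can be sketched with a quick simulation, a minimal sketch resting on one assumption not spelled out above: that a lone non-white ball, if there is one, is equally likely to sit in any position of the sequence.

```python
import random

def chance_nonwhite_last(n_balls: int, trials: int = 100_000) -> float:
    """Place one non-white ball uniformly at random among n_balls
    positions and count how often it lands in the last slot."""
    hits = 0
    for _ in range(trials):
        position = random.randrange(n_balls)  # where the odd ball sits
        if position == n_balls - 1:           # did it land last?
            hits += 1
    return hits / trials

print(chance_nonwhite_last(1000))  # close to 1/1000
print(chance_nonwhite_last(100))   # close to 1/100
```

The simulation is, of course, just a roundabout way of saying 1/n: one position out of n, with nothing to favour any of them.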

Now, compare this to the likelihood of drawing a non-white ball as assigned by someone who has one more piece of information, the knowledge that the balls were initially drawn from an enormous barrel in which there were 900,000 white balls and 100,000 black balls.

Unlike us, who had no additional information and could only work from the balls we had already drawn, this other person will know that the likelihood of drawing a black ball increases after each white ball is removed.  In fact the likelihood of the 1000th ball being non-white, after a sequence of 999 white balls, is very slightly higher than one in ten.  The likelihood of drawing 999 white balls in a row is extremely low, but that is immaterial, since we are only looking at the likelihood associated with the next draw once this extremely unlikely scenario has already played out.
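The informed person's figure can be computed exactly, assuming (as the enormous-barrel setup suggests) that the draws amount to sampling without replacement from the barrel: given 999 white balls already removed, the chance the next is black is the black count over the balls remaining.

```python
from fractions import Fraction

# Barrel composition known to the informed observer (from the post):
WHITE, BLACK = 900_000, 100_000

def p_next_black(whites_drawn: int) -> Fraction:
    """Chance the next ball is black, given whites_drawn white balls
    already removed without replacement from the barrel."""
    remaining = WHITE + BLACK - whites_drawn
    return Fraction(BLACK, remaining)

p = p_next_black(999)
print(p, float(p))  # 100000/999001, roughly 0.10010 -- just above one in ten
```

Each white ball removed shrinks the denominator while the black count stands, which is exactly why her assigned likelihood creeps up as ours goes down.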

We can fiddle with the figures to make it more explicit.  Say we only know about the 999,999 balls that we've drawn, over a couple of boring days, all of them white.  We have to say that the likelihood of the next ball being non-white is one in a million.

But if our more knowledgeable friend also knows that exactly one black ball went into the barrel, then she will have to say that the likelihood of the next ball being non-white is one in one: 100%.
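The contrast can be put in a few lines, a sketch using the million-ball figures above: the ignorant observer's indifference-based assignment set against the friend's assignment from known composition.

```python
from fractions import Fraction

drawn_white = 999_999   # balls drawn so far, all white
total = 1_000_000       # total balls; the next draw is the last

# Ignorant observer: a lone non-white ball, if any, is equally likely
# to occupy any of the million positions in the sequence.
p_ignorant = Fraction(1, total)

# Informed friend: exactly one of the million balls is black, and all
# 999,999 white ones are already out -- the last ball must be black.
black_remaining = 1
balls_remaining = total - drawn_white
p_informed = Fraction(black_remaining, balls_remaining)

print(p_ignorant)  # 1/1000000
print(p_informed)  # 1
```

Same draw, same evidence, wildly different numbers: the difference lies entirely in what each observer brings to the problem as background knowledge.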

---

My point here is that we have evidence, and we might also have assumptions.  The assumptions that we make about the distribution of balls in the urn will change our assessment of the likelihood of a non-white ball being drawn.  If we characterise our (potentially false) assumptions as "knowledge" - as theists are often wont to do - we will consistently misjudge the likelihood of our premises (and subsequent conclusions) being true.

Add to this the possibilities that we don't consider (i.e. thinking only of A being true or false, rather than factoring in other options like B and C), and we end up with very little likelihood of reaching reliable conclusions.