Who would have guessed it?
William Lane Craig has, despite his appearance of overweening arrogance,
admitted to being wrong. You might have to read the article carefully
because otherwise you might not see it, but he has come to the conclusion that
his use of statistics has been inadequate.
In many of his arguments, Craig works from the principle
that while we might not know with total certainty that each of his premises is
true, they are individually "more likely than not" and that, as a
consequence, the conclusions reached are also "more likely than not"
(this appears to me to be borrowed from Plantinga, but maybe it's an idea that
infests apologetics as a whole). [I
address related issues in Planting a Demigod and Planting a Tiger.]
In his own words:
… that raises the further
question of what qualifies as a “good” deductive argument. I take it that a
good argument is one whose conclusion is shown to be more plausible than not.
So under what conditions is an argument good? As you note, I have long said that
in order for a valid deductive argument to be a good one, it suffices that each
individual premise of the argument be more probable (or plausible) than its
contradictory.
What Craig has written here should be parsed carefully. Note that he uses the phrase "more
plausible than not" and then implies that "probable" and
"plausible" are interchangeable.
The problem is that these terms are not interchangeable, especially not
in the context in which they are used, and Craig appears to equivocate between
"plausible" meaning "probable" and "plausible" meaning "able to be
believed" (that is, not totally impossible).
Then there is a question regarding precisely what Craig means
by "probable" when he uses that term (or "plausible" in its
stead). I would suggest that when he
talks about a premise having a probability of X, he means that there is a
probability X that the premise is true.
Say we are tossing a fair coin.
The probability that we will get a head in any individual toss is Pr=0.5
- and the probability that it is true that we have tossed a head (assuming that
I can't see the result) is Pr=0.5. The
premise "we have tossed a head" doesn't really have a probability,
but the statement "it is true that we have tossed a head" does.
(We get around this in logic by assuming that any statement
is also a statement to the effect that the statement itself is true.)
I think that it might be useful to introduce a new term:
"assuredness". Assuredness is
the probability that what you believe to be true is actually true. This might seem to be a nugatory term, but I
hope to demonstrate that it is actually useful when considering probabilistic
syllogisms.
Say we have a scenario in which Ted, our research assistant,
draws a ball from an urn. There are two
balls in the urn, a black ball and a white ball. If he draws the black ball, he picks up a
fair coin and tosses it fairly. If he
draws the white ball, he does something completely different, so long as it doesn't
result in a coin being tossed (he can toss a die, or slap his own face or sing
the national anthem in his underpants, we don't know and to some extent we
don't care as long as he doesn't damage our laboratory). From this we can draw the following
probabilistic syllogism:
(It is true that) if Ted tosses a coin, a head will result (Pr=0.5)
(It is true that) Ted tosses a coin (Pr=0.5)
Therefore, (it is true that) a head will result (Pr=0.25)
Note that I've constructed this scenario so that the fairness
of the coin (making it 50-50 that a head will result from a fair toss) is
totally independent of the process of selecting a ball from an urn (which makes
it 50-50 that Ted will toss the coin).
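To make the arithmetic concrete, here is a minimal simulation sketch of the protocol (the function name and the use of Python are mine, purely for illustration):

```python
import random

def trial():
    """One run of the protocol: draw a ball, and toss a fair coin only on black."""
    ball = random.choice(["black", "white"])    # Pr = 0.5 each
    if ball == "black":
        return random.choice(["head", "tail"])  # fair coin, Pr(head) = 0.5
    return "no coin tossed"                     # white ball: no coin is tossed at all

N = 1_000_000
results = [trial() for _ in range(N)]
heads = results.count("head")
print("Pr(head)     =", heads / N)        # ~0.25, matching the syllogism
print("Pr(not head) =", 1 - heads / N)    # ~0.75, the remainder discussed below
```

Run it and a head turns up in roughly a quarter of the trials, which is the Pr=0.25 of the syllogism's conclusion.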
A question that can be raised about the syllogism above is …
what does the remaining 0.75 represent? Keep
in mind that we are primarily interested in whether a head results. A table might help (note it has to be
reordered so as to be chronological):
| | Coin tossed (Pr=0.5) | Coin tossed (Pr=0.5) | No coin tossed (Pr=0.5) |
|---|---|---|---|
| Conditional outcome | Head (Pr=0.5) | Not head (Pr=0.5) | Not head (Pr=1.0) |
| Overall (joint) probability | Head (Pr=0.25) | Not head (Pr=0.25) | Not head (Pr=0.5) |
Quite obviously then, Pr=0.75 is the probability that a head
does not result.
This Pr=0.75 figure results because the experimental protocol
prevented Ted from getting a head unless he drew a black ball. But we can change this.
Let's say instead that if Ted draws a white ball, he takes up
a biased coin, which he then tosses as if it were a fair coin. Say further that we don't know what the bias
on the coin is. The results change
significantly:
| | Coin tossed (Pr=0.5) | Coin tossed (Pr=0.5) | Random biased coin tossed (Pr=0.5) |
|---|---|---|---|
| Conditional outcome | Head (Pr=0.5) | Not head (Pr=0.5) | Probability of result unknown (Pr=1.0) |
| Overall (joint) probability | Head (Pr=0.25) | Not head (Pr=0.25) | Probability of result unknown (Pr=0.5) |
This is where "assuredness" comes into its
own. The probability of a head resulting
is no longer Pr=0.25, but rather Pr=0.25+0.5xPr(Head|Biased Coin) - and we
don't know what Pr(Head|Biased Coin) is.
This means we only know the interval in which the probability of a head
lies: [0.25,0.75]. There is, therefore,
a lower bound on the probability of a head, namely 0.25 and it is this that I
want to associate with "assuredness".
We can be 25% assured that the coin tossed will show a
head. We can be 25% assured that the coin
tossed will not show a head. The remaining
50% is an ignorance interval - we simply don't know what the associated
probability is.
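As a quick check of those bounds, here is a sketch (the function name is mine) that simply evaluates the formula above at the two extreme values the unknown bias could take:

```python
def pr_head(p_biased_head):
    """Overall Pr(head): black ball then fair coin, or white ball then biased coin."""
    return 0.5 * 0.5 + 0.5 * p_biased_head

lower = pr_head(0.0)   # the bias makes a head impossible -> 0.25
upper = pr_head(1.0)   # the bias makes a head certain    -> 0.75
print(f"Pr(head) lies somewhere in [{lower}, {upper}]")   # [0.25, 0.75]
print(f"Assuredness of a head: {lower}")                  # the lower bound, 0.25
```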
For Craig's purpose, I suggest that this lower bound, this assuredness,
is all that he can use to support his argument, because it is not reasonable to
base his argument (even if only in part) on ignorance. If he can only state that his premises are
more probable (or plausible) than not, then he is saying little more than that each premise
has Pr(true)=0.51. This means in turn
that his assuredness in a simple two-premise syllogism is 0.51 × 0.51 = 26.01%.
It is certainly true that, if he were able to raise the "plausibility"
of his premises such that Pr(true)>0.71414, then his assuredness would rise
to 51%. But he would have to provide good
argumentation for that increased "plausibility". I do note that he makes the claim that the
probability of some of his premises approaches unity. I note the claim, but I note also that it's a
bald claim with little to back it up other than Craig's confidence that he is
right.
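Assuming, as in the coin-and-urn example, that the premises are independent, the arithmetic behind those two figures is simply this (a throwaway sketch, not anything Craig himself has offered):

```python
import math

# Two independent premises, each barely "more plausible than not".
p = 0.51
print(f"Assuredness of the conclusion: {p * p:.4f}")      # 0.2601 -> 26.01%

# What each premise would need for the conclusion itself to be more plausible than not.
threshold = math.sqrt(0.51)
print(f"Required Pr(true) per premise: {threshold:.5f}")  # ~0.71414
```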
---
Anyway, as mentioned, Craig has now accepted the error of
his ways. Or rather, he probably
hasn't. Now he just claims that the
conjunction of his premises has a probability of at least 51%. (It's unclear how much ignorance is supposed
to contribute to this figure.)
There's a problem with this approach, though. Isn't the most important - indeed the only
relevant - conjunction of premises in a syllogism a little something that we like
to call the conclusion? This is the happy
conjunction in which both (or all) of the premises are true. (There might be lesser conjunctions, but
those would likely be subsets of the conclusion anyway.)
It seems to me that the first thing Craig has done
after doing his about-face is to fall on it.