A
while back, I got the following feedback on the article Planting a Tiger (I
can’t help myself, so I’ve marked typos in red –
I just feel so proud when I notice any typos myself rather than having others
point them out to me):
Quote: neopolitan
(reddit/r/atheism)
A person, on the other hand, who commences with
a huge assumption and then brutally twists logic to form a disparate array of lesser assumptions into a bridge between
the state of things as they are and the state one should expect if the initial
huge assumption were true, is not a philosopher. He is what we refer to as an
"apologist".
But both Craig and Plantinga
would argue that they are seeking knowledge and understanding. They would argue
that they are following the evidence where it leads and not shying away from
reality. You can both be an apologist and a philosopher. It is one thing to say
that they are wrong (maybe they are) but it is
something completely else to say that they are deliberately twisting the truth
and don't deserve the title of 'philosopher'.
The article's well written.
Correct me if I'm wrong but the crux of your argument seems to be here:
Plantinga totally ignores that the scientific
method takes into account the suggestion that, individually, our cognition is
faulty but that this may be overcome if we don’t assume our beliefs to be true,
we gather evidence to support or refute our beliefs, we have the intellectual
courage to accept when evidence does refute our beliefs, we invite others to
critically assess our evidence and our interpretations of that evidence where
it seems to support our beliefs and we have the grace to accept that we were
wrong if the evidence or interpretation is shown to be problematic.
What Plantinga tries to show
is that the scientific method can't stand up to the same criteria by which it
judges everything else. You said at the beginning of your article that
scientists are 99.99999% sure before committing to the truth of a claim. Can scientists be
99.99999% sure that our cognitive abilities align with reality? If not, then
what does that say about using science as a criterion
for judging truth? If we are not 99.99999% sure
that our cognitive abilities are reliable then
doesn't science itself rest on an unfounded belief?
I
was initially responding to a claim (by another redditor) that I had erred in bracketing
the word “philosopher” with quotation marks when using that word in reference to
Alvin Plantinga. My contention, as my quote above indicates, is
that Plantinga is not so much a philosopher as an apologist (and possibly
theologian) despite his apparently impressive academic qualifications in
philosophy. (To address what might seem
like a valid reason to call Plantinga a philosopher – and even Craig, for that
matter – please see Draconic Philosophy.)
In Planting a Courted Controversy, I
related what happened after I had actively sought feedback from the WLC Fan
Club at the Reasonable Faith
website. In the conclusion to that
article, I agreed that I had not provided a comprehensive argument against the
Evolutionary Argument Against Naturalism (EAAN), which is what is being alluded
to above with “If we are not 99.99999% sure that
our cognitive abilities are reliable then
doesn't science itself rest on an unfounded belief?”
Here
then is my argument as to why the EAAN fails …
----------------------------------------
I’ll
explain what the EAAN is, as best I can (generally using Plantinga’s words) and put in comments where necessary:
Step 1 – Develop the probability expression
P(R|N&E)
If:
R is the claim that our cognitive faculties are reliable,
N is metaphysical naturalism, including the claim that there is no God, and
E is the claim that our cognitive faculties are the product of evolution,
Then:
P(R|N&E) is the question “What is the probability of R if both N and E are true?”
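(For anyone who wants the bare definition spelled out – this is standard probability theory, nothing specific to Plantinga – a conditional probability is just the joint probability divided by the probability of the condition:
P(R|N&E) = P(R & N & E) / P(N & E), provided P(N & E) is not zero.
Plantinga argues about the value of this probability informally rather than calculating it, so nothing below depends on more than the plain reading just given.)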
Step 2 – Argue that P(R|N&E) is low
Since there is no reason to suppose that evolution unguided by God will favour reliable cognitive faculties (see Planting a Tiger for why he thinks this is the case), P(R|N&E) is “relatively low, somewhat less than 1/2”.
Comment: This is an extraordinarily weak argument. There is plenty of evidence to support the idea that animals with poor cognitive faculties will die out; this is particularly so if they are delicious – roasted dodo, anyone?
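To make the point a little more concrete, here is a deliberately crude toy simulation in Python (entirely my own illustration – not Plantinga’s model and not anything formally argued in Planting a Tiger): agents whose beliefs about predators track reality get eaten less often, so over the generations the reliable believers come to dominate the population.

```python
import random

# Crude toy model (illustrative only): agents whose predator-beliefs track
# reality survive encounters more often than agents whose beliefs don't, so
# the reliable type comes to dominate a population of fixed size.

def simulate(generations=200, population=10_000, seed=1):
    random.seed(seed)
    reliable, unreliable = population // 2, population // 2   # start 50/50
    for _ in range(generations):
        if random.random() < 0.3:                 # a predator turns up this generation
            # reliable agents miss it 10% of the time, unreliable agents 50% of
            # the time; half of those who miss it get eaten
            reliable -= round(reliable * 0.1 * 0.5)
            unreliable -= round(unreliable * 0.5 * 0.5)
        # survivors breed back up to the carrying capacity, in proportion
        total = reliable + unreliable
        reliable = round(population * reliable / total)
        unreliable = population - reliable
    return reliable / population

print(f"share of reliable believers after selection: {simulate():.2f}")  # approaches 1.0
```

The particular numbers (a 30% chance of a predator each generation, 90% versus 50% detection rates) are invented; the qualitative outcome is the same whenever false beliefs about the environment carry a survival cost, which is all my objection to Step 2 needs.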
In reality, Plantinga’s argument falls down right here: he fails to provide a convincing argument as to why the beliefs of a human, if they were evolved, would be unlikely to be true. His argument is so laughable that originally I didn’t think it needed more effort, but let’s continue ...
Step 3 – Argue that P(R|N&E) is
inscrutable
Admit that “the argument for
a low estimate for P(R|N&E) is pretty weak” and then claim that perhaps it
is better to be agnostic with regard to this probability, on the grounds of it
being “inscrutable; we
just can’t tell what it is”.
Comment: This raises the question of why the argument was presented at all if it is considered, even by Plantinga, to be “pretty weak”. Cynically, one could suggest that
he does so in order that the “inscrutable argument” might appear less weak in
comparison. If that was the intent, it worked
… by a narrow margin.
Step 4 – Argue that if P(R|N&E) is
inscrutable, then P(R|N&E) is inscrutable
I’ll have to use Plantinga’s
words here or it will sound like I am misrepresenting his argument (taken from
his lecture notes):
Of course the argument for a
low estimate of P(R/N&E) is pretty weak. In particular, our estimates of the
various probabilities involved in estimating P(R/N&E) with respect to that
hypothetical population (comment - a hypothetical population of creatures
rather like ourselves on a planet similar to Earth … (that had) arisen by way
of the selection processes endorsed by contemporary evolutionary thought – as
indicated in an earlier paragraph – ed.) were pretty shaky. So perhaps the
right course here is simple agnosticism: that probability is inscrutable; we
just can’t tell what it is. This also seems sensible.
What would then be the
appropriate attitude towards R (specified to that hypothetical population)?
Someone who accepts N&E and also believes that the proper attitude towards
P(R/N&E) is one of agnosticism, clearly, has good reason for being agnostic
about R as well.
But now suppose we again
apply the same sort of reasoning to ourselves and our condition. Suppose we
think N&E is true: we ourselves have evolved according to the mechanisms
suggested by contemporary evolutionary theory, unguided and unorchestrated by
God or anyone else. Suppose we think, furthermore, that there is no way to determine
P(R/N&E) (specified to us). What would be the right attitude to take to R?
Well, if we have no further information, then wouldn’t the right attitude here,
just as with respect to that hypothetical population, be agnosticism,
withholding belief? If this probability is inscrutable, then we have a defeater
for R, just as in the case where that probability is low.
So P(R/N&E) is either low
or inscrutable; and if we accept N&E, then in either case we have a
defeater for R.
Now, trying to turn this
into English for the rest of us ... the value of P(R|N&E) with respect to a
population of evolved creatures is unknown and, according to Plantinga,
unknowable (i.e. inscrutable). If we,
as evolved creatures, accept that both N and E are true, but are agnostic about
P(R|N&E), as Plantinga argues we must be, then we should rightly be agnostic
about R (by which Plantinga seems to mean P(R) since he refers to “this
probability”).
In short, if P(R|N&E) is
inscrutable and N&E is true, then P(R) is inscrutable.
Comment: My sarcastic wording of this step
reflects the fact that, if N&E is true, P(R) = P(R|N&E), irrespective
of what the value of that probability is – inscrutable or low or, indeed, high.
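To spell out why that identity holds (standard probability again, not anything peculiar to the EAAN), the law of total probability gives
P(R) = P(R|N&E) × P(N&E) + P(R|not-(N&E)) × P(not-(N&E)),
and if N&E is simply taken to be true, so that P(N&E) = 1, the second term vanishes and P(R) = P(R|N&E), whatever that value happens to be.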
Step 5 – Argue that if P(R|N&E) is
inscrutable (or low) then R has a defeater
Actually, I can’t really see how he justifies this step, but I can see why he does it.
A “defeater” is a
proposition which, if in the possession of a person who otherwise would have a
justified true belief constituting knowledge about something, would negate the
justification or the belief (but not the truth). It’s a term from epistemology, the study of knowledge, and it relates to a particularly esoteric discussion about what we can “know”.
In the tripartite theory of knowledge, in order to truly know something rather than just believe something to be true:
· the something in question must be true;
· one must maintain a belief that that something is true; and
· that belief must be justified.
If you are deluded in your belief
about something then you cannot know it (for example, correctly
believing that the sun is not the centre of the universe would not count as knowing
if your reason for believing is that you also believe that the Earth is the
centre of the universe).
Perhaps you might be in possession of a fact which you do not accept or do not understand – for example, the fact that the Bertrand Paradox has a unique resolution such that p = 0.5 – but you can’t really say that you know this if you don’t believe it. (If you believe
this fact to be true, but you don’t understand it, you might still know
it if, for example, you had been informed by an expert in the field. So long as it is true, of course.)
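For the curious, here is a minimal Monte Carlo sketch in Python – my own illustration, and deliberately silent on whether the resolution really is unique – of the chord-selection method usually associated with the p = 0.5 answer: pick a chord by choosing its midpoint uniformly along a radius and ask how often the chord is longer than the side of the equilateral triangle inscribed in the circle.

```python
import math
import random

# Bertrand "paradox", random-radial-point method: choose the chord's midpoint
# uniformly along a radius of a unit circle and estimate the probability that
# the chord is longer than the side of the inscribed equilateral triangle.

def estimate(trials=200_000, seed=42):
    random.seed(seed)
    side = math.sqrt(3)                  # side of the inscribed equilateral triangle
    longer = 0
    for _ in range(trials):
        d = random.random()              # distance of the chord's midpoint from the centre
        chord = 2 * math.sqrt(1 - d * d)
        if chord > side:
            longer += 1
    return longer / trials

print(f"estimated p = {estimate():.3f}")   # comes out close to 0.5
```

Other ways of picking a “random chord” give different answers – that’s the paradox – so whether 0.5 really is the resolution is exactly the sort of thing one might believe, or fail to believe, without that affecting whether it is true.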
Finally, you can’t know
something that is not true.
I suspect that there’s an
element of equivocation and conflation involved here. Plantinga talks about R as if it means:
· the claim that our cognitive faculties are reliable, AND
· the probability that our cognitive faculties are reliable, AND
· the knowledge that our cognitive faculties are reliable, AND
· the fact that our cognitive faculties are reliable.
This is cheating on a major
scale.
Step 6 – The Grand Finale
If you have a defeater for
R, then you have a defeater for any belief you hold, including the belief that
N&E is true. Therefore it is
irrational to hold the belief that N&E is true.
That’s
it. Well, he does go into a little
pre-emptive defence against defeater-defeaters, but that’s largely irrelevant
since you’d have to agree with Plantinga’s arrival at Step 6 and still want to
hold onto N&E. That would mean that
you’d have to agree with Step 5 (and maybe Step 2) and you wouldn’t do that
unless you were already presupposing the whole outcome with respect to N&E.
Let
us look at Step 5 a bit more closely.
Plantinga
is arguing that if we don’t know how probable it is that our cognitive
faculties are reliable then it is probable that our cognitive faculties are
unreliable. Presumably, therefore, the
only way that our cognitive faculties could be reliable would be for us to know
that it is probable that our cognitive faculties are reliable. But we’d only know that if they were
reliable, otherwise we couldn’t know anything.
What
Plantinga seems to be supposing here is some sort of bizarre bootstrapping – if
you know that your cognitive faculties are reliable, then they are, and you know
they are because those faculties are reliable, making your belief both true and
justified. However, if you are in doubt about the reliability of your cognitive faculties, they are unreliable by default: you know nothing and everything you believe is suspect.
This
is, of course, nonsense.
All (Plantinga) manages to
do … is show that if naturalism and evolution are true, then we cannot know
with 100% certainty that naturalism and evolution are true. In other words, if naturalism and evolution
are true, the scientific fact of naturalism and evolution is precisely the same
as any other scientific fact – (because) we don’t know anything to be true with
100% certainty.
(There’s an extremely
remote, non-zero chance that The Matrix was actually an ironic mockumentary
screened by our mechanical overlords to taunt us and we are plugged into vats
precisely as shown in what is portrayed to us as “fiction”. This extremely unlikely possibility chips away at any otherwise perfect certainty we might have had about our world. Scientists will think you are some sort of batty philosopher if you raise this argument, but will eventually concede that they cannot know that we aren’t in vats.)
What
rational people will do is assume that the world is as it appears to be until illusions can be shown to exist.
As we detect illusions, we try to understand them so that we can account
for them in our theories. Rational
people won’t deny that our cognitive faculties are limited, and prone to
failure. Nor will they assume that, because those faculties are limited and prone to failure, they are completely useless.
And
they won’t hang onto illusions just because they are comforting.
Now
that I’ve explained, as best I can, the EAAN, I’d like to introduce the EAAT,
which is the EAAN Argument Against Theism, or to give it its full glory …
“Evolutionary
Argument Against Naturalism” Argument Against Theism Version 1
Step
1 – Do the whole EAAN thing
Step
2 – Point out that if N&E is true, then it is irrational to believe
anything Plantinga has to say, given that he is just an evolved form of tiger
food
“Evolutionary
Argument Against Naturalism” Argument Against Theism Version 2
Step
1 – Do the whole EAAN thing
Step
2 – Consider the following:
· If N&E is true then, irrespective of whether or not you believe that N&E is true, it is quite likely that you will hold beliefs that are incorrect
· If you continually challenge all of your beliefs, then those beliefs which consistently survive challenge will be more likely to be true
· If a belief which fails a challenge is modified and then, as a consequence of being modified, consistently survives challenge, then that modified belief is more likely to be true
· If N&E is continually challenged and consistently survives, then it is likely to be true
· If N&E is challenged and fails, but is then modified and the modified variant, N&E*, consistently survives challenge, then N&E* is likely to be true
· If another belief, say theism (T), is continually challenged and consistently fails, then it is unlikely to be true
· If a belief is unchallenged and the holders of that belief are not willing to accept challenge, then the probability of that belief being true is inscrutable (you might want to follow Plantinga’s own argument here to reach the conclusion that holding such a belief is irrational)
From
all this, we can conclude that:
· P(R|N&E&S) is high when S is high (where S is the extent to which naturalism and evolution consistently survive challenges), and
· P(R|S) is high when S is high (because the reliability of our cognitive faculties is also continually challenged)
The fact that N&E has been modified throughout the years, particularly the E component, is an indication that N&E is more likely to be true. As a consequence, it is entirely rational to believe both that R is true and that N&E is true.
On the other hand:
· P(T|F) is low when F is high (where F is the extent to which theism fails when challenged), and
· P(T|A) is low when A is high (where A is the extent to which theism avoids being challenged – see also being an infidel and apostasy).
Unlike with many other sorts of beliefs, there is great resistance to variations to T; therefore, there is no T* as such (where T* is an improved version of theism) and T has no avenue by which to evolve towards a truth statement.
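As a rough illustration of why surviving challenges should raise confidence and failing them should lower it, here is a toy Bayesian-updating sketch in Python (my own gloss – the pass rates are invented and this is not meant to be a formal version of S, F or A above):

```python
# Toy Bayesian updating (illustrative only): each "challenge" is a test that a
# true hypothesis passes 90% of the time and a false one passes 30% of the time.
# Repeatedly surviving challenges drives the posterior up; repeatedly failing
# them drives it down.

def update(prior, passed, p_pass_if_true=0.9, p_pass_if_false=0.3):
    """One Bayes update of P(hypothesis) after a single challenge."""
    like_true = p_pass_if_true if passed else (1 - p_pass_if_true)
    like_false = p_pass_if_false if passed else (1 - p_pass_if_false)
    numerator = like_true * prior
    return numerator / (numerator + like_false * (1 - prior))

surviving = failing = 0.5                        # start agnostic about both hypotheses
for _ in range(10):
    surviving = update(surviving, passed=True)   # a belief that keeps surviving challenge
    failing = update(failing, passed=False)      # a belief that keeps failing challenge

print(f"after 10 survived challenges: {surviving:.5f}")   # close to 1
print(f"after 10 failed challenges:   {failing:.9f}")     # close to 0
```

The point is purely qualitative: a belief that keeps passing independent tests ends up far more credible than one that keeps failing them, or one that is never allowed to be tested at all.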
In other words, yes, I agree with Plantinga: many beliefs held by evolved creatures are likely to be wrong, especially if they are untested. Even some of the smartest humans that ever
lived used to believe such things as:
· everything is a combination of up to four elements
· phlogiston is a substance that escapes matter when it burns
· there is an aether that conducts light
· those enormous bones unearthed by mining and erosion are the bones of dragons
· narwhal horns belonged to unicorns
These
people were wrong.
Who knows, maybe string theory is a load of old cobblers. But the point is, even if this is the case, scientists
are actively encouraged to investigate string theory, to uncover how the theory
is wrong and which associated beliefs are wrong. Scientists working in the field will then,
collectively, attempt to come up with something that is slightly better, but almost
certainly also wrong in its own special way – and the process will continue.
This
sounds like a terrible system, and perhaps it is. A far worse system, however, is to cling
unquestioningly to a demonstrably false belief which requires you to brutally corrupt logic in
order to divert attention from the wealth of evidence against that belief.