Monday, 10 September 2012

Planting a Tiger

I previously discussed the different levels of confidence used by scientists and theists when declaring something to be “true”.  Scientists like to demand something around 99.9999% certainty, while theists (like Craig and Plantinga) are content with anything above 50%.

Plantinga uses this sort of certainty to argue (as he did in a presentation given at Biola University) that evolution provides evidence of creation.  Anyone reading the full set of notes for that presentation will be amused by the contents of the last paragraph on page four – unless of course they are unaware that tigers are perfectly capable swimmers, making the handy lake somewhat less handy.  (Tigers are also very quick; they are good jumpers and can climb, so good luck trying to escape one by climbing up a cliff!)

Interestingly, Plantinga seems to start off using the formulation P(R|N&E) correctly (unlike Craig, who misuses it right from the start, see Sweet Probability).  However, Plantinga has gone well off the rails by page six, in “3. The Argument”.

When introducing P(R|N&E) (on page three), Plantinga describes the formulation correctly, stating that it represents “the probability of R, given N&E”.  Note the words given N&E (being naturalism and evolution, where R is the reliability of our cognitive faculties).  The probability being discussed here is based on a precondition that N&E are true.  See the table of options:

    N&E true  – 50% – assumed by the conditional, so under consideration
    N&E false – 50% – excluded by the conditional, so not under consideration

Using Plantinga’s apparent assumption that the probability of anything is about half, we might say the likelihood of naturalism and evolution being true is 50% and the likelihood of them being false is 50%.  But the formulation that Plantinga is discussing here presumes that N&E is true, so the bottom row isn’t under consideration.  Given Plantinga’s standard handling of probability, one would think that if naturalism and evolution are true, then the probability that our cognitive faculties are reliable is 50% and the probability of them not being reliable is 50%.  Apparently not.

Let’s look at the meaning of the terms.

Plantinga defines “naturalism” as “the theistic picture minus god”.  Um, what?  No, that’s not how atheists and those who subscribe to what is better known as “materialistic naturalism” see the world.  It’s not a case of “everything theists think but with god carefully excised”.  (Admittedly this argument fails if Plantinga has in his head the idea that naturalism means the habit of walking around without clothes on but the likelihood of that being the case is no higher than 50%.)

Wikipedia is a bit more forthcoming: “the viewpoint that laws of nature (as opposed to supernatural ones) operate in the universe, and that nothing exists beyond the natural universe or, if it does, it does not affect the natural universe”.

Plantinga does not define evolution, but we can (albeit somewhat foolishly) give him the benefit of the doubt and assume that he has a proper understanding of the term.  Per Wikipedia: “the change in the inherited characteristics of biological populations over successive generations. Evolutionary processes give rise to diversity at every level of biological organisation, including species, individual organisms and molecules such as DNA and proteins”.

Plantinga begins to introduce confusion when defining “cognitive faculties”.  This term as defined by Plantinga means: “the powers or faculties of capacities whereby we have knowledge or form belief: memory, perception, reason, maybe others” (my emphasis).

Note carefully that Plantinga has made “have knowledge or form belief” primary in the definition of a term which is not standard (cognitive function is more common).  The importance of this might not be immediately obvious.

Time and time again with William Lane Craig we found that he conflated the disparate meanings of terms, and here Plantinga does the same thing, specifically with the term “belief”.

Belief is the psychological state in which an individual holds a proposition or premise to be true.  From a philosophical or scientific viewpoint, “belief” can be used in relation to things about which we could be said to have “knowledge” (or justified true belief), so long as:

1.    we maintain a specific belief about a thing

o   we can’t know something if we don’t believe it

2.    we have good reason to maintain a specific belief about a thing

o   we don’t know something if we don’t have adequate justification for believing it

3.    our specific belief about a thing is consistent with the facts associated with that thing

o   we can only be said to know something if it is true
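The three conditions above can be sketched as a toy predicate.  This is purely illustrative; the `Claim` type and its field names are my own invention, not anything from Plantinga’s notes.

```python
# A toy encoding of knowledge as justified true belief (JTB),
# per the three numbered conditions above.  The Claim type and
# its field names are illustrative inventions.
from dataclasses import dataclass

@dataclass
class Claim:
    believed: bool    # 1. we maintain the belief
    justified: bool   # 2. we have good reason to maintain it
    true: bool        # 3. the belief is consistent with the facts

def knows(claim: Claim) -> bool:
    """All three JTB conditions must hold for knowledge."""
    return claim.believed and claim.justified and claim.true

# A belief held without justification is not knowledge,
# even if it happens to be true.
print(knows(Claim(believed=True, justified=False, true=True)))   # False
print(knows(Claim(believed=True, justified=True, true=True)))    # True
```

The point the sketch makes is the one in the list: drop any one of the three conditions and what remains is mere belief, not knowledge.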

We generally work with concepts at the belief level because while we are generally quite proficient at holding things to be true, particularly if we want them to be true, we are less good at analysing our reasons for holding things to be true and getting confirmation as to whether they actually are true.  To work around these shortcomings, one needs to use a process involving scepticism (not believing something unreservedly but rather requiring some sort of supporting evidence, preferably involving an element of unbiased systematic observation, empirical measurement, falsifiability and reproducibility) – in other words, the scientific method.

There is no inconsistency if a materialistic naturalist were to use belief in this sense, given that it would be consistent with a sort of tentative truth statement.  To a materialistic naturalist, “I believe that X is true” would merely be shorthand for “I am reasonably confident that X is true, under the circumstances, but I am not sufficiently certain to say ‘I know that X is true’”.

The “under the circumstances” is important.  Say I was in the Army and it was my job to go out onto the weapons shooting range to collect targets after the exercise was over.  I would want to know that the exercise was over.  If, on the other hand, I was the person who collects golf balls at the driving range, then I could accept a lower level of certainty.  I could go out so long as I believed that the driving range was not in operation, and not require the level of certainty that would constitute knowing that the driving range was not in use, because while I’d not like to be hit by a golf ball due to the pain involved, I reckon the likelihood of being killed by a golf ball to be minimal.

(“Reckon” is the term I use to avoid using “believe” and “know”.  I use it with an implication of computation and thought, so that “I reckon X is true” means “Having thought about it, I have come to the conclusion that there is sufficient justification for me to hold X to be true”.)

Let’s label this rational stance of holding, or reckoning, something to be true without going so far as to say it is known to be true as belief type one, or “belief1”.

The type of belief that Plantinga conflates with belief1 is the sort that is associated with faith.  This sort of belief, which we can label as belief2, can often be expressed with a “believe in” statement and is thus inconsistent with belief1.

While it is indeed possible to say “I believe2 in Mark’s story”, this is stating a rather different position to that expressed by “I believe1 Mark’s story”.  It should be noted that a Christian theist traditionally believes2 in a god who is not only the source of all truth but is, in a sense, Truth itself.  It is therefore incoherent for a Christian theist to espouse belief2 in a god who, in their mind, might in fact not be true as is implied in a scientific understanding of belief1.

Note also that it is quite logical that some beliefs2 should inform some beliefs1 and that some beliefs1 should provide support to some beliefs2.  If Mark’s story is the Gospel of Mark, then a believer2 in God will be likely to consider that story credible and will therefore be more likely to believe1 it.  Equally, someone who reads Mark and reckons that it is true is likely to consider that as support for the belief2 in God.

It seems to be the scientific, materialistic “holding something to be true” belief1 that is central to Plantinga’s four possible categories of mind-body interaction detailed on page four:

·         beliefs1 do not impact on behaviour (epiphenomenalism)

·         beliefs1 cause behaviour “but only by virtue of their electro-chemical properties, not by virtue of their content” (Semantic epiphenomenalism)

·         beliefs1 cause behaviour “by way of content but (are) maladaptive”, and

·         beliefs1 cause behaviour and are adaptive

Having detailed these four “jointly exhaustive categories”, Plantinga then goes on to discuss, in amusing detail, the sort of beliefs1/2 that a prehistoric hominid called Paul might have about tigers.  (The term belief1/2 indicates a possible conflation of the two forms of belief.)

Note here that prehistoric hominids could arguably include humans: there were preliterate humans who were not able to write down history, there may have been early humans whose language was not sufficiently rich to convey history, and there were certainly early humans who had no real history to convey.  However, if Plantinga meant to imply that Paul is a pre-human hominid, then one might wonder why he is worried about tigers at all, tigers never having lived in Africa.  To be as fair as possible, let’s assume that the “tiger” is really a large cat-like thing with a taste for prehistoric hominid flesh.

Plantinga suggests that there are any number of combinations of belief1/2 and desire (goals, perhaps) that are consistent with an adaptive “tiger”-avoiding strategy.

For no adequately explained reason, Plantinga then claims that (given all the possible combinations) the probability that Paul acts the “right” way for the “right” reason is actually low.

This is worth focussing on for a moment.  These are the specific possibilities that Plantinga raised:

1.    Paul wants to be eaten, but wants so much to be eaten that he runs away from tigers to find a more likely diner (so Paul does not believe1 that the chances of being eaten by any given tiger are sufficiently high to satisfy his goal of being eaten)

2.    Paul wants to pet what he believes2 to be a large friendly pussycat, but believes2 the best way to arrange a good petting is to run away from the tiger (this does not qualify as belief1 because of an absence of justification: there is sufficient evidence available to bring both of Paul’s conclusions into question)

3.    Paul wants to be eaten, but is confused by the concepts “towards” and “away from” (so Paul believes2 that his running away from the tiger will increase his chances of being eaten by that tiger – again this does not qualify as belief1 because of absence of justification)

4.    Paul is worried about his weight, and has resolved to run whenever he sees a tiger shaped illusion (so Paul believes2 that the tiger is just something convenient for motivating his exercise regimen – again this does not qualify as belief1 because of absence of justification)

5.    Paul is involved in a prehistoric athletics event, is highly competitive and believes2 that the appearance of a tiger is the starting signal for some sort of jolly race (yet again this does not qualify as belief1 because of absence of justification)

I am not kidding, read the top of page five if you don’t believe me!

I’d like to add:

6.    Paul has observed that when his fellow prehistoric hominids are eaten by tigers, they don’t seem to relish the experience, so Paul believes1 that being eaten by a tiger is a suboptimal outcome and does what he can to avoid ending up as tiger poo.

7.    Paul has an inbuilt instinctive reaction to fear large things with sharp teeth.  When his brain registers images of a large thing with sharp teeth collated from visual input, signals are sent to his endocrine system, stimulating his adrenal glands, which puts Paul into a state of excitement that he interprets as fear; the attention system in his brain will be shouting “big thing with fangs!” at his cerebral cortex, and the combination of these will result in a quick decision to maximise the distance between Paul and the large thing with sharp teeth.  Quite probably, Paul’s brain will make the snap decision to run either back the way he came (since the path will be known to him), or directly away from the tiger (slightly more risky if the location is not well known to Paul).  That decision will then later be incorporated into Paul’s history as a rational decision to run away in a carefully selected direction (beliefs1 about tigers being added post facto), or Paul will have a brief moment to rue the fact that he apparently either didn’t run fast enough or chose the wrong way to run.  If he was like Mr Plantinga, he might have a slightly longer moment to think, as he’s splashing around in some Asian river: “Oh my god, tigers can SWIM!!!! Why didn’t someone tell me that? I never should have left Africa.”

Now what Plantinga is telling us is that, because he thinks he can construct a thousand (approximately, or as a minimum, I don’t know) different belief1/2-desire scenarios in which Paul might avoid being eaten, of which only those bearing some similarity to #6 involve a true belief1/2 (I don’t think he would factor in a version of #7), evolution, expressed as a species becoming incrementally better at not being eaten by tigers, will not increase a species’ likelihood of holding true beliefs1/2 about tigers.


Even more amusingly, Plantinga then says “of course the argument for a low estimate of P(R|N&E) is pretty weak” (“pretty” here might mean “extraordinarily”).  Then, a couple of lines below, he claims that we must therefore be agnostic about the value of P(R|N&E).

No, Mr Plantinga, we don’t.  Certainly not based on your argumentation, and especially not given that you have clearly entertained false beliefs about tigers and so, from an evolutionary standpoint, you’re probably not qualified to talk about evolution.  (This was a poor attempt at a joke; don’t take it too seriously.  Some of the people who might have been eaten by tigers, had they been living in a somewhat more tiger-rich environment, are qualified to talk about evolution.  Just not Plantinga, or anyone else tempted to believe without evidence that large cats are basically house cats on steroids.)

However, based on Plantinga’s argument, we should be sceptical about anything that Plantinga has to say that involves probability including his conclusion that P(R|N&E) is low.

Towards the bottom of page six, Plantinga suddenly changes the definition of a term in his argument.  B, introduced at the bottom of page four to denote a tiger-avoiding behaviour (selected as a consequence of the belief-desire nexus), is repurposed to denote a belief1/2.  Perhaps Plantinga didn’t do this to sow confusion.  Perhaps not deliberately.

Fortunately, the use of this term isn’t really important in terms of the overall argument.  What is important is that Plantinga introduces a new definition of “belief”:

But now suppose we return to the person convinced of N&E who is agnostic about P(R/N&E): something similar goes for him. He is in the same position with respect to any belief B of his, as is the above believer in God. He is in the same condition as the person who comes to think she has been created by that Cartesian evil demon. So he too has a defeater for B, and a good reason for being agnostic with respect to it.


Now for the argument that it is irrational to believe N&E: P(R/N&E) is either low or inscrutable; in either case (if you accept N&E) you have a defeater for R, and therefore for any other belief B you might hold; but B might be N&E itself; so one who accepts N&E has a defeater for N&E, a reason to doubt or be agnostic with respect to it. If he has no independent evidence, N&E is self-defeating and hence irrational.

I’m going to try to translate Plantinga’s notes into English; feel free to comment if you think my translation is unfair or inaccurate:

But now suppose that we return to the person who believes or holds that naturalism and evolution is true (and) who believes or holds that the probability of our cognitive faculties being reliable is unknowable: something similar goes for him.  He is in the same position with respect to any belief he holds, as is the believer in God (who comes to believe that her belief in God is wish fulfilment). He is in the same condition as the person who comes to think she has been created by that Cartesian evil demon. So he too has something that indicates that his beliefs may be unreliable, and a good reason for being agnostic with respect to the subject of any belief he holds.


Now for the argument that it is irrational to believe or hold that naturalism and evolution is true.

The probability of our cognitive faculties being reliable is either low or unknowable; in either case (if you accept that naturalism and evolution is true) you have something that indicates that your belief that your cognitive faculties are reliable is itself unreliable, as is any other belief you might hold.

However a specific belief might be the belief that naturalism and evolution is true itself; so one who accepts that naturalism and evolution is true has something that indicates that the belief that naturalism and evolution is true is unreliable, a reason to doubt or be agnostic with respect to it. If he has no independent evidence, the belief that naturalism and evolution is true is self-defeating and hence irrational.

Where is the additional “belief” introduced?  It’s when Plantinga misuses conditional probability, as he does in his ontological argument (as later stolen by WLC).

P(R|N&E) means “what is the likelihood of R, given that N&E is already satisfied” (as addressed in Sweet Probability).  Plantinga then refers to N&E as a “belief”, via the already confused term B, even though the probability of N&E in the term P(R|N&E) is 100% because it is assumed to be true.  So now we have:

·         belief1 – the conditionally holding true of a thing due to sufficient supporting evidence (a belief1 that will be abandoned if evidence shows the thing to be false)

·         belief2 – the unconditionally holding true of a thing by faith (a belief2 that is maintained in the absence of supporting evidence and possibly even in the presence of negating evidence) and

·         belief3 – the unconditionally holding true of a statement in a conditional logic structure (a belief3 that can be turned on and off at will).
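The conditional-probability point can be made concrete with a toy computation.  The numbers in the joint distribution below are entirely made up for illustration; the only point is that once N&E sits to the right of the conditioning bar, its probability within the expression is fixed at 1, so it cannot simultaneously serve as a doubtable belief.

```python
# Toy joint distribution over NE (naturalism-and-evolution being true)
# and R (our cognitive faculties being reliable).  The numbers are
# invented purely for illustration.
joint = {
    (True,  True):  0.30,   # NE true,  R true
    (True,  False): 0.20,   # NE true,  R false
    (False, True):  0.35,   # NE false, R true
    (False, False): 0.15,   # NE false, R false
}

def p(event):
    """Marginal probability that event(ne, r) holds."""
    return sum(pr for (ne, r), pr in joint.items() if event(ne, r))

def p_given(event, condition):
    """Conditional probability P(event | condition)."""
    return p(lambda ne, r: event(ne, r) and condition(ne, r)) / p(condition)

# P(R | N&E): only the rows where NE is true are in play.
print(p_given(lambda ne, r: r, lambda ne, r: ne))    # 0.30 / 0.50 = 0.6

# Within that conditional, N&E has no probability of its own left
# to doubt: P(N&E | N&E) is trivially 1.
print(p_given(lambda ne, r: ne, lambda ne, r: ne))   # 1.0
```

Whatever values one plugs into the joint table, the second result is always 1: inside P(R|N&E), N&E is a fixed premise (a belief3), not a belief1 whose truth is up for grabs.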

Plantinga’s argument can be reworded thus (using the polysyllogism form introduced in The Logic of an Apologist):

1.    Premise 1 – If naturalism and evolution are true (A) then the likelihood of our cognitive faculties being reliable is low or unknowable (B)

2.    Premise 2 – If the likelihood of our cognitive faculties being reliable is low or unknowable (B) then the probability of the subject of any belief1 being true is unknowable so long as there is no independent evidence (C)

3.    Assertion 1 – Naturalism and evolution are true (A)

4.    Conclusion 1 – The probability of the subject of any belief1 being true is unknowable so long as there is no independent evidence (C)

5.    Assertion 2 – Assertion 1 falls into the category “any belief1” (D)

6.    Conclusion 2 – The probability that naturalism and evolution are true is unknowable so long as there is no independent evidence (E, from D and C)
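The chain above can be sketched as simple forward chaining over the labelled propositions.  This encoding is my own toy, not anything in Plantinga’s notes; what it shows is that Conclusion 1 (C) follows mechanically from Assertion 1 (A) via the two premises, while Conclusion 2 (E) only appears once the extra step D, recasting Assertion 1 as just another belief1, is supplied separately.

```python
# Forward chaining over the labelled propositions A..E from the list
# above.  Rules pair a tuple of antecedents with a consequent.
def chain(rules, facts):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antes, cons in rules:
            if set(antes) <= facts and cons not in facts:
                facts.add(cons)
                changed = True
    return facts

premises = [(("A",), "B"), (("B",), "C")]      # Premise 1, Premise 2
print(sorted(chain(premises, {"A"})))          # ['A', 'B', 'C'] - Conclusion 1

# Conclusion 2 (E) needs the extra step: Assertion 2 (D) added as a
# further fact, plus the rule that C and D together yield E.
print(sorted(chain(premises + [(("C", "D"), "E")], {"A", "D"})))
# ['A', 'B', 'C', 'D', 'E']
```

Note that D is not derivable from the premises; it has to be asserted, which is where the sleight of hand with the word “belief” does its work.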

Well, ok.

I’m happy with the probability of naturalism and evolution being unknowable, even though there is independent evidence for at least evolution and insufficient evidence to support anything other than naturalism.

Plantinga, however, tries to confuse the argument by defining both low probability and an unknown probability as a “defeater”, allowing someone who is so inclined to think that an unknown probability is indistinguishable from a low probability.  He’s trying to argue that the assertion that “naturalism and evolution are true” leads inexorably to the conclusion that “naturalism and evolution are not true”.

All he manages to do, however, is show that if naturalism and evolution are true, then we cannot know with 100% certainty that naturalism and evolution are true.  In other words, if naturalism and evolution are true, the scientific fact of naturalism and evolution is precisely the same as any other scientific fact – we don’t know anything to be true with 100% certainty.

(There’s an extremely remote, non-zero chance that The Matrix was actually an ironic mockumentary screened by our mechanical overlords to taunt us and that we are plugged into vats precisely as shown in what is portrayed to us as “fiction”.  This extremely unlikely possibility chips away at any otherwise perfect certainty we might have had about our world.  Scientists will think you are some sort of batty philosopher if you raise this argument, but will eventually concede that they cannot know that we aren’t in vats.)

But let’s look again at the path that Plantinga took to get to this somewhat less than totally convincing conclusion.  It all hinges on the probability that our cognitive faculties are reliable.  Darwin and Churchland apparently thought that this probability is relatively low, while Quine and Popper (bless their cotton socks) apparently thought that it is relatively high.

But what extent of reliability are we considering here?  This is another theist trick: they use terms that sound firm and chunky but which are, when you look at them more closely, gossamer thin.

We all know that our cognitive faculties are imperfect (or at least that those of others are).  Well, we know it with 99.9% certainty.  We make mistakes, we make things up, we misremember things, we are susceptible to illusion, we don’t pay attention to everything that goes on around us, we trust those we shouldn’t and we don’t trust those we should, we see faces where there are no faces, we run from things that seem to be tigers but aren’t, and yet we fail to notice the car hurtling towards us on the road that we are pulling out into.

So long as we don’t define what we mean by “reliability” there’s nothing really contradictory in the positions of Quine, Popper, Churchland and Darwin.

If we were wrong all the time, we’d not survive.  Clearly on every single level at which it is important to (generally) be right, we are (generally) right.  So Quine and Popper are right, our cognitive faculties are (generally) reliable.

But we don’t have to be right about everything to survive.  Males of various species can get aroused by very basic simulations of females of their species (artificial cows used to collect semen from bulls look nothing like real cows).  The ability to be ready to mate at short notice is a benefit, even if a chap might occasionally get ready erroneously.  Similarly, it’s a benefit if we don’t eat things that will kill us.  There is a good reason not to eat some of the things that the Bible prohibits, because they can make us sick (meat left out overnight, pork which is not sufficiently well cooked, shellfish).  Thinking that being sick afterwards is a punishment from God for eating the wrong thing is just as efficacious as avoiding those foods because they make you sick.  So, we can be wrong about a lot of things and still get an evolutionary advantage, which means Darwin and Churchland were right too.

Plantinga totally ignores that the scientific method takes into account the suggestion that, individually, our cognition is faulty, but that this may be overcome if we don’t assume our beliefs1 to be true; if we gather evidence to support or refute our beliefs1; if we have the intellectual courage to accept when evidence does refute our beliefs1; if we invite others to critically assess our evidence and our interpretations of that evidence where it seems to support our beliefs1; and if we have the grace to accept that we were wrong when the evidence or interpretation is shown to be problematic.

The only person in this whole debacle who is shown to be comprehensively* wrong is Plantinga.


* Where “comprehensive” means “with a likelihood of greater than 50%” and/or “as in being eaten by a tiger”
