Friday, 29 November 2013

A win for the powers of generosity - Part 2


In Part 1, I mentioned an interesting article – "Generosity leads to evolutionary success" – which is largely based on a paper by Alexander Stewart and Joshua Plotkin, "From extortion to generosity, evolution in the Iterated Prisoner's Dilemma".  The Stewart&Plotkin paper was itself largely a response to what strikes me as a somewhat more technical paper by William Press and Freeman Dyson, "Iterated Prisoner's Dilemma contains strategies that dominate any evolutionary opponent" (this latter paper certainly seems less accessible to a lay reader such as myself, irrespective of how technical it is).

-----------------------------

Both Press&Dyson and Stewart&Plotkin make reference to evolution, but when they do so, they mean different things.

Press&Dyson are talking about the evolution of a strategy by a single "evolutionary opponent", such that this opponent will move towards a strategy that maximises their score.  This can be equated to a situation in which I, as a stock trader, might modify my buy and sell strategies over a period of time until I consistently get the best possible income.  Every now and then, I could use a slightly different combination of decision parameters for a couple of sessions and compare the outcome against previous sessions: if I do better with the new combination, I make it my new strategy; if I do worse, I keep my original strategy.  I'm not evolving, but my strategy is.  I would be an "evolutionary trader", rather than an "evolving trader".

Stewart&Plotkin, however, are talking about the evolution of a population as one strategy prevails over another.  This would be as if a group of traders were able to see how others in the group did with their strategies and were willing to adopt the more successful strategies in place of their own, or as if traders with more successful strategies could employ more new junior traders who would use the same successful strategies and later employ yet more juniors.  Over time, the dominant (most populous) strategies would be those that are most successful.

I did some very simple modelling of this latter approach and discovered something that I found rather interesting.

Press&Dyson provided a “concrete example” of extortion in which the extortionist (E) responds to cooperation and defection on the part of their opponent (G) in a probabilistic way (as discussed in Part 1, p1=11/13, p2=1/2, p3=7/26 and p4=0).

To maximise her score against an extortionate strategy like this, the G strategy player must always cooperate (so q1=q2=q3=q4=1).  Therefore, while G and E strategy players face off against each other, E wins.

However, in a mixed population in which a player might play against either a G strategy player or an E, what Stewart&Plotkin found is that when a G strategy player meets another G (assuming they retain their always-cooperate strategy), they'll reap sufficient rewards from mutual cooperation to mitigate the losses that follow from playing against an occasional E strategy player.  When an E strategy player meets another E, however, they are quickly locked into mutual defection and obtain a low score.  Therefore, to get a good score, an E strategy player needs to meet a G strategy player, while a G strategy player gains nothing from an E and benefits only from meeting another G.  For this reason, a population of mostly E strategy players becomes self-limiting, while a population of mostly G strategy players will grow.
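These head-to-head outcomes can be checked with a short calculation.  A pair of memory-one strategies defines a four-state Markov chain over the round outcomes (CC, CD, DC, DD), and the long-run payoffs come from its stationary distribution.  The sketch below assumes the conventional payoff values (R, S, T, P) = (3, 0, 5, 1); the function names are mine.

```python
import numpy as np

# States of one round, from player 1's perspective: CC, CD, DC, DD
# (first letter = player 1's move, second = player 2's move).
PAYOFF_P1 = np.array([3.0, 0.0, 5.0, 1.0])  # R, S, T, P
PAYOFF_P2 = np.array([3.0, 5.0, 0.0, 1.0])

def long_run_payoffs(p, q):
    """Average per-round payoffs for two memory-one strategies.

    p[i] = probability player 1 cooperates after state i (CC, CD, DC, DD);
    q[i] = the same for player 2, indexed from player 2's own perspective.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    swap = [0, 2, 1, 3]  # player 2 sees state (a1, a2) as (a2, a1)
    M = np.zeros((4, 4))
    for s in range(4):
        c1, c2 = p[s], q[swap[s]]
        for i, a in enumerate([c1, 1 - c1]):      # player 1: C then D
            for j, b in enumerate([c2, 1 - c2]):  # player 2: C then D
                M[s, 2 * i + j] += a * b
    # Stationary distribution: solve pi M = pi together with sum(pi) = 1.
    A = np.vstack([M.T - np.eye(4), np.ones(4)])
    b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(pi @ PAYOFF_P1), float(pi @ PAYOFF_P2)

extort = (11/13, 1/2, 7/26, 0)   # Press&Dyson's concrete extortioner E
allc   = (1, 1, 1, 1)            # fully cooperative G response

print(long_run_payoffs(extort, allc))    # E ~3.73 vs G ~1.91: E wins head-to-head
print(long_run_payoffs(allc, allc))      # ~(3.0, 3.0): mutual cooperation
print(long_run_payoffs(extort, extort))  # ~(1.0, 1.0): locked into mutual defection
```

Run as-is, this gives roughly (3.73, 1.91) for E against an always-cooperating G – the extortionist takes three times the surplus over the mutual-defection payoff – while two Gs score 3.0 each and two Es grind down to 1.0 each.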

The population of E strategy players will tend to grow well only at the border with the population of G strategy players, which is part of why smaller populations favour extortion – smaller things have proportionally greater surfaces than larger things, because the surface-to-volume ratio decreases as size increases (this is why large creatures have problems to deal with in hot climates and small creatures tend to struggle in cold climates).
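That scaling is easy to illustrate: for a square cluster of side k on a grid (a rough two-dimensional stand-in for a population), the perimeter-to-area ratio falls as the cluster grows.

```python
# Perimeter-to-area ratio of a k-by-k square cluster of cells: a rough,
# two-dimensional stand-in for the surface-to-volume point above.
def border_to_area(k):
    return (4 * k) / (k * k)  # simplifies to 4/k

for k in (2, 5, 10, 20):
    print(k, border_to_area(k))  # ratios: 2.0, 0.8, 0.4, 0.2
```

A big cluster of extortionists has proportionally little border at which to find generous victims, while a small one is almost all border.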

I modelled this in a spreadsheet (using the strategy adoption approach) and found that generous strategies did indeed expand at the expense of extortionate strategies, so long as a few provisos were met:

· the population had to be largish – I used N=400

· each pairing had to run more than two iterations of the Prisoner's Dilemma (PD)

· there had to be some inclination to change strategy

This last proviso might seem a bit strange, because it seems obvious that to evolve, a population must have at least a little inclination to change.  What I found, though, was that the magnitude of the inclination to change had a huge effect not so much on the outcome as on how rapidly, and to what extent, the outcome manifested.  An inclination to change of 5% with three iterations of the PD left approximately 5% of the players using the extortion strategy after 50 rounds.  With 300 iterations of the PD and the same 5% inclination to change, there were on average slightly fewer extortionate players.  Even with 3x10^30 iterations, all else held constant, just under 5% of players were still extortionate.

Make that an inclination to change of 10% with 300 iterations and after about 50 rounds almost all extortionate players are gone.  Increase the inclination to change to 20% and the extortion strategy players are gone after 30 rounds.
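I can't reproduce my spreadsheet here, but the strategy-adoption approach can be sketched in a few lines.  The pairing and copying details below (random pairing each round, then copying a random higher-scoring player with the given inclination to change) are my assumptions about how such a model might work, not a reconstruction of the original, and the generous strategy is simplified to full cooperation.

```python
import random

# Cooperation probabilities after (my last move, their last move) = CC, CD, DC, DD.
STRATS = {
    "E": (11/13, 1/2, 7/26, 0.0),  # Press&Dyson's extortioner
    "G": (1.0, 1.0, 1.0, 1.0),     # fully cooperative stand-in for the generous strategy
}
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
STATE = {("C", "C"): 0, ("C", "D"): 1, ("D", "C"): 2, ("D", "D"): 3}

def play_match(s1, s2, iterations, rng):
    """Average per-round scores for one pairing; both start by cooperating."""
    m1 = m2 = "C"
    t1 = t2 = 0.0
    for _ in range(iterations):
        t1 += PAYOFF[(m1, m2)]
        t2 += PAYOFF[(m2, m1)]
        n1 = "C" if rng.random() < STRATS[s1][STATE[(m1, m2)]] else "D"
        n2 = "C" if rng.random() < STRATS[s2][STATE[(m2, m1)]] else "D"
        m1, m2 = n1, n2
    return t1 / iterations, t2 / iterations

def run(n=400, rounds=50, iterations=300, change=0.1, seed=1):
    """Track the extortionate fraction of the population, round by round."""
    rng = random.Random(seed)
    pop = ["E"] * (n // 2) + ["G"] * (n // 2)
    history = []
    for _ in range(rounds):
        history.append(pop.count("E") / n)
        order = list(range(n))
        rng.shuffle(order)
        scores = [0.0] * n
        for a, b in zip(order[::2], order[1::2]):
            scores[a], scores[b] = play_match(pop[a], pop[b], iterations, rng)
        # Strategy adoption: with probability `change`, copy a random
        # player's strategy if that player scored higher this round.
        new_pop = pop[:]
        for i in range(n):
            if rng.random() < change:
                j = rng.randrange(n)
                if scores[j] > scores[i]:
                    new_pop[i] = pop[j]
        pop = new_pop
    history.append(pop.count("E") / n)
    return history
```

Varying `iterations` and `change` lets you probe the provisos above: very short matches let extortionate pairs keep scoring respectably off their opening cooperation, while long matches grind E-versus-E pairings down towards the mutual-defection payoff.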

I’d interpret an overly high inclination to change as being bad, since it will wipe out variation in strategies where variation might be necessary to adapt to future environments which don’t favour a currently successful strategy.

The second proviso, about how many iterations of the PD are required for the generous strategy to prevail, corresponds well with the idea that our morality breaks down when times are bad.  In other words, when things are peaceful and stable, each member of a society will interact repeatedly with various other members of the society.  These repeated interactions mean that generous, win-win exchanges will predominate.  But when each interaction may well be the last, there are no iterations to speak of, so extortion will be fostered.

I was also interested in latent tendencies, by which I meant an inclination to be extortionate or generous.  Unfortunately, I don't have the time or resources to model this properly, but I suspect that if members of a society have a certain inclination towards extortion strategies, those strategies will tend to predominate given occasional "bottlenecks" or periods of "bad times".  As discussed in Part 1, the feeling that things are bad can lead people to act less generously.  Some people, when times seem bad, will flip right over from "less generous" to "outright extortionate", and in general people will have a certain amount of tolerance with respect to the difficulty of the times.  Those who have little tolerance will become extortionate when it is not appropriate (and thus become criminals), while those with too much tolerance will suffer at the hands of others when things really are tough (or at the hands of criminals even in reasonably good times).

An interesting question is how we could use this insight to better our societies.  Personally I think we need to do at least two things.  Firstly, we need to do what we are pretty sure will promote generosity – by encouraging people to see that we are in a time of plenty and that we are part of a large, inclusive population.  Secondly, we need to dissuade extortionate behaviour – by making the cost of not cooperating, of not being generous, high enough to make cooperation the best option even in one-off interactions.

Doing this without damaging ourselves in the process is the difficult part.

Friday, 22 November 2013

An Atheist Onslaught on Free Will?


In Random Will, I presented a mechanism by which some form of free will might be possible.

In that article I wrote:

If the universe is entirely deterministic, there is no free will because our actions are merely the consequence of our interactions with our environment.  Presented again with precisely the same environment, our brains would go through exactly the same processes and we would make the same decisions, take the same actions and think the same thoughts.

If the universe is entirely random (and therefore entirely indeterminate), then there is no free will either.  There would be little, if any, causal relationship between action and reaction in a random universe.  Presented again with precisely the same environment, our brains would likely react in a vastly different way.  However, integral to the concept of free will is the idea that there is some degree of constancy in our thoughts and behaviour.

Since then, I've been involved in a wide-ranging discussion on an "atheist onslaught against the common concept of free will".  This discussion has involved many participants, both theist and atheist, as well as what appears to be the occasional quantum mystic.  Not all have been as grumpy as myself.

I’ve collected some of my contributions to the discussion below, with some minor editing, as a form of intellectual recycling.

----------------

The initiator of the discussion used the definition from a creation science web-site (creationwiki), claiming it to be the "common definition":

Free will, is the capability of agents to make one of alternative futures the present. The logic of free will has two main parts, a categorical distinction is made between all "what chooses", and all "what is chosen", referred to as the spiritual domain and the material domain respectively. This understanding in terms of two categories is named dualism.

Together with these two domains come two ways of reaching a conclusion, subjectivity and objectivity. You have to choose to identify what is in the spiritual domain, resulting in opinions (subjectivity). You have to measure to find out what is in the material domain, resulting in facts (objectivity).

This definition seems to have some major issues. First and foremost, a computer has the capability which is described as "free will" in this strange definition. A simple example is the computer that controls the traffic lights at an intersection. Say the lights are flashing amber and there are two alternative futures for one set of lights: they can be red, or they can be green. The computer can make a particular future become the present (assuming the passage of time) by making the light green. The thing that falls into the category of "what chooses" is the computer and "what is chosen" is the future state of the set of lights (which can be objectively measured, either by checking that a light in a particular position is lit, or by checking the nature of the lens over the light that is lit).

I think we can safely abandon this definition, can't we?

There seems to have been an effort to define "spirit" and "freedom of opinion" but not "free will". It's bizarre that long discussions about free will (such as the one in question) are about something that is effectively undefined, either because no-one makes the effort to produce a definition, or because the operating definition is as weak as this one.

I asked that the main protagonists make clear what they mean by free will. As a start, by trying to answer the following:

By "free will", do you mean that decisions can be made without taking any cognisance of prior states? Do you mean that a person who is brought up on the wrong side of town and is taught to be a criminal makes a free decision every day to continue on with criminal behaviour? Or are you saying that free will somehow allows us to make decisions totally divorced from our experience and that it is only some weakness on the part of criminals that makes them freely choose to do what other people in their social group (i.e. other criminals) do and it would be easy for them to bring their free will to bear and choose to stop being a criminal at any time?

Or, by "free will" are we saying that in the traffic light example, we are more like the computer, making choices between more complex alternatives, rather than deciding which lights should be on at certain times? In which case, our decisions are totally determined (or pretty much determined) by antecedent causes, and our "free will" is just the ability to affect other things that are not able to interact in the same way with antecedent causes?

Sadly no-one really put forward a usable definition.

Never mind, I presented one for consideration.

--------------------

To my mind there is "strong" free will and "weak" free will.

If you have "strong" free will, then you are not influenced by what came before, and you have some sort of immutable element that can make decisions unconstrained by current circumstances (i.e. an immortal soul). This presumes some sort of absolute morality, since the "right" thing to do won't be situational.

If you have "weak" free will, you can mould your decisions, you can change your mind and you don't have to follow a set rule book - but you are going to be heavily influenced by who you have become and what is going on at the time and your options will be limited by a number of factors (including what you know and what you can imagine). This presumes a more fuzzy logic approach to decision making, with multiple overlays that contribute to a sort of grid from which options can be selected. It's "sort-of-free will".

This latter form of free will is what the materialists tend to be happy with (including me). The former is the domain of magical thinkers including theists.

--------------------------

Later on, I returned to the idea of what free will is not:

I prefer to use a definition based on Laplace's demon. Laplace's demon knows everything there is to know - all the characteristics of the most fundamental particles (or waves or probability functions or whatever is most fundamental to the universe). If the demon can tell what is going to happen in the future based on that knowledge, then there is no free will. If there is something else, something fundamentally unknowable, then that makes it impossible for the demon to predict the future and free will is a possibility.

(Note that I am assuming here that probabilistic phenomena are based on laws and forces that we humans might not be able to know about and that it is this lack of access that makes it impossible for us to identify the antecedent cause that results in the radioactive decay of particles at some precise instant and not before or later.  However, the reliability of probabilistic predictions of such decay does indicate that some regular mechanism may be at play "behind the curtain" as it were. If so, Laplace's demon would be able to see behind that curtain so long as what goes on there is natural, rather than supernatural.)

This conception makes free will something magical, rather than emergent from physical phenomena, and therefore it's the sort of free will that theists tend to believe in. I refer to this form of free will as "strong free will" and I think it's what Belinda calls "ontic free will". (I suspect though that "ontic" just means it's real, sort of like when the Greek Titans Prometheus and Epimetheus were creating humans and all the other creatures respectively, they are described as slotting abilities into their creations - they'd have had a free will module on the shelf and Prometheus would have inserted that into his human. Alternatively, free will might have been in a vat, with Prometheus decanting a large portion and putting it into the human, and Epimetheus taking smaller portions for all the other creatures.)

With the notion of "strong" free will comes the notion of "weak" free will. For me, weak free will is a consequence of the imperfection of biological machines. We are sort of programmed, by our genetics, by our upbringing and other experiences, and by our culture (and also our language, of course). This programming sits inside our wetware processor and reacts to stimuli - but our sensors are not perfect, nor is our brain. The huge complexity of what is going on when we respond to our environment makes how we act appear (at times) as if it's driven by strong free will, but it's not - we're just reacting according to programming that we ourselves did not choose, using a processor that we did not choose, on the basis of input that we did not choose, via sensors that we did not choose.

To a certain extent it is true that we cannot help ourselves, but only to an extent. For example, we could choose what factors play a greater role in each decision (although that choice itself is limited), we can eat Italian if annoying our friends who don’t like Italian is the most important factor, or Greek if gastronomic pleasure on our part is primary (and we are big fans of Greek food). But not only that, we can mould our wetware processors, and our programming, by changing our behaviour to improve our outcomes in the future. We're not always successful at this (because we don't always know what we will want in the future), but it does allow for a very strong feeling that we are, at least in part, self-created beings exercising free will.

---------------------------

What I have noticed, both in this debate and more generally, is that the whole discussion regarding free will tends to miss a key element - what is the actual mechanism for free will? At least theists have their "souls" to point at, although they can't prove the existence of a soul (the closest they have got is via that extremely dodgy 21g experiment and surely no-one takes that seriously). So I ask, if a free will defender is not positing a soul, but is positing free will, what is their proposed mechanism?

Note that pointing to quantum uncertainty isn't going to help because fundamentally not being able to know what is going to happen in the future isn't any more helpful with respect to free will than knowing the future perfectly. Unless of course the free will defender posits a mechanism that somehow bends quantum uncertainty to their will, in which case they head into quantum mysticism and, again, they would have to explain how this mechanism works.

Note also that I am not saying that if a mechanism isn't immediately available, that the idea has to be shelved forever.  I'm just saying that if you don't have a mechanism by which free will would be possible (and you have no other evidence that free will actually exists), then you're not in much of a position to champion the existence of free will.

-------------------

Unfortunately, no-one was able to present a mechanism by which free will would work – not even “weak” free will, let alone “strong” free will.

-------------------

Later, I got a little hot under the collar:

Can we agree that if the universe is strongly, and ontologically, deterministic, then there is no free will - irrespective of whether the universe is epistemologically predictable?

I think we can, so I'll press on.

I struggle to see how people justify the leap from the notion that the universe is *not* strongly (and ontologically) deterministic to the conclusion that free will follows inexorably. As Belinda indicated, we can't truly perceive causal connections. We can only intuit them, or deduce them, or reverse engineer them - and when we do so we can only do so incompletely.

At certain levels (i.e. macroscopic levels) and within certain timeframes (i.e. shorter timeframes), we seem to be able to predict the future based on our understandings of causal connections, but only to a certain extent. If we want to be totally accurate, we can't. If we want to predict well into the future, we can't. And if we want to make discrete predictions at the subatomic level, we can't (we can only make probabilistic predictions).

Any will we might have, therefore, is somewhat frail - we can't really will a thing to happen with total certainty and what we can will to happen is predicated on pre-existing circumstances and a host of physical "laws" over which we have no influence whatsoever. Thus we cannot will ourselves to fly by flapping our arms, but we can will ourselves to drink a cup of coffee, so long as a cup of coffee can be procured.

How could such a will be justifiably described as "free" when it's so very much constrained?

But even going this far is making an unwarranted assertion. I suspect that the free will defender is implying that if the future is (ontologically) "open", then free will follows. The problem here is that the inscrutability of the future doesn't necessarily give us (humans, or any other living creature) an ability to shape it. There's no reason to think that we have any more influence over an indeterminate (and thus unpredictable) future than we have over a strongly deterministic future. From what I can tell, in a practical sense, the claims about free will centre on an ability to influence the future - not just on whether the future is fixed or not.

I suggest that the best position is to be sceptical about the existence of "strong" (or "ontic") free will until such time as evidence for its existence appears (or a mechanism by which free will would work).

--------------------

It was about this time that the initiator of the discussion revealed himself to be crazier than had previously been apparent – singing the praises of the Tea Party, claiming that people who deny his definition of free will were spiralling into depravity, accusing me of being a filthy cursed liar (when I'm so obviously clean) and using a website like www.scienceoflife.nl as a reference.

Syamsu had been claiming that a Professor Walter Schempp had used a freedom-based theory to produce the functional MRI and I responded that there was no indication that Schempp had any involvement in the development of the fMRI.  It was at this point that he called me a filthy cursed liar and provided the following link and quote as counterevidence:

"You have studied and come to understand the complement of this concept. You created mathematical models which are used for constructing functional MRI devices, by which now even separate nerve strands can now be made visible in the body, thanks to your work."

I put a bit of effort into my response (tinged with an element of anti-quantum mystic grumpiness) so I’ll share it with you:
-------------------

Syamsu,

Where to start?

Ok, firstly this is a useless reference because it does not support your claim, i.e. that Schempp developed the fMRI (expressed when you asked in a previous post "How can anybody accept the machine Schempp produced, but reject the basic theory Schempp used to produce the machine, as pseudoscience?") The quote only claims that Schempp has created mathematical models that "are used for constructing functional MRI devices". At best he may have contributed to the improvement of fMRIs.

Secondly, the nature of the quote makes it useless. It's taken from an open letter to Professor Schempp, inviting him to contribute to a book "Science of Life". There is no indication that Schempp has responded.

The only person from the list of people Otto von Oddball has invited to contribute whose name I recognise is Matti Pitkänen, and that's only because I've had a similar discussion to this about his TGD theory. It's interesting to note the comment on the page that lists Matti's work: "These materials are made available online because TGD publications are not yet accepted in so-called 'respected' physics journals."

A search of Matti's eye-strainingwebsite indicated that he is actually linked to Science of Life.  For example Matti makes reference to a document on Crop Circles ... yes, that's right, he's talking about crop circles being "messages providing biological information (including genetic codes) about some unknown life forms". As a keen watcher of the television show QI, I've seen some of the guys who have created a number of these crop circles as a bit of a lark, including one with a QI logo.

No wonder, if Walter Schempp is a half-way reputable scientist, that he wants nothing to do with Science of Life.

Finally, as I've already presaged, the website from which you've taken a quote that doesn't support your claim (and would be useless even if it seemed to do so) is hardly reputable. It's a treasure trove of dead ends; there's no indication that "Science of Life" is linked to any reputable academic body; he talks about a Science of Life Symposium in 2011 that never happened; and when you look at Otto von Oddball's writings, you quickly see that there's something seriously wrong. For example, look at this article on something close to your heart, "Freedom of Choice" - I'll post a bit here so people can see without having to soil themselves by visiting the site:


Freedom of choice is a change in dimensional organisation.
This can be represented by Dimensional Operators.
The Vortex is a well known Dimensional Operator.
A vortex unifies Point, Line, Plane and Volume.
Although we can define this, we cannot describe this.
The reason is that Dimensional Operations involve involvement.
In changing our involvement, we also change the Dimensions.
This requires a notation addressing multiple logic.
Presently, scientific notation offers this option.
Classical, Relativistic, Probabilistic and Unified theories complement each other.
Each pertains to a different mode, degree, in participation in creation.
The shift from one theory to another is executed by a scientist, using choice.
In changing our involvement, we change our participation in creation.
At the same time, we change our realisation in/of/for creation.
But we also create a realisation of change.
We need to realise that WE create that change.
The realisation of creation of change is known as awareness.
The realisation of change of creation is known as consciousness.
The realisation of consciousness of change in creation is called life.
The realisation of awareness of change in creation is response-ability.

The rant just keeps going. Pretty much the whole thing (and the text on other pages) is written in four line blank verse.

In sooth I do know why it makes me sad, Syamsu, but a quote from this website unwisely ripped does not me a liar make. (Thanks and apologies to Shakespeare.)

Your challenge now, Mr Syamsu, is to produce authoritative evidence that Schempp supports a theory that even remotely supports your creationwiki definition of free will and that he applied that theory in the development of the fMRI.

Please note that I am not ridiculing Professor Schempp, I'm ridiculing your misuse of Professor Schempp's work (and the misuse by other mystics). Unless of course, Professor Schempp is a willing participant in this nonsense, in which case I'll ridicule him too, but so far there's no real evidence that he is, other than his name appearing at the quantrek.com site (a site dedicated to what appears to be quantum nonsense).

-----------------------

My expectations with regard to a response were not high and Syamsu managed to live down to them.


----------------------

I just realised that I left out one of my longer attempts to get a cogent response from Syamsu (slightly edited):

Ok, if the only definition of free will you will accept is that which is closely linked to a peculiar fantasy regarding the origins of the universe, then of course you're going to find that atheists don't think much of "free will". I don't think that atheists are attacking the "common concept of free will", nor do I think that your definition is anywhere close to the common conception of "free will" (except in the sense that it is "base" or "simple", as in "simple-minded"). I think you'll find that most atheists will just ignore it, because it's silly. I will do you the honour, however, of trying to take you seriously.

You seem to think that atheists, and others who disagree with your "common concept of free will", are ignorant of, or don't understand, how choosing works. This is a curious claim.

Let me try to give a real example of what goes on inside the mind of a real live atheist.

Earlier today I was playing a computer game, Need for Speed. One could say that I chose to play it. I certainly did choose to purchase the game, from a range of many other options (so I made "neopolitan owns a copy of Need for Speed" the present, in accordance with your creationist definition of free will).

However, I'm a bit of an achievement junkie, so I have become conditioned to play this game (and games like it) for the thrill I get from, for example, learning how a particular car handles and using it to beat one of the harder races. A win, particularly if it required some effort, releases a rush of endorphins and it is often the case that I "choose" to immediately start a new race after a sweet victory. But do I really choose?

Sometimes it doesn't feel like it.  Sometimes I know that I should be doing something else, and I might have previously thought "This has to be the last race, then I really must go and see what else Syamsu has written", but I nevertheless find myself clicking on the right combination of buttons to start a new race anyway.

Even within the race scenario, my "choices" are not entirely my own. There's some fiddling going on in the computer program with regard to the steering (I suspect that the game cheats and makes me drive into walls at the most inconvenient times... until it takes pity on me after a few soul-destroying failed attempts), but there is also some autonomous control on my part.

For example, taking a hint from something that the cricketer Don Bradman used to say, I adopted the principle of not looking at cars and structures that I needed to drive past but rather I looked at the gaps that I needed to drive through.  As a consequence I found that I crashed into such obstructions far less frequently. I didn't previously "choose" to crash into civilian traffic, or into annoyingly placed buildings, but the simple act of not focussing on them seemed to result in me hitting them less often.

With regard to that element of my experience, I didn't feel like I had much free will at all - I was responding at an unconscious level, and when I tried to exert a free will of a kind by controlling my driving in order to avoid obstructions, it actually brought about what I didn't want (i.e. crashing into those obstructions at high speed).

Anyway, I eventually did stop playing the game to take my dogs for a walk.

Now, was that my choice? Was it a free choice? It certainly felt like it, more so than my compulsion to keep playing, but really, I was just responding to a more subtle mix of stimuli and motivations.

One section of my brain wanted more endorphins in the quick rush from winning a race, while another wanted the slower release of endorphins and health benefit of a walk, plus there was an avoidance of the guilt that would have ensued if I had failed to walk my poor dogs. But walking the dogs is something that has been foisted on me by my earlier self (the one who bought the dogs in the first place), and really I only get a little window of choice in respect to exactly when I walk. I further get a bit of a choice as to where I walk, limited by the distance we walk and the weather, and where we walked the day before, and so on. In reality, I just follow a pattern, if I remember to do so.

Now, Syamsu thinks that I don't know about making choices, but with regard to the walk, I made a whole host of choices ... when to walk, where to walk, what to wear when I walked, what music to listen to while I walked, who to stop and talk to while out walking, who to ignore and hurry past, when to stop to let traffic past, when to walk across the road, when to check my phone for messages, when to scratch my nose and so on. I managed to do all of this, despite being an atheist (or more specifically a non-theist)!

Each one of these choices (choosings?) presented me with alternative options, and I acted to make one of those options become the present (they became "the present" at the time; sadly they are now all in the past, except for my ongoing health resulting from the walk and the relative happiness of my dogs). So, in the Syamsu world, was I exercising "free will" as he defines it, or was I somehow getting it wrong? If so, could Syamsu explain how I was getting it wrong?

Please note that it is possible to scientifically explain all of my behaviour as described above (even my suicidal crashes during the game); none of it is particularly mystical. I'm not seeking therapy with respect to it, I just want Syamsu to explain how this "choosing" thing is different from the choosing that I've been doing pretty much every day of my life.

Syamsu's response?  Well, he only responded to the first paragraph, with this:

That's great but taking the common concept seriously means to disregard all other concepts. Go ahead, apply the common concept.

Friday, 15 November 2013

A few questions from unkleE

I’ve been asked a few questions by unkleE:

“Hi neopolitan, I wonder if you'd mind my asking a few clarifying questions please?

"When someone like Barnes personally corrupts science in support of a theist agenda, he needs to be called on it, personally."

Could you outline, please, exactly how Barnes corrupts science, and also how he uses it in support of a theistic agenda (two separate questions)?

"the Templeton Foundation and the Discovery Institute .... Barnes doesn't come out and say he's a theist, he doesn't necessarily act like an apologist ..."

Can you offer any evidence that Barnes has any connection with either of those two bodies, and that he is an apologist, or are you using these references for some other purpose? How would any such connection or action make any difference to the science Barnes outlines?

""unbiased cosmologists" including Martin Rees - Templeton winner and Paul Davies - Templeton winner. You're being a little untruthful to include those two in your list of "non-theists".

Have you any evidence that either is a theist? Have you read any of their books or papers? Can you give me any quotes in support? Or is it just 'guilt by association'?

How does Barnes corrupt science and how does he use it in support of a theist agenda?

A scientist directing people to read William Lane Craig’s work as “worth a read” is corruption enough.  Read elsewhere in my blog about the various nonsense that Craig comes up with.  That a serious scientist might support the author of such nonsense is mind boggling.

In the article “An Open Letter to Luke Barnes”, I encouraged him to make his position clear, as a theist or otherwise.  He failed to do so.

Can I offer any evidence that Barnes is linked to the Templeton Foundation or the Discovery Institute?

You drew a conclusion from words in two separate paragraphs regarding those organisations.  I don’t think that Barnes is an apologist; he would have been more forthright about his position if he were.  As I said: “he doesn’t act like an apologist, but he sits at the edges using his apparent scientific credibility to defend scientific corruptions”.

How would any such connection or action make any difference to the science Barnes outlines?

I’m not saying that Barnes is connected to the Templeton Foundation or Discovery Institute; that was your conclusion.  But there are people who are linked to these organisations and they deliberately try to find aspects of science that can be bent to apologetic arguments.  Basically, they look for the gaps in which god might still reside.

An example of such a “gap” is central to the argument of irreducible complexity.  Note the provenance of the link.  Here’s another one, which is slightly less favourable.

The Discovery Institute in particular pushes for apologetic science to be taught in schools.  I use the term “apologetic science” to cover creationism in various forms, including the biological anti-evolution arguments (or arguments for guided evolution) and also cosmological arguments such as fine-tuning.  Whether the Discovery Institute will move into other areas of physics is yet to be seen; Intelligent Falling is certainly an option, given that gravity is only a theory.  Medicine is another fertile field – after all, why should such a potent antibiotic as penicillin have been hiding in mould if not placed there by a caring and thoughtful god? (Note that this argument will have to be made quickly, before we are overtaken by the evolution, oops I mean “intelligent design”, of multidrug-resistant bacteria.)

I suspect that the corruption of the minds of students will have a knock-on effect on what science is done and how, although I agree that it won’t affect the underlying science – evolution and gravity will work the same way as they do now, irrespective of how well our children and grandchildren understand the processes in the future.

Martin Rees and Paul Davies – theists or non-theists?

Martin Rees claims to have no religious views at all; he agrees to the description “a church-goer who doesn’t believe in god”.  He’s only a “non-theist” in so much as he’s not specifically a theist.  His recognition by Templeton is based on his willingness to accommodate religion, for example in education, in part because he’s worried that if people are given a choice between “God and Darwin, there is a risk they will choose their God and be lost to science”.  As someone who identifies as a non-theist, I don't think that this is compatible with good education.  While I agree that being too confrontational can be counterproductive, I disagree that intelligent people should roll over and let poorly educated pulpit thumpers twist the minds of children unchallenged.

Paul Davies, if anything, is likely to be a deist – but then again he could be a theist.  He certainly pushes for the unification of science and faith, which would be a strange position if he were profoundly sceptical about faith.  Here’s an article by Davies that indicates that he has a strong mystical streak, if not a specifically Christian one.

If unkleE means by “non-theist” anyone who is not an evangelical or apologist, or anyone who for whatever reason fails to identify themselves overtly as a theist, then ok, I agree, both Martin Rees and Paul Davies (and perhaps even Luke Barnes) are “non-theists” in that sense.  They’re just not non-theists in the same category as most people who identify themselves as non-theists.

Is it guilt by association?

Certainly there is an element of this.  Imagine the outcry if a philosopher or scientist accepted money from NAMBLA (by which I don’t mean the fictional “North American Marlon Brando Look-Alikes”).  While the Templeton Foundation and Discovery Institute are not (quite) in the same league as the real NAMBLA, being willing to accept money from such organisations is a statement in itself.

Friday, 8 November 2013

A win for the powers of generosity - Part 1


Lokee kindly drew my attention to an interesting article recently – "Generosity leads to evolutionary success" – which is largely based on a paper by Alexander Stewart and Joshua Plotkin, "From extortion to generosity, the evolution of zero-determinant strategies in the prisoner’s dilemma".  The Stewart&Plotkin paper was, again largely, a response to what strikes me as a somewhat more technical paper by William Press and Freeman Dyson, “Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent” (this latter paper certainly seems less accessible to a lay reader such as myself, irrespective of how technical it is).

I spent a bit of time mulling over these papers (and a couple more that are referenced by one or both of them) and thought I might share the outcome of those ponderings.

-----------------------------

My first reaction was to think that, based on the popular article at the Archaeology News Network (ANN), there might be scientific vindication of some of the ideas raised in the series of articles Morality as Playing Games.  In that series, I primarily suggested that self-interest might lie behind our morality (and more specifically the avoidance of loss).  I also suggested that successful ethical systems would involve overt generosity and kindness during good times combined with an ability to act less generously either during bad times, or when no-one is watching.  I further suggested that cooperation might arise when you compete against a third party (for example, in the prisoner’s dilemma, the prisoners can compete against each other, which is the standard assumption, or they can cooperate in order to compete against the prosecutor).

The key evidence that seems to vindicate the idea that bad times promote less generous behaviour is in Figure 3 of Stewart&Plotkin.  They found that in an evolving population “generous” strategies were more successful than “extortionate” strategies, basically because “extortionate” strategies don’t do well against themselves so they are in effect self-limiting – but when the populations were small (for example when it comes down to a population consisting of just you and me, or my family and your family), “extortionate” strategies prevail.

But what, you might ask, is an “extortionate” strategy?  

Let’s return to our prisoners, Larry and Wally.  I’ll have to modify the scenario slightly to introduce the idea of T, R, P and S, which are the “maximum payoff”, “mutual cooperation payoff”, “mutual defection payoff” and “minimum payoff”, conventionally set to values of T=5, R=3, P=1 and S=0 (note however that Stewart&Plotkin use a “donation game” variant in which T=B, R=B-C, P=0 and S=-C, where B>C so that R>P).
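For concreteness, these payoff conventions can be jotted down in code (a sketch; the specific donation-game values B=3 and C=1 are my own illustration, not taken from the papers):

```python
# Conventional iterated prisoner's dilemma payoffs, as used in this post.
T, R, P, S = 5, 3, 1, 0  # maximum, mutual cooperation, mutual defection, minimum

# Stewart&Plotkin's donation-game variant: T=B, R=B-C, P=0, S=-C, with B>C.
B, C = 3, 1  # illustrative benefit and cost (my own values)
T2, R2, P2, S2 = B, B - C, 0, -C

# Both variants satisfy the orderings that define a prisoner's dilemma.
assert T > R > P > S
assert T2 > R2 > P2 > S2
assert 2 * R > T + S  # mutual cooperation beats taking turns at exploitation
```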

-----------------------------------

In Ethical Prisoners, I explained that Larry and Wally are faced with a dilemma in which they can choose to either cooperate with each other (by remaining silent with respect to a crime they are accused of committing) or defect (by confessing).

I used a table to explain how these options play out, which I’ve updated below:

 
                                    Larry defects (confesses)
Wally defects (confesses)           payoffLARRY=1 (P), payoffWALLY=1 (P)
Wally cooperates (remains silent)   payoffLARRY=5 (T), payoffWALLY=0 (S)

or

 
                                    Larry cooperates (remains silent)
Wally defects (confesses)           payoffLARRY=0 (S), payoffWALLY=5 (T)
Wally cooperates (remains silent)   payoffLARRY=3 (R), payoffWALLY=3 (R)

The results are more traditionally represented in terms of cooperation (c) and defection (d) like this (where Larry is the focal player):

 
                 cWALLY   dWALLY
cLARRY           R=3      S=0
dLARRY           T=5      P=1

Larry and Wally can choose different strategies depending not only on what sort of outcome they want but also on what sort of overall scenario they find themselves in.  As originally framed, the prisoners’ dilemma (PD) is a one-shot affair, meaning that Larry and Wally face off against each other once and make a single decision with no historical context or potential for future consequences.  We can, however, consider an iterated prisoners’ dilemma (IPD), in which Larry and Wally face off many times and, presumably, can learn about each other’s behaviour and react accordingly.

If we ignore the prosecutor and any pre-existing moral imperative to not abandon your partner in crime (as discussed in Ethical Prisoners), in the one-shot PD Larry and Wally are likely to choose a simple defection strategy since defection not only opens up the possibility of a maximum payoff, but also avoids the possibility of the minimum payoff.

Things get more interesting with the IPD.  Strategies can now involve a consideration of previous moves and there has been a wealth of evidence to show that the most successful strategy in terms of overall payoff is what is known as Tit-For-Tat (TFT).

(In terms of head-to-head battles, the most successful strategy is (and remains) the simple defection strategy Always Defect (ALLD), since it either wins or draws.)

We can represent the TFT strategy as a probability table like this:

previous round   cWALLY   dWALLY
cLARRY           p1=1     p2=0
dLARRY           p3=1     p4=0

where,

·         p1 is the probability of Larry cooperating if both cooperated last round,

·         p2 is the probability of Larry cooperating if Wally defected while Larry cooperated last round,

·         p3 is the probability of Larry cooperating if Wally cooperated while Larry defected last round, and

·         p4 is the probability of Larry cooperating if both defected last round. 

In short, if Wally defected in the previous round, then Larry will defect this round but if Wally cooperated, then Larry will cooperate.

Different strategies can be tested against each other using PD-bots in tournaments.  With simple strategies such as TFT and ALLD, the whole outcome of the tournament is predicated on the first round.  Assuming that a TFT-bot starts off with cooperation: two TFT-bots will cooperate forever, obtaining an average score of 3 each; two ALLD-bots will defect forever, obtaining an average score of 1 each; and a TFT-ALLD pairing will result in an initial win for ALLD followed by mutual defection forever, with average scores that approach 1 (down from 5 for ALLD and up from 0 for TFT).

(If we don’t assume that TFT-bots start off with cooperation, but instead with defection, then all results default to an average score of 1.)

When assessing the average score of strategies over many rounds against a range of opponents and many iterations in each round, TFT has been shown to be a clear winner (with a cooperative start) despite losing one iteration per round in any match-up against an ALLD.
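These head-to-head outcomes are easy to reproduce with a small simulation (a sketch; the encoding of a strategy as a first move plus a reaction to the opponent's last move is my own, not from either paper):

```python
def play(strat1, strat2, rounds=1000):
    """Iterate the PD between two strategies, each given as (first_move,
    reaction), where reaction maps the opponent's previous move ('c' or
    'd') to this player's next move.  Returns the average payoffs."""
    T, R, P, S = 5, 3, 1, 0
    payoff = {("c", "c"): (R, R), ("c", "d"): (S, T),
              ("d", "c"): (T, S), ("d", "d"): (P, P)}
    m1, m2 = strat1[0], strat2[0]
    s1 = s2 = 0
    for _ in range(rounds):
        p1, p2 = payoff[(m1, m2)]
        s1, s2 = s1 + p1, s2 + p2
        m1, m2 = strat1[1](m2), strat2[1](m1)
    return s1 / rounds, s2 / rounds

TFT = ("c", lambda opp: opp)    # cooperate first, then copy the opponent
ALLD = ("d", lambda opp: "d")   # always defect

print(play(TFT, TFT))    # → (3.0, 3.0): mutual cooperation forever
print(play(ALLD, ALLD))  # → (1.0, 1.0): mutual defection forever
print(play(TFT, ALLD))   # → (0.999, 1.004): one exploitation, then mutual defection
```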

What Press&Dyson discovered is that there are more complex strategies, a subset of “zero-determinant” (ZD) strategies, in which an “extortionate” player can drive a self-interested, evolutionary opponent to always cooperate.  A “concrete” example of an extortionate strategy on the part of Larry (per Press&Dyson) is:

previous round   cWALLY     dWALLY
cLARRY           p1=11/13   p2=1/2
dLARRY           p3=7/26    p4=0

Even before going into more detail, it might be pretty easy to see that Wally’s best option is to always cooperate in order to maximise the frequency with which Larry cooperates.  The likelihood of Larry cooperating is always higher if Wally has just cooperated, and higher again if Larry has also just cooperated.  The flip side of this is that the likelihood of Larry defecting is higher if either of them has just defected.  These facts combine to drive Wally towards unilateral cooperation in order to maximise his score.

What Press&Dyson showed is that if Wally does unilaterally cooperate to maximise his score against the extortionate Larry, he does so at the cost of pushing Larry’s score above his own.  What I take from this is that since Wally can’t win against Larry in the long term, his obvious choices are to:

·         maximise his own score, thereby ensuring that Larry wins, or

·         minimise both scores by locking into mutual defection (and thereby secure a draw).
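Press&Dyson’s example can be checked numerically: a contest between two memory-one strategies settles into a Markov chain over the four possible outcomes of a round (cc, cd, dc, dd), and the long-run average payoffs follow from its stationary distribution.  Below is a sketch (the power-iteration approach and all names are mine) pitting the extortionate Larry against an always-cooperating Wally:

```python
def average_payoffs(p, q, iters=2000):
    """Long-run payoffs (Larry, Wally) for memory-one strategies p and q,
    each listing the probability of cooperating after the previous round's
    outcome (cc, cd, dc, dd), seen from that player's own point of view."""
    T, R, P, S = 5, 3, 1, 0
    pay_larry = [R, S, T, P]  # Larry's payoff in states cc, cd, dc, dd
    pay_wally = [R, T, S, P]  # Wally's payoff in the same states
    swap = [0, 2, 1, 3]       # relabel a state from Wally's point of view
    # Transition matrix between outcome states.
    M = []
    for i in range(4):
        pl, pw = p[i], q[swap[i]]
        M.append([pl * pw, pl * (1 - pw), (1 - pl) * pw, (1 - pl) * (1 - pw)])
    # Power iteration towards the stationary distribution.
    v = [0.25] * 4
    for _ in range(iters):
        v = [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]
    return (sum(a * b for a, b in zip(v, pay_larry)),
            sum(a * b for a, b in zip(v, pay_wally)))

extortionate = (11 / 13, 1 / 2, 7 / 26, 0)  # Press&Dyson's example for Larry
always_cooperate = (1, 1, 1, 1)

sL, sW = average_payoffs(extortionate, always_cooperate)
# Larry averages 41/11 ≈ 3.73 to Wally's 21/11 ≈ 1.91, and the surpluses
# satisfy sL - P = 3 * (sW - P): Wally can only lift his own surplus by
# lifting Larry's three times as much.
```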

----------------------

Stewart&Plotkin were interested in ZD strategies in general but somewhat more intrigued by a different subset of them, not the “extortionate” strategies but rather what they called “generous” ZD strategies.

The generosity in question can be interpreted in two ways.  Firstly, and I think possibly most importantly, a generous ZD strategy is more forgiving, which can be brought about by having a non-zero value of p4 (this avoids being locked eternally into mutual defection).  Secondly, a generous ZD strategy tends to maximise average payoffs for both players, which can be done by using strategies with a low value of χ and a value of κ that approaches R, where R is the payoff for mutual cooperation; in combination, κ and χ indicate how “extortionate” the strategy is.  The term χ is referred to by Press&Dyson as an “extortion factor”, where χ=1 is “fairness” and higher values are increasingly extortionate.  (This term might otherwise be referred to as “leverage”.)  One can obtain an indication of how wide the gulf is between the average payoffs for each player (sLARRY and sWALLY) using an equation that applies for ZD strategies:

sLARRY - κ = χ·(sWALLY - κ)

If Larry’s strategy produces χ=1, then the payoffs are equal for both players, irrespective of the value of κ for that strategy.
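This relationship is simple enough to capture in a small helper (a sketch; the function name is my own) that reproduces entries in the score tables that follow:

```python
def zd_partner_payoff(s_larry, chi, kappa):
    """Wally's average payoff implied by the ZD relation
    sLARRY - kappa = chi * (sWALLY - kappa)."""
    return kappa + (s_larry - kappa) / chi

# Extortionate strategy (chi=4, kappa=P=1): Wally's gains trail Larry's.
print(zd_partner_payoff(2.0, 4, 1))  # → 1.25
# Generous strategy (chi=4, kappa=R=3): scores meet at mutual cooperation.
print(zd_partner_payoff(3.0, 4, 3))  # → 3.0
# Fairness (chi=1): the payoffs are equal, irrespective of kappa.
print(zd_partner_payoff(2.5, 1, 3))  # → 2.5
```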

--------------------------

Note that determining precisely what the parameters χ and κ relate to in reality is a little complex.  So far I’ve seen no simple method that one can use to select values of χ and κ and from them generate a strategy.  It is possible, however, to fiddle with the values of p1, p2, p3 and p4 in a coordinated way to raise or lower the values of the parameters.

Note also that with conventional values of T, R, P and S, there is a limitation on the values of sLARRY and sWALLY, such that 2P ≤ sLARRY + sWALLY ≤ 2R, due to the typical assumption that 2R > T + S.

------------------------------

Extortionate strategies are defined as those for which κ=P (the payoff for mutual defection) and χ>1.  In such strategies, the equation sLARRY - κ = χ·(sWALLY - κ) shows quantitatively that the other player can only increase their payoff by simultaneously increasing the extortionist’s payoff, for example for a strategy with χ=4 and κ=P=1:

sLARRY   sWALLY
1.000    1.00
1.250    1.06
1.500    1.13
1.750    1.19
2.000    1.25
2.250    1.31
2.500    1.38
2.750    1.44
3.000    1.50
3.250    1.56
3.500    1.63
3.750    1.69
4.000    1.75
4.200    1.80

where sLARRY + sWALLY = 6.0 is a limiting value.

Strategies that are more generous have a higher value of κ, for example for a strategy with χ=4 and κ=R=3:

sLARRY   sWALLY
0.000    2.25
0.250    2.31
0.500    2.38
0.750    2.44
1.000    2.50
1.250    2.56
1.500    2.63
1.750    2.69
2.000    2.75
2.250    2.81
2.500    2.88
2.750    2.94
3.000    3.00

This strategy is still manipulative in a way, since Larry is still encouraging Wally to increase Larry’s score while Wally tries to increase his own, but it’s more generous since, until parity is reached (where sLARRY = sWALLY), Wally’s score will be higher than Larry’s.

What the strategy does do, however, is open Larry up to sabotage by Wally.  Wally could, at the cost of a fraction of a point, drive Larry’s score down to 1.0.  Wally might not obtain any benefit from increasing his score above 2.5, so even without malice on his part, a lack of incentive to cooperate further might damage Larry catastrophically.

This vulnerability to sabotage can be mitigated if Larry chooses a strategy with a lower value of χ: by lowering the “extortion factor”, which, in a generous strategy, works against you (hence the suggestion that the term “leverage” be used).  For example, if Larry’s strategy produces χ=1.5 and κ=R=3, he will get the following results:

sLARRY   sWALLY
0.600    1.40
0.750    1.50
1.000    1.67
1.250    1.83
1.500    2.00
1.750    2.17
2.000    2.33
2.250    2.50
2.500    2.67
2.750    2.83
3.000    3.00

where sLARRY + sWALLY = 2.0 is a limiting value.

This strategy narrows the gap in scores between yourself and the other (thus limiting the effectiveness of any attempts to sabotage you), increases the self-harm done by a saboteur and makes it impossible for an opponent to drive your score to zero – while still being generous.

If χ is reduced even further, to a value below 1, a potential saboteur would do more damage to themselves than they would to their opponent, for example with χ=0.75 and κ=R=3:

sLARRY   sWALLY
~1.285   ~0.715
1.500    1.00
1.750    1.33
2.000    1.67
2.250    2.00
2.500    2.33
2.750    2.67
3.000    3.00

where sLARRY + sWALLY = 2.0 is a limiting value.

Another potential way to limit sabotage is to set κ to a lower value – within the range P < κ < R.  Doing so again increases the harm a saboteur does to themselves in their efforts to harm you.  For example for a strategy with χ=1.5 and κ=(P+R)/2=2:

sLARRY   sWALLY
0.800    1.20
1.000    1.33
1.250    1.50
1.500    1.67
1.750    1.83
2.000    2.00
2.250    2.17
2.500    2.33
2.750    2.50
3.000    2.67
3.200    2.80

where sLARRY + sWALLY = 6.0 and sLARRY + sWALLY = 2.0 are limiting values.

Note however that the payoffs sLARRY and sWALLY always reach parity at κ, no matter what value κ is set to, so reducing κ would be a self-defeating move on the part of a generous Larry: a malign Wally can still force him into an inferior position by reducing Larry’s payoff to below whatever level Larry sets for κ, so employing a strategy with a lower value of κ would just decrease Larry’s score.  On the other hand, against a more reasonable, evolutionary player who aims only to increase their own score, without worrying about Larry’s, this strategy is slightly superior, giving Larry a marginally higher payoff than a strategy with a higher value of κ.

A better result, however, can be obtained using a lower value of χ and a higher value of κ, for example with χ=0.11 and κ=2.875:

sLARRY   sWALLY
2.625    0.60
2.750    1.74
2.875    2.875
2.900    3.10

where sLARRY + sWALLY = 6.0 is a limiting value.

Here, Wally is highly encouraged to score well, even slightly higher than Larry, and is punished severely if he acts malignly, while doing Larry no significant damage, particularly since Larry has signalled that he is uninterested in a maximal score.  I’m not totally convinced that this strategy is better than one in which κ=R and χ is set low, but there may be situations in which it has its benefits (I’m specifically thinking of situations in which success brings with it some other risk, so that being a dog that’s doing nicely, without actually being top dog, is preferable).

As in Ethical Prisoners, the strategies that Larry and Wally select will depend at least partially on who they believe they are playing against – in terms of this scenario, on whether they want to maximise their own scores or to beat the other.

Perhaps the best strategy is somewhat more complex: one in which Larry identifies what sort of opponent he has and what the playing environment is like, and then chooses an appropriate level of generosity or extortion to match.

----------------

There’s something else to unpack from this.  If this mathematical modelling can be applied to real-life situations, and there are indications that it can, then one thing we can take away is that if the “players” consider either that times are poor or that the population is small, they will tend to act less generously, since generosity tends to prevail only in good times and in larger populations.

With creatures as intellectually complex as humans, we need to be careful about how we consider population size.  Some people might consider the population size to be effectively two – people like me (i.e. primarily me) and everyone else.  Very few will, in practical terms, consider the population to be a little over 7 billion.

These findings could be considered support for the notion that we should encourage people to regard themselves as part of a larger inclusive society because by doing so we would encourage more cooperative and generous behaviour.

Similarly, we could consider that a sense that times are poor can dissuade people from engaging in generous behaviour (examples are replete in dystopian-themed tales in which people turn on each other during tougher times) – so constant proclamations of doom from the media and politicians (especially those in opposition) can become self-fulfilling.  If we take the opposite approach, by highlighting the positive, we might be able to generate a virtuous circle in which a feeling that things are getting better motivates people to act more generously, resulting in evidence that things are in fact getting better, thus locking in further generosity.

----------------

In the next part, I’ll look at what evolution means in Press&Dyson and Stewart&Plotkin and share the results of my own very simple modelling, modelling which indicates that generosity may in fact be the best strategy for evolving populations.