## Monday, 31 July 2017

### Two Envelopes and the Hypergame

The two envelopes problem goes a little like this:

Say I offer you one of two identical envelopes.  I don't tell you anything other than that they both contain cash, that one contains twice as much cash as the other, and that you can keep the contents of whichever envelope you eventually choose.  Once you make a choice, I offer you the opportunity to switch.  Should you switch?

Note that there's no need for me to know or to not know what is in each envelope (so this is not a Monty Hall type situation in which my knowledge affects your decision).  Note further that each time you make a choice (including the choice to switch), I could conceivably offer you the opportunity to switch - so if you were to decide to switch, then the same logic that caused you to switch in the first instance still applies and therefore you should take me up on the offer and switch back to the original envelope, and switch again, and again, and again.

To me, this indicates that the only valid answers to the question "should you switch?" go along the lines of "it doesn't really matter if I switch or not because, statistically speaking, there is zero value in a switch, I don't gain or lose anything by switching".

---

So, why might you think that there is value in switching?  A presentation of the problem at Quora put it this way:

Well, think about the expected payoffs here. You had a 50% chance of choosing the better envelope from the start, right? So that means half the time, the second envelope will have twice as much money as the first one, and half the time it'll have half as much. So say the first envelope has \$10 in it; then if you switch, you have a 50% chance of losing \$5 and a 50% chance of gaining \$10. So it's objectively better to switch!

This is one of those rare situations in which using actual numbers can make understanding slightly more difficult, so let's think about this in terms of X.

You choose an envelope and assign that envelope a value of X dollars.  In the other envelope is either 2X dollars or X/2 dollars, each with an equal likelihood.  Therefore, a switch has the following value:

0.5 * (2X-X) + 0.5 * (X/2-X) = 0.5X - 0.25X = 0.25X

So, when thought about this way, there's apparently a positive value associated with the switch rather than a zero value.

This is clearly the wrong way to think about it.  First and foremost, it skews the total value of the envelopes.  On one hand, you are presuming that there is \$30 in total value and that you have the wrong envelope, while on the other, you are presuming that there is only \$15 but that you have the right envelope.  Naturally, it's going to look like swapping is better, since you currently have only \$10 if you are right and stand to gain \$10 if you swap, while only risking the loss of \$5 if you are wrong.

A better way is to think about this in terms of X and 2X only.  One envelope has X dollars and another has 2X dollars.  Once you have selected an envelope, there is a 50% chance that you have X dollars and a 50% chance that you have 2X dollars, therefore, the value of your envelope is:

0.5 * X + 0.5 * 2X = 1.5X

The value of the other envelope must, given that the total value of both envelopes is 3X, be 1.5X dollars as well and there is therefore zero value in switching.

The value of the switch can also be calculated this way - there is a 50% chance that you will give up X dollars in exchange for 2X dollars and a 50% chance that you will give up 2X dollars in exchange for X dollars:

0.5 * (2X-X) + 0.5 * (X-2X) = 0.5X - 0.5X = 0
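For the sceptical, the zero-value result can be checked with a quick Monte Carlo sketch (the dollar amount, seed and trial count here are arbitrary choices of mine):

```python
import random

# Monte Carlo check of the X / 2X framing: one envelope holds X, the other 2X.
# The switcher's average take should equal the keeper's, i.e. 1.5X.
X = 10.0
trials = 100_000
random.seed(1)

keep_total = 0.0
switch_total = 0.0
for _ in range(trials):
    envelopes = [X, 2 * X]
    random.shuffle(envelopes)
    first, other = envelopes
    keep_total += first      # never switch
    switch_total += other    # always switch

keep_avg = keep_total / trials
switch_avg = switch_total / trials
# Both averages sit near 1.5X = 15.0, so the switch is worth nothing.
```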

So the "paradox" resolves down to a simple misrepresentation of the problem (related to the old error of counting chickens before they hatch).

---

Naturally, there is a slight twist.  Say I give you an envelope (with X dollars in it), then I toss a coin and fill another envelope with an amount of money based on that result.  All you know is that there is a 50% chance that the second envelope has 2X dollars in it and a 50% chance that it has X/2 dollars.  On this basis you should in fact swap, because the second envelope has a value of 1.25X dollars (therefore the value of switching is 0.25X dollars, as calculated above).
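The coin-toss variant can be sketched the same way (again with arbitrary numbers of mine):

```python
import random

# The coin-toss variant: you hold X, and a fair coin decides whether the
# other envelope is filled with 2X or X/2.  Simulate the other envelope:
X = 10.0
trials = 100_000
random.seed(2)

other_total = 0.0
for _ in range(trials):
    other_total += 2 * X if random.random() < 0.5 else X / 2

other_avg = other_total / trials
# other_avg comes out near 1.25X = 12.5, so the first switch gains ~0.25X.
```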

In this instance, however, it initially seems as if, were I to ask you if you wanted to swap again, you should say no, because the first envelope only has a value of X dollars while the one you switched to has a value of 1.25X and therefore the switch back would have a value of -0.25X dollars.

However, the second switch actually has this value:

0.5 * (2X-1.25X) + 0.5 * (X/2-1.25X) = 0.375X - 0.375X = 0

In other words, there is no value or cost associated with a second swap, or a third swap and so on.  This is further indication that using the X/2 and 2X values is problematic.

---

While I am responding to something written by Leon Zhou, I want to take the opportunity to respond to his hypergame paradox.

It goes a bit like this:

You and I play a game.  The rules are that I choose a finite two-player, turn-based game.  We play that game.  You get the first move and whoever wins the game wins the hypergame.

A finite game is a game that ends after a finite number of moves (it doesn't matter how many though).

Can I choose, as my finite game, this very game, the hypergame?

It seems that I can since, under the rules, the game chosen must be finite, thus the hypergame takes the same number of moves as the chosen game, plus one, and is therefore finite as well.  But if I chose the hypergame as my game, then you can choose the hypergame too and we can go backwards and forwards forever, choosing to play the hypergame … in which case the hypergame is not finite after all.

So the hypergame is both finite and not finite and we have a paradox.

I agree that this is a paradox, but I disagree with Zhou's claim that this paradox is not an example of "self-referential trickery".  It's quite clearly an example of self-reference in which the game itself is called from within the game.  He also suggests that it's not related (although he qualifies it with the term "direct reference") to Russell's paradox, but it is.  Within the hypergame is a call to the set of all finite games, Y.  If you put the hypergame in Y, then a path to an infinite loop opens and – by virtue of being placed in Y – the hypergame becomes ineligible as a member of Y.  Take the hypergame out of the set of games that can be called by the hypergame and it becomes a finite game again, and thus qualifies for being a member of Y.

This is similar to (but not exactly the same as) Russell's set R which is the set of all sets that are not members of themselves.  As a set which is not a member of itself, R becomes a candidate for being a member of R, but is thus disqualified.  And by not being a member of R, R becomes a candidate for membership of R.
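For what it's worth, the paradox can be made vivid by naively modelling a set as a membership predicate.  This is a toy sketch of mine, not serious set theory - the predicate R below is purely illustrative:

```python
# Naive rendering of Russell's set R as a predicate on predicates:
# R(s) holds exactly when s is not a member of itself, i.e. not s(s).
R = lambda s: not s(s)

# Asking "is R a member of R?" forces the self-reference; the question
# never settles - in Python it simply recurses until the interpreter gives up.
try:
    R(R)
    answer = "settled"
except RecursionError:
    answer = "no stable answer"
```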

The hypergame is both a member of Y and not a member of Y in the same sort of way that R is a member of itself and also not a member of itself.

We can avoid the hypergame paradox, perhaps naïvely, with a minor clarification.  We simply clarify that the game chosen within the hypergame cannot be infinite.  Not "is not" but rather "cannot be".

This sort of clarification leaves Russell's paradox untouched.  Say we were to define R as the set of all sets that cannot be members of themselves - if R can be a member of itself, then it cannot be a member of itself, then it qualifies for being a member of itself, but it thus immediately disqualifies itself … and so on.

Somewhat unsurprisingly, Russell's paradox seems to be more fundamental than the hypergame paradox.

## Thursday, 20 July 2017

### Turning Fine-Tuning on its Head

The existence of you and me is, in a sense, fine-tuned.  This statement might come as a surprise to anyone who has noticed that I am vehemently against the fine-tuning argument, but I can explain.  The fine-tuning argument goes a little like this:

Fine-tuning,
Therefore, god.

I've stripped it down a bit, but those are the basics of the argument.  I don't have much against the first line, but I do have problems with the second as I would have problems with the first premise of an expanded fine-tuning argument, namely the assertion "If fine-tuning then god".  The stripped-down deluxe version of the argument, which I also have problems with, is:

If there is fine-tuning, then either god or something else,
Fine-tuning,
Not something else,
Therefore, god.

My problem is with the third line of this version.  When argued by such luminaries as WLC the main thrust is that the idea that we should be here purely by chance, when the odds against are so staggeringly high, beggars belief.  The problem is that there remains a gap between unlikely and impossible.  The answer "something else" remains possible, and to some it is believable despite being unlikely, while the god solution is simply not believable.

What I don't argue against is the line "Fine-tuning", so long as that line is short-hand for "if things were slightly different in this universe, then intelligent life would almost certainly not exist".  If "fine-tuning" is short-hand for "this universe was designed by a divine being of some sort", then of course I have a problem but then the whole fine-tuning argument resolves down to begging the question, "god, therefore god".

Now, about you and me being fine-tuned - let's get all excited about it!  Think about how amazing it is that we exist on one little planet in such a massive universe.  What are the chances of that?  If we go by volume, we can see that the Earth is only about 10^-56 of the observable universe's volume.  If we were just about anywhere else, we'd be dead.

But of course it'd be silly to compare our location with a random spot in intergalactic or even interstellar space.  It's pretty cold out there, and we need to be a bit warmer than that, without being too warm.  So we could consider how remarkable it is that we are in orbit around just this star, one that is so suitable for intelligent life to develop.  While our sun is one of about 10^21 stars, it's a pretty common type of star - roughly one in five is a G-class star.  And about one in five of those is estimated to have an "Earth-like" planet in orbit around it - in the habitable zone.  So about one in twenty-five stars is of the right type with the right type of planet in orbit around it.

Not that amazing after all.  Although, to get the one in twenty-five figure, we are assuming that once we've got the right type of star, and it's got the right sort of planet in the right sort of zone, then we get that planet automatically.  That's a little unreasonable.  Our solar system is jam-packed with planets, moons, asteroids and comets (by which I mean generally "very sparsely packed, but with more than half a million objects").  The chances of ending up on the one habitable object out of all of those are about, well, one in half a million (after we've selected the right star, with the right planet in orbit around it).
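Multiplying out the rough figures above (the combined odds are my arithmetic, using only the numbers already quoted):

```python
# Rough figures quoted in the text above:
p_g_class = 1 / 5             # a star of our sun's type
p_earthlike = 1 / 5           # such a star with an "Earth-like" planet
p_right_object = 1 / 500_000  # the one habitable object in such a system

p_total = p_g_class * p_earthlike * p_right_object
# p_total is 8e-8, i.e. about one chance in 12.5 million.
odds_against = 1 / p_total
```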

If we did limit ourselves to a suitable planet in orbit around a suitable star though, the fact is that we are on a very privileged part of that planet - standing somewhere close to the surface (ISS residents aside, plus anyone currently on a high-altitude international flight).  We're sitting or standing rather comfortably on the surface instead of floating high in the atmosphere, floundering at the bottom of an ocean or being crushed and then burnt to a cinder in the core.  And we live in a community that is relatively well placed on that surface: not in a volcano crater, not in Death Valley, not in Antarctica, not just below the summit of Everest.  What great fortune!  However, only about one sixth of the Earth's surface is habitable (half of the land mass).

Then there is timing.  This is a rather good time for intelligent life.  The planet that we are on has sufficient oxygen, not too much carbon dioxide, it still retains enough of the ozone layer that we don't die too quickly of cancer, and the temperature is about right - not so cold that we freeze to death and not so warm that we are constantly flayed by killer storms or reduced to desiccated husks.  We have flowering plants, most importantly variations of grass that permit large populations to feed themselves, and these have only been around for about 55 million years, so 1.4% of the time that the Earth has been a planet.  Humans (as Homo sapiens) have only been around for about 200,000 years, or one thousandth of one per cent of the age of the Earth.  Genetic research indicates that humanity has gone through a few bottlenecks, the most recent of which may have been about 70,000 years ago - meaning that we were lucky, as a species, to not go extinct at that time.

It's remarkable that we're alive at all.  It's in the order of one in a billion, even aside from all the unlikeliness of one particular human meeting another particular human and producing a specific human at a specific time (with none of them being killed by any of the lethal flora and fauna that is abundant on this planet).  So … sure, we're fine-tuned, in a sense.

---

The thing is that what I have just provided is a continuous stream of indicators that our universe is not finely tuned for human life.  Our planet is not finely tuned for human life – there’s a very narrow band geologically and temporally.  We are finely tuned to exist in that very narrow band and the fine-tuned-ness of a biological organism to its environment is far more elegantly explained by evolution than it is by the introduction of a creator being.

The bottom line is that fine-tuning is not a challenge to naturalism or evolution or atheism; the real challenge is to the creationist or intelligent design theorist to explain why the universe is so unrelentingly hostile to humanity.

## Tuesday, 18 July 2017

### William MacAskill is a Nice Guy, but Morally Confused

I wrote recently about Sam Harris and William MacAskill's discussion in Harris' podcast Being Good and Doing Good.  During that discussion, MacAskill revealed himself to be an ostentatious altruist, which is little surprise since he is the author of Doing Good Better - Effective Altruism And a Radical Way to Make a Difference.  MacAskill gives away a large proportion of his earnings and he actively encourages others to do so (also here).  He reports feeling rewarded by his carefully targeted altruism and there's nothing particularly wrong with his largesse, in itself.  I don't however see him as "morally superior".  In fact, I see him as morally confused.

One thing that I aimed to explain with the notion of an ethical structure was the broad range of morality, from people who are obsessed with doing the right thing at all times to those who simply don't give a toss about the rules, but more importantly those of us somewhere in the middle who are generally good but, given the right circumstances, can be tempted to do wrong (i.e. when we think that no-one is watching or that we might just get away with it).  In the Morality as Playing Games series, I concluded that every one of us is the descendant of a person who, when it became necessary, abandoned their morality and betrayed others in order to survive - but, to have been successful, this ancestor of ours didn't abandon their morality before it was necessary, at least not to the extent that they were deemed dangerous and unsuitable members of society before they could produce at least one child.

In order to be able to survive tough times, each of us must strike a delicate balance between the imperative to fit into our society (and be moral) and the capacity to betray our fellows at precisely the right time - not too early and not too late.  We seek a balance between the impulse to cooperate in a group and the need to react appropriately when it is time to defect from that group.  In the modern western world, the times we live in are not particularly tough, even among the more disadvantaged (gone are the days when we might break a window to steal a loaf of bread in order to heroically save our starving nieces and nephews).

In good times, we are biased towards morality and we now have huge industries built around punishing defectors.  Given our increased wealth, it could be argued that the modern western person is less likely to defect than someone from, say, 200 years ago (which would explain the decline of violence).  This certainly has positive sides, but it can go too far to the extent to which our "morality" leads to self-harm, especially when we factor lost opportunity into the harm calculus.

MacAskill, I suggest, verges on self-harm when he carves away at the economic margin between modern comfort and survival.  By doing so, he pushes himself towards a situation in which he would not survive should his conditions deteriorate.  The rational approach is to look towards increasing your economic margin, while taking into account other relevant factors - for example, for the purposes of survival there comes a point at which being richer just increases the risk that you will be killed for your wealth.  As an academic, unless his books are spectacularly successful, MacAskill will not be in a situation in which his wealth is so huge as to be unconducive to his on-going survival.

So, on the surface, what I read into his unbounded willingness to aid others is an indication that his morality is poorly calibrated for survival in extremis.  He does talk about preserving yourself so as to not burn out too quickly (as an altruist) and to maximise your longer-term altruism; for example, an expensive suit might prevent money being donated to worthy causes if you bought it today, but your ownership of it may permit you to secure a well-paying job, thus allowing you to donate much more in the future.

MacAskill is saved from actual self-harm in a couple of ways.  Firstly, there is the framing of his legacy survival.  Who MacAskill is and how he behaves is an essential part of his legacy and therefore children (should he have any) would not necessarily be his sole method of ensuring legacy survival.  Others who follow his example, however, and who won't have their names and legacy attached to the organisations that MacAskill has created would likely be self-harming should they donate as much of their earnings as he seeks to - if that level of donation puts their other legacy survival efforts at risk.

The other way he is protected from self-harm is that he would be building up a certain amount of good will; should times turn tough and he find himself in financial straits, he could likely cash in that good will and make it through - but only so long as he hasn't convinced all his friends and colleagues to ruin themselves financially.

Aside from self-harm, there is another concern that I have with his efforts, involving the concept of moral self-licensing.  It is a known feature (some might say a bug) of psychology that when we have done something good, particularly something very good, we may feel entitled to either do something bad or forego doing something else that is good.  Alcohol advertising, for example, highlights this idea when it is suggested that by having put in a full day's work or prevailed in some sporting event, you have somehow earned the right to get drunk (in moderation).

My concern is two-fold.  Firstly, this altruism movement may trigger bad behaviour and act as an enabler for ongoing bad behaviour.  The idea that large numbers of people might feel entitled to behave poorly due to their altruism is worrisome.

Secondly, there is the issue of perception.  We are naturally inclined to see others as largely neutral (mythical saints aside) and so, if people were as ostentatiously "good" as MacAskill strives to be, the more cynical among us would wonder what they were compensating for.  Therefore, if altruism were to go too far, it could paradoxically lead to reduced trust within our societies.

I don't think that MacAskill is covering up some moral culpability, merely that he is morally confused.  His approach is a very nice idea, in the hypothetical, and one that, in the hypothetical, we should all strive for (meaning that it is beneficial to persuade others that, in the hypothetical, such ostentatiously good people are what we would want to be).  In practice, however, such ascetic extremes of goodness are a bit weird, relatively few of us could comfortably approach emulating them and we are left with an impression that the person involved is, at best, somewhat naïve.

It is this naivety that is possibly most problematic, when we consider MacAskill's zeal in spreading the word.  All traits which have survival implications have a range of expression in the relevant population.  There are benefits in being big and strong and there are benefits in being small and flexible, depending on the conditions.  If a population became entirely big and strong due to the prevailing conditions and those conditions changed, then the entire population could fail.  We are protected from this by a sort of regression to and variance around a mean, so that the small flexible still exist in times that are best for the big and strong and when being big and strong is no longer optimal, the small and flexible take over (while not dominating entirely so that a swing back doesn't wipe out the population).

The same applies to variations in morality.  Today, in relatively good times, we have people who are more inclined to steal and kill than we are willing to accept and we punish them.  But under extreme conditions, these are precisely the sort of people who would ensure the continuation of our clans while people who are too touchy-feely starve to death or are slaughtered in their beds.

To be able to survive, which I argue is what morality is really all about, we have to be able to be bad when conditions call for it.  We need to be able to look out for ourselves, and while doing maximum good in the world, simply for its own sake, sounds like a brilliant idea in principle, turning our minds to this sort of thing in practice risks disarming us at the very moment when we are most vulnerable, making us miss the signs that conditions have changed for the worse and that we may soon be required to reap the benefit of our morality – or be erased from the Earth.

## Sunday, 16 July 2017

### Inflation of the Hubble Constant

I've been chewing on an old bone recently, metaphorically that is.

In A Little Expansion on the Lightness of Fine-Tuning, I wrote about how I visualised the way two spaceships might approach each other, with both of them travelling at half the speed of light (relative to an implied third observer) and yet have a closing velocity of less than the speed of light.  The resultant model, for me at least, also managed to explain the spatial and temporal effects of special relativity.

A consequence of this model is that the universe is expanding at the speed of light and this expansion is time (see also On Time) - so that were you to be at rest in spatial terms, you would not be at rest in temporal terms, you would still be travelling "through" time by virtue of the universal expansion (at a rate equivalent to the speed of light).

The problem is that if the universe is expanding at the speed of light, per my model, then what about reports that the rate of expansion of the universe is increasing?  The speed of light is invariant, so the rate of expansion of the universe should also be invariant - if my model is valid.  What about inflation?  Well, I'll get to inflation in a moment.

I have previously (and rhetorically) asked the question Is the Universe Expanding at the Speed of Light?  My conclusion was that, if a single spatial Planck unit were added to the universe (in the direction we are looking) for every temporal Planck unit, then we would observe an expansion of the universe (today) at pretty much the rate that we observe the universe expanding at (today) - about 70 kilometres per second per megaparsec.

This was calculated, however, using a different model to that presented in A Little Expansion on the Lightness of Fine-Tuning.  It was as if I were looking at a segmented ruler, with each segment being a Planck length long, and I was adding a unit of Planck length somewhere in the middle for every unit of Planck time.  I then calculated the rate of the expansion of the ruler after 8.08x10^60 units of Planck time (which is the age of the universe) and found that this matched the Hubble Constant.

So the question I have is: what happens if I use the onion-like model to calculate the effect of the universe expanding at the speed of light?  This is taken from A Little Expansion on the Lightness of Fine-Tuning:

I was trying to explain something a little different there, so it's a bit more cluttered than it might need to be.  For our purposes at the moment, all we really need to consider is the difference between the value of the arc defined by x_G at t_E and its value at t_G, noting the relevant angle θ.  My contention is that the universe expands at c, so the radius grows as Δr = c.Δt (working in units where c = 1, so that Δr = Δt).  Let us call the arc length x and refer to any change as Δx.

An arc length is calculated reasonably simply, x = θ.r (where θ is expressed in radians, and a full circle circumference is therefore given by x = 2π.r – see Hugging the World).  We therefore know that the difference in x would be given by Δx = θ.Δr (and in this model Δx = θ.Δt).  This gives us enough to work with.

Consider two moments in time:

x = θ.t_now

and

x + Δx = θ.(t_now + Δt)

And eliminate θ:

(x + Δx)/x = (t_now + Δt)/t_now

1 + Δx/x = 1 + Δt/t_now

Δx/x = Δt/t_now

Δx/Δt = x/t_now

So the rate of expansion over a stretch of space x is proportional to x and inversely proportional to the age of the universe (t_now).  The Hubble Constant is presented as expansion over a given distance, so:

H_0 = (Δx/Δt)/x = 1/t_now
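Plugging in numbers (assuming an age of about 13.8 billion years, and the usual conversion factors) shows that the reciprocal of the age of the universe does indeed land near the ~70 km/s/Mpc quoted earlier:

```python
# Checking that 1/t_now lands near the observed Hubble Constant.
# Assumed figures: age of universe ~13.8 billion years, plus the
# standard conversion factors below.
SECONDS_PER_YEAR = 3.156e7          # seconds in roughly one year
KM_PER_MPC = 3.086e19               # kilometres in a megaparsec

t_now = 13.8e9 * SECONDS_PER_YEAR   # age of the universe in seconds
H0 = (1 / t_now) * KM_PER_MPC       # reciprocal age, in km/s per Mpc

# H0 comes out at roughly 71 km/s/Mpc, close to the ~70 quoted above.
```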

Phew, my model stands up - because the value of the Hubble Constant is actually the reciprocal of the age of the universe (noting that the age of the universe is not uniquely calculated from the reciprocal of the Hubble Constant; there are other methods - see strong priors a little further down the page).  However, my model suggests that the universe is, in a sense, expanding at the speed of light - always has done so and always will do so - and the Hubble Constant should be decreasing with the age of the universe.  Nevertheless, we have people telling us that the rate of expansion of the universe is increasing.

This sounds like a potential worry because if the Hubble Constant were increasing rather than decreasing, then its current value at the reciprocal of the age of the universe would be coincidental.  This would be another, worrying example of fine-tuning that would have to be explained.

Fortunately, while there are observations that indicate that the expansion of the universe might be accelerating, the Hubble Constant is not increasing.  This might seem counter-intuitive.  As Sean Carroll explains, the Hubble Constant gives us a scale by which to measure the velocity at which distant objects recede from us due to universal expansion, v = H_0.d, where d is the distance to the object receding away from us.  But note that this is caveated with "due to universal expansion".  Carroll is considering dark energy here.  If, in addition to universal expansion, things are being pushed apart even by only a smidgen, then d will be increasing at a rate greater than v (where v is "due to universal expansion").

Note that there is some room to doubt whether there actually is this acceleration in the rate of universal expansion.  I myself remain a bit dubious, maybe in part because, if dark energy is real, then my model may be fatally flawed, despite explaining so many things so well.

---

But what about inflation, I hear you yell excitedly.

Well, there are two things.  Firstly, we need to remember Hugging the World.  The term t_now doesn't necessarily mean the time since any absolute beginning to the universe; it means time since some key event - and that event may have been no more than a phase change from inflation to the current state of affairs.  (Or from an earlier aeon to this aeon.  This makes my model consistent with conformal cyclic cosmology, which can bypass inflation.)

Secondly, in my model, we currently have a nice, orderly, temporal expansion of the universe, with one layer (one moment) added "at a time".  This isn't necessarily the way things have to be.  Instead, there could have been a situation in which each Planck volume spawned a new Planck volume each unit of Planck time.  This would lead to an exponential cascade of expansion - and to get to the lower limit of inflation, an increase in size by a factor of 10^26, it is only necessary to have about 86 doublings … if it is assumed that every Planck volume splits during each doubling.  However, there are about 2x10^11 units of Planck time in the period during which inflation is thought to have occurred, meaning that all that is required to achieve the minimum for inflation is that, for each unit of Planck time, there would be on average one additional Planck volume for every existent 3x10^9 Planck volumes.  This is the rate at which my model's orderly expansion would be proceeding about 3x10^9 units of Planck time in, or alternatively, at 1.62x10^-34s – noting that inflation is believed to have occurred sometime about 10^-33s in.
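The inflation arithmetic above can be checked directly (the figures are the ones quoted in the text; the compounding is my arithmetic):

```python
import math

# 86 doublings reaches the ~1e26 lower-limit expansion factor for inflation.
factor_from_doublings = 2 ** 86     # about 7.7e25, i.e. roughly 1e26

# Spread instead over ~2e11 units of Planck time, the required growth is
# one new Planck volume per N existing volumes each step, where:
planck_steps = 2e11
N = planck_steps / math.log(1e26)   # about 3.3e9 existing volumes per new one

# Compounding that modest rate over all the steps recovers the full factor.
growth = math.exp(planck_steps / N)  # back to ~1e26, up to rounding
```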

In other words, something very like inflation happens anyway with my model.

## Tuesday, 11 July 2017

### Ethical Structures, Non-Identity and the Repugnant Conclusion

Derek Parfit's Repugnant Conclusion results from a consideration of welfare and population.  In essence, the conclusion is that a world in which a very large number of people have lives that are barely worth living is preferable to a world in which a much smaller number of people have significantly better lives.

My attention was brought to this ethical conundrum by Sam Harris and William MacAskill in one of Harris' podcasts - Being Good and Doing Good.

The Repugnant Conclusion is reached in a step-wise fashion with certain assumptions, primarily the idea that we should maximise "quality of life" (with a further assumption that we can quantise "quality of life", or at least think about units of "quality of life", which are even further assumed to be positive).  Also tied into this is the idea of future persons, who don't currently exist (hence their non-identity, which leads to other considerations which I'll get to shortly).

Imagine one person, A, existing with a total of 100 units of "quality of life" (let us call these UQL) and say that 100 units of UQL is pretty damn good.  If we are maximising UQL, then it follows that 101 people existing with 1 UQL each is better than this one person existing with 100 UQL, where having 1 UQL is equivalent to having a life that is barely worth living.  With the leap to the conclusion, this seems bizarre - many people living what is only slightly better than a life that is not worth living doesn't intuitively feel better than A living the bliss of a 100 UQL "pretty damn good" life.

(Note that our intuitions associated with this are likely to be faulty.  As I write, the 2016 Paralympics have opened and the whole and healthy among us would probably find it difficult to think that someone who is eligible to enter these Games would not want to swap their stumps for real legs, or to be able to see, or to not be confined to a wheelchair.  However, these people would not see their lives as barely worth living - and this applies to the more mundanely handicapped, not just Paralympians.  Some even claim that they would turn down the hypothetical option to turn back the clock and not be handicapped.  This notion has important links to the "non-identity problem".)

The Repugnant Conclusion, however, is not reached by a leap; it is reached step-wise.  Consider, rather than 101 people living a 1 UQL life, two people living lives that are only slightly worse than A's life - say 99 UQL each.  This is clearly better than just one person living a life with a total of 100 UQL; it's very close to double the total UQL.  Keep doing this over and over again and you eventually reach the conclusion that a very large number of people living lives that are barely worth living can be better than any significantly smaller number of people with significantly better lives.
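The step-wise march can be sketched with a toy model.  The doubling and the 60% factor below are arbitrary choices of mine; the only requirement is that each step raises the total UQL while lowering the average:

```python
# A toy version of the step-wise march: at each step the population
# doubles while per-person quality drops to 60% of what it was, so each
# step still raises the total UQL by 20%.
population = 1
per_person = 100.0   # person A's "pretty damn good" life

steps = 0
while per_person > 2:        # stop when lives are "barely worth living"
    population *= 2
    per_person *= 0.6
    steps += 1

total = population * per_person
# After 8 steps: 256 people at ~1.7 UQL each, for a total of ~430 UQL -
# far more than A's 100, yet every life is now barely worth living.
```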

Converting this into a real world situation, it would be better to keep producing more and more humans until all of us are living lives that are barely worth living - but we'd do it one baby at a time, with each new baby only marginally degrading the average UQL while slightly raising the total UQL.  Few among us fail to realise that if we were all to continue producing large numbers of children, as we did in the past, then the future would look bleak indeed.  Yet few of us choose to produce no children at all in order to avoid contributing to great suffering on the part of our potential descendants - noting that our population increase shows little sign of slowing down (at least globally) and may in fact accelerate if we are so foolish as to eradicate malaria, cure cancer, prevent heart disease, stroke and diabetes, and provide universal first world medical care and clean water to the third world without modifying family structures.  So even if we restrict ourselves to one child, we may still produce descendants who will be born into a world with a vastly increased population.

Now we turn to the "non-identity problem".  This problem revolves around hypothetical people, who might exist (or not exist) based on decisions we make.  In essence, the question is whether it is better for a person to exist than not exist, given that their existence might not be perfect.  Such moral dilemmas are faced (or ignored) by people who are aware of the risk of defects in their unborn child and choose to have the pregnancy screened.  Say an unborn child is found to have Down Syndrome and the parents decide to abort the pregnancy.  In this situation (at least in a sense), an existent person stops existing and a life that was probably worth living is not lived.  (There's a complex calculus that could conceivably be conducted to determine what the overall UQL would be with the introduction of this Down Syndrome child and it is arguable that the total UQL would go down, but this would possibly be the case anyway with the decision to go with an abortion since such a decision is rarely an easy one.  Other arguments could be made that humanity is held back by its weakest links, but such arguments would veer close to eugenics and are probably repugnant in themselves.)

We could step back a little and think not of an abortion, but of a decision to not conceive.  We make these decisions all the time and have few qualms about them, because they involve acts of omission rather than commission - the omission of an emission in the case of the withdrawal method, and the occlusion of an emission in the case of condoms and similar devices, such that both methods result in sperm and ovum not meeting.  However, in each decision to avoid conception, we are effectively denying the existence of a person who could have existed and who could have lived a life worth living.  If we have the potential to bring into being a person who could live a life worth living, how do we justify not doing so (assuming that any reduction in our quality of life is marginal and we can continue to live a life worth living)?

If we cannot justify choosing not to bring into being a child with a UQL of greater than 1, then the conclusion is that we should just keep producing children to the greatest extent possible, stopping only when we would otherwise start producing children with a UQL of less than or equal to zero.  And this does not seem right.

So how could we justify choosing not to create such a child?

Selfishness seems a common (albeit post hoc) justification.  I like my life as it is, I don't want to compromise it by introducing children into it.  However, this is making the assumption that my marginal quality of life (the slight difference between the quality of life that I have as a single, unencumbered person and the quality of life I would have as parent) is more important than the hypothetical quality of life of this potential child (which would not exist in the world if the child did not exist).

Another justification is that any child is a link towards a future descendant living in a bleak, overpopulated world.   But this argument only works if you have decided to never reproduce.  It disappears as soon as you produce your first child and then you seem committed to produce as many children as you can, paradoxically rushing towards a situation in which your descendants are consigned to living in a world which is barely worth living in (and who may in turn have descendants living in a world that is not worth living in).

These linked problems, the Repugnant Conclusion and the Non-Identity Problem, have not been conclusively solved by ethicists.  Attempts to avoid the conclusions tend to result in other paradoxical outcomes, or other ethical problems (for example, if we think about UQL in terms of an average rather than a sum, we can arrive at a conclusion that it is better to eradicate those with low UQL and keep doing so until there is one single blissfully happy person with very high UQL - presuming that this person doesn't mind the genocide going on around her).
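The "averagist" escape mentioned above, and its own repugnant endpoint, can be sketched just as briefly (a minimal illustration with hypothetical UQL values, not from the original text):

```python
# Under the average view, removing the worst-off person always raises
# the average (whenever they are below it), terminating with a single
# blissfully happy survivor.
population = [100, 90, 50, 20, 5, 1]     # hypothetical UQL values

def average(pop):
    return sum(pop) / len(pop)

while len(population) > 1:
    before = average(population)
    population.remove(min(population))   # "eradicate" the lowest-UQL life
    assert average(population) > before  # each elimination raises the average

print(population)  # one person left, with the maximal average UQL of 100
```

Swapping the sum for an average thus trades one repugnant conclusion for another.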

What does the Ethical Structure approach lead us to think about these problems?

Careful readers will probably have picked up on stilted wording above.  It's not easy (for me) to breeze past a statement that "X is better than Y" without commenting on the fact that the term "better" isn't securely grounded.  What exactly is a "better life"?  In what way is 100 UQL vested in one person "better" than 101 UQL spread across 101 people?  What does it mean to say it is "better" (or even simply "good") to keep producing children?

A "better" life is merely a life that is more good, but good for what?  And having quality of life is again ungrounded.  Does this mean happier, more productive, containing more puppies, what?  Even if we allow that quality of life need not be defined precisely, we are left with a begged question as to whether a life with high quality is a good thing.  Good for what?  The unabated production of children is good for what?

The answer, if the logic behind the Ethical Structure idea is correct, is survival - but survival on a few levels.

Firstly, to survive we each need to have lives that are worth living, in order to prevent ourselves from self-terminating (or at least so that we put serious effort into not dying).  We need to have both lives that are worth living and the expectation that our children's lives will be worth living in order to survive through them (so legacy survival as opposed to physical survival).  We do not, however, necessarily need to have a quality of life that is very high - merely sufficiently high.  Similarly, we don't need to expect more than that our children should have a sufficiently high quality of life - where "sufficiently high" is a value judgement that will vary from person to person.

Secondly, there is the need to continually signal to other members of our communities that we are obeying the rules and not threatening the welfare of others.  In this context, the unabated production of children is not good, because flooding the population with my children (my legacy) is potentially deleterious to your legacy and even to you personally, if my children were to displace you.  The rules, in the form of social norms, tell us how many children are appropriate, and any significant divergence from this number (positive or negative) is generally frowned upon - a rule-breaker who has too many children becomes a suspicious character.

These are, however, practical considerations and they don't address the more abstract notion that we should want to increase the total quality of life in the world (at least among humans), nor do they explain why we are appalled by the idea of large numbers of people living lives that are barely worth living.  These, I suggest, derive from our consideration of the people that we want to be or, more accurately, the people that we want other people to think that we are, the nature of which follows from an internalisation of the ethical structure.

The second tier of the structure is the injunction against harm.  This is generally applied to existent people (most importantly ourselves), but in order to be consistent and to convey our reliability to fellow community members in quite a cost-effective way we may also apply it to potential or hypothetical people.

Consider person A, the one with 100 UQL, whom we compared with 101 people with 1 UQL each.  Note that the injunction is against doing harm, rather than being a commandment to do good (since we can more reasonably demand that people not harm us than demand that others do good for us).  With the 101 people with 1 UQL each, we have one person who has either been harmed by losing 99 UQL or been eliminated entirely (thus losing 100 UQL and her existence!)  This is a level of harm that we cannot countenance, especially if A is supposed to represent "me".  (This is a difficult conclusion to avoid, since if there were one single person in the universe, hogging all the UQL, then that person would obviously be "me" from their own perspective and there would be no other perspective.)

We can accept an incremental decrease in our quality of life, especially if it is hypothetical, because by doing so we are affirming to ourselves and advertising to others that we hold the survival (and thus existence) of other people to be of more importance than maintaining a high quality of life for ourselves.  Remember though that when it gets down to brass tacks, the purpose of my ethical structure is to aid my survival, not yours, and that means I have a point at which I will abandon the "charade" of morality in order to survive (as do you).  This means that while I might accept a gradual decline in my quality of life, there is still a point at which I will no longer accept it.  I don't necessarily know precisely where that point lies, but where it is set will probably be related to how I view my legacy in relation to my physical survival: I am likely to call a stop to the diminishment of my quality of life when it threatens my legacy.

When thinking about population increase, this manifests as a hypothetical willingness to sacrifice a little quality of life in order to allow another person to come into existence.  As we don't want people to suffer (or rather we don't want anyone to think that we would not care about people suffering), our preference is that hypothetical and newly existent people should have a similar level of quality of life to ours (or slightly better, or slightly worse - it doesn't really matter, since it's hypothetical).  But we are not willing (even hypothetically) to significantly decrease our quality of life, nor to posit into existence people with a quality of life that is substantially below ours.  This might in part be because we don't really deal in absolutes, we deal in comparisons, so a single "positive" UQL would be counted as negative when compared to our notional 100 UQL.

An example which came up frequently in the Harris-MacAskill discussion was Singer's drowning child in a lake.  In this thought experiment, you are walking past a shallow lake, one that you can safely enter to extract a child that is drowning.  However, you're wearing a nice set of clothes and the lake, while safe, is dirty or muddy, so while saving the child, you would be ruining your outfit.  Do you risk the expense of getting a replacement outfit and save the child?

The argument goes that you would be a moral monster if you were to fail to save the child simply because you wanted to save your clothes.  Taking your clothes off and rescuing the child naked is apparently not an option, nor is laundering them afterwards, although interestingly enough, it seems common that people both want to save the child and consider how they might avoid damaging their clothes in the process.  We could instead imagine a hypothetical imperative to drive off the road, destroying your car, in order to not kill a small child that had strayed onto the road - a dispassionate choice to prioritise your car over the child would make you a moral monster.  This scenario makes killing the child somewhat more active, rather than passively letting it drown.  In a sense, the drowning child dilemma is a variation of the trolley problem, in which you can choose to inconvenience yourself slightly in order to redirect the trolley onto a track that will destroy some of your stuff and thus save the life of a child.

Reframing the dilemma slightly, we could say that by not saving the life of a child on the grounds of a relatively small cost and minor inconvenience we become moral monsters.

Therefore, using Singer's extrapolation, we become moral monsters when we fail to send money to sub-Saharan Africa to help buy mosquito nets, at least those of us who have discretionary funds that we waste on such things as iPhones, summer holidays or food that we don't need (given that if we are overweight it's almost certainly because we eat more than we need).  Harris and MacAskill used the phrase "we're back standing by the lake" or the like to suggest that by failing to help distant people, or by reaching a conclusion that is repugnant, we would be doing the equivalent of letting the child drown (while noting that salience, or immediacy, is absent by virtue of the distance involved).

I agree that if we let the child drown while we stand passively at the lake's edge or as we walk away, we could be considered to be moral monsters.  I'd find it difficult to forgive myself if I failed to act in such a scenario, because I would not be the person I would want to be, and I'd be deeply suspicious of anyone else that I knew to be willing to let the child drown.  However, I don't agree that we run the same risk of being considered moral monsters due to any responsibility to save distant people.  I do, on the other hand, agree that we have precisely the same responsibility to save distant people as we have to save the child - but by that I mean that we have no responsibility to save any of them, none at all.  Yes, I meant to write that.

As an ordinary person walking past the lake, we have no inherent obligation to save the drowning child.

Before I get accused of being a moral monster, let me try to explain the thinking behind this.  If I didn't care what sort of person I am, and didn't care what you might think of me or do to me if I were to fail to react the way you think I should, then I would have no motivation or obligation to save the child (for the sake of saving the child) even if I could do so at no cost or inconvenience to myself.  However, none of us, perhaps not even psychopaths, live in that sort of world.  We live in the sort of world in which a real, if not inherent, obligation emerges when there is a pressing, obvious and reasonable demand for our assistance - because we do care about what sorts of people we are and we do care about what others think of us and what they might do to us.  So we take on board an expectation that we should attempt to save a child drowning in a lake if we can do so safely.

What is not expected is that we should go out of our way to address a vaguely defined need for assistance that is largely invisible.  This is partly why the crystallisation of that need, via advertising or someone knocking on your door, tends to work.  A level of expectation is only established once we are aware of the need and, to be really effective, there should be some threat of shaming involved, meaning that someone else must know that I am aware of the need.  If I can avoid donating time or money without feeling bad about myself or being shamed for my selfishness, why shouldn't I?

(As an aside, the donation of time and money is more obvious in the US than it tends to be in other countries - acts of charity are certainly more ostentatious.  Charity is simply not expected to the same extent in most other countries, and people will rarely ask as to whether you volunteer time and/or money to any worthy causes.  Spending quality time in your garden may well be more highly regarded than being a "busybody" or "do-gooder".)

So, getting back to Harris and MacAskill's discussion of Singer's drowning child argument, I agree wholeheartedly with the idea that the reluctance to donate to worthy-but-distant causes is related to a lack of immediacy or salience.  But I disagree that this is a truly moral issue, since we have no more objective obligation to save the child drowning in the lake than we have to save the child being bitten by mosquitoes in Africa; any obligation we do have is subjective and dependent only on how much we care about our place in the world.

As a society, we might choose to build into our ethical ruleset the notion that ostentatiously donating to worthy causes irrespective of distance is a requirement - and there may well be survival and quality of life benefits in doing so - but until then, our reluctance to donate time and effort to distant causes is no more than a psychological issue that charities need to deal with.

## Monday, 10 July 2017

### The Logic of Theological Zombies

Sometimes, rather than accepting an argument in words, an apologist will demand that an argument be presented as a series of premises and conclusions.  So here is the formal argument for theological zombies (and a little beyond):

P1 – Everything a maximally excellent being (MEB) wants is a thoroughly good thing (TGT)

P2 – The saving of a soul is a TGT

P3 – The MEB wants to save more than a single soul

P4 – That which is thoroughly good cannot saturate

C1 – Therefore, the MEB must want an infinite (or maximal) number of saved souls (from P1, P2, P3 and P4)

P5 – If the MEB wants an infinite (or maximal) number of saved souls it can achieve that by means of theological zombies

P6 – With the option of theological zombies, it is not necessary that the MEB send any soul to hell

P7 – To be a maximally excellent being it must be impossible for there to be a superior being

P8 – If the notional MEB were to send any soul to hell, when it is possible to not send any souls to hell, then there would be a possible superior being to the notional MEB (meaning that the notional MEB is not an actual MEB)

C2 – No souls are sent to hell by the MEB (from C1, P5, P6, P7 and P8)

P9 – If the MEB will not, and cannot, send any souls to hell, then Jesus (as, or as a representative of, the MEB) was lying about hell (from C2)

P10 – A being that lies and lets its representatives lie is a lesser being than one that is always truthful and does not let its representatives lie

C3 – Therefore, there is no MEB associated with Jesus (from C2, P9, P10 and P7)
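The hinge of the argument, P7 and P8 delivering the hell-free part of C2, is a simple piece of propositional logic.  A minimal sketch in Lean 4, with hypothetical proposition names standing in for the premises' terms, might look like this:

```lean
-- Hypothetical propositions standing in for the argument's terms
variable (SendsSoulsToHell PossibleSuperiorBeing IsActualMEB : Prop)

-- P8: if the notional MEB sends souls to hell (when avoidable), a
--     superior being is possible.
-- P7: an actual MEB admits no possible superior.
-- Conclusion (the core of C2): an actual MEB sends no souls to hell.
theorem meb_sends_no_souls_to_hell
    (p8 : SendsSoulsToHell → PossibleSuperiorBeing)
    (p7 : IsActualMEB → ¬ PossibleSuperiorBeing)
    (meb : IsActualMEB) : ¬ SendsSoulsToHell :=
  fun h => p7 meb (p8 h)
```

Whatever one makes of the premises, the inference itself is just modus tollens.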

----

Support and clarification for various premises and explanations for conclusions are as follows:

P1 – if the MEB wants anything that is less than a TGT then it is not omnibenevolent and not thoroughly good in itself (which would defeat the moral argument)

P2 – this could be argued from at least two perspectives.  Firstly, unnecessary damnation for anyone is not a thoroughly good thing.  Secondly, there are claims that the MEB wants humans to enter into a personal relationship with it, which saves the soul and must be a thoroughly good thing (along with the consequential saving of the soul).  Note that “soul” here could just mean “individual”, the precise mechanism of individuation post death is immaterial to the argument

P3 – the MEB apparently wants not only more than one saved soul but also more than one of each type, since that limited requirement could have been satisfied by just saving Adam and Eve.  The need to procreate and fill the planet with billions of people would indicate that the MEB wants many saved souls

P4 – that is to say that there is no value of N such that N is the optimum number of TGTs, because if N+1 TGTs is less good than N TGTs, then the TGT is not thoroughly good, because it can be bad under certain circumstances.  In other words, if you can say there is too much of a good thing, then that good thing is not a thoroughly good thing

C1 – infinite seems better when it comes to thoroughly good things since, as per P4, they cannot saturate, so for every value N, N+1 TGTs is better than N TGTs.  Perhaps there could be a "maximal number" of saved souls that is finite, but this eats away at both the omnipotence of the MEB and the thorough goodness of saving souls

P7 – seems quite self-evident to me

P8 – that is to say that given that theological zombies are not impossible there is a way to avoid sending any soul to hell (noting that there is no commitment from the MEB that all experiences of humans must be both authentic and veridical – the non-veridical experiences of someone in a universe which is otherwise inhabited by theological zombies are still authentic experiences in that they are identical to the experiences they would have had in interactions with real others, they have the benefit of being authentic without necessitating the condemning of any of those others to hell)

C2 – otherwise it would not be maximally excellent

P9 – the Jesus character mentions hell a few times, admittedly mostly in relation to parables, for example the Parable of the Net (Matt 13:49-50): “This is how it will be at the end of the age. The angels will come and separate the wicked from the righteous and throw them into the blazing furnace, where there will be weeping and gnashing of teeth.”  One might argue that this is just a parable, but earlier in the chapter, another parable, the Parable of the Weeds, was explained (Matt 13:41-42): “The Son of Man will send out his angels, and they will weed out of his kingdom everything that causes sin and all who do evil.  They will throw them into the blazing furnace, where there will be weeping and gnashing of teeth.”  There’s little space for misinterpretation there.

P10 – if there are two paths to the same objective and the only consequential difference is that one involves a lie and the other doesn’t, then the honest path is the better of the two

C3 – an MEB would not lie or permit lies on its behalf and it would not send souls to hell.  Therefore, there is a dilemma with respect to the Jesus character’s pronouncements on hell.  Either these pronouncements are factual and the supposed MEB sends souls to hell (and therefore it cannot be an MEB) or these pronouncements are false and the Jesus character is lying about it (and therefore Jesus cannot be representing an MEB)

---

I listed the ways that theists might try to avoid the difficulties that follow consideration of theological zombies in WLC – A Hole:

Reject maximal excellence (although WLC argues that a less than maximally excellent being is not god)

Reject the arguments of WLC and people like him (a very good start on the road to reason and intellectual freedom)

Appeal to ignorance (the standard fall-back option)

Argue that the theological zombie is logically impossible (this would have to be a valid argument, of course, otherwise it's just another appeal to ignorance hidden behind a veil of rhetoric and hand-waving - of the sort that I'd expect WLC to embark upon)

I think here there is another issue that one might need to keep in mind given the argument above.  If the MEB did use theological zombies to ensure that no created soul would be unsaved, then it would not necessarily need to send an actual avatar to Earth to speak on its behalf.  A madman who thinks that he is the earthly representative of the MEB would do, or a group of people with runaway imagination who create a fictional character who thinks he is the earthly representative of the MEB (or even knows it, given that he is fictional).  That is to say, the apparent lies of Jesus would only be lies if Jesus were divine or divinely inspired.  If they are the ravings of a madman or the words put into the mouth of a fictional character, then the MEB is neither lying nor allowing its representatives to lie.

## Thursday, 29 June 2017

### Deformed Epistemology

For some reason, there are some christians who find calling Plantinga’s “Reformed Epistemology” by another name, namely “Deformed Epistemology”, insulting.  They called it name-calling.  I think that’s a little unfair, so I’m going to put a little effort into defending the use of the term “deformed epistemology”.

First, we need to look at what is being either reformed or deformed – epistemology.  This is what Wikipedia has to say (I’ve tidied up the list though):

Epistemology is the branch of philosophy concerned with the theory of knowledge. Epistemology studies the nature of knowledge, justification, and the rationality of belief. Much of the debate in epistemology centers on four areas: the philosophical analysis of the nature of knowledge and how it relates to such concepts as truth, belief, and justification; various problems of scepticism; the sources and scope of knowledge and justified belief; and the criteria for knowledge and justification.

Alternatively, there is the Stanford Encyclopedia of Philosophy entry:

Defined narrowly, epistemology is the study of knowledge and justified belief. As the study of knowledge, epistemology is concerned with the following questions: What are the necessary and sufficient conditions of knowledge? What are its sources? What is its structure, and what are its limits? As the study of justified belief, epistemology aims to answer questions such as: How we are to understand the concept of justification? What makes justified beliefs justified? Is justification internal or external to one's own mind? Understood more broadly, epistemology is about issues having to do with the creation and dissemination of knowledge in particular areas of inquiry.

Merriam-Webster is more succinct:

the study or a theory of the nature and grounds of knowledge especially with reference to its limits and validity

Now I could go on adding text from the IEP (“Epistemology is the study of knowledge”), other dictionaries (like dictionary.com: “(epistemology is) a branch of philosophy that studies the origin, nature, methods, and limits of human knowledge”), other encyclopedias (like Britannica: “Epistemology (is) the study of the nature, origin and limits of human knowledge”) and papers on epistemology (JT Tennis in Epistemology, Theory, and Methodology in Knowledge Organization: “Epistemology is how we know”), but I think the point has been adequately made already – epistemology is about knowledge.

If we are going to reform (or deform) epistemology, then we are going to reform (or deform) something about how we interact with or think about knowledge.

So, let’s look at Plantinga's deformed epistemology and specifically his "proper functionalist version of epistemic externalism" which he summarises in Warranted Christian Belief (p133):

Put in a nutshell, then, a belief has warrant for a person S only if that belief is produced in S by cognitive faculties functioning properly (subject to no dysfunction) in a cognitive environment that is appropriate for S’s kind of cognitive faculties, according to a design plan that is successfully aimed at truth. We must add, furthermore, that when a belief meets these conditions and does enjoy warrant, the degree of warrant it enjoys depends on the strength of the belief, the firmness with which S holds it

Note that, according to Plantinga, knowledge is warranted true belief rather than justified true belief:

That may not come as much of a surprise, given that this book is a sequel to Warrant: The Current Debate and Warrant and Proper Function. In the first of those books I introduced the term ‘warrant’ as a name for that property—or better, quantity—enough of which is what makes the difference between knowledge and mere true belief.

So ... is there a problem with Plantinga's deformed epistemology?

This might not come as a surprise, but I think there is.  Right off the bat, there is the assumption of a "design plan".  I understand that theists believe that there is, in some sense, a design plan to humans.  I do think it's possible to work backwards from a belief in a god, including a belief that one's belief in a god is a true belief and reach a belief that one has knowledge about the existence of a god.  Breaking it down a bit:

Is there a god?

Do you believe there is a god?

Do you consider your belief to be a true belief?

Is there a truth (or god) detecting design plan in the human brain?

Is your brain working in accordance with the design plan (or is properly functioning)?

Does your belief with respect to god constitute knowledge?

Hopefully it can be seen that the only situation in which Plantinga's deformed epistemology both matters and works is one in which:

1) there is actually a god of the right sort,

2) there is actually a design plan for brains such that, when functioning properly, they seek and find truth (or detect the being that created the design plan),

3) the brain of the theist is actually functioning properly in accordance with that design plan, and

4) the believer does actually have the correct sort of belief about the god that exists.

If there is no god (or no god that fiddles with brains the way that it would have to in order to ensure that humans accurately and truthfully detect it), then Plantinga's argument falls in a big heap.  Remember that Plantinga’s “reformed epistemology” involves warrant, that warrant is that which “makes the difference between knowledge and mere true belief” and that a key element of warrant is that it involves “cognitive faculties functioning properly … according to a design plan”.

Now, it cannot be that a belief in a god (or a particular sort of god) is a “true belief” if there is no such god.  However, it is conceptually possible that that which “makes the difference between knowledge and mere true belief” could exist even in the absence of a true belief so that a (false) belief that nevertheless has this quality (“warrant”) could be described as a “rational belief”.

Note that when you look for descriptions of “Reformed Epistemology”, you invariably get referred to Plantinga and his attempts to argue that religious belief may be rational.  IEP:

Reformed epistemology is a thesis about the rationality of religious belief.

A section on reformed epistemology appears in the SEP, but within the article on the Epistemology of Religion, an article which makes it clear that it is concentrating on questions of the justification of religious belief and ignoring questions as to whether “these beliefs count as knowledge or whether these beliefs are scientific”.

So, it seems, “reformed epistemology”, despite Plantinga’s protestations, isn’t about knowledge after all, but about the justification or rationality of belief.  It doesn’t really qualify as epistemology at all, or rather it would qualify as epistemology if and only if Plantinga’s god existed, which means he’s seriously begging the question with his terminology.

Note that a not uncommon defence by theists is that reformed epistemology is no more than the position that “belief in God, like belief in other persons, does not require the support of evidence or argument for it to be rational” (Kelly Clark, Without Evidence or Argument: A Defense of Reformed Epistemology).  So, when challenged, they are more than willing to step back from the appearance of any knowledge claim.  But what they are not stepping back from, and this is the tricky bit, is the implication of a truth claim.

Reformed epistemology might claim that a belief in a god may be rational despite not having any evidence or argument in its support, but this is entirely contingent on there being a god and the belief in that god being true.  This is why I call this point of view deformed epistemology: it’s not really about knowledge at all, just about justifying (or warranting) belief in a god in the absence of evidence or convincing argument.  And, to the extent that it is an argument about knowledge (because Plantinga doesn’t step away from implying that warranted true belief is knowledge), it’s both begging the question and special pleading (because you can’t use the same approach on other things you might want to believe and claim as knowledge).

Is there anything about Plantinga's “deformed epistemology” that a person who is not already a committed believer should take seriously?  Does it do anything more than provide a fig leaf of rationality to someone who believes something that otherwise should not be believed without very good evidence and/or argument?