
Monday, 11 February 2019

Ethical Structure vs. Hi-Phi Nation - Respecting the Wishes of the Dead


I have heard a bit about Hi-Phi Nation for a while, but only just recently subscribed to the podcast and started listening to their back-catalogue.  It’s really rather good and the first episode certainly got me thinking.  The key questions were: what do we owe the dead, should we respect their wishes, and what implications would not respecting the wishes of the dead have on the living?

An easy answer to the first question is “nothing”.  The dead are dead and if they don’t know that we are not respecting their wishes (which they can’t, due to their being dead), then our lack of compliance isn’t going to hurt them.  It’s an easy answer but it may be wrong.  You can quickly work through hypotheticals which indicate that, as a matter of fact, a person can be harmed by actions of which they are not aware and which they may never know about.

The example given was a person in a coma who is abused.  The nature of the coma (and the abuse) is that when the person awakes from the coma, they will be totally unaware of having been abused.  Our intuition is that it’s not okay to abuse such a person.  Then consider a person who is in a permanent coma.  Is it okay to abuse such a person, given that they will never awaken?  It would seem not, remembering that we stipulated the person would never be aware of having been abused, so the potential to awaken isn’t the key factor in determining whether abuse of one’s comatose body is acceptable or not.  Then consider the newly dead.  Then the recently dead.  Then the longer term dead.

There’s an element of the Sorites paradox to this sort of thinking, which might invalidate the final conclusion that the dead can in fact be harmed, but it’s certainly intuitively persuasive – as is the notion that the dead are dead and thus can’t be harmed.  They can’t both be right, can they?

Well, maybe they can.  Sort of.

It is true that the dead are dead and, for that reason, cannot be harmed.  However, a person can be considered to be more than their mere physical body and the mind generated by their brains.  In the Morality as Playing Games series of articles, I discussed Physical Survival and Legacy Survival, basically in terms of how both are important and how we might sacrifice the former for the latter.  Parents will sacrifice themselves for their offspring, people will sacrifice themselves for great causes and some would die before allowing their good name to be tarnished.  We all generate a legacy of some sort during our lives, for good or ill, and that legacy is a part of what it is to be us.  And it doesn’t necessarily disappear when our physical bodies die.  We do tend to be willing to let negative legacies lapse on death (albeit not always, which is why crowds have been known to desecrate the corpse of a tyrant from time to time) – but positive legacies are frequently maintained, often by other positive legacies … in other words the followers, friends and families of great people will keep a foundation going long after they have passed, or will maintain a mausoleum or a statue, or keep a project going, and so on and so forth.

It should be noted here that these are all actions of the living, on behalf of the living, merely via veneration of the dead.  These people want to retain links to greatness, so they keep alive the legacy of the dead.  This means that those who would actually be harmed by not respecting the wishes of the dead are these followers, friends and family who are still alive.

---

So, how can the Ethical Structure deal with the wishes of the dead?

Recall that an idea associated with the Ethical Structure is that our morality is built around a moral agent’s concern for self-preservation.  Each moral agent is inherently making an assessment as to whether the acts and behaviours of others are threatening to their own survival in some broader sense.  And there are layers of concern around the moral agent that indicate higher levels of threat as each layer is penetrated.

For example, there is an implicit threat when someone enacts violence against a thing – they are demonstrating a lack of control and a certain level of willingness to use violence.  There is a higher threat when the thing attacked is a living thing.  Then a higher threat when that living thing is a human being.  And an even higher threat when that human being is a human being like me in some way.  An extremely worrying threat is when the human being harmed actually is me (because the willingness to harm is on the path towards a willingness to kill or destroy).  So, we consider the harm enacted by someone to be a moral wrong with a severity that depends on the associated threat – from petty vandalism through to murder (you might want to ignore the legal associations there).

We have a concern about our standing in our groups (ie in society), because being expelled from a group makes us vulnerable.  Harm to our status is therefore considered to be a moral wrong and we assess threats to our status in a manner quite similar to the assessment of threats to our survival.  If we see people attacking the status of other people [and maybe even things], or disrespecting them without good justification, we become concerned because we could become victims of such attacks or disrespect.

Considering other people, we need to remember that while there truly are people out there in the real world acting in accordance with motivations that we can only guess at, we do not interact directly with those people.  Instead we interact with the representations of these people that we have generated in our own heads and, as a consequence, it is these representations that are the most immediate to us rather than the people they are based on.  Representations are not necessarily discarded as soon as the represented person dies and so, in our heads, the dead live on as salient, abidingly relevant representations, together with their wishes and their legacies.  And when we see someone acting against the interests of a representation of a person, even if that person is no longer alive, we interpret that as a potential threat to our own interests because we could be next.

So, while on an intellectual level we can know with great certainty that the dead are not harmed by disrespect, we can still consider representations of the dead to be vulnerable and since there is an emergent moral injunction against harm, we can interpret harming of the dead as a moral wrong.

Note that we must of course be aware of something meaningful associated with the dead that can be harmed.  For example, a statue to a war hero or supposed discoverer of a land (ignoring the natives already in residence), or a legally established foundation that hands out money for various reasons (Hershey or Nobel or Pulitzer).  If an individual person is dead long enough, their desires and wishes and their legacy evaporate and we need no longer worry about them.  We do however feel a moral obligation to respect the wishes of groups of people, namely the people who make up our history and who established the mores and standards of our society – although in this case I would place this, at least in part, in the lowest tier of the Ethical Structure, “obey my rules”.  Whose rules are respected relates to who is respected, which usually relates to respect of people and organisations long dead, but obeying rules without knowing specifically who established them is a fundamental (low-level but nevertheless important) method for signalling that you are not a threat.

---

Should we respect the wishes of the dead?  It depends.  Are we at all invested in the legacy of the dead in question (even if we might not know who they are specifically)?  Are the wishes of the dead consistent with our current mores and standards?  If yes to these, then yes, probably.

If, as in the case of Hershey, the law is entangled with the wishes of the dead via a legal charitable foundation, then morality may no longer be operative because the law itself is not moral.  Morality only comes into play when one is deciding whether to obey the law or not, or whether the law itself (in this case with regard to obligations with respect to foundations and trusts) ought to be changed.

Monday, 21 May 2018

The Ethical Structure vs the Drowning Child


This is the second in what I hope will become a series of short(ish) articles in which I address ethical problems and dilemmas from the perspective of the Ethical Structure – as introduced in the Morality as Playing Games series.

---

In Peter Singer’s drowning child scenario, you are on your way somewhere when you pass a shallow pond in which a child is drowning.  You can quite safely rescue the child but if you do so you’ll be mildly inconvenienced and you may ruin your clothes.  Should you rescue the child?  It’s a small cost to you but you save a life.  You’d have to be a moral monster to walk past the pond and let the child die.

Compare that to a child dying in Africa due to war or famine or malaria (sadly enough, there’s always at least one trouble spot or another).  You could dip into your wallet, extract a few dollars (a small cost to yourself) when passing a chugger and save a life.  Are you a moral monster for failing to do so?

Perhaps you might think that a few dollars won’t save a life, that too much disappears in administrative costs.  GiveWell suggested in 2015 that at about $US3,300, you could save a child’s life.  It’s true that that’s a lot of money to take out of your wallet, and it’s rare that you come across a child drowning in a lake while children die in Africa at a rate of about 6 million a year (only counting those under five) – that’s more than 16 thousand a day, or about 685 an hour, or more than 11 a minute.  Not all of these die of malaria, of course, which is what you will be preventing with your $US3,300.

But if you put away $US13 a week then you could save a child every five years, which is more frequently than you could reasonably expect to save a child from drowning, unless you are a lifeguard or spend your time lurking around pools and ponds.
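For those who like to check this sort of thing, the arithmetic above can be verified in a few lines (using the round figures from the text: 6 million deaths a year and GiveWell’s ~$US3,300 per life saved):

```python
# Sanity check of the figures quoted above.
DEATHS_PER_YEAR = 6_000_000        # under-five deaths per year (round figure)
per_day = DEATHS_PER_YEAR / 365    # ~16,438 a day
per_hour = per_day / 24            # ~685 an hour
per_minute = per_hour / 60         # ~11.4 a minute

COST_PER_LIFE = 3300               # GiveWell's 2015 estimate (USD)
WEEKLY_SAVING = 13
weeks_needed = COST_PER_LIFE / WEEKLY_SAVING   # ~254 weeks
years_needed = weeks_needed / 52               # ~4.9 years, i.e. under five
```

So $US13 a week does indeed reach $US3,300 in just under five years.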

Are you a moral monster for failing to put aside such a small amount of money?

Peter Singer would have it that you are.  If you are a well-to-do sort, who easily spends $US10,000 more on a car than you really need to, because it’s styled as a sports car, and you update your car every three years, then you are letting a child die each year that you could have saved.  If you waste money on a holiday, you are effectively killing a child; if you buy expensive meals, or wine, or jewellery, you are effectively killing a child.  And you should feel guilty.

The thing is, we don’t really.  Or not many of us do.  There’s a huge difference between an abstract child in a foreign land who we might or might not be able to save with our malaria nets, or our donation of a goat, or our supply of medicines and the child drowning right there, in front of us.

To walk away from a child we know to be drowning and we know could be saved would make us a moral monster.  To fail to hand over money to someone who may or may not use that money to buy something that may or may not be given to a child who may or may not die without it does not make us a moral monster.

A variant of the drowning child scenario has you on your way to an important job interview with no time to spare and you’re in your best clothes, which are worth more than $US3,300 (or the clothes and lost work opportunity combined are worth more than $US3,300).  As a utilitarian, you could say that the best option is to let the child drown and commit to giving money to an effective charity that will save another child for you.  But you won’t, because that child drowning in front of you is visceral.  If you let her drown, you are a moral monster.

Because this is a thought experiment, we can fiddle with the knobs and consider a situation in which saving the child will cost you as much as saving five other, distant children (who you can save with total certainty).  And because you’re such an ethical person, you’re actually on your way to save those five.  If you stop to save the one, the five will die.

This is basically the trolley problem in reverse.  You have the opportunity to switch metaphorical tracks to put the metaphorical trolley on a path to kill five, but you save one in the process.  You’d be a moral monster if, in the trolley problem, you deliberately killed five to save one (well, unless it was your mother, maybe – but there’s a fair chance your mother won’t be happy about surviving in that context).

But … the child is drowning in front of you and the others are distant.  Chances are, you’ll save the drowning child.  Why is that?

---

In terms of the Ethical Structure, it’s entirely clear that you “should” save the drowning child and you have no obligation whatsoever to save a distant child – at least unless you are in the company of Peter Singer acolytes.

Survival is the name of the game and whatever action (or inaction) you decide on, it must promote your survival.  A cost to you with little or no benefit does not promote survival.  Therefore, shelling out a largish sum of money to save a child you will never see and no-one will really know you saved is not efficient.  Even if you intend to give the money away somehow, you’d be better off spending it close to home to show your neighbours what a good person you are.

The drowning child on the other hand is a perfect opportunity for you to demonstrate your worth.  Now, you might have noticed that I have at no point indicated that there is anyone around to see your heroics, and in fact there is an implication that there isn’t (because, if there were, they could save the child, allowing you to save your clothes).

Even if there is no-one to see you abandon the child, you’d still (in terms of the Ethical Structure) have an ethical imperative to save her.  This is because an essential part of the concept of the Ethical Structure is that you survive, up to and including the need to be able to betray others before they betray you when it comes to the crunch.  The more trustworthy you are, the less likely you are to be betrayed by other trustworthy people, so you are in a sort of arms race of trustworthiness signalling.

There is only one person who is 100% committed to signalling your trustworthiness and that person has to be utterly convinced that you are trustworthy, so that the lie can be told convincingly – and that person is you.  For this reason, you will convince yourself that you are the type to save a drowning child even if no-one is there to laud you for it, because you want to be the sort of person who will save a drowning child and not the sort of person who others would likely choose first to make the ultimate sacrifice for the good of the group.

---

The above might sound awful, the idea that we might only save a child for selfish reasons, but the positive from it is that selfish motives can push us towards prosocial actions.  The drowning child doesn’t care much whether the person saving her is acting selfishly or selflessly, so long as she is rescued from harm.

A follow-on question is how can Peter Singer’s good intentions be realised?  We all want to be taken to be good people (or recognised to be good people, if you prefer) but we are all selfish to a certain extent.  Very few of us are going to give up significant material comfort in order to maximise the number of lives we might save.  But we can easily be persuaded by appeals to that selfishness.

My suggestion therefore is to focus on the benefits of an expanded circle of ethics.  In what ways can we benefit from preventing death in foreign places?  Singer has done some work already by pointing out the issue, making it possible for people to signal their worthiness by saving children from malaria in Africa (although there are diminishing returns as more and more people get on the charity boat, to the point where there will most certainly be freeloaders).  But I think we can go further.

Lands in poverty are incubators of disease and discontent.  Wealthy people are less likely to comb the nearby jungle for bushmeat (a key vector for diseases like Ebola).  Wealthy people are more likely to be educated and will understand the difference between a viral and a bacterial infection, so won’t take antibiotics in such a way as to promote the development of superbugs.  Wealthy, content people take fewer risks, and they tend to prefer peace over war – they have more to lose.  We might not solve all problems by helping those less fortunate than ourselves, but there is definitely a raft of perfectly selfish reasons why we should have a go.

Thursday, 5 April 2018

Ethical Structure vs the Trolley Problem


This is the first in what I hope will become a series of short(ish) articles in which I address ethical problems and dilemmas from the perspective of the Ethical Structure – as introduced in the Morality as Playing Games series.

---

The Trolley Problem is a thought experiment based on a set of scenarios involving a runaway “trolley”.  Most of us would think of it as a runaway train or carriage, or perhaps a tram, maybe even a streetcar or a cable car (San Francisco style, not St Moritz style).  The originator – Philippa Foot – even referred to a tram rather than a trolley, but she was British and some Americans, when they got on board, used their own strange vernacular.  So now we’re stuck with talking about the trolley problem, despite a trolley looking like this (according to Google):

[image: a trolley, per Google]

Fundamentally, the problem is about making a choice between A) not acting and thereby letting a group of (many) people die and B) acting and thereby killing a smaller number of people but saving the group in the process.  You could, for example, save five by switching tracks so that a runaway train kills one worker rather than five.  Or you could push the fat man standing next to you onto the tracks to stop the train, which would kill him but save the five.  Or you could just let the scenario run its course without intervening.

There are a number of variants of this problem intended to pick away at the key issue, which is how we tend to be utilitarian in one situation and deontic in another.  In other words, most (ie 85%) would flick the switch to save five at the expense of one, but even the most utilitarian among us baulk at getting our hands metaphorically dirty if we had to push the fat man onto the tracks (12%).  I note the fact that it is a “fat man” who is in the firing line.  The reason the man is fat is to ensure that he has the bulk to stop the train, but there’s also a significant bias against fat people (who are seen as slothful and greedy with insufficient self-control) which probably makes it easier to push one of them under a train.  I suspect that significantly fewer than 12% would be willing to kill a healthy, well-proportioned woman or a child in order to stop the train.

(Imagine some sort of situation where disruption of power would divert the train onto a safe track, but the only way to do this is to push a smaller person than yourself through a crack where they would be fatally electrocuted, but their sacrifice would mean that five others would be saved.  Even as I write this I feel myself recoiling strongly from the idea – I do note that it’s a step away from using the body of the fat man to wedge under the train’s wheel and thus stop it.  I feel that the more Rube Goldberg-like the mechanism for killing a person to stop the train, where the death happens in the first step, the less likely we are to feel justified in using it.  I suspect that if the death happens later in the mechanism, we might not be quite so squeamish.)

So, if the action is remote, we are more willing to be utilitarian – we’ll let (or make) one die to save five.  But if the action is up close and personal, we are deontic followers of the injunction “thou shalt not kill”.  How do we explain this?

In terms of the Ethical Structure, our primary concern is our continuation as a moral agent, in other words our survival.  And our chances of survival are increased when we clearly signal to others that we are not a threat.  There is a much more worrisome signal sent when we act to kill someone than when we fail to act to save someone.  Recalling the pyramidal structure, “do not kill (destroy)” is at the pinnacle.  There is no specific “do not fail to save” but there is quite likely to be a general rule that, if you can, you ought to try to save people, which falls into the “do not break the rules” category right down at the bottom.  It’s for this reason that it takes in the order of five lives saved to motivate us to (remotely) cause the death of one person.

It’s this “remotely” that is key.  As far as signalling goes, there is a substantial gulf between flicking a switch (even if, as a consequence, someone dies) and physically putting your hands on someone and directly causing their death.  Few of us want to be the sort of person who kills another person (because it makes it much easier to convey the impression that we would never kill anyone and that we are therefore no threat to others).  It’s much more palatable to see yourself as the person who saved five lives by just flicking the switch when you can gloss over the consequences.

The tuning of our positions with regard to each scenario (the vanilla trolley problem and the fat man scenario) is indicative of how there is variation across the population as to when to break with the ethical structure.  Only 85% of the participants in an experiment were willing to switch tracks and save five.  I suspect that, for the most part, the remaining 15% were either overly cautious (meaning that they’d be roadkill in hard times) or overly coy (perhaps meaning that, in reality, they’d be more willing to kill, but you wouldn’t be aware of it until the blade was sliding into your chest) or they just didn’t think it through.  The proportion could probably be determined by pushing the numbers of potential casualties up.  If you’ve got someone holding out when there are 20 lives stacked up against one, then there’s a serious problem with that person.  (Of course, if that one life is that of your mother, or your beloved, or your child, then it’d be understandable, but in the scenario they are all nameless strangers.)

The bottom line is that, when seen through the prism of the ethical structure, it is no mystery that we lean towards utilitarianism when forcing the train to switch tracks to kill one and save five but lean towards deontology when it comes to more directly sacrificing the life of another person to save the five.

---

It’s interesting to think about the trolley problem in terms of psychopaths, or people who have damage or some other impairment in their ventromedial prefrontal cortex.  Psychopaths would probably fall into the 12% who would kill the fat man, since they are strongly utilitarian.  On the other hand, why do they care about the five?  As a psychopath, would you even care about the fact that more were going to die?  After all, it’s not your problem.

In the article on psychopaths, the authors talk about “wrongness ratings” and this might be another area where the Ethical Structure can be brought to bear.

It is not the case that flicking the switch to redirect the train into one person rather than five is good and pushing the fat man into the path of the train is bad.  They are all bad options.  There is a problem in many discussions about morality, even those by ethicists who should know better.  We cannot point to an action associated with the trolley problem and say “this is the right thing to do”.  At best we can point to one option and say “this is less wrong than the other option” and try to justify it.  We can do wrong via an act of omission and let five die, or we can do wrong by an act of commission and let one die.

The study assessed only how wrong the act of commission was and effectively ignored how wrong the potential act of omission was.  A valuable variation of the study would be one in which the participants were asked to assess the wrongness of both the act of commission and the act of omission before being asked how likely it would be that they would perform the act of commission.  Would anyone say that killing the fat man was less wrong than letting the five die and also say that they would not kill the fat man anyway?  It’s possible that some people would have the self-awareness to admit to that, knowing that, while intellectually they are aware that it’s the better thing to do (in terms of the greater good), they simply could not bring themselves to do it (because, like, you know, they aren’t psychopaths).

---

Parenthetically above, I mentioned Rube Goldberg machines.  Here’s an example:

[image: a Rube Goldberg machine]

The idea behind a Rube Goldberg machine is that there is a ridiculously complicated series of steps to achieve a relatively simple task – and the steps themselves are likely to be ridiculous.  I was going to suggest that if the death of a stranger (fat or otherwise) involved in saving the five was a side-effect then that would make it far more palatable than if it were an essential part of the “machine”.

I also noted that there are a number of variants of the trolley problem, some of which involve relocating the fat man – for example the loop variant in which the train is diverted temporarily onto a side track on which there is a worker who is large enough to stop the train, but if he wasn’t the train would continue on and kill the five anyway.

It occurs to me that, in the initial version, the death of the single worker is in fact a side-effect.  If we break up the problem differently we can see that a train is heading towards five workers who will be killed.  We can flick the switch without any cost to ourselves and divert the train onto another track to prevent the five deaths.  It would be difficult to argue that we should not act to save those five people.  We should, of course, check to make sure that our action is not going to result in a worse outcome (for example if there were six people on the other track), but so long as the consequences of our action are not worse than the consequences of inaction it would seem to me that failure to divert the train is a moral failure, totally aside from any considerations of “greater good”.

On the other hand, choosing to kill the fat man is a moral failure because you will cause an instrumental death, rather than a consequential one.  The impression that one is a utilitarian decision and the other a deontological decision could well be an illusion.

Tuesday, 18 July 2017

William MacAskill is a Nice Guy, but Morally Confused

I wrote recently about Sam Harris and William MacAskill's discussion in Harris' podcast Being Good and Doing Good.  During that discussion, MacAskill revealed himself to be an ostentatious altruist, which is little surprise since he is the author of Doing Good Better - Effective Altruism and a Radical New Way to Make a Difference.  MacAskill gives away a large proportion of his earnings and he actively encourages others to do so.  He reports feeling rewarded by his carefully targeted altruism and there's nothing particularly wrong with his largesse, in itself.  I don't however see him as "morally superior".  In fact, I see him as morally confused.

One thing that I aimed to explain with the notion of an ethical structure was the broad range of morality, from people who are obsessed with doing the right thing at all times to those who simply don't give a toss about the rules, but more importantly those of us somewhere in the middle who are generally good but, given the right circumstances, can be tempted to do wrong (i.e. when we think that no-one is watching or that we might just get away with it).  In the Morality as Playing Games series, I concluded that every one of us is the descendant of a person who, when it became necessary, abandoned their morality and betrayed others in order to survive - but, to have been successful, this ancestor of ours didn't abandon their morality before it was necessary, at least not to the extent that they were deemed dangerous and unsuitable members of society before they could produce at least one child.

In order to be able to survive tough times, each of us must strike a delicate balance between the imperative to fit into our society (and be moral) and the capacity to betray our fellows at precisely the right time - not too early and not too late.  We seek a balance between the impulse to cooperate in a group and the need to react appropriately when it is time to defect from that group.  In the modern western world, the times we live in are not particularly tough, even among the more disadvantaged (gone are the days when we might break a window to steal a loaf of bread in order to heroically save our starving nieces and nephews).

In good times, we are biased towards morality and we now have huge industries built around punishing defectors.  Given our increased wealth, it could be argued that the modern western person is less likely to defect than someone from, say, 200 years ago (which would explain the decline of violence).  This certainly has positive sides, but it can go too far, to the point where our "morality" leads to self-harm, especially when we factor lost opportunity into the harm calculus.

MacAskill, I suggest, verges on self-harm when he carves away at the economic margin between modern comfort and survival.  By doing so, he pushes himself towards a situation in which he would not survive should his conditions deteriorate.  The rational approach is to look towards increasing your economic margin, while taking into account other relevant factors - for example, for the purposes of survival there comes a point at which being richer just increases the risk that you will be killed for your wealth.  As an academic, unless his books are spectacularly successful, MacAskill will not be in a situation in which his wealth is so huge as to be unconducive to his on-going survival.

So, on the surface, what I read into his unbounded willingness to aid others is an indication that his morality is poorly calibrated for survival in extremis.  He does talk about preserving yourself so as to not burn out too quickly (as an altruist) and to maximise your longer-term altruism, for example an expensive suit might prevent money being donated to worthy causes if you bought it today, but your ownership of it may permit you to secure a well-paying job thus allowing you to donate much more in the future. 

MacAskill is saved from actual self-harm in a couple of ways.  Firstly, there is the framing of his legacy survival.  Who MacAskill is and how he behaves is an essential part of his legacy and therefore children (should he have any) would not necessarily be his sole method of ensuring legacy survival.  Others who follow his example, however, and who won't have their names and legacy attached to the organisations that MacAskill has created would likely be self-harming should they donate as much of their earnings as he seeks to - if that level of donation puts their other legacy survival efforts at risk.

The other way he is protected from self-harm is that he would be building up a certain amount of good will; should times turn tough and he find himself in financial straits, he could likely cash in that good will and make it through - but only so long as he hasn't convinced all his friends and colleagues to ruin themselves financially.

Aside from self-harm, there is another concern that I have with his efforts, involving the concept of moral self-licensing.  It is a known feature (some might say a bug) of psychology that when we have done something good, particularly something very good, we may feel entitled to either do something bad or forego doing something else that is good.  Alcohol advertising, for example, highlights this idea when it is suggested that by having put in a full day's work or prevailed in some sporting event, you have somehow earned the right to get drunk (in moderation).

My concern is two-fold.  Firstly, this altruism movement may trigger bad behaviour and act as an enabler for ongoing bad behaviour.  The idea that large numbers of people might feel entitled to behave poorly due to their altruism is worrisome.

Secondly, there is the issue of perception.  We are naturally inclined to see others as largely neutral (mythical saints aside) and so, if people were as ostentatiously "good" as MacAskill strives to be, the more cynical among us would wonder what they were compensating for.  Therefore, if altruism were to go too far, it could paradoxically lead to reduced trust within our societies.

I don't think that MacAskill is covering up some moral culpability, merely that he is morally confused.  His approach is a very nice idea, in the hypothetical, and one that, in the hypothetical, we should all strive for (meaning that it is beneficial to persuade others that, in the hypothetical, such ostentatiously good people are what we would want to be).  In practice, however, such ascetic extremes of goodness are a bit weird, relatively few of us could comfortably approach emulating them and we are left with an impression that the person involved is, at best, somewhat naïve.

It is this naivety that is possibly most problematic, when we consider MacAskill's zeal in spreading the word.  All traits which have survival implications have a range of expression in the relevant population.  There are benefits in being big and strong and there are benefits in being small and flexible, depending on the conditions.  If a population became entirely big and strong due to the prevailing conditions and those conditions changed, then the entire population could fail.  We are protected from this by a sort of regression to and variance around a mean, so that the small flexible still exist in times that are best for the big and strong and when being big and strong is no longer optimal, the small and flexible take over (while not dominating entirely so that a swing back doesn't wipe out the population).

The same applies to variations in morality.  Today, in relatively good times, we have people who are more inclined to steal and kill than we are willing to accept and we punish them.  But under extreme conditions, these are precisely the sort of people who would ensure the continuation of our clans while people who are too touchy-feely starve to death or are slaughtered in their beds.


To be able to survive, which I argue is what morality is really all about, we have to be able to be bad when conditions call for it.  We need to be able to look out for ourselves.  While doing maximum good in the world, simply for its own sake, sounds like a brilliant idea in principle, turning our minds to this sort of thing in practice risks disarming us at the very moment when we are most vulnerable, making us miss the signs that conditions have changed for the worse and that we may soon be required to reap the benefit of our morality - or be erased from the Earth.

Tuesday, 11 July 2017

Ethical Structures, Non-Identity and the Repugnant Conclusion

Derek Parfit's Repugnant Conclusion results from a consideration of welfare and population.  In essence, the conclusion is that a world in which a very large number of people have lives that are barely worth living is preferable to a world in which a much smaller number of people have significantly better lives.

My attention was brought to this ethical conundrum by Sam Harris and William MacAskill in one of Harris' podcasts - Being Good and Doing Good.

The Repugnant Conclusion is reached in a step-wise fashion from certain assumptions, primarily the idea that we should maximise "quality of life" (with the further assumptions that we can quantise "quality of life", or at least think about units of it, and that these units are positive).  Also tied into this is the idea of future persons, who don't currently exist (hence their non-identity, which leads to other considerations that I'll get to shortly).

Imagine one person, A, existing with a total of 100 units of "quality of life" (let us call these UQL) and say that 100 UQL is pretty damn good.  If we are maximising UQL, then it follows that 101 people existing with 1 UQL each is better than this one person existing with 100 UQL, where having 1 UQL is equivalent to having a life that is barely worth living.  Taken as a single leap to the conclusion, this seems bizarre - many people living lives only slightly better than lives not worth living doesn't intuitively feel better than A living the bliss of a 100 UQL "pretty damn good" life.

(Note that our intuitions associated with this are likely to be faulty.  As I write, the 2016 Paralympics have opened and the whole and healthy among us would probably find it difficult to think that someone who is eligible to enter these Games would not want to swap their stumps for real legs, or to be able to see, or to not be confined to a wheelchair.  However, these people would not see themselves as living lives that are barely worth living - and this applies to the more mundanely handicapped, not just Paralympians.  Some even claim that they would turn down the hypothetical option to turn back the clock and not be handicapped.  This notion has important links to the "non-identity problem".)

The Repugnant Conclusion, however, is not reached by a leap; it is reached step-wise.  Consider, rather than 101 people living 1 UQL lives, two people living lives that are each only slightly worse than A's - say 99 UQL each.  This is clearly better than just one person living a life with a total of 100 UQL; it's very close to double the total UQL.  Keep doing this over and over again and you eventually reach the conclusion that a very large number of people living lives that are barely worth living can be better than any significantly smaller number of people with significantly better lives.
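The step-wise construction can be sketched in a few lines of Python.  This is my own illustration: the "double the population, shave 2% off each life" trade-off at each step is an assumption chosen to mirror the argument, not Parfit's own figures.

```python
# Rough sketch of the step-wise route to the Repugnant Conclusion.
# Assumption (mine, for illustration): each step doubles the population
# and reduces each person's UQL by 2%.

def repugnant_steps(start_uql=100.0, loss_per_step=0.02, floor=1.0):
    """Repeatedly double the population while slightly lowering each
    life's UQL, stopping once lives would be barely worth living."""
    population, per_person = 1, start_uql
    while per_person * (1 - loss_per_step) > floor:
        population *= 2
        per_person *= 1 - loss_per_step
    return population, per_person, population * per_person

pop, each, total = repugnant_steps()
# Each step multiplies total UQL by 2 * 0.98 = 1.96, so every single
# step looks "better" - yet we end with an enormous population of
# lives hovering just above 1 UQL.
```

Taken one step at a time, each move is an improvement in total UQL, which is precisely why the conclusion is so hard to resist step-by-step and so repugnant in aggregate.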

Converting this into a real-world situation, it would be better to keep producing more and more humans until all of us are living lives that are barely worth living - but we'd do it one baby at a time, where each new baby would not significantly degrade the average UQL and would slightly raise the total UQL.  Few among us fail to realise that if we were all to continue to produce large numbers of children, as we did in the past, then the future would look bleak indeed, but few of us choose to produce no children at all in order to avoid contributing to great suffering on the part of our potential descendants.  Note that our population increase shows little sign of slowing down (at least globally) and may in fact accelerate if we are so foolish as to eradicate malaria, cure cancer, prevent heart disease, stroke and diabetes, and provide universal first-world medical care and clean water to the third world without modifying family structures - so even if we restrict ourselves to one child, we may still produce descendants who will be born into a world with a vastly increased population.

Now we turn to the "non-identity problem".  This problem revolves around hypothetical people, who might exist (or not exist) based on decisions we make.  In essence, the question is whether it is better for a person to exist than not exist, given that their existence might not be perfect.  Such moral dilemmas are faced (or ignored) by people who are aware of the risk of defects in their unborn child and choose to get a check.  Say an unborn child is found to have Down Syndrome and the parents decide to abort the pregnancy.  In this situation (at least in a sense), an existent person stops existing and a life that was probably worth living is not lived.  (There's a complex calculus that could conceivably be conducted to determine what the overall UQL would be with the introduction of this Down Syndrome child and it is arguable that the total UQL would go down, but this would possibly be the case anyway with the decision to go with an abortion since such a decision is rarely an easy one.  Other arguments could be made that humanity is held back by its weakest links, but such arguments would veer close to eugenics and are probably repugnant in themselves.)

We could step back a little and think not of an abortion, but of a decision to not conceive.  We make these decisions all the time and have few qualms about them, because they involve acts of omission rather than commission - the omission of an emission in the case of the withdrawal method, and the occlusion of an emission in the case of condoms and similar devices, such that both methods result in sperm and ovum not meeting.  However, in each decision to avoid conception, we are effectively denying the existence of a person who could have existed and who could have lived a life worth living.  If we have the potential to bring into being a person who could live a life worth living, how do we justify not doing so (assuming that any reduction in our quality of life is marginal and we can continue to live a life worth living)?

If we cannot justify choosing not to bring into being a child with a UQL of greater than 1, then the conclusion is that we should just keep producing children to the greatest extent possible, until we would otherwise start producing children with a UQL of less than or equal to zero.  And this does not seem right.

So how could we justify choosing not to create such a child?

Selfishness seems a common (albeit post hoc) justification.  I like my life as it is, I don't want to compromise it by introducing children into it.  However, this is making the assumption that my marginal quality of life (the slight difference between the quality of life that I have as a single, unencumbered person and the quality of life I would have as parent) is more important than the hypothetical quality of life of this potential child (which would not exist in the world if the child did not exist).

Another justification is that any child is a link towards a future descendant living in a bleak, overpopulated world.  But this argument only works if you have decided to never reproduce.  It disappears as soon as you produce your first child, and then you seem committed to producing as many children as you can, paradoxically rushing towards a situation in which your descendants are consigned to living in a world which is barely worth living in (and who may in turn have descendants living in a world that is not worth living in).

These linked problems, the Repugnant Conclusion and the Non-Identity Problem, have not been conclusively solved by ethicists.  Attempts to avoid the conclusions tend to result in other paradoxical outcomes, or other ethical problems (for example, if we think about UQL in terms of an average rather than a sum, we can arrive at a conclusion that it is better to eradicate those with low UQL and keep doing so until there is one single blissfully happy person with very high UQL - presuming that this person doesn't mind the genocide going on around her).
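The tension between summing and averaging UQL can be made concrete in a few lines of Python.  This is my own illustration using the figures from this post, not anything from the ethics literature:

```python
# Ranking worlds by total vs average UQL gives opposite verdicts.
world_a = [100.0]        # world A: one person with a "pretty damn good" life
world_z = [1.0] * 101    # world Z: 101 people with lives barely worth living

def total_uql(world):
    return sum(world)

def average_uql(world):
    return sum(world) / len(world)

print(total_uql(world_z) > total_uql(world_a))      # True: 101 > 100, Z "wins" on totals
print(average_uql(world_z) > average_uql(world_a))  # False: 1 < 100, A "wins" on averages
```

Maximising the total drives us towards world Z; maximising the average instead licenses shrinking the population down to its single happiest member - the "genocide" problem just noted.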

What does the Ethical Structure approach lead us to think about these problems?

Careful readers will probably have picked up on stilted wording above.  It's not easy (for me) to breeze past a statement that "X is better than Y" without commenting on the fact that the term "better" isn't securely grounded.  What exactly is a "better life"?  In what way is 100 UQL vested in one person "better" than 101 UQL spread across 101 people?  What does it mean to say it is "better" (or even simply "good") to keep producing children?

A "better" life is merely a life that is more good, but good for what?  And having quality of life is again ungrounded.  Does this mean happier, more productive, containing more puppies, what?  Even if we allow that quality of life need not be defined precisely, we are left with a begged question as to whether a life with high quality is a good thing.  Good for what?  The unabated production of children is good for what?

The answer, if the logic behind the Ethical Structure idea is correct, is survival - but survival on a few levels.

Firstly, to survive we each need to have lives that are worth living, in order to prevent ourselves from self-terminating (or at least so that we put serious effort into not dying).  We need to have both lives that are worth living and the expectation that our children's lives will be worth living in order to survive through them (so legacy survival as opposed to physical survival).  We do not, however, necessarily need to have a quality of life that is very high - merely sufficiently high.  Similarly, we don't need to expect more than that our children should have a sufficiently high quality of life - where "sufficiently high" is a value judgement that will vary from person to person.

Secondly, there is the need to continually signal to other members of our communities that we are obeying the rules and not threatening the welfare of others.  In this context, the unabated production of children is not good, because flooding the population with my children (my legacy) is potentially deleterious to your legacy and even to you personally, if my children were to displace you.  The rules, in the form of social norms, tell us how many children are appropriate, and any significant divergence from this number (positive or negative) is generally frowned upon - a rule-breaker who had too many children would thus be a suspicious character.

These are, however, practical considerations and they don't address the more abstract notion that we should want to increase the total quality of life in the world (at least among humans), nor do they explain why we are appalled by the idea of large numbers of people having lives that are barely worth living.  These, I suggest, derive from our consideration of the people that we want to be or, more accurately, the people that we want other people to think that we are, the nature of which follows from an internalisation of the ethical structure.

The second tier of the structure is the injunction against harm.  This is generally applied to existent people (most importantly ourselves), but in order to be consistent and to convey our reliability to fellow community members in quite a cost-effective way we may also apply it to potential or hypothetical people.

Consider person A, the one with 100 UQL.  We compared this with 101 people with 1 UQL each.  Note that the injunction is against doing harm, rather than being a commandment to do good (since we can more reasonably demand that people not harm us than demand that others do good for us).  With the 101 people on 1 UQL each, we either have one person, A, who has been harmed by losing 99 UQL - or who has been eliminated entirely (thus losing 100 UQL and her existence!).  This is a level of harm that we cannot countenance, especially if A is supposed to represent "me".  (This is a difficult conclusion to avoid, since if there were one single person in the universe, hogging all the UQL, then that person would obviously be "me" from their own perspective and there would be no other perspective.)

We can accept an incremental decrease in our quality of life, especially if it is hypothetical, because by doing so we are affirming to ourselves and advertising to others that we hold the survival (and thus existence) of other people to be of more importance than maintaining a high quality of life for ourselves.  Remember though that when it gets down to brass tacks, the purpose of my ethical structure is to aid my survival, not yours, and that means I have a point at which I will abandon the "charade" of morality in order to survive (as do you).  This means that while I might accept a gradual decline in my quality of life, there is still a point at which I will no longer accept it.  I don't necessarily know precisely where that point lies, but where it is set will probably be related to how I view my legacy in relation to my physical survival: I am likely to call a stop to the diminishment of my quality of life when it threatens my legacy.

When thinking about population increase, this manifests as a hypothetical willingness to sacrifice a little quality of life in order to allow another person to come into existence.  As we don't want people to suffer (or rather, we don't want anyone to think that we would not care about people suffering), our preference is that hypothetical and newly existent people should have a similar level of quality of life to ours (or slightly better, or slightly worse - it doesn't really matter, since it's hypothetical).  But we are not willing (even hypothetically) to significantly decrease our quality of life, nor to posit into existence people with a quality of life that is substantially below ours.  This might in part be because we don't really deal in absolutes, we deal in comparisons, so a single "positive" UQL would be counted as negative when compared to our notional 100 UQL.

An example which came up frequently in the Harris-MacAskill discussion was Singer's drowning child in a lake.  In this thought experiment, you are walking past a shallow lake, one that you can safely enter to extract a child that is drowning.  However, you're wearing a nice set of clothes and the lake, while safe, is dirty or muddy, so while saving the child, you would be ruining your outfit.  Do you risk the expense of getting a replacement outfit and save the child?

The argument goes that you would be a moral monster if you were to fail to save the child simply because you wanted to save your clothes.  Taking your clothes off and rescuing the child naked is apparently not an option, nor is laundering them afterwards, although interestingly enough, it seems common that people do want to save the child while also considering how they might avoid damaging their clothes in the process.  We could instead imagine a hypothetical imperative to drive off the road, thus destroying your car, in order to not kill a small child that had strayed onto the road - a dispassionate choice to prioritise your car over the child would make you a moral monster.  This scenario makes killing the child somewhat more active, rather than passively letting it drown.  In a sense, the drowning child dilemma is more like a variation of the trolley problem, in which you can choose to inconvenience yourself slightly in order to redirect the trolley onto a track that will destroy some of your stuff and thus save the life of a child.

Reframing the dilemma slightly, we could say that by not saving the life of a child on the grounds of a relatively small cost and minor inconvenience we become moral monsters.

Therefore, using Singer's extrapolation, we become moral monsters when we fail to send money to sub-Saharan Africa to help buy mosquito nets, at least those of us who have discretionary funds that we waste on such things as iPhones, summer holidays or food that we don't need (given that if we are overweight it's almost certainly because we eat more than we need).  Harris and MacAskill used the phrase "we're back standing by the lake" or the like to suggest that by failing to help distant people, or by reaching a conclusion that is repugnant, we would be doing the equivalent of letting the child drown (while noting that salience, or immediacy, is absent by virtue of the distance involved).

I agree that if we let the child drown while we stand passively at the lake's edge or as we walk away, we could be considered to be moral monsters.  I'd find it difficult to forgive myself if I failed to act in such a scenario, because I would not be the person I would want to be, and I'd be deeply suspicious of anyone else that I knew to be willing to let the child drown.  However, I don't agree that we run the same risk of being considered moral monsters due to any responsibility to save distant people.  I do, on the other hand, agree that we have precisely the same responsibility to save distant people as we have to save the child - but by that I mean that we have no responsibility to save any of them, none at all.  Yes, I meant to write that. 

As an ordinary person walking past the lake, we have no inherent obligation to save the drowning child.

Before I get accused of being a moral monster, let me try to explain the thinking behind this.  If I didn't care what sort of person I am, and didn't care what you might think of me or do to me were I to fail to react the way you think I should, then I would have no motivation or obligation to save the child (for the sake of saving the child), even if I could do so at no cost or inconvenience to myself.  However, none of us, perhaps not even psychopaths, live in that sort of world.  We live in the sort of world in which a real, if not inherent, obligation emerges when there is a pressing, obvious and reasonable demand for our assistance - because we do care about what sorts of people we are and we do care about what others think of us and what they might do to us.  So we take on board an expectation that we should attempt to save a child drowning in a lake if we can do so safely.

What is not expected is that we should go out of our way to address a vaguely defined need for assistance that is largely invisible.  This is partly why the crystallisation of that need, via advertising or someone knocking on your door, tends to work.  A level of expectation is only established once we are aware of the need and, to be really effective, there should be some threat of shaming involved, meaning that someone else must know that I am aware of the need.  If I can avoid donating time or money without feeling bad about myself or being shamed for my selfishness, why shouldn't I?

(As an aside, the donation of time and money is more obvious in the US than it tends to be in other countries - acts of charity are certainly more ostentatious.  Charity is simply not expected to the same extent in most other countries, and people will rarely ask as to whether you volunteer time and/or money to any worthy causes.  Spending quality time in your garden may well be more highly regarded than being a "busybody" or "do-gooder".)

So, getting back to Harris and MacAskill's discussion of Singer's drowning child argument, I agree wholeheartedly with the idea that the reluctance to donate to worthy-but-distant causes is related to a lack of immediacy or salience.  But I disagree that this is a truly moral issue: we have no more objective obligation to save the child drowning in the lake than we have to save the child being bitten by mosquitoes in Africa, and any obligation we do have is subjective, dependent only on how much we care about our place in the world.

As a society, we might choose to build into our ethical ruleset the notion that ostentatiously donating to worthy causes irrespective of distance is a requirement - and there may well be survival and quality of life benefits in doing so - but until then, our reluctance to donate time and effort to distant causes is no more than a psychological issue that charities need to deal with.