This is the first in what I hope will become a series of short(ish)
articles in which I address ethical problems and dilemmas from the perspective
of the Ethical Structure – as introduced
in the Morality as Playing Games series.
---
The Trolley Problem is a thought experiment
based on a set of scenarios involving a runaway “trolley”. Most of us would think of it as a runaway
train or carriage, or perhaps a tram, maybe even a streetcar or a cable car
(San Francisco style, not St Moritz style).
The originator – Philippa Foot – even referred to a tram rather than a trolley,
but she was British, and some Americans, when they got on board, used their own
strange vernacular. So now we’re stuck
with talking about the trolley problem, despite a trolley (according to Google) looking
rather different from a runaway train.
Fundamentally, the problem is about making a choice between A) not acting
and thereby letting a group of (many) people die and B) acting and thereby
killing a smaller number of people but saving the group in the process. You could, for example, save five by
switching tracks so that a runaway train kills one worker rather than five. Or you could push the fat man standing next
to you onto the tracks to stop the train, which would kill him but save the five. Or you could just let the scenario run its
course without intervening.
There are a number of variants of this problem, intended to pick away at the key
issue, which is how we tend to be utilitarian in one situation and deontic in another. In other words, most (ie 85%) would flick the switch to save five at the expense
of one, but even the most utilitarian among us baulk at getting our hands metaphorically
dirty by pushing the fat man onto the tracks – only 12% would. I note the fact that it is a “fat man” who is
in the firing line. The reason the man
is fat is to ensure that he has the bulk to stop the train, but there’s also a
significant bias against fat people (who are
seen as slothful and greedy, with insufficient self-control), which probably makes
it easier to push one of them under a train.
I suspect that significantly fewer than 12% would be willing to kill a healthy,
well-proportioned woman or a child in order to stop the train.
(Imagine some sort of situation where disruption of power would divert the
train onto a safe track, but the only way to do this is to push a smaller
person than yourself through a crack where they would be fatally electrocuted,
and their sacrifice would mean that five others would be saved. Even as I write this I feel myself recoiling
strongly from the idea – I do note that it’s a step away from using the body of
the fat man to wedge under the train’s wheel and thus stop it. I feel that the more Rube Goldberg-like the mechanism
for killing a person to stop the train, the less likely we are to feel justified in
using it when the death happens in the first step. I suspect that if the death happens later in
the mechanism, we might not be quite so squeamish.)
So, if the action is remote, we are more willing to be utilitarian – we’ll let
(or make) one die to save five. But if the action is up close and personal, we
are deontic followers of the injunction “thou shalt not kill”. How do we explain this?
In terms of the Ethical Structure, our primary concern is our continuation
as a moral agent, in other words our survival.
And our chances of survival are increased when we clearly signal to others
that we are not a threat. There is a
much more worrisome signal sent when we act to kill someone than when we fail
to act to save someone. Recalling the
pyramidal structure, “do not kill (destroy)” is at the pinnacle. There is no specific “do not fail to save”
but there is quite likely to be a general rule that, if you can, you ought to
try to save people, which falls into the “do not break the rules” category right
down the bottom. It’s for this reason
that it takes in the order of five lives saved to motivate us to (remotely)
cause the death of one person.
It’s this “remotely” that is key. As far as signalling goes, there is a substantial
gulf between flicking a switch (even if, as a consequence, someone dies) and physically
putting your hands on someone and directly causing their death. Few of us want to be the sort of person who
kills another person (because never having killed makes it much easier to convey the impression
that we would never kill anyone and that we are therefore no threat to others). It’s much more palatable to see yourself as
the person who saved five lives by just flicking the switch when you can gloss
over the consequences.
The tuning of our positions with regard to each scenario (the vanilla trolley
problem and the fat man scenario) indicates that there is variation across
the population as to when to break with the ethical structure. Only 85% of the participants in an experiment
were willing to switch tracks and save five.
I suspect that, for the most part, the remaining 15% were either overly
cautious (meaning that they’d be roadkill in hard times) or overly coy (perhaps
meaning that, in reality, they’d be more willing to kill, but you wouldn’t be
aware of it until the blade was sliding into your chest) or they just didn’t
think it through. The proportion could probably
be determined by pushing the numbers of potential casualties up. If you’ve got someone holding out when there
are 20 lives stacked up against one, then there’s a serious problem with that
person. (Of course, if that one life is
that of your mother, or your beloved, or your child, then it’d be
understandable, but in the scenario they are all nameless strangers.)
The bottom line is that, when seen through the prism of the ethical structure,
it is no mystery that we lean towards utilitarianism when forcing the train to
switch tracks to kill one and save five but lean towards deontology when it
comes to more directly sacrificing the life of another person to save the five.
---
It’s interesting to think about the trolley problem in terms of psychopaths, or people who have damage
or some other impairment in their dorsolateral prefrontal cortex. Psychopaths would probably fall into the 12%
who would kill the fat man, since they are strongly utilitarian. On the other hand, why would they care about the
five at all? As a psychopath, would you even
care that the larger group was going to die?
After all, it’s not your problem.
In the article on psychopaths, the authors talk about “wrongness
ratings” and this might be another area where the Ethical Structure can be
brought to bear.
It is not the case that flicking the switch to redirect the train into one
person rather than five is good and pushing the fat man into the path of the
train is bad. They are all bad
options. There is a problem in many
discussions about morality, even those by ethicists who should know better: we cannot point to an action associated with
the trolley problem and say “this is the right thing to do”. At best we can point to one option and say “this
is less wrong than the other option” and try to justify it. We can do wrong via an act of omission and
let five die, or we can do wrong by an act of commission and kill one.
The study assessed only how wrong the act of commission was and effectively
ignored how wrong the potential act of omission was. A valuable variation of the study would be
one in which the participants were asked to assess the wrongness of both the
act of commission and the act of omission before being asked how
likely it would be that they would perform the act of commission. Would anyone say that killing the fat man was
less wrong than letting the five die and also say that they would not kill the
fat man anyway? It’s possible that some people
would have the self-awareness to admit to that: intellectually
they are aware that it’s the better thing to do (in terms of the
greater good), but they simply could not bring themselves to do it (because,
like, you know, they aren’t psychopaths).
---
Parenthetically above, I mentioned Rube Goldberg machines.
The idea behind a Rube Goldberg machine is that there is a ridiculously
complicated series of steps to achieve a relatively simple task – and the steps
themselves are likely to be ridiculous. I
was going to suggest that if the death of a stranger (fat or otherwise)
involved in saving the five were a side-effect, then that would make it far more
palatable than if it were an essential part of the “machine”.
I also noted that there are a number of variants of the trolley problem, some
of which involve relocating the fat man – for example the loop variant in which
the train is diverted temporarily onto a side track on which there is a worker
who is large enough to stop the train; if he weren’t there, the train would
continue around the loop and kill the five anyway.
It occurs to me that, in the initial version, the death of the single
worker is in fact a side-effect. If we break
up the problem differently we can see that a train is heading towards five workers
who will be killed. We can flick the
switch without any cost to ourselves and divert the train onto another track to
prevent the five deaths. It would be
difficult to argue that we should not act to save those five people. We should, of course, check to make sure that
our action is not going to result in a worse outcome (for example if there were
six people on the other track), but so long as the consequences of our action are
not worse than the consequences of inaction it would seem to me that failure to
divert the train is a moral failure, totally aside from any considerations of “greater
good”.
On the other hand, choosing to kill the fat man is a moral failure because you
will cause an instrumental death, rather than a consequential one. The impression that one is a utilitarian
decision and the other a deontological decision could well be an illusion.