Episode 155
The Trolley Problem
May 4, 2021 - Philosophy - 22 minutes

A trolley is running down the tracks and going to kill five people.

You can switch it to another track where it will only kill one person.

What is the right thing to do?


Transcript

[00:00:00] Hello, hello hello, and welcome to English Learning for Curious Minds, by Leonardo English. 

[00:00:12] The show where you can listen to fascinating stories, and learn weird and wonderful things about the world at the same time as improving your English.

[00:00:22] I'm Alastair Budge and today we are going to be talking about The Trolley Problem, the tough philosophical and ethical question that asks us to consider whether we would sacrifice one life to save others.

[00:00:37] You may well have heard of this problem before, but we are going to explore all of the different ideas around it, diving into the ethics and philosophy of how, and under what conditions, certain decisions can be considered acceptable.

[00:00:53] It might seem like a theoretical and unrealistic question, but we will see that it’s actually one that is very relevant to our lives, with the development of autonomous vehicles, of self-driving cars.

[00:01:07] I want to thank Carmine, a recent economics graduate from the University of Naples, for this suggestion.

[00:01:13] It’s an awesome idea, and I hope you enjoy it.

[00:01:17] So, let’s not waste a minute, and get stuck in right away.

[00:01:21] As a quick administrative note before we start, a trolley is another word for a tram, a one-carriage train.

[00:01:31] Whether it’s a trolley, a tram or a train doesn’t really matter for the purposes of the problem - the point is that they are all big, heavy objects that will probably kill you if they hit you.

[00:01:43] There are several variants of the trolley problem, but they all go something like this.

[00:01:50] There is an out-of-control trolley going down the tracks.

[00:01:54] Ahead, on the tracks, there are five people tied up and unable to move. 

[00:02:00] The trolley is headed straight for them. 

[00:02:03] You are standing some distance away, next to the lever that operates the switch.

[00:02:10] If you pull the lever, the trolley will switch to a different set of tracks.

[00:02:16] However, you notice that there is one person on this set of tracks. If you pull the lever, the trolley will move to the other set of tracks and kill the one person. 

[00:02:29] So, you have two options. 

[00:02:31] Do you do nothing and allow the trolley to kill the five people on the main track?

[00:02:37] Or do you pull the lever, diverting the trolley onto the other track where it will kill one person?

[00:02:44] What is the ethically correct option? 

[00:02:47] Or, to put it simply, what is the right thing to do?

[00:02:52] Although there have been similar questions proposed for centuries, this question was really popularised by two women - an English woman called Philippa Foot, and an American called Judith Jarvis Thomson.

[00:03:07] Foot’s original version first involved a judge, before introducing the idea of the trolley, or tram.

[00:03:15] Her version proposed this hypothetical situation.

[00:03:20] Imagine that there is a trial for a particular crime, and a judge cannot decide who is guilty.

[00:03:28] They cannot find the person who committed the crime.

[00:03:31] Outside, some rioters are demanding that the judge find someone guilty of the crime, and that this person be put to death.

[00:03:41] If the judge doesn’t find someone guilty, these rioters, these protestors will take revenge on a particular section of the community, killing five people. 

[00:03:53] Given that the judge can’t find the real guilty person, he decides that the only way he can avoid these five people being killed is by finding an innocent person and sentencing him to death for the crime.

[00:04:09] In this case, the judge decided to kill one innocent person to save a group of five innocent people.

[00:04:16] Now, Foot expanded on this and asked us to imagine a pilot in charge of a plane that is about to crash. 

[00:04:25] The pilot can choose to crash into a less inhabited area, into an area with fewer houses, to kill fewer people. 

[00:04:34] Should the pilot do it?

[00:04:36] Or, Foot proposed, what if instead of a plane it was a tram, a trolley, and the driver could flip a switch and kill only one person instead of five?

[00:04:47] What is the right thing to do?

[00:04:49] Now, in both cases, Foot showed, the result is the same. 

[00:04:53] There is the exchange of one person’s life for five lives. 

[00:04:59] But why is it that most people would say they would flip the switch and allow the train to kill one person in order to save the five, yet they wouldn't agree that the judge did the ethically correct thing by finding an innocent person and sentencing them to death to save the five other people?

[00:05:20] Since the original publication of Foot's article in 1967, there have been multiple developments and variants on this problem, which ask us to consider how our views change depending on the circumstances.

[00:05:35] For example, in 1976 Judith Jarvis Thomson proposed an alternative with a surgeon, with a doctor.

[00:05:44] Imagine that there is a brilliant surgeon with five patients, each in need of a different organ.

[00:05:52] One needs a heart, another needs a new lung, another, a new liver, another, new kidneys, and the final one needs a new stomach.

[00:06:03] Each of them will die without that organ.

[00:06:07] Unfortunately, no suitable organs are available to perform any of these five transplant operations. 

[00:06:15] A healthy young traveller, just passing through the city in which the doctor works, comes in for a routine checkup; there's nothing wrong with him.

[00:06:30] In the course of doing the checkup, the doctor discovers that this young traveller's organs are compatible with all five of his dying patients.

[00:06:35] Suppose further that if the young man were to disappear, no one would suspect the doctor; he wouldn't be caught.

[00:06:43] Would it be morally acceptable for the doctor to kill that young traveller and provide his healthy organs to those five dying people, saving their lives?

[00:06:54] Again, the answer might be “probably not”, even though the result is the same, five people live and one person dies.

[00:07:02] There's another version of this that is closer to the original trolley problem and involves a fat man.

[00:07:11] Imagine that you are walking along a bridge and you can see a train running down the tracks.

[00:07:17] Ahead of it are five men. 

[00:07:19] They can’t escape, and the train will hit and kill all of them, if you do nothing.

[00:07:25] Ahead of you, on the bridge, is a very fat man.

[00:07:30] You know that you can push him over the bridge onto the train tracks, and he will stop the train.

[00:07:37] The fat man will die, but the five men will be saved.

[00:07:42] What should you do?

[00:07:44] Is it different because you are actively killing a person to save five, rather than simply allowing one to die in order to save five?

[00:07:54] Other variants of this problem complicate it further by including emotion.

[00:07:59] Let’s say that instead of that one person on the railway tracks being a random person you don’t know, what if they were your son, daughter, husband, wife, brother, sister, mother or father?

[00:08:12] Or what if they were someone you knew was evil - the complete opposite of someone close to you?

[00:08:19] How would your decision-making process change, given these differences?

[00:08:25] Obviously, there are no right or wrong answers here, only moral judgments of what we believe to be right and wrong.

[00:08:33] Is there a difference between actively killing someone and allowing someone to die, if that was what was going to happen anyway?

[00:08:41] A utilitarian view, which you can learn more about in episode 116 on Jeremy Bentham, tells us that we should flip the switch and allow the one person to die in order to save the five, because this is what causes the greatest happiness overall.

[00:08:59] Five lives are worth more than one.

[00:09:02] Not only would that decision be allowed by a utilitarian, but it would also be a morally better choice.
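As a minimal sketch of that utilitarian arithmetic - purely illustrative, and assuming every life counts equally - the whole decision boils down to comparing two numbers:

```python
# A toy utilitarian comparison of the two classic trolley options.
# Assumption: every life counts equally, and nothing else matters.
options = {
    "do nothing": 5,      # the trolley kills the five people on the main track
    "pull the lever": 1,  # the trolley is diverted and kills the one person
}

# The utilitarian choice is simply the option with the fewest deaths.
best_option = min(options, key=options.get)
print(best_option)  # prints: pull the lever
```

Whether human lives should ever be reduced to a comparison like this is exactly the criticism we will come back to later.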

[00:09:11] There is, of course, an alternative view that merely participating in something that will result in the death of one person is morally wrong. 

[00:09:21] The view goes that it isn't your fault that the train is running down the tracks, so you aren't responsible for the deaths of those five people; but if you flipped the switch to allow the train to kill the one person instead, well, you would take some responsibility for that person's death.

[00:09:40] If this is how you would think about the situation, let me put a spanner in the works, let me complicate it further by proposing to you an alternative.

[00:09:51] Imagine that there was a train running down the tracks.

[00:09:54] You could see that it was on course to hit five people, killing them all. 

[00:09:59] You could press a button and move the train to another track, where there was only one person. Instead of killing five people, only one person would be killed.

[00:10:12] But there was also another button that would switch the train to another track altogether.

[00:10:18] This track was empty, and nobody would be killed.

[00:10:22] I imagine you would say that the correct course of action would be to move the train to the empty track and save everyone’s lives. 

[00:10:30] Of course it is. 

[00:10:32] But the point is that in this case you have involved yourself in the situation, you have changed the natural course of what was going to happen. So if you would do it to save five lives, why wouldn't you do it to save four - why wouldn't you save the five people and allow only one to die?

[00:10:52] Now, the trolley problem, or the train problem, has come under a lot of scrutiny, and there are plenty of criticisms of it.

[00:11:01] Of course, it's unrealistic.

[00:11:03] It’s theoretical.

[00:11:04] There are no situations in which we know exactly, in which we know with certainty, what will happen. 

[00:11:11] So asking us to make these moral and ethical choices in situations of complete certainty is unrealistic, and not even useful.

[00:11:21] There’s also the point that most moral judgments do not involve life and death, thankfully. 

[00:11:28] The trolley problem is too extreme, too unrealistic, and therefore it’s not actually helpful when thinking about moral or ethical decisions, so the criticism goes.

[00:11:40] In real life, there are very few times when anyone needs to make these kinds of decisions, and focusing on this kind of question deflects attention away from, it moves the focus away from, more important, more realistic ethical and moral questions that we should spend more time thinking about.

[00:12:01] Finally, and perhaps most importantly, it reduces human life to a number. 

[00:12:08] Of course, 5 is greater than 1, but that’s not how life works. 

[00:12:13] Real life has real people; they are all different, and humans make decisions in different ways. To reduce the entire problem to arithmetic, to an algorithmic calculation, isn't how life works.

[00:12:28] It makes you think of the famous quote that was reportedly said by Josef Stalin: “One death is a tragedy, a million deaths a statistic”. 

[00:12:36] And Josef Stalin, of course, isn’t one of history’s great moral philosophers.

[00:12:43] Yet the trolley problem is again becoming increasingly relevant.

[00:12:47] But this time, we aren’t asking humans to make moral judgments, we are asking machines, we are asking algorithms.

[00:12:56] Of course, these algorithms need to be told what to do, they need to be given instructions by humans.

[00:13:03] I’m talking here about autonomous vehicles, about self-driving cars.

[00:13:08] With a human driver, we rely on humans to make decisions about what to do.

[00:13:13] If a pedestrian steps out into the road unexpectedly, the driver would swerve, they would move quickly to try to avoid them.

[00:13:23] If there were a situation in which your car was out of control, and you had to choose between hitting a group of people and swerving hard to the right so that you would hit only one person, then at the moment it is you who has to make that choice.

[00:13:39] Thankfully, it isn’t a choice that most of us will ever have to make, but still, it is a possibility.

[00:13:45] With self-driving cars, they drive themselves. 

[00:13:49] Complex algorithms, complex computer code tells them what to do in certain situations.

[00:13:55] For the vast majority of the time, these are engineering decisions, not moral decisions.

[00:14:02] For example, if a self-driving car sees a stopped car ahead of it, it should slow down or move to the right or left. 

[00:14:11] If there is a ball that bounces into the street, it should stop.

[00:14:15] From a technological point of view, of course, this is amazing, but from a moral point of view, there isn’t a huge amount going on.

[00:14:24] You don’t need to make moral decisions about turning left or right or slowing down in certain areas.

[00:14:31] But, what happens in a situation where a car does have to make a choice about where to cause the least amount of harm?

[00:14:39] Let’s return to our situation of either going straight into a crowd of people or swerving to only hit one person. 

[00:14:48] What should the car do?

[00:14:50] Obviously, the software cannot foresee every potential situation and tell the car what to do every time, but it does need to provide a framework for the car to make decisions on its own.
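As a purely hypothetical sketch of what such a framework might look like - the names and numbers below are invented for illustration, not taken from any real car's software - it could rank whatever manoeuvres are currently possible by their estimated harm:

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    collision_probability: float  # estimated chance this manoeuvre ends in a collision
    people_at_risk: int           # how many people would be hit if it does

def expected_harm(m: Manoeuvre) -> float:
    # A crude harm score: probability of a collision times the people at risk.
    return m.collision_probability * m.people_at_risk

def choose(manoeuvres: list) -> Manoeuvre:
    # The framework doesn't list every situation in advance;
    # it simply ranks whatever options the car has at this moment.
    return min(manoeuvres, key=expected_harm)

options = [
    Manoeuvre("brake in lane", 0.9, 5),
    Manoeuvre("swerve right", 0.7, 1),
]
print(choose(options).name)  # prints: swerve right
```

Every number in a sketch like this hides a moral judgement about whose safety counts, which is exactly the kind of question the rest of this episode is about.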

[00:15:05] The Trolley Problem is therefore a useful way of thinking about this, but it’s, of course, imperfect.

[00:15:12] It gets even more complicated as we think about the implications of software making moral or ethical decisions.

[00:15:20] Let's say that a self-driving car saw a motorbike ahead that was out of control and about to crash into a group of pedestrians.

[00:15:29] The self-driving car could brake and stop, thereby avoiding a crash with the motorbike, but the motorbike would crash into the pedestrians, seriously injuring or killing them.

[00:15:39] Or the self-driving car could speed up, hitting the motorbike and probably killing its rider, but saving the group of pedestrians.

[00:15:49] What should it do? 

[00:15:51] You might say, well, it should just brake because the actions of the motorbike aren’t its responsibility, or you could argue that it has a moral responsibility to save the pedestrians if it can.

[00:16:04] How about a different situation?

[00:16:06] Let's say that a self-driving car with you inside was driving over a single-lane bridge and there was a group of 10 schoolchildren who had stepped into the road ahead.

[00:16:18] The car hadn't seen them - they had stepped out quickly - and after doing all of the necessary calculations in a millisecond, the car knew that there wasn't enough time for the children to move, and there wasn't enough time for the car to slow down.

[00:16:34] But, it could swerve to the right and throw itself off the bridge, thereby killing you, the passenger, but saving the group of schoolchildren.

[00:16:44] What is the right thing for the car to do?

[00:16:47] Understandably, most people wouldn’t like the idea that their car could kill them in order to save complete strangers, and it is probably a strange idea for you to think that your car should be making moral judgments on your behalf, and sacrificing your life in the process.

[00:17:04] Should you, as a car owner, or as a passenger in a self-driving car, be able to select the ethics and morals of your car? 

[00:17:14] Could you choose to have a selfless car that would sacrifice itself, and the people inside, in an instant? 

[00:17:21] Or would you prefer to have a selfish car, which placed much more value on the lives of its passengers than on any humans nearby?

[00:17:30] Going one step further, could you say that you absolutely loved kids, and you’d do anything to save anyone under the age of 12? 

[00:17:39] But that you hated animals and people over the age of 70, and were very happy to hit as many rabbits and old-age pensioners as possible?

[00:17:49] Obviously, that last one is an exaggeration, but the point is that the software in self-driving cars needs to be given guidance by humans; we need to give it instructions for what to do in these situations.

[00:18:03] This software has complex machine-learning algorithms, so it does get smarter over time, but we can’t rely on the algorithms to make moral or ethical judgements on their own.

[00:18:17] Going back to our earlier criticism of the trolley problem - that it was too black and white, that it was either life or death, and that it didn't consider the fact that nothing is certain - computers are generally much better than humans at processing large amounts of information quickly, and at dealing with exactly that kind of uncertainty.

[00:18:35] If you need to calculate the probability of certain things happening, the probability of being able to stop in time, the probability of death or serious injury in a certain type of collision, or similar complicated calculations, a computer is infinitely better and quicker at doing this than a human is.
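Here is one invented example of that kind of calculation - the figures are made up purely to show the arithmetic a computer would race through:

```python
# Invented example figures - not measurements from any real vehicle.
p_stop_in_time = 0.15          # probability the car can brake before impact
p_serious_injury_if_hit = 0.8  # probability a collision causes serious injury

# Probability of failing to stop AND the collision then being serious.
p_serious_outcome = (1 - p_stop_in_time) * p_serious_injury_if_hit
print(round(p_serious_outcome, 2))  # prints: 0.68
```

A computer can run thousands of calculations like this in the time it takes a human driver to even notice the danger.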

[00:18:56] But, of course, it’s very complicated.

[00:18:58] And who decides? 

[00:19:00] Is it left to the technology companies building the software, writing the algorithms, to decide what is right or wrong?

[00:19:08] Or should it be a legal matter? 

[00:19:11] Should the law of each country specify how autonomous cars should behave? 

[00:19:16] And if so, how would this be different from country to country? 

[00:19:20] Would the software have to be adjusted based on the country the car was registered in? 

[00:19:25] Or would it be adjusted based on where the car was? 

[00:19:29] Would you have a situation where you were in a self-driving car, and when it crossed national borders its software would automatically update to the moral code of the country?

[00:19:40] It does seem that, although the technology behind self-driving cars isn’t so far away, there still isn’t complete agreement on how they should behave from a moral point of view.

[00:19:52] And this is quite telling.

[00:19:54] These moral questions are ones that humans have been battling with for millennia, for thousands of years. 

[00:20:02] The technology behind self-driving cars, although it is brilliant, is relatively new, and hasn’t taken that long to develop. 

[00:20:11] Yet it looks like the software will arrive before agreement on the ethics.

[00:20:16] And that, for me, is a good indication of what the harder problem to solve might be.

[00:20:23] OK then, that is it for this exploration of the Trolley Problem, what it is, how it makes us think about our relationship with each other, and its relevance for us today.

[00:20:35] I hope it's been an interesting one, and that you've learnt something new.

[00:20:39] You will note that I have not - or at least I have tried not to - give any kind of moral judgment here. That is for you to decide, and there is clearly no right answer.

[00:20:50] I know we have lots of software developers who are members of Leonardo English, so what do you think of this moral problem? 

[00:20:57] Where does the role of the programmer end, and where does the role of the lawmaker, or moralist, start?

[00:21:04] I would love to know what you think.

[00:21:06] You can head right into our community forum, which is at community.leonardoenglish.com and get chatting away to other curious minds.

[00:21:15] You've been listening to English Learning for Curious Minds, by Leonardo English.

[00:21:20] I'm Alastair Budge, you stay safe, and I'll catch you in the next episode.

[END OF EPISODE]

