
BPR Interviews: Dr. Azim Shariff


Dr. Azim Shariff is a social psychologist whose research focuses on where morality intersects with religion, cultural attitudes, and economics. Shariff is one of the researchers behind the Moral Machine study, the largest-ever study of machine ethics, which measured social responses to decisions faced by autonomous vehicles. Shariff currently holds the Canada 150 Research Chair in Moral Psychology at The University of British Columbia.

Neil Sehgal: Germany is the only country to have devised an ethical framework for driverless cars. The guidelines leave open a large number of questions, but how well designed do you find the framework?

Azim Shariff: The fact that you see inconsistencies between what those ethicists have suggested and what you see in the Moral Machine, as well as inconsistencies between what the philosophers and legal scholars have been focusing on and what you see in the Moral Machine, shouldn’t be taken to suggest that the Moral Machine is right, that we should be following what the demos suggests. I don’t think ethics is a democracy. I think that the majority of the public could be wrong on these kinds of issues because they haven’t been trained as the ethicists on the German committee have been. But certain points that the Germans make in an abstract sense seem great, like not taking into account any demographic characteristics. As an abstract idea, that seems to be the right thing to do. Where I think the inconsistency matters is in what type of tension, what type of pushback, you’re going to get between the intuitive moral position of the populace and what the legal scholars, policy makers, and industry decide. They could arrive at the most ethically correct positions, but these positions might not be supported by the public at large, in which case you have tension, and tensions can manifest in all sorts of different ways, including people not buying into the technology, which would negate the whole purpose. Rather than saying that what the demos in the Moral Machine suggests is what we should use as an ethical guideline, it should be taken as a first step in indicating how the public thinks. Then we either figure out how to bring our trained ethical position closer to what the public thinks or try to figure out some way to bring the public closer to what our ethics professionals decide. In terms of the German ethics position, I think a lot of the guidelines are very sensible. Some might be a little impractical, but they’re very sensible from an ethical standpoint.

NS: How do you think we can go about bringing the public closer to what the trained ethicists believe? Your research has also found that most people tend to prefer that others ride utilitarian driverless cars that minimize the number of casualties, but they themselves would prefer to ride in driverless cars that protect the riders at all costs. How do we address these situations?

AS: I don’t think it’s by any means impossible, because norms can change pretty quickly, especially when they’re norms about novel situations that people really haven’t thought about before. People might have an intuition that, after thinking about it, after learning about how other people feel, or after really considering how it works when embedded in a complex system, they come to reconsider and realize that maybe the original intuition wasn’t correct. One possibility is to imagine that the first wave of autonomous, or what are called automated, cars is not the private vehicles that people own, but ridesharing. For ridesharing, if you’re to get into an Uber, 99.9% of the time you will be a stakeholder outside of that particular car: it will be on the road and you will be a pedestrian, another driver, or a cyclist relative to that particular Uber. On the other hand, 0.1% of the time, or 0.01% of the time, you will be the passenger. And if you then ask ‘what should the Uber do?’ you’re now thinking in both contexts. You’re primarily thinking as somebody who is a non-passenger to that Uber. And in that sense, you might think about all the roles that you’re going to be in: ‘I think this Uber should probably not prioritize the life of the passenger relative to everybody else, because usually I’m going to be everybody else.’ And so that makes you recognize that even for the fraction of the time that you’re going to be a passenger in an Uber, you should actually subscribe to the Uber treating all lives equally rather than prioritizing the rights of the passenger, which you happen to be in that moment. If people start thinking about the cars in that context, not as them being passengers, but as them being stakeholders on a road where you could be one or the other, then they might recognize the wisdom, or even just the expediency, of a utilitarian approach. And then by the time it comes to some portion of the population having private, automated cars, they’d be totally understanding and game for them being utilitarian, because the norms have been set.

NS: Do you think that if you repeated the Moral Machine study, let’s say when rideshare self-driving cars are mainstream but private ownership is not, you would find different results?

AS: Yeah, I think that’s totally possible.

NS: The Moral Machine study was published over a year ago. One of the most interesting findings was that the last-place moral dimension, the dimension respondents showed the least preference for, was action versus inaction. Up until this study, most of the legal and philosophical conversation centered on the idea that the car should not swerve off of its intended path, that you should not involve others. In showing that the public doesn’t consider this aspect important, have you noticed a shift in the legal or philosophical conversation?

AS: I don’t know whether I’ve been privy enough to the legal conversations to notice whether the conversation has changed, either in light of the experiment itself or in light of any changes in the industry. But that observation is absolutely right: philosophy and the legal side of things have really been focusing on that inaction-versus-action question, because it makes things so much simpler, especially in the legal sphere. That’s really the only criterion they seem to care about. So if there’s some sort of hypothetical situation where the car could take an action and spare five people by killing one, that’s legally the wrong thing to do, and legally the company would be held responsible, liable for the one death rather than given any credit for avoiding the five deaths. But the Moral Machine experiment found that regular people seem to have the opposite intuition. Partially that’s because they don’t think in terms of legal liability; they think in terms of moral blame and consequentialism. They’re not thinking like companies in this case. Now, if these issues ever came to a head, you would expect some tension between how the companies legally operated and whether the public found what they did to be morally correct. One case that really focused the discussions was the Uber crash in Arizona from about a year and a half ago. There, it seems like they cut a lot of corners, not making ethical tradeoffs so much as expediency tradeoffs.

NS: In the end, the prosecutors didn’t charge Uber in that accident, right?

AS: Yeah, that’s it. They rushed to settle. I think they probably didn’t want a lot of the decisions they made to come out. But there’s been a steady drip, drip, drip ever since. Recently, it was revealed they hadn’t programmed the car to register jaywalking humans; they had just programmed the car to detect humans when they were walking in designated areas. The woman who was walking was detected by the car, but first she was detected as an object. And then, within a six-second span, it started detecting her as a bunch of other things and ultimately decided to take no action. So it’s not as if they couldn’t detect her; they just didn’t program the car to register those people to begin with.

NS: One of the limitations of the Moral Machine study was that 70% of the respondents were male college graduates. That’s not necessarily a representative population. With a perfect sample, do you think you would have found different results?

AS: We did some stratifying with the sample and found that there weren’t particularly interesting demographic differences, though there were some self-interested demographic differences. For example, everybody preferred to save women, but women preferred to save women more than men did. And older people were more inclined to save older people rather than let them die for the sake of younger people. So with an older sample, the pro-young skew would be weaker. You’d have some predictable demographic differences, and in that sense, you’d want a more representative sample. People who are not intrinsically inclined towards answering these types of questions and people intrinsically inclined towards philosophical thought problems or self-driving cars in general might also differ a little bit, and that would be important to figure out. That’s an empirical question, which is actually not that difficult to answer.

What I think was one of the more interesting findings of the Moral Machine was the cultural differences, and the fact that we had a similar slice of the population of each country as respondents. You tended to have the more affluent, computer-savvy, young male populations of each country overrepresented in the Moral Machine, which would mean that any cultural differences we saw were probably underestimates of the real cultural differences that exist. Which is to say, if you got representative samples from different countries, you’d probably see more exaggerated cultural differences than the ones we saw with these limited, similar samples for each country. In that sense, some of the interesting cultural differences that we found might actually be more profound in representative samples.

NS: In the future, who will get the final word on what the ethical decisions are for driverless cars? Will it be individual manufacturers, governments, or industry standards bodies?

AS: From the people in the industry that I’ve talked to, they seem to want nothing more than to have this decision taken out of their hands. They don’t want to be held liable for having made ethical decisions that result in life-or-death outcomes. They would much rather it be a government decision. They’ll say, ‘the government said we have to do X, so we did X,’ and the government is answerable to the people in a democratic system. And as a big-government Canadian, I think that’s probably the way to go, and I think that’s the way you actually maximize the utilitarian approach. Of course, it has the potential to pour cold water on the industry and cool sales of the cars if the ethics that the government mandates are in opposition to what people actually want.

NS: You found three major clusters of cultural variation, an Eastern, Western, and Southern bloc, in the Moral Machine study. If government regulation is the way to go, do you think that within these blocs, the governmental ethical standards will be identical? Or, do you think there’s room for difference within these blocs?

AS: I think there is room for difference. But I hope we don’t see something where some places make the cars draw distinctions based on the socioeconomic status of pedestrians, which we found some countries had a higher preference for. I hope that those demographic or ‘social value’ characteristics don’t actually come into play. But I do think that governmental bodies will be responsive to the social context that they’re in. Certainly in the case of things like jaywalking norms, they might have to design the cars differently: in places where you’re not going to expect any jaywalkers, you’re probably going to have the cars behave somewhat differently than in places where people are walking all over the streets. So a one-size-fits-all programming would run into a lot more problems than something which is flexible to the cultural context.

NS: How do you think we’ll be able to assign blame when there’s a crash involving two self-driving cars? 

AS: It’ll depend on why they crashed. If the crash was caused by some mechanical oversight or some fault in the programming, then you could imagine the car company being held liable. If it was due to unavoidable circumstances like environmental conditions, then it would really depend; we would definitely have to get more specific about what that particular crash situation is. So there are a bunch of nuances. It’s not a one-size-fits-all answer.

NS: How safe do driverless cars need to be before we let them on roads at a massive scale? 

AS: One of the challenges is that, from a purely mathematical, rational standpoint, it would make sense to put them on the road as soon as they are better than the average human, because once you do that, if they really are genuinely better than the average human, then car accidents should start declining. Even if they’re just 10% better than the average human, you have legitimate arguments to put them on the roads. The problem is: who’s going to buy cars that are just 10% better? Probably not very many people. In fact, our research shows that very few people are willing to buy a car that’s just 10% better than the average, partially because everybody thinks they’re a good driver; on average, people think they’re in the top quartile of drivers. People are not going to buy a car that is less good than they perceive themselves to be. Then you’ll need the cars to be better than the top quartile of drivers. So it’s going to be a combination of government regulation (‘your car is not allowed on the road until it is X safe’) and consumer preference (‘I will only buy a car when it is Y safe’). And the gap between X and Y, that’s an empirical question that remains to be seen.

That’s the case for consumer preferences. You might have different situations for ridesharing, public transit, private campuses, cases where you don’t have individual private owners buying the cars. There you might have different standards, where the question is whether people are willing to get into the car rather than buy it themselves. When you get into an Uber, you don’t know how much better or worse the Uber driver is than you; you just decide the Uber is probably safe enough that you’re going to ride in it. That might be a different standard, but the cars should, of course, still be better than humans.

NS: Is that difference between the X and Y something you’re studying right now?

AS: Yeah, and it’s probably considerable, and that’s kind of unfortunate, because it means we’re facing a situation where the cars have to be damn near perfect before somebody’s willing to buy one for themselves. Which is probably a pretty good standard, but it might be an unrealistic one. Another complicating factor is that in order for the cars to get to that level, they need a ton of experience. They need to be on roads before they’re able to get up to that standard of quality. You have a chicken-and-egg problem: you need the cars to be on the road to get safe enough for us to be willing to have the cars on the road.

NS: Are you worried that the focus on ethical issues like the trolley problem will blind experts or the public to other unaddressed ethical challenges surrounding driverless cars, say urban sprawl or environmental issues?

AS: I share your concern about the focus on this question. My bigger concern was that people would have unreasonable fears about self-driving cars: if everybody starts caring about the trolley problem, maybe that’s actually making people overly fearful of the cars. When our research started getting traction, we did a little study and found that that’s not the case. People who’ve heard about the trolley problem issues are not less excited about the cars, and even people who haven’t heard about the problem don’t become less excited when you tell them about it. In terms of the relative emphasis of different issues, I think we can walk and chew gum at the same time. There are enough people interested in this topic, and enough money in this topic, that we can be concerned about all the issues. And I don’t know whether it’s zero-sum, whether focusing on trolley issues necessarily takes away from the focus on other things. In fact, it could potentially direct interest to those other topics. I think the environmental question is a big one. I think the urban sprawl issue is a related big one. And those are topics which I didn’t really think about much until I started thinking about the trolley issue.

NS: Separate from the Moral Machine platform, are there any questions related to automation ethics you’ve been meaning to investigate? 

AS: One of the areas is the labor displacement issue, which has been really popularized in the last year by Andrew Yang and his presidential bid. There are a lot of varying estimates of how fast and to what extent jobs are going to be automated away. But they bring up a whole bunch of really important moral psychology issues about how we deal with a situation in which the nature of work changes. We’re in a world where we actually use work as a signal for so many things. When people describe what a paradigmatic Canadian is, they’ll say something like a polite Canadian. When they describe what a paradigmatic American is, they’ll say a good, hardworking American. That’s always how they describe themselves, right? So there’s a really important connection, especially in the United States, between work and being a good member of society. And if some of those relationships were to change, how does that affect how we see ourselves and each other as members of our society? So that’s been an interesting offshoot of my own interests from the self-driving car stuff.

NS: And what do you think of universal basic income as a solution to that problem?

AS: I think universal basic income is an insufficient solution. Yang kind of sees it as a panacea: $1,000 a month is going to solve every problem under the sun. I don’t think that’s the case. It can address some of the material problems that will come with people losing work, but it doesn’t address most of the psychological problems that might come with losing work. So it can be a good replacement in terms of your material livelihood, but I don’t know whether it’s a good replacement in terms of your psychological livelihood.

NS: I don’t think it’d be fair to end without asking this. Would you pull the lever and kill one person, or would you do nothing and allow the five to die?

AS: Oh, no I’d definitely pull the lever.

NS: And then, in the fat man variant, would you push the fat man off the bridge to save the five?

AS: Yeah, though I’d call him a large man, not a fat man. But yeah, I’d push him. It’s funny, I was just teaching an intro psych class and we were talking about one of my favorite papers from the last few years in psychology. It was one by Jim Everett, where he showed that everybody prefers deontologists to utilitarians. Even utilitarians themselves would prefer a deontologist as their social partner. People don’t like the cold, calculating utilitarians. And one of the reasons why they don’t like them, and this is really fascinating, is that people who use a consequentialist calculus to arrive at their ethical decisions are too vulnerable to doing some creative calculus to figure out that the ethical thing is what they wanted to do all along. Whereas somebody who’s a deontologist is tied to the mast of their ethical decisions. And so you know that they’re predictable and you know they can’t waver based on some sort of self-interest. So despite the fact that I know it’s the unpopular position to take, yeah, I think I’m much more in the utilitarian camp when it comes to those questions.

When I was talking about it with my students, I was mentioning that all the mainstream superheroes, like Batman, are strict deontologists. Think about Batman and the Joker, right? Every year Batman captures the Joker and puts him in jail, and every year the Joker gets out of jail and kills a whole bunch more people. And yet that’s still what Batman does: he never kills. He never breaks his own deontological rule, even though the consequences would be very positive. And I think that’s because those kinds of moral exemplars are attractive to us. They’re the unwavering deontologists who stick to their moral code no matter what.

NS: Lastly, what would you do in the ‘transplant’ variant of the question? You have five patients, each in need of a specific organ, each of whom will die without that organ. A healthy young man whose organs are compatible with all five of the dying patients is sitting in the waiting room.

AS: The short answer is that I don’t think the organs of the waiting room guy should be harvested. I recognize that this is at least superficially inconsistent with my positions on the two trolley dilemmas, and, if I’m being totally honest with myself, I suspect that any reasons I give to explain myself are heavily influenced by a knee-jerk intuitive response. There are a number of rationales I could offer (that it is actually costly from a utilitarian standpoint to have healthy people living in fear of having their organs harvested, that it betrays common sense, that there needs to be a space for individual autonomy), but it’s unclear whether these explanations are what is motivating my implicit revulsion, or whether it’s the other way around.
