A Brown Professor of Cognitive Sciences for over 30 years, Steven Sloman heads the Behavioral Decision Sciences concentration within the Department of Cognitive, Linguistic, and Psychological Sciences (CLPS). Professor Sloman changed the way we think about thinking and reasoning in his 2017 book, The Knowledge Illusion: Why We Never Think Alone—a book he co-authored with a Brown alum and former student, Phil Fernbach. Professor Sloman’s work “hammer[ed] another nail into the coffin of the rational individual,” according to historian Yuval Noah Harari in his New York Times book review, and perhaps no one is more qualified to do so: Professor Sloman has spent much of his career examining how we make decisions, both rational and irrational, about policy and politics.
Ariella Reynolds: First question. How do toilets work? Now, of course, I ask you that question because you and your former doctoral student, Phil Fernbach, found out that most people not only don’t know the answer to this question but think that they do. After trying to explain their answers, people discover that they don’t have as great a sense of understanding as they thought. You called this the ‘Knowledge Illusion.’ So the real first question: When people tweet about how Hamas works, how Covid-19 works, or even how the last presidential election worked, do they ever realize that they don’t know what they think they know? And how do they come to realize that?
Steven Sloman: In general, people’s understanding of how things work is inflated, which they usually discover when they try to explain it. I guess it depends on the nature of the tweet, right? If their tweet is an effort to explain how something works, then maybe they do know because they happen to be an expert on that topic, in which case they had a strong sense of understanding before and they maintain that strong sense of understanding. Another scenario is that when they write the tweet, they realize that they don’t understand. Then either they abandon the tweet, or they go and look it up and become more informed. The truth is, I think what happens mostly when people tweet—and I don’t tweet myself, so this is just a guess based on my understanding of normal discourse in other contexts—is that when people say things to others, they’re generally not engaged in a process of explanation. They’re not spending their time trying to explain how something works. What they might be doing is stating their values. That’s something we do all the time, and that’s something that doesn’t require much understanding. We might be talking about our community and who agrees with us and who doesn’t. We might just be railing against the people we hate. That’s what Trump does all the time. So explanation is relatively rare, and I would be surprised if you see a lot of it on Twitter, although I don’t know because I don’t engage in it.
AR: If people were to realize that they don’t know what they think they know and they turn to the hive, to this larger knowledge community, whom are they turning to exactly? Because if they’re turning to other people on social media, then aren’t they just dipping into that same poisonous well?
SS: The first point is that they depend on their hives even before they tweet. They share a view with their tribe, and hence have this strong sense of understanding, which often makes them feel very strong in their positions and their attitudes. If they’re tweeting about something and discussing it, most of the time they’re discussing it with people who agree with them, because that’s what human beings do. As a result, they’re just going to strengthen their position, because they’re going to get a lot of positive feedback for what they think. On occasion, they’ll get negative feedback. They’ll run into someone who totally disagrees and maybe insults them. The standard response to that is to strengthen your position, not weaken it. Because now you not only have a position to state, but you have a position you have to justify. And what human beings tend to do is generate justifications fairly automatically. So they’ll come up with some kind of justification that, in the best case, could be an explanation. I mean, it could be a serious justification, but it could also, again, just be a statement of values, or an argument from authority, or something else. Something that doesn’t have real substance, or at least doesn’t bring any new information into the conversation, but that nevertheless makes them feel more strongly about the position they already held.
AR: So even if we get pushback, we’re still not inclined to change our beliefs.
SS: Certainly not inclined. The interesting question is what happens if you’re picked up out of your community and put in a new one, where all you’re getting is pushback. Everybody around you disagrees with you. Is that enough to cause people to change their minds? And clearly, sometimes it is, right? Like university students who, say, come from a red part of the country and then attend a blue-state liberal arts college. They’re now suddenly surrounded by completely different points of view. Some of them stick to their guns and don’t change their minds, but a lot of them do. It creates some friction when they go back home. Essentially, they’re kind of caught between two worlds.
AR: Exactly. Now, switching gears just a bit, we see a lot of studies emerging on the proliferation of political misinformation on social media. What kind of a knowledge ecosystem do we have out there right now? Do you think that this system is helping or hurting us as we wrestle with these challenging political and social issues?
SS: The first thing I’d like to say is that I don’t think our media environment is fundamentally different than it’s ever been in terms of its accuracy. Yes, there’s a lot of fake news, but there has always been a lot of fake news, especially during times of war and crisis. If you go back to what newspapers reported about the enemy during World War I, for instance, the enemy was dehumanized. In that sense, fake news is not a new phenomenon. We used to call it propaganda, right? What has changed is the speed with which information gets out. But notice that it’s the speed not only of false information but also of true information. There is some evidence that false information travels faster and farther than true information, but both kinds of information are much more available than they ever have been before.
AR: That’s a more optimistic way to look at it than I’ve seen, for sure. I see a lot of people concentrating solely on false information.
SS: With the fake news, the thing that worries me the most is “deep fakes,” which are brand new because they replace sensory experience with something that’s not real. Those might have a very different kind of effect depending on how they’re deployed.
AR: Over 36 student groups at Harvard signed a letter saying Israel was responsible for Hamas’ attacks. Days later, several groups retracted their statements, with a few saying they hadn’t even seen the letter. You’ve talked about how people are willing to outsource their opinions, relying on the judgment of others about policy issues even when they don’t know what those policies are. To what extent do you see this outsourcing as a driver of political and social unrest today?
SS: It is the driver. That is the dynamic by which social unrest happens. There are thought leaders and action leaders who make a move and try to get people to follow them. When they succeed, they can harness the power of tens, hundreds, thousands, and sometimes millions. And those tens, hundreds, thousands, millions generally do not understand the issues in great depth. I realize that’s an incendiary claim in some circles, but that’s what the data show. Political scientists have been showing for decades now that the average person is relatively ignorant about any particular policy. Of course, some people know a lot about specific policies. The point is, that’s the exception. Even the experts tend to have only narrow domains of expertise. They don’t know everything about everything. So mostly we’re operating in a kind of vacuum, a vacuum of ignorance. It has to be that way because we’re limited human beings with limited memory, limited time, and limited processing capability, and we can only know so much. So we have to depend on others, and there’s nothing wrong with depending on others. We’ve done incredible things as human beings by depending on others. The ability to depend on others cognitively is remarkable, but it does lead to this problem where, in politics, you’re going to find cases where there are huge numbers of people on board who don’t understand what they’re on board for.
AR: Do you believe that it is our fault that we’ve locked ourselves in these epistemic bubbles and echo chambers?
SS: That’s an interesting question. I believe it’s the human condition. But that fact doesn’t mean it’s not our fault. It’s also the human condition to get cut and bleed, but if we let someone bleed to death, that’s our fault. It’s the human condition to live in silos, to be surrounded by some people and not everyone, to be exposed to some information but not all information. It’s the human condition to share values and beliefs with our tribe. I don’t think there’s any other way to be a human being, except to be more like a cyborg, but then you’re not a human being anymore.
AR: What are some strategies for how to get to that small minority that doesn’t fall for the knowledge illusion as much?
SS: The data show, sadly, that you can become more reflective, but generally only within a certain domain of practice. Physicians are more reflective about diagnosing their patients’ conditions, especially if they’ve been trained to be or if they’re good doctors. Yet, when they go home, they’re not reflecting on who’s going to win the football game. It’s very hard to be reflective in general about everything.
AR: I’ve noticed that ideology seems to be driving a lot of socially mediated misinformation. Could you walk us through some of your findings about the power of ideologically based reasoning? How do we transcend ideology to make more informed choices?
SS: I did some research with an honors student at Brown named Mae Fullerton ‘21, and we were looking at how to predict how an individual would respond to Covid-19. We found that the best predictor of people’s willingness to follow mitigation procedures was their political partisanship. It was a better predictor even than whether they had risk factors, like immunodeficiencies, for instance. And in most cases, it was a better predictor than their understanding of how Covid-19 was transmitted. In later work with Mugur Geana and Nat Rabb ‘14, we showed the same thing in a different way. There have been lots of studies now showing that the best predictor of how people respond to Covid-19 is their political party. So that to me is direct evidence that people aren’t thinking through their attitudes: even in cases that involve life and death, they’re governed by the views of the people around them, and they don’t even engage in an analysis of their self-interest.
AR: So how do we get past that? How do we transcend that?
SS: If I knew, I would run to be king of the world. Look, I don’t know what the right answer is. I am in the process of finishing a book right now that is about sacred values versus consequentialism. The argument is that the reason we’re satisfied by limited subjective understandings of things, and the reason we’re unwilling to compromise and willing to take certain actions, is that instead of framing issues in terms of what the consequences of one or another policy will be, we frame issues in terms of our sacred values. If someone on the right is thinking about gun ownership, they’re just not thinking about what the consequences of different policies are. What frames all their reasoning is the sacred value that they claim comes from the Constitution. When we frame things in terms of sacred values, things seem much simpler than they are, and we become much more intransigent.
AR: You would recommend speaking more in terms of the consequences rather than the values, but that’s more difficult, so fewer people end up doing that.
SS: Exactly.
AR: Coming back to what you said earlier about being king of the world—or in this case, of the country—if you were running your presidential campaign, and you wanted to win, how would you want voters to become more knowledgeable about you?
SS: In this book that I’ve just written, I have a section where I talk about this question, and I say, imagine a candidate, Terry, who wants to win. Terry has an opponent on the left who’s pushing left-wing sacred values and an opponent on the right who’s pushing right-wing sacred values. The question is, what should Terry do? Should Terry engage in a different kind of narrative that pushes moderate sacred values, or should Terry focus on consequences? I think we saw a real-life version of this to some extent when Clinton ran against Trump. Clinton took a very consequentialist position. Her campaign was very policy-wonkish. She had a view on everything. And Trump just said, “This is good, and this is bad.”
AR: Not much nuance there. For sure.
SS: So, if I wanted to win the election, I would probably tell stories and push my sacred values, even if I didn’t think that was the right thing to do.
AR: In The Knowledge Illusion, you use the analogy of Neo in The Matrix, who took the red pill to wake up from the illusion of reality. I couldn’t help but think of the traitor in the film, who countered that even though he knows the steak he’s eating doesn’t exist, ignorance is bliss. Misinformation is arguably appealing, as you noted—it goes viral easily and plays to confirmation bias, since sacred values are a lot easier to produce. But knowledge based on research and evidence? Not so much. How can we get more consumers of political news, for example, to take the red pill and become aware that neither CNN nor Fox tells the whole story?
SS: Again, I’ve got to put on my “I wish I were king of the world” hat. You’re not asking for a red pill or a blue pill. You’re asking for a silver bullet. What’s the solution to this lifelong societal problem? I don’t know what it is. It could be regulation of the media, but that requires having a government that’s invested in taking a consequentialist perspective or at least in making the citizenry wiser.
AR: Would you be concerned about censorship?
SS: Absolutely. Some of this has already happened: Fox was sued and had to pay a huge amount of money, and there’s been a lot of conversation recently in the EU about regulating social media.
AR: Moving down a slightly different path, to what extent are you concerned about generative artificial intelligence (GAI) as a kind of hive, a distillation of the knowledge of various communities?
SS: Think about the alternative or what we’ve done in the past. We’ve turned to other people for answers, but we don’t know how other people work. It strikes me that the advice we get from other people is at least as mysterious as the advice we get from GAI. I think in that sense, I’m less concerned about it than a lot of people are. My concerns about GAI are more about the unforeseen consequences of having a brand-new technology change pretty much everything we do. It could be that we’re living very different lives in the future in the same sense that we’re living very different lives now because of the invention of the television. But in terms of relying on something we don’t understand, I think it’s at least as understandable as other human beings. If you ask a politician why you should give them money, presumably often it’s because they want to pursue political power, right? But that’s not what they’re going to say when you ask them for justification. People have ulterior motives, but sometimes people just don’t know what’s motivating them. People can make things up that don’t necessarily correspond to reality. GAI can do the same thing, but at least with GAI, we also have the potential to probe the machine in a way that we can’t probe humans.
AR: It sounds like you’re less concerned about the dangers of AI, potentially, than a lot of the other scholars I see writing about it right now, who are calling it the harbinger of destruction.
SS: From my perspective, GAI has offered the biggest insight into the question that I’ve been pursuing for decades, namely how the mind works. A bigger insight than anything else has offered. It’s not the full story, but it’s amazing what it can do.
AR: What are you working on now? What should we be on the lookout for from you?
SS: I’m most excited about this book, which is, more than anything, a decision-making textbook. It makes a distinction that I think is important and underappreciated: between people reasoning about consequences and people reasoning about sacred values. I’ve been doing a lot of studies with colleagues on the difference between framing things in consequentialist terms versus sacred-values terms. We have found that when things are framed in terms of sacred values, people become less willing to compromise, they become more willing to act on their beliefs, and they report a greater sense of understanding. That’s the work that I’m most excited about.
*This interview has been edited for length and clarity.