The Gatekeepers to Online Discourse: An Interview with Daphne Keller ’95

Daphne Keller ’95 is a law professor at Stanford University and is the director of the Program on Platform Regulation at Stanford’s Cyber Policy Center. She teaches and writes about platform content moderation practices, Internet users’ rights, and constitutional and human rights law, among other topics. Until 2015, Keller served as Associate General Counsel and Director of Intermediary Liability and Free Expression for Google. She has taught Internet law at Stanford, Berkeley, and Duke law schools and is a graduate of Yale Law School, Brown University, and Head Start. 

Sam Kolitch: How did a double concentration in history and modern culture and media lead you to law school? 

Daphne Keller: I was interested in thinking about how information propagates in society and the mechanisms that enable the spread of information, which is what a concentration in modern culture and media allowed me to focus on. On the other hand, the history concentration gave me an understanding of how events happen in the real world and how power is exercised. I came out of this double concentration thinking that I might want to go to law school because I wanted my work and words to have as much real-world impact as possible. This might be a little bit of a ‘head in the clouds’ way of thinking about things, but I wanted to take some of my nerdy, ivory-tower interests and make them as concrete and impactful as possible, which is how I arrived at the decision to go to law school—and I’m very glad that I did. 

SK: What was your professional trajectory at Google? 

DK: I started at Google in 2004 as a product counsel, which meant that I spent a lot of my time with the engineers and product managers who were building and operating products like Gmail, Maps, or Chrome. Whenever they wanted to do something like launch a new feature, I was supposed to look for and solve legal issues and then sign off on the launch of the updated product. It was really a jack-of-all-trades role that forced me to be very nimble and become conversant with aspects of laws from all over the world. I then became the lead lawyer for copyright, the lead lawyer for web search, and ultimately Associate General Counsel, with a title I made up for myself: Director of Intermediary Liability and Free Expression. It’s a lot of words, but in this role I basically focused on the laws that tell platforms what their responsibility is for the content that is shared by their users. At the time in 2012, people thought that this was a niche, weird topic, but it’s in the headlines every single day in 2021. 

SK: Since you oversaw a lot of decisions about what content warranted removal from Google’s various services, are there any memorable requests for take-downs that are indicative of the difficulties of such decisions? 

DK: Very early on in my tenure, a politician in Turkey said that a Turkish news report accusing him of self-dealing wasn’t true and that the report was defamatory and illegal. Part of the difficulty of assessing such an accusation is that the politician’s claim could have been totally valid and the report could have lied about him. The politician could have also been lying and the news report could have been incredibly important journalism for the people in Turkey, since they need to understand what their elected officials are doing. We just could not validate or invalidate the claim—and there are so many cases like this. 

SK: What would you say to someone who questions whether companies like Google really care about our democracy? 

DK: Everyone I worked with—certainly everyone I worked with in my early years at Google—really cared about users’ rights, such as privacy, free expression, and anti-surveillance issues. I would say, though, that this changed somewhat as Google became more successful and more people came into the company who weren’t necessarily as invested in these policy questions. But that early group of people at Google absolutely thought about those issues and raised questions about the company’s values all the time. 

SK: As of late, few laws have faced more political scrutiny than Section 230 of the Communications Decency Act (CDA). What is Section 230 and what is its purpose?

DK: Section 230 is a law enacted by Congress in 1996 that has two main provisions. One provision says that platforms like Facebook and Twitter are not legally responsible for the content shared by their users. There are some exceptions to this immunity when it comes to anything that’s a federal crime, like child sex abuse material or terrorist content, and Section 230 doesn’t immunize companies from copyright issues. The second provision basically says that platforms are not liable if they choose to set content moderation policies and take down speech that they want to prohibit, even if the speech is lawful. 

Congress had three goals when they created Section 230. The first goal was to protect online speech and prevent platforms from being bullied into taking down lawful but important speech, which is a very real problem. The second goal was to encourage platforms to set content moderation policies to maintain a civil space for discourse and enable platforms to remove content that is legal but very offensive and considered harmful—like hate speech and pro-suicide content. The third goal was economic: Congress wanted to shelter new internet companies from the crushing liability risk that they would face without these kinds of immunities. This meant that smaller companies would be able to grow and compete with larger platforms. 

SK: Politicians on both sides of the aisle have called for Section 230 to be repealed. What would this mean for platforms and their users?

DK: Platforms would have to choose between two different directions for their companies if Section 230 were repealed. One path they could take is being so legally cautious that they would have to act like lawyers for The New York Times and vet everything to make sure that all the content on their platform complies with the law. This would result in taking down much more content than the law requires. The other direction they could go in is letting their platforms become a free-for-all, so they can’t be accused of acting as an editor. Neither of these outcomes is something that anybody really wants in the real world, even though we all disagree about what these platforms’ speech rules should be. Having every single legal thing remain on these platforms would create a cesspool, and really aggressive content moderation would not only make everything online bland and anodyne, it would also suppress lawful speech and controversial ideas. 

There are bad takes on Section 230 all across the political spectrum. Conservatives’ bad takes tend to be something like, “Section 230 requires platforms to be neutral.” This is the opposite of what Congress envisioned when they originally passed the law. The liberal bad takes tend to involve blaming Section 230 for speech that is awful and offensive but is perfectly legal because of the First Amendment. So there’s this illusion on the part of a lot of liberals that if we got rid of Section 230, platforms would be forced to take down hate speech. In fact, Section 230 is what encourages and incentivizes platforms to remove hate speech in the first place. 

SK: What do you hope Congress does, if anything, with regard to Section 230? 

DK: I wish they would leave it alone. Section 230 isn’t perfect but once Congress starts messing with it, they’re going to do something dumb (laughs). They’re going to do something with unintended consequences because so many of the problems they think they can solve by amending Section 230 they really can’t fix. If they are going to do something, I think Congress needs to be realistic about how content moderation works in the real world, and they need to be aware of how often platforms receive false accusations, like the Ecuadorian government trying to silence reporting about police brutality, the Church of Scientology trying to silence critics, or businesses trying to knock their competitors out of search results. Also, Congress should involve the courts more and let them determine what counts as illegal content. 

SK: Facebook has 223 million users in the United States, whereas Twitter has about 69 million users here. Do you think that congressional hearings that include both of these platforms are counterproductive, given that there is such a significant difference in their reach?  

DK: Mark Zuckerberg has said a couple of times that Facebook spends more on content moderation than Twitter’s entire annual revenue. So there’s a very meaningful difference in what those two platforms can do. On the other hand, you have to make the cutoff at some point. I wish that congressional hearings on platform regulation would invite people from platforms like Reddit, WordPress, or Wikipedia—all of which also rely on Section 230 because they allow users’ comments on their platforms. By focusing only on the giant incumbent platforms, we get a really distorted impression of what’s possible for platforms to do and what the consequences would be if Section 230 were changed. 

SK: Many conservatives believe that there is rampant anti-conservative bias when it comes to platforms’ regulation of their online content. Is this a valid claim that can be tracked and quantified?

DK: I sympathize with their concern. The fact that we are in a situation with such a small number of gatekeepers to public discourse would make anyone feel paranoid that their group might be on the receiving end of biased content moderation. Indeed, lots of groups—Black Lives Matter, Muslim rights groups, etc.—worry about this. So it’s not just a conservative issue. Yet nobody has a data-driven claim when it comes to biased content moderation. We are just nowhere near having the kind of transparency needed to substantiate claims of anti-conservative bias. Everyone has an anecdote-driven claim, and the anecdotes tend to come from people who have an axe to grind and have a particular political agenda, which distorts the public conversation. 

SK: You have written about how biased content moderation disproportionately affects marginalized communities, particularly when platforms attempt to remove terrorist content. How so?

DK: There’s a very strong reason to suspect that biased content moderation would have a disparate impact on users based on things like gender, race, and native language. In fact, there is a growing body of empirical studies suggesting that over-removal by platforms is not evenly distributed and that it hits marginalized communities harder. For example, there’s a particular risk for people who are speaking Arabic and talking about Islam or about U.S. intervention in the Middle East that their content could be taken down by moderators trying to combat Islamic extremism online. Often, automated tools and speech filters meant to remove violent extremism cannot tell the difference between a video being used for ISIS recruitment and the same video being used in a legal and important way, such as news reporting. There’s actually a striking case of this where the Syrian Archive, a public interest group documenting human rights abuses in Syria for future prosecution, claimed that YouTube mistook its videos for extremist content and removed more than 100,000 of them. Also, automated hate speech filters tend to have a disparate impact on speakers of African American English by falsely tagging them as engaging in hate speech. So increasing reliance on automation poses a lot of risks that have been raised by human rights groups from all over the world. 

SK: Some have argued that Donald Trump’s indefinite ban (or suspension, in some cases) from numerous online platforms should have come earlier. Do you think that there were underlying motives behind his deplatforming, beyond the events he incited at the Capitol? 

DK: These companies certainly have more to fear from Democrats being mad and somewhat less to fear in the short term from Republicans being mad. So their decision to suspend or ban Trump might be a reflection of the outcome of the election. Yet the circumstances of a sudden and violent moment were overwhelmingly clear, and the events at the Capitol probably motivated the companies the most. You can certainly argue that they should have taken action earlier, but these were new facts, and Trump’s ban wasn’t just because of the prevailing political winds. There was something really threatening happening in the world and these companies decided to take a stand. 

SK: A lot of attention is focused on how platforms are able to silence governments and the most powerful people in the world. But your work also focuses on how platforms can expand governments’ power in problematic ways. So how might foreign governments shape how online speech in the United States is regulated? 

DK: U.S. law gives platforms largely unfettered discretion to set the content moderation rules they want to enforce. This broad leeway makes them vectors for the power of whoever has leverage over them. For instance, Europe is exercising tremendous leverage over platforms’ content moderation as it pertains to hate speech rules, which people might applaud because it is establishing a set of rules that a lot of people want to see. Alternatively, foreign governments, advertisers, and other entities might be shaping online speech and discourse in troubling ways. A striking example of this happened when New York residents sued Baidu, Inc.—China’s dominant search engine—for censoring their pro-democracy speech at the behest of the Chinese government. The pro-democracy activists brought the case, called Zhang v. Baidu.com, Inc., before a court in the U.S. and wanted their speech reinstated. But the U.S. court ruled against the activists, holding that the First Amendment allows Baidu, Inc. to set its own editorial policy and exclude whatever speech it wants, even if it does so at the behest of a repressive regime. This is a stark example of how foreign governments can successfully pressure platforms to institute restrictive speech rules and effectively censor speech here in the United States. 

SK: What gives you hope that some of the issues we have discussed will be successfully addressed? 

DK: I am amazed by the diversity and talent of people who are working on platform regulation issues and bringing different perspectives and proposals to the table. I am also optimistic about technologists because I don’t believe we’re at an end point where we have built every possible technology to respond to these problems. If Congress allows it, we will continue to see innovation in ways to respond to the challenges that we’re seeing today. So I do have hope. But we’re not going to arrive at a solution tomorrow, and if Congress forces us to pick one tomorrow, it will probably be a bad one. Though there I went negative again, sorry (laughs).

*This interview has been edited for length and clarity.