
Why YouTube Needs to (Better) Regulate its Content

YouTube—home of Nyan Cats, bad celebrity lip syncs, and more—is also a breeding ground for conspiracy theorists and white supremacists. Although popular conceptions of radical safe spaces bring to mind the Dark Web and non-mainstream platforms, the most influential home bases for today’s alt-right are the same sites most Americans use every day: Twitter, Facebook, and, most notably, YouTube.

YouTube has long claimed that it is primarily a platform rather than a publisher: When addressing claims that YouTube assisted in the spread of misinformation, CEO Susan Wojcicki expressed concern but clarified that YouTube is not a “news organization” and that it did not want to get involved in fact-checking content. If we believe this, however, we must also believe that YouTube, which profits off the vitriolic ideological discourse happening on its platform, bears no responsibility for the impact of that discourse. As it becomes increasingly apparent that YouTube is being exploited for the recruitment and radicalization of today’s youth, that narrative has grown untenable.

Is YouTube really responsible?

Yes. Conflict journalist Robert Evans reported that a fifth of the fascist activists in his exploration of the reactionary right “credited” YouTube with their conversion. Many alt-right personalities—figures who reject mainstream conservative politics and often engage with white supremacist beliefs, nativist ideologies, and ultra-nationalist ideas—built their audiences primarily, or at least significantly, on the platform. One such figure is Infowars founder Alex Jones, who promoted conspiracy theories about the Sandy Hook elementary school shooting. YouTube is the website most frequently linked by alt-right figures on Twitter, outstripping links to Facebook by a factor of three. YouTube offers the alt-right an outlet, a safe space where they can propagate and share radical, and often dangerous, ideas.

As concerning as this sounds, it would be (somewhat) acceptable if all YouTube did was allow alt-right content to exist on its platform. Unfortunately, YouTube also feeds alt-right content to viewers who are searching for mainstream videos, producing a dangerous radicalization effect.

As Zeynep Tufekci reported for The New York Times, watching videos of Donald Trump rallies on auto-play led her to white supremacist rants and Holocaust denials. The Wall Street Journal observed the same effect, concluding that YouTube “fed far-right or far-left videos to users who watched relatively mainstream news sources.” For instance, after searching “9/11” and watching one CNN video about the attacks, reporter Jack Nicas returned to his YouTube homepage to see top recommended videos with titles like “Footage Shows Military Plane hitting WTC Tower on 9/11—13 Witnesses React.”

This is not because YouTube has inherently political goals. As Cristos Goodrow, Vice President of Engineering at YouTube, said, YouTube’s algorithm for political content is “the same system that’s working for people who come to YouTube for knitting or quilting or cat videos.” Rather, YouTube’s goal is to maximize the amount of time its users spend on its website. It just so happens that radical and extremist content tends to be the most eye-catching and compelling. Because alt-right content is so shocking, such videos generally receive higher levels of engagement—those who watch them are more likely to like or dislike them, comment on them, or watch them in their entirety—which teaches YouTube’s recommendation algorithms, built to maximize engagement, that these videos should be pushed to a broader audience. This strategy is beneficial for YouTube’s bottom line, but in effect, it means that hundreds of thousands of YouTube viewers are being fed extremist content in order to generate advertising revenue.

Furthermore, extremist ideologies are especially well-suited to grow on YouTube’s sprawling platform. As researcher Rebecca Lewis explains, “YouTube monetizes influence for everyone, regardless of how harmful their belief systems are.” The site has become a “cooperative ecosystem” in which key mediating individuals interview both mainstream and extremist figures, legitimizing their opinions through “credulous, friendly interviews.” This collaborative strategy is used by daily “vloggers,” makeup artists, video game streamers, and the alt-right alike to increase viewership and subscribers.

And although it is troubling that YouTube is a vehicle for radicalization in general, it matters even more that it is a vehicle for alt-right radicalization in particular, because this movement has grown significantly in size and impact over the last five years. In 2017, the alt-right was responsible for the majority (20 of 34) of extremist-related fatalities—twice as many deaths as it caused in 2016. Alt-right radicalization is now a legitimate national security problem, and although US lawmakers and law enforcement alike have long been reluctant to take it seriously, it is crucial that YouTube be forced to confront its role in the spread of this dangerous ideology.

Finally, the alt-right’s connection to YouTube, as opposed to its connection to extremist social networks like Gab, is uniquely dangerous because YouTube is both firmly mainstream and directed at a very young primary audience. At present, YouTube is the most popular social media platform among teenagers (ages 13-17), with 85 percent watching videos regularly; by contrast, just half report consistent Facebook usage. Among young adults (ages 18-24), an incredible 96 percent use YouTube. Because of the sheer youth of YouTube’s primary audience, it is even more crucial that the company devise a legitimate strategy to slow the spread of extremist ideas.

At present, YouTube incentivizes the behavior of alt-right influencers. Unwittingly, it has become one of the great radicalizing forces of the modern era.

Isn’t this illegal?

No. YouTube has limited incentive to monitor hate speech because the law is on its side. The Communications Decency Act, passed in 1996, offers social media giants carte blanche, stating explicitly: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” But the lack of US law requiring YouTube to police extremist content does not mean that YouTube is morally blameless in this situation.

The Communications Decency Act was passed before Google was formed, nearly a decade before YouTube and Twitter were created—and nearly fifteen years before Richard Spencer burst onto the stage as a powerful online voice for the modern alt-right—and was intended to provide “protection” to fledgling tech companies. Now that these companies are among the world’s most profitable and influential corporations, US lawmakers have a clear incentive to pass legislation requiring social media companies to take more responsibility for the content they promote, provide advertising revenue for, and publish on their platforms. At the very least, the US should develop a framework that allows YouTube to take down extremist content without violating First Amendment rights.

Regardless of the legal situation, the facts are clear: YouTube consistently recommends alt-right content to mainstream viewers. And unlike other social media platforms, its engagement is primarily driven by its recommendations, which makes the extremist slant of its suggestions even more concerning. Even if YouTube is concerned about the First Amendment, there is nothing stopping it from dampening the spread of radical content by, for instance, revoking premier advertising privileges from alt-right channels, as it already does for channels that focus on “controversial” topics like sex education, or by restricting recommendations of content flagged as extreme so that they reach only adult viewers.

Won’t this turn social media companies into censors of free speech? Why do we need more government regulation of private corporations?

One persuasive argument against turning tech platforms into “censors” comes from The Economist, which argued that (1) censorship requirements would hurt small companies, which lack the infrastructure and capital to develop monitoring algorithms or pay human moderators, and (2) it is fundamentally wrong for a corporation to aim to “engineer a particular outcome for content,” such as a “balanced news feed.” Instead, the article suggests a system in which there are clear guidelines for restricted content, individual rights to appeal takedowns, and independent advisory bodies.

These claims are valid but can be addressed simply, if not cheaply; in fact, they make a convincing case for government intervention. Just as YouTube only allows its content creators to receive advertising revenue once they accumulate a certain number of watch-hours, government policies around content regulation can target companies that exceed a certain number of watch- or engagement-hours. The federal government is best equipped to provide any such “independent advisory bodies,” and it should fall to a federal body to establish the aforementioned clear guidelines for restricted content, lest legal scholar Jeffrey Rosen’s words prove true: “[The] lawyers at Facebook and Google and Microsoft have more power over the future of privacy and free expression than any king or president or Supreme Court justice.” After all, companies like YouTube are already acting as censors, developing a “de facto jurisprudence”: Facebook, for instance, prohibits nudity.

Finally, The Economist proposes that YouTube and others “open their algorithms and data to independent scrutiny.” Although this is a compelling idea, YouTube’s algorithms are developed by some of the highest-paid, best-educated, and most innovative computer science researchers in the world. Few in the general public have the motivation and necessary knowledge to comprehend them, much less challenge them legally. In addition, YouTube’s data powers its advertising and is one of its most valuable assets. The company is unlikely to give up either absent legal enforcement.

In fact, YouTube’s recommendations are based on deep neural networks, an artificial intelligence technique that uses millions of data points to make predictions. Although such networks are remarkably good at generating useful recommendations, their analysis takes place in a black box—humans are essentially unable to understand what is happening because the machines in question are “teaching themselves novel skills.” That is, not even YouTube’s own engineers are entirely sure what is going on. This is not YouTube’s failing but an open question in artificial intelligence research.

Furthermore, there is no standard for judging an algorithm illegal. This problem was articulately described two years ago in a paper posted to the Social Science Research Network proposing an “FDA for algorithms,” which argued that “…criminal and tort regulatory systems will provide no match for the difficult regulatory puzzles algorithms pose.”

Currently, the ball is in YouTube’s court: Without any legal precedent for regulating online platforms, it is unlikely that anything short of public pressure will motivate YouTube to change its ways. What we need today is for the public to put such pressure on YouTube and for lawmakers to develop the legal framework necessary to compel YouTube to censor its content more thoughtfully. This could involve the creation of a federal advisory body to help YouTube (and other companies with a certain amount of engagement) develop a system to flag radical content, or the formalization of legal guidelines that protect controversial content while removing content that, for instance, calls for domestic terrorism.

It is important to remember that illegal behavior and actively harmful behavior are not one and the same. After all, Equifax broke no laws, despite exposing the personal data of millions of Americans. Facebook’s engagement with Cambridge Analytica and its role in the 2016 election were wholly legal, and it will face few consequences despite the horrifically deleterious effects of its collaboration. Should YouTube produce the alt-right’s new young stars, it too will be held blameless for its role in profiting off the radicalization of a generation—unless US citizens and lawmakers force it to be held accountable.

Photo: “Social Social Networks Icon Network”

About the Author

Ashley Chen '20 is the President of the Brown Political Review and a Senior Staff Writer for the US Section. Ashley can be reached at ashley_chen2@brown.edu
