
Preempting the Dangers of Deepfakes

Although they may seem like novelties at first glance, “deepfakes” threaten to usher in a new era where our eyes and ears can no longer trust online content. These videos take the face of one person and realistically superimpose it onto the body of another, recreating lighting, movements, and expressions so well as to be nearly indistinguishable from the real thing. Recently, deepfakes have gained notoriety on the internet due to the creation of videos that insert celebrities’ faces into pornographic videos, prompting bans from platforms such as Reddit and Twitter. The possible malicious uses of deepfakes extend even further; particularly troubling is their potential weaponization in order to spread misinformation on an unprecedented scale. While the technology behind deepfakes cannot be suppressed, we must implement robust policies to preemptively counteract their potential abuse.

Rapid and continual developments in the field of machine learning have driven the evolution of these technologies from simple Snapchat face swaps to videos that are difficult to distinguish from authentic footage – all achievable on a home computer. Several desktop applications for the creation of deepfakes are freely available online for personal use. Researchers have already made substantial progress on algorithms that synthesize text, speech, and poses, although today’s deepfakes are still in their infancy. Soon, it will be possible to generate videos of anyone doing and saying anything. Content-hosting and social media platforms have already found misinformation a difficult problem to quash, and deceptive deepfakes of this quality will only compound these troubles. Imagine videos depicting presidential hopefuls making controversial statements, medical officials denouncing vaccinations, or politicians declaring nuclear war.

With these endless possibilities on the horizon, the use of deepfakes for pernicious purposes has been particularly troubling to lawmakers. Last year, Senator Ben Sasse introduced the Malicious Deep Fake Prohibition Act of 2018. Although it expired with the government shutdown, he plans on reintroducing it. The bill would criminalize the creation and distribution of deepfakes “with the intent that the distribution of the deep fake would facilitate criminal or tortious conduct.” This is common-sense legislation that would punish many illicit uses of deepfakes beyond libel and slander, such as the aforementioned involuntary pornography, but it does not address their use to spread misinformation.

Perhaps the greatest difficulty that legislation will face is in striking a balance between curbing malicious deepfakes and protecting speech guaranteed by the First Amendment. A bill introduced last year by the New York State Assembly sparked controversy for banning the creation of “digital replicas” of other people without their written consent. Such extreme legislation would infringe upon freedom of speech, effectively outlawing things like parody videos and biopics that use computer-generated likenesses of individuals long deceased. Public outcry and backlash from entertainment companies prevented the bill from passing.

Regardless of constitutionality, a ban on malicious deepfakes would be virtually impossible to enforce. The immense amounts of content that are created, uploaded, and distributed online every day would make policing deepfakes a logistical nightmare. Content hosts lack the resources to systematically review all uploads, and reliance on users to flag potentially harmful deepfake content would be ripe for abuse. Collaborative efforts by researchers and the Pentagon, through the Defense Advanced Research Projects Agency (DARPA), have begun to develop techniques to distinguish doctored from real videos. However, these approaches are subject to a perpetual arms race between advancing technologies and video forensics experts.

For the time being, adopting preventative measures is the most prudent approach. Cryptographic methods of signing data, such as RSA, offer ways of verifying whether a particular message originated from a particular sender. Such digital signatures could potentially be mandated for important government communications. The Government Publishing Office, a Legislative branch agency that records and disseminates official government documents such as congressional bills, laws, and the budget, uses such a digital signature scheme to sign its records. Colleges across the country, such as Stanford and Williams, offer electronically signed transcripts. While these protocols currently apply only to PDF files, more widespread adoption of similar conventions encompassing other digital media formats would allow news outlets and private individuals alike to check that information really comes from its claimed source.
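The signing scheme described above can be sketched in a few lines of Python. This is a deliberately simplified, insecure toy version of RSA signing, using tiny hypothetical primes purely for illustration; real systems use vetted cryptographic libraries and keys of 2048 bits or more. The idea is the same one the article describes: the signer hashes a message and transforms the hash with a private key, and anyone holding the public key can verify that the message came from the signer and was not altered.

```python
# Toy RSA-style sign/verify sketch (assumption: illustrative only, NOT secure).
import hashlib

# Tiny demonstration primes (hypothetical; real keys are vastly larger).
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def sign(message: bytes) -> int:
    # Hash the message, reduce mod n, then raise to the private exponent.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recompute the hash and compare against the signature raised to e.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"Official statement from the agency"
sig = sign(msg)
print(verify(msg, sig))          # True: the signature checks out
print(verify(b"Tampered", sig))  # almost certainly False: alterations break it
```

Only the holder of the private exponent `d` can produce a signature that verifies under the public pair `(e, n)`, which is what would let a news outlet confirm that, say, a government video or statement genuinely originated from the agency that claims to have issued it.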

A preemptive approach coupled with public awareness campaigns is the most feasible strategy for combating the impact of malicious deepfakes. The distribution and proliferation of misinformation are ultimately inevitable, and threaten to erode our trust in video and audio. The war on fake news has shown that suppressing misinformation is remarkably difficult; ever more sophisticated deepfakes are poised to raise the stakes and make that fight even more challenging. Raising awareness of the use of deepfakes for misinformation and providing the tools to verify the origin of messages is an important first step to take.


About the Author

Jonathan Huang '20 is a Staff Writer for the US Section of the Brown Political Review. Jonathan can be reached at