
Deepfakes and Shallow Truth: AI-Induced Misinformation Calls For a Human-Focused Response


In late March, a video depicting former President Donald Trump being arrested circulated on social media, weeks before Trump’s actual indictment. The AI-fabricated video used deepfake technology to construct realistic, deceptive visuals. An online user created it as a joke using Midjourney, an artificial intelligence text-to-image generator. Jack Posobiec, an internet influencer and one of the people behind the right-wing Pizzagate conspiracy theory, also recently made and shared an AI-fabricated video of President Biden announcing a draft to send American soldiers to Ukraine.

AI, a relatively new technological development, is already being used and misused for political aims. Its deepfake abilities and its implications for publicly available online information will transform the political landscape by contributing to the growing influence of misinformation on political viewpoints. To mitigate AI’s impacts on society and safeguard public trust, it is necessary to address not just technology and AI-generated media, but also the people consuming it. 

Sometimes referred to as “AI-generated media,” synthetic media encompasses media created or modified by AI and machine learning. It can include AI-produced art, music, images, written material, videos, and altered audio. Deepfake technology, which can create hyper-realistic visuals modeled on real people, is one of AI’s more widely known and frightening applications. The technology is also cheap: Version 5 of Midjourney’s image generator costs $30 monthly, yet it is both more affordable and more sophisticated than previous versions.

Sam Gregory, executive director of the human rights organization Witness, told The Washington Post, “There’s been a giant step forward in the ability to create fake but believable images at volume. And it’s easy to see how this could be done in a coordinated way with an intent to deceive.”

Moreover, synthetic media is hardly regulated. Regulations implemented by media platforms over the past few years are already outdated as deepfake technology becomes more advanced and harder to detect, and it is easy for users to cover their tracks. Globally, governments are responding slowly, and many worry about what state-ordered regulations could mean for free speech. Many politicians also lack the grasp of digital technology necessary to draft effective legislation.

The widespread distribution of digitally generated media is inevitable. Regulations enforced by tech companies are limited in their ability to identify and control synthetic media; in fact, many of these companies use algorithms that promote its spread, so long as the content is attention-grabbing. Deepfakes are not the first hard-to-regulate online tool capable of manipulating public opinion. Bots programmed to spread fake news on social media played a significant role in the 2020 US presidential election, and they helped spread leaked information mixed with false reports to skew public perceptions of Emmanuel Macron in the days before the 2017 French election. More recently, platforms have tried to crack down on bots, but doing so has proven complicated and difficult.

Like bots, deepfakes will likely meet only piecemeal attempts at regulation while online platforms’ algorithms help enable their spread. Propaganda bots will assuredly play a role in spreading misinformation created with deepfakes. There is also a monetary incentive to spread false information: researcher Amber Case writes that if an article “causes enough outrage…advertising money will follow.”

The consequences of the rapidly increasing prevalence of deepfakes and synthetic media on political life are twofold: AI’s latest form of misinformation will manipulate public views, and it will contribute to the erosion of public trust in all media and information. 

Misleading synthetic media, like the video falsely portraying President Biden announcing a draft, leads the public to ground political opinions in fabricated visuals. That video of Biden was easy to debunk, but not before it had left an imprint on a large number of people online. And even after being disproved, it retains the power to drive conspiracies, leaving images in consumers’ minds and prompting people to question the reliability of the sources doing the debunking.

Local elections are particularly vulnerable to synthetic media, as few news sources have the appetite for debunking it. And there is always the possibility of conspiracy theories like QAnon and “stop the steal,” which already race through internet forums, gaining further traction through AI-generated visual and audio “evidence.”

Deepfakes have a second important repercussion for public life and politics: the perception that all media could be unreliable. A 2020 Sentinel report on deepfakes explained, “When any video can be a deepfake, then instead of being deceived by deepfakes, people may grow to distrust all video and audio recordings as actors deflect and sow doubt on authentic media content.” Individuals (or governments) can now invoke AI manipulation to dismiss as fake images that expose actual, and perhaps unbecoming, behavior or events. New York Times journalist Shane Goldmacher asks whether, if the tape of Trump bragging about assaulting women, leaked in 2016, were released today, “Would Mr. Trump acknowledge it was him, as he did in 2016?” Goldmacher suspects he would not.

Despite the many benefits of AI and the advantages the internet and technology offer democracy, AI has the capacity to manipulate perceptions of reality and truth. It is a question of epistemic security: keeping knowledge and truth safe, because when they are not, public health, safety, and democracy are at risk.

So, what do we do about it? Unfortunately, any confidence in governments’ ability to regulate emerging AI technologies and their effects is naive, given how slowly they act and how poorly legislators comprehend the regulatory arena of AI. Instead, Constance Kampf, a researcher in computer science and mathematics, argues that solutions need to center on a “socio-technical design” that combines technological fixes with sociological ones grounded in education.

On the technological side, ideas are quickly emerging to control and regulate deepfakes. Some propose requiring watermarks on synthetic media. Revel.ai, a synthetic media producer, and the digital identity security provider Truepic developed a virtual watermark to disclose whether photos and videos are human- or AI-generated. Resemble AI is trying to mark audio deepfakes, such as cloned voices, with an audio watermark. A few months ago, China became the first nation to require watermarks or digital signatures on AI-generated media, though enforcement may prove difficult. Many companies have also begun placing restrictions on manipulated media in the past few years, but deepfakes remain hard to detect, and their creators are difficult to identify.

Watermarks and regulations are necessary steps and good ideas. But they cannot stop AI-generated media from spreading, nor can they prevent people from consuming and believing it. Given the hard-to-regulate nature of deepfakes and their inevitable online dissemination, digitally focused solutions should be coupled with educational programs that teach media literacy and encourage a culture of skepticism toward online information. Michael Ann DeVito, a postdoctoral research fellow at the University of Colorado Boulder, wrote, “We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy.” Similarly, Kampf calls for “substantial improvements in education systems across the world in relation to critical thinking, social literacy, information literacy, and cyberliteracy.” A human-based approach would require digital literacy courses in elementary and high schools and would encourage and expand access to similar classes for adults.

We must act with urgency; AI-generated media is spreading, and it is spreading fast. In a jarring estimate, Victor Riparbelli, CEO and founder of Synthesia, a company providing synthetic media for commercial use, predicts that by 2026, 90 percent of online content, mainly video, could be synthetically generated. Collective understandings of reality and truth are at stake, an issue well beyond the scope of technology-based solutions.

Some researchers and journalists depict nightmare scenarios where the ability to identify truth from fiction becomes wholly lost, giving it names like “infocalypse,” “epistemic babble,” “information apocalypse,” and “reality apathy.” These futures conjure ghastly images beyond the narrow scope of corrupted electoral politics and swayed opinions; an “infocalypse” could mean complete political upheaval, compromised public health, threats to national security, loss of respect for knowledge, and both interpersonal and collective distrust. 

The growth of the internet has meant the decline of truth. Regulations and technological responses to AI are limited in their reach; the public holds the responsibility to ensure that truth is not entirely lost as new technology transforms how we consume and perceive digital media. It is up to all of us to forestall the dystopian future of “reality apathy” that emerging deepfake technology makes seem increasingly plausible.
