
Politics, Patriarchy, and AI-Generated Pornography


This January, users on X, formerly Twitter, flooded the network with sexually explicit images of Taylor Swift that were likely generated by artificial intelligence (AI). One of these images gained 47 million views while X scrambled to stop the spread. Just a week later, Congress introduced the Disrupt Explicit Forged Images And Non-Consensual Edits (DEFIANCE) Act, which formally defines “digital forgery” and allows victims of it to sue to “defend their reputations.” Unfortunately, the DEFIANCE Act does not go far enough to actually tackle the problems that AI image generation poses for women. Congress must hold generative artificial intelligence companies responsible for preventing the creation and spread of dehumanizing images.

In 2019, a study by Sensity AI found that 96 percent of all AI deepfakes were non-consensual sexual images of women. Yet the fear of deepfakes influencing politics has been at the forefront of regulatory conversations, leaving women without legislative support or even a place in the discourse. Deepfakes often lead to doxxing and threats to women’s lives, even though the contents of the images are entirely manufactured. Chidera Okolie, of the UK Ministry of Justice, wrote that “victims continue to call for more specific and stricter laws to regulate deepfakes and assign penalties for non-adherence.”

It is unclear whether the absolute stardom of Taylor Swift or Congress’ growing attack on Big Tech is what sparked renewed interest in legislating deepfakes. But with wide support among polled Americans, the DEFIANCE Act has brought together a rare bipartisan coalition, featuring senators from Dick Durbin (D-IL) to Josh Hawley (R-MO). However, the Act shows regrettably little understanding of how rapidly deepfakes spread and how anonymous their creation can be. More importantly, the bill does not address the “ease of perpetuation” of deepfakes, especially now that cheap, publicly available AI tools make it simple to generate such images and share them across social media networks.

In theory, other non-consensual pornography laws should apply to AI-generated images. In practice, however, the ease and anonymity with which deepfakes can be created make prosecution difficult. Without data about a perpetrator’s identity, victims struggle to prove their case in court, which frequently means that nobody is held accountable.

At the state level, laws in California and Illinois allow victims to sue “those who create images using their likeness,” while Georgia, Hawaii, Texas, and Virginia have laws that criminalize the creation of deepfake pornography. Minnesota and New York have laws that do both. Interest groups, primarily conservative ones, often push for tougher legislation, specifically criminalizing the possession and distribution of such materials. Social media companies have also banned the creation and sharing of deepfakes on their platforms, but as Taylor Swift’s case shows, it is hard to catch these images before they go viral. Though these efforts are laudable, the widespread availability of deepfake creation technology means that only wide-scale or national regulation can bring about meaningful change.

Our best bet to protect women from deepfakes is to stop them from spreading. The European Union, for instance, has already successfully legislated against the spread of deepfakes, criminalizing the distribution of AI-generated pornographic material along with revenge porn. This legislation is substantially stronger than the DEFIANCE Act, as it allows victims to seek justice from all guilty parties, not just the originators of a deepfake. China, Canada, South Korea, and the UK have also attempted to tackle the deepfake crisis to varying extents; many of their laws require deepfake creators to report the images they create and any suspicious behavior they observe. But in the long run, only legislation targeting the spread of deepfakes will have a meaningful impact and curb gender-based violence in cyberspace.

Even as national governments take steps to ban deepfake pornography, companies should implement internal regulations so that such images can never be created. Google, for example, has banned the training of deepfake models on its Colab platform. OpenAI, for its part, has been leading the charge against AI-generated election disinformation ahead of the 2024 elections and urging other platforms to adopt the same care, but it has been remiss when it comes to deepfake pornography. Though the company has said that it will “digitally watermark” images generated through its platform to keep track of them, that pledge is insufficient, since detection of AI-generated images remains difficult. According to Jared Mondschein, a physical scientist at RAND, “There is a technological arms race between deepfake creators and deepfake detectors.” It is hard to predict what new ways perpetrators will find to abuse AI. The large generative AI platforms will always be playing catch-up, leaving a gap between the emergence of a new type of deepfake and the means to identify and curtail it. Until there is a reliable way to spot and track deepfakes, most legislation will be one step behind perpetrators.

As with many issues at the intersection of technology, human rights, and personal liberty, deepfake regulation requires collaborative effort from legislative bodies and tech companies. All of these actors must keep the interests of ordinary people, who are currently the victims of AI abuse, in mind while creating safeguards. From the legislative side, we need firmer regulation of AI-generated pornography that takes into account the technological novelty and complexity of deepfakes. From generative AI companies, we need immediate action to restrict the materials that malicious users can create. Social media companies must also help curb the spread of such materials on their platforms. Only with multi-layered support and action can decisive blows be struck against AI-based sexual abuse.
