The past few decades have invigorated digital artists, whose work has expanded to new frontiers thanks to sweeping gains in information accessibility. Yet the dream of cyberspace for artists is ending—or so you might think. Generative AI has disempowered artists by saturating the internet with cheaply produced mimicry and trampling on artists’ legal ownership of their own work. The federal government’s sluggish pace, coupled with booming public interest in AI, has stacked the odds against an already divided class. May 2023 brought a first glimmer of hope: Glaze, a newly developed technology that prevents image mimicry, was released to the public as open-source software. Its younger sibling, Nightshade, which poisons an AI model’s ability to classify images, was released this spring. Taken together, these pieces of software seem to tilt the playing field back toward digital artists.
Glaze and Nightshade both fall under the category of data-poisoning software, which mounts adversarial attacks on machine-learning models. Rather than attacking a model’s architecture outright with malware or stealing a developer’s credentials, data-poisoning attacks subtly alter training data so as to corrupt a model’s final output without making the alterations obvious. For instance, if a chatbot is fed data sourced from a social media site populated with fake accounts and rife with politically biased content, those biases will surface in the model’s output—frustrating a developer who hoped for a more reasonable interlocutor.
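To make the mechanism concrete, the sketch below shows data poisoning in its simplest form: flipping a fraction of a training set’s labels and measuring how a classifier’s accuracy degrades. The dataset, model, and poisoning rates are illustrative assumptions, and label flipping is far cruder than anything Glaze or Nightshade actually does.

```python
# A minimal sketch of label-flip data poisoning: corrupt a slice of the
# training labels and watch test accuracy fall. Dataset, model, and
# poisoning rates are illustrative; Glaze and Nightshade are far subtler.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)

def poison(labels, fraction):
    """Reassign a fraction of the labels to random wrong classes."""
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    labels[idx] = (labels[idx] + rng.integers(1, 10, size=len(idx))) % 10
    return labels

for fraction in (0.0, 0.2, 0.4, 0.6):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, poison(y_train, fraction))
    print(f"{fraction:.0%} poisoned -> test accuracy {model.score(X_test, y_test):.2f}")
```

The point of the exercise is the trend line: a model trained on clean data scores well, and each increment of poisoned data drags its accuracy down without any change to the model itself.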
Glaze, the older of the two, works to prevent a model from mimicking the style of a set of images. Given the internal structure of a model—effectively, how it analyzes an image in order to classify it—Glaze makes subtle changes to an image’s pixels that cause the model to consistently misclassify it. A painting of a dog, when combined with shrewdly chosen pixel noise, is a cat in the eyes of a model. Because image-generation models typically rely on this underlying classification, the technique prevents a model from generating new images in a given category. Nightshade uses a similar technique to achieve a different effect. Rather than shielding a specific type of image from classification, it distorts the classification and production of all images. It is a ‘poison pill’: if enough Nightshade-poisoned images accumulate in a model’s training set, all of its outputs become distorted and unrecognizable.
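The mechanics behind that dog-that-reads-as-cat can be sketched with a standard projected-gradient adversarial attack against an off-the-shelf classifier, as below. Treat it only as an analogue: Glaze’s published technique perturbs images in the feature space of text-to-image models, and the model, noise budget, and step counts here are assumptions for illustration.

```python
# Minimal sketch of the core idea behind Glaze-style cloaking: nudge pixels,
# within a small budget, so a classifier reads the image as a chosen target
# class. Glaze's real objective works in a text-to-image model's feature
# space; this projected-gradient attack on a classifier is only illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

def cloak(image, target_class, epsilon=0.03, steps=40, step_size=0.005):
    """Return image + delta with ||delta||_inf <= epsilon, pushed toward target_class."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model(preprocess(image + delta))
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()   # descend toward the target label
            delta.clamp_(-epsilon, epsilon)          # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0.0, 1.0).detach()

# Example: a "dog" photo (tensor in [0,1], shape [1, 3, H, W]) cloaked so the
# model reads it as ImageNet class 281 (a tabby cat).
dog = torch.rand(1, 3, 224, 224)  # stand-in for a real photo
cloaked = cloak(dog, target_class=281)
print(weights.meta["categories"][model(preprocess(cloaked)).argmax().item()])
```

Note the tension in the epsilon parameter: a larger noise budget fools the model more reliably, but it also makes the perturbation more visible to a human viewer, a tradeoff that returns below.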
Both Glaze and Nightshade are the brainchildren of the University of Chicago’s Security, Algorithms, Networking and Data (SAND) Lab, headed by computer science professor Ben Y. Zhao. Zhao styles himself as the art world’s foremost freewheeling vigilante. Nightshade is not just another generic piece of cybersecurity software; according to Zhao, it is a way to “tip the balance” of the uneven “playing field” between companies and freelance artists. In releasing it as open-source software, he seeks to stop the “destruction that’s facing the art community” rather than to turn a quick profit. Given this aim, Zhao defends both Glaze and Nightshade as stopgap solutions, critical to giving artists some sense of security while Congress and state legislatures fail to enact meaningful protections.
Zhao is justified in his concerns about the speed and efficacy of legal decisions concerning the fair use of artwork published online. AI companies can point to an early precedent in Authors Guild v. Google, which found that Google’s copying of books into an online database fell under fair use. OpenAI has used the ruling to contend that training models on copyrighted text and images is justified, provided that the training materials themselves are not released to the public. And although illustrators Sarah Andersen, Kelly McKernan, and Karla Ortiz scored a partial victory in Andersen v. Stability AI, in which a district judge allowed their claims that Stability AI and Midjourney unlawfully stored their copyrighted works to proceed, the ruling did not resolve whether training a model on those materials is itself illegal. Even if protections for copyrighted material are eventually instituted, they may only be enforceable against larger AI companies—little stops individuals from scouring the internet and training their own homebrew models.
Amid these grim legal outcomes, Zhao and his colleagues have achieved celebrity status among some artists—a week after its release, Glaze was downloaded 250,000 times. Zhao estimates that between May 2023 and May 2024, the software was downloaded 2.3 million times. Coverage by major outlets has been positive, highlighting the responses of artists like McKernan, who views Glaze as her “mace and…ax” against companies like Stability AI.
For all the lofty ambitions of their developers, Glaze and Nightshade are not indomitable. Image-cleansing techniques have proven effective at negating both tools’ effects: AdverseCleaner, a program that consists of ten lines of code, almost completely neutralizes Glaze, for instance. Researchers at Google DeepMind showed that Gaussian blur, a more common technique that recomputes each pixel as a weighted average of its neighbors, is effective against both Glaze and Nightshade. For Glaze and Nightshade to remain viable in the face of these countermeasures, they must add far more pixel noise to images—but that renders a poisoned image visibly distorted to the human eye and thus more conspicuous to curators of training datasets. Nor are the images used by Stability AI and OpenAI scraped directly from the internet—typically, they come from established, commonly used datasets free of poisoning. Finally, should a sufficient quantity of poisoned images saturate the internet, dataset curators can simply turn to different sources of training material: frames taken from video works or videos of artwork, which neither Glaze nor Nightshade can protect.
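Part of what makes these countermeasures so threatening is how cheap they are: a Gaussian blur is a single library call. The sketch below assumes a hypothetical poisoned image on disk; the file names, kernel size, and sigma are illustrative, not parameters from the DeepMind experiments.

```python
# Minimal sketch of Gaussian-blur "cleansing": every output pixel becomes a
# Gaussian-weighted average of its neighborhood, smearing out the
# high-frequency perturbations that Glaze and Nightshade depend on.
# File names and filter parameters are illustrative assumptions.
import cv2

cloaked = cv2.imread("cloaked_artwork.png")  # hypothetical poisoned image

# A small kernel already disturbs subtle pixel noise while leaving the
# artwork recognizable to a human curator.
cleansed = cv2.GaussianBlur(cloaked, ksize=(5, 5), sigmaX=1.5)

cv2.imwrite("cleansed_artwork.png", cleansed)
```

The asymmetry is stark: defeating the perturbation costs one filter pass over the dataset, while strengthening the perturbation degrades the artwork itself.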
Because their protections may not survive these countermeasures, the long-term impact of Glaze, Nightshade, and similar programs is unclear. But online artists, few of whom are likely to be following novel machine-learning research on arXiv, are treating Glaze and Nightshade as permanent extra-legal protections rather than as recent iterations of a cat-and-mouse game between opposing computer scientists. Glaze has even been directly incorporated into Cara, a widely used community art platform. Artists like Autumn Beverly and McKernan have cited the two tools as their motivation for reuploading their work after discovering earlier instances of scraping by AI companies. Glaze and Nightshade arguably offer a false sense of security: artists who would otherwise withhold their work out of fear of misuse by AI companies may naively begin publishing it online.
Yet the symbolic purpose of Glaze and Nightshade as a means of uniting artists against a common foe endures beyond their technical shortcomings. To disrupt a large model, millions of images must be poisoned with Nightshade; an online artist acting alone will fail to meaningfully damage a model trained on their data and thus see no benefit. The efficacy of Nightshade therefore rests wholly on the collective action of many artists. Jingna Zhang, the founder of Cara, sees Glaze as the “very first step in people coming together to build tools to help artists” rather than just a means of self-protection.
Nightshade represents another step in a broader turn toward solidarity among artists—a group that has historically failed to produce the powerful unions typical of blue-collar labor. It follows the unionization of museum workers at the Guggenheim, the Museum of Contemporary Art Los Angeles, and the Philadelphia Museum of Art, and their incorporation into groups like the United Auto Workers, along with last year’s SAG-AFTRA strike—one of the longest entertainment strikes in history—in which actors fought the use of their likenesses to train AI models. Though mass use of Nightshade does not replicate the hierarchical, centralized structure of a union, it does represent collective artistic self-defense: its express purpose is to draw AI companies into explicit negotiations with artists and re-empower an otherwise unprivileged class. For creators like Beverly, Nightshade is a means of returning “power back to the artists.”