
Designed for Depravity

Original illustration by Sadie Levine '25, an Illustration master's student at RISD and Illustrator for BPR

Paint me a picture of anguish. Tell me a story of hurt. Make it aching… agonizing. Let it scintillate with tortured tears and sing with cries of pain. Let it reverberate in darkness and bleed through curtains of silence. Let it corrupt innocence, pervert curiosity, and debase delight. Paint me a picture gleaming with debauchery.

Or is that too abstract? 

This is one tale of generative AI (GAI). Trained on increasingly large datasets, GAI is able to construct strikingly beautiful or horribly sickening illustrations from incoherent prompts, tipping toward one or the other with minute alterations. Prompt, revise, repeat. It is a meticulous process—if not an artistic one—to achieve the desired result. Despite the advancement of machine learning platforms, GAI remains contingent on human prompting and, if left unregulated, allows for the intentional and systematic creation of perverse illustrations. When the images produced by GAI are entirely fake, it seems impossible to fully restrict the sickening without censoring the beautiful—which should elicit concerns given the recent increase in AI-generated child sexual abuse material (CSAM). Fully fictitious CSAM consequently exists in a liminal space—neither fully unregulated nor sufficiently restricted—and the time has come for it to be addressed within the law. Its existence and potential proliferation pose threats to children’s online safety and raise questions about the normative morality of child pornography. Even when AI-generated CSAM is entirely fake, it subverts the ethical considerations underlying existing legislation, and regulations surrounding its creation should therefore be rooted in those same moral principles. The emergence and rise of AI-generated CSAM accordingly require the establishment of new precedents in federal legislation that more strictly censor its production and distribution. 

Tell me a story of irrelevance. Cases involving AI-generated CSAM are likely to be dismissed in federal court, as they fall outside the scope of precedents set by landmark cases governing child pornography. Normally, to warrant censorship under the First Amendment, content must qualify as “obscene,” a label applied only if that content satisfies a stringent three-prong test set out in Miller v. California. New York v. Ferber, a 1982 watershed case, held that child pornography could be censored under the First Amendment even if it did not satisfy the normal obscenity test—a ruling rooted solely in concerns about the harm that befalls children actively involved in the production of CSAM. Ferber therefore made no provision for censoring sexual material depicting fictional children. GAI can omit real people entirely from its production of CSAM, rendering the precedent set by Ferber inapplicable.

Paint me a picture of hypocrisy. Decided in 2002, Ashcroft v. Free Speech Coalition clarified that depictions of adults playing children in sexual scenarios did not fall under the Ferber ruling, thereby allowing such material to circulate. Sensible voices in the deliberations argued that children inevitably exposed to these scenes might be more susceptible to pedophilic abuse, but the Supreme Court ultimately held that the potential for crime did not warrant a restriction of free speech. This ruling could establish a troubling precedent for future cases determining whether fictional CSAM is legally protected. After all, since Ashcroft holds that using adults to mimic CSAM is constitutionally safeguarded, it would stand to reason that the artificial reproduction of child pornography would be as well; no children are directly exploited in the production of either type of content. Perhaps the threat posed by generated images did not seem salient in Ashcroft’s era, but today, as our real and virtual worlds become increasingly intertwined, it has become critical. Evidently, neither Ferber nor Ashcroft is expansive enough to curtail the rapidly developing AI-generated CSAM market.

Make it sinister… malicious. The majority in Ferber stated that child pornography is “intrinsically related to the sexual abuse of children,” underscoring the psychological damage victims incur from knowing a permanent record of their abuse exists. If the creation of AI-generated CSAM is legally permitted, children will inevitably discover it online, recognize their vulnerability to predation, and infer they may one day fall victim to such abuse—or worse, be reminded of the sexual abuse they have already endured. This experience would undoubtedly inflict profound psychological damage on impressionable minds. Given that the minimum age of consent in the United States is 16, we cannot, in good conscience, accept digital representations of acts children would be legally incapable of assenting to. Should the circulation of material that could be interpreted as abuse be constitutionally protected? Should the right to produce material that could promote the victimization of children be defended? If the instinctive answer to the latter is apparent, the answer to the former should be just as clear. Any alternative is a deplorable and apathetic normalization of child abuse.

Let it breed perversion and impotence. Let it fester in the embers of perjury. Since the technology employed to produce AI-generated CSAM has only recently been developed, it is unconscionable to decide cases on its use under existing precedents. Decisions that could not conceive of the scale at which GAI would expand the CSAM market cannot truly determine the legality of actions that seemed unfathomable just years ago. Fragments of existing legislation may provide a basis on which to establish new legal frameworks, but laws restricting the creation and distribution of AI-generated child pornography should be built on cases of first impression: cases presenting legal questions that no existing precedent squarely resolves. These laws should, however, reflect the latent morals underlying Ferber’s precedent. If AI-generated images depict figures that could be construed as children, there should be no legal right to render them sexually explicit. Defending the creation of such repugnant material, even if it does not involve children’s active participation, would be a reprehensible endorsement of depravity.

Even the rationalization that AI-generated CSAM might decrease the demand for real child pornography lacks a semblance of plausibility. GAI models have been trained on datasets found to contain real CSAM, suggesting that the continued existence of AI-generated child pornography would rely on the perpetuation of its nonfictional counterpart. One’s existence would bolster the other’s proliferation rather than constrain it. Harm reduction for real children is therefore a laughable pretext for endorsing the creation of AI-generated CSAM, which would instead revictimize abused children, infringing on their right to dignity and integrity. Justifications for this new form of child pornography also blatantly ignore society’s tendency to lower the standard for what is morally acceptable when acts of deviance are repeatedly perpetrated.

Creating CSAM in any form is the paragon of perversion and should be censored to the greatest possible extent. Its unrestricted propagation will likely cause irreparable damage to children’s development, which is increasingly shaped by online exposure. When the inherent immorality of child pornography is widely accepted, why should the extent of its condemnation differ if the existence of the children portrayed cannot be substantiated? Expanding censorship laws to cover AI-generated CSAM would simply codify in law the ethical principles already governing daily life. Private indignation is necessary but insufficient. If outrage is the antithesis of apathy and the conduit for change, let it serve its purpose. Let its power percolate.
