“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” These are the 26 words that serve as the foundation for many policy and judicial decisions about freedom of speech and corporate responsibilities.
Legislation and policy have failed to protect people from the harms of social media and Big Tech. Whether it be algorithms that lead to extreme polarization and harm teenagers’ mental health, or social media’s raw addictive potential, the US government has had a hard time regulating the online sphere. We are now in the midst of another Big Tech revolution, this time with Generative AI (GenAI), which offers widespread tools and chatbots—and almost no regulation for their existing and potential harms.
The lack of social media regulation in the United States is not the new bug that many perceive it to be. Rather, the harms of social media have been perpetuated by a well-executed legislative feature that has protected the tech industry for almost 30 years: Section 230. Congress and the courts now risk making the detrimental mistake of extending the protections of Section 230 to GenAI, thereby magnifying the already existing harms of technology on young people in the United States.
Under Section 230, which is part of the Communications Decency Act of 1996, social media platforms are not treated as publishers of what their users post. Much like a bookstore or newsstand that is not responsible for the words in the titles it carries, these platforms cannot be held accountable by the state for user-created content. While Section 230 allows companies to enforce guidelines on their users, it shields them from being sued by anyone who objects to specific content found on a platform. Zeran v. America Online (1997) and Carafano v. Metrosplash.com (2003) were two of the early lawsuits that cemented Section 230 immunity for user-generated content posted on online platforms.
When Section 230 was enacted, the technology landscape was significantly different from what it is today, and even from what it looked like in the early 2000s. Pew Research estimates show that by 2000, 52 percent of US adults had access to the internet. The major tech Initial Public Offerings (IPOs) of 1996 were mainly for hardware companies: Acer, Asus, and the internet provider Digex.
In the mid-1990s, when Section 230 was drafted, its co-authors, Christopher Cox (R-CA) and Ron Wyden (D-OR), then both members of the House of Representatives, argued that Section 230 would enable service providers to host user content without fear of being held liable as publishers, and would encourage voluntary moderation rather than mandatory censorship. This congressional protection would therefore promote technological innovation. The authors were trying to let the internet flourish without excessive government interference.
Yet, we are now fully aware of the size and reach of Big Tech, and investments in AI infrastructure promise even greater magnitude and influence.
In 2023, the original authors of Section 230, Cox and Wyden, who were not technologists but legal and public policy professionals, stated that they were against extending the legal shield to AI chatbots. However, a 2023 congressional report on Section 230 and GenAI was not so explicit, saying: “Courts have not yet decided whether or how Section 230 may be used as a defense against claims based on outputs from recently released generative AI products.”
The time for courts to address AI chatbots’ liability for the material they generate has come: Raine v. OpenAI. The parents of Adam Raine, who committed suicide after months of enablement from ChatGPT, filed a complaint against OpenAI in August 2025. The case may produce the first significant ruling on the liability of AI platforms for harm caused to their users.
A good starting point would be to establish a clear distinction between social media and GenAI tech companies and their products. While social media platforms are intermediaries—they host and distribute content created by users—Large Language Models (LLMs) generate entirely new outputs. Social media algorithms amplify or suppress existing speech; they are curators of content. LLMs, in contrast, are creators of content, synthesizing and recombining massive amounts of training data into new textual outputs each time a user prompts them. Although LLMs draw from patterns in their training data, the internal recombination of those patterns produces novel expressions and inferences that cannot be traced to any single source, functioning more like a form of cognitive abstraction than simple quotation.
Legal scholars, such as Marco Bassini, note that the intermediary logic that once justified protections for platforms as neutral hosts of user speech no longer applies to generative systems. This shift transforms the role of technology from a channel to a creator. Extending the same immunity that shields intermediaries to active generators risks collapsing two fundamentally different categories of communication.
This distinction matters legally: Social media’s liability rests on what users post, whereas LLMs’ potential harm stems from what they themselves produce. Applying the same immunity that protects hosts of speech, like Facebook or Reddit, to systems that produce speech, like ChatGPT or Claude, is a fundamental categorical error. Treating LLMs as passive intermediaries misunderstands the core technical reality that their outputs are generated from training data and learned model parameters, meaning any harmful statement is a product of the system’s design and training choices rather than the speech of a third-party user.
The legal assumption of passivity that underpins Section 230 cannot coexist with the autonomous, inferential nature of GenAI. Shielding generative outputs under intermediary immunity would effectively erase accountability for the system’s design choices, hallucinations, and rhetorical framing.
If courts ultimately rule that Section 230 does not extend to GenAI systems, hallucinations could become a direct basis for product-liability claims. Section 230 clearly protects platforms when users repost AI-generated misinformation, but it remains “an open question” whether it shields AI developers when the hallucination originates inside the model itself. Without Section 230, plaintiffs could argue that a hallucinated output reflects a design defect. Because companies have long been aware of the text-generative capabilities of AI—which are closer to authored speech than neutral hosting—judges may view harmful hallucinations as a failure to warn, especially when minors are involved. In this scenario, hallucinations would no longer be treated as a byproduct of user prompts but as actionable harms traceable to the product’s architecture, training regime, and safety design choices.
Before Raine v. OpenAI, GenAI litigation had primarily focused on whether AI companies’ use of copyrighted material to train their models is lawful, or whether AI-generated outputs infringe on existing works. Raine v. OpenAI pushes the debate into the realm of product liability and human harm, opening a new legal frontier. Copyright cases test who owns the data and outputs of AI, but Raine probes who bears responsibility when the output itself causes harm.
Raine v. OpenAI unlocks a new argument against Section 230: the plaintiffs claim that a teenager being encouraged to commit suicide demonstrates the true risk of unmitigated LLM linguistic production.
It is hard to assess Raine v. OpenAI at this stage because the case is still in pre-trial proceedings. However, earlier lawsuits over Big Tech’s impact on teenagers’ mental health offer some basis for extrapolating how it might be decided.
In the past, Meta has repeatedly used Section 230 as a central defense in lawsuits alleging that its platforms led to teenagers’ self-harm or suicide. Courts have often agreed that platforms, operating as conduits for user content rather than actual producers, are immunized from liability. In Raine v. OpenAI, by contrast, the plaintiffs argue that the harm flowed from the design of ChatGPT itself.
Courts now face technological questions that existing legal frameworks cannot answer. Judges in both social media and AI cases are encountering novel arguments and are being asked to set critical precedents on online harms and platform accountability in an industry many of them barely understand. Many of the judges handling these cases built their legal expertise long before artificial intelligence was a public or intellectual issue, and the frameworks and precedents they rely on from the 1990s struggle to capture how AI systems function today.
As a result, there is a generational and epistemic gap that raises the question of what training and technical grounding judges should have before arbitrating AI laws and policies. The courts risk entrenching outdated assumptions and understandings in the emerging legal order, repeating the same mistake made during the rise of social media.
The clearest way to prevent the expansion of Section 230 into new technological contexts is to establish a precedent that it does not apply to cases of AI harm, since GenAI produces new linguistic outputs that Section 230’s intermediary framework was never designed to cover. Such a precedent could be the last stand against the law, given that pushback against its social media applications has clearly not worked: proposed reforms like the Big Tech Accountability Platform and the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act have seen little progress.
In contrast to the United States, the European Union has applied strict regulations to the internet, subjecting Big Tech companies to increasingly stringent oversight. The European framework already distinguishes between hosting and creating information. The 2024 EU AI Act and the Council of Europe Framework Convention on AI explicitly treat generative models as active systems subject to rights-based oversight, rather than passive intermediaries. Legal scholar Bassini situates this within a broader “Brussels effect,” in which Europe’s cautious, human-rights-anchored governance contrasts with the US market-driven model. While imperfect, the European approach at least recognizes that algorithmic production of speech is a regulatory category of its own, not a subset of platform liability law. In the face of the upcoming Raine v. OpenAI case, the European approach is a model the United States should follow, lest it repeat the legislative blind spots that allowed social media to grow and inflict unregulated harm for decades.