
AI’s Unethical Underbelly

Original illustrations by Nicholas Edwards ’23, an Illustration major at RISD

Countless artificial intelligence (AI) platforms make use of content moderation filters to prevent harmful and inappropriate output from being displayed. OpenAI, the creator of ChatGPT, cites its content filtering system, the Moderation endpoint, as part of its “commitment to making the AI ecosystem safer.” Facebook has implemented an AI system to ensure that all content meets its Community Standards and “create[s] a safe environment” on the platform. Massive, carefully annotated datasets are essential to these filtering systems: human annotators label examples of harmful content so that the models learn to identify it. For many moderation filters, however, building these datasets depends on cheap human labor. Any serious discussion of the ethics of AI and its uses must account for the ways in which AI has exploited—and is continuing to exploit—people from marginalized communities. In particular, attention must be paid to the immense psychological toll this work takes on those who perform it around the world.
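To make the pipeline concrete: the classifier that annotators' labels ultimately train is exposed to developers as a simple API call. The snippet below is a minimal sketch of querying OpenAI's Moderation endpoint with its official Python library; the input text is invented for illustration, and the exact field names reflect the current SDK rather than anything reported by TIME.

```python
# A minimal sketch of querying OpenAI's Moderation endpoint with the
# official Python library (openai >= 1.0). Requires an API key in the
# OPENAI_API_KEY environment variable; field names follow the public
# SDK and may change over time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the endpoint to classify a piece of user-generated text.
response = client.moderations.create(input="Some user-submitted text to screen.")
result = response.results[0]

print("Flagged:", result.flagged)  # True if any harm category was triggered

# List which categories (hate, violence, sexual content, etc.) were flagged.
for category, is_flagged in result.categories.model_dump().items():
    if is_flagged:
        print(f"  flagged category: {category}")
```

Every category this endpoint can flag corresponds to examples that human annotators had to read and label first, which is where the labor described below enters the picture.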

In January, TIME released a report detailing the conditions that Kenyan workers faced while annotating content on behalf of OpenAI and its Moderation endpoint. Employees were given 150 to 250 text passages per nine-hour shift and tasked with flagging any that depicted sexual abuse, hate speech, or violence. Workers were paid less than $2 per hour to read these incredibly graphic texts. One employee described reading a passage so graphic—involving bestiality and pedophilia—that she experienced torturous recurring visions.

Sama, the US-based company responsible for outsourcing this labor to its Kenyan branch, told its data annotators that they were entitled to sessions with “wellness counselors.” However, workers widely reported that their requests for meetings were frequently denied and that the appointments they attended were unhelpful. Instead of receiving proper care, many employees were encouraged to prioritize productivity over their mental health. Nonetheless, Sama presents itself as committed to moral AI. On its website, it touts itself as a progressive company that performs “ethical data labeling that is socially responsible,” provides employees a living wage with benefits, and has lifted over 65,000 people out of poverty. 

Despite these claims, Sama has a history of subjecting its employees to unethical conditions, particularly during its past partnership with Meta, Facebook’s parent company. In another TIME report, workers described the content moderation work they performed for Meta for around $1.50 an hour as “mental trauma,” and many were diagnosed with PTSD, anxiety, and depression. Employees lacked mental health resources, and Sama engaged in union busting when workers attempted to organize for better conditions. In fact, both Meta and Sama are currently involved in a lawsuit spearheaded by former Sama employee Daniel Motaung, who was fired after attempting to lead a worker strike. Although Sama was Motaung’s direct employer, Motaung is accusing both companies of violating the Constitution of Kenya, and his advocates place special blame on Meta: Cori Crider, the co-director of the NGO representing Motaung, told TIME, “Meta designed the system that exploits moderators and gives them PTSD—and Meta is the one treating them as disposable.”

Accounts from other employees reinforce Crider and Motaung’s claims, as Sama is not the only firm to which Meta has outsourced content moderation. Accenture, a Dublin-based IT consulting firm, held a $500 million contract with Meta to identify and filter out graphic content. Workers based in Poland, Ireland, the Philippines, and the United States provided the New York Times with accounts strikingly similar to those of Sama employees: incredibly traumatizing work rewarded with low pay and inadequate access to mental health resources.

The endless stories of large tech companies, such as Meta and OpenAI, outsourcing content moderation to unethical firms underscore a grim fact: the psychological well-being of workers—often from marginalized backgrounds—is deemed an acceptable sacrifice to advance “revolutionary” technology. Inadequate labor protections leave workers in the Global South vulnerable to exploitation across industries, but the common perception of AI as operating independently of human labor makes worker oppression in the tech world especially stark. Dominant AI paradigms theorize about programs replacing workers, artists, or even entire governments. Yet as long as the screening of sensitive content depends largely on human labor, AI development as we know it today can never be entirely disentangled from humanity.

Not only do AI algorithms have profound psychological impacts on the low-wage workers responsible for their data annotation, but they also perpetuate further oppression of these communities by working to police and unfairly monitor them. Notably, a sizable amount of data annotation work is outsourced to Latin America, where, similar to the Kenyan workers described previously, employees give accounts of long hours, low wages, and expulsion from tasks or chat channels after asking clarifying questions. Simultaneously, however, numerous countries throughout the region are beginning to utilize government-sponsored AI programs, many with the intent of reducing crime through predictive policing. Many of the datasets these programs use are known to perpetuate racial, gender, and class discrimination. 

For example, in Venezuela—a country whose residents make up the majority of AI data annotators in Latin America—videos of English-speaking, AI-generated news broadcasters began circulating on various social media sites and the state-owned television platform (Venezolana de Televisión) in February. The deep-fake news anchors worked to spread pro-government propaganda, arguing that claims of the country’s hyperinflation, food shortages, and state censorship are exaggerated. 

It is important to note that such government exploitation of AI is not inevitable: efforts within some Latin American countries to monitor AI usage through national councils or observatories offer some promise that harmful AI programs will eventually be curtailed. For now, however, some governments in Latin America and other parts of the Global South remain willing to use AI algorithms to perpetuate harm, making it unlikely that any substantial regulations will be implemented in these regions in the immediate future. Until meaningful regulations are adopted, large tech companies will continue to target areas where they can get away with paying low wages and exploiting lax labor protections, furthering long cycles of colonial exploitation in the Global South.

In dialogues about AI and its potential uses, labor exploitation and its intersection with the psychological well-being of marginalized communities need to be considered. Concerns about AI “taking over humanity” or debates surrounding the regulation of AI systems here in the United States cannot be separated from a broader conversation about exploitation of workers across the world. Until large tech companies and government entities stop treating data annotation workers as disposable, truly “ethical” implementations of AI cannot exist.  
