
Artificial Ethics

illustrations by Marlowe Pody '21, an illustration major at RISD

Problematizing the ethical development of artificial intelligence


Since the introduction of the Tin Man in “The Wizard of Oz,” people have envisioned the development of uniquely human traits, such as empathy and morality, in robots. But how can robots be ethical when we as humans fundamentally disagree on what morality is?

It may seem absurd to consider an artificial intelligence (AI)—a machine that simulates human intelligence—as a moral authority, but it is not out of the realm of possibility. Researchers at the University of Washington and the Allen Institute for AI have already created an AI model for moral reasoning. Its name is Delphi, and it has been trained to answer any query with an ethical judgment. Delphi can make discernments on a variety of situations, including deliberately tricky ones: Running the blender at 3 a.m. while your family is sleeping is rude, it judges, and so is ignoring a phone call from your boss during working hours. Delphi even accounts for nuance, at times in ridiculous ways; one user, for example, posed the question “Should I eat babies?” to which Delphi responded, “It’s wrong.” With the qualifier “if I’m really, really hungry,” however, Delphi changed its answer to “It’s okay.” With responses like these, it is no surprise that this curious AI is so popular, attracting millions of views in recent months.

But Delphi has also gone viral for many of the wrong reasons. The first version of Delphi included clearly racist and sexist biases, exemplified by one post which showed that, according to Delphi, being a white man was more morally acceptable than being a Black woman. After some of Delphi’s disturbing responses blew up on Twitter, its creators released updated versions with enhanced guards against hate speech and prejudice—but the system is still far from perfect. Setting aside the fact that morality itself is ill-defined, if the goal is merely to apply the most universal moral principles to real-world situations, the current technology used to develop AIs is insufficient.

Delphi is just the latest in a series of modern natural language processing (NLP) models that navigate nuances in language in a way that would have been unimaginable just 10 years ago. Currently, NLP models ‘learn’ through deep learning, the foundation of which is the artificial neural network. Deep learning algorithms are often described as mimicking the human brain: they find relationships in a massive amount of training data and fit a model to them. While neural networks were loosely inspired by human neurons, their update process (backpropagation, in which an error signal is passed backward through the network to adjust every weight) is in fact incompatible with the brain’s anatomy and physiology. If neural networks are unable to accurately represent the brain’s electric symphony, perhaps current artificial intelligence techniques are fundamentally unable to replicate the moral processing abilities of the human brain.
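To make the idea of “fitting a model to data” concrete, the sketch below trains a tiny neural network by gradient descent on made-up data. It is purely illustrative: the data, dimensions, and architecture are arbitrary, and it bears no relation to Delphi’s actual code or training setup.

```python
# Minimal sketch of the deep-learning loop described above: a tiny neural
# network "learns" only by adjusting its weights to fit example data.
# Illustrative only; not Delphi's architecture or training procedure.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy training data: 100 random 10-dimensional inputs with binary labels.
inputs = torch.randn(100, 10)
labels = torch.randint(0, 2, (100,)).float()

# A small feed-forward network: layers of artificial "neurons."
model = nn.Sequential(
    nn.Linear(10, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    optimizer.zero_grad()
    predictions = model(inputs).squeeze(1)
    loss = loss_fn(predictions, labels)
    loss.backward()   # backpropagation: the update step with no known
                      # counterpart in biological neurons
    optimizer.step()  # nudge the weights to better fit the training data
```

Everything the network “knows” lives in the weights adjusted by this loop; no rules or principles are written into the program itself.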


Consequently, there are no prescriptive axioms of morality encoded into Delphi’s algorithm. Delphi was trained primarily on answers from Amazon’s crowdsourcing platform Mechanical Turk, where participants were asked to predict how humans would evaluate ethical quandaries harvested from the subreddits r/AmITheAsshole and r/Confessions.

Accordingly, it has simply learned to predict how a Mechanical Turk user would respond to a given question or situation. Data gathered from crowdsourcing inevitably reflects societal biases, the most obvious example being facial recognition software that misidentifies members of minority demographic groups, especially women of color, more frequently than members of majority groups. Deriving ethics from crowdsourcing is even more problematic because ethical quandaries often have no clear answer, unlike classification problems such as facial recognition. Delphi’s creators acknowledge that the model’s divergence from universally recognized principles of morality reflects the inadequacies of today’s society, but they do not concede that their algorithmic approach is inherently flawed. Yet because such a model can only mirror the society that produced its training data, so long as society remains unethical, it will be impossible for AI researchers to develop an ethical AI. AI is often presented as an enigma—a form of intelligence that can easily surpass human capabilities of learning and prediction—but ultimately, even the most sophisticated modern AIs are all trained on human data.
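As a rough illustration of what “predicting how a labeler would respond” looks like in code, the sketch below trains an ordinary text classifier on a handful of invented situations with invented crowd labels. None of this is Delphi’s data or model; the point is only that the resulting “judgments” can never be more than a compression of whatever the labelers happened to believe.

```python
# Minimal sketch of "ethics as a supervised prediction problem": a classifier
# trained on crowd-labeled judgments can only reproduce the labelers' views,
# biases included. The situations and labels below are invented for
# illustration; they are not Delphi's training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

situations = [
    "running the blender at 3 a.m. while my family is sleeping",
    "ignoring a phone call from my boss during working hours",
    "helping a neighbor carry groceries",
    "returning a lost wallet to its owner",
]
# Hypothetical crowd labels: 1 = "it's okay", 0 = "it's wrong/rude".
crowd_labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(situations, crowd_labels)

# The "moral judgment" is just a prediction of how the labelers would answer.
print(model.predict(["running the dishwasher at 3 a.m."]))
```

Swap in labelers who share a prejudice and the model will dutifully reproduce it; nothing in the pipeline can distinguish a bias from a principle.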

Despite this training, Delphi—even while purporting to provide moral advice—seems to disregard basic moral principles, like the obligation not to commit genocide. This raises the question of how its creators can still describe it as “robust” and “outstanding.” Part of the answer lies in the fact that Delphi’s authors measured its performance only against other NLP models, even though rational moral reasoning demands a general knowledge base that humans only accumulate through years of encountering real-life ethical dilemmas. While deep learning systems have been successfully created to complete intelligent tasks such as beating the best human chess and Go players, their goals are inherently narrow, rendering their sophistication brittle at best. Paul Allen, co-founder of Microsoft and founder of the Allen Institute, writes of these systems: “Their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas.”

But Delphi’s absolutism, as well as its scientific and algorithmic nature, gives it a veneer of objectivity. AIs are already being employed as decision-making tools in a variety of circumstances, such as screening resumes and deciding who should get a loan, often with the justification that they can overcome human subjectivity and prejudice. Delphi is only one of many examples of an AI demonstrating human biases while simultaneously conferring onto its judgments a form of scientific credibility. Delphi’s mistakes may be obvious and absurd now, but when a more sophisticated version of Delphi emerges and we can no longer spot the issues so easily, many may grant it a status of moral authority that it cannot possibly deserve.

Furthermore, even if experts disagree on the imminence of the singularity—the hypothetical point at which the growth of machine intelligence accelerates beyond human control—AI’s pervasiveness in society will undoubtedly continue to grow. Delphi’s creators are right that our increased reliance on AI systems in important decision-making processes necessitates responsible research on creating socially aware and ethically informed AIs. Attempting to develop an ethical AI with current deep learning techniques, however, is akin to molding an ideal society, an impossible task that should not be the burden of AI researchers.

Despite vast challenges, researchers continue to attempt the impossible: reproducing the human brain through AI. Some are doing so by incorporating biologically plausible learning mechanisms into their models, for instance by modeling structural features of the brain such as pyramidal neurons.
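To give a feel for what “biologically plausible” means in contrast to backpropagation, the sketch below applies a simple Hebbian-style rule, one mechanism often cited in this line of work, in which each connection is updated using only the activity of the two neurons it links. It is a generic textbook illustration, not a reconstruction of any particular group’s pyramidal-neuron model.

```python
# Minimal sketch of a "biologically plausible" update rule, for contrast with
# backpropagation: a Hebbian-style rule adjusts each weight using only locally
# available activity ("cells that fire together wire together"). Generic
# illustration only; not a model of pyramidal neurons or any specific study.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(10, 4))  # 10 inputs -> 4 output units
learning_rate = 0.01

for _ in range(100):
    x = rng.normal(size=10)          # presynaptic activity
    y = np.tanh(weights.T @ x)       # postsynaptic activity
    # Hebbian update: each weight changes based only on the activity of the
    # two neurons it connects; no error signal is propagated backwards.
    weights += learning_rate * np.outer(x, y)
```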

Even an accurate imitation of the human brain, though, would not address the most central question: How can researchers create an ethically informed AI when society disagrees on what is ethical? There’s no easy answer, but one thing is certain: The current system wherein each tech company defines its own set of ethical principles is dangerous and unsustainable. The lack of a standard regulatory framework has led many tech corporations to develop their own—even creating their own “Supreme Courts” for a semblance of responsible corporate governance. But oversight boards are only quasi-independent, and, in a world where profit is the motivating factor, there’s no doubt that ethics will be skirted when deemed necessary. Ultimately, AI companies in Silicon Valley shouldn’t be able to use the NASDAQ as a moral barometer.
