Robot Rights?

Do robots deserve human rights? It seems like a ridiculous question: Animals have animal rights, property is protected by property rights, and humans are given human rights. But as artificial intelligence continues to develop at an astonishing pace, humanoid machines that can think, communicate, and respond are becoming a (somewhat limited) reality. In October 2017, the Saudi Arabian government granted citizenship to a robot named Sophia, who is not only the world’s first robot citizen, but also the first robot Innovation Ambassador for the United Nations Development Programme.

The development of humanoid robots raises all kinds of questions about what constitutes identity and how and to whom — or what — rights should be granted. There are already multiple iterations of a Transhumanist Bill of Rights circulating the internet, created by people who want to ensure that in the coming years, as robots with greater intellectual capacity than humans become a reality, those robots are given the same rights as humans. At the moment, the conversation about proposed rights for cyborgs (simply defined as “bionic humans” by Merriam-Webster) may seem hypothetical, unnecessary, and more than a little strange, but these conversations are worth having. As technology advances, regulatory policy often lags far behind. By having these conversations early and publicly, scientists can consider the moral and legal effects of this technology while they’re creating it, and as a society we’ll be better prepared to take on the inevitable reality of a world rife with intelligent machines.

What is a human? It is a complicated question that, unsurprisingly, has not generated a consensus among biologists or philosophers. The biological definition is itself contested: some scientists think of humans broadly as the genus Homo, while others consider humans to be only the species Homo sapiens, and others still hold a narrower conception of humans as the subspecies Homo sapiens sapiens. Sophia and other humanoid robots clearly do not belong to any of these categories, as robots have not evolved autonomously. On the other hand, some philosophers at the Quality of Life Research Center in Denmark describe the intersection of reason and feeling as a fundamental component of experiencing humanity. A robot can be programmed to reason logically and can respond to stimuli with rational, logical responses. But does it really constitute feeling if a robot is simply programmed to “feel” a certain way? Arguably, human brains are similarly wired to respond to specific events in certain ways. Unlike robotic feelings, however, human emotions affect decision making and disrupt rational thought processes. Morality is often considered a primary characteristic that defines humans and informs the decision-making process. Morality is tied to consciousness, and machines have demonstrated a functional consciousness, in that they are aware of their internal states and external surroundings and, in some cases, have demonstrated self-awareness. Despite these similarities, it remains difficult to conceptualize a piece of metal as human.

"The development of humanoid robots raises all kinds of questions about what constitutes identity and how and to whom — or what — rights should be granted."

Given the differences between humans and robots, it seems strange to extend human rights to machines, or to human-robot hybrids. But some futurists and technologists have already started to consider robotic rights through a human rights lens. Richard MacKinnon, an ACLU board member and former president of EFF-Austin, a non-profit advocating for digital rights, published a “Cyborg Bill of Rights” with the hope that the rights he specified and the language he used would become a model for NGOs and governments. MacKinnon focuses primarily on electronic rights as they relate to the body. V1.0 of the bill includes equality for mutants, stating that “A legally recognized mutant shall enjoy all the rights, benefits, and responsibilities extended to natural persons.” Aral Balkan, a self-proclaimed “cyborg rights activist,” published his own “Universal Declaration of Cyborg Rights” online. Balkan’s version similarly states that the articles expressed in the Universal Declaration of Human Rights should apply to cyborgs. In both documents, the authors advocate expanding the human rights that already exist to meet the new boundaries of self brought about by the networked age in which we’re living.

Machines are not self-reliant, nor do they think autonomously, at least for now. AI experts predict that by 2060, machines will be able to perform any task just as well as, if not better than, humans. It is some consolation that a few scientists are considering the ethical implications of essentially creating a new species. Raja Chatila, a director of research at the Centre national de la recherche scientifique (CNRS), published a study in which he questioned whether robots should be granted personhood and experience the associated rights and duties. In April 2018, the Irish Times asked the pointed question, “can you embed a conscience?” The article examines the ethical problems associated with programming a robot to make moral decisions and explores issues of legal liability: if a robot goes rogue, should the robot be held responsible? Or does liability lie with the manufacturer? Grappling with questions of morality and ethics now will hopefully have a positive influence on the kind of technology that is created.

If scientists consider the moral and legal implications of developing this technology while they’re developing it, then as a society we’ll be better prepared to respond to the reality of sentient and hyper-intelligent machines. So far, there have been limited steps taken by governments to address the ethical development of artificially-intelligent robots. In February 2019, the European Parliament adopted a resolution on a Comprehensive European Industrial Policy on Artificial Intelligence and Robotics. The resolution underscored the need to review and adapt existing rules and processes to account for technological developments and reiterated the responsibility of manufacturers to develop AI in a way that “preserve[s] the dignity, autonomy and self-determination of the individual.” The following month, the European Parliament took steps to implement an ethical framework for the development of AI, articulating the need to address the ethical implications of this technology given its foreseeable pervasiveness. The document highlighted the need for greater legal and ethical oversight.

Though the cyborg rights activists have gone further than the European Parliament in articulating specific rights for robots, the topic has clearly garnered attention among both politicians and scientists. Going forward, the scientists who are developing AI technology should continue to consider the associated legal and moral ramifications, and governments should ensure that this technology is being developed ethically. It is also worth noting that the Universal Declaration of Human Rights is not evenly or equitably implemented around the world, and millions of marginalized people continue to lack fundamental human rights today. Conversations about rights for robots, however unnecessary or frivolous they may seem, are useful in that they can influence the ethical development of AI and raise pertinent questions about morality and legality.

Photo: Image via Adobe Stock

About the Author

Annie Lehman-Ludwig '20 is a Staff Writer for the World Section of the Brown Political Review. Annie can be reached at anna_lehman-ludwig@brown.edu
