
Algorithms and Bias in the Digital Age

Algorithms are created in ways that both reflect and perpetuate biases, particularly those regarding race and gender. Algorithms have long been perceived as neutral: they come from technology, are coded, and are 'logical,' which has led to a disavowal of the fact that they can be biased. In reality, the very human origins of machine learning and artificial intelligence make these systems just as fallible as the people who build them. As algorithms become more ubiquitous, making machine learning more neutral is more important than ever.

Machines “learn” from the datasets they are given as a sample pool. Because the data are collected by humans, they reflect human biases in collection and sampling; machines absorb those biases along with the data, and their findings mirror the skewed sampling pool. When Apple's Face ID system was distributed globally, it had difficulty distinguishing between Chinese faces, allowing some people to unlock each other's phones. Similarly, Google Photos' image-recognition AI mislabeled black people as gorillas because its training data contained too few dark-skinned human faces for the model to learn the distinction. These failures reflect the lack of diversity in the technology industry, which leads to oversights during coding and design.
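
To make the mechanism concrete, here is a minimal sketch using invented synthetic data and numpy only; it is not any real system's code. A simple nearest-neighbor classifier trained on a 95:5 skewed sample ends up noticeably less accurate on the underrepresented group, even though nothing in the algorithm itself mentions group membership:

```python
# Hypothetical illustration: skewed training data -> skewed error rates.
# Two synthetic "groups" of 2-D feature vectors stand in for any
# demographic split in real-world data.
import numpy as np

rng = np.random.default_rng(0)

def make_group(center, n):
    """Draw n noisy 2-D points around a group-specific center."""
    return rng.normal(loc=center, scale=1.0, size=(n, 2))

# Skewed sampling: group A dominates the training pool 95:5.
train_a = make_group(center=[0.0, 0.0], n=950)
train_b = make_group(center=[2.0, 2.0], n=50)
X_train = np.vstack([train_a, train_b])
y_train = np.array([0] * 950 + [1] * 50)

# A balanced test set reveals the disparity the training pool hides.
test_a = make_group(center=[0.0, 0.0], n=500)
test_b = make_group(center=[2.0, 2.0], n=500)

def predict(x):
    """1-nearest-neighbor: label each point by its closest training example."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

acc_a = np.mean([predict(x) == 0 for x in test_a])
acc_b = np.mean([predict(x) == 1 for x in test_b])
print(f"accuracy on majority group A: {acc_a:.2%}")
print(f"accuracy on minority group B: {acc_b:.2%}")
```

The classifier is never told which group a point belongs to; the accuracy gap comes entirely from the imbalance in the sample pool, which is the same dynamic behind the Face ID and Google Photos failures described above.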

Although it is alarming that algorithms see the world through lenses biased by their datasets, it is perhaps more concerning that algorithms can actively perpetuate those biases. Algorithms control the flow and visibility of information through search engines, which are optimized for end users' interests according to a recipe of factors known only to the companies that built them. As Safiya Noble critiques extensively in her book Algorithms of Oppression, Google's software demonstrated gender and racial biases in search results. Googling “black girls” yielded pornographic material that sexually objectified black girls, their bodies, and stereotyped attributes. Although Google has since removed explicit content from these results, biases remain visible in subtler ways in its search algorithm. For example, searching “professional attire” or “professor” yielded pictures of primarily white men, while searching “unprofessional hair” yielded images of black women. By producing images and associations along racial lines, Google amplifies biases that already exist in the real world by making them visible in digital space. Although the company has cleaned up many controversial search topics and attempted to promote inclusivity following the release of Noble's book, there is still a long way to go.
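
The perpetuation dynamic can be illustrated with a toy feedback loop. The sketch below uses invented numbers and is in no way a description of Google's proprietary ranking recipe; it only shows how any ranker that orders results by historical engagement, combined with users' tendency to click top results, compounds whatever skew the data starts with:

```python
# Hypothetical illustration: an engagement-driven ranker amplifies an
# initial skew in click data through position bias.
import random

random.seed(0)

results = ["stereotyped result", "neutral result A", "neutral result B"]
clicks = {r: 1 for r in results}   # uniform prior
clicks["stereotyped result"] = 3   # a small initial skew

for _ in range(1000):
    # Rank by historical clicks; users click the top slot most often.
    ranking = sorted(results, key=lambda r: clicks[r], reverse=True)
    weights = [3, 2, 1]            # position bias: top slot gets most attention
    chosen = random.choices(ranking, weights=weights, k=1)[0]
    clicks[chosen] += 1

print(clicks)  # the initial skew compounds over time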

Interacting with technology in racialized and gendered ways can also cultivate habits, not just mindsets, that are inherently biased. Many companies have chosen to voice and label their AI assistants as women, such as Apple with Siri and Amazon with Alexa. Portraying a virtual assistant as female furthers the idea that women exist to be commanded and to occupy secondary roles. Sophia, one of the first and most advanced humanoid robots, garnered controversy when she was made a citizen of Saudi Arabia and thereby held more rights than the average Saudi woman. Although the choice to make these machines women is often justified by the claim that female voices are perceived as more 'trustworthy,' there is just as much psychological evidence to the contrary: male speech patterns are less criticized and described just as positively, so there is no real reason why male robots or AI should not be the default. By casting digital assistants and robots as women, technology reinforces the notion of women as secondary to men. Much as technology is viewed as something that serves its user, female AI is made to serve its user's wishes. Algorithms therefore do not just result from biases; they perpetuate them by presenting a fixed view of the world and cultivating biased habits and mindsets.

With the increased application of machine learning, algorithms have the power to make life-changing decisions, and those decisions have been found to reflect biases. As algorithms become more ubiquitous in deciding loans, controlling information visibility, and monitoring crime and policing, the potential consequences of biased algorithms grow. If an algorithm reflects racial biases, will people be able to appeal its decisions in court? How can anyone assess its reliability or hold it accountable?

Going forward, these questions make it evident that ensuring, or at least increasing, the “neutrality” of algorithms is the most effective way to prevent unintended discrimination. Many have argued that the root of unintended bias in programs is the tech industry's lack of diversity: increasing diversity in Silicon Valley would produce more aware programmers writing more inclusive code with more inclusive data. Diverse teams could teach machines ethnic slang and terminology, help them anticipate a wider range of reactions, and craft better responses. Although algorithms in their current state have great power to perpetuate biases and behaviors, changing the process of their creation could foster not only a more inclusive society but also more neutral technology.

Photo: “Computer Code”

About the Author

Kavya Nayak '22 is a Staff Writer for the Culture Section of the Brown Political Review. Kavya can be reached at kavya_nayak@brown.edu.
