
Networked Realities of Portraits: Policing through Online Databases

What happens when you are no longer the sole owner of your face?

We live in a networked reality, meaning that the realities that dwell both online and offline don't exist in isolation: they are passed through various channels, transmitted to other spaces, and shared with audiences larger than those immediately accessible to us. Our realities are increasingly captured in photographs that circulate through networks such as text messages, social media platforms, and, by way of surveillance, even police databases. While each of these platforms raises its own concerns, it is especially important to consider the implications of a networked reality for what we most wish to claim ownership over: our very bodies and faces.

Due to their networked nature, photographs that pass through policing databases by way of facial recognition technology increase the potential for harsh forms of policing by replicating and amplifying the profiling and misidentification that already exist in offline policing practices. This technology also renders the subjects of policing more visible while allowing the police to take on a seemingly imperceptible identity as they monitor the population through an online database. Together, misidentification and the false anonymity of the policing apparatus abstract away much of the visibility and responsibility that attach to a physical policing force. The increased use of facial recognition technology for surveillance therefore demands increased accountability for the policing and monitoring tactics of the U.S. policing apparatus.

Many arms of the American policing apparatus use databases of facial photographs to monitor, police, and apprehend various populations. One such system is operated by the Federal Bureau of Investigation. The FBI's Next Generation Identification System includes a variety of components for keeping tabs on the population, including facial recognition as well as other biometrics such as iris image recognition. It is the largest facial recognition surveillance system in the U.S., containing about 117 million photographs of Americans; in other words, roughly one in two American adults has their face housed on a law enforcement facial recognition network. The system collects photos mostly from driver's license databases amassed from states and cities that give the FBI access to their records in exchange for access to the facial recognition system. Photographs are also collected as real-time scans from cameras in public places such as highways and ATMs.

The FBI's networked database elucidates some of the more chilling aspects of a networked portrait with regard to visibility. The use of surveillance systems like the Next Generation Identification System risks amplifying the profiling and misidentification that already persist within offline policing methods.

As faces become digitized through surveillance, they lose some of the nuances and subtleties that render them human. Facial recognition technology is harsh: it reduces a face to a set of physical features that are then policed and monitored. In reality, our faces are more than physical features. A face must be read in context, so that the way one's lips move when they speak or the curve of someone's eyes when they smile are taken into account. As Teju Cole postulates in one of his articles about modern-day portraits: "I am not disembodied…I am not an abstraction…I am not my face. But a set of features retains affect, as in a cistern, and from this something more subtle can be retrieved." These nuanced traits are what make a face human rather than an abstraction of physical features. Policing surveillance networks do not consider these subtleties, and faces are stripped of their innate human nuances. As a result, certain demographics of the United States population are more heavily policed because they are more visible to surveillance systems.

According to a study conducted by Georgetown Law's Center for Privacy and Technology, as facial recognition technologies continue to be used by both local and federal police departments, they are expected to disproportionately affect Black communities. Because Black communities are already heavily policed in the United States, police departments' facial recognition databases already contain a disproportionate number of photographs of Black individuals. Additionally, because facial recognition technology may work less accurately on Black faces, based on evidence from an FBI study, Black individuals are more likely to be wrongly profiled and mistakenly identified through this network.

The harshness of the technology and its proneness to inaccuracy demonstrate how a networked photograph can amplify violent and inhumane policing: individuals are profiled and risk being misidentified, especially in populations, like Black communities, that are already heavily policed.

The negative implications of using facial recognition technology for policing are also on display at the US-Mexico border. Various technologies of recognition, including drones, remote surveillance monitors, and Tethered Aerostat Radar Systems, are used to police the movements of individuals in the borderlands region. This has produced the hypervisibility and subsequent policing of certain migrants, while others, such as those crossing the Sonoran Desert or the Rio Grande, face tremendous hazards "outside" the purview of the American policing apparatus. These networks of photographs visualize the subjects of policing while the entity truly responsible, the US government, remains invisible, able to continue implementing violent policies and policing strategies without a strong public visualization of these problematic approaches.

Further, the general population does not have access to the FBI's Next Generation Identification database. Of the various police agencies in the U.S., only 52, fewer than 10 percent, make their databases available to the general public. Additionally, according to the Georgetown Law study, the general public is largely unaware that such a database exists. The problem with this lack of access to such an expansive database is that it gives the American policing apparatus more ways to share information about the population, from local to federal levels, without consent. Individuals understand that they are being monitored at an ATM, or that taking a driver's license picture gives state policing departments access to their photograph and information. Given their limited access to the database, however, they may not be aware that a picture taken at an ATM or local DMV could be used to police them. With full knowledge of how their photographs circulate, many individuals would opt out of being monitored and policed in this way. Because the general public has limited access to and knowledge of the FBI database, they are instead opted in to this system of policing without their informed consent.

A lack of access to this network also means that individuals can be more heavily and frequently policed and monitored, in ways and in places of which they are unaware. For example, the system is often used as a digitized stand-in for a line-up, in which suspects would be asked to stand in a line to be matched against the features of an alleged perpetrator. While individuals can be investigated for a crime without their knowledge even without the FBI's database, policing through this technology, both within the U.S. and at its borders, perpetuates a hypervisibility of subjects and an invisibility of policing units. An online sphere in which the U.S. government can monitor individuals allows harsh and violent policing methods to remain hidden behind a false sense of anonymity; the subjects of policing are highly visible to policing departments, yet the police apparatus itself, which has typically manifested physically as an officer in uniform or a police car, remains invisible through technologies that allow policing to occur outside the public eye.

The problems of misidentification and visibility raised by networked police facial recognition surveillance underscore the serious need for a transformation of the police apparatus in the U.S. Populations both within the country and in regions contiguous to it already experience forms of police brutality that result in harm and death. With an online database for policing, individuals are subjected to similar forms of policing that can manifest into real-life outcomes more quickly because they are more visible and more heavily monitored and surveilled. Police departments in the U.S. should be subjected to rigorous auditing of this technology and of their use of the network by other agencies such as the Department of Justice. Such auditing can also produce greater accountability and regulation for the U.S. police apparatus, so that it cannot deploy violent policing tactics without repercussions.

Photo: “Facial Recognition”
