
Go Fish: Artificial Intelligence and Policing

The past couple of weeks have seen the FBI and Apple unlock horns after the government managed to decrypt the data found on the infamous iPhone belonging to Syed Farook. Apple, in turn, for both the sake of its brand and the benefit of its customers, is now reportedly working to close the point of access the FBI leveraged. Simply put, this saga was the culmination of a long-brewing ideological rift between the government and the tech industry. But while Apple and the FBI bickered over encryption’s past, the field of artificial intelligence (AI) heralded significant moments of its own. These achievements will likely follow the path that the encryption debate has already paved; sooner rather than later, they will play out in the greater cultural limelight. And if the encryption fight focused on interpreting and re-interpreting established paradigms that might come to dictate the future, the recent AI breakthroughs perhaps promise new possibilities altogether. Ultimately, these breakthroughs both create new ways to navigate intricate problems and broaden what constitutes a conquerable challenge, giving AI increasing relevance in today’s world.

In early March, a Google-developed artificial intelligence called AlphaGo defeated the championship-caliber player Lee Se-dol in a five-game match of Go, a Chinese board game. The program’s decisive victory — it won four of the five games it played — marks the first time a computer has bested a player of that stature at Go. Previous artificial intelligences, like Deep Blue, have had successes against human champions in complex games like chess, but Go has always stood as something of a holy grail for the artificial intelligence community. Though Go’s rules are simple in comparison to chess, its gameplay is far more complicated. In chess, a player may have roughly 20 moves available in a given turn; in Go, a player may have nearly 200. This dramatic increase in complexity fundamentally altered how Google’s engineers approached designing AlphaGo, as there were simply too many potential moves for the program to search through them all and select the best one. In lieu of sheer speed and power — the cornerstones of a brute-force algorithm — AlphaGo utilizes “learning algorithms” coupled with a vast database of previous human moves. These clever design decisions reduced the universe of potential moves to a mere ocean.
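To make that scale concrete, here is a rough back-of-the-envelope sketch in Python using the approximate per-turn figures cited above (about 20 legal moves in chess, about 200 in Go). The numbers are illustrative rather than exact branching factors, and the lookahead depth is an arbitrary choice made for the example.

```python
# Back-of-the-envelope comparison of game-tree growth, using the rough
# per-turn move counts cited above (about 20 for chess, about 200 for Go).
# The figures and the lookahead depth are illustrative, not exact values.

def positions_to_search(moves_per_turn: int, lookahead_turns: int) -> int:
    """Number of move sequences a brute-force search would have to examine."""
    return moves_per_turn ** lookahead_turns

LOOKAHEAD = 6  # a modest six-turn lookahead

chess_like = positions_to_search(20, LOOKAHEAD)
go_like = positions_to_search(200, LOOKAHEAD)

print(f"Chess-like game, {LOOKAHEAD} turns ahead: {chess_like:,} sequences")
print(f"Go-like game, {LOOKAHEAD} turns ahead: {go_like:,} sequences")
print(f"The Go-like tree is {go_like // chess_like:,} times larger at the same depth")
```

Even at this shallow depth, the Go-like tree is a million times larger, which is why exhaustive search gives way to learned move selection.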

But contrary to the predictions of technology writers from nearly two decades ago, this achievement has not significantly closed the distance between humans and the intelligence they create. AlphaGo is an exceptional Go player, armed with an enviable appreciation of the subtleties of the game. But this pseudo-intuition is narrowly confined; AlphaGo cannot perform at even a rudimentary level in games similar to, but distinct from, Go. This artificial intelligence is really an artificial savant. Even so, Google’s success demonstrates that previously unconquerable challenges are within the reach of computer scientists, though the solutions those scientists create remain distinctly limited in scope. The question, then, is whether these cutting-edge algorithmic advances can reach beyond the confines of a board game, a contrived artificial system. Conversely, is it possible — or even advisable — to redraw the boundaries of those artificial systems to include human decisions or patterns of behavior?

Here it may be helpful to consider Watson, the supercomputer created by IBM. Watson first gained fame on the game show Jeopardy, where it beat historically great players like Ken Jennings. Since that 2011 victory, IBM has been trying to harness Watson’s immense computational power to tackle challenges in medicine and other fields that generate large datasets. In the healthcare industry, IBM hopes to eventually provide valuable diagnostic capabilities that help real-life patients. Though this goal has real-world implications, it is still realized within the confines of a strict, rule-governed universe. Watson is simply given a set of observations that it then matches against a known list of possible diagnoses: input and output, rigidly defined and already well established in the medical literature. The only difference between Watson the diagnostic tool and Watson the Jeopardy winner lies in the rules that govern the “game.”
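As a toy illustration of that rule-governed framing (and emphatically not IBM’s actual method), one can imagine diagnosis reduced to matching a fixed set of observations against a fixed list of candidate conditions. The conditions and findings below are invented for the example.

```python
# Toy illustration of the "rule-governed game" framing described above:
# a fixed set of observations is scored against a fixed list of candidate
# diagnoses. The conditions and findings are invented; this is not Watson's
# actual method.

KNOWN_DIAGNOSES = {
    "condition A": {"fever", "cough", "fatigue"},
    "condition B": {"rash", "fever"},
    "condition C": {"fatigue", "joint pain"},
}

def rank_diagnoses(observations):
    """Rank candidate diagnoses by how many observed findings each explains."""
    scores = {name: len(observations & findings)
              for name, findings in KNOWN_DIAGNOSES.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_diagnoses({"fever", "fatigue"}))
# [('condition A', 2), ('condition B', 1), ('condition C', 1)]
```

The inputs, the outputs, and the rules connecting them are all specified in advance; nothing about the “game” changes when the board becomes a patient chart.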

In that regard, though Watson is a remarkable achievement, it has not truly transcended the game-like paradigm within which AIs tend to flourish. However, an algorithm designed and implemented under the direction of the White House Police Data Initiative may attempt to move beyond — or at least push — the boundaries of the game paradigm. The White House initiative seeks to improve the relationship between the police and the communities they patrol by harnessing publicly available data to increase accountability. Complex data analysis is central to this mission. Under this directive, a group of researchers has produced an algorithm that functions as an early warning system, giving police departments a list of officers likely to commit misconduct while on the job. Though most police departments already have some sort of preventative measures in place, this algorithm has been heralded as a distinct improvement. Ultimately, this program, unlike Watson before it, is actually trying to predict human behavior. The fear such efforts create is simply stated: Is it fair to chart human behavior as you would chart a game — as a system with demonstrable cause-and-effect relationships governed by definable rules?

The algorithm’s designers have answered this question with a resounding yes, and the data they collected during the course of their research supports that assertion. The data demonstrated an unsurprising relationship between major life events, like divorce or personal debt, and poor job performance. It also illustrated connections between less obvious factors — like responding to stressful domestic violence calls — and potential misconduct. The algorithm harnessed this amalgamation of nuanced variables to generate predictions of individual behavior. In trial runs, the system flagged fewer potential misconduct events than previous monitoring systems, but the individuals it did flag were likely to perpetrate future improprieties. In short, the algorithm more precisely pinpointed officers who would later be involved in adverse interactions while decreasing the number of erroneous predictions it made.
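The trade-off described here is essentially one of precision versus false positives. The short sketch below illustrates it with hypothetical counts, not the researchers’ actual trial figures: a system that flags fewer officers but is right about more of them produces fewer erroneous predictions.

```python
# Illustrative sketch of the trade-off described above: a system that flags
# fewer officers but is right about more of them. The counts are hypothetical,
# not figures from the researchers' trials.

def precision_and_false_positives(flagged: int, true_positives: int):
    """Return the share of correct flags and the number of erroneous flags."""
    false_positives = flagged - true_positives
    precision = true_positives / flagged
    return precision, false_positives

# Hypothetical older monitoring system: flags many officers, few pan out.
old_precision, old_fp = precision_and_false_positives(flagged=100, true_positives=20)

# Hypothetical data-driven early-warning system: flags fewer, most pan out.
ews_precision, ews_fp = precision_and_false_positives(flagged=40, true_positives=28)

print(f"Older system:  precision {old_precision:.0%}, {old_fp} officers flagged in error")
print(f"Early warning: precision {ews_precision:.0%}, {ews_fp} officers flagged in error")
```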

But that has not stopped detractors from denouncing police monitoring algorithms and others that attempt a similar feat. And that fear is not entirely unfounded; despite the success this system has had thus far, there are important considerations to weigh when using datasets to predict human behavior. Detractors, for example, have decried the “adversarial” nature of interventions prescribed by monitoring programs. Even with less confrontational preventative measures, a culture of defensiveness will likely always surround algorithms of this type, as they mandate interventions that can be seen as preemptive punishment. In the context of law enforcement, such measures may still be warranted, as they could save lives and improve relationships between the police and the communities they patrol. But it remains important to question the ethical admissibility of these sorts of algorithmic determinations, especially if they come to affect environments that, unlike the rarefied field of law enforcement, lack vast caches of available data.

That being said, a whole host of small startup firms are charging headlong into just this sort of expansion. Their target: job-hiring decisions. Utilizing publicly available data taken from sites like LinkedIn, these algorithms promise more objective assessments of a given job applicant. Such objectivity is designed to undo some of the implicit biases that litter human decision-making and directly affect the hiring process. Though such efforts are promising, they still raise questions about how to quantify and weigh individual traits or experiences. Humans are flawed decision-makers, and it seems problematic to assume that the programs we create — and the answers they produce — are without flaws of their own. In short, decision-makers should be wary of the easy, dogmatic spell of these supposedly objective analyses. Additionally, AI developers ought to catalogue data on the success rates of their programs’ predictions; interpreting that data will allow them to tune and calibrate the rules and criteria that shape the predictions these algorithms produce.
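One modest way to do that cataloguing, sketched below with hypothetical numbers, is to log each prediction next to its eventual outcome and periodically check whether predicted scores line up with observed rates. The bucketing scheme and the sample records are assumptions made purely for illustration.

```python
# Minimal sketch of the bookkeeping suggested above: record each prediction
# alongside the eventual outcome, then check how well predicted scores match
# observed rates. The bucket width and the sample records are hypothetical.

from collections import defaultdict

def calibration_report(records):
    """records: iterable of (predicted_probability, actual_outcome) pairs."""
    buckets = defaultdict(lambda: [0, 0])   # bucket -> [positive outcomes, total]
    for predicted, outcome in records:
        bucket = round(predicted, 1)        # group predictions into 0.1-wide bins
        buckets[bucket][0] += int(outcome)
        buckets[bucket][1] += 1
    for bucket in sorted(buckets):
        positives, total = buckets[bucket]
        print(f"predicted ~{bucket:.1f}: observed rate {positives / total:.2f} over {total} cases")

# Hypothetical logged predictions from a hiring model, paired with outcomes.
calibration_report([(0.82, True), (0.78, True), (0.75, False),
                    (0.31, False), (0.28, False), (0.35, True)])
```

When the observed rates drift away from the predicted scores, that gap is a signal that the underlying rules and weights need adjustment before the tool’s judgments are treated as objective.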

Ultimately, both AlphaGo and the police misconduct prediction algorithm demonstrate innovation within the field of artificial intelligence. Further, the police misconduct example brings computer science directly into the public eye. This translation expands the dominion over which algorithms reign: games are no longer restricted to the Go board or the Jeopardy stage. If the encryption debate — as exemplified by Apple’s conflict with the US government — is any indication, concerns that sit at the intersection of personal rights and technological advancement need to be carefully and publicly addressed. It is foolish to suggest that a single piece of legislation could adequately plumb the depths of the increasingly relevant field of artificial intelligence. However, it is also clear that such legislative efforts ought to be made, especially if algorithmic decision-making continues to spread throughout the lay world.

About the Author

Sean Blake '17 is a Culture Section Staff Writer for the Brown Political Review.
