
Why the US Government Needs to (Seriously) Invest in Artificial Intelligence

US military advantage has long rested on the superiority of US scientists, but China’s aggressive commitment to artificial intelligence (AI) research threatens to change that. To retain its place as the world leader in AI, the US needs an injection of public funding. Luckily, the Department of Defense seems aware of this: Its research and development agency, the Defense Advanced Research Projects Agency (DARPA), recently announced a $2 billion investment in developing AI with common sense. This investment represents a massive opportunity for the United States to lead an emerging field with countless military applications, and it is essential that more funding follow this initial commitment.

Current approaches to AI rely on massive amounts of labeled data, which makes many applications prohibitively expensive to explore. For instance, to develop a model that identifies birds in photos, we might need to train our machine on millions of photos labeled as containing or not containing birds. These approaches also tend to assume a static environment, which is rarely the case in the real world. As NYU neural science professor Gary Marcus points out, this “millions-of-data-points” approach is an inherently unintuitive way to learn about the world. Rather, researchers should aim to develop AI with the common sense of a two-year-old: common sense capable of making abstractions and generalizing from limited information.

A toddler, for example, can learn that the number “33” is “thirty-three” and conjecture that “11” is “onety-one,” over-generalizing very slightly but correctly guessing that there is a rule for naming numbers. A machine model would need to be taught the rules for naming numbers, or at the very least that there are rules for naming numbers. The goal is to create machine models that mimic cognitive development in early childhood.

If this project succeeds, the military applications are nearly limitless. Military use of AI is currently limited by the weaknesses outlined above: Many of the essential questions asked on the battlefield, such as “should this weapon be fired?”, cannot be reduced to millions of data points (because the data simply aren’t available) or to a static, defined environment (the terrain, our relationship to the enemy, and our tolerance for risk may differ in each case). These are problems that, at their core, cannot be solved by feeding a machine millions of data points. They require some sort of intuition, or the ability to “acquire deeper abstractions” from limited data. Furthermore, many AI projects so far have been relatively low-stakes. If a machine model fails to win a game of chess, it can learn from its mistakes and try again. If a machine model misjudges a military situation, it may lead to unnecessary loss of resources or even lives. The cost of military failure is extraordinarily high, and we must calibrate our research accordingly.

In short, although modern artificial intelligence is already quite advanced, it has a long way to go before it is ready for the battlefield. The US has the chance to be the first to develop AI capable of real-time military decision-making, which would give it an unparalleled advantage in defense. Hence, more funding is necessary.

Perhaps most compellingly, China’s AI research output grows more impressive each year, due in large part to the Chinese government’s financial investment in the field. Although the majority of groundbreaking research still comes from US scientists, some argue that AI research has already reached a tipping point where implementation matters more than discovery, and China’s strengths lie firmly in the former category. In any case, China’s research output is already extraordinary. In 2017, the Association for the Advancement of Artificial Intelligence accepted an equal number of papers from Chinese and American researchers, a statistic that would have been surprising just three or four years earlier. The association also rescheduled its conference at the last minute because the original date conflicted with Chinese New Year, and the organizers realized they could not proceed with half the invitees absent.

In addition, as Stanford professor Andrew Ng points out, Chinese scientists often enjoy an “information asymmetry” advantage: They can benefit from English-language research, whereas American researchers often cannot read or access Chinese AI research. This leads to a lack of awareness in the US of Chinese AI breakthroughs; for instance, Baidu achieved speech-recognition accuracy surpassing that of a human expert before Google and Microsoft did, but few in the Western world are aware of the accomplishment.

Although DARPA’s new project aims to supplant the old “massive amounts of data” style of AI, current AI research still relies on this strategy, in which China has a built-in advantage because of the sheer amount of data available to Chinese researchers. In the nation nicknamed the “Saudi Arabia of data” by The Economist, there are three times as many mobile phone users as in the US—almost all of whom make mobile payments, share posts, and message on the same platform, the app WeChat, making it easy for researchers to consolidate data. In Hangzhou, a city with a million more residents than New York City, the traffic lights are controlled by an AI model that uses footage from domestic surveillance cameras to account for real-time traffic flow and weather. American researchers have no way of accessing this type of information. That restriction is a net positive in that it protects the private lives of Americans, but it is a weakness of American AI research.

China is not pouring hundreds of millions of dollars into AI research because Xi Jinping finds the subject interesting. Beijing has explicitly military goals and has already established “military-commercial research laboratories” to explore possibilities such as autonomous vehicles. It recognizes that the same facial-recognition technology Snapchat uses so Gen Z can send funny pictures to each other can also be employed to conduct espionage. Artificial intelligence is a field where the line between general and military applications is quite thin, and infrastructure built to test a self-driving car, for instance, can quickly be modified to test a self-driving tank. Imagine the devastating consequences of a war in which the US is forced to send its soldiers against autonomous drones, or in which enemy AI systems learn, through repeated attempts, how to effectively cyber-attack US information operations. There is no doubt that Chinese dominance in artificial intelligence would put the US at a military and espionage disadvantage. This is what the US needs to stave off with aggressive AI funding.

Some may question why federal investment in AI is necessary given the incredible breakthroughs that have come out of private research labs. However, cooperation between the private technology sector and the military has historically been fraught, so private AI research is not necessarily accessible to the military. As Eric Schmidt, the former executive chairman of Google, remarked, many in the industry fear “the military-industrial complex using their stuff to kill people incorrectly.” In fact, Google faced intense backlash after Gizmodo—a technology and science website—published an exposé revealing Google Research’s involvement in a Department of Defense drone project. Thousands of employees signed an open letter arguing that “Google should not be involved in the business of war,” and several even resigned from their positions. A significant amount of concern also came from non-US citizens who did not want to get involved with the US military—after all, more than 70 percent of Silicon Valley engineers were born overseas. As a result, Google announced that it would end its involvement in the project when its contract expired in 2019, and it published a new code of ethics making clear its unwillingness to use AI for weapons.

Furthermore, federal labs need all the funding they can get: Even top universities like Imperial College London and Carnegie Mellon struggle with “brain drain” as PhD students and professors leave their programs for lucrative private-sector salaries that academia cannot hope to match. (This “brain drain” also slows the pace of fundamental research, since the private sector invests far less in it.) To attract the best of the best, the government needs to be able to offer at least vaguely competitive compensation.

The unsteady nature of the private sector’s willingness to cooperate with the US military, the lack of private sector investment in fundamental research, and the high salaries commanded by AI experts all highlight the necessity of greater federal funding in artificial intelligence.

DARPA is one of the nation’s crown jewels: Its research gave birth to the internet, GPS, Siri, and more. In an era when technology has revolutionized everything from the taxi industry to the way we go on vacation, it is vitally important that the US take the necessary steps to ensure that its military sees the benefits of modern AI research. DARPA’s common-sense AI program represents a huge opportunity for the US to become a leader in an emerging field and keep pace with the advances coming out of Chinese and US private labs. At the very least, it is a step in the right direction.


About the Author

Ashley Chen '20 is the President of the Brown Political Review and a Senior Staff Writer for the US Section. Ashley can be reached at ashley_chen2@brown.edu.
