Poised on the cutting edge of innovation, American universities continuously push the frontiers of knowledge in pursuit of progress for humanity. Just what this lofty endeavor entails, or how it is to be accomplished, varies widely according to universities’ interpretations of their intellectual functions. Though the academic community diverges sharply in how it carries out these functions, there is an overwhelming consensus on one crucial matter: namely, that the purpose of the scholarly enterprise is to promote the common good. The language adopted in university mission statements is indicative of this higher calling to “serve the community, the nation, and the world” (Brown) in order to “promote the public welfare” (Stanford) through the “free exchange of ideas in an ethical, interdependent, and diverse community” (Yale).
Despite these lofty aims, the growing presence of universities at the forefront of groundbreaking research is challenging this conception of the “common good.” Universities are increasingly enmeshed in the corporate and scientific spheres through their acceptance of corporate and federally funded research. Once insulated within the walls of the institution, academic research in these fields now occupies a robust and vital place in American society, contributing to economic vitality, medical progress, and rising standards of living. Indeed, it is through innovations such as the Internet, artificial intelligence (AI), and genetic modification that universities have achieved their greatest strides in promoting the general welfare. And yet it is these same technologies that have the potential to profoundly undermine it.
The emergence of national security concerns in university research raises contentious questions about the proper role of government regulation in this space, particularly when such research is federally funded. To make matters more complex, research institutions must balance competing visions of “the common good” when weighing the demands of academic freedom against those of national security.
Biotechnology is the central fixture of this debate. In the wake of the anthrax mailings that followed 9/11, the USA PATRIOT Act of 2001 and the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 established clearance requirements for personnel at research institutions that conduct studies involving select biological agents. (These requirements ran counter to the sentiment of National Security Decision Directive 189 of 1985, which mandated that “to the maximum extent possible, the products of fundamental research remain unrestricted.”) Although this was not the first time the federal government had enforced security protocols for sensitive research, these provisions marked a critical turning point in government efforts to prioritize national security.
These provisions of the post-9/11 era have had direct implications for university research and academic freedom through the emergence of regulations for Dual Use Research of Concern (DURC), which is characterized as “life sciences research that…could be directly misapplied to pose a significant threat … [to] national security.” The National Institutes of Health’s Office of Science Policy adds that “the United States Government’s oversight of DURC is aimed at preserving the benefits of life sciences research while minimizing the risk of misuse of the knowledge, information, products, or technologies provided by such research.” Thus, universities that receive federal funds to conduct research are subject to review by an institutional review entity (IRE) or a Dual Use Research of Concern Review Committee (DURRC) when performing potentially dangerous biological studies.
While this oversight is a valid exercise of government power in the interest of public health and national security, it also sets the stage for a slippery slope. Many nascent disciplines, like AI, pose a conceivable threat to national security. Should DURC classification extend beyond the life sciences when the applications of research in other disciplines are clearly vulnerable to malicious exploitation? For example, current applications of AI can be leveraged for national defense (as a vehicle for waging war and conducting espionage), but they also pose a tremendous cybersecurity threat. More specifically, today’s applications of AI and machine learning automate many of the processes that control the nation’s critical infrastructure, which includes everything from dams and banks to hospitals and energy grids. While these innovations make that infrastructure substantially more efficient and reliable, malicious actors with the technological know-how, whether foreign enemies or domestic terrorists, can exploit its vulnerabilities to paralyze the United States’ economy and defense systems.
Until an overriding national concern is made abundantly clear, it seems prudent that the university, not the government, make decisions regarding potentially sensitive research. After all, the concept of academic freedom is grounded in the belief that “the first condition of progress is complete and unlimited freedom to pursue inquiry and publish its results” (American Association of University Professors, 1915 Declaration of Principles on Academic Freedom and Academic Tenure). In turn, this raises questions of whether and to what degree universities ought to exercise caution in how they handle sensitive research topics, quantify threats, and draw the line between secure and potentially vulnerable research.
In deciding for themselves where this hazy boundary lies, universities must weigh paternalistic justifications for the regulation of sensitive research against more generalized norms of transparency and accessibility. This is an especially unenviable task given that advocates on both sides of the debate vociferously appeal to the ‘moral’ imperative of their respective causes.
Champions of academic freedom argue that research is an engine of medical progress and economic growth. The ability to freely conduct and publish sensitive research is thus vitally important to the university’s mission to promote the public welfare. This is particularly true of research in the life sciences, where work with biological agents is “critical to strengthening global response to all health threats and hazards” (World Health Organization). By framing academic freedom as the means to a better society, proponents of unfettered research present sensitive research as an ethical imperative. Meanwhile, the national security and public health communities generally believe that sensitive biological research poses a substantial threat to the public, given its potential for exploitation in a bioterrorist event. Thus, national security advocates offer a paternalistic justification for regulating dual use research.
These competing views were put to the test in 2011, when controversy emerged over whether to fully release the results of two studies demonstrating the pandemic potential of the H5N1 virus. Though the studies were ultimately published, the journal Science notes that “the debate prompted influenza scientists to self-impose a landmark moratorium on some types of H5N1 research [and] the U.S. government to set new controls on taxpayer-funded studies involving potentially dangerous pathogens.” The controversy prompted several entities, including the World Health Organization, to consider the ethical implications of sensitive research. On the one hand, the conduct and publication of studies involving dangerous pathogens may enable bioterrorists; on the other, self-imposed moratoriums on such research may hinder emergency preparedness and response to a global pandemic.
The ramifications of this ethical quandary extend far beyond biological research to other emergent fields such as AI, making this debate more consequential than ever before. Though such research is not currently subject to DURC regulations, “self-imposed moratoriums” on sensitive work present the same trade-off between progress and safety, both crucial components of the general welfare. Beyond the more general question of whether universities are capable of policing themselves, the ethical questions surrounding sensitive research admit no clear answer. Yet in formulating research policy, one must return to the primary mission of the American university: to pursue knowledge in the furtherance of collective welfare and progress. The byproducts of that pursuit (economic stimulation, technological innovation, medical progress, and so forth) are not merely subsidiary benefits but key elements of its fulfillment.
As an increasing number of intellectual disciplines develop dual use technologies, policies surrounding sensitive research are sure to become a politically charged issue involving economic, health, corporate, and government interests. While this debate is still in its infancy, universities need to take active ownership of the interplay between economic, technological, and medical levers in the production of knowledge. To the extent that research can both promote and hinder these ends, it should be duly regarded as a vehicle of great power and great responsibility. While it remains unclear what this implies for the balance between academic freedom and national security, one thing is overwhelmingly clear: if universities want to remain faithful to their mission to promote the public welfare, they must do some serious soul-searching to balance transparency and academic freedom against protecting the public from potential harm.