Over the past decade, society has witnessed the rising presence of artificial intelligence (AI) technology. The global AI market is now projected to grow at a "compound annual growth rate of 37.3 percent from 2023 to 2030." While AI has helped to enhance productivity and decision-making processes, the possibility of its malicious use is rising, prompting much unease. These concerns are particularly relevant to Americans: within the next five years, the United States AI market is expected to grow to a value of $223.40 billion. As the United States is undoubtedly in the competition to become the world's AI hub, we must ask: How successful are we in regulating its use?
With America's two major political rivals, China and Russia, viewing AI as the "new global arms race," it is imperative now more than ever that the US consider this question. Until March 3, 2023, no bill whose primary aim was "to protect or thwart the development of AI's potentially dangerous aspect" had been proposed. This lack of regulation can be attributed in large part to a deep lack of familiarity with AI in American governing bodies. The only member of Congress with an academic background in the field is Rep. Jay Obernolte (R-CA), who holds a master's degree in Artificial Intelligence from the University of California, Los Angeles. Obernolte himself has remarked that it is surprising "how much time [he] spend[s] explaining to [his] colleagues that the chief dangers of AI will not come from evil robots with lasers coming out of their eyes."
The AI sector is subject to constant change, growth, and ultimately, evolution. It goes without saying that the threats posed by AI evolve just as quickly. How can the United States be expected to stay current on and regulate this technology if the people making and amending its laws are not equipped with the knowledge necessary to understand the capacity of AI?
While it is commendable that some members of Congress, like Rep. Don Beyer (D-VA), who is currently pursuing a master's degree in AI from George Mason University, aim to develop their understanding of AI and apply it to their legislative work, the fact remains that it is unrealistic to expect all of Congress to do the same.
Currently, the United States displays a certain degree of divisiveness between its technological and legislative institutions. This is especially evident in the purposes of two separate organizations: the Cybersecurity and Infrastructure Security Agency (CISA) and Congress. While CISA is grounded in employing technological solutions to combat cybersecurity risks, lawmakers primarily rely on strengthening existing policies to overcome technological threats. The two are distinguished in the sense that CISA's technocentric stance suggests that it believes in fighting technological threats with technology, while the law-making process employs a more anthropocentric viewpoint, holding human beings accountable for their use of technology.
The key difference? CISA is staffed by individuals who demonstrate a superior understanding of the technological field, a crucial feature lacking in Congress. Lawmakers have taken steps to address their lack of AI knowledge by drawing on the apolitical, nonpartisan Technology Policy Committee, which "regularly educates and informs Congress, the Administration, and the courts about significant developments in the computing field and how those developments affect public policy in the United States," but it is not enough. The Technology Policy Committee is a step in the right direction; however, there is only so much that policymakers can do with information about technological developments they do not fully understand. Questions like "What are the implications and the scope of AI technological developments for current policies?" can only be answered when there is an understanding of both technology and policymaking.
To combat the threats of AI, it is critical that solutions are rooted in an understanding of technology's inner mechanisms. This can be accomplished through the creation of a government body that serves as an intersection of technology and policymaking. I will refer to this body as the Technocentric Coalition of Lawmakers (TCL): a body of individuals who have an academic background in a technology-related field (Computer Science, Data Science, Artificial Intelligence, Computer Engineering, and so on) and an interest in policymaking. Leading by example is the European Union, which has pioneered efforts to protect its general public from the dangers of AI through the introduction of the Artificial Intelligence Liability Directive. This directive seeks to "establish rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI." Such examples demonstrate the potential for regulating AI globally, and they make clear that the future of AI regulation must be rooted in technical understanding if its solutions are to be sustainable and feasible.
The solution I propose is not a single law but rather the establishment of a committee specifically dedicated to regulating AI. Because the field is constantly evolving and growing, AI and the threats associated with it demand a committee able to keep pace with that evolution, something a written law on its own could not accomplish without constant amendment.
How would the logistics of the TCL work? Considering that democracy is rooted in America's foundational values, it is only fitting that the formation of the TCL would involve an election rather than an appointment. Eligible candidates must have acquired at least a college-level degree in a technological field, in addition to meeting the qualifications required for election to the US Senate. It is important to note that the TCL would function as a permanent subcommittee of the Senate, regardless of the party leading the country. In addition, it would have a representative from every state, ensuring that each state's AI-related needs and concerns have a platform where they can be heard and addressed. As such, the TCL seeks to regulate AI in a manner that fosters bipartisan and interstate collaboration, cooperation, and compromise, promoting the responsible and secure use of AI nationwide.
The current US institutions relevant to the technology space serve in a largely advisory capacity. The TCL does not seek to replace these bodies. Instead, it aims to distinguish itself from them by remaining active at the heart of policymaking. This body would have the power and capacity to formulate informed laws that prevent AI from being used with malicious intent. It would stand as a symbol of the growth and evolution of the nation's legislative system into one equipped to keep pace with today's rapidly advancing technology sector. With great power comes great responsibility, and it is time we harness the potential of artificial intelligence in a responsible manner.