As technology’s impact expands, the consequences of security vulnerabilities grow more severe. In 2016, the software testing company Tricentis reported that software failures that year affected 4.4 billion people and $1.1 trillion in assets. Among those failures was the misprescription of heart medication to 300,000 people in the UK’s National Health Service, a mistake that caused patients to suffer “otherwise preventable heart attacks or strokes.”
Despite the potentially disastrous effects of software failure, the industry operates with virtually no regulatory oversight and is almost completely immune to both criminal and civil penalties in cases of negligence. To prevent catastrophic damage caused by the recklessness of unregulated firms, liability law must be extended to cover software firms, to allow executives and programmers to be prosecuted for gross negligence, and to force companies to publicly report all instances of preventable software failure.
Software development is intrinsically risky because it is so abstract: it is impossible to test every possible execution of a program, so some problems simply cannot be foreseen. In his book Software Engineering, Roger S. Pressman points out that for “even a small 100-line [program]…executing [it] less than twenty times may require 10 to the power of 14 possible paths to be executed.” For real-world systems, exhaustive testing is even more implausible: the Boeing 787’s software contains six million lines of code; the Chevy Volt’s, 10 million; an F-35 fighter jet’s, almost 25 million.
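To see why path counts explode so quickly, consider a rough illustration (the figures here are assumptions chosen for the sake of arithmetic, not Pressman’s exact construction): suppose each pass through a single loop can follow one of five distinct branches, and the loop may run up to twenty times. The number of distinct execution paths is then on the order of

5^20 ≈ 9.5 × 10^13 ≈ 10^14,

far more than any test suite could ever exercise.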
Yet despite software’s huge potential for failure, software companies that sell proprietary products are rarely held liable for their mistakes. American cryptographer Bruce Schneier argues that the current legal system places an undue burden on the consumer: although many parties play a role in any software attack, today’s legal landscape places “100 percent of the burden” on users. Thus, the costs of a cyberattack or failure are borne by unsuspecting third parties rather than by those responsible for the breakdown.
In short, because companies are rarely legally responsible for the failures of their products, they have little incentive to guard against failure. Perversely, the market rewards sloppiness, shelling out money for more features and faster releases. The result is a patch-and-release cycle in which software is hastily launched and then partially fixed, patch after patch, ad nauseam.
Some argue that the free market can sort this out on its own. After all, the cybersecurity industry is large and promising—worth almost $90 billion in 2017. But antivirus software is not a panacea. Like biological viruses, which mutate too quickly for vaccines to be 100 percent effective, computer viruses are diverse, emerge constantly, and change rapidly. To make matters worse, too many Americans cling to the false belief that the internet is so vast that a hacker is unlikely to target them. Computational power is now so advanced that it is feasible to scan the “entire internet,” leaving any publicly accessible, poorly secured IP device immediately vulnerable.
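A back-of-the-envelope calculation shows why such scanning is feasible (the probe rate here is an assumption, though it is well within the reach of a single commodity server on a gigabit connection): the IPv4 address space contains 2^32, or about 4.3 billion, addresses, so at a million probe packets per second a full sweep takes roughly

4.3 × 10^9 ÷ 10^6 ≈ 4,300 seconds,

a little over an hour.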
Additionally, most users remain unaware of malware on their computers so long as it does not affect performance. Subtler viruses can recruit private computers into a botnet—a coordinated group of internet-connected devices infected and controlled by malware—often without the owners’ knowledge. For example, in 2010, a “Citadel” botnet infected five million computers and harnessed their power to steal $500 million from bank accounts over 18 months. Yet, as Hoover Institution law associate Jane Chong writes, “many people lack reason to truly care that their computers are infected, because being part of a botnet does not especially harm them.”
Even proprietary software—software protected as intellectual property—is sold under terms written to exempt the vendor from legal liability. Software is generally sold as a license rather than a product, removing culpability from the vendor unless liability is specifically accepted in the often-skimmed licensing agreement. In this way, legal boilerplate shifts all risk to the user. And because virtually every software company writes similar disclaimers, consumers have no way to enter into a more favorable agreement with another provider.
Consumers also cannot count on the government to punish negligence. Take, for example, Equifax, the credit reporting firm that recently exposed the personal information of 143 million Americans to cybercriminals. Although Equifax could have done far more to secure its data, its inaction was legal: no law explicitly criminalizes gross negligence in data security, and there is no precedent for prosecuting such malpractice under existing statutes. Bank executives and directors can be removed for unsafe and unsound practices, but no similar rules apply to Silicon Valley. In this way, criminal justice is eons behind current technology.
The only possible avenue to justice, then, is a class-action lawsuit. Unfortunately, this too is often impossible, for several reasons. First, tort law, the usual vehicle for negligence claims, does not cover the badly written software that enables cybercrime, according to law professors Michael L. Rustad and Thomas H. Koenig. Nor do criminal negligence charges offer a backstop: as New York Times reporter Peter Henning explains, “negligence is used regularly only in federal criminal prosecutions for food and drug safety problems and environmental contaminations.” Applying the “responsible corporate officer doctrine,” under which corporate officials may be punished for offenses even without proof of personal bad intent or wrongdoing, is similarly difficult, since its application has likewise been limited largely to food and drug safety cases. And even if the doctrine’s reach were broader, it would still lack the force to adequately incentivize security upgrades.
The solution lies in legal reform. If courts treat software failure as the responsibility of software companies, firms will change their behavior to avoid legal repercussions. By extending tort law, contract law, and the responsible corporate officer doctrine to data breaches, and by requiring that software be sold as a product rather than a license, we can incentivize responsible software development rather than letting companies continue to release under-tested, bug-ridden products without regard for the consequences of failure.
Even without financial repercussions for software failure, simply requiring companies to disclose the technical details of any attack would significantly encourage better security and help other potential targets learn from each incident. Currently, fewer than 20 percent of software failures are disclosed and publicly analyzed. If disclosure were mandated, users could learn which companies suffer more frequent or more serious breaches and act accordingly, and companies would face additional pressure to guard against vulnerabilities.
The legal system took a similarly hands-off approach to car safety in the 1960s, declining to apply tort law even when accident victims claimed their injuries were caused by negligent manufacturing. But over the following 30 years, growing public pressure and rising accident rates pushed state and federal courts toward holding manufacturers liable for defective and dangerous cars and for failing to make a reasonable effort to prevent serious accidents. Expanding liability has worked before to push an under-regulated industry to improve the quality of its products. The same should be done for software today.