
A No-Brainer

Original illustration by Yushan Jiang '25, an Illustration major at RISD

Within four years of State Grid Zhejiang Electric Power requiring its workers to wear electroencephalogram (EEG) headsets that collect neurodata, the Chinese utility’s profits jumped by $315 million. During this time, employees faced pressure to work more efficiently or risk being sent home for the day. “When the system issues a warning, the manager asks the worker to take a day off or move to a less critical post,” Jin Jia, a professor of brain science at Ningbo University, told the South China Morning Post. “Some jobs require high concentration. There is no room for a mistake.”

Workplace surveillance has been on the rise over the past decade, especially with the push to monitor remote employees during the Covid-19 pandemic. Going beyond mouse movements, keystrokes, and other proxies for productivity, nascent advances in neurotechnology offer employers the ability to track brain activity itself. Some have already begun taking advantage of this opportunity: SmartCap Technologies’ fatigue-monitoring headsets, for example, are now required equipment for many professional drivers. With the global market for neurotechnology growing at a compound annual rate of 12 percent, the integration of this technology into everyday life, including the workplace, shows no sign of slowing down. These advancements offer vast opportunities but also carry significant costs. To balance the competing interests, we need stronger regulations that protect human dignity and privacy in the digital age.

Current domestic and international laws have not caught up with the fast pace of technological developments that threaten to permeate every aspect of life. The past year saw collective, frantic realizations across the globe, most notably after the launch of ChatGPT, that futuristic technology is here, that competition is accelerating its production, and that it is already changing modern life. The subsequent realization that governments worldwide lack both the legal frameworks and the enforcement capacity to protect our neurological rights has made it more apparent than ever that a consequential (though not irrevocable) separation exists between policymakers and technological experts.

A divergence of science and government can be expected, and even welcomed: governments should not have decisive control over scientific, medical, and technological research, despite their integral role in funding it. However, there needs to be better communication between the private and public sectors. Recent antitrust cases in Germany and the United States are attempting to affirm governments’ right to regulate Big Tech, and rightfully so, as regulatory power is necessary to reduce the harm that some technologies and developer monopolies can cause. Conversely, technological expertise must also be valued and welcomed in policymaking.

Technology experts can provide insight into the innumerable benefits neurotechnology offers society. In the health sector, cutting-edge products have the potential to help scientists better understand and treat neurodegenerative conditions and cognitive disabilities. Elon Musk’s brain implant company, Neuralink, recently received FDA clearance to begin human clinical trials for patients with paralysis. Other technologies can enable remote physiotherapy, aid those with epilepsy, and give people unable to speak the ability to translate their thoughts into words on a screen. Given neurotechnology’s promise for people with disabilities, it is essential that future conversations between decision makers and technical experts welcome disability rights advocates and center disabled populations.

However, the widespread use of neurotechnology in the workforce, from monitoring and analyzing brain patterns to “hands-free” communication devices, also raises two primary ethical concerns: the capacity for discrimination and the collection of neural data.

The Information Commissioner’s Office (ICO), which reports to the UK Parliament, recently published a report on neurotechnology warning of “neurodiscrimination” arising both from biased technological systems and from overt unfair treatment by employers. Employers factoring neurodata into hiring can discriminate unintentionally by relying on systems “trained on neuro-normative patterns” and therefore biased against neurodivergent brain types. The report also raises concerns over the continuous collection of neurodata from consenting employees (which could include estimates of emotional states, productivity, and the effectiveness of educational tools), all of which may increase the risk of employers discriminating against workers with mental health conditions. Furthermore, because non-medical applications of neurodata are often not classified as “special category data,” which is regulated in the United Kingdom, neurodata currently lacks protections against workplace discrimination, both in the United Kingdom and across the globe.

Then there is the question of who will have access to and ownership over brain data, both from employees who are encouraged or required to use neuro-tracking devices and from individual consumers. Whatever the company involved, be it Meta, Neuralink, Blackrock Neurotech, or Precision Neuroscience, without privacy protections or regulations in place, the legal and ethical risks of corporations commodifying brain data must be taken seriously.

Whether people are granting their employers access to their brain data or trading it for “hands-free” communication tools, bioethics expert Nita Farahany worries that they will make hasty decisions. In an interview with The Wall Street Journal, Farahany said, “I do worry people are unwittingly giving up [information] without realizing the full implications. That is true for privacy in general, but we ought to have a special place we think about when it comes to the brain. It is the last space where we truly have privacy.” Moreover, the ICO report cautions that speech decoding technology has the “potential to misrepresent what a person has said or to reveal thoughts that might otherwise have been private or meant to be edited before sharing. In both cases, highly sensitive information could be revealed with no way to recover it.”

Farahany considers solutions in her book, The Battle for Your Brain, writing, “When it comes to privacy protections in general, trying to restrict the flow of information is both impossible and potentially limits the insights we can gain… Instead, we should focus on securing rights and remedies against the misuse of information.” Establishing rights and remedies against the misuse of new technologies, she argues, should take precedence over prohibiting them. For state governments, expanding definitions is a good start: Farahany recommends revisiting current definitions of “privacy” and adding clauses on “mental privacy.” Additionally, she calls for definitions of “freedom of thought” to establish freedom from both “interference with our mental privacy” and punishment based on our thoughts.

On the international level, Farahany also strongly advocates for establishing “cognitive liberty” as a human right recognized in the Universal Declaration of Human Rights. However, Brown University professor and international law and human rights expert Nina Tannenwald cautions against tampering with the Universal Declaration. Reopening the document, she tells me, could create an opening for countries that “might want to weaken its provisions rather than strengthen them.” Instead, she argues that we should expand our understanding of the rights already articulated in the Declaration to include “cognitive liberty,” which could then be addressed in an international treaty. To begin, she proposes an international code of conduct or set of ethical standards that could “progress to a binding treaty negotiated at the UN Human Rights Council.” While Farahany stands at the center of many neurorights discussions, perhaps Tannenwald is correct that establishing international norms while leaving the Declaration untouched is the best way to get states to consider “cognitive liberty” a human right. Regardless, both agree that immediate action is necessary.

As Farahany writes, “Neurotechnology has unprecedented power to either empower us or oppress us. The choice is ours.” In practice, however, the choice belongs to governments and international bodies, which must urgently establish the rights humans are entitled to in an age of digital surveillance and rapidly evolving technology. The UN Human Rights Council should move quickly toward negotiating a binding treaty, and in the meantime, national and state governments must begin expanding their definitions of mental privacy. Above all, policymakers across the world need to solicit advice from technological experts to inform their decisions.
