
High Tech, High Minded


When one thinks of Silicon Valley’s politics, tech billionaire Peter Thiel’s high-profile backing of Donald Trump, Elon Musk’s partisan tweets, or the sheer amount of tech money pouring into political campaigns might come to mind. What might not, however, are the unique beliefs held by some of the most powerful technologists, including Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Sometimes crudely grouped as TESCREAL, these ideologies often center on technological and quantitative approaches to societal advancement. Nuanced and sometimes conflicting, TESCREAL views are easily misunderstood or misrepresented. But while they may seem idiosyncratic, these ideologies have emerged as potentially powerful influences on public policy. In particular, Effective Altruism (EA) and Longtermism can provide crucial insights into priorities like AI safety, which has become increasingly relevant in an unprecedented age of technological advancement.

Before the EA movement was championed by the Silicon Valley powerful, it found its roots in academic circles. Coined in 2011 by Oxford philosophers Toby Ord and Will MacAskill, EA was inspired by famous Princeton ethicist Peter Singer. It started out with the simple utilitarian goal of “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.” Effective Altruists (EAs) propose a formula for doing the most good: combine Singer’s philosophy that a moral person has a strict duty to donate significant amounts to charity with quantitative evidence that certain charities are far more effective than others at saving lives with limited resources. Since its inception, however, EA has been constantly evolving. EAs have branched out from their core principle of cost-effective philanthropy, becoming key sponsors of research in AI alignment, biosecurity, animal welfare, and nuclear risk.

The expansion of EA is in part a result of its fusion with other TESCREAL ideologies, especially a controversial concept called Longtermism. The basic premise of Longtermism is that one has a moral responsibility to protect future lives and that human extinction must be avoided at all costs. Vocal critics of Longtermism argue that it is dangerous to place outsized weight on existential risk given past false alarms. For example, Paul Ehrlich’s claims about the existential threat of overpopulation were used to justify horrific measures like mass forced sterilization in developing countries. However, many EAs do not argue that people should sacrifice basic moral principles to prevent human extinction. Instead, EAs often hold a qualified view of Longtermism: that people should take the risk of human extinction more seriously. Like environmentalists who criticize past generations for being short-sighted or selfish toward future generations, EAs and Longtermists argue that, without proper guardrails, the current trajectory of civilization may lead to catastrophic suffering or extinction. Longtermism pushes to expand the existing climate change framework to encompass long-term risks from other fields, such as AI and biosecurity.

EA has grown rapidly into a real force within and outside the Silicon Valley bubble since it was first thrust into the public eye by disgraced crypto billionaire Sam Bankman-Fried, the founder and CEO of cryptocurrency exchange FTX. Despite suffering significant reputational damage from FTX’s collapse, players behind the EA movement have continued to shape public policy. Over the past year, Open Philanthropy—funded primarily by Dustin Moskovitz, co-founder of Facebook and Asana, and his wife Cari Tuna—bankrolled $650 million in EA causes across various AI labs, think tanks, universities, and nonprofits. Georgetown University’s new Center for Security and Emerging Technology and the Horizon Institute for Public Service, both funded by Open Philanthropy, support thinkers of varying political affiliations who incorporate Longtermism into their policy advising. EA-aligned think tanks like the Future of Life Institute have published statements like the famous “AI Pause” open letter, in which signatories such as Elon Musk and Apple co-founder Steve Wozniak advocated for a six-month pause on training frontier models. EA thinkers also populate research roles, especially on AI safety teams, at top AI labs like OpenAI, Anthropic, and DeepMind. Other significant AI safety organizations like the Center for AI Safety (which hosted the “Statement on AI Risk”), the Centre for the Governance of AI, the Institute for AI Policy and Strategy, and the Machine Intelligence Research Institute harbor EA-aligned individuals and are significantly funded by Open Philanthropy.

Even though the prospect of vast tech money flowing into politics is bound to raise concerns, we should not simply dismiss EA-funded efforts as attempts to maliciously secure corporate interests. Open Philanthropy itself makes grants, not investments, and therefore has no financial upside. Moreover, EAs often argue for a slowdown or significant regulation of AI development because of its associated risks, even though such regulation may compromise profits. For example, when Open Philanthropy made a $30 million grant to OpenAI in 2017, it did so with the goal of increasing responsible oversight and funding safety and alignment research. In a conversation with BPR, Holly Elmore, a grassroots AI policy advocate who previously led Harvard’s student EA group as a PhD student, argued for a full AI pause, claiming that it would be “a chance to figure out everything.” Other policy proposals being pushed by EA affiliates range from an FDA-style regulatory body to windfall clauses, which would require AI firms to donate significant portions of their earnings. Strongly arguing that AI technologies might destroy the world and that AI labs should give away their money hardly looks like a coordinated effort to hoard profits.

AI safety threats are similar to negative environmental externalities: costs that all of us will bear if irresponsible AI companies short-sightedly pursue development. The field of AI safety research is young but broad, with projects spanning the many ways AI could lead to ruin. Examples include power-seeking AI, AI-enabled totalitarian regimes, and AI-enhanced great power conflict. The resulting conclusions and proposals are predictably wide-ranging, and the field lacks a clear, united front on what exact policies should be passed. Open Philanthropy has not narrowed this field; instead, it funds a highly diverse set of labs and people with varying political affiliations. As the organization most financially capable of acting as a stopgap while AI safety gathers public attention, Open Philanthropy is making a political play prompted by the need to get Washington thinking, not by a grab for power.

Moreover, EAs are not the only people who think AI safety risks require a serious political response. Forty-two percent of top CEOs surveyed at the 2023 Yale CEO summit believe that AI-caused extinction in the next 10 years is possible. Repeated warnings about AI are signed by some of the most respected experts in the field. And 76 percent of the general public believes that AI risks could eventually pose an extinction-level threat to humanity. On top of existential risk, other AI issues like algorithmic bias, misinformation, and cyberattacks are credible threats. 

In the policy world, the Overton window has also started moving. The EU has already passed its first “AI Act” to provide a legal framework to deal with high-risk societal impacts of AI, such as social scoring or real-time biometric surveillance, and plans to lead the way on AI existential risk mitigation. At the landmark Global AI Safety Summit, Vice President Kamala Harris acknowledged the grave risks associated with AI. 

Of course, the public and government should be wary of an echo chamber of Longtermism at the expense of addressing current systemic issues. However, with a rapidly changing and potentially dangerous technology, Washington needs to work fast and be willing to explore the perspectives of TESCREALists. If one understands EA, Longtermism, and the other ideologies that have coalesced around them, one might also understand the real risks of emerging technologies. Policymakers should be willing to lend an ear to the intellectual communities most involved in studying the dangerous long-term consequences of era-defining technologies.
