Governing for the Future, Paying in the Present

With the world facing increasingly imminent catastrophic threats, from climate change and unprecedented pandemics to the rapid development of AI, a growing consensus holds that humanity may be approaching its final days. As the planet warms rapidly and AI grows unchecked, governance frameworks are beginning to emphasize the mitigation of “worst-case scenarios.” This approach follows the framework of effective altruism, which encourages individuals and institutions to direct resources toward the causes that produce the greatest measurable good, prioritizing efficiency, impact, and evidence-based decision-making. “Longtermism,” a branch of effective altruism, places particular emphasis on crisis mitigation and philanthropy for future generations; it is rooted in the belief that future generations matter just as much as our own, and, further, that it is our present duty to ensure a sustainable future for them. Given the magnitude of the crises humanity could face, this trend toward future risk reduction in policy is necessary, but relying on elite-centric definitions of an “ideal” future can lead policy to bypass democratic processes and grossly overlook the needs of the present.

After COVID-19, governments and policy institutions began directing attention toward long-term biosecurity investments designed to prevent future high-impact outbreaks. This shift reflects the idealism of effective altruists, who envision a better future sustained by earlier generations’ strategic allocation of wealth and resources. Longtermists radically extend this philosophy, asserting that we owe the same moral obligation to the well-being of future generations that we owe to people alive today. What started as a small movement at Oxford University has grown into a philosophy followed and preached by many influential figures, including Elon Musk and Sam Bankman-Fried.

Because effective altruism has found popularity within the tech industry, it has played a pivotal role in shaping artificial intelligence policy. Since around 2014, the movement has advocated for AI safety measures out of fear of scenarios in which advanced systems become too difficult for humans to control, are mismanaged by governments or bad actors, or destabilize existing economic and political systems. Some theorists warn of still more extreme scenarios that would threaten human survival altogether. Elon Musk, Vitalik Buterin, Peter Thiel, Dustin Moskovitz, and Sam Bankman-Fried are just a few of the many billionaires who have donated to this effort. Highly valued companies such as OpenAI, xAI, and Anthropic all have strong ties to the effective altruism and longtermist movements, raising the question of why their founders are so invested in ensuring that their technology is safely regulated. The answer lies in the philosophy of the movement itself. Effective altruists can frame their handling of AI as a success because they managed an emergent, era-defining technology proactively rather than reactively. This, in their view, lends credence to their ideology: preparing for potential future catastrophes mitigates outsized, often irreparable damage.

On the surface, this approach appears politically neutral; it is hard to object to preventing catastrophic harm. However, critics of effective altruism argue that models like these, which emphasize quantifiable impact, can have political and institutional blind spots. In development policy, for example, anti-poverty interventions such as mosquito bed net distribution and deworming programs are selected for their strong performance under the randomized controlled trials championed by effective altruists. Organizations like GiveWell and Giving What We Can rely heavily on these studies as the “best available evidence” for recommending top charities, choosing interventions deemed highly cost-effective at improving health outcomes and saving the greatest number of lives per dollar spent. Yet these methods often fail to account for wider systemic effects, including how private interventions may alter state capacity or accountability.

In practice, we can already see what these blind spots look like. When longtermism is wielded by elites with access to power and money, it can quickly degrade current living conditions. Elon Musk’s rocket company SpaceX has sought to establish a city on Mars, an effort he has described as “life insurance for life collectively,” citing the Sun’s eventual expansion, which will one day render Earth uninhabitable. Yet SpaceX pursues this brighter future at the documented expense of present-day worker safety, environmental ecosystems, and wildlife, all of which creates problems for today’s society. This thinking aligns with the longtermist belief that protecting humanity’s long-term survival against even the most distant risks, such as planetary collapse, should guide our present-day investment and innovation decisions.

SpaceX is a direct example of how longtermism intensifies effective altruism’s core practice of prioritization by extending it across time, allowing hypothetical future harms to outweigh the status quo’s social and environmental problems. If such an expanse of resources is devoted to safeguarding humanity’s distant future, what remains to improve the conditions of life in the present? The problem with longtermism, therefore, is not only that its framework can redirect political attention toward distant risks while immediate priorities languish, but also that the most well-resourced people, often the furthest removed from the problems that plague society, are leading these discussions of prioritization.

Governments have always had to balance present and future needs, but that balance becomes difficult to strike when policy priorities are structured in ways that shift decision-making toward the well-resourced and away from democratic responsiveness. The RAND Corporation, a think tank and grantmaking organization, received $15 million in 2023 from Open Philanthropy, one of the largest funders of the effective altruist movement. RAND is led by CEO Jason Matheny and senior information scientist Jeff Alstott, both well-known effective altruists with government connections: the two previously worked together at the White House Office of Science and Technology Policy and the National Security Council during the Biden administration. RAND played a critical role in advocating for the executive order on AI governance that Joe Biden signed in 2023 and helped shape its provisions, including its reporting requirements. Similarly, Open Philanthropy has funded the salaries of more than a dozen AI fellows in congressional offices, federal agencies, and influential think tanks. These lobbying efforts make clear that the tomorrow effective altruists seek to protect is defined not through democratic processes but by elite interests. Longtermism’s reach into governance is neither vast nor unique, but its place in policymaking is nonetheless alarming: it allows a small elite group to make high-stakes, speculative decisions that override the pressing concerns of the public.

Policies designed to maximize long-term safety often deprioritize present-day needs. The issue, then, is not whether preventing catastrophic futures is worthwhile; it is how those priorities are determined and whose needs are displaced in the process. When distant risks override present social and environmental needs, longtermism undermines the very communities it claims to protect.