
What is Longtermism?

The core principle of effective altruism, explained.

You’ve probably heard of effective altruism – the philosophical golden child of Silicon Valley, championed by the likes of Elon Musk and Facebook cofounder Dustin Moskovitz. But what you might not have heard so much about is its guiding principle: longtermism.

Very basically, longtermism is the idea of putting the wellbeing of future people before anything else. It’s a bold principle, and one that’s far from reaching consensus. But with over $400 million flowing into effective altruism every year, it’s worth paying attention to.

Effective Altruism: A Crash Course

The term “effective altruism”, or “EA”, came from a group of UK-based philosophers who founded The Centre For Effective Altruism back in 2011. The Centre is now one of ten companies federated under the Effective Ventures Group, alongside other EA-affiliated organizations like Open Philanthropy. One of the major feathers in their cap is a collective $1bn donated to fighting malaria, saving an estimated 150,000 lives.

The idea of effective altruism is to use “evidence and reason to figure out how to benefit others as much as possible” – aka maximizing the ROI of charity. In practice, this typically takes two forms: giving to organizations that fight EA-approved, long-term issues, and pursuing jobs that maximize contribution to human progress, often via services like 80,000 Hours. The causes championed by effective altruism have changed over the years, but typically include global health, animal welfare, and existential risk.

Longtermism: The Good, The Bad and The Unknown

That last issue, existential risk, is where longtermism really comes into play.

Existential risks include anything that threatens the survival of our species, whether it’s likely to occur 10 years or 10 millennia from now. Commonly identified existential risks include nuclear war, another pandemic, and the rise of ‘bad’ AI.

"It's 100 percent certain we'll be hit by a devastating asteroid,
but we're not 100 percent sure when."
The B612 Foundation, 2018

Preventing existential risk covers everything from NASA’s asteroid-deflection research to the safeguards we put in place to avoid accidentally launching a nuke. We can also make positive trajectory changes, like combating climate change, that improve the prospects of successive generations.

But with so much wrong with the world already, why focus on hypothetical futures?

Effective altruism is all about numbers, and the simple truth is that far more people will live in the future than are alive today. As researcher Cody Fenwick puts it, “If we survive to the end of Earth’s habitable period, all those who have existed so far will have been the first raindrops in a hurricane.” So if you want to positively impact the greatest number of people, the future is the logical place to start.
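
To see why the numbers dominate, here’s a rough back-of-the-envelope sketch in Python. Every figure in it (the cost per life saved, the size of the future population, the achievable risk reduction) is an illustrative assumption, not a claim from the article:

  # Back-of-the-envelope comparison of present-focused vs. future-focused giving.
  # Every number below is an illustrative assumption, not a real estimate.

  FUTURE_PEOPLE = 1e14   # assumed number of people who could ever live
  BUDGET = 1e9           # a hypothetical $1 billion to allocate

  # Option A: a present-day intervention that saves one life per $5,000 spent
  # (a commonly cited ballpark for top global-health charities).
  lives_saved_now = BUDGET / 5_000

  # Option B: an intervention that shaves an assumed one-in-a-million off the
  # chance of human extinction, benefiting everyone who would otherwise exist.
  risk_reduction = 1e-6
  expected_future_lives = risk_reduction * FUTURE_PEOPLE

  print(f"Present-day lives saved:       {lives_saved_now:,.0f}")
  print(f"Expected future lives enabled: {expected_future_lives:,.0f}")

On these entirely made-up numbers, the future-focused bet wins by orders of magnitude (100 million expected lives versus 200,000), which is exactly the kind of expected-value arithmetic that longtermists lean on.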

Another answer comes from philosopher Derek Parfit:


“Suppose that I leave some broken glass in the undergrowth of a wood. A hundred years later this glass wounds a child. My act harms this child. If I had safely buried the glass, this child would have walked through the wood unharmed.
Does it make a moral difference that the child whom I harm does not now exist?”

As a species, we already struggle to empathize with people across geographic distances or cultural divides. But time is an even greater division, and the idea that these people are ‘hypothetical’, despite the certainty of their existence, gives us a convenient excuse for putting humanity’s present needs first. Effective altruism stresses the importance of overcoming this prejudice.

From a purely pragmatic point of view, a third reason for investing in future generations is that more people = more progress – and the rate of progress has never been more rapid. Think about it this way:

  • The first self-driving car came only 100 years after the very first car.
  • OpenAI was founded just 70 years after the first general-purpose computer.
  • Less than 60 years passed between the first powered airplane flight and the first human in space.

However, some effective altruists argue that ‘more people’ is no longer the best or most efficient way of making progress. Rather than leading to more new ideas, a growing population (and a surplus of resources) has begun to lead to behaviors like wealth hoarding and a plateauing rate of progress.

But AI might provide a solution.

Some effective altruists propose that we might soon have ‘digital people’, a type of AGI that produces a copy of a real person to take over some of their responsibilities, help them make decisions, or serve as a research subject. Digital people would only require the servers to run their programs – no physical bodies, no hunger or pain, no poverty or disease.

Holden Karnofsky, co-founder of the EA organizations GiveWell and Open Philanthropy, gives this example: imagine you wanted to try meditation but weren’t sure it was worth it. Hypothetically, you could create a digital copy of yourself with an identical life and have that ‘person’ try meditation for a few months, condensed into only a few hours of real time. You could then see the result and decide whether meditation is right for you.

This kind of experiment could even be scaled up to a sample size big enough to provide insights into the benefits of meditation for the general population, in an environment that is uniquely controllable and free from the biases of most clinical research.
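
The statistical appeal is easy to sketch. The toy simulation below uses invented numbers for the effect of meditation and for individual variation; it simply shows how a sample at ‘digital people’ scale would pin down an effect far more tightly than a typical trial:

  # Toy illustration: why a huge, controlled sample narrows uncertainty.
  # All numbers are invented; this is not a claim about meditation research.
  import random
  import statistics

  random.seed(0)

  TRUE_EFFECT = 0.3   # assumed true benefit of meditation, arbitrary units
  NOISE = 2.0         # assumed individual variation between participants

  def run_trial(n_participants):
      """Simulate one controlled trial; return (estimated effect, standard error)."""
      outcomes = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in range(n_participants)]
      mean = statistics.fmean(outcomes)
      std_err = statistics.stdev(outcomes) / n_participants ** 0.5
      return mean, std_err

  for n in (100, 10_000, 1_000_000):   # typical study vs. 'digital people' scale
      effect, err = run_trial(n)
      print(f"n={n:>9,}: estimated effect {effect:+.3f} ± {err:.3f}")

With a million simulated participants the error bars all but vanish, which is the draw of the idea: experiment sizes and levels of control that flesh-and-blood research simply can’t reach.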

Where previously it was simply more people = more progress, we may now need digital people to lead the way if we want to maintain our current rate of growth.

Effective AI

‘Digital people’ are one of many reasons why effective altruists are investing heavily in AI, particularly in what’s known as ‘AI alignment’, or the responsible development of AI that aligns with human values.

EA has been leading the push for safe AI. Currently, the EA organization 80,000 Hours rates AI alignment as the most important global cause, ranking it highly on the metrics of importance, tractability and neglectedness (how uncrowded the field is). In 2019, EA organizations allocated $40 million to AI alignment research, up from $9 million in 2017.
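
That ranking comes out of the standard EA cause-prioritization framework, which scores a problem roughly as the product of how important, tractable, and neglected it is. Here’s a minimal sketch of that scoring logic; the causes and 0–10 scores are made up purely for illustration:

  # Minimal sketch of importance-tractability-neglectedness (ITN) scoring.
  # The causes and 0-10 scores below are invented for illustration only.
  from dataclasses import dataclass

  @dataclass
  class Cause:
      name: str
      importance: float     # how much is at stake if the problem were solved
      tractability: float   # how easy it is to make real progress
      neglectedness: float  # how few resources already go to it

      def score(self) -> float:
          # Multiplicative, so a huge but crowded problem can rank below
          # a smaller one that almost nobody is working on.
          return self.importance * self.tractability * self.neglectedness

  causes = [
      Cause("AI alignment",   importance=9, tractability=4, neglectedness=8),
      Cause("Global health",  importance=8, tractability=8, neglectedness=3),
      Cause("Animal welfare", importance=6, tractability=5, neglectedness=7),
  ]

  for cause in sorted(causes, key=Cause.score, reverse=True):
      print(f"{cause.name:<15} {cause.score():>5.0f}")

The real assessments published by 80,000 Hours are far more detailed, but the underlying logic of multiplying scale by tractability by neglectedness is the same.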

Effective altruists cite recent surveys in which AI experts give a 5% chance that AI leads to human extinction, and point out that we’re currently spending 1,000x more on accelerating AI progress than on mitigating its potential risks.

Doomers in Disguise?

Some of EA’s most vocal critics are its Silicon Valley neighbors. Sam Altman, for example, has called the philosophy an “incredibly flawed movement” that shows “very weird emergent behavior”. Following Altman’s brief ouster from OpenAI, VC Vinod Khosla even blamed EA, claiming “OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence.”

Of course, more critics have come out of the woodwork since the fall of Sam Bankman-Fried. Journalist Jeff John Roberts said that “Bankman-Fried and his cronies professed devotion to ‘EA,’ but all their high-minded words turned out to be flimflam to justify robbing people”, and Bankman-Fried’s close association with the movement has damaged its public image. However, EA’s founders have widely condemned SBF’s “ends justify the means” mentality.

Effective altruists may be more cautious about the future, but they’re not doomers. They believe in investing in the future, not just for ourselves but for hundreds of generations to come. Longtermism is first and foremost a commitment to humanity’s survival.

I can’t imagine a more worthy cause.
