> Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value.
>
> Nick Bostrom, “Existential risk prevention as global priority”

### New series: Academic critiques of longtermism

I am an academic philosopher. I think that it is important to bring **academic research** to bear on the allocation of philanthropic resources.

This is the first in a series of posts introducing my academic papers on effective altruism, which together go some way towards explaining why I am skeptical of longtermism.

This is, for the most part, a cross-post from an earlier post I made on the EA forum. If you’re very keen, you can read the full paper here or watch me talk about it here.

### 1. Introduction

Many effective altruists (EAs) endorse two claims about existential risk. First, existential risk is currently high:

**(Existential Risk Pessimism)** Per-century existential risk is very high.

For example, Toby Ord puts the risk of existential catastrophe by 2100 at 1/6, and participants at the Oxford Global Catastrophic Risk Conference in 2008 estimated a median 19% chance of human extinction by 2100. Let’s ballpark Pessimism using a 20% estimate of per-century risk.

Second, many EAs think that it is very important to mitigate existential risk:

**(Astronomical Value Thesis)** Efforts to mitigate existential risk have astronomically high expected value.

You might think that Existential Risk Pessimism supports the Astronomical Value Thesis. After all, it is usually more important to mitigate large risks than to mitigate small risks.

In this series, I extend a series of models due to Toby Ord and Tom Adamczewski to do five things:

- I show that across a range of assumptions, **Existential Risk Pessimism tends to hamper**, not support, the Astronomical Value Thesis.
- I argue that the most plausible way to combine Existential Risk Pessimism with the Astronomical Value Thesis is through the **Time of Perils Hypothesis**.
- I clarify two features that the Time of Perils Hypothesis must have if it is going to vindicate the Astronomical Value Thesis.
- I suggest that **the Time of Perils Hypothesis is probably false**.
- I draw **implications for the importance of mitigating existential risk**.

Proofs are in the appendix of the full paper if that’s your jam.

### 2. The simple model

Let’s start with a **Simple Model** of existential risk mitigation due to Toby Ord and Tom Adamczewski. On this model, it will turn out that Existential Risk Pessimism has no bearing on the Astronomical Value Thesis, and also that the Astronomical Value Thesis is false.

Many of the assumptions made by this model are patently untrue. The question I ask in the rest of the post will be which assumptions a Pessimist should challenge in order to bear out the Astronomical Value Thesis. **We will see that recovering the Astronomical Value Thesis by changing model assumptions is harder than it appears.**

The Simple Model makes three assumptions.

- **Constant value:** Each century of human existence has some constant value *v*.
- **Constant risk:** Humans face a constant level of per-century existential risk *r*.
- **All risk is extinction risk:** All existential risks are risks of human extinction, so that no value will be realized after an existential catastrophe.

Under these assumptions, we can evaluate the expected value of the current world *W*, incorporating possible future continuations, as follows:

(Simple Model) V[W] = v(1−r) + v(1−r)² + v(1−r)³ + … = v(1−r)/r
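In other words, century *n* is realized only with probability (1−r)ⁿ, so the expected value of the world is a geometric series summing to v(1−r)/r. A quick numerical sketch (my own illustration; the function name is not from the paper):

```python
# Simple Model: century n is reached with probability (1 - r)**n,
# and each realized century contributes constant value v.
def expected_value(v, r, horizon=10_000):
    """Truncated sum of v * (1 - r)**n for n = 1, 2, ..., horizon."""
    return sum(v * (1 - r) ** n for n in range(1, horizon + 1))

v, r = 1.0, 0.2  # ballpark Pessimist estimate: 20% per-century risk
approx = expected_value(v, r)
closed_form = v * (1 - r) / r  # geometric series limit
print(approx, closed_form)  # both ≈ 4.0
```

With the Pessimist's ballpark r = 0.2, the entire future is worth only about four centuries' value in expectation.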

But we’re not interested in the value of the world. We care about the value of existential risk mitigation. Suppose you can act to reduce existential risk in your own century. More concretely, you can take some action X which will reduce risk this century by some fraction *f*, from *r* to *(1−f)r*. How good is this action?

On the Simple Model, **it turns out that V[X] = fv.** This result is surprising for two reasons.

- **Pessimism is irrelevant:** The value of existential risk mitigation is entirely independent of the starting level of risk *r*.
- **Astronomical value begone!:** The value of existential risk mitigation is capped at the value *v* of the current century. Nothing to sneeze at, but hardly astronomical.
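Both observations can be checked numerically. Here is a minimal sketch (my own code, not from the paper), comparing the world's expected value with and without an intervention X that cuts this century's risk from r to (1−f)r while later centuries keep risk r:

```python
def ev_world(v, r):
    """Closed-form expected value under the Simple Model: v(1-r)/r."""
    return v * (1 - r) / r

def ev_with_mitigation(v, r, f):
    """Risk this century is reduced to (1-f)r; risk reverts to r afterwards."""
    survive_first = 1 - (1 - f) * r
    # value of century 1, plus the expected value of all later centuries
    return survive_first * (v + ev_world(v, r))

def value_of_mitigation(v, r, f):
    return ev_with_mitigation(v, r, f) - ev_world(v, r)

v, f = 1.0, 0.1
for r in (0.01, 0.2, 0.5):
    print(r, value_of_mitigation(v, r, f))  # ≈ f*v = 0.1 for every r
```

The difference comes out to fv at every starting level of risk, which is just the two bullet points above: the value of mitigation never depends on r, and it never exceeds v.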

That’s not good for the Pessimist. She’d like to do better by challenging assumptions of the model. Which assumptions would she have to challenge in order to square Pessimism with the Astronomical Value Thesis?

Stay tuned for an answer in Part 2 and Part 3. In the meantime, let me know in the comments section what you think of Existential Risk Pessimism, the Astronomical Value Thesis, and the Simple Model.
