# Existential risk pessimism and the time of perils (Part 2: Failed solutions)

> It is often supposed that existential risk pessimism bolsters the case for the astronomical value thesis … I use a series of models to draw a counterintuitive conclusion. Across a range of assumptions, existential risk pessimism not only fails to increase the value of existential risk mitigation, but in fact substantially decreases it, so much so that existential risk pessimism threatens to falsify the astronomical value thesis.
>
> — David Thorstad, “Existential risk pessimism and the time of perils”

### 1. Recap

This is the second installment in a series of posts based on my paper “Existential risk pessimism and the time of perils”.

In Part 1, we looked at the relationship between two claims:

(Existential Risk Pessimism) Per-century existential risk is very high.

(Astronomical Value Thesis) Efforts to mitigate existential risk have astronomically high expected value.

It is very natural to think that Existential Risk Pessimism supports the Astronomical Value Thesis. If EAs can show that existential risk is very high, then it must be very important to reduce existential risk, right?

Not so. Part 1 introduced a Simple Model of the value of existential risk mitigation. We saw that on the Simple Model:

1. Pessimism is irrelevant: The value of existential risk mitigation is entirely independent of the starting level of risk r.
2. Astronomical value begone!: The value of existential risk mitigation is capped at the value v of the current century. Nothing to sneeze at, but hardly astronomical.

Can the Pessimist do better by challenging model assumptions? Here are some challenges that won’t work.

### 2. Absolute versus relative risk reduction

In working through the Simple Model, we considered the value of reducing existential risk by some fraction f of its original amount. But this might seem like comparing apples to oranges. Reducing existential risk from 20% to 10% may be harder than reducing existential risk from 2% to 1%, even though both involve reducing existential risk to half of its original amount. Wouldn’t it be more realistic to compare the value of reducing existential risk from 20% to 19% with the value of reducing risk from 2% to 1%?

More formally, we were concerned about relative reduction of existential risk from its original level r by the fraction f, to (1−f)r. Instead, the objection goes, we should have been concerned with the value of absolute risk reduction from r to r−f. Will this change help the Pessimist?

It will not! On the Simple Model, the value of absolute risk reduction is fv/r. Now we have:

1. Pessimism is harmful: The value of existential risk mitigation grows inversely with the starting level r of existential risk. If you are 100× as pessimistic as I am, you should be 100× less enthusiastic than I am about absolute risk reduction of any fixed magnitude.
2. Astronomical value is still gone: The value of existential risk mitigation remains capped at the value v of the current century.
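The contrast between relative and absolute reduction on the Simple Model can be sketched numerically. This is a minimal illustration, not code from the paper: it sets v = 1, assumes constant per-century risk r, and the function names are my own.

```python
# Simple Model: V[W] = v * sum_{i>=1} (1-r)^i = v(1-r)/r, with constant
# per-century value v and constant per-century existential risk r.
# Risk is reduced in the current century only.

def simple_model_value(r, v=1.0):
    """Expected value of the future on the Simple Model: v(1-r)/r."""
    return v * (1 - r) / r

def value_of_relative_reduction(r, f, v=1.0):
    """Value of cutting this century's risk from r to (1-f)r.

    Scaling every term by the improved first-century survival probability
    gives a gain of (f*r/(1-r)) * V[W], which simplifies to f*v:
    independent of r, and capped at v.
    """
    return simple_model_value(r, v) * f * r / (1 - r)

def value_of_absolute_reduction(r, f, v=1.0):
    """Value of cutting this century's risk from r to r-f.

    The gain is (f/(1-r)) * V[W], which simplifies to f*v/r:
    inversely proportional to the starting risk r.
    """
    new_survival = 1 - (r - f)
    return (new_survival / (1 - r) - 1) * simple_model_value(r, v)

# A 1-point absolute reduction is worth 10x more to the modest
# optimist (r = 2%) than to the pessimist (r = 20%):
print(value_of_absolute_reduction(0.2, 0.01))   # 0.05 = fv/r
print(value_of_absolute_reduction(0.02, 0.01))  # 0.5  = fv/r
```

The two closed forms fall straight out of the geometric series: relative reduction yields fv, so pessimism is irrelevant; absolute reduction yields fv/r, so pessimism is actively harmful.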

That didn’t help. What might help the Pessimist?

### 3. Value growth

The Simple Model assumed that each additional century of human existence has some constant value v. That’s bonkers. If we don’t mess things up, future centuries may be better than the current century. These centuries may support higher populations, with longer lifespans and higher levels of welfare. What happens if we modify the Simple Model to build in value growth?

Value growth will certainly boost the value of existential risk mitigation. But it turns out that value growth alone is not enough to square Existential Risk Pessimism with the Astronomical Value Thesis. We’ll also see that the more value growth we assume, the more antagonistic Pessimism becomes to the Astronomical Value Thesis.

Let v be the value of the present century. Ord and Adamczewski consider a model on which value grows linearly over time, so that the value of the Nth century from now will be N times as great as the value of the present century, if we live to reach it.

(Linear Growth) $V[W] = v \sum\limits_{i=1}^{\infty} i(1-r)^i = v(1-r)/r^2.$

On this model, the value of reducing existential risk by some (relative) fraction f is fv/r. Perhaps you think Toby and Tom weren’t generous enough. Fine! Let’s square it. Consider quadratic value growth, so that the Nth century from now is N^2 times as good as this one.

(Quadratic Growth) $V[W] = v \sum\limits_{i=1}^{\infty} i^2(1-r)^i = v(1-r)(2-r)/r^3.$

On this model, the value of reducing existential risk by f is fv(2−r)/r². How do these models behave?
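The two growth models and their closed forms can be checked numerically. Again a minimal sketch with v = 1 assumed and function names of my own; the truncated sums confirm the closed forms above.

```python
# Growth models: the Nth century is worth N*v (linear) or N^2*v (quadratic),
# with constant per-century risk r; risk is reduced by the relative
# fraction f in the current century only.

def linear_growth_value(r, v=1.0):
    """V[W] = v * sum_{i>=1} i(1-r)^i = v(1-r)/r^2."""
    return v * (1 - r) / r**2

def quadratic_growth_value(r, v=1.0):
    """V[W] = v * sum_{i>=1} i^2 (1-r)^i = v(1-r)(2-r)/r^3."""
    return v * (1 - r) * (2 - r) / r**3

def reduction_value_linear(r, f, v=1.0):
    """Value of a relative reduction f this century: fv/r."""
    return f * v / r

def reduction_value_quadratic(r, f, v=1.0):
    """Value of a relative reduction f this century: fv(2-r)/r^2."""
    return f * v * (2 - r) / r**2

# Sanity-check the closed forms against truncated sums (r = 10%):
approx_lin = sum(i * 0.9**i for i in range(1, 2000))
approx_quad = sum(i**2 * 0.9**i for i in range(1, 2000))
assert abs(approx_lin - linear_growth_value(0.1)) < 1e-6
assert abs(approx_quad - quadratic_growth_value(0.1)) < 1e-6
```

Note how the value of mitigation scales as 1/r under linear growth and roughly as 1/r² under quadratic growth, which is exactly why pessimism hurts more the more growth we assume.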

To see the problem, consider the value of a 10% (relative) risk reduction in this century (Table 1).

Table 1: Value of a 10% relative risk reduction (f = 0.1) across growth models and risk levels, in units of v, computed from the formulas above.

| Per-century risk r | Simple Model (fv) | Linear growth (fv/r) | Quadratic growth (fv(2−r)/r²) |
|---|---|---|---|
| 20% | 0.1v | 0.5v | 4.5v |
| 10% | 0.1v | 1v | 19v |
| 1% | 0.1v | 10v | 1,990v |
| 0.1% | 0.1v | 100v | 199,900v |

This table reveals two things:

1. Pessimism is (very) harmful: The value of existential risk mitigation decreases linearly (linear growth) or quadratically (quadratic growth) in the starting level of existential risk. This means that Existential Risk Pessimism emerges as a major antagonist to the Astronomical Value Thesis. If you are 100× as pessimistic as I am about existential risk, on the quadratic model you should be 10,000× less enthusiastic about existential risk reduction!
2. Astronomical Value Thesis is false given Pessimism: On some growth models (e.g., quadratic growth) we can get astronomically high values for existential risk reduction. But we have to be less pessimistic to do it. If you think per-century risk is at 20%, existential risk reduction doesn’t provide more than a few times the value of the present century.

Now it looks reasonably certain that Existential Risk Pessimism, far from supporting the Astronomical Value Thesis, could well scuttle it. Let’s consider one more change to the Simple Model, just to make sure we’ve correctly diagnosed the problem.

### 4. Global risk reduction

The Simple Model assumes that we can only affect existential risk in our own century. This may seem implausible. Our actions affect the future in many ways. Why couldn’t our actions reduce future risks as well?

Now it’s not implausible to assume our actions could have measurable effects on risk in nearby centuries. But this will not be enough to save the Pessimist. On the Simple Model, cutting risk over the next N centuries all the way to zero gives only N times the value of the present century. To salvage the Astronomical Value Thesis, we would need to imagine that our actions today can significantly alter levels of existential risk across very distant centuries. That is less plausible. Are we to imagine that institutions founded to combat risk today will stand or spawn descendants millions of years hence?

More surprisingly, even if we assume that actions today can significantly lower existential risk across all future centuries, this assumption may still not be enough to ground an astronomical value for existential risk mitigation. Consider an action X which reduces per-century risk by the fraction f in all centuries, from r to (1−f)r each century. On the Simple Model, the value of X is then $\frac{f}{1-f}\frac{v}{r}$. What does this imply?

First, the good news: in principle, the value of existential risk reduction is unbounded in the fraction f by which risk is reduced. No matter how small the value v of a century of life, and no matter how high the starting risk r, a 100% reduction of risk across all centuries carries infinite value, and more generally we can drive value as high as we like by reducing risk by a large enough fraction.

1. The Astronomical Value Thesis is still probably false: Even though the value of existential risk reduction is in principle unbounded, in practice it is unlikely to be astronomical. To illustrate, setting risk r to a pessimistic 20% values a 10% reduction in existential risk across all centuries at once at a modest 5v/9. Even a 90% reduction across all centuries at once is worth only 45 times as much as the present century.
2. Pessimism is still a problem: At the risk of beating a dead horse, the value of existential risk reduction varies inversely with r. If you’re 100× as pessimistic as I am, you should be 100× less hot on existential risk mitigation.
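The closed form for global risk reduction reproduces these figures directly. A minimal sketch with v = 1 assumed and a function name of my own:

```python
def global_reduction_value(r, f, v=1.0):
    """Value of cutting risk from r to (1-f)r in *every* century.

    On the Simple Model this is (f/(1-f)) * (v/r): unbounded as f -> 1,
    but still inversely proportional to the starting risk r.
    """
    return (f / (1 - f)) * (v / r)

# The pessimist's numbers from the text (r = 20%):
print(global_reduction_value(0.2, 0.1))  # 0.5555... = 5v/9
print(global_reduction_value(0.2, 0.9))  # 45.0, i.e. 45v
```

Even with risk cut in every future century, the pessimist's high r keeps the payoff modest; only as f approaches 1 does the value blow up, which is why the thesis remains "probably false" in practice.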

Now it’s starting to look like Existential Risk Pessimism is the problem. Is there a way to tone down our pessimism enough to make the Astronomical Value Thesis true? Why yes! We should be Pessimists about the near future, and optimists about the long-term future. We’ll see how to do this in Part 3.

In the meantime, let me know what you think of Absolute vs. Relative Risk Reduction; Value Growth; Global Risk Reduction; and other strategies. Could we do something to shore up these strategies? Is there another strategy that I haven’t discussed? Comments are open.
