# Existential risk pessimism and the time of perils (Part 7: An application)

> In the decades to come, advanced bioweapons could threaten human existence. Although the probability of human extinction from bioweapons may be low, the expected value of reducing the risk could still be large, since such risks jeopardize the existence of all future generations. We provide an overview of biotechnological extinction risk, make some rough initial estimates for how severe the risks might be, and compare the cost-effectiveness of reducing these extinction-level risks with existing biosecurity work. We find that reducing human extinction risk can be more cost-effective than reducing smaller-scale risks, even when using conservative estimates. This suggests that the risks are not low enough to ignore and that more ought to be done to prevent the worst-case scenarios.
>
> Millett and Snyder-Beattie, “Existential risk and cost-effective biosecurity”

### Preface

I feel conflicted about posting today. Yesterday, TIME magazine released an article alleging “a toxic culture of sexual harassment and abuse” within effective altruism on the basis of testimony by seven women. Although the reaction to that article has been better than the reaction to similar revelations in the past, there is considerable room for improvement. As I write these words, I am fighting actively to make the EA Forum delete attempts to reveal the identities of several of those women, and I am not sure whether I will win. (Many names were visible for at least seven hours yesterday, and some have still not been removed).

This isn’t the first such incident in recent memory. Last month, an email and subsequent apology by Nick Bostrom ignited a firestorm of discussions around inclusion and belonging within effective altruism. On this blog, that led to a three-part series on inclusion and belonging. This series will continue, and I suspect that the next installment in the series will discuss issues related to the TIME magazine article.

This isn’t a news blog. The purpose of this blog is to use academic research to drive positive change within and around the effective altruism movement. Academic research moves slowly and tries to a large extent to avoid being driven reactively to and fro by news cycles and current events. In that spirit, I want to avoid the temptation to blog immediately about each new crisis that arises.

I have decided to go ahead today with my scheduled post about biosecurity. I hope that this post can be understood in the context of an evolving discussion, which seeks neither to downplay the horror of recent events nor to let this blog become consumed by them. Rest assured that I will speak about the TIME magazine article soon enough, and that in the meantime I will do everything in my (limited) power to ensure that lessons are learned, and changes are made. If any of my readers find themselves in positions of power and influence within the effective altruism community, I hope that they will do the same.

### 1. Recap

This is Part 7 in a series based on my paper “Existential risk pessimism and the time of perils”.

Part 1 and Part 2 set up a tension between two claims:

(Existential Risk Pessimism) Per-century existential risk is very high.

(Astronomical Value Thesis) Efforts to mitigate existential risk have astronomically high expected value.

Part 3 introduced a potential solution: the Time of Perils Hypothesis on which risk is high now, but will soon fall to a permanently low level.

Part 4, Part 5, and Part 6 considered leading arguments for the Time of Perils Hypothesis and argued that they don’t work.

Before wrapping up (Part 8), I want to look at an application of this discussion.

### 2. An application: Heed background risk!

One way to read the lesson of this paper is that background levels of risk matter. If existential risk is high, this tends to substantially drag down the value of feasible strategies for mitigating existential risk.

Many estimates of the value of existential risk mitigation do not consider background levels of risk. As a result, these estimates can be inflated by many orders of magnitude. To see the point, let’s look at a leading estimate of the value of reducing biohazards.

### 3. Snyder-Beattie and Millett on cost-effective biosecurity

Andrew Snyder-Beattie holds a DPhil in Zoology from Oxford, and works as a Senior Program Officer at Open Philanthropy. Snyder-Beattie is widely considered to be among the most influential voices on biosecurity within the effective altruist community.

Piers Millett is a Senior Research Fellow at the Future of Humanity Institute. Millett holds advanced degrees in science policy, research methodology and international security, and has extensive industry experience in biosecurity.

Millett and Snyder-Beattie’s paper, “Existential risk and cost-effective biosecurity”, is among the most-cited papers on biosecurity written by effective altruists. The paper argues that even very small reductions in existential risks in the biosecurity sector (henceforth, “biorisks”) are cost-effective by standard metrics.

Our previous discussion suggests that this will not be true if background risk is high. Let’s see how this qualification could be introduced into Millett and Snyder-Beattie’s discussion.

Millett and Snyder-Beattie estimate the cost-effectiveness of an intervention as C/(NLR), where:

• C is the cost of the intervention.
• N is “the number of biothreats we expect to occur in 1 century”.
• L is “the number of life-years lost in such an event”.
• R is “the reduction in risk [in this century only] achieved by spending … C”.

Millett and Snyder-Beattie estimate these quantities as follows:

• C is fixed at $250 billion.
• N is estimated in min/max ranges using three different approaches.
• L is calculated assuming that humanity remains earthbound, with an annual population of 10 billion people, lasting for a million years, so that L = $10^{16}$ life-years.
• R is estimated at a 1% relative reduction, i.e. risk is reduced from N to .99N.

Because N is estimated in min/max ranges using three different models, Millett and Snyder-Beattie compute the cost-effectiveness C/(NLR) separately on each model. Standard government cost-effectiveness metrics in the United States value a life-year in the neighborhood of a few hundred thousand dollars, so Millett and Snyder-Beattie conclude that across models, a small intervention (such as a 1% relative risk reduction in this century) is cost-effective even at a high price (such as $250 billion).
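As a rough illustration, here is the C/(NLR) calculation in code. The value of N below is a hypothetical placeholder, since Millett and Snyder-Beattie estimate N in min/max ranges across three models; the other parameters follow their stated assumptions.

```python
# Millett and Snyder-Beattie's cost-effectiveness formula: C / (N * L * R).
C = 250e9   # cost of the intervention: $250 billion
L = 1e16    # life-years at stake: 1e10 people/year for 1e6 years
R = 0.01    # 1% relative risk reduction in this century
N = 1e-4    # HYPOTHETICAL probability of an extinction-level biothreat per century

dollars_per_life_year = C / (N * L * R)
print(dollars_per_life_year)  # 25.0 dollars per life-year
```

At a few hundred thousand dollars per life-year on standard government metrics, anything in this neighborhood counts as highly cost-effective.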

### 4. Two complaints about background risk

Not so fast. There are a fair number of complaints one could raise about this model (food for thought: try to explain why the expected number of lives saved should be NLR). But here are two I’d like to focus on today.

First, the expected number of future lives is estimated without reference to background risk. Parts 1-6 of this series warned us that this is a mistake. Let’s see why.

Stick with Millett and Snyder-Beattie’s model where the human population is fixed at 10 billion people, so that each century of human existence contains $10^{12}$ life-years. Calculating as in the simple model, with an extinction risk of r per century, the expected number of future humans is:

$L = 10^{12} \cdot \sum_{i=1}^{\infty} (1-r)^i = 10^{12}(1-r)/r$.

With a pessimistic 20% level of risk, this works out to $4 \times 10^{12}$ lives, in which case the Millett and Snyder-Beattie model overestimates future lives by a factor of 2,500. To recover the Millett/Snyder-Beattie estimate of $10^{16}$ future lives, we would have to set per-century risk at roughly 0.01%, orders of magnitude below most estimates given by effective altruists.
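A quick sketch, under the fixed-population assumption, confirms both the factor-of-2,500 overestimate and the risk level needed to recover the original estimate of future lives:

```python
# Expected future life-years under a constant per-century extinction risk r,
# with 1e12 life-years lived per century (1e10 people * 100 years):
def expected_life_years(r):
    return 1e12 * (1 - r) / r  # = 1e12 * sum over i >= 1 of (1-r)**i

pessimist = expected_life_years(0.20)
print(pessimist)         # ~4e12 life-years
print(1e16 / pessimist)  # ~2,500: the overestimate factor

# Per-century risk needed to recover 1e16 expected life-years:
# 1e12 * (1-r)/r = 1e16  =>  r = 1/(1e4 + 1), roughly 0.01% per century
r_needed = 1 / (1e4 + 1)
print(expected_life_years(r_needed))  # ~1e16
```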

Second and more specifically, the expected number of future lives is model-dependent. Because the number L of future lives depends on the background level of risk, it depends specifically on the background level N of biorisk. Hence it makes very little sense to hold L fixed across models which vary N.

How can we fix these problems? And how much will the results change once we do?

### 5. Patching the Millett/Snyder-Beattie estimates

Let’s hold fixed as much of the Millett and Snyder-Beattie model as we can. In particular, let’s assume a constant population of 10 billion humans per year.

We’ve already seen how to estimate the number of future lives L as a function of the background level of risk r:

$L = 10^{12} \cdot \sum_{i=1}^{\infty} (1-r)^i = 10^{12}(1-r)/r$.

In the full paper (Section 3.1), I model the value of an absolute reduction in extinction risk, from r to r-f. The same proof shows that, in the present model, an act X which reduces absolute extinction risk in our century by f saves an expected $10^{12} \cdot f/r$ lives.

We can decompose risk as

r = b + n

where b denotes biorisk, and n captures all other risks. Since biorisk can be reduced by at most b, in absolute terms, this means that the value of X is capped at the value of saving $10^{12} \cdot b/r$ lives.
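To make the cap concrete, here is a small sketch; the value of b below is a hypothetical placeholder, not an estimate from the paper.

```python
# Expected life-years saved by reducing this century's extinction risk
# from r to r - f, on the fixed-population model:
def life_years_saved(f, r):
    return 1e12 * f / r

r = 0.20    # pessimistic total per-century risk
b = 1e-4    # HYPOTHETICAL per-century biorisk, a small slice of r

# Even eliminating biorisk outright (f = b) saves at most:
print(life_years_saved(b, r))  # ~5e8 life-years
```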

And now we see the problem, for in the Millett and Snyder-Beattie models, b is very low. After all, they want to show that even very small biorisks are cost-effective to mitigate. However, unless r is comparably low, background risk dominates, making reductions in biorisk ineffective when all other modeling assumptions are held fixed.

By way of example, assume a pessimistic 20% risk per century, and suppose that X leads to a 10% (relative) reduction in biorisk this century, i.e. an absolute risk reduction of 0.1b. For comparison, take the number N of biothreats/century as a proxy for b, per-century biorisk. (This isn’t quite right, but any complaints would apply equally to the original Millett and Snyder-Beattie model).

On this model, cost-effectiveness is rather lower than Millett and Snyder-Beattie’s estimates.
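A rough recomputation under these assumptions runs as follows; N is again a hypothetical placeholder, since the model-specific values appear in Millett and Snyder-Beattie's tables.

```python
C = 250e9       # intervention cost, as before
r = 0.20        # pessimistic background risk per century
N = 1e-4        # HYPOTHETICAL per-century biorisk, used as a proxy for b
f = 0.1 * N     # a 10% relative reduction in biorisk

life_years = 1e12 * f / r      # expected life-years saved
print(C / life_years)          # ~$5,000 per life-year
```

Several thousand dollars per life-year is still cost-effective by government metrics, but it is far worse than what the original model delivers for the same N.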

What a difference background risk makes! Now reductions in biorisk look many orders of magnitude less cost-effective than they were before.

That is not to say that biosecurity is worthless. But as the estimated cost-effectiveness of longtermist interventions falls, leading short-termist interventions become increasingly competitive with the best longtermist interventions.

For example, GiveWell estimates with high confidence that the best short-termist interventions can save a life for several thousand dollars. This means that even if we ignore the downstream effects of short-termist interventions (which we should not do), those interventions clock in at a cost per life-year in the range of perhaps $30-70. That beats the stuffing out of the biorisk interventions on almost all of Millett and Snyder-Beattie’s modeling assumptions.
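The conversion behind that range is simple arithmetic; the per-life cost and remaining life-years below are illustrative assumptions in the spirit of GiveWell's estimates, not GiveWell's own figures.

```python
cost_per_life = 3500  # HYPOTHETICAL dollars per life saved
for remaining_years in (50, 70):
    # Dollars per life-year = dollars per life / remaining life-years saved
    print(cost_per_life / remaining_years)  # 70.0, then 50.0
```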

Even if we reduce our pessimism substantially, the problem does not go away. Suppose we become twenty times less pessimistic, adopting a 1% level of background risk. Since expected lives saved scale with f/r, the value of X, a 10% relative reduction in biorisk this century, rises twentyfold.
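Recomputing with the same hypothetical N as before shows the twentyfold improvement, which holds for any choice of N:

```python
def cost_per_life_year(N, r, C=250e9, rel_reduction=0.1):
    f = rel_reduction * N          # absolute reduction in biorisk
    return C / (1e12 * f / r)      # dollars per expected life-year saved

N = 1e-4  # HYPOTHETICAL per-century biorisk, as before
print(cost_per_life_year(N, r=0.20))  # ~$5,000 per life-year
print(cost_per_life_year(N, r=0.01))  # ~$250 per life-year
```

On this hypothetical N, even the less pessimistic figure remains above the roughly $30-70 per life-year of the best short-termist interventions.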

Amazingly, even ignoring knock-on effects, our best short-termist interventions often still come out as more cost-effective than biorisk on these models.

What a difference background risk makes.

### 6. Softening our conclusions

So far, we have seen that a standard model of cost-effective biosecurity, due to Piers Millett and Andrew Snyder-Beattie, wrongly ignores the level of background existential risk. As a result, the model likely overestimates the cost-effectiveness of biosecurity by many orders of magnitude.

In particular, we have seen that once levels of background risk accepted by many effective altruists are built into the model, biosecurity interventions often come out as less cost-effective than our best short-termist interventions, even if we pretend that those short-termist interventions have no downstream effects.

Does this show that short-termist interventions are necessarily more cost-effective than longtermist interventions? That would be too hasty, for several reasons.

First, the Millett and Snyder-Beattie model makes a restrictive population assumption: that the human population will be fixed at 10 billion people per year. This is unlikely, since humanity surely has a nonzero chance of colonizing the stars, in which case our population may rise considerably above ten billion. I discussed space colonization in Part 4 of this series.

We might adapt the Millett and Snyder-Beattie model to incorporate population growth by analogy with value growth, as considered in Part 2. We saw that value growth will boost the cost-effectiveness of longtermist interventions by a noticeable amount, although perhaps not as much as the longtermist had hoped.

Second, the model adopts some very particular assumptions about levels of biorisk and the cost of reducing them. In particular, it deals with cases in which biorisk is orders of magnitude below standard effective altruist estimates of overall existential risk. That is probably not an accurate way to capture the prevailing view.

If we raise biorisk to be among the leading sources of existential risk, then the quantity b/r in the previous section no longer drags down the value of biosecurity interventions, and biosecurity may become a good deal more cost-effective. However, this would no longer be a case in which small reductions in biorisk are cost-effective, at least not in absolute terms, since the absolute magnitude of risk reductions would be increased by many orders of magnitude. This would then not bear out the original point of the Millett and Snyder-Beattie discussion.

Third, existential risk mitigation, and biosecurity in particular, is one of many things that longtermists could fund. To say that biosecurity may not be as cost-effective as we hoped is not yet to show in any direct way that other longtermist interventions fail to be cost-effective.

Still, perhaps we can draw the following lesson from today’s discussion. Many models of the value of longtermist interventions do not build in background risk. When we build high levels of background risk into existing models, the cost-effectiveness of longtermist interventions drops substantially.

Does that seem like the right takeaway? Are there other ways to salvage the Millett and Snyder-Beattie argument? For that matter, are there other problems with their methodology that I have not mentioned? Let me know what you think in the comments section.
