> Even if we use … conservative estimates, which entirely ignor[e] the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.
>
> — Nick Bostrom, “Existential risk prevention as global priority”
This is Part 1 of a series based on my paper, “Mistakes in the moral mathematics of existential risk.”
(Almost) everyone agrees that human extinction would be a bad thing, and that actions which reduce the chance of human extinction have positive value. But some authors assign quite high value to extinction mitigation efforts. For example:
- Nick Bostrom argues that even on the most conservative assumptions, reducing existential risk by just one millionth of one percentage point would be as valuable as saving a hundred million lives today.
- Hilary Greaves and Will MacAskill estimate that early asteroid-detection efforts saved lives at an expected cost of fourteen cents per life.
These numbers are a bit on the high side. If they are correct, then on many philosophical views the truth of longtermism will be (nearly) a foregone conclusion.
I think that these, and other similar estimates, are inflated by many orders of magnitude. My paper and blog series “Existential risk pessimism and the time of perils” brought out one way in which these numbers may be too high: they will be overestimates unless the Time of Perils Hypothesis is true.
My aim in this paper is to bring out three novel ways in which many leading estimates of the value of existential risk mitigation have been inflated. (The paper should be online as a working paper within a month.)
I’ll introduce the mistakes in detail throughout the series, but it might be helpful to list them now.
- Mistake 1: Focusing on cumulative risk rather than per-unit risk.
- Mistake 2: Ignoring background risk.
- Mistake 3: Neglecting population dynamics.
I show how many leading estimates make one, and often more than one, of these mistakes.
Correcting these mistakes in the moral mathematics of existential risk has two important implications.
- First, many debates have been mislocated, insofar as factors such as background risk and population dynamics are highly relevant to the value of existential risk mitigation, but these factors have rarely figured in recent debates.
- Second, many authors have overestimated the value of existential risk mitigation, often by many orders of magnitude.
In this series, I review each mistake in turn. Then I consider implications of this discussion for current and future debates. Today, I look at the first mistake, focusing on cumulative rather than per-unit risk.
2. Bostrom’s conservative scenario
Nick Bostrom (2013) considers what he terms a conservative scenario in which humanity survives for a billion years on the planet Earth, at a stable population of one billion humans.
We will see throughout this series that this is far from a conservative scenario. Modeling background risk (correcting the second mistake) will put pressure on the likelihood of humanity surviving for a billion years. And modeling population dynamics (correcting the third mistake) will raise the possibility that humanity may survive at a population far below one billion people. However, let us put aside these worries for now and consider Bostrom’s scenario as described.
In this scenario, there are 10^18 human life-years yet to be lived, or just over 10^16 lives at current lifespans. Bostrom uses these figures to make a startling claim: reducing existential risk by just one millionth of one percentage point is, in expectation, as valuable as saving one hundred million people.
> Even if we use … conservative estimates, which entirely ignor[e] the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.
It can seem obvious that Bostrom must be correct here. After all, a reduction of existential risk by one millionth of one percentage point gives a 10^-8 chance of saving just over 10^16 people, and so in expectation it saves just over 10^8 lives, or a hundred million lives.
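As a quick sanity check, Bostrom’s arithmetic can be reproduced directly (the figures are his; the snippet is merely illustrative):

```python
# Bostrom's conservative scenario: ~1e16 future lives at stake.
future_lives = 1e16

# An absolute reduction of one millionth of one percentage point
# in the chance of catastrophe: 1e-6 * 1e-2 = 1e-8.
risk_reduction = 1e-8

# Expected number of lives saved: ~1e8, i.e. a hundred million.
expected_lives_saved = risk_reduction * future_lives
```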
Today, we will see that this estimate is not strictly speaking false. It is rather worse than false: it is badly misleading. Once the estimate is described in more revealing terms, we will see that the seemingly small reduction of 10^-8 in the chance of existential catastrophe required to deliver an expected value equivalent to a hundred million lives saved is better described as a very large reduction in existential risk.
3. Relative and absolute risk reduction
To see the point, we need to make two distinctions. First, reductions in risk can be described in two ways.
Typically, we speak about relative reductions in risk, which chop a specified fraction off the current amount of risk. It is in this sense that a 10% reduction in risk takes us from 80% to 72% risk, from 20% to 18% risk, or from 2% to 1.8% risk. (Formally, relative risk reduction by f takes us from risk r to risk (1-f)r).
More rarely, we talk about absolute reductions, which subtract an absolute amount from the current level of risk. It is in this sense that a 10% reduction in risk takes us from 80% to 70% risk, from 20% to 10% risk, or from 10% to 0% risk. (Formally, absolute risk reduction by f takes us from risk r to risk r − f.)
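The two readings are easy to conflate, so here is a minimal sketch making the difference explicit (the function names are mine, not standard terminology):

```python
def relative_reduction(r, f):
    """Relative reduction by fraction f: risk r becomes (1 - f) * r."""
    return (1 - f) * r

def absolute_reduction(r, f):
    """Absolute reduction by f: risk r becomes r - f (floored at zero)."""
    return max(r - f, 0.0)

# A "10% reduction" means different things on the two readings:
# starting from 20% risk, the relative reading gives 18%,
# while the absolute reading gives 10%.
after_relative = relative_reduction(0.20, 0.10)  # 0.18
after_absolute = absolute_reduction(0.20, 0.10)  # 0.10
```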
Bostrom must be concerned with absolute risk reduction for his argument to make sense. Otherwise, the benefit of existential risk reduction would have to be multiplied by the current level of risk r.
In general, a focus on absolute risk reduction isn’t especially nefarious, just a bit nonstandard. It does tend to overstate the size of a given reduction, since an absolute reduction is equivalent to a relative reduction from a starting point of 100% risk. In this way, stating a risk reduction in absolute rather than relative terms inflates its size by a factor of 1/r, where r is the starting level of risk. This can be quite a strong boost if starting levels of risk are low; however, many effective altruists think that levels of existential risk are rather high, in which case the overstatement may be one order of magnitude or less. Let’s not dwell on this.
4. Cumulative and per-unit risk
Over a long period (say, a billion years), we can report risk in two ways. On the one hand, we can report the cumulative risk rC that a catastrophe will occur at least once throughout the period. Cumulative risk over a billion years can be quite high: it’s hard to go a billion years without catastrophe.
On the other hand, we can divide a long period into smaller units (say, centuries). Then we can report the per-unit risk rU that a catastrophe will occur in any given unit.
How is cumulative risk related to per-unit risk? Well, if there are N units, then we have:
rC = 1 − (1 − rU)^N
Therein lies the rub, for if N is very high (in this case, N is ten million!), then for any non-negligible value of rU, the term (1 − rU)^N is driven exponentially towards zero, so that rC is pushed almost inescapably towards one. This means that over a long period, keeping rC meaningfully below one requires very low values of rU.
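To see how quickly cumulative risk saturates, we can plug numbers into the formula above (a sketch, with N set to the ten million centuries in Bostrom’s billion-year scenario):

```python
def cumulative_risk(per_unit_risk, n_units):
    """Chance that catastrophe strikes at least once across n_units periods."""
    return 1 - (1 - per_unit_risk) ** n_units

N = 10_000_000  # centuries in a billion years

# Even a per-century risk of one in ten thousand makes catastrophe
# all but certain over the whole period:
high = cumulative_risk(1e-4, N)  # ~1.0
# Only extremely small per-century risks keep cumulative risk low:
low = cumulative_risk(1e-9, N)   # ~0.01
```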
And Bostrom is concerned with cumulative risk. For an intervention to be credited with increasing the survival probability not only of current humans, but of all future humans, by 10^-8, that intervention must reduce cumulative risk by 10^-8.
No problem, you say. Surely it cannot be so hard to reduce cumulative risk by a mere one millionth of one percentage point. However, an absolute reduction of cumulative risk by 10^-8 requires (by definition) driving cumulative risk below 1 − 10^-8. Again, you say, that must be easy. Not so. Driving cumulative risk this low requires driving per-century risk to roughly 1.8×10^-6, on the order of one in a million.
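The required per-century risk can be computed by inverting the formula rC = 1 − (1 − rU)^N: if cumulative risk is to stay below 1 − 10^-8, the survival probability (1 − rU)^N must stay above 10^-8. A sketch of that calculation:

```python
N = 10_000_000  # centuries in a billion years

# Require (1 - r)^N >= 1e-8, i.e. r <= 1 - (1e-8)**(1/N):
max_per_century_risk = 1 - (1e-8) ** (1 / N)  # ~1.8e-6, about two in a million
```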
5. First mistake: Focusing on cumulative over per-unit risk
We can describe this intervention in two ways. On the one hand, we can describe it flatteringly, as Bostrom does, in terms of (absolute) cumulative risk reduction. All that’s needed, Bostrom says, is a reduction by one millionth of one percentage point.
On the other hand, we can describe it unflatteringly, in the more standard terms of relative per-unit risk reduction. Now what’s needed is driving per-century risk down to roughly two in a million. By contrast, many effective altruists think that per-century risk is currently above 10%. From a 10% baseline, that is a relative risk reduction by a factor of more than 50,000: not a reduction by one part in a million, but a reduction to about one fifty-thousandth of the current level.
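Under an assumed current per-century risk of 10% (a baseline I choose for illustration; many effective altruists put the figure at least this high), the implied fold reduction comes out as follows:

```python
N = 10_000_000                       # centuries in a billion years
required = 1 - (1e-8) ** (1 / N)     # per-century risk compatible with
                                     # cumulative risk <= 1 - 1e-8
current = 0.10                       # assumed current per-century risk
fold_reduction = current / required  # roughly a 50,000-fold reduction
```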
Our first mistake in the moral mathematics of existential risk is therefore focusing on cumulative rather than per-unit risk. This is a mistake for two reasons.
First, as we saw, focusing on cumulative risk significantly understates the size of the required risk reduction by describing very, very large changes in per-century risk as very, very small changes in cumulative risk. This gives the misleading impression that what is, in quite a natural and intuitive sense, an astronomically large change is in fact only a tiny one.
Second, focusing on cumulative risk moves debates away from what we can affect. Our actions may reduce risk in our own century, and perhaps if we are lucky they will even affect risk in nearby centuries. But it is unlikely that we can predictably affect risk in far distant centuries with anything approaching the ease that we affect risk in nearby centuries. For this reason, in assessing the value of feasible acts taken to mitigate existential risk, we should focus on per-unit risk rather than cumulative risk in order to bring the focus back to quantities that our actions can predictably affect.
Today’s post introduced the series and discussed a first mistake in the moral mathematics of existential risk: focusing on cumulative over per-century risk. We saw that a leading paper by Nick Bostrom makes this mistake, and that once the mistake is corrected, what appeared to be a very small change in existential risk turned out in a natural sense to be a very large change. We will also see in the next post that another leading paper makes the same mistake.
We will also look at two more mistakes in the moral mathematics of existential risk: ignoring background risk, and neglecting population dynamics. We will see how these mistakes combine in leading papers to overestimate the value of existential risk mitigation, and serve to mislocate debates about that value.