Where does this [discussion] leave the case for existential risk mitigation? To some extent, this depends on readers’ views about the cost-effectiveness of existing opportunities to mitigate existential risks. Perhaps some efforts to mitigate existential risks are so cost-effective that they can be justified only by their benefits for agents alive today … More generally, we might sympathize with Ord, who bemoans the fact that the world spends more on ice cream than on existential risk mitigation. But in general, the models of this paper suggest that existential risk mitigation may not be as important as many pessimists have taken it to be, and crucially that pessimism is a hindrance rather than a support to the case for existential risk mitigation.

— David Thorstad, “Existential risk pessimism and the time of perils”
This is Part 8 in a series based on my paper “Existential risk pessimism and the time of perils”.
Part 1 and Part 2 set up a tension between two claims:
(Existential Risk Pessimism) Per-century existential risk is very high.
(Astronomical Value Thesis) Efforts to mitigate existential risk have astronomically high expected value.
Part 3 introduced a potential solution: the Time of Perils Hypothesis on which risk is high now, but will soon fall to a permanently low level.
Part 4, Part 5, and Part 6 considered leading arguments for the Time of Perils Hypothesis and argued that they don’t work.
Part 7 looked at an application of this discussion to estimating the cost-effectiveness of biosecurity.
Today, I want to draw this series towards a close by looking at five implications of the discussion in Parts 1-7.
2.1. An enduring tension
One implication of our discussion is an enduring and counterintuitive tension between Pessimism and the Astronomical Value Thesis. At first glance, it seems natural, even obvious that Pessimism should support the Astronomical Value Thesis. But in fact, we saw that across a range of models and parameters, Pessimism tells strongly against the Astronomical Value Thesis. Concretely, this means two things.
First, it is often assumed that a good way to argue for the importance of existential risk mitigation is to argue that levels of existential risk are very high. This is misleading. On its own, Pessimism tends to decrease rather than increase the value of existential risk mitigation. In fact, high levels of pessimism may singlehandedly scuttle the case for existential risk mitigation.
Second, it is often assumed that a good way to argue against the importance of existential risk mitigation is to argue that levels of existential risk are low. This is also misleading. Across most reasonable models and parameters, denying Pessimism tends to increase rather than decrease the value of existential risk mitigation.
This second point is somewhat delicate, because one of my blog series (‘Exaggerating the risks’) claims that effective altruists have often severely overstated levels of existential risk. On its own, that claim should increase rather than decrease the importance of existential risk mitigation. Why, then, would it make sense to regard the series ‘Exaggerating the risks’ as a criticism of effective altruism, rather than a novel way of supporting longtermism?
To be honest, I am open to that interpretation of this series. Although I am often publicly critical of effective altruism, my primary allegiance is to the truth. If it turns out that in criticizing leading estimates of existential risk I have strengthened, rather than weakened the case for existential risk mitigation, then that is what I would like my readers to conclude.
However, there are plausible auxiliary claims which would make the discussion in ‘Exaggerating the risks’ tell against the importance of existential risk mitigation. One natural hypothesis concerns tractability: lower risks are often more difficult to mitigate. For example, you may think that there are diminishing returns to risk mitigation. Large risks can be quickly driven down by pursuing cheap but neglected steps for risk mitigation. However, as those risks fall by several orders of magnitude, remaining strategies for risk mitigation become more difficult to identify and implement, reducing the tractability of risk mitigation.
Across many models and parameters, it is an unalterable mathematical fact that a fixed absolute or relative reduction in risk becomes more valuable as levels of background risk decrease. However, if risk reduction also becomes less tractable as levels of background risk decrease, then the returns to risk-mitigation efforts may eventually decrease as more effort becomes required to produce a fixed reduction in risk. Realistically, this means that moderate criticisms of existential risk estimates (for example, that they overestimate risk by a factor of 5) will not help my case, but that stronger criticisms of existential risk estimates (for example, that they overestimate risk by a factor of 50,000) may help a good deal, if tractability drops sharply as risks become very low.
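The mathematical fact in question can be illustrated with a minimal sketch (my own toy calculation, not one from the paper): in the constant-risk model, expected future survival under per-century risk r is (1 − r)/r centuries, and the same absolute reduction in r buys far more expected future when background risk is already low.

```python
def v(r):
    """Expected future centuries of survival under constant,
    independent per-century extinction risk r.

    Infinite-horizon geometric series: sum of (1-r)^t for t >= 1,
    which equals (1 - r) / r.
    """
    return (1 - r) / r

delta = 0.01  # a fixed absolute reduction in per-century risk

# The same absolute reduction is worth far more at low background risk:
print(v(0.20 - delta) - v(0.20))  # ~0.26 extra expected centuries
print(v(0.02 - delta) - v(0.02))  # ~50 extra expected centuries
```

Holding the size of the risk reduction fixed, the gain grows by orders of magnitude as background risk falls; the tractability worry is that achieving that fixed reduction also becomes far more expensive.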
2.2. Background risk matters
Part 7 of this series brought out another lesson of this paper: heed background risk!
One way to read the lesson of this paper is that background levels of risk matter … Many estimates of the value of existential risk mitigation do not consider background levels of risk. As a result, these estimates can be inflated by many orders of magnitude.
Building background risk into models that previously assumed no background risk will have a global effect of cooling down the value of existential risk mitigation. This happens because the gains to surviving one catastrophe are no longer a sure thing: we could always face another catastrophe later.
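The cooling-down effect can be seen in a deliberately simple sketch (my own illustration, not the MSB model itself): value a catastrophe averted this century by the expected number of centuries humanity goes on to survive afterwards, under a constant, independent per-century background risk r.

```python
def expected_future_centuries(r, horizon=10_000_000):
    """Expected centuries of future survival under constant,
    independent per-century extinction risk r, over a finite horizon.

    Geometric series: sum of survival probabilities (1 - r)^t
    for t = 1, ..., horizon.
    """
    if r == 0:
        return float(horizon)
    q = 1.0 - r
    return q * (1.0 - q ** horizon) / r

# With no background risk, surviving one catastrophe buys the whole
# billion-year future (10 million centuries).
print(expected_future_centuries(0.0))  # 10000000.0

# With a pessimist's 20% risk per century, the expected future shrinks
# to about 4 centuries: the gains from surviving one catastrophe are
# no longer a sure thing.
print(expected_future_centuries(0.2))  # 4.0
```

Moving from zero background risk to a pessimist's level of background risk deflates the value of surviving a catastrophe by more than six orders of magnitude in this toy case.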
We saw in Part 7 that introducing background risk into cost-effectiveness analyses can lead to qualitative changes in our conclusions. In particular, we looked at the cost-effectiveness of biosecurity interventions on three models from a paper by Piers Millett and Andrew Snyder-Beattie. We saw that the cost-effectiveness of biosecurity interventions in the Millett and Snyder-Beattie (MSB) model jumps from a low figure (table: “MSB cost/life-year”) to a rather higher figure (table: “Actual cost/life-year”).
| MSB Model | N_b (biothreats/century) | MSB cost/life-year | Actual cost/life-year |
|-----------|--------------------------|--------------------|-----------------------|
| Model 1   | 0.005 to 0.02            | $0.125 to $5.00    | $25 to $1,000         |
| Model 2   | 1.6×10⁻⁶ to 8×10⁻⁵       | $31.00 to $1,600   | $6,250 to $312,500    |
| Model 3   | 5×10⁻⁵ to 1.4×10⁻⁴       | $18.00 to $50.00   | $10,000 to $36,000    |
This change is significant. GiveWell estimates that we can save a life right now for several thousand dollars, putting the best short-termist interventions at a cost-effectiveness of around $30-70/life-year, even ignoring knock-on effects. Comparing with the table above, we see that in the MSB model, incorporating background risk takes biosecurity interventions from ‘likely more cost-effective’ than the best short-termist interventions to ‘almost certainly less cost-effective’ than the best short-termist interventions.
What a difference background risk makes.
2.3. The Time of Perils Hypothesis is very important
Part 2 of this series explored a variety of ways to reconcile Existential Risk Pessimism with the Astronomical Value Thesis. We saw that by far the most plausible strategy invokes the Time of Perils Hypothesis, on which risk is high now but will soon fall to a significantly lower level if only we survive the next few perilous centuries. The Time of Perils Hypothesis is one version of a broader Hinge of History Hypothesis, on which we are now living through one of the most influential times in history.
Although the Time of Perils Hypothesis is often discussed by effective altruists, I do not think it is always appreciated how important the Time of Perils Hypothesis is. If the argument in this series is on the right track, without the Time of Perils Hypothesis effective altruists would either have to radically reduce their existential risk estimates, or else make peace with the fact that existential risk mitigation may not be more important than the most cost-effective short-termist causes.
With this in mind, I think it is surprising that many leading arguments for the Time of Perils Hypothesis are somewhat underdeveloped. We looked at three arguments for the Time of Perils Hypothesis in this series, including an appeal to space settlement (Part 4), an existential risk Kuznets curve (Part 5), and wisdom (Part 6). While one of these arguments (the existential risk Kuznets curve) is quite well-developed, I struggled at times to identify detailed and plausible versions of the appeals to space and wisdom.
In the case of space settlement, there wasn’t a clear story about how near-term space settlement is meant to dramatically reduce anthropogenic risks. And in the case of wisdom, we struggled a bit to pin down what notion of wisdom is meant to drive the argument and to see in detail why we should expect wisdom to grow dramatically enough to bring an end to the Time of Perils.
The problem of underdevelopment will recur in Part 9, where I will discuss a common objection to my argument: that progress in artificial intelligence will bring the Time of Perils to an end. We will see that despite the widespread nature of this view among effective altruists, there is surprisingly little in the way of written defense, and it is in fact quite hard to pin down why one might take the view to be plausible.
If these complaints are on the right track, then one subsidiary lesson would be that it is important for effective altruists to develop leading arguments for the Time of Perils Hypothesis in greater detail. Given the central importance of the Time of Perils Hypothesis, it will not do to leave key arguments for the Time of Perils Hypothesis underspecified or undermotivated.
2.4: Skepticism about longtermism for everyone
One of my first posts, ‘A note about methodology’, has this to say:
Many challenges to effective altruism turn on fundamental normative debates in ethics and decision theory. Where decision theory is concerned, we might worry that effective altruists place fanatical emphasis on low probabilities of high payoffs, allowing them to assume arbitrarily large importance in decisionmaking. Or we might be concerned that effective altruists are not sensitive enough to risk in decisionmaking. Where ethics is concerned, we might worry that effective altruists overemphasize the value of unidentified future lives, the strength of duties to aid future people, or the ability of many small claims to assistance to aggregate into very large claims. We might also worry that effective altruists underemphasize competing factors, such as duties of reparative justice, collective obligations, or permissible partiality to our own interests over the interests of others. For the most part, I will remain neutral on these debates.
It is often assumed that views such as consequentialism, expected value maximization, totalism and risk neutrality lead inexorably, or at least nearly inexorably to longtermism. By contrast, I hope to show how it is possible to hold any or all of these views while remaining skeptical of longtermism.
How is the conjuring trick done? Unfortunately, it is a long and winding road. One-shot philosophical moves such as denying that future people matter, or that small claims can aggregate to outweigh large claims, would shorten the road, but precisely because they make the road so short, they tend to be a bit too strong for many readers.
This blog proposes an alternative path: work, for the sake of argument, within a framework that is broadly totalist, consequentialist, and the like. Then develop a series of challenges within this framework which combine to put pressure against longtermism. Readers who wish to push back against my philosophical assumptions are welcome to do so: as is well known, denying most of these assumptions will only strengthen the case against longtermism. But I hope to show that no radical philosophical changes are needed to put strong pressure against longtermism.
This series explored the first of many challenges to longtermism: if you really think that levels of existential risk are very high, then you may not be able to hold that it is astronomically important to mitigate existential risk. That puts pressure against one way of motivating longtermism: by appeal to the importance of existential risk mitigation. It does not entirely foreclose the possibility of longtermism. But together with the rest of my academic papers, the remaining blog series, and arguments raised by many other authors, I hope to weave a slow, steady case against longtermism.
2.5. Cumulative risk
In his paper “Existential risk prevention as global priority,” Nick Bostrom estimates the number of future descendants of humanity, should we avoid existential catastrophe, using several methods. The smallest of these estimates leads to a future population of just over 10¹⁶ lives. Bostrom uses this estimate to derive a rather surprising claim:

Even if we use the most conservative of these estimates … we find that the expected loss of an existential catastrophe is greater than the value of 10¹⁶ human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.
That is rather different from the conclusion of this series, on which the expected value of reducing existential risk by a millionth of a percentage point is often quite small. What explains the difference in our estimates?
Existential risk is a repeated risk: a risk that humanity faces again and again throughout our history. Repeated risks can be described in two ways. The first, and most descriptive, characterization of repeated risks is a per-unit characterization, which specifies the level of risk for each time period. I looked at a simple model on which per-unit risk takes a constant value r during each century of human existence, as well as a time of perils model on which per-unit risk later falls to a lower value rₗ.
Another way to characterize repeated risk is a cumulative characterization: over a given time period (say, one billion years, or all of human history), what is the cumulative probability that an existential catastrophe occurs at least once?
Over a long period, cumulative risk is much higher than per-unit risk. For example, suppose that humanity faces a per-unit risk r = 0.2 of existential catastrophe each century. Then over a billion years (ten million centuries), we face cumulative risk 1-(1-r)^10,000,000, a number so mind-bogglingly close to certain doom that the best calculators shudder in protest.
When Bostrom says that a reduction of one millionth of one percentage point in existential risk would produce as much value as saving a hundred million lives, he is not talking about a reduction in per-unit risk. We saw in this series that such a small reduction in per-unit risk would, for the pessimist, have much lower value. Bostrom is talking about a reduction of one millionth of a percentage point in cumulative risk.
Therein lies the rub: Bostrom asks us to imagine that we could reduce the current level of existential risk r by enough to drop the quantity 1-(1-r)^10,000,000 by at least a millionth of a percentage point (10⁻⁸). ‘That’s easy’, you say, ‘because 10⁻⁸ is very small’. Not so fast: 1-(1-r)^10,000,000 is calculator-breakingly close to 1, so any change in r sufficient to drive cumulative risk down by 10⁻⁸ would in fact have to be calculator-breakingly dramatic.
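A few lines of Python make the arithmetic concrete (an illustrative calculation under the constant-risk model, computed in log space to avoid floating-point underflow):

```python
import math

CENTURIES = 10_000_000  # one billion years, in centuries

def cumulative_risk(r, n=CENTURIES):
    """Probability of at least one catastrophe in n centuries,
    given constant, independent per-century risk r."""
    # 1 - (1 - r)^n, evaluated stably via log1p/expm1
    return -math.expm1(n * math.log1p(-r))

# At a pessimist's per-century risk of 20%, cumulative risk over a
# billion years is indistinguishable from 1 in double precision.
print(cumulative_risk(0.2))  # 1.0

# How low must per-century risk fall for cumulative risk to sit a
# mere 10^-8 (one millionth of a percentage point) below certainty?
# Solve (1 - r)^n = 10^-8 for r.
r_needed = -math.expm1(math.log(1e-8) / CENTURIES)
print(r_needed)  # about 1.8e-6

# That is a reduction in per-century risk by a factor of over 100,000.
print(0.2 / r_needed)
```

In other words, dropping cumulative risk by a ‘mere’ 10⁻⁸ over a billion years requires driving per-century risk from the pessimist's 20% down to roughly two in a million.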
The lesson of this discussion is that it is very important to be clear about whether we are using cumulative or per-unit characterizations of repeated risks. Switching to cumulative characterizations produces a false hope that small changes in existential risk can produce astronomically large gains, because we fail to see that small changes in cumulative risk often require astronomically large changes in per-unit risk.
2.6. Taking stock
In this section, we explored five implications of the series ‘Existential risk pessimism and the time of perils’.
First, there is an enduring and counterintuitive tension between Existential Risk Pessimism and the Astronomical Value Thesis. If you want to convince me that a given reduction in existential risk is important, you would do better to be an optimist than a pessimist.
Second, background risk matters: building background risk into our models tends to lower the cost-effectiveness of existential risk mitigation, often by enough to make a qualitative difference in model behavior.
Third, the Time of Perils Hypothesis is very important. Without it, the pessimist’s case for existential risk mitigation may founder. For this reason, it is critical to devote more resources to fleshing out arguments for the Time of Perils Hypothesis.
Fourth, skepticism about longtermism is for everyone. You don’t have to deny consequentialism, totalism, or any other ethical or decision-theoretic principle to be worried about longtermism. This series is one of a rough patchwork of challenges to longtermism that everyone should take seriously, regardless of their background view.
Finally, it is very important to separate cumulative and per-unit characterizations of repeated risks. Many leading effective altruists use cumulative characterizations of existential risk to suggest that small reductions in existential risk carry astronomical value. This is misleading: `small’ reductions in cumulative existential risk are often very large in per-unit terms.
3. Further information
This is the penultimate post in my series ‘Existential risk pessimism and the time of perils’. A planned conclusion (Part 9) will discuss objections and replies. However, I am conscious that not all readers will be interested in hearing objections and replies, so I think that now is a good time to handle some closing business.
This post is based on an academic paper by the same name, which can be found in the Global Priorities Institute working paper series. I would encourage interested readers to take a look at the full paper, together with the formal appendix. A version of this paper has been conditionally accepted to an academic journal, and if all goes well it should be available under an open-access model within a year or so.
Some readers may prefer to watch short video presentations of the paper. Here is a short presentation I gave about this paper at EAGxOxford.
My goal in writing this blog is to use academic research to drive positive change both within and outside of the effective altruism movement. I hope that this series goes some way towards showing how academic research might yield action-relevant insights about questions raised by effective altruism. More generally, I hope that readers may find themselves motivated to take a look at some of my other papers on effective altruism, and at related work in collections such as the GPI working papers series and our soon-to-be-forthcoming edited volume on longtermism. I don’t always agree with these papers, but I heartily recommend them.
I will, in due course, try to blog about other academic papers. If you would rather wait for more blog series, fret not. But I would encourage readers, if they find themselves with a spare moment, to do some digging through the academic literature on their own.
In the meantime, what do you think of the implications drawn from this paper? Are there any implications that I missed? And how did you like this series? Let me know.