[Technology] grants immense powers. In a flash, [we] create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.
Carl Sagan, Pale Blue Dot
1. Recap
This is the third installment in a series of posts based on my paper “Existential risk pessimism and the time of perils”.
In Part 1, we looked at the relationship between two claims:
(Existential Risk Pessimism) Per-century existential risk is very high.
(Astronomical Value Thesis) Efforts to mitigate existential risk have astronomically high expected value.
It might seem obvious that Pessimism supports the Astronomical Value Thesis. But we looked at a Simple Model of the value of existential risk mitigation on which Pessimism is in fact irrelevant to the Astronomical Value Thesis.
In Part 2, we looked at ways to modify the Simple Model to square Pessimism with the Astronomical Value Thesis. We saw that natural modifications will not be enough to recover the Astronomical Value Thesis. We also saw that many of these modifications make Pessimism tell increasingly against the Astronomical Value Thesis.
The natural conclusion is that Pessimism is the problem. If we want to recover the Astronomical Value Thesis, then we need to relax our pessimism. How might we do this?
2. Introducing the Time of Perils Hypothesis
The natural solution is to be pessimists about the short term, but optimists about the long term. We can hold that existential risk is high now, but will soon drop to a very low level, and will stay there forever.
This point is usually credited to the astronomer Carl Sagan:
It might be a familiar progression, transpiring on many worlds … life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges … and then technology is invented. It dawns on them that there are such things as laws of Nature … and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.
Sagan’s view has become the received wisdom in many EA circles. Humanity, they tell us, is like an infant. She finds herself with increasingly powerful technology to play with, but does not yet have the wisdom to use that technology safely. Until humanity learns to manage the risks posed by new technology, we will remain in our infancy, stuck in the Time of Perils. But once we learn to manage technology safely, we will reach adulthood, and existential safety.
Here is Ord, quoting Sagan:
The problem is not so much an excess of technology as a lack of wisdom. Carl Sagan put this point especially well: “Many of the dangers we face indeed arise from science and technology—but, more fundamentally, because we have become powerful without becoming commensurately wise. The world-altering powers that technology has delivered into our hands now require a degree of consideration and foresight that has never before been asked of us.”
We are powerful, but not yet wise, and therefore we are at risk. So says the Time of Perils Hypothesis. How can the Time of Perils Hypothesis be modeled?
3. Modeling the Time of Perils Hypothesis
To math it up, let N be the length of the perilous period: the number of centuries for which humanity will experience high levels of risk. Assume we face constant risk r throughout the perilous period, with r set to a pessimistically high level. If we survive the perilous period, existential risk will drop to the level $r_l$ of post-peril risk, where $r_l$ is much lower than r.
On this model, the value of the world today is:
(Time of Perils)
$$E[W] = \sum_{i=1}^{N} (1-r)^i v \;+\; (1-r)^N \sum_{i=1}^{\infty} (1-r_l)^i v$$
That works out to a mouthful:
$$E[W] = \left[1-(1-r)^N\right]\frac{(1-r)v}{r} \;+\; (1-r)^N \frac{(1-r_l)v}{r_l}$$
but it’s really not so bad. Let $V_{PERIL} = \frac{(1-r)v}{r}$ be the value of living in a world forever stuck at the perilous level of risk, and $V_{SAFE} = \frac{(1-r_l)v}{r_l}$ be the value of living in a post-peril world. Let SAFE be the proposition that humanity will reach a post-peril world, and note that $\Pr(SAFE) = (1-r)^N$. Then the value of the world today is a probability-weighted average of the values of the safe and perilous worlds.
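To make the decomposition concrete, here is a minimal numerical sketch in Python (the parameter values and variable names are mine, chosen purely for illustration): it computes the value of the world by summing the series directly and checks the result against the probability-weighted average of the perilous and post-peril values.

```python
# Minimal sketch of the Time of Perils model; parameter values are illustrative only.
r, r_l, v, N = 0.20, 0.0001, 1.0, 10   # perilous risk, post-peril risk, value per century, peril length

# Direct computation: expected value century by century (post-peril tail truncated far out).
direct = sum((1 - r) ** i * v for i in range(1, N + 1))
direct += (1 - r) ** N * sum((1 - r_l) ** j * v for j in range(1, 200_000))

# Closed form: a probability-weighted average of the perilous and post-peril worlds.
V_PERIL = (1 - r) * v / r        # value of a world forever stuck at the perilous risk level
V_SAFE = (1 - r_l) * v / r_l     # value of a post-peril world
p_safe = (1 - r) ** N            # probability of surviving the perilous period
closed = (1 - p_safe) * V_PERIL + p_safe * V_SAFE

print(direct, closed)            # the two agree, up to truncation error
```

With these illustrative numbers both computations come to roughly 1,077v; the point is only that the weighted-average reading is exact, not an approximation.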
As the length N of the perilous period and the perilous risk level r trend upwards, the value of the world tends towards the low value $V_{PERIL}$ of the perilous world envisioned by the Simple Model. But as the perilous period N shortens and the perilous risk r decreases, the value of the world tends towards the high value $V_{SAFE}$ of a post-peril world. We’ll see the same trends when we think about the value of existential risk mitigation.
Let X be an action which reduces existential risk in this century by the fraction f, and assume that the perilous period lasts at least one century. Then we have:
$$V[X] = f v \left[1-(1-r)^N\right] \;+\; f r (1-r)^{N-1}\frac{(1-r_l)v}{r_l}$$
This equation decomposes the value of X into two parts, corresponding to the expected increase in value (if any) that will be realized during the perilous and post-peril periods. The first term, $f v [1-(1-r)^N]$, is bounded above by $v$, so it won’t matter much. The Astronomical Value Thesis needs to pump up the second term, $f r (1-r)^{N-1}(1-r_l)v/r_l$, representing the value we could get after the Time of Perils. Call this the crucial factor.
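As a quick consistency check, here is a short Python sketch (again with illustrative parameters and variable names of my own choosing): it computes the value of X by brute force as E[W|X] − E[W] and compares it with the two-term decomposition above.

```python
# Sketch: value of cutting this century's risk by fraction f under the Time of Perils model.
# Parameter values are illustrative only.
def world_value(first_risk, r, r_l, v, N):
    """Expected value when this century's risk is first_risk, the remaining
    N-1 perilous centuries have risk r, and post-peril risk is r_l."""
    p, total = 1.0, 0.0
    for i in range(N):
        p *= (1 - first_risk) if i == 0 else (1 - r)   # probability of surviving century i+1
        total += p * v
    return total + p * (1 - r_l) * v / r_l             # closed-form post-peril tail

r, r_l, v, N, f = 0.20, 0.0001, 1.0, 10, 0.10

brute_force = world_value((1 - f) * r, r, r_l, v, N) - world_value(r, r, r_l, v, N)
peril_term   = f * v * (1 - (1 - r) ** N)                        # gained during the perilous period
crucial_term = f * r * (1 - r) ** (N - 1) * (1 - r_l) * v / r_l  # gained after the perilous period

print(brute_force, peril_term + crucial_term)   # the two computations agree
```

With these numbers the crucial term dwarfs the perilous-period term (about 26.8v versus 0.09v), which is why everything turns on how large the crucial factor can be made.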
How do we make the crucial factor astronomically large? First, we need the perilous period N to be short. The crucial factor decays exponentially in N, so a long perilous period will make the crucial factor quite small. Second, we need a very low post-peril risk $r_l$. The value of a post-peril future is determined entirely by the level of post-peril risk, and we saw in Section 2 that this value cannot be high unless risk is very low.
To see the point in practice, assume a pessimistic 20% level of risk during the perilous period. Table 2 looks at the value of a 10% reduction in relative risk across various assumptions about the length of the perilous period and the level of post-peril risk.
Table 2: Value of 10% relative risk reduction against post-peril risk and perilous period length
|  | N=2 | N=5 | N=10 | N=20 | N=50 |
| --- | --- | --- | --- | --- | --- |
| $r_l$=0.01 | 1.6v | 0.9v | 0.4v | 0.1v | 0.1v |
| $r_l$=0.001 | 16.0v | 8.3v | 2.8v | 0.4v | 0.1v |
| $r_l$=0.0001 | 160.0v | 82.0v | 26.9v | 3.0v | 0.1v |
Note that:
- Short is sweet: With a short 2-century perilous period, the value of X ranges from 1.6v to 160v, and may be astronomical once we build in value growth. But with a long 50-century perilous period, there isn’t much we can do to make the value of X large.
- Post-peril risk must be very low: Even if post-peril risk drops by 20x to 1% per century, the Astronomical Value Thesis is in trouble. We need risk to drop very low, towards something like 0.01%. Note that for the Pessimist, this is a 2,000x reduction!
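For readers who want to check the numbers, here is a short Python sketch (my own reconstruction, using the decomposition above with 20% perilous risk, a 10% relative risk reduction, and values reported in units of v) that regenerates Table 2.

```python
# Regenerate Table 2 from V[X] = f*v*(1 - (1-r)**N) + f*r*(1-r)**(N-1)*(1-r_l)*v/r_l.
# Assumes perilous risk r = 20% and relative risk reduction f = 10%, as in the text.
r, f, v = 0.20, 0.10, 1.0

def value_of_x(N, r_l):
    return f * v * (1 - (1 - r) ** N) + f * r * (1 - r) ** (N - 1) * (1 - r_l) * v / r_l

for r_l in (0.01, 0.001, 0.0001):
    row = " | ".join(f"{value_of_x(N, r_l):5.1f}v" for N in (2, 5, 10, 20, 50))
    print(f"r_l={r_l:<7}", row)
```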
What does this mean for Pessimism and the Astronomical Value Thesis? It’s time to take stock.
4. Taking stock
So far, we’ve seen three things:
- Across a range of assumptions, Existential Risk Pessimism tends to hamper, not support, the Astronomical Value Thesis: The best-case scenario is the Simple Model, on which Pessimism is merely irrelevant to the Astronomical Value Thesis. On other models, the value of risk reduction decreases linearly or even quadratically in our level of pessimism.
- The most plausible way to combine Existential Risk Pessimism with the Astronomical Value Thesis is through the Time of Perils Hypothesis: The Simple Model doesn’t bear out the Astronomical Value Thesis. It’s not enough to think about absolute risk reduction, value growth, or global risk reduction. We need to temper our pessimism somehow, and the best way to do that is the Time of Perils Hypothesis: we’ll be pessimists for a few centuries, then optimists thereafter.
- A Time of Perils Hypothesis must have two features if it is going to vindicate the Astronomical Value Thesis: We need (a) a fairly short perilous period, and (b) a very low level of post-peril risk.
New question: why would you believe a Time of Perils Hypothesis with the required form?
One reason you might believe this is that you expect superintelligence to come along, and that if superintelligence doesn’t kill us all, it will radically and permanently lower the level of existential risk. Let’s set that argument aside until the series’ conclusion and ask a different question: is there any other good way to ground a strong enough Time of Perils Hypothesis? I want to walk through two popular arguments and suggest that they aren’t enough to do the trick.
I’ll do that in Part 4. Before I do, I’d like to hear what you think. How plausible is the Time of Perils Hypothesis? Is this the right way to set it up? Comments are open.