Existential risk pessimism and the time of perils (Part 4: Space)

Every surviving civilization is obliged to become spacefaring — not because of exploratory or romantic zeal, but for the most practical reason imaginable: staying alive.

Carl Sagan, Pale Blue Dot

1. Recap

In this series, we have looked at the relationship between two claims:

(Existential Risk Pessimism) Per-century existential risk is very high.

(Astronomical Value Thesis) Efforts to mitigate existential risk have astronomically high expected value.

Part 1 introduced a Simple Model of existential risk mitigation and used that model to show that Existential Risk Pessimism, far from supporting the Astronomical Value Thesis, threatens to tell against it.

Part 2 considered some failed strategies for reconciling Pessimism with the Astronomical Value Thesis.

Part 3 introduced a strategy that might work: the Time of Perils Hypothesis, on which risk is high now but will soon drop to a much lower level. We saw that the Time of Perils Hypothesis would be enough to reconcile Pessimism with the Astronomical Value Thesis provided we assume (a) a short time of perils, and (b) a very sharp drop in existential risk after the time of perils.
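
To fix ideas, here is a toy calculation in the spirit of Part 1's Simple Model: expected future centuries of survival under a constant risk level, versus a short perils period followed by a sharp drop. All numbers here are hypothetical illustrations, not estimates defended in this series.

```python
def expected_centuries(r, r_post, n_perils):
    """Expected number of future centuries humanity survives, given
    per-century risk r for the first n_perils centuries (the 'time of
    perils') and per-century risk r_post thereafter."""
    survive_perils = (1 - r) ** n_perils
    during = sum((1 - r) ** n for n in range(1, n_perils + 1))
    after = survive_perils * (1 - r_post) / r_post  # geometric tail
    return during + after

# Constant Pessimist-level risk (1/6 per century, forever): a short future.
print(expected_centuries(1/6, 1/6, 0))
# Ten perilous centuries at 1/6, then a sharp drop to 1/10,000:
# the expected future is hundreds of times longer.
print(expected_centuries(1/6, 1e-4, 10))
```

The sharp drop does nearly all the work: without assumption (b), surviving the perils buys only a handful of extra expected centuries.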

But why should we believe the Time of Perils Hypothesis? Here is one reason you might believe it.

2. Settling the stars

You might think that the Time of Perils will end as humanity expands across the stars. So long as humanity remains tied to a single planet, we can be wiped out by a single catastrophe. But if humanity settles many different planets, it may take an unlikely series of independent calamities to wipe us all out.

This thought has been defended at length by the astronomer Milan Ćirković, who argues that space colonization is the only viable prospect for long-term human survival. It was also voiced by Carl Sagan, who concluded immediately after introducing the concept of the time of perils that “every surviving civilization is obliged to become spacefaring — not because of exploratory or romantic zeal, but for the most practical reason imaginable: staying alive”. And the same thought has been cited by Elon Musk as one of his primary reasons for pursuing Mars colonization. Could the prospects for space settlement ground a time of perils hypothesis of the needed form?

The problem with space settlement is that it’s not fast enough to help the Pessimist. To see the point, distinguish two types of existential risks:

  1. Anthropogenic risks: Risks posed by human activity, such as greenhouse gas emissions and bioterrorism.
  2. Natural risks: Risks posed by the environment, such as asteroid impacts and naturally occurring diseases.

It’s quite right that space settlement would quickly drive down natural risk. It’s highly unlikely for three planets in the same solar system to be struck by planet-busting asteroids in the same century. But the problem is that Pessimists weren’t terribly concerned about natural risk. For example, Toby Ord estimates natural risk in the next century at 1/10,000, but overall risk at 1/6. So Pessimists shouldn’t think that a drop in natural risk will do much to bring the Time of Perils to a close.
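
The arithmetic here is stark. Treating the two kinds of risk as roughly additive (a simplification, but one that does not affect the qualitative point), eliminating natural risk entirely would barely dent Ord's total:

```python
# Ord's per-century estimates: total existential risk 1/6, natural risk 1/10,000.
total_risk, natural_risk = 1/6, 1/10_000

remaining = total_risk - natural_risk   # risk left if settlement zeroed natural risk
print(remaining)                        # still roughly 0.1666 per century
print(remaining / total_risk)           # fraction of the Pessimist's risk that survives
```

On these numbers, over 99.9% of the Pessimist's per-century risk would remain untouched.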

Could settling the stars drive down anthropogenic risks? Recall that the Pessimist needs a (a) quick and (b) very sharp drop in existential risk to bring the Time of Perils to an end. The problem is that although space settlement will help with some anthropogenic risks, it’s unlikely to drive a quick and very sharp drop in the risks that count.

To see the point, consider Toby Ord’s estimates of the risk posed by various anthropogenic threats (Figure 1). Short-term space settlement would doubtless bring relief for the threats in the green box. After we’re done destroying this planet, we can always find another. And it is much easier to nuke Russia than it is to nuke Mars (but please don’t!). However, the Pessimist thinks that the threats in the green box are inconsequential compared to those in the red box. So we can’t bring the Time of Perils to an end by focusing on the green box.

Figure 1: Ord’s estimates of anthropogenic risk

What about the threats in the red box, such as AI risk and engineered pandemics? Perhaps long-term space settlement will help with these threats. It is not so easy to send a sleeper virus to Alpha Centauri. But we saw in Part 3 that the Pessimist needs a fast end to the Time of Perils, within the next 10-20 centuries at most. Could near-term space settlement reduce the threats in the red box?

Perhaps it would help a bit. But it’s hard to see how we could chop 3−4 orders of magnitude off these threats just by settling Mars. Are we to imagine that a superintelligent machine could come to control all life on earth, but find itself stymied by a few stubborn Martian colonists? That a dastardly group of scientists designs and unleashes a pandemic which kills every human living on earth, but cannot manage to transport the pathogen to other planets within our solar system? Perhaps there is some plausibility to these scenarios. But if you put 99.9% probability or better on such scenarios, then boy do I have a bridge to sell you.
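
For concreteness on the "3-4 orders of magnitude" figure: moving from Ord's 1/6 per-century risk down to post-perils levels in the neighborhood of 1/10,000 or 1/100,000 (illustrative targets for the sharp drop discussed in Part 3) requires:

```python
import math

current_risk = 1/6
for target in (1e-4, 1e-5):                   # hypothetical post-perils risk levels
    drop = math.log10(current_risk / target)  # size of the drop, in orders of magnitude
    print(f"{target:g}: {drop:.1f} orders of magnitude")
```

So the scenarios above would need to be close to certain, not merely plausible, to deliver a drop of that size.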

So far, we have seen that banking on space settlement won’t be enough to ground a Time of Perils Hypothesis of the form the Pessimist needs. How else might she argue for the Time of Perils Hypothesis? In the next two parts of this series, I’ll consider other arguments for the Time of Perils Hypothesis.

In the meantime, let me know what you think of the Time of Perils Hypothesis. I certainly hope that humanity will manage to settle the stars. What do you think that this prospect implies for the Time of Perils Hypothesis?





7 responses to “Existential risk pessimism and the time of perils (Part 4: Space)”

  1. Alejandro Acelas

    It seems a little weird to me that your referent for space colonization during the next 10-20 centuries is settling other planets on our own solar system. Twenty centuries is a lot of time for a civilization with technologies like ours. That could be enough to develop artificial minds and send them to colonize the galaxy, which could take only a few years once we have the technology necessary for that (as described in Armstrong and Sandberg’s paper, Eternity in Six Hours)

    1. David Thorstad

      Thanks Alejandro! Colonizing the galaxy in a few years would certainly be an impressive feat. Could you say a bit more about why you think we could manage this? (I take it this would draw on Armstrong and Sandberg).

      1. Alejandro Acelas

        I answered your last comment in the thread below. I’m just commenting here so that you receive a notification.

    2. Alejandro Acelas

      Sure, just let me make a small clarification first. When I mentioned colonizing the galaxy, I meant setting up probes that would eventually carry out that task and launching them to spread throughout the galaxy (the probes would take several thousand years to arrive at their destinations). At that point it seems at least somewhat harder to limit the spread of intelligent life.

      As for why that sounds feasible in the span of one or two thousand years, I think my main intuition comes from extrapolating from the technological progress we’ve had since the Industrial Revolution. That said, if I wanted to paint a concrete story, it would be similar to what Armstrong and Sandberg present.

      From memory, I think they argue that colonizing the galaxy involves only a few technologies that are either well within the bounds given by physics (space travel at fractions of the speed of light) or have current day analogs (self-replicating probes, intelligent minds). As for why I think those technologies could come relatively soon, I think there’s a reasonable chance that something like Moore’s law (exponential growth in compute) holds for the following decades or centuries, granting us mind-numbing amounts of compute which we could use to speed up technological progress. This last part doesn’t rely on a particular stance around AGI, it’s just the intuition that there are lots of problems that we could solve by brute force or computationally intensive methods.

    3. Alejandro Acelas

      See here for an illustration of what we could do if we had lots of compute:

      1. David Thorstad

        Thanks! Very interesting. I’ll take a look at the Armstrong/Sandberg paper and the post you linked.

        (Just FYI, I’ve told the blog not to allow comment replies more than 3 levels deep since that currently breaks my layout. Very happy to talk more — just reply to one of my higher-level comments to ping me. If you know anyone who knows how to fix this, let me know!).

        On Moore’s law, I wonder what you would say to people who think that Moore’s law is in trouble in this decade? I take it that the majority view among many professionals (semiconductor manufacturers, computer scientists, etc.) is that Moore’s law is going to end in this decade, if it hasn’t already.

        I guess a lot of people tend to think that sustained exponential growth is a pretty strong growth assumption. So to project something like a doubling of capacities every 2 years for the next 50-100 years is … certainly physically possible (we managed it before, for 50 years), but not necessarily the most likely thing without a good argument to expect it to happen.

        So, for example, in the scholarly literature Mack 2011 (https://ieeexplore.ieee.org/document/5696765), Theis and Wong 2017 (https://ieeexplore.ieee.org/document/7878935), Waldrop 2016 (https://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338) and Shalf 2020 (https://royalsocietypublishing.org/doi/10.1098/rsta.2019.0061) are good places to start for skepticism about Moore’s law continuing for another 50-100 years.

        Economists also have some interesting perspectives. For example, Bloom et al. 2020 (https://web.stanford.edu/~chadj/IdeaPF.pdf) estimate that even during the heyday of Moore’s law, semiconductor manufacturing was seeing the kind of drop in research productivity that you would expect of other industries. They think the reason that the pace of semiconductor manufacturing didn’t slow down is that we just threw more and more money and people at the problem to make up the difference. But if that’s right, it looks like an unsustainable solution: building a new semiconductor plant already costs more than a billion dollars. We can’t just keep spending.

        What do you think?

      2. Alejandro Acelas

        Oh, thanks for the resources on Moore’s law! Something like Moore’s law was behind a lot of my intuitions for progress in the following decades (particularly on AI timelines and capabilities), so being more pessimistic about it would certainly make me expect a less grandiose future (at least for the next century or so).
        To be honest, the idea of continued exponential growth in compute always seemed somewhat wild to me (physical limits should bind eventually, right?). I think my intuitions around Moore’s law were mostly based on hearing people in EA circles take that trend for granted (either explicitly or implicitly), from which I guessed that they probably had good reasons to do so.
