
Mistakes in the moral mathematics of existential risk (Part 4: Optimistic population dynamics)

[My model assumes that] human-originating civilization will eventually begin to settle other star systems, and that this process will (on average over the long run) proceed in all directions at a constant speed. Further, the model assumes that the expected value of a civilization in state S at time t is proportionate to its resource endowment at t, which grows (not necessarily linearly) with the spatial volume it occupies. A civilization’s resource endowment (in particular, the quantities of raw materials and usable energy at its disposal) determine how large a population it can support, which in turn determines its value in total welfarist consequentialist terms. I assume that in the long run and on a large enough scale, an interstellar civilization will convert resources into population and welfare at some roughly constant average rate.

Christian Tarsney, “The epistemic challenge to longtermism”

1. Recap

This is Part 4 in my series “Mistakes in moral mathematics” based on a paper of the same name. In this series, I discuss three mistakes that are often made in calculating the value of existential risk mitigation. I show how these mistakes have led the value of existential risk mitigation efforts to be overestimated, and how they have mislocated debates about the value of existential risk mitigation.

Part 1 introduced the series and discussed the first mistake: focusing on cumulative rather than per-unit risk. When existential risk mitigation is framed as an intervention on risk in our own century, rather than risk across many distant centuries, the value of existential risk mitigation drops dramatically. We saw how a leading paper by Nick Bostrom makes the first mistake.

Part 2 discussed a second mistake: ignoring background risks which are not affected by our intervention. We saw how a leading estimate of the cost-effectiveness of biosecurity interventions by Piers Millett and Andrew Snyder-Beattie falls once background risk is incorporated. We also saw how the second mistake compounds with the first in Millett and Snyder-Beattie’s discussion, leading to a larger overestimate.

Part 3 introduced a third mistake: neglecting population dynamics. We saw how standard population models threaten to push the expected number of future lives far below longtermist projections. And we saw that standard models are robust against a number of objections that might be raised against them.

To this, it might be objected that we should place nontrivial confidence in very optimistic scenarios, under which current demographic trends reverse and the human population expands at a breakneck pace throughout the stars. I hope that the responses to objections in Part 3 went some way towards blunting this objection. But today, I want to show something else. Even on the most optimistic demographic and technological assumptions, population dynamics matter. Incorporating population dynamics into optimistic models will still dramatically reduce the value of existential risk mitigation.

2. Malthusian techno-optimism

What makes for an optimistic model? Two things. First, it should be Malthusian. It should say that the primary constraint on the size of future populations will be the availability of resources such as land, energy, and food. This means that if we project continuous rapid growth in the stock of resources available to humanity, we should also project continuous rapid growth in the size of the future human population.

On what basis would the model project continuous, rapid resource growth? The second model component should be techno-optimism: the claim that advances in technology will enable rapid expansion in the stock of human resources and overcome all other barriers to rapid population expansion, including norms and culture.

I don’t think we should place much confidence in techno-optimist, Malthusian models. It is not so plausible that as humanity gobbles up ever-increasing amounts of resources, resource constraints will continue to be the dominant factor in fertility decisions. Nor is it terribly clear that we will continue to gobble resources at a rapid pace until the end of time. But let’s look at a very optimistic model of this form to see how the discussion might evolve.

3. The Tarsney model

Christian Tarsney is a Research Fellow at the Population Wellbeing Initiative at UT-Austin and a Senior Research Affiliate at the Global Priorities Institute. In a paper known to many effective altruists, “The epistemic challenge to longtermism”, Tarsney develops a techno-optimist, Malthusian model of the human future.

This model is techno-optimist because it assumes that within a thousand years, humanity will develop the capacity to settle the stars in all directions at a tenth of the speed of light, and will continue to do so until we become extinct. This is optimistic not only due to the speed of expansion, but also due to the extent of expansion that must be supported: expansion occurs in all directions at once. The model also assumes that there are no significant technological barriers to building a large and thriving population on each planet we encounter.

The model is Malthusian because it assumes that the primary limiting factor on human population is the availability of land, minerals and other resources. As we reach each new habitable planet, we quickly build up sizeable populations that would not otherwise have existed.

In more detail, the model assumes that after 1,000 years, humanity settles the stars in all directions at a breakneck pace of s = 0.1c, for c the speed of light. This continues until either humanity becomes extinct (with some constant annual probability r), or the universe becomes uninhabitable at some distant time t_f.

Each year, we reap value ve from Earth settlement, and value vs from each settled star. Tarsney estimates ve at 6 billion QALYs, around today’s benchmark. Tarsney estimates vs at 1/20th the value of ve, to allow that some star systems may not have habitable planets, and also that new planets may be less hospitable than Earth is.

Tarsney compares two interventions, each costing a million dollars. The neartermist intervention N provides 10,000 QALYs with certainty, a figure which Tarsney derives from GiveWell estimates. The longtermist intervention L reduces the (absolute) risk of extinction in the next millennium by p. Tarsney argues that 2×10^-14 is a reasonable value for p.

Since humanity will be settling the stars at a constant rate in all directions at once, our expansion will be spherical. Hence we will need a function n(x) returning the number of stars contained in the sphere of radius x light-years centered on Earth. Tarsney estimates n(x) in three pieces, but for now the key fact to know about n(x) is that it grows cubically in x, because the volume of a sphere grows cubically in its radius.
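For concreteness: under the simplifying assumption (mine, not Tarsney’s) of a uniform stellar density \rho, the star count is just density times spherical volume:

n(x) \approx \rho \cdot \frac{4}{3}\pi x^3

A commonly cited solar-neighborhood figure is \rho \approx 0.004 stars per cubic light-year. Tarsney’s three-piece estimate refines this, since stellar density is not uniform at larger scales, but the cubic shape is what drives the results below.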

With these figures in mind, the longtermist intervention L is valued at:

E[V(L)] = p \int_{t=0}^{t_f} \left( v_e + v_s \, n(st) \right) e^{-rt} \, dt

This gives E[L] > E[N] when r < 0.000135, so that the longtermist intervention is more valuable than the neartermist intervention for values of per-century risk beneath about 1.34%.
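For readers who want to check the arithmetic, here is a minimal Python sketch of this calculation. It follows the integral above, evaluated in closed form with t_f taken to infinity; the uniform stellar density rho and the omission of the initial 1,000-year delay are my stand-ins for Tarsney’s more careful assumptions, so the break-even it prints lands near 5×10^-4 rather than exactly at 0.000135. What the sketch preserves exactly is the cubic dependence on the expansion speed s, which will matter in the next section.

import math
from scipy.optimize import brentq

# Parameters from the post; rho is my assumption (solar-neighborhood density).
p   = 2e-14      # absolute reduction in extinction risk bought by L
v_e = 6e9        # QALYs per year from Earth settlement
v_s = v_e / 20   # QALYs per year per settled star
rho = 0.004      # assumed stars per cubic light-year
E_N = 1e4        # QALYs delivered with certainty by the neartermist intervention N

def expected_value_L(r, s=0.1):
    """E[V(L)] = p * Int_0^inf (v_e + v_s * n(s*t)) e^{-rt} dt, with
    n(x) = rho*(4/3)*pi*x^3, using Int_0^inf t^3 e^{-rt} dt = 6/r^4."""
    k = v_s * rho * (4 / 3) * math.pi * s**3
    return p * (v_e / r + 6 * k / r**4)

# Annual extinction risk r at which L and N break even.
r_star = brentq(lambda r: expected_value_L(r) - E_N, 1e-7, 1e-2)
print(f"break-even annual risk:      {r_star:.2e}")
print(f"break-even per-century risk: {1 - (1 - r_star)**100:.2%}")

# Rerunning at the slower Malthusian speed from Section 4 shows the cubic hit:
r_slow = brentq(lambda r: expected_value_L(r, s=0.005) - E_N, 1e-7, 1e-2)
print(f"break-even at s = 0.005c:    {r_slow:.2e}")

The ratio between the two break-evens, (0.1/0.005)^{3/4} ≈ 9.5, does not depend on my assumed density, and closely matches the drop from 0.000135 to 0.0000145 reported below.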

An important first observation is that this level of per-century risk is already well below where many effective altruists take risk to be today (often 10-20% or higher). If this is right, then even the most optimistic modeling may not be enough to rescue the value of existential risk mitigation.

An important second observation is that this model says nothing about population dynamics. It is one thing to assume that humanity will eventually expand to fill the land given to us, but this model assumes that we drop down a full colony of humans and instantly find enough willing volunteers to set forth another five light-years or so for the next star. Even the most optimistic Malthusian models do not predict that. What happens if we build an optimistic version of the Malthusian demographic story into the Tarsney model?

4. The Tarsney model with population dynamics

On the Malthusian story, human populations expand to make use of the resources available to them. When land, minerals and other resources are plentiful, they raise many children. Those children intensify the exploitation of existing resources, heightening resource scarcity. As colonies develop into maturity, resource scarcity forces each successive generation to think seriously about migration.

At an average distance of five light years to the nearest star, migration is no mean feat. Even under the optimistic assumption that settlers travel at a tenth of the speed of light, migration is a fifty-year trip. However, once resources become sufficiently scarce, fortune-seekers and second sons may have no choice but to set out for the next planet.

Let us assume, optimistically, that a small group of settlers can develop a planet to maturity within a thousand years. After this, settlers sail forth not only towards the nearest planet, but in fact in all directions at once. They travel, on average, five light-years, then found a new colony and begin the process over again. On this model, the average speed of interstellar expansion is s = 0.005c, not 0.1c.
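Spelled out, the average speed is distance per hop over time per hop:

s = \frac{5 \text{ light-years}}{1000 \text{ years} + 50 \text{ years}} \approx 0.0048c \approx 0.005c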

Suppose we keep all other features of the Tarsney model unchanged, except that we reduce the speed of expansion to s = 0.005c, representing the most optimistic speed that could be grounded by Malthusian demographic assumptions. This is bad news for the value of existential risk mitigation, which grows cubically in s. Now, it turns out, E[L] > E[N] only when r < 0.0000145 [Edit: Fixed typo in value of r. Thanks to titotal for catching this]. That is, the longtermist intervention is best only if per-century risk falls below 0.145%. This is a very low level of risk, below not only what most effective altruists project, but also below what many others might be comfortable projecting.
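To see where the new break-even comes from, note that the star term dominates the integral from Section 3, so that, approximately (ignoring the Earth term and again assuming cubic growth of n):

E[V(L)] \approx 6 p \, v_s \, \rho \, \frac{4}{3}\pi \, \frac{s^3}{r^4} \propto \frac{s^3}{r^4}

Holding this fixed at the value of N, the break-even risk scales as s^{3/4}:

0.000135 \times \left( \frac{0.005}{0.1} \right)^{3/4} \approx 0.0000143

in line with the 0.0000145 figure above.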

The lesson is that even on a techno-optimist, Malthusian model, incorporating population dynamics substantially reduces the value of existential risk mitigation. By way of illustration, the annual risk r = 0.000135 which previously sufficed to favor the longtermist intervention L over the neartermist intervention N now favors N by a factor of more than 500.

Population dynamics matter for everyone. Even the most optimistic models cannot, and should not, do away with population dynamics.

5. But couldn’t we do better?

An objection that is sometimes made to me by effective altruists is that in principle, humanity could perhaps expand at the breakneck pace that Tarsney predicts. If it is within our power to expand at this rate, and if it would add enormous value to expand at this rate, then why wouldn’t we do so?

Well, yes. We could devote all of our resources to peopling the galaxy. We could make this among the foremost guiding principles of humanity. And we could keep at this project until the end of time.

We could do all of this in the same sense that we could, right now, set out to fill the Earth with a hundred billion humans. Nothing is stopping us from doing that either. But from the bare fact that it is technologically feasible to dramatically expand the human population, and that on some axiologies it would be a very good thing to expand the human population, it does not follow that we will do so.

The reason why we do not expand as fast as possible is that demographics are driven by the reproductive choices that people make, and people do not generally choose to devote their lives to pumping out and raising as many children as they can. It is, of course, possible that humanity will undergo a moral transformation that will convince us all to behave in this way. But that would be a rather surprising development, whose likelihood would need to be established by substantial argument.

However, to say that human populations are unlikely to expand in this way is not to say that no possible beings could expand in this way. In the next post, I consider whether a population of digital minds might do better, and what would follow if that were true.

6. Wrapping up

In Part 3 of this series, we saw that standard population models put substantial downwards pressure on the number of expected future lives, and consequently on the value of existential risk mitigation.

Today, we asked what happens under very optimistic demographic assumptions. Specifically, we considered a techno-optimist, Malthusian model due to Christian Tarsney on which technological advances quickly enable rapid interstellar expansion and remove barriers to rapid population growth, and on which even as humanity acquires vast quantities of resources, resource constraints continue to be the dominant restriction on the size of the future human population. We saw that even on this model, incorporating the most optimistic Malthusian population dynamics we could justify leads to a substantial reduction in the value of existential risk mitigation.

Our third, and final mistake in the moral mathematics of existential risk is therefore neglecting population dynamics. On a wide range of models, population dynamics matter. Once they are incorporated, the value of existential risk mitigation is likely to fall.

Comments

6 responses to “Mistakes in the moral mathematics of existential risk (Part 4: Optimistic population dynamics)”

  1. titotal23

    Great points! It seems like many aspects of the longtermist argument rests on this “existential risk will be zero if aligned AGI arrives” assumption.

    I think I spotted a typo though:

    “Now, it turns out, E[L] > E[N] only at r = 0.000145. That is, the longtermist intervention is best only if per-century risk falls below 0.145%”

    I assume this is meant to be r = 0.0000145?

    1. David Thorstad

      Thanks titotal! So it is. Fixing now and adding an acknowledgment.

  2. Michael St. Jules

    For your adjustment to 0.005c, it would be helpful to write it out as a formula, with units. Is it supposed to be

    (1000 years*0 speed + 50 years*0.1c) / (1000 years + 50 years), which is about (a bit less than) 0.005c
    ?

    1. David Thorstad

      Thanks Michael! (Replying to this and the other comments in a single thread).

      Yes, the average speed is calculated in the usual way, as distance travelled over time. In this case, people cover 5 light years every 1050 years, which rounds to 0.005c dividing through. I appreciate the request for clarification.

      I’ve written about the time of perils hypothesis in my paper and blog series “Existential risk pessimism and the time of perils” (now published as “High risk, low reward: A challenge to the astronomical value of existential risk mitigation”). I discuss the relevance of space expansion in Part 4 of my blog series, and Part 6 of the paper. My concern there was based on a modelling insight that for space colonization to provide enough boost to the case for existential risk mitigation, we’d need it to provide (a) a very quick (maximum 10-20 centuries), and (b) a very sharp (at least 3-4 orders of magnitude) drop in risk. I suggested that the kind of independence of risk that you appeal to is likely to be a good approximation of non-anthropogenic risks over this time period, but that it is harder to see why anthropogenic risks would show this pattern. For example, effective altruists often place the bulk of their risk estimates in AI, and as you said, it’s not so clear how short-term space settlement could knock orders of magnitude off of AI risk.

      Here it is important to remember the background time of perils story: technology drives risk. On this story, risk is much higher now than in previous centuries, and that’s because we have much better technology than we did in previous centuries. Once we start telling a story involving rapid space colonization, we are also telling a story about rapid technological development. On the standard time of perils story, that would tend, ceteris paribus, to strongly increase rather than decrease risk. So any story we tell about space colonization must not only show how colonization reduces existing risks by many orders of magnitude, but also how risk reduction outpaces the risks generated by new technology.

      I’m going to leave non-existential risks, such as the capture of colonies by totalitarian regimes, out of this paper for now. I like to keep a tight focus, so this paper is a paper about existential risk. It might be important to discuss this and other non-existential risks more in the future. Thanks for bringing these up.

  3. Michael St. Jules

    I think very low levels of risk are pretty plausible when we start colonizing space. Extinction would require wiping all of us out without giving us a chance to recover, but it’s hard to imagine that with colonies so distant from one another. By spreading out, often light years apart, risks become increasingly independent across distant colonies, so local risks are less likely to reach everyone. If one colony gets wiped out, others will generally be unaffected by the same events. However, some potential common risks remain, like aliens or rogue AI (aliens’ or our own descendants’) expanding at a faster average rate than us and wiping us out everywhere.

    1. Michael St. Jules

      Also, maybe totalitarian human regimes capturing colonies and making life on the whole close to neutral or bad on average.

      I wouldn’t take people’s estimates for extinction risk this century to say almost anything about extinction risk after we’ve been colonizing space for some time.
