Exaggerating the risks (Part 1: Introduction)

Given everything I know, I put the existential risk this century at around one in six: Russian roulette.

Toby Ord, The Precipice

1. New series: Exaggerating the risks

Many effective altruists think that humanity faces high levels of existential risk, risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development”.

My series “Existential risk pessimism and the time of perils” looks at what would follow if that were true.

But is it true? Just why are we so convinced that humanity is on the brink of doom?

In this series, Exaggerating the risks, I look at some places where claimed levels of existential risk may be exaggerated.

2. From catastrophic risk to existential risk

Effective altruists care deeply about catastrophic risks, risks “with the potential to wreak death and destruction on a global scale”. So do I. Catastrophes are not hard to find. The world is emerging from a global pandemic. There are ongoing genocides throughout the world. And nuclear saber-rattling is on its way to becoming a new international sport. Identifying and stopping potential catastrophes is an effort worth our while.

But effective altruists are also deeply concerned about existential risks, risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development”. Most catastrophic risks are not existential risks. A hundred million deaths would not pose an existential risk. Nor, in many scenarios, would a billion deaths. Existential risks are literal risks of human extinction, or of the permanent destruction of our ability to develop as a species.

Many authors give alarmingly high estimates of the level of current existential risk. Toby Ord puts the risk of existential catastrophe by 2100 at “one in six: Russian roulette”. The Astronomer Royal Martin Rees gives a 50% chance of civilizational collapse by 2100. And participants at the Oxford Global Catastrophic Risk Conference in 2008 estimated a median 19% chance of human extinction by 2100.

These are striking numbers, and you might expect them to be backed by solid research. But the deeper we dig into existing discussions, the harder it becomes to support these numbers.

I will argue in a different series that many of the most familiar threats, such as asteroid impacts or nuclear war, are unlikely to lead to existential catastrophe. Effective altruists know this. As a result, they rest their case on a number of less familiar risks. However, I argue, these risks are often exaggerated. That is the project of this series.

In Part 2 of this series, I’ll look at a discussion of climate risk by Toby Ord.

In the meantime, let me know what you think about current levels of existential risk. What are you worried about? Are there any risks that you would like me to discuss?

Comments

7 responses to “Exaggerating the risks (Part 1: Introduction)”

  1. Aaron Maiwald

    Hey David,

    Thanks for the post, I really like this series so far! Always glad to hear that x-risk is lower than I thought!

    I recently heard Hilary Greaves’ talk about the concept of “existential catastrophes” at GPI’s workshop. She proposed several definitions, and my favourite one I would paraphrase as “at time t, an existential catastrophe is an event that sufficiently reduces the expected value of the future (relative to a maximally informed observer at t).” If we go with this definition, I think we need to specify some threshold c beyond which the expected value is *sufficiently* reduced to count as an existential catastrophe. Here’s my main question: doesn’t the size of the x-risk from AI, climate change, etc. then heavily depend on our choice of this threshold value?
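
    To make that threshold worry concrete, here is one way to formalize her proposal (the notation and the formalization are my own rough gloss, not hers): write $V$ for the value of the long-run future and $I_t$ for the information available to a maximally informed observer at time $t$. Then an event $E$ occurring at $t$ counts as an existential catastrophe just in case

    $$\mathbb{E}[V \mid E, I_t] \le (1 - c)\,\mathbb{E}[V \mid I_t]$$

    for some threshold $c \in (0, 1]$. On that reading, whether a given risk clears the bar seems to depend directly on where $c$ is set.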

    Also, I wonder how decision-guiding x-risk estimates are: for philanthropic decisions, isn’t it (almost) always only the amount by which some action *changes* x-risk that is relevant? I’m not sure whether (say) the level of x-risk from AI is closely enough related to the amount by which I can reduce x-risk from AI for the former number to matter for me.

    I’m looking forward to your thoughts:)

    1. David Thorstad

      Hi Aaron,

      It’s good to hear from you!

      We were glad to have you as our guest for the GPI workshop this month. Apologies for the technical issues with the online recording. I hope it was still a good experience for you.

      If I remember correctly, you do Cog Sci, right? How are you finding it? Have you been able to get hold of some of the recent psychological work in global priorities research?

      On to your question: you’re definitely right that threshold concepts are sensitive to the choice of the threshold. So, for example, if we take an expected-value-loss-threshold conception of x-risk, then whether or not something (say, climate change) counts as an x-risk depends on where we set the threshold c. One way to read my discussion of climate change would be as suggesting that the threshold c would have to be very low for climate change to count as an existential catastrophe.

      I’m a big fan of your suggestion to look at estimates of *changes* in x-risk when assessing actions. I try to do this in my paper “Existential risk pessimism and the time of perils”. It turns out that the values of *changes* in x-risk are highly sensitive to your views about what the background risks are.
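
      To give a rough sense of what I mean, here is a toy calculation of my own (a deliberately simplified sketch, not the exact model from the paper): suppose every century carries a constant extinction risk $r$, and each century humanity survives is worth $v$. Then the expected value of the future is

      $$\mathbb{E}[V] = \sum_{n \ge 1} v\,(1-r)^n = \frac{v(1-r)}{r},$$

      and an intervention that lowers only this century’s risk from $r$ to $r - \delta$ raises expected value by $v\delta/r$. The same absolute reduction $\delta$ is therefore worth twenty times more when background risk is 1% per century than when it is 20%, which is the kind of sensitivity at issue.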

      All best,
      David

      1. Aaron Maiwald

        Hi David,

        It was definitely a good experience, thanks a lot for having me!
        Yes, right, I do cognitive science; how nice that you remember that! Cog Sci is great for intellectual exploration: I get to choose courses from nine different subdisciplines! The cost is that I sometimes feel like a jack of all trades but master of none.
        Which recent work in psychology are you referring to? I read the empirical paper on population ethical intuitions and enjoyed it a lot, but otherwise I’m not sure.

        To your first point: I wonder how much of the disagreement about the size of different x-risks then turns out to be largely disagreement about this threshold value. For instance, I might say that x-risk from climate change this century is 1%, and you might say it’s much lower, but only because we use drastically different thresholds. If most people used this threshold concept, it would predict a lot of the variance in x-risk estimates that we observe among experts, since there doesn’t seem to be an obviously best or agreed-upon threshold value. And even if people used a lot of different definitions (which seems most likely to me), the situation is no better: I’d still be at least as worried that our disagreements are to a significant extent based on definitional unclarity. How worried are you about this?

        To the second point: that sounds very interesting, I’ll go and read that paper!

    2. David Thorstad

      Hi Aaron,

      Good to hear from you again! I’ll definitely try to keep in mind thresholds for existential risk in future posts.

      As for reading recs, take a look at the brand new EA psychology lab (https://www.eapsychology.org/). In particular, Lucius Caviola has a number of relevant papers already. They also have a book on the psychology of effective altruism coming out soon.

      1. Aaron Maiwald

        Hi David,

        Sorry for the late response (I had forgotten to click the notification button). Thanks a lot for the recommendations; I’m looking forward to reading that book!

  2. Wyman Kwok

    Hello David,

    Thanks for writing this series. I have two comments on this piece.

    1. Just want to let you know that your two quotations above of “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable for desirable future development” seem to have “for desirable” duplicated; it should only appear once.

    2. In response to your last question, “Are there any risks that you would like me to discuss?”, I would like to draw your attention to the risks associated with quantum technologies (QT) like quantum computing, communications, and sensing, or other amazing quantum applications yet to be thought of. I understand that QT are still in their early stages of development. Yet I believe QT will be extremely powerful and may therefore carry correspondingly great risks. I just wonder if you would be interested in exploring their possible risks.

    Thank you.

    1. David Thorstad

      Re 1: Thanks, fixed!
      Re 2: Absolutely interested. I would really like to see effective altruists focus on a broader range of technologies (including quantum technologies) rather than narrowly on risks like artificial intelligence and biological catastrophe. Let me see if I can wrangle a quantum computing expert to help.

