Exaggerating the risks (Part 9: Biorisk – Grounds for doubt)

In 2017, I had dinner with Nicola Sturgeon, the first minister of Scotland, and was given the opportunity to pitch her on one policy. I chose pandemic preparedness, focusing on worst-case pandemics. Everyone laughed, and the host of the dinner, Sir Tom Hunter, joked that I was “freaking everyone oot”.

Will MacAskill, What We Owe the Future

1. Recap

This is Part 9 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.

This series is getting a bit long, so I’ve reorganized it into subseries, one for each type of risk. Part 1 introduced the series. Parts 2-5 (sub-series: “Climate risk”) looked at climate risk, focusing on Toby Ord’s treatment of climate risk in The Precipice (Part 2) and the Halstead report on climate change and longtermism (Parts 3-5).

Parts 6-8 (sub-series: “AI risk”) looked at the Carlsmith report on power-seeking AI.

Many effective altruists rank biological risks, such as engineered pandemics, as second only to AI risk in severity. For example, in The Precipice, Toby Ord puts the chance of irreversible existential catastrophe from engineered pandemics by 2100 at 3.3% (1 in 30). I think these numbers are a bit too high.

This post begins a sub-series, “Biorisk,” arguing that levels of existential biorisk may be lower than many effective altruists suppose.

I have to admit that I had a hard time writing this series. The reason for that is that I’ve had a hard time getting people to tell me exactly what the risk is supposed to be. I’ve approached high-ranking effective altruists, including those working in biosecurity, and been told that they cannot give me details, because those details pose an information hazard. (Apparently the hazard is so great that they refused my suggestion of bringing their concerns to a leading government health agency, or even to the CIA, for independent evaluation). One of them told me point-blank that they understood why I could not believe them if they were not willing to argue in detail for their views, and that they could make do with that result.

I’ve asked several times on EA-related Slack channels and social media for recommendations of a single paper or book making a detailed case that existential biorisk is high, but come up largely empty. Because of this, I have something of a feeling of taking a stab in the dark, without knowing precisely what I am meant to be arguing against. It is a bit of an awkward situation, but I will try to make do.

The current plan for this series is as follows. I’ll write three posts giving preliminary reasons to suspect that existential biorisk might not be as high as many effective altruists think it is. Then I’ll review two prominent assessments of existential biorisk. We’ll see how well I do.

2. Existential biorisk

We began this series with a distinction between two types of risks.

Effective altruists care deeply about catastrophic risks, risks “with the potential to wreak death and destruction on a global scale”. So do I. Catastrophes are not hard to find. The world is emerging from a global pandemic. There are ongoing genocides throughout the world. And nuclear saber-rattling is on its way to becoming a new international sport. Identifying and stopping potential catastrophes is an effort worth our while.

But effective altruists are also deeply concerned about existential risks, risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development”. Most catastrophic risks are not existential risks. A hundred million deaths would not pose an existential risk. Nor, in many scenarios, would a billion deaths. Existential risks are literal risks of human extinction, or the permanent destruction of our ability to develop as a species. 

There should be little doubt that biological hazards (“biorisks”) pose a significant catastrophic risk to humanity in this century. The world just experienced a global catastrophe in the form of the COVID-19 pandemic, and we are ill-prepared to prevent or respond to another. Catastrophic biorisks are important and neglected, and effective altruists are right to worry about them.

However, many effective altruists hold that there is a significant chance of existential catastrophe from biological causes.

In The Precipice, Toby Ord estimates the chance of irreversible existential catastrophe by 2100 due to engineered pandemics alone at 1 in 30, second only to risks posed by artificial intelligence.

Participants at the 2008 Oxford Global Catastrophic Risks Conference estimated a median 2% chance of extinction by 2100 from engineered pandemics.

In What We Owe the Future, Will MacAskill writes that “Typical estimates from experts I know put the probability of an extinction-level engineered pandemic this century at around 1%”.

These are very high numbers, and they need to be supported by a good deal of solid evidence. Later in this sub-series, I will review leading arguments for the view that biorisks pose a significant existential threat in this century. I will show that these arguments fall considerably short of grounding anything like the above estimates. Crucially, we will also see that expert consensus is far more skeptical than MacAskill suggests: MacAskill’s claim could only be justified on a view which treats effective altruists as the lone experts on existential biorisk, and dismisses leading scientists, policymakers, and biosecurity professionals as non-experts.

In the meantime, I want to give some initial reasons for skepticism about existential biorisk. That is not to say that the bulk of the case against existential biorisk rests on these reasons for skepticism – it rests, instead, on the inability of effective altruists to provide plausible arguments in support of their risk estimates. But it does seem appropriate to begin by saying why many are skeptical of existential biorisk claims.

3. The difficulty of the problem

The main reason why scientists and policymakers are skeptical of existential biorisk is that it is terribly hard to engineer a pandemic that kills everyone.

First, you would need to reach everyone. The virus would have to be transmitted to the most rural and isolated corners of the earth; to Antarctic research stations; to ships at sea, including nuclear submarines on uncharted, long-term and isolated paths; to doomsday preppers in their bunkers; to hermits and uncontacted tribes; to astronauts in space; to each new child born every second; to island nations; and so on. And you would need a transmission mechanism that could spread the virus this far without being detected: otherwise, those with the means would be whisked away to safety and might well survive.

Second, you would need a virus that was virtually undetectable until all, or nearly all, humans had been infected. That conflicts starkly with the goal of producing a virus that is 100% lethal, since lethal viruses tend to leave a trace as they spread through a population.

Third, you would need a virus that is unprecedentedly infectious. The virus would need to be capable of being transmitted, without fail, to every human being on the planet, in sufficient quantities to actually make them sick. It would need to avoid respirators and other forms of protective equipment. And it would have to maintain its transmissibility throughout many generations of mutation.

Fourth, you would need a virus that is unprecedentedly lethal, killing not 90%, 99%, or 99.999% but effectively all of those infected by it, no matter their age, health, or genetic makeup. This lethality would need to be preserved even against the best medical treatments, including quite possibly vaccines or synthesized antibodies. And it would have to be maintained throughout many generations of mutation despite selective pressures towards less lethal variants.

Fifth, you would need to find a way to evade basic public health measures such as masking and social distancing. This isn’t as easy as it sounds. How do you transmit a virus to someone who doesn’t leave their house?

Sixth, you would need the technological capability and equipment to synthesize the hypothesized biological agent, something it is widely agreed that humanity currently lacks.

Finally, you would need to find someone crazy enough to manufacture and deploy this biological agent, yet also competent enough to pull it off, which we have seen is no easy feat.

I am not sure if I would go so far as to say that the above is physically impossible. But without a very good argument, we should regard it as highly implausible that all of the above will come to pass by the end of the century.

4. Human history

We have experienced a great number of catastrophic pandemics throughout human history. These pandemics have all fallen far short of existential catastrophe: indeed, they have often failed to cause even local forms of societal collapse, and only extremely rarely if ever caused a region to become uninhabited for any significant amount of time.

One of the best known pandemics is the “Black Death,” which swept Europe from 1346 to 1352. Though the pandemic led to massive loss of life, there was little evidence of societal collapse or even breakdown of large-scale social institutions. The historian John Haldon and colleagues write:

Claims of mortality rates of as much as 50% give the impression that this event must have been devastating for the societies affected. Yet when we examine how different states and societies responded, we find that—without minimizing the terrible impact on people and communities—the medieval world did not grind to a halt, still less did a series of revolutionary transformations occur. Indeed, the Black Death struck at the beginning of the Hundred Years’ War, and in spite of its demographic impact both the kingdoms of England and France continued to field effective armies, even if there was a brief pause in hostilities (similar to contemporary calls for ceasefires in ongoing international conflicts in the context of COVID-19). Instead, some societal developments that were already under way accelerated while various groups within society responded by exploiting their situation and attempting to slow down, stop or otherwise control changes which they perceived as disadvantageous.

Even more devastating pandemics, combined with other hazards such as brutal colonial expansion, have frequently failed to produce even regional societal collapse. For example, the political scientist Alberto Diaz-Cayeros and colleagues study the combined effect of pandemic, violence and other colonialist disasters on Mexico from the mid-1500s onwards. Despite dramatic loss of life (average settlement population fell 95% by 1646), most settlements survived, and 13% even grew by the end of the colonial period. They write:

We develop a new disaggregated dataset on pre-Conquest economic, epidemiological and political conditions both in 11,888 potential settlement locations in the historic core of Mexico and in 1,093 actual Conquest-era city-settlements. Of these 1,093 settlements, we show that 36% had disappeared entirely by 1790. Yet, despite being subject to Conquest-era violence, subsequent coercion and multiple pandemics that led average populations in those settlements to fall from 2,377 to 128 by 1646, 13% would still end the colonial era larger than they started.

Human history is replete with such examples of resilience not only to pandemics, but also to the combination of pandemics with numerous other catastrophes. If our ancestors could survive and flourish with little in the way of modern medical knowledge or technology, then we should become relatively more confident in our own ability to survive future pandemics.

5. Mammalian history

It is not just human history that provides tales of resilience. Throughout history, there is only one recorded instance in which disease has led to the extinction of a mammalian species. Don’t take it from me – here are Piers Millett and Andrew Snyder-Beattie:

The number of mammalian species that have gone extinct due to disease (as opposed to other factors) is very limited, with just 1 confirmed example.

The confirmed example is the extinction of a species of rat in a very small and remote location (Christmas Island).

We might take the relatively greater incidence of disease-driven extinction among non-mammalian species as evidence of the threat that disease poses to those species. But if the closest thing to a mammalian extinction that we can pin on pandemic disease is the loss of an isolated species of rat, then we should be hesitant to place too much stock in the idea that humanity is soon to be driven extinct by disease.

6. Everything, everywhere all at once

Effective altruists often draw attention to experiments in which a single feature of a virus was manipulated: for example, the virus was made more virulent or more transmissible. But that is not what omnicidal actors need. They need a virus which is improved on a great number of dimensions at once: it must be quite virulent; highly transmissible; have the desired incubation time; resist likely treatments and antibodies; survive storage and transportation across the globe; and so on.

The problem is that we do not know how to induce all of these many mutations at once, and indeed there is a good chance that work done to produce one mutation may cut against other desired mutations. A bioweapons expert interviewed by Filippa Lentzos explains:

I do not think that it’s very easy to create weapons of mass destruction with biological weapons.

Ray Zilinskas’ book on the Soviet programmes, for example, points out how very, very hard it is to engineer something which is usable in that context. He lists 12 criteria that you want in your pathogen for it to be a useful biological warfare agent, and they include things like survivability in the air, survivability through whatever mechanical forces are brought to bear during dispersion, resistance to common antibiotics or anti-viral agents so that the enemy can’t simply treat the disease, high transmissibility etc. In evolutionary terms, normally as transmissibility increases, pathogenicity decreases; so you’re having to fight nature in trying to create a perfect biological weapon.

Some of the key lessons that came out of the Soviet programme go to this issue of pleiotropy, where a single gene affects more than one characteristic. And if you change that gene to improve one of the characteristics you’re going for in that list of 12, the chances are you’re going to diminish one or more of the other characteristics that are desirable. So getting something that works is extraordinarily difficult.

(Let me add for legal reasons that the existence of a Soviet bioweapons program is contested. So is the fact that someone cracked Kasparov over the head with a chessboard when he spoke out about certain issues. Make of that comparison what you will.)

How bad is the problem? Bad enough that an extremely well-funded, decades-long Soviet bioweapons program was likely unable to come close to overcoming it. The biosecurity expert continues:

In the 20 years of the Soviet programme, with all the caveats that we don’t fully know what the programme was, but from the best reading of what we know from the civil side of that programme, they really didn’t get that far in creating agents that actually meet all of those criteria. They got somewhere, but they didn’t get to the stage where they had a weapon that changed their overall battlefield capabilities; that would change the outcome of a war, or even a battle, over the existing weapon systems available to them.

Of course, we can insist that designer viruses with all of the characteristics an omnicidal agent would like them to have are just around the corner. But we should not pretend that manufacturing such viruses is an easy task, or that historical programs and experiments provide any significant evidence in favor of the possibility of manufacturing such viruses.

7. Conclusion

This post, and its immediate successors, will give preliminary reasons to doubt published estimates of near-term existential biorisk. So far, we have seen that engineering biological events that lead to existential catastrophe is a very difficult problem; that biological events have rarely led to mammalian extinction; and that producing all of the needed mutations at once is so difficult that a well-funded (alleged) Soviet bioweapons program didn’t come close.

Are those the only reasons why biosecurity experts are often skeptical of existential risk claims? I think they are a good start, but there are more reasons to come. We’ll talk in the next two posts about some other reasons to doubt some effective altruists’ estimates of existential biorisk.


7 responses to “Exaggerating the risks (Part 9: Biorisk – Grounds for doubt)”

  1. River

    You dramatically overestimate what is required of a virus to cause extinction. Remember that the body count from covid isn’t limited to people who were infected by covid. It also includes people who died of other treatable conditions, who either couldn’t or didn’t get treatment in time because of covid. The same principle applies here.

    For example, astronauts in space and researchers in Antarctica are utterly useless for preventing extinction. Humans cannot survive in either environment for more than a few months without being resupplied by civilization.

    What few primitive tribes remain may not be much more useful. I’m going from memory here, but I believe it takes a breeding population of about 1200 humans to keep the species going long term. Are any of the remaining primitive tribes that big? Even if they are, if they are all that is left, what are the chances they ever rebuild technologically advanced civilization?

    Finally, society is much more brittle than it was during the Black Death, or when European colonizers showed up in the Americas. Drop a European peasant from 1200, or a Native American from 1490, in the middle of the wilderness, with no other humans around, and they probably know enough about how to build shelter and gather food to still be alive a year later. Do the same to me, and I’ll probably be dead in a few months. I don’t have those skills. Most people today don’t. So if a virus can just disrupt the production or distribution of food, it will kill a lot of people it never infects. It conceivably might kill more people by starvation than infection. I don’t have a particularly rigorous model of where the line between recovery and extinction lies in these kinds of extreme scenarios, but it seems clear that extinction requires far less of a virus than you make out.

    Unrelatedly, when looking at other species going extinct due to disease, you put a lot of emphasis on the distinction between mammals and other animals. Is there any biological significance to this distinction? Is there a biological difference between mammals and other animals that makes it more difficult for us to transmit viruses, or less likely that we will die of them? If not, I’m not sure why we should place any weight on that particular line.

    1. titotal23

      Even if society as a whole is more brittle (and I’m not sure it is, we have significant advantages in terms of scientific knowledge over medieval peoples), that doesn’t matter overmuch for extinction risk. As long as 0.0001% of the population survives, humanity doesn’t go extinct. There is no real risk of starving: they can simply eat the food stores that the rest of dead humanity left behind, and the survivors are disproportionately likely to have been living off the land anyway. Even the Antarctic people have a good shot: they will find out about the disease, and will have months to form a plan to relocate to an uninhabited area where food can be grown.

      As for why they looked at mammals instead of other animal species, I think the obvious reason is that humans are mammals? Mammals include all the species that are most closely related to us in biology. I’m not sure if they actually looked at other species; if you could find any examples of non-mammal extinctions by disease I would be interested.

  2. Jason

    I generally agree with this, but would suggest that some scenarios involving very catastrophic but not extinction-level results could still plausibly have a significant effect on the rest of human history (e.g., a loss of genetic diversity, or a cultural crisis that raised the risk of a nuclear or similar catastrophe at a time when the population was particularly vulnerable to a second catastrophic event). So sub-extinction risks should probably count for something in the longtermist analysis.

    1. David Thorstad

      Thanks Jason!

      I’m generally very happy to talk about catastrophic (rather than existential) risks. To be honest, I am quite worried about future pandemics, even natural pandemics. We just had one and it wasn’t pleasant. We don’t seem to be doing much to prevent another.

      Could you say a bit more about the types of catastrophic events you are interested in here? It sounds like you might have something very catastrophic in mind.

      1. Jason

        The threat model I had in mind involved some sort of bioengineered pandemic. I don’t have any reason to believe it is nearly as large a threat as Ord, MacAskill, etc. suggest. I think section 3 of your post does a good job explaining why the odds of a single extinction-level pandemic are awfully low. However, I think that is a less likely scenario than a convergence of multiple events, such as:

        * A catastrophic pandemic that turns the world upside down and leaves it vulnerable to a second catastrophe; the combined effect of the two catastrophes could be extinction-level or at least have some sort of significant effect on the long-term future. For example, the folks with their fingers on the trigger of nuclear arsenals are not always paragons of stability in the best of times; one would imagine this would be much worse after an apparently bioengineered pandemic that wiped out half the world’s population.

        * Although getting away with launching a deliberately bioengineered pandemic would be extremely hard, a group that managed to do that could potentially launch another, different agent in a few years. So if they achieved a fatality rate of 80% per round, that could be 96% after two rounds, over 99% after three. Although that’s still not extinction level, it’s plausible that an over 99% global population loss would have a significant negative effect on the long-term future.
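        That compounding arithmetic can be sketched in a few lines (assuming, purely for illustration, that each round kills a fixed share of whoever survived the previous rounds):

        ```python
        def cumulative_fatality(per_round_rate: float, rounds: int) -> float:
            """Fraction of the original population dead after `rounds` pandemics,
            each killing `per_round_rate` of the then-surviving population."""
            return 1 - (1 - per_round_rate) ** rounds

        # 80% lethality per round compounds quickly:
        for n in (1, 2, 3):
            print(n, round(cumulative_fatality(0.8, n), 4))
        # 1 0.8
        # 2 0.96
        # 3 0.992
        ```

        So three successful 80% rounds would leave under 1% of the population, still short of extinction but far past any historical pandemic.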

        To be really clear, the reasons you describe in section 3 are also very good reasons to assign very low probabilities that a group could pull off a 50% or 80% lethal pandemic! However, I’d speculate that pulling off an 80% pandemic would be several orders of magnitude more plausible than pulling off a 100% one, and pulling off a 50% pandemic would be probably at least an order or two easier than an 80% one. So I think too much focus on the 100% single-shot threat model understates the overall risk *in relative terms.*

        Bereft of access to classified information, the inner halls of EA, or any formal biology training for the past ~ 25 years, it’s difficult for me to assess exactly how small these risks are over the next 50 years or so. I do give at least some weight to the general effect of technological advancement in other areas, which has given certain small groups of people the ability to have increasingly profound effects on large groups of people. I also give at least some weight to the possibility that AI (which need not be AGI) could make it easier for midsize rogue actors to devise dangerous things. It’s not something I lose sleep over, but I find it likely that the world should spend more on biosecurity / pandemic prevention than it does (even if my rationale is more neartermist than that of most EA pandemic folks, and I’m not pulling my own money from bednets for it).

        1. David Thorstad

          Thanks Jason!

          It sounds like we agree on many things, including the core claim of the post that one-off existential biorisk has been exaggerated. That’s good to hear. We also agree that the world should spend more on pandemic prevention.
          There’s no need to chase speculative claims about human extinction to make the very real point that the world just experienced a catastrophic pandemic, that the pandemic led to enormous losses, and that if we fail to make cost-effective preparations to meet the next pandemic, we might see another pandemic of this magnitude by the end of the century.

          I do certainly agree that malicious actors are orders of magnitude more likely to pull off a pandemic that kills, say, 80% of the human population than to pull off a pandemic that kills all of us. I do also think that it is worth pushing against the likelihood of pandemics killing 80% of the human population as well. Many of the reasons for skepticism given in this series will also tell quite strongly against the likelihood of such pandemics.

          You’re quite right to turn to compound hazard models, in which extinction as well as near-extinction events such as societal collapse tend to result from the convergence of multiple catastrophes, rather than a single catastrophe. As you know, that’s how most risk analysts think about extinction, and it’s a line that CSER and others at Cambridge have been (correctly) pushing for some time. It’s not a coincidence that the three, rather high, estimates of one-off extinction risk I cited in this post all came from Oxford.

          One challenge for compound hazard models is that they’re even harder to produce than single-hazard models. That’s challenging in an area like existential biorisk, where the single-hazard models have largely failed to materialize.

          Of course, we can always *say* that multiple hazards may come together to cause extinction, but to get a handle on the likelihood of these hazards happening in short succession, as well as on the likely consequences that would follow for human civilization, we need a great deal of data, as well as good models supported by relevant science.

          My understanding is that we don’t have many good models of this sort yet, and that many biosecurity experts are fairly skeptical that those models are going to materialize any time soon. For that reason, while I’m certainly open to the multiple-hazard story, I don’t know that effective altruists have yet done enough work to gain very much by telling a multiple-hazard story in this domain.

          1. Jason

            I think we’re largely on the same page. I’d say that, particularly in the absence of a “great deal of data, as well as good models supported by relevant science,” my error bars for predictions going ~ 50 years in the future can be moderately broad.

            I find 100% one-shot to be implausible enough on its face to assign it a probability of *close enough to zero to comfortably treat as zero, with no relevant error bars* [for space: “close-0”] in the absence of decent modeling and data.

            Multiple 80% events strike me as extremely unlikely in the next ~ 50 years, but not so implausible as to assign close-0 probability in the absence of decent modeling and data. Rather, in the absence of good modeling and data, this would stay at the vague level of “seems awfully unlikely, but not enough to confidently treat the same as the risk of ‘all Pokemon becoming sentient and destroying the world.’” That is much more a reflection of how improbable the 100% event is than an endorsement of the multiple-event theories.

            It’s possible that if I read more deeply in biosecurity, I would conclude that the modeling and data were strong enough to move my position to close-0 for the multiple theory too. But for the purposes of this post, I thought the multiple theory was in a different enough likelihood category from the 100% theory to warrant a comment from me for completeness.
