Exaggerating the risks (Part 11: Biorisk: Still more grounds for doubt)

In the words of one of my mentors … “Deference to senior EAs and concern for infohazards mean that advocates for biosecurity policies cannot fully disclose their reasoning for specific policy suggestions. This means that a collaborative approach that takes non-EAs along to understand the reason behind policy asks and invites scrutiny and feedback is not possible. This kind of motivated, non-evidence-based advocacy makes others suspicious, which is already leading to a backlash against EA in the biosecurity space.”

Nadia Montazeri, “How can we improve infohazard governance in EA biosecurity? Or: Why EA biosecurity epistemics are whack”.


1. Recap

This is Part 11 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.

Part 1 introduced the series. Parts 2-5 (sub-series: “Climate risk”) looked at climate risk. Parts 6-8 (sub-series: “AI risk”) looked at the Carlsmith report on power-seeking AI.

Part 9 and Part 10 began a new sub-series on biorisk. In Part 9, we saw that many leading effective altruists give estimates between 1.0% and 3.3% for the risk of existential catastrophe from biological causes by 2100. I think these estimates are a bit too high.

Because I have had a hard time getting effective altruists to tell me directly what the threat is supposed to be, my approach is to first survey the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk. Then I will review the arguments for high levels of existential biorisk that have been provided by effective altruists and argue that they are insufficient.

Parts 9 and 10 gave eight preliminary reasons for skepticism, which I review at the end of this post. But first, I want to round out the dozen with four final reasons for skepticism about existential biorisk.

2. Superforecaster skepticism

Forecasting future risks is hard. One strategy favored by many effective altruists is to train and recruit superforecasters, individuals with a proven record of reliable prediction on horizons of months or sometimes years, often on relatively discrete and tractable problems. Superforecasters are then asked to predict future levels of existential risk, and their predictions are taken as evidence about the true level of risk.

For my own part, I’m not terribly fond of this strategy. There isn’t much evidence to suggest that individuals who are good at predicting short-term, moderately tractable phenomena are especially reliable at predicting long-term, highly inscrutable phenomena. A small reason for skepticism is that different skills may be needed: predicting future risks requires a greater degree of speculation and imagination. A large reason for skepticism is the difficulty of the problem: predicting existential risks is very hard, and even if you think superforecasters are the best forecasters we have, this doesn’t guarantee they will have much insight into phenomena that are too distant and complex to reliably predict.

But I know that many of my readers do care what superforecasters think about biorisk. A recent study by Philip Tetlock and colleagues asked a large sample of forecasters to predict the risk of extinction caused by engineered pathogens before the year 2100. Superforecasters initially put the risk at a median of 0.1%. Then they went through a period of deliberation, debate and evidential examination. Matters got worse for high risk estimates: once superforecasters had worked through the arguments, they reduced their estimates to a median of 0.01%. This suggests that superforecasters were especially unconvinced by the available evidence and arguments.

Such results should be taken with a grain of salt. For one thing, this phase of the study was conducted in a somewhat irregular manner: because the study’s funders (Open Philanthropy) were concerned about infohazards, questions about catastrophic and existential biorisk were discussed in a shortened and somewhat ad hoc continuation of the main study, about which readers are given few details. And as I have said, I’m not sure how much faith should be put in superforecaster judgments.

But at the very least, these judgments should serve as a sanity check. A large sample of smart and reflective forecasters thinks that leading estimates of existential biorisk are at least two orders of magnitude too high.
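
To make the size of that gap concrete, here is the arithmetic behind the “two orders of magnitude” claim, using only the figures already cited: the Part 9 estimates range from 1.0% to 3.3%, while the superforecasters’ post-deliberation median is 0.01%. Dividing through:

1.0% / 0.01% = 100 ≈ 10^2 and 3.3% / 0.01% = 330 ≈ 10^2.5.

That is, the leading estimates exceed the superforecaster median by a factor of roughly 100 to 330: at least two orders of magnitude.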

3. Knowledge and expertise

In trumpeting the dangers of biotechnology, effective altruists often focus on technological developments while neglecting the role of knowledge and expertise, often institutionalized and tacit, in producing and using biotechnology. Even highly sophisticated group actors often struggle to replicate the results of top laboratories, so the focus must be not only on what top laboratories can do, but also on what significantly less sophisticated actors can do.

In a book-length study, Barriers to Bioweapons, biosecurity expert Sonia Ben Ouagrham-Gormley explains how she came to appreciate the importance of these observations:

When the Soviet Union broke up and revealed the enormity and desperate state of its former bioweapons complex, like many researchers and policy analysts then, I was convinced that a state or terrorist group could readily exploit the expertise available at these former facilities and use it to produce a bioweapon. But after spending extensive time in the former Soviet Union, interacting with former bioweapons scientists supported by government or privately funded research, I found that my assessment of the threat began to change. Several themes started to emerge from my discussions with these individuals about their past bioweapons work and their current civilian work. A key observation—the importance of which I came to appreciate only later—is that working with live organisms is not easy. Live agents are capricious, and modifying or controlling their behavior to achieve specific objectives requires special knowledge and skills. Second, it was clear that the economic, political, and social environment in which people worked affected their results.

Drawing on a rich variety of case studies, Ben Ouagrham-Gormley concludes:

The challenge in developing biological weapons lies not in the acquisition but in the use of the material and technologies required for their development. Put differently, in the bioweapons field, expertise and knowledge—and the conditions under which scientific work occurs—are significantly greater barriers to weapons development than are procurement of biomaterials, scientific documents, and equipment. Contrary to popular belief, however, this specialized bioweapons knowledge is not easily acquired. Therefore, current threat assessments that focus exclusively on the formative stage of a bioweapons program and measure a program’s progress by tracking material and technology procurement are bound to overestimate the threat.

Others have echoed the same line. For example, in a study of risks from ‘do-it-yourself’ biological science, David Sarpong and colleagues consider the risk that sophisticated technologies may be readily available to malicious actors. They respond, echoing Ben Ouagrham-Gormley, that synthesizing and distributing advanced bioweapons requires not merely available technology but also a rich tapestry of expert knowledge, and that this knowledge is unlikely to be available to nonstate actors.

Among the possible safety risks, Mukunda et al. (2009) warn that, considering the history of computer hacking where people with malicious intent create software viruses for nefarious purposes, the democratisation of synthetic biology would make the technology readily available to individuals and groups who would use it to intentionally cause harm. However, some analysts note the high improbability of this scenario, given that the effective use of synthetic biology techniques relies on skills and techniques that cannot be easily codified (in this case not easily found on the internet), but instead are acquired through a hugely lengthy process of ‘learning by doing’ or ‘learning by example’. This is what these analysts refer to as ‘tacit knowledge’ (Revill and Jefferson, 2013; Jefferson et al., 2014). Here, we draw on the discussions as presented by Tucker (2011) and Jefferson et al. (2014) in order to reveal the dynamics of the biosecurity risk assessment. In the assessment of risk of misuse, it becomes pertinent to differentiate among potential actors according to their financial assets and technical capabilities – from states with well-developed bio-warfare programmes, to terrorist organisations of varying size and sophistication, or individual hackers motivated by ideological or other grievances. As revealed by the study of state-level bio-warfare programmes, the production of biological weapons necessitates an interdisciplinary team of experts with expansive knowledge in fields such as microbiology, aerobiology etc. States are therefore generally more capable of organising and sustaining such teams than non-state actors.

What doomsayers must show is not merely that the information and equipment necessary to produce harmful bioweapons will become available, but also that omnicidal actors are likely to acquire the rich tapestry of expert knowledge needed to make effective use of this information to produce and disseminate bioweapons.

To see how these challenges arise in practice, consider the case of Aum Shinrikyo. While Aum Shinrikyo is often cited by effective altruists as an example of would-be bioterrorists, they were, in fact, quite bungling bioterrorists. Although they did carry out a series of small-scale chemical weapons attacks, most of their efforts at bioterrorism were abject failures. A recent report on Aum by Richard Danzig and colleagues draws a series of lessons, the very first of which is this:

Aum’s biological program was a failure, while its chemical program was even more capable than would have been evident from its successful release of sarin in the Tokyo subway system in 1995. Though the reasons for this disparity are complex, a number of factors suggest that chemical weapons are likely to be more accessible than biological capabilities for terrorist groups intent on killing substantial numbers of people.

Not only did the group face persistent difficulties producing appropriate pathogens, they also had a good bit of trouble disseminating them. Indeed, the report’s second lesson is the following:

Effectively disseminating biological and chemical agents was challenging for Aum. Difficulties of this kind are likely to burden other groups.

How badly did Aum bungle the job? Consider the group’s attempts to spray various targets, including neighbors and a royal wedding parade, with anthrax. They tried and failed to procure an appropriate sprayer to disseminate a crude anthrax slurry, then decided to build a sprayer themselves. The results, described by Danzig and colleagues on the basis of interviews with Aum physician Tomomasa Nakagawa, are straight out of a cartoon:

The sprayer, according to Nakagawa, “broke down over and over again”. For example, during an attack on a royal wedding parade, “the sprayer broke down at the very time we began to spray. The medium full of anthrax leaked from several pipes under the high pressure (500 atmospheres or more).” During an effort to disseminate the material from the cult’s dojo in the Kameido neighborhood of Tokyo, mechanical difficulties with the sprayer made it, in Nakagawa’s memorable words, “spout like a whale.” The smell was so bad that numerous complaints of foul odors from neighbors provoked police inquiries.

If even a relatively large, organized, motivated and educated group such as Aum faced such memorable difficulties in producing and disseminating biological weapons, and if leading experts believe these difficulties will be faced by future groups, then we should not be too quick to assume that would-be bioterrorists will be able to make effective use of the most threatening future developments in biotechnology.

4. Lack of mechanisms

In our discussion of climate risk (this series, Parts 2-5), a recurring complaint was the lack of credible mechanisms by which climate change could lead to existential catastrophe. It is not enough to say that climate change will be uncomfortable: we need to point at specific, if broad, categories of threats, such as heat stress, crop failure, rising sea levels, or runaway greenhouse effects, and make the case that these effects are likely to lead to existential catastrophe. The review of candidate mechanisms in Parts 2-5 of this series found no plausible mechanism by which climate change could lead to existential catastrophe any time soon, and this was the chief reason given for taking climate risk to be low.

Similarly, those claiming that biological catastrophes could soon pose an existential, as opposed to merely catastrophic, risk to humanity need to identify plausible mechanisms by which such an event could occur.

We will see in the next two parts of this series that leading arguments for biorisk are largely silent on the mechanisms by which existential catastrophe is meant to come about. This is perhaps a wise choice, for when we ask what could possibly be bad enough to bring about an existential catastrophe through biological means, it becomes difficult to produce a good answer.

Only, without a mechanism there is little in the way of a credible threat. It is all well and good to express concern, in the abstract, about increasing capabilities to sequence, synthesize and modify biological compounds. But to link these abstract developments to a concrete prediction – a significant risk of existential catastrophe in this century – we need an actual argument, citing actual mechanisms connecting these developments to a risk of existential catastrophe.

When I have pressed leading effective altruists for mechanisms, I have been met with silence. The most generous of my interlocutors have said that they would like to explain the underlying mechanisms, only these mechanisms are infohazards. The strategy director of EA Oxford went so far as to accuse me of exhibiting bad epistemic habits in daring to ask effective altruists to support their views with arguments.

There are a few problems with the infohazard-based defense of biorisk estimates. First, effective altruists have no serious scientific credibility, typically lack terminal degrees in any relevant field, and are knowingly and dramatically contradicting the consensus of scientific and policy experts. Against this background, the plea to ‘trust us, this is an infohazard’ can only be met by the natural response that we have little reason to trust them.

Second, there are simple means to reduce the risk of infohazards. When a prominent effective altruist told me that they could not reveal mechanisms, because mechanisms are infohazards, I asked if they would be willing to reveal those mechanisms to a panel of experts chosen by leading organizations such as the Centers for Disease Control and Prevention or the National Health Service. They said that they would not. I asked if they would be willing to reveal those mechanisms to experts at highly secure organizations with a stake in preventing bioterrorism, such as MI5 and the CIA. Again, I was told, they would not.

What plausible infohazard could be posed by such a disclosure? The chief infohazard that I can see is that a panel of actual experts would report back that the proposed mechanisms were implausible, ruining the reputation of authors with a habit of hiding behind vague insinuation and cloaking their lack of argument in protests about infohazards.

Is this too mean? Prove me wrong. Effective altruists have enough money and political influence to convene a panel of serious, independently selected experts to judge the strength of any existential risk mechanism claimed to be an infohazard. Will they be willing to submit these infohazards to the judgment of independent experts? If not, why not? Who is really being protected here?

5. They don’t believe it either

One way to evaluate the credibility of existential biorisk is to ask which leading effective altruist organizations are willing to argue for it. The answer is: precious few are. In leading papers, books and problem profiles, talk of existential biorisk is increasingly replaced by talk of catastrophic biorisk.

The Center for the Study of Existential Risk lists a focus area in “Biology, biotechnology and global catastrophic risks.” Neither the title of this focus area nor the entire linked description makes any mention of existential biorisk.

Giving What We Can has a problem profile on “Improving biosecurity and pandemic preparedness”. They clarify point-blank that “This page is specifically focused on biosecurity that reduces global catastrophic biological risks.” There is no textual mention of existential risk apart from the following, largely irrelevant passage:

It is unclear whether supporting biosecurity is currently the most promising option for safeguarding humanity’s future. You may be one of many who think that AI poses a higher risk of existential catastrophe. As noted earlier, AI safety receives far less funding than biosecurity overall, though it’s unclear whether there’s much difference between the two if we compare between funding explicitly focused on reducing existential risks.

The 80,000 Hours problem profile on pandemics is entitled “Preventing catastrophic pandemics” and focuses almost entirely on catastrophic rather than existential risk. When they do discuss existential risks, they assert only that “We think there is a greater than 1 in 10,000 chance of a biological existential catastrophe within the next 100 years”, far lower than any of the estimates surveyed in Part 9 of this series. And the reasoning here is surprisingly lax: they give no explicit argument for their estimate and outright admit that they have not thought about the matter in detail.

We’ve looked at the reasoning behind these estimates and are uncertain about which ones we should most believe. Overall, we think the risk is around 0.1%, and very likely to be greater than 0.01%, but we haven’t thought about this in detail.
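
To put numbers on “far lower”: 80,000 Hours’ central estimate of 0.1% and floor of 0.01% (1 in 10,000) can be compared directly against the 1.0-3.3% range surveyed in Part 9:

1.0% / 0.1% = 10 and 3.3% / 0.1% = 33; 1.0% / 0.01% = 100 and 3.3% / 0.01% = 330.

Even their central estimate sits a factor of 10 to 33 below the Part 9 estimates, and their floor a factor of 100 to 330 below.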

The 80,000 Hours problem profile on biorisk is entitled “Reducing global catastrophic biorisks”. As expected, the discussion focuses almost entirely on catastrophic rather than existential risks. The reason for this is that they explicitly acknowledge that individual global catastrophic biorisks are unlikely to be existential risks:

The aggregate risk of GCBRs appears to be mainly composed of non-existential dangers, and so this problem area would be relatively disfavoured compared to those, all else equal, where the danger is principally one of existential risk (AI perhaps chief amongst these).

However, they claim, perhaps several catastrophic risks could in aggregate pose an existential threat:

We may also worry about interaction terms between catastrophic risks: perhaps one global catastrophe is likely to precipitate others which ‘add up’ to an existential risk.

It is, of course, quite possible to assert that catastrophic biorisks are likely to contribute significantly to the risk of existential catastrophe, just as it is possible to point to any other failed existential risk (say, climate risk) and assert that it instead significantly raises the risk of other forms of existential catastrophe. A few remarks are in order here.

First, even the most generous such assertion is unlikely to recover anything like the 1-3% estimates of existential catastrophe due to engineered pandemics that we encountered in Part 9, even if those estimates are revised to allow for the possibility of counting proportional contributions to a larger existential threat.

Second, we are running out of existential threats for contributory risks such as biorisk and climate change to exacerbate. As more and more alleged existential threats turn out to be nothing of the kind, there remain fewer and fewer plausible risks for which these threats could serve as exacerbating risk factors.

Third and most importantly, we need an argument. We need to be told why biological catastrophe poses a risk of especially severe outcomes such as civilizational collapse that could plausibly be significant risk factors for other existential catastrophes. And we need plausible models of how these risk factors raise the risk of distant existential catastrophes.

To state the claim that catastrophic risks significantly raise the chance of other threats producing existential catastrophe is merely to state a surprising claim, not yet to make an argument for it.

6. Conclusion

I began this sub-series on biorisk with a striking dichotomy: many leading effective altruists give high estimates of existential biorisk in this century (often 1-3%), but provide little in the way of argument for those estimates. This left me at something of a loss as to how these risk estimates might be argued against.

My solution was to do two things. First, I would give preliminary reasons to suspect that existential biorisk is not so great as leading estimates suggest. Second, I would survey the arguments that have been given in favor of high risk estimates and argue that they are insufficient.

This post completes the first task: giving preliminary reasons to doubt high estimates of existential biorisk. So far, we have seen:

  1. Engineering existential biothreats is a very difficult problem: the pathogen would need to reach everyone, be virtually undetectable, unprecedentedly lethal and infectious, and circumvent basic public health measures. Then you would need a group with the sophistication and motivation to pull off an attack.
  2. Human history suggests surprising resilience: it is replete with examples of recovery not only from highly destructive pandemics, but also from pandemics combined with numerous other catastrophes.
  3. Mammalian history: Throughout history, there is only one recorded instance in which disease has led to the extinction of a mammalian species. The victim was a species of rat living on a single island.
  4. Everything must come together at once: Producing a virus with surprising features of many types (lethality, infectiousness, detectability, …) is much harder than producing a virus that is surprising along only a few dimensions.
  5. Biological threats can be met with effective public health responses. These may be as simple as the now-familiar package of non-pharmaceutical interventions such as masking, social distancing and travel restrictions. But they can also be as complex as the increasingly rapid and sophisticated techniques for bringing vaccines to market in time to counteract a novel pandemic.
  6. Motivational obstacles: It is hard to find credible examples of sophisticated actors with both the capability and the motivation to carry out omnicidal attacks. Many groups wish to cause harm to some fraction of humanity, but far fewer groups wish to kill all of us.
  7. Expert skepticism: Experts are generally skeptical of existential biorisk claims. Experts do not doubt the possibility or seriousness of small-scale biological attacks, and indeed such attacks have occurred in the past. But experts are largely unconvinced that there is a serious risk of large-scale biological attacks, particularly on a scale that could lead to existential catastrophe.
  8. Inscrutability: The biorisks being appealed to are relatively inscrutable in two ways: they involve speculation about far-off future technologies, and much of this speculation is hidden from public view.
  9. Superforecaster skepticism: Superforecasters are especially skeptical of existential biorisk claims, and became much more skeptical after taking a careful look at the arguments.
  10. Knowledge and expertise: Even sophisticated groups often lack the knowledge and expertise to make effective use of bioweapons. Groups such as Aum Shinrikyo bungled their bioweapons programs rather badly.
  11. Lack of mechanisms: Making an effective case for existential biorisk requires exhibiting the mechanisms by which biological threats might produce existential catastrophe. This has, in many cases, not been done.
  12. They might not believe it either: Leading EA organizations often focus their public defenses on catastrophic rather than existential biorisk, and several prominent problem profiles openly contradict published high estimates of existential biorisk.

Together, these considerations suggest that a strong argument is needed to ground high estimates of existential biorisk. The remainder of this series will examine existing arguments in favor of high levels of existential biorisk and argue that they do not overcome the case for skepticism.

Comments


  1. River

    I see two big ways in which you are misrepresenting what I understand the EA position to be. Firstly, you seem to equate biological risks with biological weapons, and these are not the same. Most obviously, biological risks include pandemics of zoonotic origin. But more relevantly, a biological weapon is meant to kill only some specific people, and therefore requires a level of targeting and specificity that makes the endeavor harder. The EA concern is a pathogen that kills civilization as a whole, which I expect to be much easier to pull off. My understanding is that a single person with the right set of biology wet lab skills could pull it off if they were trying.

    Secondly, you make a lot out of “existential” versus “catastrophic” risks. I would use the two words almost interchangeably. The difference doesn’t matter. Most of the value of humanity is in futures where civilization continues to advance and colonize the stars. If civilization collapses and never recovers, what does it matter whether a few hundred thousand humans continue to occupy this particular rock indefinitely? In my view, it doesn’t. If you want to make a serious argument, just pretend that every time someone says “existential” they mean “catastrophic”.

    1. David Thorstad

      River,

      The distinction between existential and catastrophic risks is one of the most fundamental and important distinctions in all of effective altruist theorizing. Effective altruists noticed that as bad as catastrophic risks are, existential catastrophes would be many, many times worse. Even a catastrophe that killed the majority of humans alive today could very well preserve human civilization, allowing us to recover and grow, living through a long, healthy, and prosperous future. By contrast, existential risks prevent this from happening. By destroying not only the value realized by present human life, but also much or all of the potential value realized by our future descendants, existential catastrophes produce significantly more damage than catastrophic risks.

      I am aware that some effective altruists pass more freely than I would like between discussions of existential and catastrophic risks. I did not expect a commentator to openly admit equivocating between these categories, then take me to task for failing to equivocate myself. I don’t want to hold effective altruists responsible for this specific behavior, since it is largely unrepresentative of the movement as a whole. In your case, I would suggest reviewing key texts discussing the distinction between existential and catastrophic risk. On catastrophic risk, the Bostrom and Cirkovic anthology is a classic. On existential risk, books by Ord and MacAskill as well as Bostrom’s paper “Existential risk prevention as global priority” are useful introductions.

      I do not equate biological risks with biological weapons. In particular, I discuss the example you mention (naturally occurring pandemics) in Parts 9 and 10 of this series. There, I argue that human history suggests surprising resilience to naturally occurring pandemics; that mammalian history suggests the same; and that public health responses can be effective in curbing the worst harms.

      I am not certain why you view biological weapons as only directed at killing specific people, particularly as you go on to discuss the possibility of a maliciously motivated individual in a wet lab aiming to destroy civilization as a whole. What I would like to see is a sustained argument for the likelihood of the scenario that you mention: a maliciously motivated individual in a wet lab causing an (existential?) catastrophe.

      In this series, I have discussed a number of reasons to doubt the likelihood of such a scenario. For example, engineering an appropriately dangerous pathogen is very difficult (Part 9), particularly given that a number of dangerous features must be brought together at once (Part 9). Biological threats can be met by public health responses (Part 10). Experts are extremely skeptical that this kind of ‘do-it-yourself science’ poses anything near the level of threat claimed (Part 10), and I discussed a number of detailed academic arguments against that very possibility. Superforecasters are also skeptical (Part 10). That skepticism is driven by the relative inscrutability of claimed risks (Part 10), lack of knowledge and expertise on the part of malicious actors (Part 11) and lack of credible, specific mechanisms by which existential catastrophe could result (Part 11). As a result of these and other concerns, many effective altruist organizations do not endorse strong claims about current levels of existential biorisk (Part 11), and those individuals and organizations who do endorse those claims would be well advised to provide a detailed argument for their views.

  2. Joshua Blake

    You might be interested in the preprint below, written by authors from SecureBio (a non-profit) and Sculpting Evolution (an academic group at MIT). Both are led by Kevin Esvelt, a respected synthetic biologist and arguably the most influential scientist in the EA biosecurity space. I don’t find its arguments convincing, but it at least lays out the argument more clearly than anything I have previously seen. Esvelt’s interview on the 80,000 Hours podcast goes into parts of this in a more informal setting too.
    https://docs.google.com/document/d/1aPQ3B6QdKE8Lm1uqZsy9EIkP8lTAjiK09u7U7ntTlE4/edit?usp=drivesdk

    1. David Thorstad

      Thanks a lot!

      I’ll take a look, and hopefully be able to engage with it in future posts.

  3. Stephen Clare

    I think you’re being unfair when you write “effective altruists have no serious scientific credibility” and “typically lack terminal degrees in any relevant field”. I can think of several EA people working in biosecurity who have an MD or PhD.

    1. David Thorstad

      Thanks Stephen!

      That’s fair enough. Maybe a better way to put the point is this.

      A smattering of terminal degrees in a research field is, at best, enough for effective altruists to fight their way onto some scholars’ reading lists. That becomes considerably harder given that effective altruists are pushing startling and heterodox views, but it can be done.

      For effective altruists to ask scholars or members of the general public to take their conclusions on trust, without substantial arguments given to support those conclusions, requires a good deal more. Usually, the minimal credentials to pull off such a claim are official pronouncements of one or more world-leading research groups, together with at least tentative endorsement by the overall scholarly community. Often the requirements are a good deal higher, and even then people do not like to take claims on trust.

      When effective altruists go ahead and make unevidenced and extreme claims about future risks, then ask their readers to take those claims on trust, they go far beyond anything that could be warranted by their credentials or position. This is the kind of overstepping that Nadia Montazeri described in the passage quoted at the beginning of this essay, and it motivates the type of backlash that Montazeri describes.
