In the words of one of my mentors … “Deference to senior EAs and concern for infohazards mean that advocates for biosecurity policies cannot fully disclose their reasoning for specific policy suggestions. This means that a collaborative approach that takes non-EAs along to understand the reason behind policy asks and invites scrutiny and feedback is not possible. This kind of motivated, non-evidence-based advocacy makes others suspicious, which is already leading to a backlash against EA in the biosecurity space.”
This is Part 11 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.
Parts 9 and 10 began a new sub-series on biorisk. In Part 9, we saw that many leading effective altruists give estimates between 1.0% and 3.3% for the risk of existential catastrophe from biological causes by 2100. I think these estimates are a bit too high.
Because I have had a hard time getting effective altruists to tell me directly what the threat is supposed to be, my approach is to first survey the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk. Then I will review the arguments for high levels of existential biorisk that have been provided by effective altruists and argue that they are insufficient.
Parts 9 and 10 gave eight preliminary reasons for skepticism, which I review at the end of this post. But first, I want to round out the dozen with four final reasons for skepticism about existential biorisk.
2. Superforecaster skepticism
Forecasting future risks is hard. One strategy favored by many effective altruists is to train and recruit superforecasters, individuals with a proven record of reliable predictions on a horizon of months or sometimes years, often on relatively discrete and tractable problems. Superforecasters are then asked to predict future levels of existential risk, and their predictions are taken as evidence about the true level of risk.
For my own part, I’m not terribly fond of this strategy. There isn’t much evidence to suggest that individuals who are good at predicting short-term, moderately tractable phenomena are especially reliable at predicting long-term, highly inscrutable phenomena. A small reason for skepticism is that different skills may be needed: predicting future risks requires a greater degree of speculation and imagination. A large reason for skepticism is the difficulty of the problem: predicting existential risks is very hard, and even if you think superforecasters are the best forecasters we have, this doesn’t guarantee they will have much insight into phenomena that are too distant and complex to reliably predict.
But I know that many of my readers do care what superforecasters think about biorisk. A recent study by Philip Tetlock and colleagues asked a large sample of forecasters to predict the risk of extinction caused by engineered pathogens before the year 2100. Superforecasters initially put the risk at a median of 0.1%. Then they went through a period of deliberation, debate and evidential examination. Matters got worse. Once superforecasters had worked through the arguments, they reduced their estimates to a median of 0.01%. This suggests that superforecasters were especially unconvinced by the available evidence and arguments.
Such results should be taken with a grain of salt. For one thing, this phase of the study was conducted in a somewhat irregular manner: because the study’s funders (Open Philanthropy) were concerned about infohazards, questions about catastrophic and existential biorisk were discussed in a shortened and somewhat ad hoc continuation of the main study, about which readers are given few details. And as I have said, I’m not sure how much faith should be put in superforecaster judgments.
But at the very least, these judgments should serve as a sanity check. A large sample of smart and reflective forecasters thinks that leading estimates of existential biorisk are at least two orders of magnitude too high.
3. Knowledge and expertise
In trumpeting the dangers of biotechnology, effective altruists often focus on technological developments while neglecting the role of knowledge and expertise, often institutionalized and tacit, in producing and using biotechnology. Even highly sophisticated groups often struggle to replicate the results of top laboratories, so we must ask not only what top laboratories can do, but also what significantly less sophisticated actors can do.
In a book-length study, Barriers to bioweapons, biosecurity expert Sonia Ben Ouagrham-Gormley explains how she came to appreciate the importance of these observations:
When the Soviet Union broke up and revealed the enormity and desperate state of its former bioweapons complex, like many researchers and policy analysts then, I was convinced that a state or terrorist group could readily exploit the expertise available at these former facilities and use it to produce a bioweapon. But after spending extensive time in the former Soviet Union, interacting with former bioweapons scientists supported by government or privately funded research, I found that my assessment of the threat began to change. Several themes started to emerge from my discussions with these individuals about their past bioweapons work and their current civilian work. A key observation—the importance of which I came to appreciate only later—is that working with live organisms is not easy. Live agents are capricious, and modifying or controlling their behavior to achieve specific objectives requires special knowledge and skills. Second, it was clear that the economic, political, and social environment in which people worked affected their results.
Drawing on a rich variety of case studies, Ben Ouagrham-Gormley concludes:
The challenge in developing biological weapons lies not in the acquisition but in the use of the material and technologies required for their development. Put differently, in the bioweapons field, expertise and knowledge—and the conditions under which scientific work occurs—are significantly greater barriers to weapons development than are procurement of biomaterials, scientific documents, and equipment. Contrary to popular belief, however, this specialized bioweapons knowledge is not easily acquired. Therefore, current threat assessments that focus exclusively on the formative stage of a bioweapons program and measure a program’s progress by tracking material and technology procurement are bound to overestimate the threat.
Others have echoed the same line. For example, in a study of risks from ‘do-it-yourself’ biological science, David Sarpong and colleagues consider the risk that sophisticated technologies may be readily available to malicious actors. They respond, echoing Ben Ouagrham-Gormley, that it is not the mere availability of technology but also a rich tapestry of expert knowledge that is required to synthesize and distribute advanced bioweapons, and that this knowledge is unlikely to be available to nonstate actors.
Among the possible safety risks, Mukunda et al. (2009) warn that, considering the history of computer hacking where people with malicious intent create software viruses for nefarious purposes, the democratisation of synthetic biology would make the technology readily available to individuals and groups who would use it to intentionally cause harm. However, some analysts note the high improbability of this scenario, given that the effective use of synthetic biology techniques relies on skills and techniques that cannot be easily codified (in this case not easily found on the internet), but instead are acquired through a hugely lengthy process of ‘learning by doing’ or ‘learning by example’. This is what these analysts refer to as ‘tacit knowledge’ (Revill and Jefferson, 2013; Jefferson et al., 2014). Here, we draw on the discussions as presented by Tucker (2011) and Jefferson et al. (2014) in order to reveal the dynamics of the biosecurity risk assessment. In the assessment of risk of misuse, it becomes pertinent to differentiate among potential actors according to their financial assets and technical capabilities – from states with well-developed bio-warfare programmes, to terrorist organisations of varying size and sophistication, or individual hackers motivated by ideological or other grievances. As revealed by the study of state-level bio-warfare programmes, the production of biological weapons necessitates an interdisciplinary team of experts with expansive knowledge in fields such as microbiology, aerobiology etc. States are therefore generally more capable of organising and sustaining such teams than non-state actors.
What doomsayers must show is not merely that the information and equipment necessary to produce harmful bioweapons will become available, but also that omnicidal actors are likely to acquire the rich tapestry of expert knowledge needed to make effective use of this information to produce and disseminate bioweapons.
To see how these challenges arise in practice, consider the case of Aum Shinrikyo. While Aum Shinrikyo is often cited by effective altruists as an example of would-be bioterrorists, they were, in fact, quite bungling bioterrorists. Although they did carry out a series of small-scale chemical weapons attacks, most of their efforts at bioterrorism were abject failures. A recent report on Aum by Richard Danzig and colleagues draws a series of lessons, the very first of which is this:
Aum’s biological program was a failure, while its chemical program was even more capable than would have been evident from its successful release of sarin in the Tokyo subway system in 1995. Though the reasons for this disparity are complex, a number of factors suggest that chemical weapons are likely to be more accessible than biological capabilities for terrorist groups intent on killing substantial numbers of people.
Not only did the group face persistent difficulties producing appropriate pathogens, they also had a good bit of trouble disseminating them. Indeed, the report’s second lesson is the following:
Effectively disseminating biological and chemical agents was challenging for Aum. Difficulties of this kind are likely to burden other groups.
How badly did Aum bungle the job? Consider the group’s attempts to spray various targets, including neighbors and a royal wedding parade, with anthrax. They tried and failed to procure an appropriate sprayer to disseminate a crude anthrax slurry, then decided to make a sprayer themselves. The results, described by Danzig and colleagues on the basis of interviews with Aum physician Tomomasa Nakagawa, are straight out of a cartoon:
The sprayer, according to Nakagawa, “broke down over and over again”. For example, during an attack on a royal wedding parade, “the sprayer broke down at the very time we began to spray. The medium full of anthrax leaked from several pipes under the high pressure (500 atmospheres or more).” During an effort to disseminate the material from the cult’s dojo in the Kameido neighborhood of Tokyo, mechanical difficulties with the sprayer made it, in Nakagawa’s memorable words, “spout like a whale.” The smell was so bad that numerous complaints of foul odors from neighbors provoked police inquiries.
If even a relatively large, organized, motivated and educated group such as Aum faced such memorable difficulties in producing and disseminating biological weapons, and if leading experts believe these difficulties will be faced by future groups, then we should not be too quick to assume that would-be bioterrorists will be able to make effective use of the most threatening future developments in biotechnology.
4. Lack of mechanisms
In our discussion of climate risk (this series, Parts 2-5), a recurring complaint was the lack of credible mechanisms by which climate change could lead to existential catastrophe. It is not enough to say that climate change will be uncomfortable: we need to point at specific, if broad, categories of threats, such as heat stress, crop failure, rising sea levels, or runaway greenhouse effects, and make the case that these effects are likely to lead to existential catastrophe. Indeed, an exhaustive review of plausible mechanisms in Parts 2-5 of this series found no plausible mechanism by which climate change could lead to existential catastrophe any time soon, and this was the chief reason given for taking climate risk to be low.
Similarly, those claiming that biological catastrophes could soon pose an existential, as opposed to merely catastrophic, risk to humanity need to identify plausible mechanisms by which such an event could occur.
We will see in the next two parts of this series that leading arguments for biorisk are largely silent on the mechanisms by which existential catastrophe is meant to come about. This is perhaps a wise choice, for when we ask what could possibly be bad enough to bring about an existential catastrophe through biological means, it becomes difficult to produce a good answer.
Only, without a mechanism there is little in the way of a credible threat. It is all well and good to express concern, in the abstract, about increasing capabilities to sequence, synthesize and modify biological compounds. But to link these abstract developments to a concrete prediction – a significant risk of existential catastrophe in this century – we need an actual argument, citing actual mechanisms connecting these developments to a risk of existential catastrophe.
When I have pressed leading effective altruists for mechanisms, I have been met with silence. The most generous of my interlocutors have said that they would like to explain the underlying mechanisms, only these mechanisms are infohazards. The strategy director of EA Oxford went so far as to accuse me of exhibiting bad epistemic habits in daring to ask effective altruists to support their views with arguments.
There are a few problems with the infohazard-based defense of biorisk estimates. First, effective altruists typically lack serious scientific credentials or terminal degrees in any relevant field, and they are knowingly and dramatically contradicting the consensus of scientific and policy experts. Against this background, the plea to ‘trust us, this is an infohazard’ can only be met by the natural response that we have little reason to trust them.
Second, there are simple means to reduce the risk of infohazards. When a prominent effective altruist told me that they could not reveal mechanisms, because mechanisms are infohazards, I asked if they would be willing to reveal those mechanisms to a panel of experts chosen by leading organizations such as the Centers for Disease Control and Prevention or the National Health Service. They said that they would not. I asked if they would be willing to reveal those mechanisms to experts at highly secure organizations with a stake in preventing bioterrorism, such as MI5 and the CIA. Again, I was told, they would not.
What plausible infohazard could be posed by such a disclosure? The chief infohazard that I can see is that a panel of actual experts would report back that the proposed mechanisms were implausible, ruining the reputation of authors with a habit of hiding behind vague insinuation and cloaking their lack of argument in protests about infohazards.
Is this too mean? Prove me wrong. Effective altruists have enough money and political influence to convene a panel of serious, independently selected experts to judge the strength of any existential risk mechanism claimed to be an infohazard. Will they be willing to submit these infohazards to the judgment of independent experts? If not, why not? Who is really being protected here?
5. They don’t believe it either
One way to evaluate the credibility of existential biorisk is to ask which leading effective altruist organizations are willing to argue for it. The answer is: precious few are. In leading papers, books and problem profiles, talk of existential biorisk is increasingly replaced by talk of catastrophic biorisk.
The Center for the Study of Existential Risk lists a focus area in “Biology, biotechnology and global catastrophic risks.” Neither the title of this focus area nor the entire linked description makes any mention of existential biorisk.
Giving What We Can has a problem profile on “Improving biosecurity and pandemic preparedness”. They clarify point-blank that “This page is specifically focused on biosecurity that reduces global catastrophic biological risks.” There is no textual mention of existential risk apart from the following, largely irrelevant passage:
It is unclear whether supporting biosecurity is currently the most promising option for safeguarding humanity’s future. You may be one of many who think that AI poses a higher risk of existential catastrophe. As noted earlier, AI safety receives far less funding than biosecurity overall, though it’s unclear whether there’s much difference between the two if we compare between funding explicitly focused on reducing existential risks.
The 80,000 Hours problem profile on pandemics is entitled “Preventing catastrophic pandemics” and focuses almost entirely on catastrophic rather than existential risk. When they do discuss existential risks, they assert only that “We think there is a greater than 1 in 10,000 chance of a biological existential catastrophe within the next 100 years”, far lower than any of the estimates surveyed in Part 9 of this series. And the reasoning here is surprisingly thin: they give no explicit argument for this estimate and admit outright that they have not thought much about the matter.
We’ve looked at the reasoning behind these estimates and are uncertain about which ones we should most believe. Overall, we think the risk is around 0.1%, and very likely to be greater than 0.01%, but we haven’t thought about this in detail.
The 80,000 Hours problem profile on biorisk is entitled “Reducing global catastrophic biorisks”. As expected, the discussion focuses almost entirely on catastrophic rather than existential risks. The reason for this is that they explicitly acknowledge that individual global catastrophic biorisks are unlikely to be existential risks:
The aggregate risk of GCBRs appears to be mainly composed of non-existential dangers, and so this problem area would be relatively disfavoured compared to those, all else equal, where the danger is principally one of existential risk (AI perhaps chief amongst these).
However, they claim, perhaps several catastrophic risks could in aggregate pose an existential threat:
We may also worry about interaction terms between catastrophic risks: perhaps one global catastrophe is likely to precipitate others which ‘add up’ to an existential risk.
It is, of course, quite possible to assert that catastrophic biorisks are likely to contribute significantly to the risk of existential catastrophe, just as it is possible to point to any other failed existential risk (say, climate risk) and assert that it instead significantly raises the risk of other forms of existential catastrophe. A few remarks are in order here.
First, even the most generous such assertion is unlikely to recover anything like the 1-3% estimates of existential catastrophe due to engineered pandemics that we encountered in Part 9, even if those estimates are revised to allow for the possibility of counting proportional contributions to a larger existential threat.
Second, we are running out of existential threats to which would-be contributory risks such as biorisk and climate change could contribute. As more and more alleged existential threats turn out to be nothing of the kind, we may simply run out of plausible risks for which these threats could be exacerbating factors.
Third and most importantly, we need an argument. We need to be told why biological catastrophe poses a risk of especially severe outcomes such as civilizational collapse that could plausibly be significant risk factors for other existential catastrophes. And we need plausible models of how these risk factors raise the risk of distant existential catastrophes.
To state that catastrophic risks significantly raise the chance of other threats producing existential catastrophe is merely to state a surprising claim, not yet to argue for it.
I began this sub-series on biorisk with a striking dichotomy: many leading effective altruists give high estimates of existential biorisk in this century (often 1-3%), but provide little in the way of argument for those estimates. This left me at something of a loss as to how these risk estimates might be argued against.
My solution was to do two things. First, I would give preliminary reasons to suspect that existential biorisk is not so great as leading estimates suggest. Second, I would survey the arguments that have been given in favor of high risk estimates and argue that they are insufficient.
This post completes the first task: giving preliminary reasons to doubt high estimates of existential biorisk. So far, we have seen:
- Engineering existential biothreats is a very difficult problem: the pathogen would need to reach everyone, be virtually undetectable, unprecedentedly lethal and infectious, and able to circumvent basic public health measures. Then you would need a group with the sophistication and motivation to pull off an attack.
- Human history suggests surprising resilience: Human history is replete with examples of resilience not only to highly destructive pandemics, but also to the combination of pandemics with numerous other catastrophes.
- Mammalian history: Throughout history, there is only one recorded instance in which disease has led to the extinction of a mammalian species. The victim was a species of rat living on a single island.
- Everything must come together at once: Producing a virus with surprising features of many types (lethality, infectiousness, undetectability, …) is much harder than producing a virus which is surprising along a few dimensions.
- Biological threats can be met with effective public health responses. These may be as simple as the now-familiar package of non-pharmaceutical interventions such as masking, social distancing and travel restrictions. But they can also be as complex as the increasingly rapid and sophisticated techniques for bringing vaccines to market in time to counteract a novel pandemic.
- Motivational obstacles: it is hard to find credible examples of sophisticated actors with both the capability and the motivation to carry out omnicidal attacks. Many groups wish to cause harm to some fraction of humanity, but far fewer groups wish to kill all of us.
- Expert skepticism: experts are generally skeptical of existential biorisk claims. Experts do not doubt the possibility or seriousness of small-scale biological attacks, and indeed such attacks have occurred in the past. But experts are largely unconvinced that there is a serious risk of large-scale biological attacks, particularly on a scale that could lead to existential catastrophe.
- Inscrutability: the biorisks being appealed to are relatively inscrutable in two ways: they involve speculation about far-off future technologies, and much of this speculation is hidden from public view.
- Superforecaster skepticism: Superforecasters are uniquely skeptical of existential biorisk claims, and become much more skeptical after taking a careful look at the arguments.
- Knowledge and expertise: Even sophisticated groups often lack the knowledge and expertise to make effective use of bioweapons. Recent groups, such as Aum Shinrikyo, bungled their bioweapons programs rather badly.
- Lack of mechanisms: Making an effective case for existential biorisk requires exhibiting the mechanisms by which biological threats might produce existential catastrophe. This has, in many cases, not been done.
- They might not believe it either: Leading EA organizations often focus their public defenses on catastrophic rather than existential biorisk, and several prominent problem profiles openly contradict published high estimates of existential biorisk.
Together, these considerations suggest that a strong argument is needed to ground high estimates of existential biorisk. The remainder of this series will examine existing arguments in favor of high levels of existential biorisk and argue that they do not overcome the case for skepticism.