Typical estimates from experts I know put the probability of an extinction-level engineered pandemic this century at around 1%
Will MacAskill, What we owe the future
While most leading biological disarmament and non-proliferation experts believe that the risk of a small-scale bioterrorism attack is very real and very present, they consider the risk of sophisticated large-scale bioterrorism attacks to be very small.
Jefferson, Lentzos and Marris, "Synthetic biology and biosecurity: Challenging the 'myths'".
1. Introduction
This is Part 14 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.
Part 1 introduced the series. Parts 2-5 (sub-series: “Climate risk”) looked at climate risk. Parts 6-8 (sub-series: “AI risk”) looked at the Carlsmith report on power-seeking AI.
Parts 9, 10 and 11 began a new sub-series on biorisk. In Part 9, we saw that many leading effective altruists give estimates between 1.0% and 3.3% for the risk of existential catastrophe from biological causes by 2100. I think these estimates are a bit too high.
Because I have had a hard time getting effective altruists to tell me directly what the threat is supposed to be, my approach was to first survey the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk. Parts 9, 10 and 11 gave a dozen preliminary reasons for doubt, surveyed at the end of Part 11.
The second half of my approach is to show that initial arguments by effective altruists do not overcome the case for skepticism. Part 12 examined a series of risk estimates by Piers Millett and Andrew Snyder-Beattie. We saw, first, that many of these estimates are orders of magnitude lower than those returned by leading effective altruists and second, that Millett and Snyder-Beattie provide little in the way of credible support for even these estimates. Part 13 looked at Ord’s arguments in The Precipice. These did not support anything like Ord’s estimated 3.3% risk of existential catastrophe in the next 100 years.
Today’s post looks at another leading text: What we owe the future, by Will MacAskill. We saw in Part 9 of this series that MacAskill writes “Typical estimates from experts I know put the probability of an extinction-level engineered pandemic this century at around 1%”. We also saw in Parts 9-11 that this is not an accurate reflection of expert opinion: most experts are deeply skeptical of anything approaching a 1% level of existential biorisk in this century.
In this post, I ask what else MacAskill has to say in defense of high levels of existential biorisk. We will see that many of MacAskill's arguments are similar to Ord's, as are the complaints to be raised against them: while MacAskill correctly points to a number of concerning trends in biotechnology and some lapses in biosecurity, he gives us mostly familiar facts and few new arguments, which together do not support a 1% level of existential biorisk in this century.
2. A matter of time
MacAskill’s primary discussion of biorisk takes place in Chapter 5, “Extinction”, under the heading “Engineered pathogens”. MacAskill begins by noting that the book was produced during the COVID-19 pandemic, which led to millions of excess deaths and trillions of dollars in economic damages.
Although the COVID-19 pandemic did not pose an existential threat to humanity, MacAskill argues that future pandemics may be a good deal worse, because they could be driven by pathogens engineered to be more transmissible and more deadly.
MacAskill begins by discussing the declining cost of technologies such as gene sequencing and gene editing:
Progress in [biotechnology] has been extremely rapid. We typically think Moore’s law – halving the cost of computing power every few years – is the prime example of quick progress, but many technologies in synthetic biology actually improved faster than that. For example, the first time we sequenced the human genome, it cost hundreds of millions of dollars to do so. Just twenty years later, sequencing a full human genome costs around $1,000. A similar story is true for the cost to synthesise single-strand DNA, as well as the cost of gene editing.
We saw in Part 13 of this series that Toby Ord makes a similar point based on many of the same statistical trends. We saw then that Ord struggles to connect these trends, which are common knowledge among skeptical experts, to a new argument for substantial levels of existential biorisk in this century. Does MacAskill do better?
MacAskill grants that there is no evidence that technologies available today pose high levels of existential risk. Nevertheless, he claims, it is "only a matter of time" before existential threats emerge.
Engineered pathogens could be much more destructive than natural pathogens because they can be modified to have dangerous new properties. Could someone design a pathogen with maximum destructive power – something with the lethality of Ebola and the contagiousness of measles? Thankfully, with current technology this would be at least very difficult. But given the rate of progress in this area, it’s only a matter of time.
One feature of this passage worth remarking on is that it understates the target. The lethality of Ebola (about 50%) and the contagiousness of measles (R0 in the range of 12-18, with some disagreement) are well below what would be needed to produce a genuinely existential threat.
We also saw earlier in this series that it is not so easy to produce a pathogen which combines many problematic traits, such as high lethality and contagiousness. For example, we saw that Filippa Lentzos, reflecting on Ray Zilinskas's work on the Soviet bioweapons program, draws the following conclusion:
Some of the key lessons that came out of the Soviet programme go to this issue of pleiotropy, where a single gene affects more than one characteristic. And if you change that gene to improve one of the characteristics you’re going for in that list of 12, the chances are you’re going to diminish one or more of the other characteristics that are desirable. So getting something that works is extraordinarily difficult.
More generally, we saw a dozen reasons in Parts 9-11 of this series to be skeptical of high levels of near-term existential biorisk.
3. What the argument might be
Does MacAskill give us reasons to think that pathogens will soon be developed which pose a high risk of existential catastrophe? MacAskill, like Ord, points to the growing democratization of biotechnology:
Not only is biotechnology rapidly improving; it is becoming increasingly democratised. The genetic recipe for smallpox is already freely available online. In a sense, we were “lucky” with nuclear weapons insofar as fissile material is incredibly hard to manufacture. The capability to do so is therefore limited to governments, and it is comparatively easy for outside observers to tell whether a country has a nuclear weapons programme. This is not so for engineered pathogens: in principle, with continued technological progress, viruses could be designed and produced with at-home kits. In the future, cost and skill barriers are likely to decline. Moreover, in the past we only had to deal with one pandemic at a time, and usually some people had natural immunity; in contrast, if it’s possible to engineer one type of new highly destructive pathogen, then it’s not that much harder to manufacture hundreds more, nor is it difficult to distribute them in thousands of locations around the world at once.
Until perhaps the last sentence, there is nothing new here. We discussed the risk of "do-it-yourself" science in Part 10 of this series. There, we saw that a paper by David Sarapong and colleagues laments "Sensational and alarmist headlines about DiY science" which "argue that the practice could serve as a context for inducing rogue science which could potentially lead to a 'zombie apocalypse'." These experts find little empirical support for any such claims.
That skepticism is echoed by most leading experts and policymakers. For example, we also saw in Part 10 that a study of risks from synthetic biology by Catherine Jefferson and colleagues decries the “myths” that “synthetic biology could be used to design radically new pathogens” and “terrorists want to pursue biological weapons for high consequence, mass casualty attacks”, concluding:
Any bioterrorism attack will most likely be one using a pathogen strain with less than optimal characteristics disseminated through crude delivery methods under imperfect conditions, and the potential casualties of such an attack are likely to be much lower than the mass casualty scenarios frequently portrayed. This is not to say that speculative thinking should be discounted … however, problems arise when these speculative scenarios for the future are distorted and portrayed as scientific reality.
The previous three paragraphs have been copied from my discussion of Ord in Part 13 because, once again, what we see from MacAskill is a repetition of the same basic facts appealed to by other effective altruists, which are known to and dismissed by experts, rather than the production of a novel argument.
There might be a novel argument in the last sentence of MacAskill’s discussion, namely:
Moreover, in the past we only had to deal with one pandemic at a time, and usually some people had natural immunity; in contrast, if it’s possible to engineer one type of new highly destructive pathogen, then it’s not that much harder to manufacture hundreds more, nor is it difficult to distribute them in thousands of locations around the world at once.
Only, if there is an argument here it is at odds with expert consensus and requires substantial support. We saw in Part 11 of this series that even if it becomes possible for leading laboratories to engineer a destructive pathogen, most individuals and groups may face substantial difficulties in producing and distributing such pathogens. For example, in a book-length study, Barriers to bioweapons, biosecurity expert Sonia Ben Ouagrham-Gormley explains:
The challenge in developing biological weapons lies not in the acquisition but in the use of the material and technologies required for their development. Put differently, in the bioweapons field, expertise and knowledge—and the conditions under which scientific work occurs—are significantly greater barriers to weapons development than are procurement of biomaterials, scientific documents, and equipment. Contrary to popular belief, however, this specialized bioweapons knowledge is not easily acquired. Therefore, current threat assessments that focus exclusively on the formative stage of a bioweapons program and measure a program’s progress by tracking material and technology procurement are bound to overestimate the threat.
This means that it is not plausible, without substantial argument, that malicious individuals and groups will be able to manufacture even one variety of the most dangerous pathogens, let alone hundreds.
Importantly, we also saw that there are substantial difficulties involved in distributing pathogens. MacAskill claims that it should not be difficult to release harmful pathogens in thousands of locations in the world at once, but we saw in Part 11 that one of the most sophisticated recent actors, Aum Shinrikyo, struggled substantially with distribution.
A recent report on Aum by Richard Danzig and colleagues draws a series of lessons, the second of which is the following:
Effectively disseminating biological and chemical agents was challenging for Aum. Difficulties of this kind are likely to burden other groups.
How challenging was dissemination for Aum?
Consider the group's attempts to spray anthrax at various targets, including neighbors and a royal wedding parade. They tried and failed to procure an appropriate sprayer to disseminate a crude anthrax slurry, then decided to build a sprayer themselves. The results, described by Danzig and colleagues on the basis of interviews with Aum physician Tomomasa Nakagawa, are straight out of a cartoon:
The sprayer, according to Nakagawa, "broke down over and over again". For example, during an attack on a royal wedding parade, "the sprayer broke down at the very time we began to spray. The medium full of anthrax leaked from several pipes under the high pressure (500 atmospheres or more)." During an effort to disseminate the material from the cult's dojo in the Kameido neighborhood of Tokyo, mechanical difficulties with the sprayer made it, in Nakagawa's memorable words, "spout like a whale." The smell was so bad that numerous complaints of foul odors from neighbors provoked police inquiries.
This is not a picture on which it is simple to distribute pathogens in many thousands of locations at once.
4. Changing the subject
So far, we have seen that MacAskill makes familiar arguments on the basis of familiar data, which are known to and widely rejected by biosecurity experts. We saw that MacAskill makes perhaps one new argument: that it could be easy to manufacture and distribute great numbers of pathogens in many different places at once. However, this claim is at odds with the conclusions of many biosecurity specialists, and MacAskill provides no new support for it.
At this point, what we have been given is something of a gentle introduction to trends in biotechnology. What we are really looking for is an argument drawing these trends together to support high estimates of near-term existential biorisk. This poses a problem for MacAskill, because he doesn't have much evidence beyond the familiar trends and claims discussed above. So MacAskill changes the subject, from demonstrating the dangers of biothreats to lamenting the inadequacy of the precautions taken against them.
We would expect laboratories doing this research to have extremely high safety standards and the research to be very strictly regulated, with severe punishment for any lapses in safety. But in fact, the level of biosafety around the world is truly shocking.
For example, MacAskill reminds us that a single lab in the UK produced two separate outbreaks of foot and mouth disease in 2007; that the Soviet bioweapons program probably leaked anthrax and smallpox, leading to fatalities; and that smallpox has escaped from three separate UK labs.
Again, this isn’t what we need to be told. Insufficient safety precautions at some laboratories are indeed a tragedy, and they have led to preventable loss of life. But this does not get us anywhere closer to seeing how it will soon be possible to manufacture and distribute pathogens capable of causing existential catastrophe, nor does it support the claim that pathogens might be released in thousands of locations simultaneously.
5. Motivations
MacAskill next turns to a discussion of motivations for building destructive pathogens. He writes:
Even if it becomes possible to build pathogens that are far more destructive than foot-and-mouth or COVID-19, surely no one would want to do so? After all, bioweapons seem useless for warfare because it’s extremely difficult to target who is infected. If you create a virus to decimate the opposing side, it’s likely that the pandemic will invade your home country too. One can think up counterarguments. Perhaps, for example, the country deploying the bioweapons would first vaccinate its population against them; perhaps, as a deterrent, the country would create an automated system guaranteed to release such pathogens in the event of a nuclear attack. But the stronger counterargument is that, as a matter of fact, major bioweapons programs have been run.
Let’s not take our eyes off the prize. The question is not whether someone would build “pathogens that are far more destructive than foot-and-mouth or COVID-19”, but whether someone would build pathogens capable of causing an existential catastrophe. Although major bioweapons programs have been run, and have attempted to weaponize pathogens such as smallpox and anthrax, major bioweapons programs have not, to anyone’s knowledge, attempted to create pathogens capable of producing an existential catastrophe, so MacAskill’s “stronger counterargument” risks, once again, changing the subject.
MacAskill discusses the Soviet bioweapons program, which he estimates to have run for six decades and employed about 60,000 people at its height. We are, once again, in familiar territory: we saw in Part 13 of this series that Ord discusses similar figures from the Soviet bioweapons program. Ord improves on MacAskill's discussion in one respect: he reminds readers that many of the most threatening-sounding bioweapons developed by the Soviets were not operationally useful. MacAskill does not. We saw in Part 9 and Part 11 of this series that many experts have taken the Soviet bioweapons program as an illustration of the difficulty, rather than the ease, of creating existentially catastrophic pathogens, a fact mentioned by neither Ord nor MacAskill.
After a brief second discussion of laboratory leaks, MacAskill asserts the conclusion that we were looking to find a novel argument for:
I think it is difficult to rule out the possibility that synthetic biology could threaten human extinction.
Now, of course, if the claim is that we should not assign zero credence to this possibility, then MacAskill is quite right, but if MacAskill wants us to assign, say, 1% credence or higher to existential catastrophe from biological causes in this century, he really ought to provide more in the way of argument.
6. Experts
The next two paragraphs warn that efforts to mitigate existential biorisk may backfire by inspiring malicious actors to pursue bioweapons. I don’t know that this claim is especially germane to pressing the case for high levels of near-term existential biorisk, so let us pass on to MacAskill’s final argument: an appeal to expert opinion.
(1) Many extinction risk specialists consider engineered pandemics the second most likely cause of our demise this century, just behind artificial intelligence. (2) At the time of writing, the community forecasting platform Metaculus puts the probability of an engineered pandemic killing at least 95 percent of people by 2100 at 0.6 percent. (3) Experts I know typically put the probability of an extinction-level engineered pandemic this century at around 1 percent; (4) in his book The Precipice, my colleague Toby Ord puts the probability at 3 percent. [Numbers added]
Let’s take these forecasts in reverse order.
On (4): We discussed Ord’s estimate in Part 13 of this series. There, I reviewed every argument made by Ord. There wasn’t much in the way of support for anything like a three percent risk estimate, and most of the arguments were quite similar to MacAskill’s.
On (3): The claim about experts MacAskill knows assigning 1 percent credence or higher to extinction-level engineered pandemics in this century is deeply troubling. We saw in Parts 9-11 of this series that most genuine biosecurity experts are deeply skeptical of high estimates of existential biorisk. This leaves two possible bases for MacAskill's claim. The first is an insular conception of expertise on which only "extinction risk specialists", but not genuine biosecurity experts, count as experts on existential biorisk. For example, we saw in Part 12 that Piers Millett and Andrew Snyder-Beattie use the 2% median estimate provided by attendees at a conference hosted by the Future of Humanity Institute as the basis for one of three models of existential biorisk. We saw there that even Millett and Snyder-Beattie concede this is not a reliable way to estimate biorisk. Of course people who devote their time to studying existential biorisk think that the risk is nontrivial, but that might tell us more about those who study existential biorisk than it does about the risks themselves.
Another basis for MacAskill’s claim that experts he knows assign a 1% or greater probability to existential biorisk in this century would be a failure to consult or engage with the work of mainstream biosecurity experts. I don’t think this is what drove MacAskill’s estimate, but it would be troubling if it did.
On (2): There is a great deal to be said about the role of forecasting platforms in predicting existential risk. Let me begin with some facts that may interest readers sympathetic to forecasting. We saw in Part 11 that when the Existential Risk Persuasion Tournament asked superforecasters to predict the risk of an extinction-level engineered pandemic by 2100, they put the risk at 0.01%, not 0.6%. Crucially, superforecasters initially put the risk at 0.1%, but revised it down by a factor of 10 after reviewing the arguments. This suggests that superforecasters do not find the evidence for existential biorisk especially persuasive, which is consistent with the line of criticism advanced in this post.
Moving to concerns about forecasting long-term risks, Metaculus struggles at complex, low-evidence, long-term prediction tasks, and predicting existential biorisk in the next century certainly qualifies. Even those who think Metaculus performs significantly above chance concede this much.
Continuing this theme, let me cite some remarks by Tamay Besiroglu, associate director of Epoch. These were posted on Metaculus as a discussion of the community's existential risk predictions, and I think readers may be more receptive to these points when they are voiced by another:
While [Metaculus’ approach] performs well for predicting events that occur in months or in a few years … there is little evidence that this ability generalizes to long-term, decade-out forecasts. Moreover, there is no very strong reason to expect that the incentives created by the questions are particularly conducive to creating high-quality forecasts (see, e.g. this thread). There might also be substantial biases stemming from the fact that those most concerned with global catastrophic risks are also more likely to provide and update their forecasts.
Besiroglu does go on to say that if we must produce numerical forecasts, Metaculus’ method might be no worse than any other. But that is hardly a ringing endorsement!
On (1): We spoke about the reliance on “extinction risk specialists” in Part 12 of this series, where we saw that Piers Millett and Andrew Snyder-Beattie take the opinions of attendees at a catastrophic risk conference hosted by the Future of Humanity Institute as the basis for a model of existential biorisk. We saw that even Millett and Snyder-Beattie think this is a weak basis for modeling, and we saw that they are quite right to think this. Polling the faithful is not a particularly reliable way to assess the truth of their views.
Summing up, MacAskill cites four groups of people as predicting high levels of existential biorisk: (1) extinction risk specialists, (2) Metaculus forecasters, (3) uncited "experts", and (4) Ord in The Precipice. We saw that (1) and (2) are not reliable bases for estimates, that (3) mischaracterizes expert consensus, and that (4) was examined in Part 13 of this series and found to rest on little in the way of argument.
7. Taking stock
Today’s post looked at MacAskill’s arguments for high levels of existential biorisk in this century. We saw that MacAskill’s claim that experts he knows typically assign a probability of around 1% to existential biorisk in this century mischaracterizes expert consensus. We saw that MacAskill’s argument largely repeats basic facts familiar to experts and cited by Ord and other effective altruists.
MacAskill did make a few new arguments: for example, MacAskill worries that it might be easy to distribute hundreds of pathogens in thousands of locations in a short amount of time. We didn't find any support for this claim, and we saw that it conflicts with the conclusions of the biosecurity specialists surveyed earlier in this series.
We saw, as in our discussion of Ord, that MacAskill correctly draws attention to some failures of biosecurity and the weakness of oversight. This is concerning, but it doesn’t go that far towards establishing the claimed levels of existential risk without supplementation.
We saw that MacAskill gives four bases for estimated levels of existential biorisk this century in the neighborhood of 1%. We saw that these bases are not especially convincing.
MacAskill punctuates his discussion of existential biorisk with a story:
In 2017, I had dinner with Nicola Sturgeon, the first minister of Scotland, and was given the opportunity to pitch her on one policy. I chose pandemic preparedness, focusing on worst-case pandemics. Everyone laughed, and the host of the dinner, Sir Tom Hunter, joked that I was “freaking everyone oot”.
Readers are intended to sympathize with MacAskill: it is important to make sure that political leaders know and respond to future dangers. But at this point, one might sympathize increasingly with the dinner’s host. Effective altruists’ claims about existential biorisk are unprecedentedly strong, and unless they are supported by very good arguments, they might begin to look a bit more like strategies for “freaking everyone oot”, and a bit less like substantial evidence for surprising claims.
There are only so many times that one can cry “wolf!” before others stop responding. Is biosecurity really an area in which effective altruists want to be sounding the wolf call? If so, I hope they can show us a wolf.