Exaggerating the risks (Part 14: MacAskill on biorisk)

Typical estimates from experts I know put the probability of an extinction-level engineered pandemic this century at around 1%

Will MacAskill, What we owe the future


While most leading biological disarmament and non-proliferation experts believe that the risk of a small-scale bioterrorism attack is very real and very present, they consider the risk of sophisticated large-scale bioterrorism attacks to be very small.

Jefferson, Lentzos and Marris, “Synthetic biology and biosecurity: Challenging the ‘myths’”.

1. Introduction

This is Part 14 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.

Part 1 introduced the series. Parts 2-5 (sub-series: “Climate risk”) looked at climate risk. Parts 6-8 (sub-series: “AI risk”) looked at the Carlsmith report on power-seeking AI.

Parts 9, 10 and 11 began a new sub-series on biorisk. In Part 9, we saw that many leading effective altruists give estimates between 1.0% and 3.3% for the risk of existential catastrophe from biological causes by 2100. I think these estimates are a bit too high.

Because I have had a hard time getting effective altruists to tell me directly what the threat is supposed to be, my approach was to first survey the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk. Parts 9, 10 and 11 gave a dozen preliminary reasons for doubt, surveyed at the end of Part 11.

The second half of my approach is to show that initial arguments by effective altruists do not overcome the case for skepticism. Part 12 examined a series of risk estimates by Piers Millett and Andrew Snyder-Beattie. We saw, first, that many of these estimates are orders of magnitude lower than those returned by leading effective altruists and second, that Millett and Snyder-Beattie provide little in the way of credible support for even these estimates. Part 13 looked at Ord’s arguments in The Precipice. These did not support anything like Ord’s estimated 3.3% risk of existential catastrophe in the next 100 years.

Today’s post looks at another leading text: What we owe the future, by Will MacAskill. We saw in Part 9 of this series that MacAskill writes “Typical estimates from experts I know put the probability of an extinction-level engineered pandemic this century at around 1%”. We also saw in Parts 9-11 that this is not an accurate reflection of expert opinion: most experts are deeply skeptical of anything approaching a 1% level of existential biorisk in this century.

In this post, I ask what else MacAskill has to say in defense of high levels of existential biorisk. We will see that many of the arguments made by MacAskill are similar to Ord’s, as are the complaints to be raised: while MacAskill correctly points to a number of concerning trends in biotechnology and some lapses in biosecurity, MacAskill gives us mostly familiar facts with few new arguments, which together do not support a 1% level of existential biorisk in this century.

2. A matter of time

MacAskill’s primary discussion of biorisk takes place in Chapter 5, “Extinction”, under the heading “Engineered pathogens”. MacAskill begins by noting that the book was produced during the COVID-19 pandemic, which led to millions of excess deaths and trillions of dollars in economic damages.

Although the COVID-19 pandemic did not pose an existential threat to humanity, MacAskill argues that future pandemics may be a good deal worse, because they could be driven by pathogens engineered to be more transmissible and more deadly.

MacAskill begins by discussing the declining cost of technologies such as gene sequencing and gene editing:

Progress in [biotechnology] has been extremely rapid. We typically think Moore’s law – halving the cost of computing power every few years – is the prime example of quick progress, but many technologies in synthetic biology actually improved faster than that. For example, the first time we sequenced the human genome, it cost hundreds of millions of dollars to do so. Just twenty years later, sequencing a full human genome costs around $1,000. A similar story is true for the cost to synthesise single-strand DNA, as well as the cost of gene editing.

We saw in Part 13 of this series that Toby Ord makes a similar point based on many of the same statistical trends. We saw then that Ord struggles to connect these trends, which are common knowledge among skeptical experts, to a new argument for substantial levels of existential biorisk in this century. Does MacAskill do better?

Certainly, MacAskill assures us, there is no evidence that technologies available today pose high levels of existential risk. Nevertheless, MacAskill claims, it is “only a matter of time” before existential threats emerge.

Engineered pathogens could be much more destructive than natural pathogens because they can be modified to have dangerous new properties. Could someone design a pathogen with maximum destructive power – something with the lethality of Ebola and the contagiousness of measles? Thankfully, with current technology this would be at least very difficult. But given the rate of progress in this area, it’s only a matter of time.

One feature of this passage worth remarking on is that it understates the target. The lethality of Ebola (about 50%) and the contagiousness of measles (R0 in the range of 12-18, with some disagreement) are well below what would be needed to produce a genuinely existential threat.

We also saw earlier in this series that it is not so easy to produce a pathogen which combines many problematic traits, such as high lethality and contagiousness. For example, we saw that Filippa Lentzos, reflecting on Ray Zilinskas’s work on the Soviet bioweapons program, draws the following conclusion:

Some of the key lessons that came out of the Soviet programme go to this issue of pleiotropy, where a single gene affects more than one characteristic. And if you change that gene to improve one of the characteristics you’re going for in that list of 12, the chances are you’re going to diminish one or more of the other characteristics that are desirable. So getting something that works is extraordinarily difficult.

More generally, we saw a dozen reasons in Parts 9-11 of this series to be skeptical of high levels of near-term existential biorisk.

3. What the argument might be

Does MacAskill give us reasons to think that pathogens will soon be developed which pose a high risk of existential catastrophe? MacAskill, like Ord, points to the growing democratization of biotechnology:

Not only is biotechnology rapidly improving; it is becoming increasingly democratised. The genetic recipe for smallpox is already freely available online. In a sense, we were “lucky” with nuclear weapons insofar as fissile material is incredibly hard to manufacture. The capability to do so is therefore limited to governments, and it is comparatively easy for outside observers to tell whether a country has a nuclear weapons programme. This is not so for engineered pathogens: in principle, with continued technological progress, viruses could be designed and produced with at-home kits. In the future, cost and skill barriers are likely to decline. Moreover, in the past we only had to deal with one pandemic at a time, and usually some people had natural immunity; in contrast, if it’s possible to engineer one type of new highly destructive pathogen, then it’s not that much harder to manufacture hundreds more, nor is it difficult to distribute them in thousands of locations around the world at once.

Until perhaps the last sentence, there is nothing new here. We discussed the risk of ‘do-it-yourself’ science in Part 10 of this series. There, we saw that a paper by David Sarapong and colleagues laments “Sensational and alarmist headlines about DiY science” which “argue that the practice could serve as a context for inducing rogue science which could potentially lead to a ‘zombie apocalypse’.” These experts find little empirical support for any such claims.

That skepticism is echoed by most leading experts and policymakers. For example, we also saw in Part 10 that a study of risks from synthetic biology by Catherine Jefferson and colleagues decries the “myths” that “synthetic biology could be used to design radically new pathogens” and “terrorists want to pursue biological weapons for high consequence, mass casualty attacks”, concluding:

Any bioterrorism attack will most likely be one using a pathogen strain with less than optimal characteristics disseminated through crude delivery methods under imperfect conditions, and the potential casualties of such an attack are likely to be much lower than the mass casualty scenarios frequently portrayed. This is not to say that speculative thinking should be discounted … however, problems arise when these speculative scenarios for the future are distorted and portrayed as scientific reality.

The previous three paragraphs have been copied from my discussion of Ord in Part 13 because, once again, what we see from MacAskill is a repetition of the same basic facts appealed to by other effective altruists, which are known to and dismissed by experts, rather than the production of a novel argument.

There might be a novel argument in the last sentence of MacAskill’s discussion, namely:

Moreover, in the past we only had to deal with one pandemic at a time, and usually some people had natural immunity; in contrast, if it’s possible to engineer one type of new highly destructive pathogen, then it’s not that much harder to manufacture hundreds more, nor is it difficult to distribute them in thousands of locations around the world at once.

Only, if there is an argument here it is at odds with expert consensus and requires substantial support. We saw in Part 11 of this series that even if it becomes possible for leading laboratories to engineer a destructive pathogen, most individuals and groups may face substantial difficulties in producing and distributing such pathogens. For example, in a book-length study, Barriers to bioweapons, biosecurity expert Sonia Ben Ouagrham-Gormley explains:

The challenge in developing biological weapons lies not in the acquisition but in the use of the material and technologies required for their development. Put differently, in the bioweapons field, expertise and knowledge—and the conditions under which scientific work occurs—are significantly greater barriers to weapons development than are procurement of biomaterials, scientific documents, and equipment. Contrary to popular belief, however, this specialized bioweapons knowledge is not easily acquired. Therefore, current threat assessments that focus exclusively on the formative stage of a bioweapons program and measure a program’s progress by tracking material and technology procurement are bound to overestimate the threat.

This means that it is not plausible, without substantial argument, that malicious individuals and groups will be able to manufacture even one, let alone hundreds of varieties of the most dangerous pathogens.

Importantly, we also saw that there are substantial difficulties involved in distributing pathogens. MacAskill claims that it should not be difficult to release harmful pathogens in thousands of locations in the world at once, but we saw in Part 11 that one of the most sophisticated recent actors, Aum Shinrikyo, struggled substantially with distribution.

A recent report on Aum by Richard Danzig and colleagues draws a series of lessons, the second of which is the following:

Effectively disseminating biological and chemical agents was challenging for Aum. Difficulties of this kind are likely to burden other groups.

How challenging was dissemination for Aum?

Consider the group’s attempts to spray various targets, including neighbors and a royal wedding parade, with anthrax. They tried and failed to procure an appropriate sprayer to disseminate a crude anthrax slurry, then decided to make a sprayer themselves. The results, described by Danzig and colleagues on the basis of interviews with Aum physician Tomomasa Nakagawa, are straight out of a cartoon:

The sprayer, according to Nakagawa, “broke down over and over again”. For example, during an attack on a royal wedding parade, “the sprayer broke down at the very time we began to spray. The medium full of anthrax leaked from several pipes under the high pressure (500 atmospheres or more).” During an effort to disseminate the material from the cult’s dojo in the Kameido neighborhood of Tokyo, mechanical difficulties with the sprayer made it, in Nakagawa’s memorable words, “spout like a whale.” The smell was so bad that numerous complaints of foul odors from neighbors provoked police inquiries.

This is not a picture on which it is simple to distribute pathogens in many thousands of locations at once.

4. Changing the subject

So far, we have seen that MacAskill makes familiar arguments on the basis of familiar data, which are known to and widely rejected by biosecurity experts. MacAskill makes, we saw, perhaps one new argument: it could be very easy to manufacture and distribute great numbers of pathogens in many different places at once. However, we saw that this is at odds with the conclusions of many biosecurity specialists, and that MacAskill provides no new support for this claim.

At this point, what we have been given is something of a gentle introduction to trends in biotechnology. What we are really looking for is an argument drawing together these trends to support high estimates of near-term existential biorisk. This poses a problem for MacAskill, because he doesn’t have much evidence beyond the familiar trends and claims discussed above. Therefore, MacAskill changes the subject from demonstrating the dangers of biothreats to lamenting the levels of precautions taken to prevent them.

We would expect laboratories doing this research to have extremely high safety standards and the research to be very strictly regulated, with severe punishment for any lapses in safety. But in fact, the level of biosafety around the world is truly shocking.

For example, MacAskill reminds us that a single lab in the UK produced two separate outbreaks of foot-and-mouth disease in 2007; that the Soviet bioweapons program probably leaked anthrax and smallpox, leading to fatalities; and that smallpox has escaped from three separate UK labs.

Again, this isn’t what we need to be told. Insufficient safety precautions at some laboratories are indeed a tragedy, and they have led to preventable loss of life. But this does not get us anywhere closer to seeing how it will soon be possible to manufacture and distribute pathogens capable of causing existential catastrophe, nor does it support the claim that pathogens might be released in thousands of locations simultaneously.

5. Motivations

MacAskill next turns to a discussion of motivations for building destructive pathogens. He writes:

Even if it becomes possible to build pathogens that are far more destructive than foot-and-mouth or COVID-19, surely no one would want to do so? After all, bioweapons seem useless for warfare because it’s extremely difficult to target who is infected. If you create a virus to decimate the opposing side, it’s likely that the pandemic will invade your home country too. One can think up counterarguments. Perhaps, for example, the country deploying the bioweapons would first vaccinate its population against them; perhaps, as a deterrent, the country would create an automated system guaranteed to release such pathogens in the event of a nuclear attack. But the stronger counterargument is that, as a matter of fact, major bioweapons programs have been run.

Let’s not take our eyes off the prize. The question is not whether someone would build “pathogens that are far more destructive than foot-and-mouth or COVID-19”, but whether someone would build pathogens capable of causing an existential catastrophe. Although major bioweapons programs have been run, and have attempted to weaponize pathogens such as smallpox and anthrax, major bioweapons programs have not, to anyone’s knowledge, attempted to create pathogens capable of producing an existential catastrophe, so MacAskill’s “stronger counterargument” risks, once again, changing the subject.

MacAskill discusses the Soviet bioweapons program, which MacAskill estimates to have run for six decades and employed about 60,000 people at its height. We are, once again, in familiar territory: we saw in Part 13 of this series that Ord discusses similar figures from the Soviet bioweapons program. Ord improves on MacAskill’s discussion in one respect: he reminds readers that many of the most threatening-sounding bioweapons developed by the Soviets were not operationally useful. MacAskill does not. We saw in Part 9 and Part 11 of this series that many experts have taken the Soviet bioweapons program as an illustration of the difficulty, rather than the ease, of creating existentially catastrophic pathogens, a fact not mentioned by Ord or MacAskill.

After a brief second discussion of laboratory leaks, MacAskill asserts the conclusion that we were looking to find a novel argument for:

I think it is difficult to rule out the possibility that synthetic biology could threaten human extinction.

Now, of course, if the claim is that we should not assign zero credence to this possibility, then MacAskill is quite right, but if MacAskill wants us to assign, say, 1% credence or higher to existential catastrophe from biological causes in this century, he really ought to provide more in the way of argument.

6. Experts

The next two paragraphs warn that efforts to mitigate existential biorisk may backfire by inspiring malicious actors to pursue bioweapons. I don’t know that this claim is especially germane to pressing the case for high levels of near-term existential biorisk, so let us pass on to MacAskill’s final argument: an appeal to expert opinion.

(1) Many extinction risk specialists consider engineered pandemics the second most likely cause of our demise this century, just behind artificial intelligence. (2) At the time of writing, the community forecasting platform Metaculus puts the probability of an engineered pandemic killing at least 95 percent of people by 2100 at 0.6 percent. (3) Experts I know typically put the probability of an extinction-level engineered pandemic this century at around 1 percent; (4) in his book The Precipice, my colleague Toby Ord puts the probability at 3 percent. [Numbers added]

Let’s take these forecasts in reverse order.

On (4): We discussed Ord’s estimate in Part 13 of this series. There, I reviewed every argument made by Ord. There wasn’t much in the way of support for anything like a three percent risk estimate, and most of the arguments were quite similar to MacAskill’s.

On (3): The claim about experts MacAskill knows assigning 1 percent credence or higher to extinction-level engineered pandemics in this century is deeply troubling. We saw in Parts 9-11 of this series that most genuine biosecurity experts are deeply skeptical of high estimates of existential biorisk. This leaves two possible bases for MacAskill’s claim. The first is an insular conception of expertise on which only ‘extinction risk specialists’, but not genuine biosecurity experts, count as experts on existential biorisk. For example, we saw in Part 12 that Piers Millett and Andrew Snyder-Beattie use the 2% median estimate provided by attendees at a conference hosted by the Future of Humanity Institute as the basis for one of three models of existential biorisk. We saw there that even Millett and Snyder-Beattie concede this is not a reliable way to estimate biorisk. Of course people who devote their time to studying existential biorisk think that the risk is nontrivial, but that might tell us more about those who study existential biorisk than it does about the risks themselves.

Another basis for MacAskill’s claim that experts he knows assign a 1% or greater probability to existential biorisk in this century would be a failure to consult or engage with the work of mainstream biosecurity experts. I don’t think this is what drove MacAskill’s estimate, but it would be troubling if it did.

On (2): There is a great deal to be said about the role of forecasting platforms in predicting existential risk. Let me begin with some facts that may interest readers sympathetic to forecasting. We saw in Part 11 that when the existential risk persuasion tournament asked superforecasters to predict the risk of existential catastrophe by 2100, they put the risk at 0.01%, not 0.6%. Crucially, superforecasters initially put the risk at 0.1%, but revised it down by a factor of 10 after reviewing the arguments. This suggests that superforecasters do not take the evidence behind existential biorisk to be especially persuasive, which is consistent with the line of criticism advanced in this post.

Moving to concerns about forecasting long-term risks, Metaculus struggles at complex, low-evidence, long-term prediction tasks, and predicting existential biorisk in the next century certainly qualifies. Even those who think Metaculus performs significantly above chance concede this much.

Continuing this theme, let me cite some remarks by the associate director of Epoch research, Tamay Besiroglu. These were posted on Metaculus as a discussion of the community’s existential risk predictions, and I think readers may be more receptive to these points if they are voiced by another:

While [Metaculus’ approach] performs well for predicting events that occur in months or in a few years … there is little evidence that this ability generalizes to long-term, decade-out forecasts. Moreover, there is no very strong reason to expect that the incentives created by the questions are particularly conducive to creating high-quality forecasts (see, e.g. this thread). There might also be substantial biases stemming from the fact that those most concerned with global catastrophic risks are also more likely to provide and update their forecasts.

Besiroglu does go on to say that if we must produce numerical forecasts, Metaculus’ method might be no worse than any other. But that is hardly a ringing endorsement!

On (1): We spoke about the reliance on “extinction risk specialists” in Part 12 of this series, where we saw that Piers Millett and Andrew Snyder-Beattie take the opinions of attendees at a catastrophic risk conference hosted by the Future of Humanity Institute as the basis for a model of existential biorisk. We saw that even Millett and Snyder-Beattie think this is a weak basis for modeling, and we saw that they are quite right to think this. Polling the faithful is not a particularly reliable way to assess the truth of their views.

Summing up, MacAskill cites four groups of people as predicting high levels of existential biorisk: (1) existential risk specialists, (2) Metaculus forecasters, (3) uncited “experts”, and (4) Ord in The Precipice. We saw that (1) and (2) are not reliable bases for estimates, (3) mischaracterizes expert consensus, and (4) was discussed in Part 13 of this series.

7. Taking stock

Today’s post looked at MacAskill’s arguments for high levels of existential biorisk in this century. We saw that MacAskill’s claim that experts he knows typically assign a probability of around 1% to existential biorisk in this century mischaracterizes expert consensus. We saw that MacAskill’s argument largely repeats basic facts familiar to experts and cited by Ord and other effective altruists.

MacAskill did make one arguably new argument: that it might be easy to distribute hundreds of engineered pathogens in thousands of locations in a short amount of time. We found no support for this claim, and we saw that it sits uneasily with the expert assessments reviewed earlier in this series.

We saw, as in our discussion of Ord, that MacAskill correctly draws attention to some failures of biosecurity and the weakness of oversight. This is concerning, but it doesn’t go that far towards establishing the claimed levels of existential risk without supplementation.

We saw that MacAskill gives four bases for estimated levels of existential biorisk this century in the neighborhood of 1%. We saw that these bases are not especially convincing.

MacAskill punctuates his discussion of existential biorisk with a story:

In 2017, I had dinner with Nicola Sturgeon, the first minister of Scotland, and was given the opportunity to pitch her on one policy. I chose pandemic preparedness, focusing on worst-case pandemics. Everyone laughed, and the host of the dinner, Sir Tom Hunter, joked that I was “freaking everyone oot”.

Readers are intended to sympathize with MacAskill: it is important to make sure that political leaders know and respond to future dangers. But at this point, one might sympathize increasingly with the dinner’s host. Effective altruists’ claims about existential biorisk are unprecedentedly strong, and unless they are supported by very good arguments, they might begin to look a bit more like strategies for “freaking everyone oot”, and a bit less like substantial evidence for surprising claims.

There are only so many times that one can cry “wolf!” before others stop responding. Is biosecurity really an area in which effective altruists want to be sounding the wolf call? If so, I hope they can show us a wolf.

Comments


  1. Kai Williams

    Thanks for the article! I’ve read the blog for a while, and really appreciate this series. I had one quick comment to add on (2), the Metaculus prediction. I haven’t engaged directly with prediction markets, but the sense I get is that low probability markets are not profitable for those betting against the proposition, so they also tend to overestimate these types of questions. In this example, even if it resolved quickly you are at best getting a 0.6% premium on your investment, so not enough people are going to buy the “No” side to push the number down to a calibrated figure.
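    A rough sketch of this incentive problem, using illustrative numbers (the quoted 0.6 percent market price, and resolution in 2100, so roughly 77 years of locked-up capital):

```python
# Rough sketch of the incentive problem above (illustrative numbers only).
# Buying "No" on a market priced at 0.6% "Yes" costs 0.994 per share and
# pays out 1 at resolution, here assumed to be in 2100 (~77 years away).

yes_price = 0.006          # market probability of the event ("Yes" share price)
no_cost = 1 - yes_price    # cost of one "No" share
payoff = 1.0               # payout per "No" share if the event does not occur
years = 77                 # capital is locked until the question resolves

total_return = payoff / no_cost - 1             # ~0.6% over the whole period
annualized = (1 + total_return) ** (1 / years) - 1  # well under 0.01% per year

print(f"total return: {total_return:.4%}")
print(f"annualized:   {annualized:.6%}")
```

    On these assumptions, even a forecaster certain the true probability is far lower earns well under 0.01% per year by buying “No”, so there is very little pressure pushing the price down to a calibrated figure.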

    1. David Thorstad

      Hi Kai,

      Thanks for your readership, and for the comment! That’s a very good point. I wouldn’t have thought to raise that point.

  2. Vasco Grilo

    Thanks for the post, David!

    You may be interested in the estimates for the likelihood of wildfire and stealth pandemics provided in the Appendix of this report (https://dam.gcsp.ch/files/doc/securing-civilisation-against-catastrophic-pandemics-gp-31). The 2 types of pandemics were discussed on the episode of The 80,000 Hours Podcast with Kevin Esvelt (https://80000hours.org/podcast/episodes/kevin-esvelt-stealth-wildfire-pandemics/). They are defined as follows in the report:
    – Wildfire pandemic: “One or more sufficiently debilitating and transmissible pandemic agents trigger civilisational collapse by interrupting the distribution of food, water, power and law enforcement”.
    – Stealth pandemic: “An initially mild or asymptomatic agent with a long incubation period infects most of humankind before its harmful effects become apparent”.

    The risk of a deliberate wildfire pandemic is estimated to be 22.2%/year (Table A1), and that of a deliberate stealth pandemic 13.4%/year (Table A2). I am confused by these high estimates, as they would imply a much higher frequency of such pandemics than what we have observed. I wonder whether they are supposed to refer to future risk, and wish the authors had defined the 2 types of pandemics more precisely in terms of a given death toll as a fraction of the global population. For what it’s worth, when I skimmed the back-of-the-envelope calculations used to obtain the risk estimates, I got the sense risk was being overestimated in multiple steps. Still, I am glad the authors shared how they got their estimates, which has often not been done in the context of extreme risk (e.g. Toby Ord’s estimates in The Precipice).

    1. David Thorstad

      Thanks Vasco!

      I’m sorry for the late reply. I got back from a very fun conference in Italy and promptly proceeded to sleep for twenty hours straight.

      Thanks for the link. You’re absolutely right – while this doesn’t approach the standard of evidence and explicitness we’d like to see, there’s enough detail in the linked report that it might be worth separately engaging with. I share your sense that the high range of the estimates in Table A1 is probably too high. I’d also like to see more evidence linking these pandemics to civilizational collapse. I’ll take a more detailed look at the report later, but from what I can see this is the least-evidenced part of the report. There is something of a pattern in biorisk discourse of providing good evidence for the least-catastrophic conclusions (likelihood of lab-leaks, likelihood of infection) and providing rather less evidence for the more catastrophic conclusions (civilizational collapse, human extinction).

      Certainly, Esvelt is a serious guy with good credentials in biosecurity. His work deserves to be taken seriously.

  3. David Manheim

    To start, I think that you’ve made a slightly unfair omission or stretch in contrasting “many leading effective altruists give estimates between 1.0-3.3% for the risk of existential catastrophe from biological causes by 2100” with “the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk.” When I have spoken to those experts, as I did several years prior to most of this discussion – https://www.openphilanthropy.org/grants/david-manheim-research-on-existential-risk-from-infectious-disease/ – they said that intermediate and long term biorisk – say, by 2050 – was worryingly high, but wasn’t their focus.

    In the wake of that, partly spurred by concerns of both experts and EAs, I investigated the risk of multiple disease outbreaks, i.e. multi-disease syndemics, https://www.biorxiv.org/content/10.1101/401018v1.full , and concluded, on the basis of arguments like those you made about public health response, that “(absent much clearer reasons for concern) the suggested dangers are overstated.” But I said they were overstated in comparison to very naive estimates that concluded that the impacts of multiple diseases would be multiplicative, not in comparison to the claim that there was a single-digit risk this century. So even then I did not think the case was nearly as strong as you claim, and I would further temper my optimism in the wake of COVID-19.

    All of that said, I also agree that these estimates are difficult, and I agree that extinction from bioterrorism narrowly is very unlikely – though I don’t understand the confidence of those who put the probability of biorisks playing a significant role in extinction below 1%. Relatedly, I also think much of the better argument to focus on these risks is that if they occur, they are likely to be catastrophic, so they are worth preventing, and that they would destabilize the world in ways that make existential catastrophe much more likely, rather than directly cause extinction. But that’s a far broader discussion.

    1. David Thorstad

      Thanks David! Could you provide a link to the results of the first investigation? The Open Philanthropy page linked is an overview of what the grant was tasked with, but I could not find a link to the research produced through this grant.

    2. David Thorstad

      Thanks David!

      I am glad to hear that you think appeals to multi-disease syndemics don’t go as far as some effective altruists suggest. This is exactly the reason that I urge such claims to be made and supported in more detail, and I appreciate the care you put into investigating this claim.

      The paper that came out of your OpenPhil investigation, “Questioning estimates of natural pandemic risk”, is certainly important. I take it you don’t think that natural pandemics pose a significant existential (as opposed to catastrophic) risk in this century? If that is right, we might do better to focus on existential risk for this discussion.

      I’m not very enthusiastic about the practice of supporting existential risk estimates with secret models and secret testimony. The situation that you describe, where serious engagement with existential risk claims would be viewed as discrediting, is not one that I view as especially supportive of high existential risk estimates. But at the same time, for the reasons discussed in Part 11 of this series, effective altruists are not in a position to expect much credence to be lent to existential risk claims if they are not willing to publicly disclose strong evidence for those claims.

      I’d like to hear more about why you are puzzled by the fact that many people place far less than 1% credence in existential catastrophe from biological causes in this century. A 1% chance of extinction in a single century from any cause is frighteningly high, and in the absence of strong reasons to expect a threat of this magnitude, we should probably expect most things to pose much less of a risk than this.

      I certainly agree that it is more productive to discuss catastrophic than existential biorisk. We just survived a catastrophic pandemic, and it was not fun.

      At the same time, it is worth recalling the purpose of this series. Effective altruists give very high estimates of the levels of existential risk facing humanity today. They do not always provide the level of support that might be expected for these claims. The purpose of this series is to suggest that claimed levels of existential risk may be exaggerated. Many effective altruists think that biorisk is second only to AI risk in the level of existential risk that it poses, so it would be a major blow to standard risk estimates if biorisk discourse were to shift from existential to catastrophic risk.

        1. David Manheim

        First, I agree with your point that secret testimony is useless as evidence; I was just pointing out that I did work that couldn’t be disclosed. However, I think it’s entirely reasonable that government officials are unwilling to say true things publicly when their role would be undermined if they did so.

        Second, I agree that natural pandemic risk is primarily catastrophic, but in the paper I gave several reasons why the evidence against even existential risk is deeply flawed. That doesn’t mean I think the risk is high, but I do think the typical baseline claim, that it’s below 1/200,000 per year based on human history, rests on flawed reasoning.

        Third, I don’t understand your claim that because the risk would be “frighteningly high,” “we should probably expect most things to pose much less of a risk.” How is this relevant? Technological risks don’t need to follow the intuitive historical distribution. We discovered nuclear weapons, which pose a non-trivial existential risk, so our base rate of “inventing tech that could kill everyone” jumped from zero to something like once per century, and it seems reasonable that the rate of technological risks would increase as we develop even more powerful technologies. I think there are strong reasons to think that the risk from new technologies is increasing over the coming decades, and while I agree that it’s under 0.01% for 2024, that is very different from claiming it’s under 1% for the coming century. The degree of implicit confidence that engineering biology cannot be dangerous, or is fairly strongly defense-dominant, seems absurd to me. I’ll point here: https://www.lesswrong.com/posts/sJPbmm8Gd34vGYrKd/deep-atheism-and-ai-risk#What_s_the_problem_with_trust_ to gesture, in a vague and general way, at why I disagree with most people’s confidence that reality isn’t that dangerous.
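        [Editor's note: the per-year versus per-century arithmetic in this exchange can be made concrete. The sketch below is a simplification assuming independent annual probabilities; the 5% annual growth rate is purely illustrative and is not a figure anyone in the discussion endorses.]

```python
def cumulative_risk(annual_probs):
    """Probability of at least one event, given independent annual probabilities."""
    survival = 1.0
    for p in annual_probs:
        survival *= 1.0 - p
    return 1.0 - survival

# A constant 0.01% annual risk compounds to roughly 1% over a century:
constant = cumulative_risk([0.0001] * 100)

# If the annual risk instead grows over time (hypothetically, 5% per year),
# the century total is many times larger than the naive constant-rate figure:
growing = cumulative_risk([0.0001 * 1.05 ** t for t in range(100)])
```

        This is why "under 0.01% this year" and "under 1% this century" come apart: the century figure depends heavily on whether the annual rate is assumed constant or rising.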

        Lastly, I would not call COVID-19 “catastrophic” in the sense of GCRs and GCBRs; it was horrible, but it didn’t get anywhere near the level of what we expect from an actually dangerous catastrophe.

  4. David Manheim

    A number of experts in government were unable to comment on the record, even to the extent that I could quote them or use their views in aggregate, so the direct outputs were not published, though parts were shared with Open Phil. I understand that other investigators have found the same – most experts who have agreed the risk is non-trivial will not speak on the record about any estimates that might be seen as sensationalist and thereby undermine their credibility. So I can’t tell you which experts are being referred to, which I absolutely agree is a real problem with external people evaluating the credibility of the claims.

    This work did lead to the analysis published in Health Security summarizing and extending arguments against the claims that natural existential pandemic risk is low ( https://www.liebertpub.com/doi/full/10.1089/hs.2018.0039 ), and to the linked preprint investigating the claimed non-natural multi-disease pandemic threat model.
