In 2017, I had dinner with Nicola Sturgeon, the first minister of Scotland, and was given the opportunity to pitch her on one policy. I chose pandemic preparedness, focusing on worst-case pandemics. Everyone laughed, and the host of the dinner, Sir Tom Hunter, joked that I was “freaking everyone oot”.
Will MacAskill, What We Owe the Future
1. Recap
This is Part 9 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk appear to have been exaggerated.
This series is getting a bit long, so I’ve reorganized it into sub-series, one for each type of risk. Part 1 introduced the series. Parts 2-5 (sub-series: “Climate risk”) looked at climate risk, focusing on Toby Ord’s treatment in The Precipice (Part 2) and the Halstead report on climate change and longtermism (Parts 3-5).
Parts 6-8 (sub-series: “AI risk”) looked at the Carlsmith report on power-seeking AI.
Many effective altruists rank biological risks, such as engineered pandemics, as second only to AI risk in severity. For example, in The Precipice, Toby Ord puts the chance of irreversible existential catastrophe by 2100 from engineered pandemics at 1 in 30, or about 3.3%. I think these numbers are a bit too high.
This post begins a sub-series, “Biorisk,” arguing that levels of existential biorisk may be lower than many effective altruists suppose.
I have to admit that this series was hard to write, because I’ve had a hard time getting people to tell me exactly what the risk is supposed to be. I’ve approached high-ranking effective altruists, including those working in biosecurity, and been told that they cannot give me details, because those details pose an information hazard. (Apparently the hazard is so great that they refused my suggestion of bringing their concerns to a leading government health agency, or even to the CIA, for independent evaluation.) One of them told me point-blank that they understood why I could not believe them if they were not willing to argue in detail for their views, and that they could live with that result.
I’ve asked several times on EA-related Slack channels and social media for recommendations of a single paper or book making a detailed case that existential biorisk is high, but have come up largely empty. Because of this, I feel somewhat as though I am taking a stab in the dark, without knowing precisely what I am meant to be arguing against. It is a bit of an awkward situation, but I will try to make do.
The current plan for this series is as follows. I’ll write three posts giving preliminary reasons to suspect that existential biorisk might not be as high as many effective altruists think it is. Then I’ll review two prominent assessments of existential biorisk. We’ll see how well I do.
2. Existential biorisk
We began this series with a distinction between two types of risks.
Effective altruists care deeply about catastrophic risks, risks “with the potential to wreak death and destruction on a global scale”. So do I. Catastrophes are not hard to find. The world is emerging from a global pandemic. There are ongoing genocides throughout the world. And nuclear saber-rattling is on its way to becoming a new international sport. Identifying and stopping potential catastrophes is an effort worth our while.
But effective altruists are also deeply concerned about existential risks, risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development”. Most catastrophic risks are not existential risks. A hundred million deaths would not pose an existential risk. Nor, in many scenarios, would a billion deaths. Existential risks are literal risks of human extinction, or the permanent destruction of our ability to develop as a species.
There should be little doubt that biological hazards (“biorisks”) pose a significant catastrophic risk to humanity in this century. The world just experienced a global catastrophe in the form of the COVID-19 pandemic, and we are ill-prepared to prevent or respond to another. Catastrophic biorisks are important and neglected, and effective altruists are right to worry about them.
However, many effective altruists hold that there is a significant chance of existential catastrophe from biological causes.
In The Precipice, Toby Ord estimates the chance of irreversible existential catastrophe by 2100 due to engineered pandemics alone at 1 in 30, second only to risks posed by artificial intelligence.
Participants at the 2008 Oxford Global Catastrophic Risks Conference estimated a median 2% chance of extinction by 2100 from engineered pandemics.
In What We Owe the Future, Will MacAskill writes that “Typical estimates from experts I know put the probability of an extinction-level engineered pandemic this century at around 1%”.
These are very high numbers, and they need to be supported by a good deal of solid evidence. Later in this sub-series, I will review leading arguments for the view that biorisks pose a significant existential threat in this century. I will show that these arguments fall considerably short of grounding anything like the above estimates. Crucially, we will also see that expert consensus is far more skeptical than MacAskill suggests: his claim could only be justified on a view which treats effective altruists as the lone experts on existential biorisk and dismisses leading scientists, policymakers, and biosecurity professionals as non-experts.
In the meantime, I want to give some initial reasons for skepticism about existential biorisk. That is not to say that the bulk of the case against high estimates of existential biorisk rests on these reasons; it rests, instead, on the inability of effective altruists to provide plausible arguments in support of their risk estimates. But it does seem appropriate to begin by saying why many are skeptical of existential biorisk claims.
3. The difficulty of the problem
The main reason why scientists and policymakers are skeptical of existential biorisk is that it is terribly hard to engineer a pandemic that kills everyone.
First, you would need to reach everyone. The virus would have to be transmitted to the most rural and isolated corners of the earth; to Antarctic research stations; to ships at sea, including nuclear submarines on uncharted, long-term and isolated paths; to doomsday preppers in their bunkers; to hermits and uncontacted tribes; to astronauts in space; to each new child born every second; to island nations; and so on. And you would need a transmission mechanism that could spread the virus this far without being detected: otherwise, those with the means would be whisked away to safety and might well survive.
Second, you would need a virus that was virtually undetectable until all, or nearly all, humans had been infected. That conflicts starkly with the goal of producing a virus that is 100% lethal, since lethal viruses tend to leave a trace as they spread through a population.
Third, you would need a virus that is unprecedentedly infectious. The virus would need to be capable of being transmitted, without fail, to every human being on the planet, in sufficient quantities to actually make them sick. It would need to avoid respirators and other forms of protective equipment. And it would have to maintain its transmissibility throughout many generations of mutation.
Fourth, you would need a virus that is unprecedentedly lethal, killing not 90%, 99%, or 99.999% but effectively all of those infected by it, no matter their age, health, or genetic makeup. This lethality would need to be preserved even against the best medical treatments, including quite possibly vaccines or synthesized antibodies. And it would have to be maintained throughout many generations of mutation despite selective pressures towards less lethal variants.
Fifth, you would need to find a way to evade basic public health measures such as masking and social distancing. This isn’t as easy as it sounds. How do you transmit a virus to someone who doesn’t leave their house?
Sixth, you would need the technological capability and equipment to synthesize the hypothesized biological agent, something it is widely agreed that humanity currently lacks.
Finally, you would need to find someone crazy enough to manufacture and deploy this biological agent, yet also competent enough to pull it off, which we have seen is no easy feat.
I am not sure if I would go so far as to say that the above is physically impossible. But without a very good argument, we should regard it as highly implausible that all of the above will come to pass by the end of the century.
4. Human history
We have experienced a great number of catastrophic pandemics throughout human history. These pandemics have all fallen far short of existential catastrophe: indeed, they have often failed to cause even local forms of societal collapse, and have only extremely rarely, if ever, caused a region to become uninhabited for any significant amount of time.
One of the best-known pandemics is the “Black Death”, which swept Europe from 1346 to 1352. Though the pandemic led to massive loss of life, there is little evidence of societal collapse or even breakdown of large-scale social institutions. The historian John Haldon and colleagues write:
Claims of mortality rates of as much as 50% give the impression that this event must have been devastating for the societies affected. Yet when we examine how different states and societies responded, we find that—without minimizing the terrible impact on people and communities—the medieval world did not grind to a halt, still less did a series of revolutionary transformations occur. Indeed, the Black Death struck at the beginning of the Hundred Years’ War, and in spite of its demographic impact both the kingdoms of England and France continued to field effective armies, even if there was a brief pause in hostilities (similar to contemporary calls for ceasefires in ongoing international conflicts in the context of COVID-19). Instead, some societal developments that were already under way accelerated while various groups within society responded by exploiting their situation and attempting to slow down, stop or otherwise control changes which they perceived as disadvantageous.
Still more devastating pandemics, combined with other hazards such as brutal colonial expansion, have frequently failed to produce even regional societal collapse. For example, the political scientist Alberto Diaz-Cayeros and colleagues study the combined effect of pandemics, violence, and other colonialist disasters on Mexico from the mid-1500s onwards. Despite dramatic loss of life (average settlement population fell 95% by 1646), most settlements survived, and 13% even grew by the end of the colonial period. They write:
We develop a new disaggregated dataset on pre-Conquest economic, epidemiological and political conditions both in 11,888 potential settlement locations in the historic core of Mexico and in 1,093 actual Conquest-era city-settlements. Of these 1,093 settlements, we show that 36% had disappeared entirely by 1790. Yet, despite being subject to Conquest-era violence, subsequent coercion and multiple pandemics that led average populations in those settlements to fall from 2,377 to 128 by 1646, 13% would still end the colonial era larger than they started.
Human history is replete with such examples of resilience not only to pandemics, but also to the combination of pandemics with numerous other catastrophes. If our ancestors could survive and flourish with little in the way of modern medical knowledge or technology, then we should become relatively more confident in our own ability to survive future pandemics.
5. Mammalian history
It is not just human history that provides tales of resilience. In recorded history, there is only one confirmed instance in which disease has led to the extinction of a mammalian species. Don’t take it from me; here are Piers Millett and Andrew Snyder-Beattie:
The number of mammalian species that have gone extinct due to disease (as opposed to other factors) is very limited, with just 1 confirmed example.
The confirmed example is the extinction of a species of rat in a very small and remote location (Christmas Island).
While we might take the relatively higher incidence of disease-driven extinction among non-mammalian species as evidence of the threat disease poses to such species, if the closest thing to a mammalian extinction that we can pin on pandemic disease is the extinction of an isolated species of rat, then we should be hesitant to place too much stock in the idea that humanity is soon to be driven extinct by disease.
6. Everything, everywhere, all at once
Effective altruists often draw attention to experiments in which a single feature of a virus is manipulated: for example, the virus is made more virulent or more transmissible. But that is not what omnicidal actors need. They need a virus that is improved along a great number of dimensions at once: it must be quite virulent and highly transmissible; have the desired incubation time; resist likely treatments and antibodies; survive storage and transportation across the globe; and so on.
The problem is that we do not know how to induce all of these many mutations at once, and indeed there is a good chance that work done to produce one mutation may cut against other desired mutations. A bioweapons expert interviewed by Filippa Lentzos explains:
I do not think that it’s very easy to create weapons of mass destruction with biological weapons.
Ray Zilinskas’ book on the Soviet programmes, for example, points out how very, very hard it is to engineer something which is usable in that context. He lists 12 criteria that you want in your pathogen for it to be a useful biological warfare agent, and they include things like survivability in the air, survivability through whatever mechanical forces are brought to bear during dispersion, resistance to common antibiotics or anti-viral agents so that the enemy can’t simply treat the disease, high transmissibility etc. In evolutionary terms, normally as transmissibility increases, pathogenicity decreases; so you’re having to fight nature in trying to create a perfect biological weapon.
Some of the key lessons that came out of the Soviet programme go to this issue of pleiotropy, where a single gene affects more than one characteristic. And if you change that gene to improve one of the characteristics you’re going for in that list of 12, the chances are you’re going to diminish one or more of the other characteristics that are desirable. So getting something that works is extraordinarily difficult.
(Let me add for legal reasons that the existence of a Soviet bioweapons program is contested. So is the fact that someone cracked Kasparov over the head with a chessboard when he spoke out about certain issues. Make of that comparison what you will.)
How bad is the problem? Bad enough that an extremely well-funded, decades-long Soviet bioweapons program was likely unable to come close to overcoming it. The expert continues:
In the 20 years of the Soviet programme, with all the caveats that we don’t fully know what the programme was, but from the best reading of what we know from the civil side of that programme, they really didn’t get that far in creating agents that actually meet all of those criteria. They got somewhere, but they didn’t get to the stage where they had a weapon that changed their overall battlefield capabilities; that would change the outcome of a war, or even a battle, over the existing weapon systems available to them.
Of course, we can insist that designer viruses with all of the characteristics an omnicidal agent would like them to have are just around the corner. But we should not pretend that manufacturing such viruses is an easy task, or that historical programs and experiments provide any significant evidence in favor of the possibility of manufacturing such viruses.
7. Conclusion
This post, and its immediate successors, will give preliminary reasons to doubt published estimates of near-term existential biorisk. So far, we have seen that engineering biological events that lead to existential catastrophe is a very difficult problem; that biological events have rarely led to mammalian extinction; and that producing all of the needed mutations at once is so difficult that a well-funded (alleged) Soviet bioweapons program didn’t come close.
Are those the only reasons why biosecurity experts are often skeptical of existential risk claims? I think they are a good start, but there are more reasons to come. We’ll talk in the next two posts about some other reasons to doubt some effective altruists’ estimates of existential biorisk.