
Mistakes in the moral mathematics of existential risk (Part 2: Ignoring background risk)

In the decades to come, advanced bioweapons could threaten human existence. Although the probability of human extinction from bioweapons may be low, the expected value of reducing the risk could still be large, since such risks jeopardize the existence of all future generations. We provide an overview of biotechnological extinction risk, make some rough initial estimates for how severe the risks might be, and compare the cost-effectiveness of reducing these extinction-level risks with existing biosecurity work. We find that reducing human extinction risk can be more cost-effective than reducing smaller-scale risks, even when using conservative estimates. This suggests that the risks are not low enough to ignore and that more ought to be done to prevent the worst-case scenarios.

Millett and Snyder-Beattie, “Existential risk and cost-effective biosecurity”


1. Introduction

This is Part 2 of a series based on my paper “Mistakes in the moral mathematics of existential risk”.

Part 1 introduced the series and discussed the first mistake: focusing on cumulative rather than per-unit risk. We saw how bringing the focus back to per-unit rather than cumulative risk was enough to change a claimed ‘small’ risk reduction of one millionth of one percent into an astronomically large reduction that would drive risk to almost one in a million per century.

Today, I want to focus on a second mistake: ignoring background risk. The importance of modeling background risk is one way to interpret the main lesson of my paper and blog series “Existential risk pessimism and the time of perils.” Indeed, in Part 7 of that series, I suggested just this interpretation.

It turns out that blogging is sometimes a good way to write a paper. Today, I want to expand my discussion in Part 7 of the existential risk pessimism series to clarify the second mistake (ignoring background risk) and to show how it interacts with the first mistake (focusing on cumulative risk) in a leading discussion of cost-effective biosecurity.

Some elements of this discussion are lifted verbatim from my earlier post. In my defense, I remind my readers that I am lazy.

2. Snyder-Beattie and Millett on cost-effective biosecurity

Andrew Snyder-Beattie holds a DPhil in Zoology from Oxford, and works as a Senior Program Officer at Open Philanthropy. Snyder-Beattie is widely considered to be among the most influential voices on biosecurity within the effective altruist community.

Piers Millett is a Senior Research Fellow at the Future of Humanity Institute. Millett holds advanced degrees in science policy, research methodology and international security, and has extensive industry experience in biosecurity.

Millett and Snyder-Beattie’s paper, “Existential risk and cost-effective biosecurity”, is among the most-cited papers on biosecurity written by effective altruists. The paper argues that even very small reductions in existential risks in the biosecurity sector (henceforth, ‘biorisks’) are cost-effective by standard metrics.

Millett and Snyder-Beattie (henceforth, MSB) estimate the cost-effectiveness of an intervention as C/(NLR), where:

  • C is the cost of the intervention.
  • N is “the number of biothreats we expect to occur in 1 century”.
  • L is “the number of life-years lost in such an event”.
  • R is “the reduction in risk [in this century only] achieved by spending … C”.

Millett and Snyder-Beattie estimate these quantities as follows:

  • C is fixed at $250 billion.
  • N is estimated in min/max ranges using three different approaches.
  • L is calculated assuming that humanity remains earthbound, with an annual population of 10 billion people, lasting for a million years, so that L = 10^16 life-years.
  • R is estimated at a 1% relative reduction, i.e. risk is reduced from N to 0.99N.

Because N is estimated using three different models, Millett and Snyder-Beattie estimate cost-effectiveness as C/(NLR) on each model, giving:

Model | N (biothreats/century) | C/(NLR) (cost per life-year)
Model 1 | 0.005 to 0.02 | $0.125 to $5.00
Model 2 | 1.6×10^-6 to 8×10^-5 | $31.00 to $1,600
Model 3 | 5×10^-5 to 1.4×10^-4 | $18.00 to $50.00
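
For readers who want to check the arithmetic, here is a minimal sketch of the C/(NLR) calculation. This is my own illustrative Python, not MSB’s code; note that reproducing the $5.00 upper bound for Model 1 requires the paper’s lower bound of N = 0.0005, rather than the 0.005 printed above (a discrepancy Mart flags in the comments below).

```python
# Illustrative back-of-the-envelope check of MSB's C/(NLR) estimates.
C = 2.5e11   # $250 billion intervention cost
L = 1e16     # 10^10 people x 10^6 years of future life-years
R = 0.01     # 1% relative risk reduction

# N ranges (biothreats/century); Model 1's lower bound follows the paper's 0.0005.
models = {"Model 1": (0.0005, 0.02),
          "Model 2": (1.6e-6, 8e-5),
          "Model 3": (5e-5, 1.4e-4)}

for name, (lo, hi) in models.items():
    # A higher N makes risk reduction cheaper per life-year, so hi gives the low end.
    print(f"{name}: ${C / (hi * L * R):,.4g} to ${C / (lo * L * R):,.4g} per life-year")
```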

Standard government cost-effectiveness metrics in the United States value a life-year in the neighborhood of a few hundred thousand dollars, so Millett and Snyder-Beattie conclude that across models, a small intervention (such as a 1% relative risk reduction in this century) is cost-effective even at a high price (such as $250 billion).

3. A complaint to set aside

There are many complaints that could be made about this model. One such complaint was nicely formalized by Joshua Blake in a comment on my earlier presentation of the MSB estimate.

MSB want to estimate the expected cost of saving a life, E[C/NLR]. By separately estimating each of C, N, L, and R, they have in effect estimated E[C]/(E[N]E[L]E[R]).

This move is legal if, and only if, C, N, L, and R are probabilistically independent. [Edit: Whoops, that’s still not enough to bail them out. See comment by Mart.] However, N and L are highly negatively correlated: the riskier the future is, the fewer humans we should expect to live in it, because humanity becomes more likely to meet an early grave.
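
Mart’s point is easy to check numerically. Here is a small simulation; the lognormal distribution over N is a hypothetical stand-in for uncertainty spanning several orders of magnitude (MSB themselves give ranges, not distributions), with C, L, and R held at point values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical heavy-tailed uncertainty over N (biothreats/century),
# chosen only for illustration.
N = rng.lognormal(mean=np.log(1e-4), sigma=2.0, size=1_000_000)
C, L, R = 2.5e11, 1e16, 0.01

shortcut = C / (N.mean() * L * R)  # plugs in 1/E[N]
direct = np.mean(C / (N * L * R))  # estimates E[C/(NLR)], i.e. uses E[1/N]
print(shortcut, direct)            # direct is far larger: E[1/N] >> 1/E[N]
```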

I want to set this complaint aside for now. Perhaps I am not really setting this complaint aside, since it will be one way of explaining how the MSB estimate commits the first mistake, a point to which I return below. But for now, I mention this complaint to illustrate first that there can be other mistakes beyond those mentioned in this paper, and second that my readers are quite helpful and good at math.

4. Second mistake: Ignoring background risk

When we intervene on some given risks (in this case, existential biorisk), we leave other risks largely unchanged. Call these unchanged risks background risks.

The lesson of my paper and blog series “Existential risk pessimism and the time of perils” is that background risk matters quite a lot. However, the MSB model does not say anything about background risk. Building background risk into the MSB model will reduce the MSB cost-effectiveness estimates, particularly when background risk is large.

If we assume biological and nonbiological risks cannot occur in the same century, then we can split per-century risk r into its biological component b and non-biological component n as:

r = b + n.

MSB envision an intervention X which provides a 1% relative reduction in biorisk, shifting risk to:

r_X = 0.99b + n.

Prior to intervention, how many life-years did the future hold in expectation? On the MSB model, a century involves a stable population of 10^10 lives, for a total of 10^12 life-years. We make it through this century with probability (1-r), survive the first two centuries with probability (1-r)^2, and so on, so that the expected number of future lives (over a million years, or ten thousand centuries) is:

E[L] = 10^{12} \sum_{i=1}^{10,000} (1-r)^i.

Our intervention X increases the expected number of future lives by reducing per-century risk from r to rX, giving a post-intervention expected number of lives of:

E[L|X] = 10^{12} \sum_{i=1}^{10,000} (1-r_X)^i.

Intervention X adds, in expectation, E[L|X] – E[L] lives, which works out to (see appendix):

E[L|X] - E[L] = 10^{12}*\frac{0.01b}{r(r-0.01b)}.
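
As a sanity check, here is a quick numerical sketch comparing the direct sum to this closed form, using hypothetical values b = 0.01 and n = 0.19 (so r = 0.2):

```python
def expected_life_years(risk, centuries=10_000, pop_years=1e12):
    # E[L] = pop_years * sum_{i=1}^{centuries} (1 - risk)^i, via the geometric series.
    q = 1.0 - risk
    return pop_years * q * (1.0 - q**centuries) / risk

b, n = 0.01, 0.19            # hypothetical biorisk and non-biological risk
r, r_X = b + n, 0.99 * b + n

direct = expected_life_years(r_X) - expected_life_years(r)
closed_form = 1e12 * 0.01 * b / (r * (r - 0.01 * b))
print(direct, closed_form)   # both about 2.5e9 additional life-years
```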

In rough outline, X provides ten billion additional life-years, scaled down by the initial biorisk b, but scaled up by (approximately) the inverse square of per-century risk r. If background risk is low, this may be a large boost indeed, but if background risk is high, things become more dire.

Below, I’ve revised the MSB estimates across two levels of background risk: a 20%/century risk close to that favored by many effective altruists, and a more optimistic 1%/century risk.

Model | N | MSB estimate (cost per life-year) | r = 0.2 | r = 0.01
Model 1 | 0.005 to 0.02 | $0.125 to $5.00 | $50 to $200 | $0.25 to $0.50
Model 2 | 1.6×10^-6 to 8×10^-5 | $31.00 to $1,600 | $12,500 to $625,000 | $30 to $1,500
Model 3 | 5×10^-5 to 1.4×10^-4 | $18.00 to $50.00 | $7,100 to $20,000 | $18 to $50

For comparison, GiveWell estimates that the best short-termist interventions save a life for about $5,000. Hence even if we assume that short-termist interventions do not have positive knock-on effects (an assumption we should not make), a good short-termist benchmark will be on the order of $50-200/life-year, since a saved life corresponds to roughly 25-100 life-years.

Already, we see that under high levels of background risk, the MSB estimate at best ties the short-termist benchmark, and at worst falls significantly below it. By contrast, on low levels of background risk, the MSB estimate may fare well.

Does this mean that we can salvage the MSB estimate by being optimistic about background risk? Not so fast.

5. First mistake: Focusing on cumulative risk

MSB, like Bostrom, are concerned with cumulative risk. They treat risk reduction as providing an increased chance that all future people throughout a million-year stretch will come to exist, not just that people in this century will come to exist.

We saw in Part 1 of this series that this is a mistake. Focusing on cumulative risk dramatically overstates the value of existential risk reduction, and also takes us away from the policy-relevant question of how we should intervene on risks in nearby centuries.

Let us replace MSB’s stylized intervention X with an intervention X’ that provides a 1% relative reduction in biorisk, not in all centuries at once, but rather in our own century. That is, X’ reduces risk in this century to:

r_X’ = 0.99b + n

but leaves risk in future centuries at r = b + n.

How many additional lives does X’ provide in expectation? It turns out (see paper for details) that:

E[L|X'] - E[L] = 10^{12}*\frac{0.01b}{r}.
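
The same check works here; the only change from the previous sketch is that the reduced risk applies to the first century alone (same hypothetical b and n):

```python
def expected_life_years(r_first, r_later, centuries=10_000, pop_years=1e12):
    # Survive century 1 with probability (1 - r_first), later ones with (1 - r_later).
    q = 1.0 - r_later
    return pop_years * (1.0 - r_first) * (1.0 - q**centuries) / r_later

b, n = 0.01, 0.19
r, r_X = b + n, 0.99 * b + n

direct = expected_life_years(r_X, r) - expected_life_years(r, r)
closed_form = 1e12 * 0.01 * b / r
print(direct, closed_form)   # both about 5e8 life-years: a factor of 1/r, not 1/r^2
```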

Whereas before, this expression was divided through (roughly) by r^2, now it divides through only by r. That will tend to reduce the cost-effectiveness of X’, since r is a number between 0 and 1. Importantly, the penalty is worse for low values of r. The loss of a second r in the denominator shaves a factor of approximately five off this expression for a pessimistic r = 0.2, but a whopping two orders of magnitude off in the case that r = 0.01.
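
These factors come from dividing the two expressions:

\frac{E[L|X] - E[L]}{E[L|X'] - E[L]} = \frac{1}{r - 0.01b},

which is roughly 1/r: about 5 when r = 0.2, and about 100 when r = 0.01.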

Below, I’ve revised the MSB estimates to incorporate not only background risk, but also risk reduction within a single century rather than across all time.

Model | N | MSB estimate (cost per life-year) | r = 0.2 | r = 0.01
Model 1 | 0.005 to 0.02 | $0.125 to $5.00 | $250 to $1,000 | $13 to $50
Model 2 | 1.6×10^-6 to 8×10^-5 | $31.00 to $1,600 | $60,000 to $3,100,000 | $3,000 to $150,000
Model 3 | 5×10^-5 to 1.4×10^-4 | $18.00 to $50.00 | $35,000 to $100,000 | $1,800 to $5,000

This is bad news. Only the most optimistic model (Model 1 with r=0.01) makes biosecurity competitive with the short-termist benchmark of $50-200/life-year. Across most other models, biosecurity turns out not only to be less cost-effective than the best short-termist interventions, but often many orders of magnitude less cost-effective.

Again, we see that mistakes in moral mathematics matter. Correcting the first and second mistakes within the MSB model took biosecurity from robustly cost-effective to robustly cost-ineffective in comparison to short-termist benchmarks.

6. Wrapping up

So far, we have met two mistakes in the moral mathematics of existential risk:

  • First mistake: Focusing on cumulative risk rather than per-unit risk.
  • Second mistake: Ignoring background risk.

We looked at a leading model of cost-effective biosecurity due to Piers Millett and Andrew Snyder-Beattie. We saw that the model commits both mistakes, and that correcting the mistakes leads to a near-complete reversal of Millett and Snyder-Beattie’s conclusion: it makes biosecurity look robustly less cost-effective than leading short-termist interventions, rather than robustly more cost-effective.

In the next post, I introduce a final mistake in the moral mathematics of existential risk: neglecting population dynamics.

Comments

16 responses to “Mistakes in the moral mathematics of existential risk (Part 2: Ignoring background risk)”

  1. John Halstead

    This post is very good. I have one meta comment. I’ve been following your other posts about peer review, about which I am somewhat sceptical, and I wonder how we should epistemically relate to this post relative to the Snyder-Beattie and Millett paper. Their paper is peer reviewed, whereas this is a humble non-peer-reviewed blog adapted from a non-peer-reviewed working paper. In your last post on peer review, you approvingly quote “The fact remains that these posts are simply no substitute for rigorous studies subject to peer review (or genuinely equivalent processes) by domain-experts external to the EA community.” You say “A great deal of what we know as a species has been produced through standard systems of peer review, and academics and members of the public alike rightly place high confidence in the rigor, quality and informativeness of research published in leading journals. Standard practices of pre-publication peer review are good ways of sorting and improving manuscripts, and they should be adopted because they work.”

    Doesn’t this mean we should believe the Snyder-Beattie paper and not your blog? How should we update from these arguments?

    1. David Thorstad

      Thanks John!

      I appreciate the kind words.

      The short answer is that you would be well within your rights to wait for the paper to be peer-reviewed. I hope that my track record of placing peer-reviewed papers in good journals will be some evidence that this paper is destined for a similar fate, but if you wanted to hold off and see what the reviewers think, I wouldn’t blame you a bit. Some of my papers do flop.

      The claim about peer review is not that all peer-reviewed papers are better than all non-peer-reviewed outputs. It’s a bit more modest than that: there’s very good reason to think that peer review improves the quality of papers, and good reason to be skeptical of authors or communities that rarely publish in good journals. That isn’t to say that every peer-reviewed paper is good. As you know, academics make a living discussing the merits of peer-reviewed papers. I definitely give MSB some credit for publishing their paper, and that’s the reason why I’ve chosen to discuss their paper over other writings on this topic. I’d give them even more credit if it were in a journal I trusted more, but they do deserve credit for publishing in a reputable journal and I think I’ve shown them the respect they deserve.

      I’ve often said (and meant) that I wish all of my readers who have the training to read and engage with peer-reviewed papers would put down my blog, turn off their favorite podcast, and sit down with a cup of coffee to read the scholarly literature. My experience has been that this doesn’t always happen: one of the reasons I started this blog is that not many effective altruists were reading my papers. (The other main reason is that most of the topics I talk about are unpublishable, in large part because journals don’t think many of the positions I’m arguing against are worth refuting). But if I lost my audience tomorrow to Google Scholar, I’d be happy as a clam.

      Should I be uncomfortable with the idea of writing a blog, given my skepticism about the quality of the average blog? Yes, I should be, and am very uncomfortable. It’s been a long time since I took the link to this blog off of my academic website. I’m not always sure if I’d like my colleagues to read this blog. I don’t think it’s as good as I’d like it to be.

  2. Mart

    Thanks for your work – I think it is really good to have a healthy debate on these important topics.

    One nitpick:
    > MSB want to estimate the expected cost of saving a life, E[C/NLR]. By separately estimating each of C, N, L, and R, they have in effect estimated E[C]/(E[N]E[L]E[R}).
    >
    > This move is legal if, and only if, C,N,L,R are probabilistically independent.

    My impression is that probabilistic independence is also not sufficient and would lead us to E[C]*E[1/N]*E[1/L]*E[1/R] instead. If N, L and R span many orders of magnitude, 1/E[N] is dominated by the large values of N, whereas E[1/N] is dominated by the low values of N.
    I haven’t thought through whether this changes anything about your arguments, but it seems relevant as you use the very exact-sounding “if, and only if”.

    One more important point which I disagree with is the formulation “robustly cost-ineffective”.
    (Background: I only know about their work what you summarized, so I might easily be mistaken here.)
    My impression is that they attempted to use conservative estimates, found cost-effectiveness, and concluded (due to the conservative estimates) that the cost-effectiveness of biorisk interventions is robust.
    If your arguments show that their model should be corrected and would instead lead to non-impressive cost-effectiveness, this clearly makes the effectiveness of biorisk interventions less clear-cut. But I do not agree that ‘robustly cost-ineffective’ is the right conclusion. When regarding the expected effectiveness of an intervention, given your arguments, one should think “I need to see the details if you want to convince me” rather than the previous “probably effective as long as it does reduce biorisk”. But one should not go away thinking “biorisk? This is likely not worth it anyway”.
    For that, one would also need to flip the conservativeness of the other assumptions towards overestimating the cost-effectiveness and still get non-impressive numbers.

    A third thing I would like to note is that, although many effective altruists would indeed put the risk of extinction for this century closer to the 20% than to 1%, the largest source here is risk from artificial intelligence. My impression is that most people think that this risk will only persist over a few centuries at most, so that the longer-term background risk would then be somewhat closer to the 1% again.

    1. David Thorstad

      Thanks Mart!

      On the first point about E[C/NLR] – yoinks, my bad! You’re quite right. I’ll say as much in the paper, and give you an acknowledgment (could you share a full name, either here or by email (reflectivealtruismeditor@gmail.com), so I can write a proper acknowledgment?). In the meantime, I’ll edit the post. Thanks.

      I don’t think this changes the point that E[C/NLR] isn’t the quantity we want to estimate. I think that my picking on the lack of independence between the denominator variables is still a good way of picking on the MSB model, but that you’ve also raised a second good way of picking on the MSB model, by reminding us that E[1/X] and 1/E[X] are not to be conflated, where here X = NLR.

      On your second point, it is of course true that one can make existential risk mitigation cost-effective by considering more extreme models. The question will be whether those models are correct. I suspect MSB might have had a hard time publishing their paper if they’d included any numbers more extreme than Model 1, which is already quite striking and not based on a terribly sound methodology (even MSB say it’s their least trustworthy model. It was literally based on asking a bunch of people at a conference on x-risk in 2008 what their estimates were. And I think it’s revealing that the method which consists of asking a bunch of people who are already concerned about x-risks to provide their estimates was the one which led to the highest estimates).

      On your third point, you’re quite right that effective altruists give strikingly high estimates of existential risk. These numbers matter: nobody in their right mind should deny that a 20% risk of existential catastrophe in this century would be worth a great deal of effort to mitigate. The question is instead whether those estimates are warranted by the available evidence. I discuss this question in my blog series, “Exaggerating the risks”. Perhaps most relevantly, Parts 6-8 of that series discuss the Carlsmith report on power-seeking AI. (I did this because I asked people what they took to be the best case for AI risk, and the Carlsmith report was the most popular among well-placed EAs and EA-adjacent folks that I talked to).

      Effective altruists do, as you mentioned, hold another strong thesis – the Time of Perils Hypothesis on which risk is high now, but if we survive this perilous period, risk will soon drop to a permanently low level. The question, again, is whether we have good reasons to accept the Time of Perils Hypothesis. Sections 4-6 of my paper “High risk, low reward: A challenge to the astronomical value of existential risk mitigation” review three leading arguments for the Time of Perils Hypothesis and argue that they don’t work. I’ve also blogged about these in parts 4-6 of my blog series “Existential risk pessimism and the time of perils”, the previous title for the paper.

      It’s worth noting that effective altruists probably need a stronger version of the Time of Perils Hypothesis than the version you mentioned, on which risk drops to a relatively high level (say, 1%). They need it to go to something lower, on most models in the range of 0.01%-0.1%. The paper I mentioned runs the numbers across a range of models, but basically, unless you have very, very optimistic assumptions about the value of near-future centuries, you’ll need risk after the Time of Perils to be pretty low for existential risk mitigation efforts to get the kinds of astronomical valuations that effective altruists want them to have.

      1. Mart

        Dear David,
        thanks for your thorough reply.
        I have now read the Millett and Snyder-Beattie paper and (the majority of) your “High risk, low reward: A challenge to the astronomical value of existential risk mitigation”.

        I would be honoured if you think that my comment is worth an acknowledgment (I’ll send a mail).
        However, after having read their paper, I am not sure that I agree with the criticism:
        For an expected value E[C/NLR] with distributions for C, N, L, and R, the criticism would apply fully. But MSB do not use distributions and instead have point estimates for C, L, and R and an interval (one for each model) for N. They do not claim to use a richer model that allows for expectation values over distributions.
        Instead of trying to estimate the full expected cost-effectiveness, they aim for establishing a plausible lower bound (well, 3 ranges of lower bounds). This approach does seem valid to me.

        I agree with your assessment of the estimates for N. Without knowing a lot more about the arguments of the experts for the model 1 numbers, the models 2 and 3 seem more reliable.
        But I could imagine that there are interventions that do better in terms of cost and risk reduction (R). For example, there might be an intervention for which there are strong mechanistic arguments that the 20% risk reduction on smaller scales is difficult to circumvent, such that a larger expected risk reduction than 1% remains for extinction risks.
        This is related to MSB intending to estimate what I called plausible lower bounds above.

        A further aspect which I think you want to include is that they use typical developed-country health measure cost-effectiveness as the reference instead of that of top charities. If one agrees with this, your corrected numbers still do look somewhat cost-effective (there is a strong similarity to the time-discounted model they use in the paper, except that it is background risks which are the reason for discounting the future).
        One might argue that coordination problems among countries make it more difficult to finance biorisk interventions (each single country mostly cares about a fraction of world population), but I do think this is generally the appropriate reference to choose. Current health spending also does not try to use top charities as a reference standard.

        Of course, if the audience shares core beliefs of effective altruism, top charities do become the right reference.

        On the third topic: I think you managed to change my mind 🙂
        I still think that one can argue that things like value change and lock-in can endure impressive time spans. But you did convince me that the usual arguments tend to just assume that making anthropogenic risks tiny is in our reach.
        I also really like the idea to give an expiration time to the risk reduction.
        My current takeaway is that one cannot just assume astronomical value for x-risk reduction, even if a good number of interventions will still turn out to be worthwhile.
        However, if one expects a chance to reach a highly stable good future, that would reintroduce astronomical value.
        With this, a lot of one’s conclusions will depend on how feasible it seems to reach stable-eutopia.

        1. David Thorstad

          Thanks Mart!

          Wow, I appreciate you reading all of that. That must have taken hours.

          I got your mail and you’ll definitely get an acknowledgment. If I end up including the point about assumptions needed, I’ll write you into the footnote, but you might have managed to convince me not to write that point (or the one I made earlier), in which case you’ll get a general acknowledgment in the section saying thanks, etc. to people who helped. I hope that’s okay.

          I’m fairly sympathetic to your thought that there’s nothing wrong with using point estimates, per se, and that if you’re using point estimates it’s reasonably standard to give point estimates for each variable. If you manage to get a hold of Joshua Blake (the person who originally suggested I push against this, in a comment on Part 7 of my x-risk pessimism blog series), I’d be interested to hear what conclusion you two come to on this. I have a decent bit of math training, but not much in probability/stats or applied modeling, so I’m fairly deferential here.

          You’re definitely right that the issue of indexing against GiveWell (~$5k) versus value of statistical life in wealthy countries (~$10M) makes a big difference. I think EAs are quite right to insist on holding actions to the highest bar possible, since if there were more cost-effective actions around, those actions should be done instead. I appreciate this tendency in effective altruism, and agree with it. But at times like this, it does tend to cut both ways.

          Glad to hear that my work on the time of perils was helpful! The takeaway that you gave is exactly the takeaway I’d like people to come away with. Maybe existential risk reduction is the best thing we can do. Maybe it’s even the best by a mile. But that isn’t obvious. Numbers count. Arguments count. And the numbers and arguments aren’t as good as some effective altruists think they are.

          Stable eutopia would definitely help, because it has a similar structure to the time of perils hypothesis. I suspect the same modeling conclusions would apply, so that, for example, stable eutopia within a few centuries would help a lot, and stable eutopia within 10-20 centuries would help a bit, but beyond that EAs would also have to lower their views about background risk to milk too much value out of the possibility of future stable eutopia.

        2. Joshua Blake

          > They do not claim to use a richer model that allows for expectation values over distributions.

          Expectations are over distributions by definition, so I am confused by your claim here.

          I am currently pretty confused about exactly what N, L, and R are meant to be, so struggling to make a useful comment in context. My best current thinking: N is the number of extinction events per century due to biorisks, L is the number of life-years until humanity’s extinction in the absence of biological extinction, and R is the relative risk reduction in N due to the intervention we are considering. In which case, these seem uncorrelated.

          I made the following general comment to Mart about the concern due to correlations (via private message on the EA forum):

          > I’m not convinced by the argument that correlation doesn’t matter because it’s only a point estimate. All expectations are, by definition, point estimates. Quickly (not carefully) running through the maths, I think if you were to assume log-normal distributions (a reasonable and analytically tractable default), your error for assuming E(XY) = E(X)E(Y) when X and Y aren’t actually independent is on the order of Covar(X, Y). So, if you have medium or high correlation combined with high uncertainty (the latter almost certain here) this could be very important, on the same order of magnitude as XY.

          1. Mart

            Dear Joshua, thanks for joining this discussion.

            While starting to formulate my reply last week, I realized that I will not be able to write a reply that is as thorough as I would hope. So here comes a less polished reply.

            > Expectations are over distributions by definition, so I am confused by your claim here.

            I might well be stating a confused claim. Still, how I currently see it is that their model does not correspond to describing an expected cost-effectiveness of biorisk interventions. Instead, one could say that they draw a single sample (3 ranges of samples) and give arguments that we should expect to be able to do better than that sample. If the reader agrees that this is the case, this suffices for claiming that the actual expected value distribution has its bulk at higher expected values.

            I would agree that this level of detail for a model is not enough to start $250 billion interventions based on it. But they do not argue for this. Instead they merely argue that their estimates suggest biorisk interventions to be a cost-effective field that warrants more attention/research.
            In the Conclusions section they use the formulation “An initial attempt to estimate the cost-effectiveness of reducing these risks” which lets me conclude that the authors agree with this.

            > I’m not convinced by the argument that correlation doesn’t matter because it’s only a point estimate. […]

            That’s a good point. Even in my “draw a conservative sample” description of their point estimate, one could run into this mistake by choosing values which look conservative for each *single* variable, but turn out to be highly unusual in combination (and thus not conservative at all).

            My overall impression is that correlations are not a large source of errors in their model. Or at least the points one would criticise are more easily described as changing the model instead of as adding correlations.
            E.g., one can criticise that they use the expected life-years L of humanity as a fixed number which is only modulated by the coming century of biorisk.
            Interestingly, with the more uniform-in-time model that David uses in (5.), one could still set the background risk to r=1/10000 per century to reach the numbers of MSB even though the model is significantly different now.

    2. Joshua Blake

      > My impression is that probabilistic independence is also not sufficient and would lead us to E[C]*E[1/N]*E[1/L]*E[1/R] instead. If N, L and R span many orders of magnitude, 1/E[N] is dominated by the large values of N, whereas E[1/N] is dominated by the low values of N.
      > I haven’t thought through whether this changes anything about your arguments, but it seems relevant as you use the very exact-sounding “if, and only if”.

      Yes (and I did make this point previously), although not as clearly as you have expressed it here.

      Having thought about this a little more, it is in fact guaranteed to harm MSB’s argument because they have calculated a lower bound on cost-effectiveness. Specifically, for a positive X (which N, L, and R all are), Jensen’s Inequality gives us 1/E(X) <= E(1/X), and hence the true value of each of those terms is lower bounded by the value calculated.

      You could get some analytical results under reasonable assumptions (e.g., X is log-normally distributed), but it’s too late for me to do that without a high risk of mistakes!

      1. Joshua Blake

        OK, this is actually VERY problematic and it looks to me that the expected cost per life saved is infinite.

        The time to extinction (assuming a constant probability of extinction), call this T, is exponentially distributed. Therefore, E(1/T) = ∞. This infinity propagates through the whole expression as far as I can tell (although I’m far from an expert on dealing with limits so I might be wrong here). The intuition here is that a constant probability of extinction implies the modal time for extinction is 0, decreasing monotonically, but 1/0 = ∞.

  3. River

    I’m confused about an aspect of your math in part 4. r is a probability, so r is in [0, 1], right? Then multiplying expected life-years saved by 1/r^2 can only increase it.

    1. David Thorstad

      Yep! That’s right. The paper and blog post say as much.

  4. Mart

    I just noticed, the numbers for r=0.01 in the table at the end of section 4 are surprisingly close to the original MSB numbers. Is this intended?

    1. David Thorstad

      Yep! I’ll double-check, but that is consistent with the general lesson that low levels of background risk don’t cost the longtermist so much. In particular, in the case of Model 1 we’re considering levels of background risk that force, at the top range, r = b, so they shouldn’t cost the longtermist anything at all there. Thanks for pointing this out.

    2. David Thorstad

      Okay, I checked with Wolfram Alpha and unless I made a calculation mistake before I fed it in, I think the numbers are right. Note that in the top-right cell of that table (Model 1, r = 0.01) I had to assume b = 0.01 rather than b = 0.02 at the top end, to respect the constraint b <= r. (This is in a footnote of the paper, but I don’t think I would have written it in the blog post?)

      1. Mart

        Thanks for taking the time to check the numbers.

        While trying to understand my confusion better, I noticed that you use 0.005–0.02 for model 1 whereas the paper has an extra zero in the lower end (0.0005).

        My test calculation (except for the extra 0) now matches your numbers except for rounding errors:

        ```python
        bs = [[0.0005, 0.02], [1.6E-6, 8E-5], [5E-5, 1.4E-4]]  # N ranges for the three models
        rs = [0.2, 0.01]  # regarded background risk rates

        print("Background risk correction:")
        for r in rs:
            # life years saved; b is capped at r so that b <= r
            lys = [[1E12 * 0.01 * min(b, r) / (r * (r - 0.01 * min(b, r))) for b in model] for model in bs]
            print(f"With r={r} we get \n\t", '\n\t'.join(['\t'.join([f"{2.5E11/ly:.3G}" for ly in model][::-1]) for model in lys]))

        print("\nThe cumulative risk correction:")
        for r in rs:
            lys = [[1E12 * 0.01 * b / r for b in model] for model in bs]  # life years saved
            print(f"With r={r} we get \n\t", '\n\t'.join(['\t'.join([f"{2.5E11/ly:.3G}" for ly in model][::-1]) for model in lys]))
        ```
        Background risk correction:
        With r=0.2 we get
        50 2E+03
        1.25E+04 6.25E+05
        7.14E+03 2E+04
        With r=0.01 we get
        0.248 5
        31.2 1.56E+03
        17.9 50

        The cumulative risk correction:
        With r=0.2 we get
        250 1E+04
        6.25E+04 3.12E+06
        3.57E+04 1E+05
        With r=0.01 we get
        12.5 500
        3.12E+03 1.56E+05
        1.79E+03 5E+03

        I think my surprise was just that while
        – the MSB model uses a one-time risk with fixed 1.6 million year duration of humanity’s future,
        – and your background risk corrected model with r=0.01 has no upper limit on humanity’s future and a risk for each century instead,
        they give basically identical results here. I now think that it is just a coincidence that r=0.01, which reduces the size of humanity’s future, happens to cancel out in the cost-effectiveness because the risk reduction takes effect in every century.
