Epistemics: (Part 5: The value of cost-effectiveness analysis)

One of the biggest changes in philanthropy and the international development community that has taken place … is the increased focus of independent organizations on measuring the impact of particular interventions to help people in extreme poverty, and in assessing the effectiveness of the organizations providing the most successful interventions. GiveWell was the pioneer here, setting new standards for rigorous evaluation of the work of charities.

Peter Singer, The Life You Can Save

1. Recap

This is Part 5 in my series on epistemics: practices that shape knowledge, belief and opinion within a community. In this series, I focus on areas where community epistemics could be productively improved.

Part 1 introduced the series and briefly discussed the role of funding, publication practices, expertise and deference within the effective altruist ecosystem.

Part 2 discussed the role of examples within discourse by effective altruists, focusing on the cases of Aum Shinrikyo and the Biological Weapons Convention.

Part 3 looked at the role of peer review within the effective altruism movement.

Part 4 looked at the declining role of cost-effectiveness analysis within the effective altruism movement. Today’s post continues this discussion by highlighting four costs of turning away from cost-effectiveness analysis.

2. Practical cost

GiveWell currently estimates that about $5,000 can save a life. The person saved would be a member of a significantly disadvantaged community, someone who would otherwise die because they had the misfortune to be born or to live in the wrong place at the wrong time. These claims are based on rigorous cost-effectiveness analyses. I believe them, and I hope you do too.

This puts a clear price on wasteful spending. Each transatlantic business class flight could instead have saved a person’s life. Each independent researcher publishing forum posts costs, in many cases, more than ten lives per year. Each regrantor given a million-dollar budget to spend with limited oversight had better save, in expectation, at least two hundred lives, or they will have killed more people than all but the worst murderers.
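
Here is that arithmetic as a minimal back-of-the-envelope sketch. The $5,000 figure is GiveWell’s; the flight fare and researcher cost are illustrative assumptions of mine, not audited numbers.

```python
# Back-of-the-envelope opportunity costs at GiveWell's ~$5,000 per life saved.
# The flight fare and researcher cost are illustrative assumptions.
COST_PER_LIFE = 5_000  # USD, GiveWell estimate

expenses = {
    "transatlantic business class flight": 5_000,  # assumed round-trip fare
    "independent researcher, one year": 60_000,    # assumed salary plus overhead
    "regrantor budget": 1_000_000,                 # from the example above
}

for item, cost in expenses.items():
    print(f"{item}: ~{cost / COST_PER_LIFE:.0f} lives forgone")
```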

Consider, for example, a question posed by Ben Garfinkel in 2018: “How sure are we about this AI stuff?” Perhaps effective altruists are right. Perhaps transformative developments in artificial intelligence are just around the corner; perhaps these developments pose significant levels of near-term existential risk to humanity; perhaps humanity will not respond quickly enough to mitigate risks without being nudged to do so; and perhaps a team of young, motivated and intelligent altruists can put a nontrivial dent in those risks. If that is true, then effective altruists have done very well for themselves, and I myself may have much to apologize for.

But what if effective altruists aren’t right? Many scientists, artificial intelligence researchers, and policymakers think that these risks are negligible to non-existent, and that the AI safety research being funded by effective altruists is often of low quality and unlikely to lower risk. They think it is impossible to put a rigorous value on AI safety investments by effective altruists, but that the value is probably near-zero, if not negative. If that is right, then longtermists may have a good deal to atone for.

How much would they need to atone? Here is one example. Over the years 2022 and 2023, Open Philanthropy reduced its planned giving to GiveWell by $400 million [Edit: corrected from the $350 million originally reported here. Thanks, Evelyn!]. That money could have saved 80,000 members of disadvantaged communities from premature death, or helped to lift millions out of poverty. If longtermists are, in fact, wasting money on poorly justified projects, they will have a good number of people to answer to. Probably, they will have to answer to hundreds of thousands, if not millions, who died when they could have lived. At its peak valuation of over $40 billion, the effective altruism movement could have saved about 8 million lives according to GiveWell estimates.
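
For readers who want the arithmetic, here is a minimal check, assuming (perhaps too generously) that GiveWell’s $5,000 estimate would hold at this scale:

```python
COST_PER_LIFE = 5_000  # USD, GiveWell estimate

# Reduced Open Philanthropy giving to GiveWell over 2022 and 2023
print(400_000_000 // COST_PER_LIFE)     # 80000 lives
# Peak valuation of effective altruist funds
print(40_000_000_000 // COST_PER_LIFE)  # 8000000 lives
```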

3. Symbolic cost

Even if free-spending effective altruist projects are in fact an effective way to promote value, it remains true that they look bad. Moments such as the Wytham Abbey purchase and the luxurious flights and living conditions of visitors to the Bahamas struck even many core effective altruists the wrong way, and many outside observers were appalled. It doesn’t help that many of the recipients of this money were privileged and successful members of wealthy societies, or that effective altruist money had previously been funneled almost exclusively towards some of the most disadvantaged groups on earth.

For example, George Rosenfeld wrote in mid-2022 that:

Over the past few months, I’ve heard critical comments about a range of spending decisions. Several people asked me whether it was really a good use of EA money to pay for my transatlantic flights for EAG. Others challenged whether EAs seriously claim that the most effective way to spend money is to send privileged university students to an AirBNB for the weekend. And that’s before they hear about the Bahamas visitor programme … I am not contesting here whether these programmes are worth the money. My own view is that most of them probably are and I try to lay this out to those who ask. But it is the perceptions which I find most concerning: many people see the current state of the movement and intuitively conclude that lots of EA spending is not only wasteful but also self-serving, straying far from what you’d expect the principles of an ‘effective altruism’ movement to be. Given the optics issues which have hindered the progress of EA in the past, we should be wary of this dynamic. Importantly, I’ve heard this claim not only from critics of EA, but also from committed group members and an aligned student who might otherwise be more involved.

I have discussed perceptions of wasteful spending on this blog in several places, including Part 6 of my series on billionaire philanthropy and Part 4 of my series on epistemics. As expected, a number of commentators shared Rosenfeld’s reaction, and I have personally heard the same from many effective altruists, as well as from a growing number of others.

Here is one thing that personally bothers me about the optics of free-spending EA. As effective altruists became wealthier, the framing shifted from a rigorous emphasis on cost-effectiveness to the question ‘How are we going to spend all of this money?’ Six-figure grants were almost too small to bother with: megaprojects and massively scalable enterprises were the norm. Grantmakers continually bemoaned the fact that they couldn’t find projects able to absorb the available capital.

Only, there were of course a large number of known projects that could absorb the entirety of available funding: short-termist projects with an excellent track record of reducing poverty and the global burden of disease, as well as improving animal welfare on a massive scale. Effective altruists had, for a long time, been funding these projects. The framing in terms of the question ‘How are we going to spend all of this money?’ was only possible because the question ruled out, by assumption, the idea of using the money to fund established, scalable, measurable and cost-effective interventions that would benefit some of the most disadvantaged groups on earth. Rosenfeld continues:

One especially problematic framing concerns the apparent discrepancy between longtermist and neartermist funding. Many people find it understandably confusing to hear that ‘EA currently has more money than it can spend effectively’ whilst also noticing that problems like malaria and extreme poverty still exist, especially given how much EA focuses on how cheap it is to save a life and how important it is to practise what we preach.

When grantmakers flush with cash find themselves struggling to find a project, any project, that can absorb their money, then walk straight by some of the most disadvantaged groups on earth before handing out large sums of money in a freewheeling way to some of the most advantaged groups on earth, this behavior leaves a bad taste in many people’s mouths.

Maybe that’s because the behavior is wrong. I tend to think it is. But even if it is not wrong in itself, it certainly looks wrong. And looks are important.

For example, my colleague Andreas Mogensen confronted the following question in a recent paper: why is it wrong for societies to preferentially allocate healthcare resources to the most productive members of society, rather than distributing healthcare resources largely according to need? On a consequentialist approach, this may seem like a mistake, since some people do give back more to the community than others. Yet most of us recoil at the prospect of, for example, a technology mogul having a team of thirty personal doctors, devoted only to keeping him alive for a few extra years. Why is this?

Mogensen suggests that the problem is semiotic: it communicates something harmful by signaling that those who produce less are less deserving of a fundamental good, healthcare, which is bound up with the idea of their status as a citizen. By allocating healthcare according to productivity rather than need, we would communicate that less-productive members of society are second-class citizens. Mogensen writes:

The significance attached to medical care in societies where health is a socially recognized need interacts with the prevalence of meritocratic ideals. The result is that prioritizing treatment in this way represents some people as more deserving of a good that is bound up with each person’s standing as a citizen, thereby expressing something contrary to the ideal of equal citizenship.

In the case of healthcare allocation, semiotic objections have strength. It is, most people think, wrong to give much better healthcare to the most productive members of society. And the explanation for this wrongness is, plausibly, semiotic: it looks bad, and those looks cause lasting damage to people’s standing in the community.

In the same way, when foundations flush with cash walk straight by the needy, sinking money wildly into speculative far-future investments that most people do not want, and sinking an increasing fraction of those investments back into the wealthy communities that administer them, this behavior does not look good. It does not look good because it communicates symbolic disrespect for disadvantaged populations by denying their claims to goods such as healthcare, housing, and safe working conditions, goods which are fundamentally bound up with their status as citizens of a global community.

I think that whatever strength we accord to the semiotic objection to rationing healthcare according to productivity within a society, we should accord at least as much strength, and probably considerably more, to the semiotic objection to free-spending longtermist philanthropy. That’s not to say that semiotic considerations must take precedence over all others. But neither should they be ignored.

4. Foregone leadership

Effective altruists have changed many areas of the philanthropic landscape for the better. One of the best things they did for philanthropy was to pioneer the use and spread of up-to-date cost-effectiveness methods backed by high-quality data. Effective altruists did not merely use cost-effectiveness analysis themselves, but also through their example and spending power encouraged charities to produce better data, and philanthropists to act on the basis of better analyses.

We saw in the previous post in this series that Peter Singer expresses justifiable pride in this achievement in his book, The Life You Can Save:

One of the biggest changes in philanthropy and the international development community that has taken place since I first wrote this book is the increased focus of independent organizations on measuring the impact of particular interventions to help people in extreme poverty, and in assessing the effectiveness of the organizations providing the most successful interventions. GiveWell was the pioneer here, setting new standards for rigorous evaluation of the work of charities.

It is hard to overstate the importance of GiveWell’s leadership in spreading rigorous cost-effectiveness analysis.

When effective altruists abandon cost-effectiveness analysis, they lose their ability to lead. They can no longer, in good faith or with reliable effect, ask others to adopt rigorous standards of cost-effectiveness analysis if they are no longer willing to adhere to those standards themselves.

Perhaps more damaging, effective altruists may lead others in the opposite direction. If even the movement behind GiveWell cannot adhere to such a rigorous regime, then others might rightly wonder whether cost-effectiveness analysis is all it is cracked up to be. Why shouldn’t they just fund other interventions on the basis of much less substantial analyses?

This would be a real shame. Effective altruists have led well in this area, and it would hurt many to see their leadership erode or backfire.

5. Philosophical cost

Early effective altruists were ruthless in their adherence to rigorous cost-effectiveness analysis. This spawned the institutional critique, which complained that a variety of systemic reforms were not being considered, let alone funded, because they could not be given a suitably rigorous cost-effectiveness analysis.

Effective altruists replied by doubling down on the requirement to conduct cost-effectiveness analyses. If systemic reforms could not be analyzed through standard forms of cost-effectiveness analysis, then they would not be considered.

Now effective altruists consider and fund a wide variety of speculative interventions that are much harder to evaluate than any of the institutional reforms which they previously refused to consider. We might not have airtight cost-effectiveness analyses for ongoing projects such as prison reform, the Movement for Black Lives, feminist activism, and other justice-focused movements. But we do know a good deal about how these movements have proceeded, what kinds of outcomes they might achieve, and what it might take to achieve those outcomes. We’re also, I hope, fairly sure that the outcomes are good.

By contrast, we know much less about the outcomes that our actions are likely to produce in the far future, and the likelihood of their producing these outcomes. Some have gone so far as to suggest that we are clueless about the long-term future. This contrast raises two philosophical implications.

First, longtermists need a new response to the institutional critique. They can’t say that interventions will be ruled out of consideration unless they can be supported by an especially rigorous cost-effectiveness analysis, because that insistence would also rule out many longtermist interventions that are currently being considered and funded.

Finding another response to the institutional critique may be possible, but it is not so easy. Is the problem that the social movements in question are ineffective? That’s a hard sell: many of these movements have been among the most successful liberatory forces in recent history. Is the problem that funding social movements in wealthy countries is very expensive? Fine. Many of these causes are even more dire in the developing world. Is the problem that effective altruism should remain apolitical? That ship sailed a long time ago. Effective altruists are now so political that they were bankrolling a congressional campaign last year (and executives at FTX were funding a good deal more besides).

Could the problem be that political action on one side of these issues would be cancelled out by other effective altruists batting for the opposite team? Well, yes, that could be a problem. And relatedly, the problem could just be that effective altruists don’t want to see the proposed reforms come to pass. But do effective altruists really want to say this? That would not be a good look.

Perhaps there are other ways for effective altruists to respond to the institutional critique. But their most historically successful response, an insistence on rigorous cost-effectiveness analysis, is now off the table.

A second problem is that effective altruists’ change of attitudes towards cost-effectiveness analysis may look unprincipled. Early effective altruists were quite insistent that cost-effectiveness analysis should be preserved, because methodological rigor is essential to identifying effective charities. Now, effective altruists have done an about-face and endorsed some of the most speculative and least rigorously evaluable interventions proposed by any group on earth. What explains this change of attitudes?

The change had better not be explicable in sociological terms. That is, it had better not turn out that effective altruists wanted to defend short-termist interventions in global health and development, as well as animal welfare, of a type that tends to perform well on detailed cost-effectiveness analyses, and so settled on rigorous cost-effectiveness analysis because it supported their favored views. A few years later, effective altruists wanted to fund speculative longshots aimed at improving the far future. They realized that these longshots cannot possibly be assessed using traditional forms of cost-effectiveness analysis, so they decided that cost-effectiveness analysis was not such a good thing after all.

Is this what happened? I’m not sure I would go out and say that it is. But if this is not the explanation, what else could explain such a sharp methodological change? Did effective altruists discover some powerful new arguments against cost-effectiveness analysis that they were previously unaware of? Did they simply re-evaluate the strength of arguments against cost-effectiveness analysis that they had previously rejected? Perhaps. It would be good to hear more about what these arguments are, and about where and when they became influential. Otherwise, critics may have some grounds to suspect that the explanation for effective altruists’ changing attitudes towards cost-effectiveness analysis is sociological rather than philosophical. And that would not be good.

6. Taking stock

Part 4 of this series discussed the declining role of cost-effectiveness analysis within the effective altruism movement. We saw that the rise of longtermism and the increasing availability of funding led to a shift away from the movement’s initial focus on rigorous cost-effectiveness analysis towards a variety of less demanding forms of scrutiny.

Today’s post discussed some of the costs that effective altruists face when they turn away from cost-effectiveness analysis. The first cost is practical: without rigorous analysis, effective altruists run a heightened risk of funding ineffective causes. The opportunity cost of this mistake may run to hundreds of thousands, or even millions, of lives – a hefty price.

The second cost is symbolic: wasteful spending communicates symbolic disrespect for disadvantaged populations by denying their claims to goods such as healthcare, housing, and safe working conditions, and by turning away from rigorous forms of analysis that would incorporate these claims.

The third cost involves foregone leadership: one of the most impactful things that effective altruists did for global philanthropy was to use their size and influence to pressure charities and philanthropists alike to invest in better data collection and analysis. When effective altruists turn away from cost-effectiveness analysis, they not only lose their ability to lead others, but may in fact actively discourage others by signaling that rigorous adherence to cost-effectiveness methods cannot be maintained.

A final cost is philosophical. Once effective altruists move beyond the initial insistence on rigorous cost-effectiveness analysis, they will need a new response to the institutional critique beyond the classic response that the effects of institutional change efforts are insufficiently measurable. And they will need to supply a principled, non-sociological explanation for their sudden abandonment of cost-effectiveness methods.

Might these costs be worth paying? Perhaps they are, and perhaps they are not. But they are undeniably costs, and they should be internalized, analyzed, and acknowledged as such.

Comments


  1. Evelyn

    There is a middle ground between GiveWell’s painstakingly rigorous cost-effectiveness analyses (CEAs) and Future Fund’s #yolo #shipit approach. Since its inception, Open Philanthropy has used loose back-of-the-envelope calculations (BOTECs) to justify grants. These are not as rigorous as GiveWell CEAs, but they do give a rough idea of each grant’s expected impact. Also, like the broader EA community, Open Phil uses the importance, neglectedness, and tractability framework to prioritize causes. This all fits into their [hits-based approach](https://www.openphilanthropy.org/research/hits-based-giving/) to grantmaking, which is intended to be risk-neutral (so they’re willing to fund something that has a 10% chance of 10x impact, for example). I’ve heard that Open Phil relies less on BOTECs than they used to, but I would be surprised if they’ve abandoned them altogether. I’m curious as to your thoughts on Open Phil’s methods for evaluating grants. Do you think they’re on the right track? Do you think their methodological rigor has improved, worsened, or remained stable over time?
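
    (A minimal sketch of the risk-neutral logic, with illustrative numbers rather than Open Phil’s actual estimates: a 10% chance of 10x impact has the same expected value as a guaranteed 1x grant.)

    ```python
    # Risk-neutral comparison of a speculative "hit" grant with a sure-thing grant.
    # The numbers are illustrative assumptions, not Open Phil's actual estimates.
    p_success = 0.10      # assumed 10% chance the grant is a hit
    impact_if_hit = 10.0  # assumed 10x impact if it succeeds
    sure_thing = 1.0      # guaranteed 1x impact from a safe grant

    expected_value = p_success * impact_if_hit  # 0.10 * 10.0 = 1.0
    print(expected_value >= sure_thing)         # True: a risk-neutral funder is indifferent
    ```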

    Also, echoing Jason’s comment [here](https://ineffectivealtruismblog.com/2023/07/21/epistemics-part-4-the-fall-of-cost-effectiveness-analysis/comment-page-1/#comment-293), it’s harder to produce CEAs estimating an intervention’s impact on the far future or on x-risk than on other near-term outcomes, because one needs to make speculative assumptions and take the weighted average of the outcomes across different scenarios. But I’d still like to see people try! It’s better than nothing. The field of futures studies has developed methods like [scenario analysis](http://foresight-platform.eu/community/forlearn/how-to-do-foresight/methods/scenario/) that we can draw on.

    By the way, based on the GiveWell post you cited, Open Phil reduced the amount it planned to give to GiveWell by $400 million, not $350 million. It reduced its 2022 giving target from $500M to $350M (a $150M difference) and its 2023 target from $500M to $250M (a $250M difference).

    1. David Thorstad

      Thanks Evelyn!

      Good catch on the $350m figure being wrong. I’ve fixed that in the text and added an acknowledgment. I think I also tweeted out the wrong figure a few weeks ago. Oops.

      I’m a huge fan of using methods from the field of decisionmaking under deep uncertainty (DMDU). Scenario planning, which you mentioned, is one excellent example – it’s used not only by futurists, but also by major oil companies to plan risky capital investments. I’ve written two papers advocating the use of DMDU methods (“General-purpose institutional decisionmaking heuristics …” and “Tough enough? …”), both available on my website, dthorstad.com.

      In fact, I’m just about to hop a plane back from a conference in Argentina. My talk was part of a panel about how to incorporate scientific evidence in thinking about the long-term future. I spoke immediately after Joe Roussos, a philosopher based at the Institute for Future Studies, who drew heavily on scenario planning methods. And I gave a paper about DMDU methods.

      (At the same time, scenario planning methods are usually understood to have a temporal limit. Roussos argued explicitly and in depth that they shouldn’t be used to predict consequences on a million-year horizon, for example, and I think that Roussos and most people in this tradition would think that these methods aren’t suitable for the kinds of very long timeframes that effective altruists often want to think about).

      You’re definitely right that it’s harder for longtermists than for short-termists to do cost-effectiveness analysis. It’s no accident that the rise of longtermism was one of the major contributors to the reduced role that cost-effectiveness analysis now plays within the effective altruism movement. I often wish that effective altruists would take that as one of several good reasons to move away from longtermist causes, but you are definitely right that conditional on the decision to engage in longtermist grantmaking, it’s worth trying to do the best we can to analyze the costs and benefits of various proposals. (One caveat: analysis can give the illusion of having a stronger evidence base than we really do, in which case analysis can sometimes be counter-productive.)

      I’m a bit more pessimistic than many effective altruists are about the specific methods being used by longtermists today. BOTECs are so-named because they are meant to be done on the back of an envelope as a guide to more rigorous thinking, then left on the back of an envelope once a more rigorous replacement is developed. One does not publish the backside of an envelope. In particular, very few risk analysts, futurists, operations researchers, consultants, or project administrators involved in long-term projects outside of the effective altruism space place much confidence in BOTECs, or make much use of them in their final analysis.

      I have similar doubts about the ITN framework, which I hope to write about later, but I think the most important thing to note for now is that the ITN framework is ordinarily thought to be a framework for cause selection (a coarse-grained problem), not for comparing specific interventions (a finer-grained problem).

      It’s a good question what to think about OpenPhil’s grantmaking! To be honest, I know this is a frustrating response, but I don’t think I’m in a position to provide a reliable assessment. I like to speak on the basis of relevant knowledge and analysis, and when I don’t have enough of either, I think it’s often best to say nothing at all. Hopefully I’ll be in a good position to say something substantive in the future, but right now I think I’d run an unacceptably high risk of saying something misleading. Sorry! I promised an old professor of mine that I wouldn’t speak when I didn’t know what I was talking about, and I try to keep to that promise as best I can.

      As always, thanks for your engagement.

  2. Richard Y Chappell

    > “Effective altruists replied by doubling down on the requirement to conduct cost-effectiveness analyses.”

    Citation needed?

    Compare my 2016 blog post on the institutional critique: “Whatever organizations you think could do the most good with additional funding (and I don’t see anyone policing people’s sincere and considered judgments here), you should be donating to them.”
    https://www.philosophyetc.net/2016/04/effective-altruism-radical-politics-and.html

    I don’t think my take here was unrepresentative.

    > “Finding another response to the institutional critique may be possible, but it is not so easy.”

    Again, see my old post for an example: “On the substantive merits of his proposals, though, the political radical is here in an awkward position. If one wants a “safe bet” for improving the world, the global poverty or animal welfare wings of EA are clearly the way to go. If one wants the “best bet” in expected value terms, and is willing to make optimistic estimates in the absence of a robust evidence base, the meta and X-risk wings of EA are clearly the way to go. It’s difficult to find any principled evaluative standpoint from which radical politics looks distinctively promising.”

  3. Vasco Grilo

    Thanks for writing these posts, David!

    I can see that GiveWell’s top charities save a human life for 5 k$, but do you think that is an accurate description of their overall effects? From section 4 of Mogensen 2019 (https://globalprioritiesinstitute.org/wp-content/uploads/2019/Mogensen_Maximal_Cluelessness.pdf):
    “In comparing Make-A-Wish Foundation unfavourably to Against Malaria Foundation, Singer (2015) observes that “saving a life is better than making a wish come true.” (6) Arguably, there is a qualifier missing from this statement: ‘all else being equal.’ Saving a child’s life need not be better than fulfilling a child’s wish if the indirect effects of saving the child’s life [e.g. on animals (https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and#Effects_of_global_health_and_development_interventions_on_animals_are_neglected_and_unclear)] are worse than those of fulfilling the wish.”

    1. David Thorstad

      Thanks Vasco!

      That’s a good point, and it’s taken from a very nice paper. It’s definitely correct to say that cost-effectiveness estimates should take into account, when possible, the downstream effects of interventions.

      In general, many people think that this analysis will boost the cost-effectiveness of many favored short-termist interventions. For example, saving a life from malaria often means that there will be, in the near-term, one more citizen of a developing country to contribute to the infrastructure and economy of the country, and to support their family. These are usually taken to be good things not only in the short term, but also in the medium-term insofar as they contribute to economic growth and development, in the process also helping to reduce inequality across nations.

      There are, of course, some very long-term ways of thinking about the consequences of short-termist interventions on which, under certain assumptions, we can work ourselves up to thinking that we’re quite clueless about the cost-effectiveness of even the most (apparently) measurable short-termist interventions. I’m happy to say a bit about this kind of cluelessness objection if you’d like, but for the most part I think the right thing to say is probably that (1) it’s a separate topic that we should mostly try to treat separately another day, and (2) this view leads us to surprisingly strong and skeptical places, if you follow out Mogensen’s excellent reasoning. That might give us reason to ask how strongly we’re committed to this version of the cluelessness challenge.

      1. Vasco Grilo

        Thanks for elaborating, David!

        Would you agree that it is an open question whether longtermist considerations dominate the expected effect of saving human lives? If so, do you think we can conclude that the longterm effects are positive based on the nearterm effects being positive? I do not know whether longterm considerations dominate, but, if they do, I suspect one cannot conclude that the nearterm effects being good implies the longterm effects also being good. So I do not know whether saving human lives is good/bad (although I still think killing is super bad!).

        Longtermist considerations apart, it is also unclear to me whether saving human lives is good/bad attending solely to neartermist considerations.
        – For high-income countries, there is the meat eater problem (https://journalofcontroversialideas.org/article/2/2/206/htm). I did some BOTECs which support the conclusion that the effects of a random human on animals outweigh the happiness of that human (https://forum.effectivealtruism.org/posts/ayxasxhHWTvf6r5BF/scale-of-the-welfare-of-various-animal-populations). One can easily arrive at numbers suggesting that the impact on farmed animals is either negligible or all that matters.
        – For low-income countries like the ones where the beneficiaries of GiveWell’s top charities live, the consumption of farmed animals is small, and therefore the impact on farmed animals does not matter much. I Fermi-estimated that the impact on poultry only reduces the cost-effectiveness by 3 %, and that accounting for other farmed animals can possibly reduce cost-effectiveness by 20 % (https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and#Farmed_animals). However, at least from a hedonistic perspective, the effects on wild animals can easily be negligible or completely dominate in a harmful or beneficial way. My speculative guess is that they are 1 k times as large as those on humans (https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and#Wild_animals).
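
        (A minimal sketch of how such percentage adjustments would enter a cost-effectiveness estimate, using the rough figures above; the baseline of 1.0 is arbitrary.)

        ```python
        # Discounting a life-saving intervention's cost-effectiveness for
        # effects on farmed animals, using the rough percentages above.
        baseline = 1.0                        # normalized cost-effectiveness
        poultry_only = baseline * (1 - 0.03)  # poultry: about a 3% reduction
        all_farmed = baseline * (1 - 0.20)    # all farmed animals: up to 20%
        print(poultry_only, all_farmed)       # 0.97 0.8
        ```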

        I would say complex cluelessness should not be ignored (https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and#Complex_cluelessness_should_not_be_ignored).

        1. David Thorstad

          Thanks Vasco!

          I’m sorry that your comment took a few hours to appear. I had a moderation setting incorrectly configured. Right now, only comments by first-time commentators need to be manually approved, but the system was also flagging comments containing >=2 links as possible spam. I’ve changed the setting, so hopefully you shouldn’t have any problems in the future. (One note: I do cap the number of nested replies to 5-6 total. That’s mostly for formatting reasons, since I haven’t figured out how to make comments automatically expand/contract, but I also think it can help to keep discussions at a manageable length).

          I certainly agree that the cluelessness problem is a problem, and that when especially strong forms of cluelessness are present, it is not so easy to say how agents ought to act. Your links were very helpful and well-written.
          I’ll try to write more about cluelessness in the future, but roughly I think that in most ordinary decision problems, it is important not to exaggerate the extent of cluelessness that we face. To say a bit more now, two of the phenomena that I discuss in my paper “The scope of longtermism”, rapid diminution and washing out of long-term impacts, may suggest that in many cases, the expected long-term value differences made by our actions are not so large compared to the expected short-term value differences that our actions create. When this is true, we don’t need to have a very precise handle on the long-term effects of our actions in order to have a good sense of the overall value of our actions.

          You do make a good point in raising the meat-eater problem. I’m not entirely sure what to say about this, mostly because I don’t know as much about animal welfare as I would like to. More generally, I think that there are very many important open questions about the relative importance of human/animal welfare that may be extremely action-guiding. Let’s perhaps leave the discussion in this post conditional on resolving outstanding questions about the relative importance of promoting human and animal welfare?

          1. Vasco Grilo

            Thanks for engaging, David!

            “To say a bit more now, two of the phenomena that I discuss in my paper “The scope of longtermism”, rapid diminution and washing out of long-term impacts, may suggest that in many cases, the expected long-term value differences made by our actions are not so large compared to the expected short-term value differences that our actions create.”

            I seem to remember you discussing that paper in a YouTube video. I think I skimmed it at the time, but will have another look. Thanks for writing it!

            “Let’s perhaps leave the discussion in this post conditional on resolving outstanding questions about the relative importance of promoting human and animal welfare?”

            Sure! In any case, even if one thinks the effects on animals dominate, one could rely on an animal-related benchmark (e.g. corporate campaigns for chicken welfare affect 9 to 120 years of chicken life per dollar (https://forum.effectivealtruism.org/posts/L5EZjjXKdNgcm253H/corporate-campaigns-affect-9-to-120-years-of-chicken-life)), instead of the human-related one like saving a life for 5 k$. So your point that longtermist spending has an opportunity cost would hold.
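
            (A minimal sketch of benchmarking against an animal-related unit, with a hypothetical $1M grant and the range cited above.)

            ```python
            # Converting a hypothetical grant into the chicken-welfare benchmark
            # cited above, rather than into lives saved at $5,000 each.
            grant = 1_000_000   # USD, hypothetical
            low, high = 9, 120  # chicken-years of welfare affected per dollar
            print(f"{grant * low:,} to {grant * high:,} chicken-years")
            print(f"{grant / 5_000:,.0f} lives at GiveWell's estimate")  # 200
            ```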

            For the sake of balance, I should note there may also be complex cluelessness in animal welfare interventions (https://forum.effectivealtruism.org/posts/7hMgK4hciBhXmBRnW/do-you-think-decreasing-the-consumption-of-animals-is-good).

  4. JWS

    I had a mixed response to this post David. There’s a lot of it that I agree with, and as I re-read it while writing this response I nodded a bunch of times while saying “fair”, but there are parts of it that don’t seem robust to me (note: may very well be user error!).

    1 > On ‘Practical Cost’, the argument seems to follow directly from taking “Famine, Affluence, and Morality” seriously. However, it doesn’t just apply to AI Safety spending, and Open Philanthropy has no worse an obligation for this spending than literally all other spending in the First World that isn’t similarly effective at saving lives. Your focus in this post is AI Safety/Longtermist work which is fine, but the argument itself is universal to all spending, all morality, and every decision we make. Furthermore, any of the usual responses to ‘Strong Singerianism’ will also be counterarguments available to AI Safety advocates as well.

    On a side note, the language you use here (“how much would they need to atone”) is very much in keeping with your “the price of longtermism” tweet – which I think was misguided. You may disagree however, and think this is a case where strong language is necessary?

    2 > On ‘Symbolic Cost’, I don’t disagree much here actually. I think movements need to pay attention to the symbolic meanings of their actions and their interpretations by others (whether or not they are seen as ‘fair’ by those internal to the movement).

    The Semiotic harm argument itself, I’m not as sure about. I think Semiotic harm combined with practical harm (e.g. a tech billionaire spending all this money to extend their life rather than close AMF’s funding gap) is the issue more than Semiotic harm combined with practical good (e.g. people criticising donating to AMF, saying ‘we should take care of our own first’). But I’m not an expert on this, and while I’d love to read the Mogensen paper, as a non-academic I have no access 🙁

    3 > On Foregone Leadership, again not my area. It’d be good to see evidence of this (beyond anecdotes) but I can’t deny it’s a possibility.

    4 > I think you overstate EA’s response to the ‘institutional critique’. Many EAs are fans of systemic change (https://80000hours.org/2015/07/effective-altruists-love-systemic-change/)! And I think it’s personally a very valid response to “your recommendations ignore our systemic political reforms” to ask “how sure are you about this revolutionary stuff?” Many people have very strong beliefs about the effectiveness of their policies, which seem to far outstrip the evidence for that policy. I guess I’m saying that the arguments about cluelessness do apply to current political movements right now, rather than just the far future (though the cluelessness objection may be even stronger about the far future).

    I think the simple ‘longtermist’ response to the institutional critique would be “we think this social change is better/more effective use of our efforts/funds”, because the institutional critique has always been about politics, and what society *should* do. I don’t think EA can avoid this argument, even ‘neartermists’ can’t avoid it. One of the biggest radical critiques of EA has been that donating to GiveWell-recommended charities supports the powers of Capital at the expense of the Global South (many such arguments of this kind are made in “The Good it Promises, The Harm it Does” as I’m sure you are aware)

    Finally, I generally dismiss such sociological arguments almost entirely out of hand. I don’t think I’ve ever seen a case where they improve debate, or lead to improvement of the thing being critiqued. There’s always a nasty side of denying the humanity of your interlocutor by explaining them in mechanistic rather than reasoned terms. But again this plays into the Deutschian/Popperian perspective I’m coming around to.

    Final Thoughts >

    In your first blog post, you define longtermism as: the view that in many of the most important decision situations, what it is best to do (or what we ought to do) is what is best, or near-best for the long-term future

    But in a weird way, this isn’t what I think you are objecting to? One could have some bizarre set of beliefs where they believe this, and think that the best thing for the long-term future is to donate to Global Health Charities, or spend money frivolously, for example. Furthermore, much current anxiety and anger about climate change often uses very longtermist language while having little connection to EA.

    I think you’re instead critical of specific cause areas, such as AI Safety research and Biosecurity. But you can definitely believe in these without being a longtermist – and I think in some way it might just confuse the ground (note, I have this issue with the EA Community at large, not just you!). But a strong AI-Safety advocate could just do a G. E. Moore shift on you and say that your denial of risks/seriousness from AI systems would have grave consequences if you are wrong, in which case, we need to get back to object-level arguments again!

    Anyway, I think this ended up being a bit more rambly than I intended, but to the extent you can understand what I’ve said I look forward to your responses as usual 🙂

    1. David Thorstad

      Thanks JWS!

      Sorry for the late response. I just drove to Nashville and I’m getting settled in.

      On the first point, let’s try to keep a tight argumentative focus. This post is a discussion about the costs of a particular move within a particular movement, namely the growing deprioritization of rigorous cost-effectiveness analysis within the effective altruism movement. It’s of course true that many other people never used cost-effectiveness analysis to guide their decisions, never gave to charity in the first place, gambled away all of their money, mounted a military coup, and so on. Many of these behaviors are wrong, and their wrongness could be discussed elsewhere. Today, I want to focus on a different topic: the growing deprioritization of rigorous cost-effectiveness analysis within the effective altruism movement.

      On semiotic harm, it’s certainly true in many cases that the semiotic costs of an action are lessened if the action is, or is intended to be good, generous, philanthropic, and the like. For example, it looks a lot worse for the rich to spend all of their time and money stuffing themselves full of expensive delicacies than it does for them to give those same foods to the poor, even if those delicacies aren’t a terribly good way of addressing food scarcity. I think that effective altruists do benefit from just this semiotic advantage.

      I’m sorry that I wasn’t able to link an accessible version of the Mogensen paper. Ordinarily in philosophy, a late draft of the paper is available on the author’s website. That’s not the case here, but we do have a decently up-to-date version on the GPI website: https://globalprioritiesinstitute.org/andreas-mogensen-meaning-medicine-and-merit/. I hope that helps at least a bit!

      On the institutional critique, effective altruism is a wide and diverse movement, and to that extent, many positions co-exist within the movement. For example, a significant portion of effective altruists never made the shift to longtermism. In that vein, it is heartening to recall that some effective altruists are relatively sympathetic to systemic change.

      Most authors pressing the institutional critique have tended to view the kinds of changes mentioned by Wiblin as relatively ‘shallow’ systemic changes (shallow as opposed to deep, not in the pejorative sense of the word). They have tended to push for deeper institutional changes, and thought that many effective altruists resisted these deeper changes on the grounds that I described in this post. (This might be helpfully compared to the dichotomy between effective altruists’ generally positive reaction to shallow critiques of the movement and generally less positive reaction to deep critiques).

      I think it is telling that many effective altruists do, after presenting evidence of receptiveness to shallower institutional reforms, tend to close by expressing skepticism about deeper reforms, as you did here. This tends to reinforce the impression that effective altruists are skeptical about deeper institutional reforms.

      There’s absolutely nothing wrong with asking for evidence about deeper reforms – advocates of these reforms have an obligation to provide evidence, and take themselves in many cases to have provided it. The point in this post is simply that effective altruists cannot, without argument, insist on receiving the same kind of evidence here as we would expect to have from, for example, a randomized controlled trial, since effective altruists have shown an increasing willingness to act without such evidence.

      You’re absolutely right to worry about the ways in which philanthropy magnifies the giving power of the wealthy. I pushed this concern in Part 2 of my series on billionaire philanthropy (especially in the section entitled “Plutocratic bias”) and it’s a favorite concern of many skeptics about billionaire philanthropy.

      On the final thoughts, it’s somewhat common (though not as common as I’d like) to distinguish between convergent/nonconvergent forms of longtermism. Convergent longtermism holds that what is best for the long-term future is also near-best for the short-term future, and in that sense it makes longtermism hold almost for free, and doesn’t diverge terribly much from common sense or from current practice. Non-convergent longtermism holds that what is best for the long-term future is often not near-best for the short-term future. For example, it may involve higher rates of saving versus spending, or less consumption of scarce resources. (I talk about another way to get the convergence thesis in my paper “The scope of longtermism”, namely the view that promoting economic growth is both best for the long-term and best for the short-term. I think this view is often associated with Tyler Cowen).

      While I’m not ready to plunk for any particular view on longtermism, it’s certainly true that many opponents of longtermism are often primarily arguing against non-convergent versions of longtermism. Arguing against convergent versions of longtermism would take an extra step (for example, we might point out that most economists think longtermists should save more than short-termists would tell us to, as my colleague Phil Trammell has rightly noted).

      Thanks, as always JWS, for your continued and constructive engagement!

      1. Richard Y Chappell

        > “The point in this post is simply that effective altruists cannot, without argument, insist on receiving the same kind of evidence here as we would expect to have from, for example, a randomized controlled trial, since effective altruists have shown an increasing willingness to act without such evidence.”

        David, I’m still wondering *who* you think has ever “insisted” any such thing. I can’t recall ever seeing anyone say, “Political reform/advocacy is indefensible until you show me a supporting RCT.” I really think you’re attacking a straw man here.

        It seems especially ironic for a series on “epistemics” to attack a made-up position like this. If you want to attribute a position to “EAs”, you should really find at least one prominent EA who has actually endorsed the position in question!

      2. JWS

        Thanks for your response David. I’m only going to respond where I think we disagree, so if I don’t pick up a thread/point assume our views are close enough to agreement already. And I appreciate the link to the Mogensen paper! 🙂

        On argumentative focus – I somewhat agree, but I think that such a universal argument needs to have universal application. Those who criticise EA or longtermism on cost-effectiveness grounds ought to apply that everywhere. Saying ‘longtermism isn’t cost-effective and therefore carries great moral harm’ actually carries consequences far beyond EA, and that needs to be reckoned with. I do accept that this doesn’t *excuse* EA/longtermism though.

        On systemic critique, there’s a lot to say. I think no philosophy is receptive to ‘deep critiques’, since deep critiques challenge the philosophy itself! If a Marxist accepted a deep critique of Marxism, they probably wouldn’t be a Marxist for much longer!

        But I think that the evidence threshold for deep systemic change ought to be much higher than it is for shallow systemic change, or for non-systemic changes. It’s also likely that, as you mention about longtermism, deep systemic change leads to a ‘regression to the inscrutable’. Given that many suggested radical reforms have never been implemented, by definition we will have no empirical evidence of their effectiveness!

        Finally, just as many deep critics of EA take themselves to have provided evidence for their case (and I probably disagree here, at least depending on the scale of the claim), so too do longtermists believe that they have provided evidence for theirs. In many cases this is without RCT-level evidence, but as Richard points out, I don’t think there are many EAs who are explicitly saying this? We should make decisions with the best evidence we have – for charity allocation that’s likely RCTs, but for social change that may well be moral argumentation instead.

        The debate over longtermism I continue to find bizarre… Many critics of it seem to assume it is by definition non-convergent, and also committed to “sacrificing people alive now for people in the future”, which I don’t think I’ve ever seen a longtermist actually suggest. Secondly, the focus of people’s issue with non-convergent longtermism seems to be the non-convergence rather than the longtermism, in which case we’re just back to arguing on the object-level about AI Safety/Biosecurity etc. again. I think most debate is about these issues and the opportunity cost of changing cause priorities, and the moral/meta-ethical debate about longtermism actually isn’t that related.

        1. David Thorstad

          Thanks JWS!

          It’s definitely possible to extend this and many other arguments from this blog to cover a number of other developments in society. I try to keep a tight focus, but I’d be very happy if people wanted to extend some of these arguments to other areas. (In general, I’m always happy to see more discussion).

          I think you’re capturing a leading effective altruist view on systemic change in a clear and succinct way. I’m not entirely unsympathetic to some (many) of the points raised, since as you note, many of them are related to challenges that I myself have raised for longtermism. For now, I’d be happy to capture the point that it is harder to rule out systemic change once the insistence on cost-benefit analysis is relaxed, and leave for another day the question of what kinds of systemic change should be pursued.

          You’re definitely right that longtermists, like advocates of institutional reform, take themselves to be providing evidence for their views. That’s always good to see! I’m not always sure that I believe that evidence myself, and some of my work, such as the series “Exaggerating the risks” aims to say why I don’t believe it. But I am always, (almost) without fail, excited to see more evidence, and more rigorous evidence, being provided. Evidence has a way of helping us to find the truth, and it is (almost) never a bad thing.

          I also find the longtermism debate difficult for one of the reasons that you mention: there’s an increasing proliferation of terminology, defining many different positions under the umbrella of longtermism. (For example: longtermism vs. strong longtermism; axiological vs. deontic longtermism; convergent vs. non-convergent longtermism). There’s absolutely nothing wrong with a proliferation of terminology – in itself, this is a sign of clarity and progress. But there is a problem if some authors are either unaware of existing distinctions, talking across them, or failing to specify their own views clearly enough within existing distinctions. I have to say that in editing an anthology on longtermism, one of the most common things that I and my co-editors did was to press authors to clarify exactly what they meant by longtermism. I hope that more such pressure will be applied in the coming years, and that the result will be a clearer and better-argued literature.

          I think you are certainly right that, if effective altruists were right that existential risk is so high that it should be a strong short-termist priority, then they would have a very robust case across many different moral views for investment in existential risk mitigation. Anyone who sees a 20% chance of extinction in this century and shrugs their shoulders is taking a very extreme view, and I don’t have much sympathy for such views. I also think it is important to acknowledge that skeptics about investment in existential risk mitigation think existential risk is far lower than this. This is one reason why many people have invested in debates about non-convergent versions of longtermism: if, say, existential risk in this century is 1% or even 0.1%, there might still be a case to be made for massive investment in existential risk mitigation, but it might not be strong enough if made on short-termist grounds alone.

  5. RAP

    Thanks. I have only one disagreement over this passage

    Well, that’s not strictly true. Even if AI safety advocates are right, for you to need to apologize it would also have to be the case that, overall: a) there’s a relevant chance of EA efforts concerning AI being successful; b) this chance is higher than the chance of their being inadvertently harmful (e.g., if they end up increasing the risks of an arms race by making the matter even more salient).
    And that’s why I think cost-effectiveness is so important – not because we can precisely measure impact and trade-offs, etc., but because it’s a good test for our theories of change.
    Otherwise, we are not in a much better situation than the “Scared Straight” guys.

    1. David Thorstad

      Thanks RAP! I think that your points (a) and (b) are very important and worth stressing.

      For example, on (a), a series of recent posts on the EA forum has expressed concern that many AI safety research labs may not be delivering the expected amount of progress given their funding.

      Likewise, concerns of type (b) are now relatively familiar. For example, effective altruists funded OpenAI to the tune of a bit over 30 million dollars around 2018-2019. Just four years later, OpenAI has demonstrated sufficiently low regard for safety that effective altruists were leaving EA Global in London to picket the offices of OpenAI. It appears that this investment, in particular, may have gone to the wrong folks.

      I should admit, at the same time, that I do give serious consideration to the possibility that I could be wrong. I’m fairly sure that I’m not wrong about AI risk, or about the actors best placed to deal with it, but if I am wrong I could be very much on the wrong side of history here.
