Academics review What we owe the future (Part 2: Chappell liked the book)

WWOTF is to longtermism what Animal Liberation was to anti-speciesism. Targeted at a general audience, it advocates for an important kind of moral circle expansion—urging us to take more fully into account morally significant interests or considerations that we otherwise tend to unduly neglect.

Richard Chappell, Review of What we owe the future

1. Introduction

This is the second part of a series looking at academic reviews of Will MacAskill’s book, What we owe the future.

In the first part, we looked at Kieran Setiya’s take in Boston Review, focusing on some doubts Setiya expressed about MacAskill’s views in population ethics.

I think it is only fair to follow on with a positive review of What we owe the future. Richard Chappell is Associate Professor of Philosophy at the University of Miami. Chappell is one of the leading voices in contemporary consequentialist ethics, editor of Utilitarianism.net, and author of Parfit’s Ethics. He blogs at Good Thoughts.

I have had the pleasure of getting to know Chappell this term during his time as a visitor at the Global Priorities Institute. Chappell knows a great deal about philosophical issues in effective altruism and has engaged for many years with the community. This knowledge puts Chappell in a position to offer a thoughtful, often supportive but never dogmatic approach to some of the toughest questions raised by the effective altruism movement.

In this post, I want to focus on some of the most important points raised by Chappell’s discussion. Readers may be surprised to hear that I agree with many of them. I mean it when I say that I take effective altruism seriously, and have a lot in common with many effective altruists.

2. Future people matter

MacAskill summarizes the case for longtermism as follows:

Future people count. There could be a lot of them. We can make their lives go better. This is the case for longtermism in a nutshell.

Will MacAskill, What we owe the future

Many philosophical responses to longtermism deny some or all of the first premise: that future people matter.

Chappell is having none of this.

Just as some philosophers cast about for inane excuses to deny moral status to non-human animals, so some may do for future generations. And it can be interesting to discuss those proposals in the philosophy seminar room. But I think it shouldn’t really be controversial.

While I might not have put the point quite so forcefully, I have considerable sympathy for the view that future people matter. Future people are (or at least will be) people too. They will live, laugh, suffer, thrive and die just like the rest of us. And their laughter and suffering matters, just like our own.

Chappell is especially frustrated by some of the quick, glib or dismissive ways in which future people’s welfare is sometimes tossed aside:

(I guess one does see some anti-EA folks on Twitter boasting about their disregard for “hypothetical future people”, but that just seems so blatantly indecent that I don’t know what to make of it. Would they really be okay with, say, burying landmines under a children’s playground on the condition that the mines are set to be inert for a century?)

I must say that I share Chappell’s frustration with Twitter as a medium for academic discourse. Issues in population ethics will ultimately be settled in scholarly journals, not on Twitter.

At the same time, as Chappell is aware, there are many serious scholars who raise important questions about the way that longtermists value future lives, and who base their objections on years of scholarly research. Those issues were at the heart of Kieran Setiya’s review of What we owe the future, discussed in Part 1. And the forthcoming anthology, Essays on longtermism (Oxford University Press, co-edited by myself, Hilary Greaves and Jacob Barrett), will feature a number of sophisticated population-ethical objections to longtermism.

I hope that as population-ethical objections to longtermism enter the scholarly literature more prominently, we can take our discussions off Twitter, tone down the rhetoric, and reach clarity about exactly which views in population ethics are needed to support longtermism and whether those views are true.

3. Longtermism and total utilitarianism

One of the most frustrating features of recent discussions of longtermism is the conflation of longtermism with total utilitarianism. Here are some popular statements of longtermism:

(Longtermism) The idea that positively influencing the longterm future is a key moral priority of our time.

Will MacAskill, What we owe the future

(Axiological strong longtermism) In the most important decision situations facing agents today:

  1. Every option that is near-best overall is near-best for the far future.
  2. Every option that is near-best overall delivers much larger benefits in the far future than in the near future.

Greaves and MacAskill, “The case for strong longtermism”

(Deontic strong longtermism) In the most important decision situations facing agents today:

  1. One ought to choose an option that is near-best for the far future.
  2. One ought to choose an option that delivers much larger benefits in the far future than in the near future.

Greaves and MacAskill, “The case for strong longtermism”

As an axiological claim, total utilitarianism holds:

(Total Utilitarianism, Axiological Version) [For any populations A and B,] A is better than B iff total utility in A is higher than total utility in B. If A and B have equal total utility, then A and B are equally good.

Modified from Hilary Greaves, “Population axiology”
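Stated symbolically (a standard formalization, not Greaves’s own notation), with $u_i$ denoting the lifetime well-being of person $i$:

$$A \succ B \iff \sum_{i \in A} u_i > \sum_{i \in B} u_i, \qquad A \sim B \iff \sum_{i \in A} u_i = \sum_{i \in B} u_i,$$

where $\succ$ reads “is better than” and $\sim$ reads “is equally good as”.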

Sometimes the term ‘total utilitarianism’ is used to denote a related deontic claim, but chasing down the precise nature of that claim need not concern us here.

Longtermism and total utilitarianism are not the same thing. One can be, as some of my colleagues are, a longtermist without being a total utilitarian. For example, you may think that we have duties of beneficence to improve the lives of future people, and that given the number of future people who have claims on our beneficence, duties to future people assume significant weight in decisionmaking. You can also, as I do, have considerable sympathy for total utilitarianism while harboring serious doubts about longtermism. One of the purposes of this blog is to show how that can be done.

Chappell is quite right to be frustrated with the tendency to conflate longtermism with total utilitarianism:

The animating moral principle behind the book is that future people matter too. (Note how moderate a claim this is: “I’m not claiming that the interests of present and future people should always and everywhere be given equal weight. I’m just claiming that future people matter significantly.” Beware of critics who dismiss longtermism on the basis of conflating it with more extreme claims, such as total utilitarianism.)

At the same time, it is worth asking why longtermism and (total) utilitarianism are so persistently conflated.

There is no doubt that many longtermists have been strongly influenced by utilitarianism. Chappell’s blog is headlined by the utilitarian flag, designed by Johan Gustafsson. In the 2019 EA survey, a whopping 80.7% of effective altruists identified as consequentialists, with the majority (69.6% of the total sample) identifying as utilitarian. Just 3.2% identified with deontological approaches.

This contrasts sharply with a recent survey of professional philosophers, among whom 23.6% lean toward consequentialism and 25.9% lean toward deontology.

These demographic facts do not excuse the conflation of longtermism with total utilitarianism. But they might give us reason to ask whether utilitarianism has exerted undue influence over effective altruist theorizing, and whether effective altruists who distance themselves from total utilitarianism might be downplaying the extent of their own utilitarian sympathies.

This suspicion is voiced well, if perhaps a bit too strongly, by Émile Torres at Aeon:

Although some longtermists insist that they aren’t utilitarians, we should right away note that this is mostly a smoke-and-mirrors act to deflect criticisms that longtermism – and, more generally, the effective altruism (EA) movement from which it emerged – is nothing more than utilitarianism repackaged. The fact is that the EA movement is deeply utilitarian, at least in practice, and indeed, before it decided upon a name, the movement’s early members, including Ord, seriously considered calling it the ‘effective utilitarian community’.

Émile Torres, “Against Longtermism”

I would not go so far as to describe effective altruism as “nothing more than utilitarianism repackaged”. In fact, I complained about this very view on the blog post to which Torres links. But it would also be going too far to deny that utilitarianism has had a sizable influence on the direction of the effective altruism movement.

4. Existential risk

Effective altruists are very concerned about existential risks, risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013).

Their argument is that because of the number of future lives that could be ended or diminished in quality by an existential catastrophe, almost anything we can do to drive down the risk seems worth doing.
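To see the expectational logic in toy form (the numbers here are purely illustrative, not MacAskill’s or Chappell’s): suppose there would otherwise be $N = 10^{14}$ future people, and some intervention reduces the probability of premature extinction by $\Delta p = 10^{-6}$, one chance in a million. Then the expected number of future lives saved is

$$\mathbb{E}[\text{lives saved}] = \Delta p \cdot N = 10^{-6} \times 10^{14} = 10^{8}.$$

On these assumptions, even a one-in-a-million reduction in extinction risk saves a hundred million lives in expectation, which is why almost anything that drives down the risk can look worth doing.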

For example, MacAskill recently tweeted an estimate that puts a lower bound of 6% on existential risk in this century.

Chappell is a bit more moderate:

If you grant it even 1% credence—or one tenth of that—that’s more than enough to warrant immense concern, on perfectly ordinary (non-fanatical) expectational grounds.

Here I am in agreement with Chappell. The human species is a very good thing. We are the only (known) species to make music, study philosophy, learn fundamental laws of physics, and visit the moon. Human civilization is worth preserving, or even spreading. The biblical promise that our descendants will become as numerous as the stars is a realistic possibility if humans are able to avoid existential catastrophe for a long enough period of time. It would be a very good thing for our descendants to survive and thrive, and a very bad thing for humanity to meet a premature demise.

Chappell expresses concern over the sources of skepticism about existential risk:

I worry that some people just like to mock weird-seeming ideas. So I think a lot of the criticism here is not coming from a very thoughtful place.

Critics like to mock AI risk, especially, as “sci fi fantasy”, which seems extraordinary hubris to me. 

Chappell is right: those are bad reasons for skepticism. One of the aims of my work is to put skepticism about existential risk on a different footing.

Here, in brief, are some of my views about existential risk. One purpose of this blog will be to spell out these views in more detail:

  1. The study of existential risk must be put on a firm scientific footing based on reasons, evidence and scholarly discussion. (I hope soon to run a series on methodology).
  2. Most existing arguments for high levels of existential risk fall significantly short of the evidential burden traditionally placed on rigorous academic or scientific arguments. (See my series “Exaggerating the risks”).
  3. In many cases, the authors pushing high estimates of existential risk are not trying to publish rigorous academic papers in peer-reviewed journals supporting their estimates. (I am often asked to cite an 800-page series of blog posts by the title AI Go Foom. Self-published books of blog posts are no basis for a rigorous academic field).
  4. Even the most rigorous arguments for existential risk tend to fall short of minimal scholarly standards. For example, almost none have gone through standard processes of peer review. Those processes exist to allow a jury of experts to select papers based on their adherence to cutting-edge scholarly methods. Failure to complete the process of peer review is typically seen as a reason for skepticism about the rigor of a paper, and rightly so.
  5. When authors do publish rigorous defenses of risk estimates, their arguments often do not work. (For example, my “Against the singularity hypothesis” looks at Bostrom’s and Chalmers’ arguments for the singularity hypothesis. I will discuss this in a later series).
  6. There has been a substantial distorting influence of external money on the field: large amounts of cash are lavished on academics willing to write public reports taking a pessimistic perspective about existential risk (the latest examples: a $100k paper prize for … a conference paper! An AI safety fellowship worth over three times the pro-rated salary of an average PhD student). This leads to a substantial amount of publication bias, because most academics, who are deeply skeptical of the field, have little incentive to publish refutations (I will run at least one post about this). Bias has been amplified by reporting bias, since pessimists about existential risk invest significantly more resources in spreading their ideas once those ideas are published, and since pessimistic claims are naturally headline-grabbing.
  7. Epistemic distortions such as the biases of homogeneous deliberating groups, fear-driven reactions to risk, and respect for perceived authority (of Oxford-based academics) drove the study of existential risk to a set of extreme conclusions and made it difficult to return towards more moderate views.

This does not amount to the (snide, horrible and self-righteous) path of mocking existential risk estimates for being science-fictional. It amounts to applying accepted standards of rigor, evidence and proof, and asking whether current estimates of existential risk meet those standards.

I suspect that Chappell would agree with me on the need to make estimates of existential risk as rigorous as possible, as I would agree with Chappell that calling those estimates science-fictional is no substitute for a rigorous argument.

I hope that as the study of existential risk moves forward, we will see increasing attempts to put high risk estimates on a rigorous scholarly footing, and also increasing engagement with those estimates from scholars skeptical of the risks.

Is there more to say here? And which other reviews of What we owe the future would you like to hear discussed? Let me know.
