Academics review What we owe the future (Part 1: Setiya on moral mathematics)

Future people count. There could be a lot of them. We can make their lives go better. This is the case for longtermism in a nutshell.

Will MacAskill, What we owe the future

1. Introduction

Will MacAskill is Associate Professor of Philosophy at the University of Oxford, a co-founder of the effective altruism movement, and a co-founder of Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours. He is also my colleague at the Global Priorities Institute.

MacAskill’s book, What we owe the future, draws on years of academic research and policy engagement to make the case for longtermism. One of the most impressive features of the book is the amount of academic research that went into preparing it: MacAskill’s team wrote a number of groundbreaking supplementary reports to inform the book’s conclusions. These reports helped a great deal. For example, where Toby Ord’s The precipice may have exaggerated existential risk from climate change, MacAskill did not: a 400-page report by John Halstead makes a persuasive and detailed empirical case against existential risk from climate change, and MacAskill listened.

What we owe the future is a bestselling book, and justly so. Like most books of this stature, What we owe the future has attracted reviews from supporters and critics alike.

The gold standard for an academic book review is a short article published in an academic journal. These reviews are still in press, and I will discuss them as they come out.

But in the meantime, a number of world-leading academics have reviewed What we owe the future in other outlets. These reviews are often quite good and worth grappling with. In this series, I look at some of the reviews that have informed my own thinking.

I won’t try to summarize each review or aim for exhaustive coverage. My aim here is to draw out a few of the most interesting and thought-provoking insights from each review. If you see something that catches your eye, I would encourage you to read the full review (as well as related scholarly discussions) and make up your own mind.

2. Kieran Setiya, “The new moral mathematics”

Kieran Setiya is Professor of Philosophy at MIT. In addition to his wide-ranging philosophical work, Setiya knows a thing or two about public philosophy: his recent books Life is hard and Midlife have been reviewed in publications such as The Economist and the New York Times. He blogs at Under the Net.

I first met Setiya during a round of mock interviews conducted by Harvard and MIT faculty for graduating PhD candidates. I was struck by Setiya’s capacity to engage deeply and thoughtfully with other authors’ philosophical work in those authors’ own terms. Setiya’s “The new moral mathematics” (Boston Review) is no exception, combining genuine praise (“a densely researched but surprisingly light read”) with serious academic criticism.

One of the most interesting sections of Setiya’s review deals with population axiology: “the study of the conditions under which one state of affairs is better than another, when the states of affairs in question may differ over the numbers and the identities of the persons who ever live” (Greaves 2017). For example, would it be better to have a million people with welfare level W, or ten million with welfare level W/2? And how, if at all, does it matter who those people are, or when they will live?
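To make the arithmetic concrete, here is a minimal sketch of how one prominent axiology, totalism, would answer the first question. (The function and the numbers are my own illustration, not anything drawn from MacAskill or Setiya.)

```python
# Sketch: totalism ranks states of affairs by total welfare, i.e. the sum
# of everyone's welfare. Population sizes and the level W are illustrative.

def total_welfare(population_size: int, welfare_level: float) -> float:
    """Total welfare of a population where everyone is at the same level."""
    return population_size * welfare_level

W = 10.0  # an arbitrary welfare level

million_at_W = total_welfare(1_000_000, W)                # 10,000,000.0
ten_million_at_half_W = total_welfare(10_000_000, W / 2)  # 50,000,000.0

# Totalism prefers ten million people at W/2: the total is five times higher.
print(million_at_W, ten_million_at_half_W)
```

On totalism the second question gets an equally crisp answer: identities and timing drop out of the sum entirely.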

Setiya begins his discussion with the intuition of neutrality: “the intuition is that if a person is added to the population of the world, her addition has no positive or negative value in itself” (Broome 2004). What makes the world better is not adding new people with high levels of wellbeing, but rather making the lives of people who would already exist better. In a slogan: “We are in favor of making people happy, but neutral about making happy people” (Narveson 1973).

Taking neutrality seriously tends to dampen the case for longtermism. If it is not better to fill the world with additional happy people, then we should be less concerned with how our actions will affect the number and well-being of future generations, and more concerned with our effects on the current generation:

If neutrality is right, the longtermist’s mathematics rest on a mistake: the extra lives don’t make the world a better place, all by themselves. Our ethical equations are not swamped by small risks of extinction. And while we may be doing much less than we should to address the risk of a lethal pandemic, value lock-in, or nuclear war, the truth is much closer to common sense than MacAskill would have us believe. We should care about making the lives of those who will exist better, or about the fate of those who will be worse off, not about the number of good lives there will be.

Kieran Setiya, “The new moral mathematics”.

Longtermists have long been aware of the threat posed by neutrality. For example, last year Joe Carlsmith discussed neutrality on the EA Forum. MacAskill tackles the threat of neutrality alongside other questions about population ethics in Chapter 8 of his book, “Is it good to make happy people?”.

One of MacAskill’s arguments against neutrality draws on intuitions about future children. Suppose you can choose between having a child who will have a life worth living, except that they will suffer from migraines (call this option Migraine) and having a child whose life will be the same, except without migraines (call this option Migraine-Free). Intuitively, it is better to have the child without migraines than to have the child with migraines, that is:

(1) Migraine-Free > Migraine.

However, consider the option of having no child instead (No-Child). On most readings of neutrality, No-Child is neither better nor worse than Migraine, since it is not better (but could hardly be worse!) to bring a new child into existence with a life worth living. For the same reason, on most readings of neutrality, No-Child is neither better nor worse than Migraine-Free. If we assume a tripartite view on which options that are neither better nor worse than one another must be equally good, it follows that having no child is just as good as having a child with migraines, and also just as good as having a migraine-free child. That is:

(2) No-Child ≡ Migraine.

(3) No-Child ≡ Migraine-Free.

But if that is right, we are in trouble, because pushing through these two equivalences yields the conclusion that having a child with migraines is just as good as having a child without migraines:

(4) Migraine-Free ≡ Migraine.

contradicting (1).
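Spelled out with its suppressed premises, the derivation runs as follows (my own rendering of the argument, writing ≡ for “equally good”):

```latex
\begin{align*}
&\text{(2)}\quad \text{No-Child} \equiv \text{Migraine}
  && \text{neutrality + tripartite view}\\
&\text{(3)}\quad \text{No-Child} \equiv \text{Migraine-Free}
  && \text{neutrality + tripartite view}\\
&\text{(4)}\quad \text{Migraine-Free} \equiv \text{Migraine}
  && \text{from (2), (3) by symmetry and transitivity of } \equiv\\
&\text{Contradiction with (1)}
  && \text{since } x > y \text{ rules out } x \equiv y.
\end{align*}
```

The load-bearing assumptions are thus the tripartite view and the transitivity of equal goodness.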

Setiya isn’t convinced. After all, Setiya reminds us, we can reject the tripartite view on which options that are neither better nor worse than one another must be equally good. Instead, we can hold that some such options bear other relationships to one another, such as being on a par. If that is right, (2) and (3) are false, their replacements do not yield (4), and the contradiction with (1) never arises.

Setiya also reminds us that many views which reject neutrality, such as totalism, have their own problems. Perhaps the most famous of these problems is the repugnant conclusion: name two levels of wellbeing, ε and N, choosing ε to be just barely worth living and N as high as you like. Now name any number k of people. The repugnant conclusion is this: I can find some number K ≫ k of people such that having K people at welfare level ε (which, recall, is barely worth living) is better than having k people at welfare level N (which, recall, we chose to be absurdly wonderful). Ouch.
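Under totalism, finding the required K is just a matter of arithmetic. Here is a minimal sketch; the particular welfare levels and population size are my own illustrative choices (welfare units are arbitrary, so integers suffice).

```python
# Sketch: totalism prefers K people at welfare eps over k people at welfare N
# whenever K * eps > k * N, i.e. whenever K exceeds k * N / eps.

eps = 1    # a welfare level just barely worth living
N = 100    # a very high welfare level
k = 1_000  # size of the small, thriving population

K = (k * N) // eps + 1  # smallest integer K with K * eps > k * N

print(K)                # 100001
print(K * eps > k * N)  # True: the huge, drab world beats the small, thriving one
```

Because ε can be arbitrarily small and N arbitrarily large, K may need to be astronomically bigger than k, but totalism guarantees that some such K always exists.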

Setiya is right to push here, for as he notes, MacAskill is a signatory to a recent article arguing that theories in population ethics should not be rejected merely on the grounds that they accept the repugnant conclusion. The signatories write:

The fact that an approach to population ethics (an axiology or a social ordering) entails the Repugnant Conclusion is not sufficient to conclude that the approach is inadequate. Equivalently, avoiding the Repugnant Conclusion is not a necessary condition for a minimally adequate candidate axiology, social ordering, or approach to population ethics … Further properties of axiologies or social orderings – beyond their avoidance of the Repugnant Conclusion – are important, should be given importance and may prove decisive.

Zuber et al., “What should we agree on about the repugnant conclusion?”

I must confess that I would have signed this article myself. I would have signed on the same grounds given by MacAskill in What we owe the future, and by the signatories quoted above: approaches which block the repugnant conclusion tend to have their own counterintuitive properties, and we may want to weigh their counterintuitiveness against our initial reluctance to accept the repugnant conclusion.

But it would not have been a pleasant document for me to sign, and I imagine it was not pleasant for MacAskill to sign either. Setiya makes an important point when he asks that the counterintuitive consequences of neutrality be weighed against the counterintuitive consequences of rival views, such as the repugnant conclusion.

Grappling with neutrality and repugnance is an important task for any longtermist, and Setiya is right to bring both to our attention.

It may be interesting to think about these points in connection with a paper on neutrality by my colleague Teru Thomas. Thomas agrees with Setiya that neutrality would pose a decisive objection to many longtermist efforts aimed at reducing existential risk. However, Thomas notes, there may be a surprising degree of compatibility between neutrality and other longtermist efforts, such as attempts to speed up technological progress or otherwise change the trajectory of the world without cost to current people. This would no doubt be a disappointment for most longtermists, but it might still fall short of refuting longtermism.

3. Beyond neutrality

Many philosophers for whom I have the utmost respect find themselves compelled by the intuition of neutrality. For these philosophers, Setiya’s discussion may raise serious concerns about the longtermist enterprise.

I must confess that I do not find myself moved by neutrality. I have considerable sympathy for different views in population ethics, such as totalism, that contradict neutrality. A recent study by Lucius Caviola (of the newly formed EA psychology lab) and colleagues may suggest that folk population ethics rejects neutrality, although it is important not to place too much weight on a single early study.

But even if you do not accept neutrality, there are two reasons to think carefully through Setiya’s discussion.

First, many of us are uncertain about the right moral view. As I said, many people for whom I have a great deal of respect, including Setiya, find neutrality much more compelling than I do, and they have at their disposal a range of responses to objections that MacAskill, or I, would be disposed to make. This makes me uncertain about whether I am really correct to deny neutrality. Taking moral uncertainty about population axiology seriously may require giving less weight to future generations, in recognition of the real possibility that neutrality is true.

Second, there are softer views in the neighborhood of neutrality that are also worth taking seriously. In many fields, such as economics, it is standard to discount future welfare by several percent each year, giving more weight to welfare gains now than to welfare gains in the future. One reason for discounting is pure time preference, the view that it is better for benefits to be realized now rather than later.

Philosophers differ sharply from social scientists in that most of us think that pure time preference is indefensible. As the late Derek Parfit put the point:

It is irrational to care less about future pains because they will be felt either on a Tuesday, or more than a year in the future … In these cases the concern is not less because of some intrinsic difference in the object of concern. The concern is less because of a property which is purely positional, and which draws an arbitrary line. These are the patterns of concern that are, in the clearest way, irrational.

Derek Parfit, Reasons and Persons

But some philosophers have begun to break from this orthodoxy. My colleague, Andreas Mogensen, argues that a limited form of kinship-based discounting may escape standard philosophical objections. That’s not so bad, but the rebellion is growing.

Recently, the Global Priorities Institute awarded a paper prize to Harry Lloyd, a graduate student at Yale University. Lloyd argues that a much broader form of temporal discounting may be defensible.

If that is right, then we may not have to go so far as denying (as neutrality does) that making happy people has any value at all. It is well known that even quite low rates of temporal discounting, compounded over many years, would severely restrict the value of longtermist interventions. For this reason, even if you are not sympathetic to neutrality, you might get some similar conclusions by reaching for temporal discounting.
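To see how quickly this bites, here is a minimal sketch. (The rates and horizons are my own illustrative choices, not figures from Lloyd or Mogensen.)

```python
# Sketch: with a constant annual discount rate r, welfare realized t years
# from now receives weight (1 + r) ** -t. Modest rates compound to near-zero
# weights over the horizons longtermists care about.

def discount_factor(r: float, t: int) -> float:
    return (1 + r) ** (-t)

for r in (0.001, 0.01, 0.03):
    for t in (100, 1_000, 10_000):
        print(f"r = {r:.1%}, t = {t:>6} years: weight = {discount_factor(r, t):.2e}")
```

At a 1% annual rate, a benefit 1,000 years out receives about one twenty-thousandth of the weight of a benefit today; at a 3% rate, about one seven-trillionth. On most estimates, that leaves far-future benefits with far too little weight to swamp present ones.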

I hope that you will read through Setiya’s full review. In the meantime, what did you think of neutrality? Are there any other parts of the Setiya review that you wish I had discussed? And which other academic reviews of What we owe the future have you learned from? Let me know.

Comments


  1. Richard

    Hi David, great discussion! The things I liked most about Setiya’s review were the points about “retrospective shift” and the possibility of parity. Anyone interested in how population ethics and parity combine might like to check out my introduction to critical range views here: https://www.utilitarianism.net/population-ethics#critical-level-and-critical-range-theories

    (I think it’s important to stress that parity does not automatically support strict neutrality. As argued in the above article, there are good reasons to limit the size of the neutral range.)

    One thing that struck me as a bit unfair about the review was his line about “basing one’s ethical outlook on the conclusion of a paradox”. Two reasons: (i) longtermism does not depend upon total utilitarianism; the view defended in the book is really much more moderate than one would suspect from this review. And (ii) given the impossibility results in population ethics, I think we really do just need to acknowledge that there’s no perfectly intuitive option on the table, so pointing to one counterintuitive implication can’t possibly be taken as a decisive refutation. You need to survey the options and judge which is least-bad.

    Further, as I argue (responding to Setiya) in ‘Puzzles for Everyone’, fear of the repugnant conclusion gives no reason to support strict neutrality, because we already know that a different solution is needed for the intrapersonal version: https://rychappell.substack.com/p/puzzles-for-everyone

    1. David Thorstad

      Hi Richard! Perfect timing — one of the reviews of WWOTF that I am planning to discuss is yours (I liked it). Maybe we could chat more offline about this?

      I’m glad we agree on parity being one of the most interesting features of Setiya’s discussion. As always with academic discussions, you’re quite right that there’s more to be said here. So for example, as you mention we can say that just because we’re on board with a bit of parity doesn’t mean we are willing to admit enough parity to ground neutrality.

      I’m very much on board with the need to separate longtermism from total utilitarianism. (In fact, for my own part I have considerably more sympathy for total utilitarianism than for longtermism.) I’ve said as much publicly (for example, in response to this post at the Blog of the APA https://blog.apaonline.org/2021/03/29/is-effective-altruism-inherently-utilitarian/) and privately to the authors of some working papers that conflate the two. I appreciate that in your own review you emphasize the need to separate longtermism from total utilitarianism. That’s absolutely right.

      For my own part, I agree wholeheartedly with your point (ii), that every theory in population ethics has to bite a bullet somewhere, so biting one particular bullet (say, the repugnant conclusion) should not be the end of the road for a theory. I take that to be the point of Zuber et al, and I meant it when I said I would have signed on to that article.

      I really liked your ‘Puzzles for Everyone’ post. I guess you’ll remember my reply to the EA forum version of that post: “Amen, Richard. Amen.” We might agree on most details while disagreeing on some others, but the general point that puzzles for consequentialism are often puzzles for everyone is something that, as you know, many consequentialists believe. And we get frustrated when we perceive ourselves as being unfairly attacked on the basis of puzzles that should be everyone’s problem.

      One thing I liked about Setiya’s review is that he takes the time to go beyond the Twitter headlines (‘who cares about future people?’) to spell out in detail one of the most popular population axiologies that could cause trouble for longtermism. And Setiya engages in detail with some of the arguments that MacAskill makes against neutrality, doing so on the basis of a solution (the appeal to parity) that is live in the scholarly literature.

      Of course, there is so much more to be said here. It’s quite difficult to capture a decades-long scholarly debate in a public-facing review. But I really do appreciate the work Setiya did to bring out the challenge from neutrality in a rigorous, engaged and accessible way.

  2. Travis

    Hi David,

    Thanks so much for writing these very interesting posts! I have very much enjoyed reading them.

    I have worked in finance for a long time, and I think there is an isomorphism between time preference and uncertainty when it comes to discounting. In other words, I prefer a dollar today because I am less certain that I will get a dollar tomorrow (and the more uncertain I am, the more I prefer the dollar today, i.e. the discount rate is higher).

    Applied to the problem of population ethics, this just means that we can discount the welfare of future humans because we are less certain that they will exist at all, and the farther into the future we go, the less certain we are.

    Many of your other posts have argued that many people are overestimating existential risks, and I agree! In this framework, this is equivalent to arguing that the discount rate we apply to future humans should be lower, because they are more likely to exist. But nowhere do you insist that existential risk is zero, nor should you, as that would not make intuitive sense. But then this means that the discount rate is also not zero, because we are here today but may not be in the future.

    This is also easier to square with the common deontological intuition that we should care more about the current suffering of people who are actually alive than about people in the future who do not yet exist.

    So I would reframe longtermism as the quest for a lower discount rate, which is good! We want the discount rate to be as low as possible: we don’t want to run risks of going extinct. But that discount rate can never get to zero: future humans can never matter as much to us as currently alive humans… they can only get very close, if we can increase our certainty about the future by building a sustainable civilization.

    So what is the appropriate discount rate? I’m not sure, but I wish the conversation would move towards those terms, because effective altruism is all about using limited resources in the best manner, and having better estimates would help people better allocate current resources between things like buying bed nets and worrying about AI risks. And this would perhaps make us slightly less ineffective in our altruism.

    Travis

    1. David Thorstad

      Hi Travis!

      It is great to hear from you, and it is good to get a perspective from finance. I guess that in finance it is very natural to think about time preference through its impact on discounting (so you can compare the values of present vs. future assets?).

      As I guess you know, there is a nice decomposition of the social discount rate (the Ramsey equation, r = δ + ηg) on which pure time preference (δ) enters as one additive component, alongside a growth-based component (the elasticity of marginal utility η times the consumption growth rate g).

      And I guess you probably know, given your suggestion, that many people think in practice it is very hard to disentangle uncertainty-based discounting from other types of discounting (i.e. pure time preference). I think Gabaix and Laibson’s paper on as-if discounting makes this point nicely. (https://www.nber.org/papers/w23254).
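      To make your isomorphism vivid, here is a toy sketch of the equivalence you describe. (The extinction-risk number is purely illustrative.)

      ```python
      # Sketch: if each year civilization survives with probability (1 - p),
      # expected welfare at year t is weighted by (1 - p) ** t. For small p
      # this is nearly identical to exponential discounting at rate r = p.

      p = 0.001  # illustrative 0.1% annual extinction risk

      for t in (10, 100, 1_000):
          survival_weight = (1 - p) ** t     # chance anyone is around at year t
          discount_weight = (1 + p) ** (-t)  # pure discounting at rate p
          print(f"t = {t:>5}: survival {survival_weight:.4f}, discount {discount_weight:.4f}")
      ```

      The two columns track each other closely, which is one way of seeing why the disentanglement problem is so hard: from valuations alone, survival uncertainty and pure time preference look the same.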

      I’ll try to think more about the relationship between uncertainty and discounting in the future. Certainly the Cluelessness Problem (roughly: we’re very uncertain about the future) will tend to pose a serious worry for high ex ante valuations of longtermist interventions. (I think I’ll talk about this in Part 3 of my series on philosophers’ reviews of WWOTF, because Regina Rini makes the point nicely, but I’m happy to chat more about it before).

      Very glad to hear from you. Please do let me know what you think! (I’m having some odd issues with my response not displaying, so hopefully you see this, and don’t see ten versions of it).
