Papers I learned from (Part 3: Risk, non-identity, and extinction)

Recently, the following argument has entered philosophical discussion: (1) since multiple risk-attitudes are permissible, we should take into account the risk-attitudes of everyone our action will affect, giving more weight to the more risk-averse attitudes in some way; (2) so, when our action affects large numbers of future people who might be risk averse, our risk attitude should be significantly risk averse; (3) so, in these cases, we should support strong precautionary action … In this paper, we examine this argument, taking issue with all three steps.

Kowalczyk and Venkatesh, “Risk, non-identity, and extinction”

1. Preface

This is Part 3 of my series Papers I learned from. The series highlights papers that have informed my own thinking and draws attention to what might follow from them. 

Part 1 looked at Harry Lloyd’s defense of robust temporalism, a form of pure temporal discounting.

Part 2 looked at an argument by Richard Pettigrew that risk-averse versions of longtermism may recommend hastening human extinction. This was meant not as a recommendation, but rather as a way of putting pressure on standard arguments for longtermism.

Today’s post looks at a reply to Pettigrew by Nikhil Venkatesh, LSE Fellow in Philosophy at the London School of Economics, and Kacper Kowalczyk, Associate Lecturer at University College London. Their paper, “Risk, non-identity, and extinction,” was recently published in a special issue of The Monist on existential risk, the same issue in which Pettigrew’s piece appears.

The best way to introduce a paper is to let the authors introduce it themselves. Today, we’re lucky to have a guest post by Venkatesh and Kowalczyk. The rest of this post was written by them.

2. Introduction

Longtermists believe that we should give far greater moral priority than we currently do to positively influencing the very longterm future: the next million, billion, or even a hundred trillion years. The idea has recently gained prominence within effective altruism, which asks more generally how we can do the most good. The standard argument for longtermism has it that since many more people might live in the longterm future than exist today, there is much more welfare to be gained by taking actions that affect the longterm future than by taking actions that affect the present. Since total welfare is good, longtermists then say, one of the most effective ways to do the most good is to increase the amount of welfare experienced in the far future. (This is the ‘total utilitarian’ argument for longtermism, and not all longtermists endorse it, though most endorse something nearby.)

On these grounds, longtermists typically favour policy interventions aimed at reducing the risk of human extinction – for instance, from runaway artificial intelligence, great power war, pandemics, or extreme climate change. If humans go extinct, they say, there will be no people in the far future, and therefore a very large amount of potential welfare will be lost. The loss of far future welfare would be the worst thing about, say, a nuclear holocaust – worse than the death of every presently existing person. Longtermists therefore recommend governments and philanthropists increase investments in AI safety, nuclear non-proliferation, pandemic preparedness, renewable energy, and so on – even at the cost of reducing help for people struggling with poverty, oppression, and illness today.

Recently, Richard Pettigrew has suggested that concern for the long-term future of humanity has different – and even more counterintuitive – implications (Pettigrew 2024). He believes that the correct decision theory is risk-averse: that is, it places greater weight on avoiding bad scenarios than on achieving good ones. The worst-case scenario, he says, if we truly care about future welfare in the total utilitarian way that most longtermists at least partly share, is not near-term human extinction. It is that humanity has a very long future, but a future in which people’s quality of life is so bad that it would have been better for them never to have been born. (That is, we can say that their welfare level is negative.) In that case, earlier extinction would have been better. Just as longtermists typically emphasise the very large amount of potential future positive welfare that would be lost in extinction, Pettigrew emphasises the very large amount of potential future negative welfare that would occur in this long, miserable future. Therefore, he says, longtermists should do what they can to avoid the long miserable future – in particular, they should work to hasten, rather than prevent, human extinction.

Pettigrew did not mean his argument to persuade effective altruists to start nuclear wars. It was meant, rather, as a reductio ad absurdum of longtermist claims: if you really believe that future welfare matters so much, then you’re led to this unpalatable conclusion – so, we should conclude that future welfare doesn’t matter in quite the way that longtermists typically hold. But not everyone will draw Pettigrew’s conclusion, especially those strongly committed to a longtermist outlook. They might instead be motivated to try to hasten human extinction. It is therefore very important to carefully and forcefully rebut Pettigrew’s argument. This is what we do in our paper, “Risk, Non-Identity, and Extinction” (2024). We summarise our discussion below.

3. Why risk aversion?

Pettigrew’s argument depends on the idea that we should be risk-averse: that is, place a greater weight on avoiding bad scenarios than on achieving good ones. This is reasonably normal behaviour; for instance, many of us would not take a bet that paid out £10 if a fair coin landed on heads and cost us £10 if it landed on tails (though this may also be explained by the diminishing marginal utility of money).

It is more controversial that we have a moral duty to be risk-averse when our decisions affect other people, such as when we are policymakers or philanthropists. Pettigrew’s argument (drawing on Lara Buchak’s risk-weighted expected utility theory) makes this case in two steps. Firstly, we should adopt the risk attitudes of the people affected by our decisions. Secondly, when deciding for a group of people with differing or unknown risk attitudes, we should give more weight to the most risk-averse reasonable attitudes from the group.
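To make the first step concrete: on Buchak’s theory, a gamble is valued by weighting each possible improvement over the worst outcome by a risk function applied to the probability of doing at least that well. Here is a minimal sketch in Python; the particular utility numbers and risk functions are illustrative assumptions of ours for exposition, not values from Pettigrew’s or Buchak’s texts.

```python
# A minimal sketch of Buchak-style risk-weighted expected utility (REU).
# Utilities and risk functions here are illustrative assumptions, not from the paper.

def reu(outcomes, r):
    """Risk-weighted expected utility of a gamble.

    outcomes: list of (probability, utility) pairs.
    r: a risk function from [0, 1] to [0, 1]; r(p) < p (e.g. p**2) downweights
       chancy improvements (risk aversion), r(p) > p (e.g. sqrt) upweights them.
    """
    ordered = sorted(outcomes, key=lambda pu: pu[1])   # worst outcome first
    probs = [p for p, _ in ordered]
    utils = [u for _, u in ordered]
    value = utils[0]                                   # start from the worst case
    for i in range(1, len(ordered)):
        # Each improvement is weighted by r(chance of doing at least that well).
        value += r(sum(probs[i:])) * (utils[i] - utils[i - 1])
    return value

risk_averse = lambda p: p ** 2      # convex: cares more about avoiding bad cases
risk_seeking = lambda p: p ** 0.5   # concave: cares more about reaching good cases

# A 50/50 gamble between +10 and -10, versus a sure 0:
gamble = [(0.5, 10.0), (0.5, -10.0)]
print(reu(gamble, risk_averse))    # -5.0  -> worse than the sure 0
print(reu(gamble, risk_seeking))   # ~4.14 -> better than the sure 0
```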

We take issue with both steps. If we want to help others, we should try to do what’s in their best interest. But what’s in someone’s best interest may not be reflected by their risk attitude, as the following example shows (Blessenohl 2020).

Imagine two people, Bold and Timid, who both suffer from a mild illness. Bold likes risk, while Timid dislikes risk. There is a new experimental drug that might help. It is equally likely to cure the condition or make things even worse. The benefit of being cured is worth as much as the harm of making things worse.

Now imagine that we can only give the drug to one of them. Our choice is between giving the drug to Timid or giving it to Bold, as in the following table.

| | Drug cures (probability ½) | Drug makes things worse (probability ½) |
|---|---|---|
| First policy: give the drug to Timid | Timid cured, Bold mildly ill | Timid worse off, Bold mildly ill |
| Second policy: give the drug to Bold | Bold cured, Timid mildly ill | Bold worse off, Timid mildly ill |

In this table, rows specify the possible outcomes for the two people, with the probabilities shown in the columns. If a person’s preferences about risk-taking reflected what’s in their best interest, then it would be in both people’s best interest to choose the second policy over the first: Bold, who likes risk, would rather face the gamble, and Timid, who dislikes risk, would rather not. Plausibly, then, the second policy would be better than the first, as it would be in everyone’s best interest.

But, impartially speaking, the two options appear equally good, as they have the same chances of producing the same patterns of benefits. Either way, there is a 50% chance of someone being cured and a 50% chance of someone being made worse off. An impartial policymaker should not care about who gets which benefit.

Therefore, we conclude, people’s risk attitudes don’t necessarily reflect what’s in their best interest. The two people would both choose the second policy because of their risk attitudes, even though, impartially speaking, that policy is no better for them than the first. Someone trying to help them should follow their interests, not their risk attitudes.
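To see both points in numbers, here is a small worked version of the case, assuming (purely for illustration) welfare levels of 0 for the mild illness, +10 for a cure, and −10 for being made worse, and the toy risk functions from the sketch above.

```python
# Illustrative numbers (our assumptions): mildly ill = 0, cured = +10, made worse = -10.

def reu(outcomes, r):
    ordered = sorted(outcomes, key=lambda pu: pu[1])
    probs = [p for p, _ in ordered]
    utils = [u for _, u in ordered]
    value = utils[0]
    for i in range(1, len(ordered)):
        value += r(sum(probs[i:])) * (utils[i] - utils[i - 1])
    return value

timid = lambda p: p ** 2    # risk-averse
bold = lambda p: p ** 0.5   # risk-seeking

drug = [(0.5, 10.0), (0.5, -10.0)]  # take the experimental drug: cure or worsen
no_drug = [(1.0, 0.0)]              # stay mildly ill for sure

# First policy: drug to Timid.  Second policy: drug to Bold.
print(reu(drug, timid), reu(no_drug, timid))   # -5.0 vs 0.0   -> Timid prefers the second
print(reu(no_drug, bold), reu(drug, bold))     #  0.0 vs ~4.14 -> Bold prefers the second

# Impartially, each policy is a 50% chance that someone is cured (and someone stays
# mildly ill) and a 50% chance that someone is made worse (and someone stays mildly
# ill) -- the same anonymous pattern, and so the same expected total welfare.
print(0.5 * (10.0 + 0.0) + 0.5 * (-10.0 + 0.0))  # 0.0 under either policy
```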

Even if we were to defer to the risk attitudes (rather than the interests) of those our policies affect, why should we give more weight in particular to the most risk-averse attitudes? Pettigrew motivates this idea through an example:

(A Hiker’s Choice) A lead hiker has to decide whether to continue climbing in bad weather, risking serious harm for the sake of a great view from above the clouds. They are unable to consult other hikers roped up to them. But a few of the other hikers are known to be risk-averse enough that they would prefer to descend.

We share Pettigrew’s intuition that the lead hiker should descend. But we are unconvinced that this is best explained by a duty to give more weight to the attitudes of their risk-averse companions. Rather, it could be explained by the more familiar principle that we should not expose others to significant risks of harm without their consent.

In summary, then, we do not believe that there is a general duty to adopt risk-averse attitudes when our actions affect others. People may have risk-averse attitudes, but these may not reflect their interests. Even if they did, the case for adopting the most risk-averse attitudes in the group of those affected, rather than, say, taking an average, is undermotivated.

4. Non-identity

The idea of adopting the aggregate risk-attitude of all affected seems not only undermotivated, but also unworkable when our choices impact future people. This is because our policy choices can affect who will exist, and what their risk attitudes are. For example:

(Earlier or Later?) You can have a child in your turbulent twenties. The child is equally likely to have a very happy life or a very unhappy life. But they would grow to seek stability and thus become risk-averse. You can alternatively have a child in your stable thirties. This would be a different child who is certain to have a merely satisfied life. But they would grow to despise stability and thus become risk-seeking. All else is equal.

Your choice is thus between having a Timid child and exposing it to risks, or having a Bold child and not exposing it to risks, as in the following table, where a blank denotes nonexistence.

| | Timid child (born earlier) | Bold child (born later) |
|---|---|---|
| Have a child in your twenties | equally likely to be very happy or very unhappy | |
| Have a child in your thirties | | merely satisfied for sure |

Let’s assume that, relative to the risk-averse attitude, a 50/50 gamble between a happy life and an unhappy life is less valuable than the certainty of a merely satisfied life, but, relative to the risk-seeking attitude, that gamble is more valuable than the certainty of a merely satisfied life.

So, if you have the child earlier, you should be more timid because the people affected will be; but then you should have had the child later. If, on the other hand, you have the child later, you should be bolder because the people affected will be; but then you should have had the child earlier. So, whatever you choose, it will be the case that you should have chosen something else! This is a very puzzling result.
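The instability can be made explicit with numbers, again under purely illustrative assumptions: very happy = +10, very unhappy = −10, merely satisfied = +1, and the same toy risk functions as before.

```python
# Illustrative welfare levels (our assumptions): very happy = +10,
# very unhappy = -10, merely satisfied = +1.

def reu(outcomes, r):
    ordered = sorted(outcomes, key=lambda pu: pu[1])
    probs = [p for p, _ in ordered]
    utils = [u for _, u in ordered]
    value = utils[0]
    for i in range(1, len(ordered)):
        value += r(sum(probs[i:])) * (utils[i] - utils[i - 1])
    return value

risk_averse = lambda p: p ** 2     # the Timid (earlier) child's attitude
risk_seeking = lambda p: p ** 0.5  # the Bold (later) child's attitude

earlier = [(0.5, 10.0), (0.5, -10.0)]  # Timid child: very happy or very unhappy
later = [(1.0, 1.0)]                   # Bold child: merely satisfied for sure

# Defer to the attitude of whichever child your choice brings into existence:
print(reu(earlier, risk_averse), reu(later, risk_averse))    # -5.0 < 1.0  -> should have chosen later
print(reu(earlier, risk_seeking), reu(later, risk_seeking))  # ~4.14 > 1.0 -> should have chosen earlier
# Whatever you choose, the risk attitude your choice creates condemns that very choice.
```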

This puzzle arises because our actions might affect which risk-attitudes are instantiated in the population. In these cases, adopting the aggregate risk-attitude of all affected threatens to become unworkable. So, even if we had some reason to take other people’s risk-attitudes into account elsewhere, cases in which our actions affect future people would likely be an exception.

This puzzle is similar to classic puzzles of preference change, where one’s choices affect how one evaluates those very choices – a kind of situation familiar from important life decisions such as deciding where to live, what to do, or whether to have children. Pettigrew himself has written about such problems (Pettigrew 2020). Solutions have been proposed for preference change puzzles (Bykvist 2006), but unfortunately they do not work in the case of risk-attitudes, as we explain in the paper.

5. Extinction

Even if we doubt Pettigrew’s particular argument for risk aversion, we might still happen to be risk-averse. A philanthropist or policymaker might really want to avoid doing something that leads to a very bad outcome, for some reason other than deferring to the risk-averse attitudes of the people affected. In this case, is Pettigrew correct that, given longtermist claims about the importance of future welfare, they should invest in hastening human extinction?

We think not. Pettigrew asks us to consider four possible futures, which we could illustrate like this:

Long Happy     |
Middling       |  Extinction
Long Miserable |

Pettigrew’s point is that risk aversion should lead longtermists to want to avoid the long miserable future, even at the cost of foregoing a long happy future. In terms of this table, Pettigrew suggests that on these grounds it might be preferable to shift probability mass from left to right – towards extinction – rather than from right to left – away from extinction.

We point out two issues with Pettigrew’s argument. First, working to hasten human extinction may not be the most effective way to reduce the risk of a long miserable future. It could be more effective to directly mitigate threats that could make the long future miserable, such as the danger of permanent totalitarianism or the persistent moral atrocities of factory farming, war, and poverty. Even if avoiding a long miserable future is the overriding priority, it will be preferable to shift probability mass towards non-miserable possibilities rather than non-long possibilities (that is, extinction). This direct approach also promises to be more feasible and robustly good, given its potential collateral benefits to present people.

Second, increasing the risk of extinction could plausibly increase the risk of a long miserable future. Voluntary human extinction seems unachievable, as conceded by Les Knight, a real-life pro-extinctionist from the Voluntary Human Extinction Movement (Buckley 2022). The policies we might have to pursue to effectively extinguish all human life, such as rapid exploitation of fossil fuels, creation of powerful weaponry, or expedited progress towards advanced artificial intelligence, could also increase the risk of a long miserable future. Human life would be worse in a warmer world without fossil fuel reserves, wars cause suffering and reduce the potential for international cooperation, and advanced artificial intelligence might disempower humanity even if it does not extinguish it. So, it might be impossible to shift probability mass towards extinction without shifting some to the possibility of the miserable long future.

Therefore, even if longtermism plus risk aversion implies that reducing the risk of a long miserable future is an overriding priority, working to hasten human extinction may not be an effective way of reducing that risk.

6. Conclusion

Pettigrew’s argument threatens to take longtermism to even weirder – and more dangerous – places than it typically goes. The thought that a concern for the welfare of future people would motivate action to end humanity is startling – startling enough, Pettigrew suspects, to make us revise such longtermist concern. We find his argument for risk aversion undermotivated and unworkable when applied to future people. Moreover, even granting risk aversion, longtermist concern does not necessarily support working to hasten human extinction. The idea that human extinction should be hastened is not only counterintuitive, but might also be dangerous. Realistic attempts to realise it would likely involve violence in the present and risk increasing suffering in the future. It is therefore important to emphasise that the reports of a philosophical case for working to hasten humanity’s demise have been greatly exaggerated.

This post is an adaptation of our article published in The Monist, available under a Creative Commons BY-NC 4.0 License which allows for modifications. For the original article and license details, visit https://doi.org/10.1093/monist/onae004.

References

Allais, Maurice 1953. “Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’ecole americaine,” Econometrica 21(4): 503–46. doi:10.2307/1907921.

Blessenohl, Simon 2020. “Risk Attitudes and Social Choice,” Ethics 130(4): 485–513.

Buckley, Cara 2022. “Earth Now Has 8 Billion Humans. This Man Wishes There Were None,” The New York Times, November 23.

Bykvist, Krister 2006. “Prudence for Changing Selves,” Utilitas 18(3): 264–83. doi:10.1017/S0953820806002032.

Kowalczyk, Kacper & Nikhil Venkatesh 2024. “Risk, Non-Identity, and Extinction,” The Monist 107(2): 146–56. doi:10.1093/monist/onae004.

Nebel, Jacob M. 2020. “Rank-Weighted Utilitarianism and the Veil of Ignorance,” Ethics 131(1): 87–106. doi:10.1086/709140.

Pettigrew, Richard 2020. Choosing for Changing Selves. Oxford University Press.

Pettigrew, Richard 2022. “Effective Altruism, Risk, and Human Extinction,” Global Priorities Institute, GPI Working Paper No. 2-2022, Version 1.0. https://globalprioritiesinstitute.org/wp-content/uploads/Richard-Pettigrew-Effective-altruism-risk-and-human-extinction-January-2022.pdf.

Pettigrew, Richard 2024. “Should Longtermists Recommend Hastening Extinction Rather Than Delaying It?” The Monist 107(2): 130–45. doi:10.1093/monist/onae003.

