Academics review What we owe the future (Part 3: Rini on demandingness, cluelessness and inscrutability)

Effective altruism has been highjacked by increasingly esoteric and speculative routes to “effectiveness”. Over the past decade a movement that began fighting blindness-inducing parasites and providing anti-malarial bed nets has shifted its attention to sci-fi “existential risks” such as bioengineered weapons, planet-killer comets or a still-very-hypothetical superintelligent computer wiping us out.

Regina Rini, “An effective altruist? A philosopher’s guide to the long-term threats to humanity”.

1. Introduction

This is Part 3 of the series “Academics review What we owe the future”. This series looks at reviews of Will MacAskill’s book What we owe the future written by professional academics.

Part 1 looked at Kieran Setiya’s review, focusing on population ethics.

Part 2 looked at Richard Chappell’s review, focusing on the value of future people, longtermism and total utilitarianism, and existential risk.

Today’s review is written by Regina Rini. Rini holds the Canada Research Chair in Philosophy of Moral and Social Cognition at York University. Rini is the author of The ethics of microaggression as well as numerous scholarly articles in ethics, epistemology and allied fields.

Rini is also a prodigious author of public philosophy, with a regular column at the Times Literary Supplement. Her review there, “An effective altruist? A philosopher’s guide to the long-term threats to humanity”, presents a rich, informative and balanced take on MacAskill’s book. Let’s take a look at some highlights of Rini’s review.

2. Praise for MacAskill

Rini finds much to like in What we owe the future, and I agree with her on all counts. Rini expresses particular admiration for the amount of money that MacAskill and other effective altruists have raised to benefit the poor.

it’s hard to doubt that MacAskill and his band of merry utilitarians have done more good for needy people than you or I have.

Rini also appreciates the book’s prose, as well as the amount of empirical research that informs it:

MacAskill’s command of factual detail is admirable. So are his lightness of prose and facility in explaining tricky arguments.

MacAskill and the rest of his team deserve a good deal of credit for packing so much detail into a lucid, public-facing book. Finally, Rini pushes back on the idea of effective altruism as an unthinking slave to the logic of capital:

MacAskill is no corporate shill. Regarding climate change, for instance, he writes that instead of worrying about plastic straws or personal carbon footprints, “what we actually need is for companies like Shell to go out of business”.

This last point is something that I wish had been emphasized more in previous discussions of the institutional critique of effective altruism. While it is often fair to accuse effective altruists of unusually close ties to the logic and checkbooks of capitalist institutions, there are moments when rifts begin to appear, and this is one of those moments.

However, Rini does raise a few challenges to MacAskill’s arguments.

3. Demandingness

All of us, when we write for public consumption, tone down our strongest and most controversial views. But it is important to be honest about points where What we owe the future may express weaker views than many effective altruists, including MacAskill, sometimes advocate elsewhere.

For example, consider questions about the demandingness of longtermism: just how much sacrifice would we today be required to make in order to act in accordance with longtermism? As Rini notes, MacAskill downplays the demandingness of longtermism:

MacAskill dismisses the most obvious objection to his project – making sacrifices for all those trillions of future people is just too demanding – with the assurance that we don’t have to do much. He is careful to say that we are allowed to weigh the interests of future people less than our own (otherwise they would immediately overwhelm us in the moral arithmetic), but he avoids committing to any specific multiplier. In fact, in an appendix he insists that the costs of what we owe the future are “very small, or even nonexistent”, since most of the things we need to do – disaster preparedness, climate-change mitigation, scientific research – are things we will want to do for ourselves anyway.

That is certainly a view that can be held. I know serious philosophers who hold it. But it is far from obvious. Notice the things that MacAskill mentions (disaster preparedness, climate mitigation, scientific research), and compare them to some things that MacAskill does not mention:

  • Patient philanthropy: Invest resources now, and spend them later once they have grown many times larger through patient investments.
  • Mitigating present existential risks: For example, risks from artificial agents or engineered pandemics.

Each of these strategies might consume a good deal more of our resources. Depending on background assumptions, it can be optimal for patient philanthropists to save a large fraction of society’s resources over many generations. And if pessimists are correct that we face very high levels of existential risk today, then on some views accepted by many effective altruists (for example, the Time of Perils Hypothesis) it will turn out optimal to spend a very large fraction of our resources right now tackling existential risk. These considerations suggest that longtermism may be a bit more demanding than MacAskill lets on.
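To see why patient philanthropy can matter so much, here is a minimal sketch of the compound-growth arithmetic behind it (the endowment size and the 5% real return are placeholder assumptions of mine, not figures from MacAskill, Rini or the patient philanthropy literature):

```python
# Minimal sketch of the compound-growth arithmetic behind patient
# philanthropy. All figures are hypothetical assumptions for illustration.

def patient_value(initial: float, annual_return: float, years: int) -> float:
    """Value of resources invested now and spent only after `years`."""
    return initial * (1 + annual_return) ** years

endowment = 1_000_000   # a $1M gift today (hypothetical)
real_return = 0.05      # assumed 5% real annual return on investments

for horizon in (10, 50, 100):
    grown = patient_value(endowment, real_return, horizon)
    print(f"After {horizon:>3} years: ${grown:,.0f} "
          f"({grown / endowment:.1f}x the original gift)")
```

Whether patience is actually optimal depends on further assumptions (expropriation risk, how quickly the best giving opportunities disappear, one’s discount rate), but the sketch shows why saving rather than spending is a live option that MacAskill’s list leaves out.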

4. Regression to the inscrutable

One of the most frustrating moves in recent discussions of existential risk is what I have termed the regression to the inscrutable. Effective altruists began by discussing risks such as asteroids, nuclear war and super-volcanoes that are to some extent tractable using traditional empirical methods. Careful research revealed that most of these risks were not very high.

As a result, research shifted to a range of increasingly esoteric and inscrutable threats, such as the following:

  • Rogue superintelligence: A self-improving artificial agent becomes orders of magnitude more intelligent than humanity. Because its motivations are not aligned with our own, the agent then exterminates humanity as a byproduct of pursuing its own goals.
  • Sophisticated omnicidal bioterrorism: An omnicidal agent gets hold of a sleeper virus that is highly transmissible and 100% lethal. She successfully spreads this to every living human, even those living in submarines, space stations and far-away research outposts. She then activates the virus and destroys humanity.

The frustration for critics of effective altruism is that it is agreed on all sides that these challenges are highly inaccessible to traditional, or even slightly nontraditional, forms of empirical research. As we regress further into the realm of inscrutable threats, there is increasingly little that can be said using traditional methods. And critics of effective altruism, who are often skeptical of more speculative methods, think that should be the end of the matter.

As a consequence, when effective altruists place high confidence in inscrutable risks, there is just not much more that can be said than “I don’t see the evidence for that”. Debate breaks down, or devolves into the hurling of epithets (“It’s all just science fiction” or “The rapture of the nerds”).

This is, of course, no way to speak to one another. But sometimes it is not clear what else to say. Here it is revealing that Rini complains of a regression to esoteric risks:

Effective altruism has been highjacked by increasingly esoteric and speculative routes to “effectiveness”. Over the past decade a movement that began fighting blindness-inducing parasites and providing anti-malarial bed nets has shifted its attention to sci-fi “existential risks” such as bioengineered weapons, planet-killer comets or a still-very-hypothetical superintelligent computer wiping us out. 

Effective altruists will no doubt protest that Rini gives no reason to think these risks are sufficiently improbable. And they will probably note that the gently lobbed epithet (“sci-fi”) does not amount to an argument. That’s fair enough. But what else is Rini to say?

I’ll try to do a bit better on this blog. I’ll do that by responding in detail to some existing arguments for existential risk from bioterrorism and artificial intelligence. But readers must understand that I, like Rini, am not always sure how much more there is to say about inscrutable risks. This brings us to our next topic.

5. Cluelessness

James Lenman introduced the cluelessness problem for consequentialism: when we look far enough into the future, we seem to have no clue what the consequences of our actions will be. This is especially troubling for longtermists, since if we are clueless about the long-term effects of our actions, those long-term effects may assume less weight in ex ante decision-making, and long-term effects are precisely what longtermism asks us to prioritize.

Rini rightly pushes the cluelessness problem as a challenge to longtermism:

The problem is incredibly simple: we have no idea what the future will be like. Our trying to anticipate the needs of star-faring people millions of years hence is no more realistic than Pleistocene chieftains setting aside sharpened flint for twenty-first-century spearheads. In fact, this problem is a familiar one in philosophy, afflicting any outcomes-focused moral theory. The philosopher James Lenman made the point memorably in an essay called “Consequentialism and Cluelessness” (2000). Imagine you are an ancient German warrior with the opportunity to kill Hitler’s distant ancestor, ensuring that he will never be born and the Holocaust will never happen. How could you possibly know that doing so won’t cause the birth of an even worse genocidal maniac?

Longtermists have explored the cluelessness problem in a number of places, including Hilary Greaves’ “Cluelessness” and Andreas Mogensen’s “Maximal cluelessness”. Recently, many longtermists have stumbled onto an ingenious solution to the cluelessness problem: focus on short- and medium-term threats with long-term impacts.

For example, suppose you think that artificial intelligence poses a significant existential risk to humanity today. That’s a short- or medium-term threat, and you might think that we are less-than-fully-clueless about the nature of the threat, as well as what can be done to mitigate it. We may still be clueless about the precise nature of the long-term effects our actions will have, since we don’t know much about the future that we are saving. But we may perhaps be justifiably confident that the future is worth saving. And that confidence, combined with a lack of cluelessness about threats posed by artificial intelligence, gives us a way around the cluelessness problem.
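To make the structure of this workaround explicit, here is a toy expected-value sketch (the numbers are placeholder assumptions of mine, not anyone’s published risk estimates):

```python
# Toy sketch of the "less-than-fully-clueless" workaround.
# All numbers are placeholder assumptions, not actual risk estimates.

v_future = 1.0    # expected value of the long-term future, normalized to 1;
                  # we assume only that it is positive (worth saving)
delta_p = 0.001   # assumed reduction in near-term extinction probability
                  # from some concrete mitigation effort

# Because the intervention targets a short-term, partially tractable threat,
# its expected long-term payoff can be computed without knowing which
# particular future is thereby saved:
expected_gain = delta_p * v_future
print(f"Expected gain: {expected_gain:.4f} (in units of the value of the future)")
```

Cluelessness about which future we get need not infect the decision, so long as we are not clueless about the sign of v_future or the size of delta_p. Of course, skeptics will deny precisely that we can estimate delta_p for inscrutable threats.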

Rini notes that MacAskill at times follows exactly this strategy:

Anxiety about superintelligent AI … answers our cluelessness problem. Our moment in history is special, MacAskill says, because we’ve come late enough to see the basics of what AI will be, but early enough still to change it. That means we have both knowledge and power at a historical pivot, though our understanding may be limited. The main thought is simple: if you see the jar wobbling on the edge of the kitchen table, then this is the moment to act, even if you don’t know exactly where the marinara sauce would have ended up. So maybe we aren’t so clueless after all. And maybe, by advocating AI safety, longtermism really does have something distinctive to say.

That’s a good strategy for longtermists. But now we have arrived back at our previous problem: after the regression towards inscrutable threats such as superintelligent AI, there is not terribly much for opponents to say except that they do not believe the threats.

With this in mind, I think that Rini’s response, while perhaps a bit disappointing, will make more sense:

Perhaps this moment really is as historically significant as the emergence of writing. But even then I expect that AI will turn out to be much less familiar than our speculations allow. Whatever we assume now is almost certainly just uninformed projection of our own desires and fears. Perhaps twenty-sixth-century historians (who might be software themselves) will chuckle at archival traces of our superintelligence anxieties, how in the generations following Nietzsche’s obituary for God, we couldn’t resist finding a new divine image in the Cloud.

We really have no idea, none at all.

Many effective altruists will be poised to object that Rini hasn’t made much of an argument for skepticism about AI risk here; it is more an extended assertion than an argument.

I would tend to agree. But as we examine the case for AI risk in my series “Exaggerating the risks”, I will argue that at crucial junctures of the dialectic, effective altruists may not have much of an argument either.

6. A worthy heir to Parfit’s legacy

That would be a depressing note to end on, so let me end with a more uplifting line of questioning due to Rini:

Let’s do one last isolation test. Is our world better off for containing William MacAskill? (This would be a pretty brutal way to end a review of any other author, but I’m hoping this one will appreciate it.) I say yes. This book isn’t what it could be, and it shouldn’t distract us from more realistic problems. But MacAskill is a worthy heir to Derek Parfit’s philosophical legacy, adding deep factual research and accessible writing to a provocative line of thought. This has the makings of a decades-long debate, with time for longtermism to grow in nuance. Or so I hope. Whatever else may come, I expect these ideas will be around for the foreseeable future.

Over to you.
