Billionaire philanthropy (Part 1: Introduction)

Philanthropy is a form or exercise of power. In the case of wealthy donors or private foundations, especially, it can be a plutocratic exercise of power, the deployment of vast private assets toward a public purpose … big philanthropy is often an unaccountable, non-transparent, donor-directed and perpetual exercise of power. This is something that fits uneasily, at best, in democratic societies.

Rob Reich, Just Giving

1. Introduction

In the beginning, effective altruists were far from wealthy. They advocated earning to give as a means of building philanthropic capital.

Within a few years, the movement had secured the backing of billionaire philanthropists, often based in Silicon Valley.

The influx of billionaires coincided with some major shifts in the effective altruism movement:

  • From short-termist causes (global health, poverty, animal rights) to long-termist causes (existential risk, moral progress, etc.).
  • From old geographical homes (Oxford, Cambridge) to new locations (Silicon Valley, the Bahamas).
  • From fiscal tightness (donations allocated on the basis of cost-effectiveness analysis and randomized controlled trials to only the highest-performing organizations) to fiscal looseness (large grants made to a number of up-and-coming organizations with no proven track record).

The collapse of FTX, and the corresponding revelations about its founder Sam Bankman-Fried, have raised questions about the role of billionaires within effective altruism and about their influence both within the movement and in society at large.

In this series, Billionaire philanthropy, I will examine in depth some of the most challenging questions raised by the role of billionaires within effective altruism, and propose some policy changes.

Here is a non-exhaustive preview of the questions I will ask.

2. Motivations

What drives billionaire philanthropists? Are they genuine altruists, or are they instead driven by selfish motivations (promoting their own image; avoiding taxes)? Do they want to give effectively, to the best charities regardless of cause area, or are they driven to allocate funds to the areas closest to their own hearts?

These questions took center stage when it was revealed that Sam Bankman-Fried may not have been driven by altruism. In an interview with Vox, SBF made a series of candid remarks about his ethical commitments to effective altruism.

To what extent are billionaire philanthropists driven by genuine altruism? Or is their giving better explained by:

  • Reputation preservation: “It’s what reputations are made of”?
  • Cynical nihilism: “This dumb game we woke westerners play where we say all the right shibboleths and so everyone likes us”?
  • A sticky, horrible mixture of ethics and self-interest: “yeah … I mean that [self interest] is not all of it. but it’s a lot.”?

And how should the answer to these questions change the ways that we engage with billionaire philanthropists?

3. Democracy

What is the proper role of billionaire philanthropy in democratic societies?

A fundamental principle of democracy is that we should strive to give all citizens an equal voice in the allocation of public goods. This is, of course, an unachievable ideal, but we may still balk at strong departures from it.

It is well known that billionaires use their wealth to wield inordinate political influence. Indeed, it has emerged that SBF donated large sums to American politicians, and he appears to have given generously to both major parties.

This is not the first attempt by effective altruists to use their wealth to influence politics, and it will not be the last. But to what extent should we allow philanthropic wealth to influence democratic decisionmaking? What checks, balances and regulations should be placed on philanthropy?

According to many billionaires, such regulations are unnecessary.

But should we believe them?

4. Tax policy

Why shouldn’t billionaires be allowed to spend their own money, you ask? After all, they earned it.

It’s not quite that simple. Tax policy amplifies the philanthropic spending power of billionaires by exempting them from taxes on donated income, capital gains in philanthropic foundations, property taxes on philanthropic assets, and other taxes to which they would otherwise be subject. As Rob Reich has emphasized, this amounts to a massive public subsidy of billionaire philanthropy.
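
To illustrate with round, hypothetical numbers (the exact rates vary by jurisdiction and year): suppose a donor faces a 40% marginal income tax rate and donates $1 million in fully deductible income. The deduction reduces her tax bill by

\[
\$1{,}000{,}000 \times 0.40 = \$400{,}000,
\]

so the donation costs her only $600,000, with the remaining $400,000 in forgone tax revenue effectively supplied by the public. Similar arithmetic applies to the capital gains and property tax exemptions.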

If public money is being used to supplement billionaire philanthropy, then it is no longer obvious why the public should have no say in how that money is spent.

What changes, if any, should be made to tax policy with regard to philanthropy? And until those changes are made, what kind of influence can the public expect to have on philanthropic behavior? These are some of the questions I will ask about philanthropy and taxation.

5. Sources of wealth

Why shouldn’t billionaires be allowed to spend their own money, you ask? After all, they earned it.

It is very hard to become a billionaire. Many of those who do have acquired their wealth in unsavory ways. In the case of SBF, there are serious allegations of fraud, and these allegations have been echoed even within the EA community. (Holden Karnofsky, CEO of Open Philanthropy, writes that “I now think it’s very likely that FTX engaged in outrageous, unacceptable fraud.”). (Update: Charges have now been filed).

This is not the first time that billionaire donors to EA have been accused of criminal business dealings. Ben Delo publicly committed to giving much of his money to EA causes, but soon thereafter found himself convicted on charges related to “willfully failing to establish, implement, and maintain an anti-money laundering … program”.

Wealth acquired through fraud, or by turning a blind eye to money laundering, is not justly acquired. Effective altruists need to take a look at the sources of philanthropic wealth used to fund their activities, to ensure that this wealth is legitimate and acquired in ways that are not harmful to the world.

Beyond outright fraud, there are other questions about the acquisition of philanthropic wealth. For example, what is the role of Silicon Valley entrepreneurs in driving climate change? Given that role, can they justly refuse to finance climate mitigation on the grounds that it is not a longtermist cause? These are some of the questions about the sources of philanthropic wealth that I will examine.

6. Permissible donor influence

Philanthropists are often given wide latitude over the manner in which their money is spent. We need to think carefully about whether wealthy donors steer donations towards the most effective causes, or whether the money would be more effectively spent if it were allocated on the basis of impartial reasoning and evidence.

By way of illustration, SBF exerted a substantial influence over the shape of effective altruism by establishing the FTX Foundation. The foundation awarded hundreds of millions of dollars in funding, which helped to steer the EA community further towards a specific set of longtermist causes.

Some of the projects funded may not be uncontroversially effective. For example, the FTX Foundation offered a 1.5 million dollar prize to anyone who could convince them that artificial intelligence is unlikely to kill us all in the near future. Is that really the best use of philanthropic capital?

7. The political theory of philanthropy

Another lesson gleaned from Rob Reich and his students is that philanthropy is a political practice, shaped by political and social rules, norms and institutions.

Throughout history, philanthropy has taken many different forms. In ancient Athens, philanthropy could be compulsory: wealthy citizens could be compelled by the state to fund a trireme or other military and civic goods. In other societies, philanthropy has been much more tightly regulated, subject to greater disbursement requirements, regulatory oversight, and limitations on the size and temporal duration of foundations.

As billionaires increasingly use philanthropic giving to magnify their influence, we need to ask whether it is time to rethink the role of billionaire philanthropy in society, rewriting the rules to make sure that philanthropic influence is used fairly, effectively, and in a way that does not disempower ordinary citizens.

8. Looking ahead

That’s all for now. Let me know in the comments section what you think about this new series. Are there other topics connected to billionaire philanthropy that you would like to hear about? What should we think about SBF and the FTX collapse? Over to you.

Comments

8 responses to “Billionaire philanthropy (Part 1: Introduction)”

  1. curiouskiwicat

    I am moderately confident that when SBF referred to ethics as a “front” and a “dumb game” he was referring to ethics about crypto and the sorts of regulations we ought to have in that space to protect people from harm. I do not think he was referring to longtermism. He seemed blithely dismissive of ordinary moral intuitions, including e.g. biting the bullet on the St Petersburg paradox. That was the particular style of his longtermist ethic, and it strikes me as almost contrarian and certainly isn’t the sort of “right thing” or shibboleth you say to “get people to like” you. On the other hand, if you’re projecting the image of a responsible crypto guy or a responsible anyone who will be taken seriously among Washington Democrats, you’d better say the right things on crypto regulation.

    1. David Thorstad

      Thanks! I’m curious to get your read on SBF as well.

      I take it almost everyone thinks that SBF came off worse in that interview than he usually does. (It’s hard to imagine a worse interview).

      I take it that someone who is alleged to have perpetrated large-scale fraud, and who used their association with effective altruism to build up their personal image as a trustworthy businessperson, should probably not be presumed by default to always have (or act on) the best ethical views. And I take it that some suspicion might be warranted towards this person’s later public remarks on morality.

      I’m not always sure how to read SBF. I haven’t met him. I suspect he’s probably a better guy than the Kelsey Piper interview made him out to be, and a worse guy than most of the world thought he was a few months ago. Perhaps one thing I want to urge in this series is that understanding the motivations of billionaire philanthropists is hard: we really can’t assume, just because they make the right moral statements in public or even donate money to charity, that they believe in what they are doing or saying.

      1. curiouskiwicat

        OK. For clarity, biting the bullet on the St Petersburg Paradox seems to me a very dangerous (certainly in hindsight, but I listened to that comment on Tyler Cowen’s podcast in the first half of 2022 and it startled me even at the time) and probably wrong ethical view to have. My intuitive read on SBF is that he was genuinely committed to maximizing expected value in the long term. That doesn’t entail me thinking he had (let alone was acting on) the “best ethical views” because I think those views were quite harmful.

        The reason it’s important to me to point this out is that, while SBF probably did a bunch of reckless things not even the most committed and consistent longtermist could rationally endorse, I think those longtermist views probably also contributed to the motivation to do those reckless things. It’s important we don’t dismiss _that_ by default, given that there is a whole community of longtermists out there promoting the practice of acting on longtermism.

    2. David Thorstad

      Replying here to your previous comment (long story short: I’m still working out some technical kinks with nested comments).

      Thanks again for your thoughtful comment! Yeah, remarks like the one you quote about St Petersburg should have been glaring red flags in hindsight. A longtermist movement really should not want to take on as much risk as SBF seems to have been comfortable with.

      I think you are definitely right that longtermism may have played a role in this risk-taking, by inflating the possible payoffs of the risks. Richard Pettigrew has a nice paper on how risk-aversion might soften some longtermist claims (https://globalprioritiesinstitute.org/effective-altruism-risk-and-human-extinction-richard-pettigrew-university-of-bristol/). He also cross-posted this paper on the EA forum a few months back.
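
      To make the worry concrete (this is my own gloss, not Pettigrew’s formulation): in the St Petersburg game, a fair coin is flipped until it lands heads, and the payoff when heads first arrives on flip n is 2^n dollars. A pure expected-value maximizer should therefore pay any finite price to play, since

      \[
      \mathbb{E}[\text{payoff}] = \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty.
      \]

      Swap the monetary payoff for an astronomically large future value and the same logic will endorse staking everything on a vanishingly small chance of success, which is exactly the style of reasoning that should have worried us.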

  2. kellerscholl

    I found this post *frustrating*.

    You talk about the motivation of “billionaires”, as an abstract class. But it’s a handful of people!

    Moskovitz & Tuna: the repeated public statement of everyone involved is that Moskovitz actively defers to the staff at Open Phil and does not exert influence on spending.

    Delo, a flash in the pan I didn’t even remember before he went down for crimes.

    Bankman-Fried, who’s stated a few different interpretations of his motives.

    Ellison, who has never talked publicly about her philanthropic giving strategy or framework, and gave the impression of being a relatively normal EA, just with more money.

    I don’t see how it could be useful to try to discuss the abstract class.

    Instead of making concrete statements, the post has leading questions, or unsupported assertions. “A fundamental principle of democracy is that we should strive to give all citizens an equal voice in the allocation of public goods.” This isn’t…accurate? It isn’t a foundational principle for leading modern thinkers of democracy: I don’t agree with Habermas on much, but the idea that the point is shared deliberation is a common framework.

    To pick one obvious example, some goods will affect some citizens more than others. It is at the very least not a *contradiction* of the democratic ideal that I think is commonly understood to say that homeless people should have more say over where and how resources aimed at supporting them are allocated. “nothing about us without us” is not intended to say “without us, in the sense that we will be represented proportional to our fraction of the population”.

    To pick another, democracy and expertise certainly exist in uneasy tension, but it does not seem absurd to say that some questions, some of which will be allocative, are best determined by either elected representatives or appointed experts. I am not saying that it is a bad principle: I’m saying it’s not a fundamental principle of democracy.

    Returning to the point about leading questions that don’t defend themselves, the role of SV entrepreneurs in climate change? The role of Texan entrepreneurs, and more importantly of state-owned enterprises like Saudi Aramco (and Russian and Venezuelan equivalents), and the conventional business space in general, seems much more obvious. The net impact of silicon valley on climate change…get points for agricultural improvements, for spreading solar power and electric vehicles. Lose points for bitcoin and ordinary consumption. Geoengineering depends on your PoV. I’m probably (70%) missing the most important factor, which is fine: it’s a very complicated topic and multiple books could easily be written about the impact of SV on each sector. But the post just…asks the question, then assumes the answer to ask a new question, without justifying the point.

    Perhaps I’m walking in with too many unshared assumptions, and it’s always easy to ask someone to make a post longer and respond to all possible criticisms. But right now, I don’t think that this is as strong as it could be. I hope to see resolutions to more of the uncertainties in the upcoming posts, which I am looking forward to. Thank you for writing this!

    1. David Thorstad

      Hi Keller,

      It’s good to hear from you!

      This post is the introduction to a series on billionaire philanthropy. The purpose of this post is not to offer an extended exposition of any particular point, but rather to offer a short preview of some of the points to be discussed later in the series.
      Part 2 (https://ineffectivealtruismblog.com/2022/12/29/philanthropy-and-democracy/) of the series deals with the relationship between philanthropy and democracy. Part 3 (coming soon) deals with patient philanthropy.

      ((On billionaires as individuals))

      Billionaires are people, as are landlords, violinists, street sweepers and college professors. Sometimes it makes sense to speak about individual people: for example, you might say that your Topology professor was a twat (not mine: he was great!). Sometimes it makes sense to speak about groups of people: for example, conservatives complain (not entirely without justification) that college professors in many countries lean liberal.

      As you say, there have been only a few billionaires associated with effective altruism. At least two (Delo, Bankman-Fried) have been charged with serious criminal conduct. That is compatible with thinking that some (Moskovitz, Tuna) are better people. In fact, I rather hope that they turn out to be of impeccable character, since at present they are carrying a very large portion of the effective altruism movement on their backs. But even so, two rotten apples from a small basket is a concerning number. It’s the kind of number that would motivate an agricultural inspector to think about dumping the basket.

      ((On democratic ideals))

      There are various interpretations of democratic ideals. Not everyone agrees on what they are. Almost everything that I say about philanthropy and democracy in Part 2 could be redone in a framework of shared deliberation. For example, we might say that philanthropic decisionmaking puts the allocation of large pots of public goods outside the reach of shared deliberation by society at large. We might say that this introduces a plutocratic bias into public goods allocation that would be lessened if goods were allocated through shared public deliberation. And we might say that processes of philanthropic decisionmaking are less transparent and accountable than processes of public deliberation are.

      ((On emissions and reparative justice in Silicon Valley))

      Focusing specifically on cryptocurrency (the source of wealth for two major EA donors, Bankman-Fried and Delo), my understanding is that scientists are nearly unanimous in taking cryptocurrency to be a significant and concerning source of carbon emissions. See for example this piece in Nature Sustainability: https://www.nature.com/articles/s41893-018-0152-7?from=body.

      More broadly, while there has doubtless been progress in many companies, it is usually thought that most tech companies today are significant emitters of un-offset carbon. See for example this piece from Tech Monitor: https://techmonitor.ai/leadership/sustainability/tech-industry-carbon-emissions-progress.

      A broad range of scholars, activists and ordinary citizens have called for tech billionaires, like oil moguls, to pay to offset the harms caused by the industries that built their wealth. You are quite right that the class of guilty parties extends far wider than tech billionaires here. Theories of reparative justice would hold that many billionaires have a pressing obligation to pay to offset some or all of the climate damages caused by their companies.

  3. Chris Leong

    “ For example, the FTX Foundation offered a 1.5 million dollar prize to anyone who could convince them that artificial intelligence is unlikely to kill us all in the near future. Is that really the best use of philanthropic capital?”

    I’m surprised you’re so skeptical of this prize. I would assume you’d be aware that the community is spending quite significant amounts of money on AI safety research, so if it turned out we were mistaken, then that would save EA a lot of money. On the other hand, if EA were underrating the risk and AGI were almost certain to kill us, $1.5 million would be cheap for discovering that. Also note that the highest prize amounts were only being awarded for causing them to significantly update their views.

    Where does your skepticism come from?

    1. David Thorstad

      Thanks Chris!

      My primary skepticism comes from the fact that most people do not think AI poses anything like the level of existential risk assumed by leading effective altruists. (I will talk about this skepticism later on in my series “Exaggerating the risks”. My current plan is to begin with a review of the Carlsmith report, although I would be interested to hear if there are other arguments for AI risk that readers are more concerned to see discussed).

      The judges offer a $1.5M prize to anyone who can convince them that there is a less than 3% chance of existential catastrophe conditional on AGI being developed by 2070, and $500K to anyone who can convince them the chance is lower than 7%.

      I think there is very little evidence to support risk assignments of this magnitude, so I think that the judges should already have changed their views with no need to award the prize.

      A second avenue for skepticism is that, once we understand how strong the relevant notion of AGI is, I think we should be skeptical of arguments that AGI of this sort will be developed soon. So in particular, I think that a 20% lower threshold on AGI being developed by 2043 ($500K prize) is greatly exaggerated. I talk about one reason for skepticism in my paper “Against the singularity hypothesis”, which I will split out into a separate blog series. The paper is under review for publication in an academic journal, and is available on the GPI website.

      My skepticism overlaps with concerns about the composition of the judging panel. When the contest was launched, I raised the fact that many people think the evidence already supports probability assignments well below the lower thresholds. I asked whether the judging panel might be broadened to include people who hold this view. I suggested that the verdicts of the panel would be unlikely to be viewed as credible, legitimate or fair if the panel were composed entirely of people who think the lower thresholds have not already been met. No response or change was made by the judges.

      A final source of skepticism comes from the sources cited as examples of prizeworthy “significant original analysis” by the judges. They write: “Past works that would have qualified for this prize include: Yudkowsky 2008, Superintelligence, Cotra 2020, Carlsmith 2021, and Karnofsky’s Most Important Century series”.

      I have deep concerns about many of these pieces. Most are unpublished, including some outright blog posts. (I hope that readers enjoy and benefit from my blog, but I would be appalled to find any of my posts on a list of this sort.) Almost all fall well below the standards of rigor, precision and evidence expected of a standard academic journal article in a mid-ranking journal, within a movement populated by people capable of publishing standout articles in the highest-ranking journals. I will discuss some of my specific concerns about many of these sources later on in my series “Exaggerating the risks”. Suffice it to say that we disagree substantially about what it means to bring reason and evidence to bear on questions about the future of artificial intelligence.
