I have said nothing of the splendor of the buildings and of the pomp connected with some of the grand foundations. It would be perhaps to value very favorably the utility of these objects if we estimated them at one-hundredth part of the whole cost.
Turgot, “Foundation”
1. Introduction
This is Part 6 of my series on billionaire philanthropy.
Part 1 introduced the series. Part 2 looked at the uneasy relationship between billionaire philanthropy and democracy. Part 3 examined the project of patient philanthropy, which seeks to grow philanthropic funds into billionaire-scale foundations by investing and holding money over long periods of time.
Part 4 looked at philanthropic motivations. And Part 5 looked at sources of philanthropic wealth.
One thing that is often remarked about wealthy foundations is that they have a tendency to spend lavishly. This tendency fits uncomfortably with effective altruism’s emphasis on efficient spending. If five thousand dollars can save a life today, then a million-dollar project had better be at least as valuable as saving two hundred lives right now.
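To make that arithmetic explicit, here is a minimal sketch, assuming only the approximate GiveWell-style figure of $5,000 per life saved used above (the dollar amounts plugged in are simply the ones discussed in this post):

```python
# Opportunity-cost arithmetic, assuming a GiveWell-style benchmark of
# roughly $5,000 per life saved (the approximate figure used above).
COST_PER_LIFE_USD = 5_000

def lives_equivalent(spending_usd: float) -> float:
    """Lives the same money could save via top-rated short-termist charities."""
    return spending_usd / COST_PER_LIFE_USD

print(lives_equivalent(1_000_000))   # a $1m project ~ 200 lives
print(lives_equivalent(21_000_000))  # ~4,200 lives, cf. the Redwood funding discussed below
```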
Today, I want to look at some places where effective altruists may have lost their focus on efficiency and spent more lavishly than the principles of effective altruism would seem to justify.
2. The era of extravagance: How are we going to spend all of this money?
In the early days, effective altruists were ruthlessly efficient. Causes and charities had to be evaluated through rigorous, in-depth cost-effectiveness analyses before they could be funded, and then only the best were listed as worthy of receiving donations.
Over time, effective altruists attracted a substantial influx of funding – peak estimates in 2021 hovered in the range of $40 billion in pledged funding. As funding continued to pour in, effective altruists lowered their standards. Increasingly, the name of the game was not frugality or efficiency, but rather extravagance and largess.
Reflecting on the failure of large philanthropic foundations such as the Gates Foundation to spend money faster than they make it, effective altruists increasingly called for investment in megaprojects and massively scalable projects – anything at all to absorb the influx of capital. Longtermist funds sometimes accepted more than 50% of the grant applications submitted to them, and seemingly every smart twenty-something effective altruist found themselves with a grant to fund research or entrepreneurial work.
Signs of lofty spending trickled even into more modest endeavors. In late 2021, FTX announced a series of fellowships (“going to the Bahamas,” as it would come to be known in effective altruist circles) which provided all-expenses-paid travel and accommodation, often in luxury hotels, for effective altruists to visit the Bahamas and conduct work related to effective altruism. Calls for high salaries abounded, and effective altruists were increasingly reminded that there is no need to sacrifice a good lifestyle and a six-figure salary in order to do good effectively. The FTX Foundation began handing out over $100m in grants with what many observers would consider to be rather less than the usual level of vetting, deliberation, accounting, and due process. There was even a $1.5m prize for arguments that could shift the foundation’s views about existential risk from artificial agents.
One may well suspect that the bar for funding was considerably lowered during this period, such that projects which would not previously have been funded were now eagerly funded due to the excess of funds. In fact, we do not have to go on suspicion here. Many major effective altruist grantmakers have told us that the bar was lowered.
3. Raising the bar
After the collapse of FTX, effective altruists lost many of their pledged donations. As a result, leading organizations tightened the bar for how valuable projects would have to be before they were funded.
That the funding bar was tightened after a loss of funding is good evidence that it had been loosened by the previous excess of funding, and the announced adjustments give us reasonably precise estimates of how much it was loosened. Keep in mind that effective altruists still have billions of dollars in pledged assets, so the present figures may not tell the full story about how the ready availability of funding has moved the bar.
Open Philanthropy reports this year that only about half of its grants over the past 18 months would meet its revised bar for funding:
So about 40-70% of the grantmaking we did over the last 18 months would’ve qualified for funding under the new bar. I think 55% would be a reasonable point estimate.
Benjamin Todd, a co-founder of 80,000 Hours, gives a similar figure:
The value of additional money to EA-supported object level causes has gone up by about 2x since the summer, and the funding bar has also gone up 2x. This means projects that seemed marginal previously shouldn’t get funded.
The manager of the general longtermism team at Rethink Priorities describes similar trends in qualitative terms:
Recent changes in the EA funding landscape should probably have large ripple effects on our team’s strategy. I’m primarily thinking about a) the large drop in tech stocks across the board since the beginning of this year and b) the FTX crash. While I have not evaluated this rigorously, between the two events, I expect total EA funds since the beginning of this year to have roughly halved twice (a 4x decline).
And in particular, as the money evaporated, enthusiasm for previous megaprojects began to erode. The passage continues:
Relative to my previous estimate when I decided on team strategy roughly in Jan-Feb of this year, this obviously means that megaproject implementations should be a lower priority for EA (except maybe for a few of the most robust or highest impact projects). In turn, this puts a relative damper on excitement for further research into megaprojects, as well as capital-intensive pilots. In recent months, we’ve also received some negative feedback from other sources for unrelated reasons on the “megaproject” framing, and have internally shifted somewhat away from focusing on researching capital-intensive projects before the FTX crash.
These reflections are good to see. It appears that, to some degree, a dose of funding reality has helped many leading effective altruists to see the importance of thoughtful and cost-effective spending. I am not sure whether the movement has fully left wasteful spending behind, but it may be helpful to pause and reflect on some instances where previous spending seemed more extravagant than strictly necessary.
4. Wytham Abbey
In 2022, the Centre for Effective Altruism purchased Wytham Abbey, which has variously been described as a large manor house or a small castle, for the princely sum of 15 million pounds. The usual suspects were not happy.

But this time, skepticism ran a bit deeper. Purchasing a manor house in the name of effectiveness is not a good look. Several community members raised concerns (see here, here, here), and others protested the $3.5 million FTX-backed purchase of a chateau in Hostačov by the European Summer Program in Rationality.
The press was equally uncomplimentary. Here is what Forbes had to say about the chateau in Hostačov:

Even the New Yorker got in on the action:
Last year, the Centre for Effective Altruism bought Wytham Abbey, a palatial estate near Oxford, built in 1480. Money, which no longer seemed an object, was increasingly being reinvested in the community itself.
Others were not happy either. Sam Atis writes:
I’m not saying this was definitely a bad idea, but the justifications often seemed a bit sloppy. Take Geoffrey Miller’s comment on the EA Forum: “I’ve been to about 70 conferences in my academic career, and I’ve noticed that the aesthetics, antiquity, and uniqueness of the venue can have a significant effect on the seriousness with which people take ideas and conversations, and the creativity of their thinking. And, of course, it’s much easier to attract great talent and busy thinkers to attractive venues. Moreover, I suspect that meeting in a building constructed in 1480 might help promote long-termism and multi-century thinking.” And perhaps he’s right. But then again, it does sound awfully convenient that the very best use of EA funds was to buy a lovely castle that lots of EAs would be able to spend a lot of time in. And are we really sure that staying in a lovely building is so good for productivity and attracting talent?
The originator of the Wytham Abbey project, Owen Cotton-Barratt, offered a substantive defense of the purchase:
I’ve personally been very impressed by specialist conference centres. When I was doing my PhD, I think the best workshops I went to were at Oberwolfach, a mathematics research centre funded by the German government. Later I went to an extremely productive workshop on ethical issues in measuring the global burden of disease at the Brocher Foundation. Talking to other researchers, including in other fields, I don’t think my impression was an outlier. Having an immersive environment which was more about exploring new ideas than showing off results was just very good for intellectual progress. In theory this would be possible without specialist venues, but researchers want to spend time thinking about ideas not event logistics. Having a venue which makes itself available to experts hosting events avoids this issue.
That’s fair enough. But at the same time, Cotton-Barratt seems to have opted for one of the more expensive options on the market. He continues:
We wanted to be close to Oxford for easy access to the intellectual communities there … We looked at a lot of properties online, and visited the three properties we found for sale with 20+ bedrooms within about 50 minutes of Oxford. These were all “country houses”, which are commonly repurposed as event venues in England. The other two were cheaper (one ~£6M and one ~£9M at the end of a competitive process; compared to a purchase price for Wytham of a bit under £15M) but needed significantly more work before they were usable, which would have added large expense (running into the millions) and delay (likely years). (And renovation expense isn’t obviously recoverable if one sells — it depends on how much the buyers want the same things from the property as you do.)
Was this a reasonable purchase? Perhaps, but it was probably not the most cost-effective option. Even Cotton-Barratt seems to waver on this front:
We had various calculations about costings, which made it look somewhere between “moderately money-saving” and “mildly money-spending” vs renting venues for events that would happen anyway, depending on various assumptions e.g. about usage that we couldn’t get great data on before running the experiment. The main case for the project was not a cost-saving one, but that if it was a success it could generate many more valuable workshops than would otherwise exist.
Again, fair enough. But we should be honest about what happened here. Wytham Abbey was purchased at the height of a billionaire-backed spending spree in which the bar for funding had been progressively lowered, and considerations such as construction delays or venue quality were allowed to swamp millions of pounds in potential savings, an amount which, on GiveWell’s estimates, could have saved hundreds or even thousands of lives.
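To give a rough sense of scale, here is a back-of-the-envelope sketch. The property prices are the ones Cotton-Barratt quotes above; the renovation figure and exchange rate are illustrative assumptions of mine, not numbers from his analysis, and the per-life benchmark is the same approximate GiveWell-style figure used earlier:

```python
# Back-of-the-envelope opportunity-cost sketch for the Wytham purchase.
# The renovation figure and exchange rate below are assumptions, not
# numbers from the original analysis.
GBP_TO_USD = 1.25              # assumed exchange rate
COST_PER_LIFE_USD = 5_000      # approximate GiveWell-style benchmark

wytham_price = 15_000_000               # £, "a bit under £15M"
alternatives = [6_000_000, 9_000_000]   # £, the two cheaper properties mentioned above
assumed_renovation = 3_000_000          # £, hypothetical stand-in for "millions" of extra work

for price in alternatives:
    savings_gbp = wytham_price - (price + assumed_renovation)
    lives = savings_gbp * GBP_TO_USD / COST_PER_LIFE_USD
    print(f"Alternative at £{price:,}: roughly {lives:,.0f} lives' worth of potential savings")
```

Under these assumptions the foregone savings correspond to somewhere in the high hundreds to low thousands of lives, which is the range the text above relies on.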
5. Redwood Research
As effective altruism becomes increasingly focused on existential risks from artificial agents, money has been lavished on AI safety organizations working to address those risks.
Many examples come to mind, such as the generous philosophy fellowships offered by the Center for AI Safety and $100k in prizes for a single conference symposium. I am, however, somewhat loath to target these initiatives, because they represent a step in an important direction: sustained engagement with serious academic researchers leading to publication in leading academic journals or conference proceedings.
I’d like to focus instead on Redwood Research, an EA-backed AI safety lab that describes its activities as follows:
We think that powerful, significantly superhuman machine intelligence is more likely than not to be created this century. If current machine learning techniques were scaled up to this level, we think they would by default produce systems that are deceptive or manipulative, and that no solid plans are known for how to avoid this. We are doing research that we hope will help align these future systems with human interests. We are especially interested in practical projects that are motivated by theoretical arguments for how the techniques we develop might successfully scale to the superhuman regime.
Redwood Research was the target of a recent critique on the EA Forum. The critique begins by noting that Redwood Research has received a large volume of grant funding from effective altruists:
Redwood has received just over $21 million dollars in funding that we are aware of, for their own operations (2/3, or $14 million) and running Constellation (1/3, or $7 million). Redwood received $20 million from Open Philanthropy (OP) (grant 1 & 2) and $1.27 million from the Survival and Flourishing Fund. They also were granted (but never received) $6.6 million from FTX Future Fund.
For context, GiveWell estimates that this much money could save perhaps four thousand lives today. What are effective altruists getting for their money? One complaint raised in the critique is that Redwood seems to produce surprisingly few research outputs of a standard form. The report continues:
Only two Redwood papers have been accepted into conferences: “Adversarial training for high-stakes reliability” (NeurIPS 2022) and “Interpretability in the wild” (ICLR 2023). We have heard that Redwood is planning to submit more papers in the next year though it seems like a lower priority than other projects.
To put this in context: in many fields, a single scholar at a research university is expected to meet or even greatly exceed this output in a single year. Redwood would no doubt protest that it has produced a number of other research outputs, but has chosen to publish them on internet fora and in other nontraditional venues. I will leave the cost-effectiveness of this strategy to my readers’ judgment.
Another complaint raised is that Redwood’s salaries are quite high compared to academic salaries in the field:
As a non-academic lab, its staff salaries are about 2-3x as much as academic labs, but low relative to for-profit labs in SF. A junior-mid level research engineer at Redwood would have a salary in the range of $150,000-$250,000 per year as compared to an academic researcher in the Bay, who would earn around $40,000-50,000 per year.
It is worth asking why researchers who produce significantly fewer standard research outputs than comparable academics need to be paid so much more.
Some readers may be struck by the relatively low academic salary figure. This figure is low because many Redwood researchers are quite junior. The critique explains:
Redwood is notable for hiring almost exclusively from the EA community and having few senior ML researchers. Redwood’s most experienced ML researcher spent 4 years working at OpenAI prior to joining Redwood. This is comparable experience to someone straight out of a PhD program, which is typically the minimum experience level of research scientists at most major AI labs. CTO Buck Shlegeris has 3 years of software engineering experience and a limited ML research background. He also worked as a researcher at MIRI for two years, but MIRI’s focus is quite distinct from contemporary ML. CEO Nate Thomas has a Physics PhD and published some papers on ML during his PhD. Redwood previously employed an individual with an ML PhD, but he recently left. Jacob Steinhardt (Assistant Professor at UC Berkeley) and Paul Christiano (CEO at ARC) have significant experience but are involved only in a part-time advisory capacity.
Why is this a problem? Here I am entirely in agreement with the authors of this criticism, who continue:
Prima facie, the lack of experienced ML researchers at any ML research org is a cause for concern. We struggle to think of research organizations that have produced substantial results without strong senior leadership with ML experience (see our notes on Redwood’s team above). Redwood leadership does not seem to be attempting to address this gap. Instead, they have terminated some of their more experienced ML research staff.
Let’s take stock. The worry is that because effective altruists have (a) become increasingly concerned with AI safety, and (b) found themselves flush with cash, they have become willing to spend lavishly on AI safety without demanding strong returns on investment.
If the discussion above is on the right track, effective altruists have spent at least $20 million on a research lab that struggles to attract and retain even early- to mid-career researchers; that has produced a grand total of two published academic papers; and that pays its junior staff substantially more than comparable academic staff at leading universities in the area, most of whom are far more productive in research outputs of a standard form.
Is this effective spending? I suppose that depends on your views about the likelihood of existential risk from artificial agents and the ability of relatively nontraditional research institutes to make worthwhile progress on the problem. But I think even readers sympathetic to AI safety research might have grounds to question whether money is being spent in proportion to the quality and quantity of outputs received. Recall the opportunity cost: there is broad agreement that $20 million in funding could save thousands of lives today if invested in leading short-termist causes. Is Redwood worth the equivalent of thousands of lives?
6. Conclusion
As corporations, individuals, or foundations become wealthier, they begin spending more lavishly than they did before. This is so banal an observation as to be scarcely worth making, save for the fact that effective altruists have become very wealthy in a very short amount of time. This gives us good reason to suspect that an influx of wealthy donors may have driven excessive spending in the past, and could well be driving excessive spending today.
This suspicion should cause us to think carefully about the cost-effectiveness of many different projects, including areas such as community building and the purchase of event space. But it may also have special relevance for longtermism.
If we want to understand how the effective altruist movement took such a sharp turn from the days of rigorous cost-effectiveness estimates, randomized controlled trials, and short-termist giving towards speculative longtermist projects that are increasingly unhinged from standard forms of cost-effectiveness analysis, scientific evidence, and academic debate, we could do worse than to begin with the observation that longtermist projects became fundable as effective altruists acquired so much money that they felt pressure to look beyond traditional short-termist causes.
Does this mean that longtermist projects are necessarily wasteful or unjustified? That would be too hasty. But I think it is no accident that many recent funding cuts have come to longtermist causes, or that longtermists were among the primary beneficiaries of previous excesses, including the FTX Foundation. Now might be a good moment to pause and reflect on whether longtermist spending is really as demonstrably cost-effective as it has been taken to be, or whether much of the past spending, based on limited evidence and cost-effectiveness analysis, may have been a byproduct of a culture of philanthropic excess and overspending.