Billionaire philanthropy: (Part 6: From efficiency to extravagance)

I have said nothing of the splendor of the buildings and of the pomp connected with some of the grand foundations. It would be perhaps to value very favorably the utility of these objects if we estimated them at one-hundredth part of the whole cost.

Turgot, “Foundation”

1. Introduction

This is Part 6 of my series on billionaire philanthropy. 

Part 1 introduced the series. Part 2 looked at the uneasy relationship between billionaire philanthropy and democracy. Part 3 examined the project of patient philanthropy, which seeks to create billionaire foundations by holding money over a long period of time.

Part 4 looked at philanthropic motivations. And Part 5 looked at sources of philanthropic wealth.

One thing that is often remarked about wealthy foundations is that they have a tendency to spend lavishly. This tendency fits uncomfortably with effective altruism’s emphasis on efficient spending. If five thousand dollars can save a life today, then a million-dollar project had better be as valuable as saving hundreds of lives right now.
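
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The $5,000-per-life figure is the rough GiveWell-style benchmark invoked above; the million-dollar project is purely illustrative, not any actual grant.

```python
# Back-of-the-envelope opportunity-cost conversion (illustrative figures only).
COST_PER_LIFE_SAVED = 5_000  # rough GiveWell-style benchmark in USD, as cited above


def lives_foregone(project_cost_usd: float) -> float:
    """How many lives the same money could save via top short-termist charities."""
    return project_cost_usd / COST_PER_LIFE_SAVED


# A hypothetical $1 million project forgoes roughly 200 lives' worth of donations.
print(f"$1,000,000 buys the equivalent of roughly {lives_foregone(1_000_000):.0f} lives")
```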

Today, I want to look at some places where effective altruists may have lost their focus on efficiency and spent more lavishly than the principles of effective altruism would seem to justify.

2. The era of extravagance: How are we going to spend all of this money?

In the early days, effective altruists were ruthlessly efficient. Causes and charities had to be evaluated through rigorous, in-depth cost-effectiveness analyses before they could be funded, and even then only the best were listed as worthy of receiving donations.

Over time, effective altruists attracted a substantial influx of funding – peak estimates in 2021 hovered in the range of $40 billion in pledged funding. As funding continued to pour in, effective altruists lowered their standards. Increasingly, the name of the game was not frugality or efficiency, but rather extravagance and largess.

Reflecting on the failure of large philanthropic foundations such as the Gates Foundation to spend money faster than they make it, effective altruists increasingly called for investment in megaprojects and massively scalable projects – anything at all to absorb the influx of capital. Longtermist funds sometimes accepted more than 50% of the grant applications submitted to them, and seemingly every smart twenty-something effective altruist found themselves with a grant to fund research or entrepreneurial work.

Signs of lofty spending trickled even into more modest endeavors. In late 2021, FTX announced a series of fellowships (‘going to the Bahamas’, as it would come to be known in effective altruist circles) which provided all-expenses-paid travel and accommodation, often in luxury hotels, for effective altruists to visit the Bahamas and conduct work related to effective altruism. Calls for high salaries abounded, and effective altruists were increasingly reminded that there is no need to sacrifice a good lifestyle and a six-figure salary in order to do good effectively. The FTX Foundation began handing out over $100m in grants with what many observers would consider to be rather less than the usual level of vetting, deliberation, accounting and due process. There was even a $1.5m prize for arguments that could move the foundation’s views about existential risk from artificial agents.

One may well suspect that the bar for funding was considerably lowered during this period, such that projects which would not previously have been funded were now eagerly funded due to the excess of funds. In fact, we do not have to go on suspicion here. Many major effective altruist grantmakers have told us that the bar was lowered.

3. Raising the bar

After the collapse of FTX, effective altruists lost many of their pledged donations. As a result, leading organizations tightened the bar for how valuable projects would have to be before they were funded.

Tightening of the funding bar after a loss of funding provides good evidence that the bar had been loosened by the previous excess of funding, and gives us reasonably precise estimates of how much the bar was loosened. Keep in mind that effective altruists still have billions of dollars in pledged assets, so that the present figures may not tell the full story about how the ready availability of funding has moved the bar.

Open Philanthropy reports this year that only about half of its grants over the past 18 months would meet its revised bar for funding:

So about 40-70% of the grantmaking we did over the last 18 months would’ve qualified for funding under the new bar. I think 55% would be a reasonable point estimate.

Benjamin Todd, a co-founder of 80,000 Hours, gives a similar figure:

The value of additional money to EA-supported object level causes has gone up by about 2x since the summer, and the funding bar has also gone up 2x. This means projects that seemed marginal previously shouldn’t get funded.

The manager of the general longtermism team at Rethink Priorities describes similar trends in qualitative terms:

Recent changes in the EA funding landscape should probably have large ripple effects on our team’s strategy. I’m primarily thinking about a) the large drop in tech stocks across the board since the beginning of this year and b) the FTX crash. While I have not evaluated this rigorously, between the two events, I expect total EA funds since the beginning of this year to have roughly halved twice (a 4x decline).

And in particular, as the money evaporated, enthusiasm for previous megaprojects began to erode. The passage continues:

Relative to my previous estimate when I decided on team strategy roughly in Jan-Feb of this year, this obviously means that megaproject implementations should be a lower priority for EA (except maybe for a few of the most robust or highest impact projects). In turn, this puts a relative damper on excitement for further research into megaprojects, as well as capital-intensive pilots. In recent months, we’ve also received some negative feedback from other sources for unrelated reasons on the “megaproject” framing, and have internally shifted somewhat away from focusing on researching capital-intensive projects before the FTX crash.

These reflections are good to see. It appears that, to some degree, a dose of funding reality has helped many leading effective altruists to see the importance of thoughtful and cost-effective spending. I am not sure whether the movement has passed the point of wasteful spending, but it may be helpful to pause and reflect on some instances where previous spending seemed more extravagant than strictly necessary.

4. Wytham Abbey

In 2021, the Center for Effective Altruism purchased Wytham Abbey, which has variously been described as a large manor house or a small castle, for the princely sum of 15 million pounds. The usual suspects were not happy.

But this time, skepticism ran a bit deeper. Purchasing a manor house in the name of effectiveness is not a good look. Several community members raised concerns (see here, here, here), and others protested the $3.5 million FTX-backed purchase of a chateau in Hostačov by the European Summer Program in Rationality.

The press was equally uncomplimentary. Forbes ran an unflattering piece on the chateau in Hostačov, and even the New Yorker got in on the action:

Last year, the Centre for Effective Altruism bought Wytham Abbey, a palatial estate near Oxford, built in 1480. Money, which no longer seemed an object, was increasingly being reinvested in the community itself.

Others were not happy either. Sam Atis writes:

I’m not saying this was definitely a bad idea, but the justifications often seemed a bit sloppy. Take Geoffrey Miller’s comment on the EA Forum: “I’ve been to about 70 conferences in my academic career, and I’ve noticed that the aesthetics, antiquity, and uniqueness of the venue can have a significant effect on the seriousness with which people take ideas and conversations, and the creativity of their thinking. And, of course, it’s much easier to attract great talent and busy thinkers to attractive venues. Moreover, I suspect that meeting in a building constructed in 1480 might help promote long-termism and multi-century thinking.” And perhaps he’s right. But then again, it does sound awfully convenient that the very best use of EA funds was to buy a lovely castle that lots of EAs would be able to spend a lot of time in. And are we really sure that staying in a lovely building is so good for productivity and attracting talent? 

The originator of the Wytham Abbey project, Owen Cotton-Barratt, said some important words in defense of the purchase:

I’ve personally been very impressed by specialist conference centres. When I was doing my PhD, I think the best workshops I went to were at Oberwolfach, a mathematics research centre funded by the German government. Later I went to an extremely productive workshop on ethical issues in measuring the global burden of disease at the Brocher Foundation. Talking to other researchers, including in other fields, I don’t think my impression was an outlier. Having an immersive environment which was more about exploring new ideas than showing off results was just very good for intellectual progress. In theory this would be possible without specialist venues, but researchers want to spend time thinking about ideas not event logistics. Having a venue which makes itself available to experts hosting events avoids this issue.

That’s fair enough. But at the same time, Cotton-Barratt seems to have opted for one of the more expensive options on the market. He continues:

We wanted to be close to Oxford for easy access to the intellectual communities there … We looked at a lot of properties online, and visited the three properties we found for sale with 20+ bedrooms within about 50 minutes of Oxford. These were all “country houses”, which are commonly repurposed as event venues in England. The other two were cheaper (one ~£6M and one ~£9M at the end of a competitive process; compared to a purchase price for Wytham of a bit under £15M) but needed significantly more work before they were usable, which would have added large expense (running into the millions) and delay (likely years). (And renovation expense isn’t obviously recoverable if one sells — it depends on how much the buyers want the same things from the property as you do.)

Was this a reasonable purchase? Perhaps, but it was probably not the most cost-effective option. Even Cotton-Barratt seems to waver on this front:

We had various calculations about costings, which made it look somewhere between “moderately money-saving” and “mildly money-spending” vs renting venues for events that would happen anyway, depending on various assumptions e.g. about usage that we couldn’t get great data on before running the experiment. The main case for the project was not a cost-saving one, but that if it was a success it could generate many more valuable workshops than would otherwise exist.

That’s fair enough. But we should be honest about what happened here. Wytham Abbey was purchased at the height of a billionaire-backed spending spree in which the bar for funding had been progressively lowered, and considerations such as construction delays or venue quality were now allowed to swamp millions of pounds in potential savings, an amount which GiveWell estimates could save hundreds or even thousands of lives.
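
To see the scale of the trade-off implied by the figures Cotton-Barratt reports, here is a rough sketch under stated assumptions: the property prices come from the passage above, while the exchange rate and the $5,000-per-life benchmark are assumptions for illustration, and the sketch ignores the renovation costs of the cheaper properties and the eventual resale value of Wytham, both of which would shrink the effective premium.

```python
# Rough opportunity-cost sketch for the Wytham Abbey purchase (illustrative assumptions).
GBP_TO_USD = 1.25            # assumed exchange rate
COST_PER_LIFE_SAVED = 5_000  # rough GiveWell-style benchmark in USD

wytham_price_gbp = 15_000_000  # ~£15M purchase price, per Cotton-Barratt's figures
cheaper_alternatives_gbp = {"option A": 6_000_000, "option B": 9_000_000}

for name, price_gbp in cheaper_alternatives_gbp.items():
    premium_usd = (wytham_price_gbp - price_gbp) * GBP_TO_USD
    lives = premium_usd / COST_PER_LIFE_SAVED
    print(f"Premium over {name}: ~{lives:,.0f} lives at the benchmark rate")

# With these assumptions the premium is on the order of 1,500 to 2,250 lives,
# before accounting for renovation costs or the eventual resale of the property.
```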

5. Redwood Research

As effective altruism becomes increasingly focused on existential risks from artificial agents, money has been lavished on AI safety organizations working to address those risks.

Many examples come to mind, such as the generous philosophy fellowships offered by the Center for AI Safety and $100k in prizes for a single conference symposium. I am, however, somewhat loath to target these initiatives, because they represent a step in an important direction: sustained engagement with serious academic researchers leading to publication in leading academic journals or conference proceedings.

I’d like to focus instead on Redwood Research. Redwood Research is an EA-backed AI safety lab, which describes its activities as follows:

We think that powerful, significantly superhuman machine intelligence is more likely than not to be created this century. If current machine learning techniques were scaled up to this level, we think they would by default produce systems that are deceptive or manipulative, and that no solid plans are known for how to avoid this. We are doing research that we hope will help align these future systems with human interests. We are especially interested in practical projects that are motivated by theoretical arguments for how the techniques we develop might successfully scale to the superhuman regime.

Redwood Research was the target of a recent critique on the EA Forum. The critique begins by noting that Redwood Research has received a large volume of grant funding from effective altruists:

Redwood has received just over $21 million dollars in funding that we are aware of, for their own operations (2/3, or $14 million) and running Constellation (1/3, or $7 million). Redwood received $20 million from Open Philanthropy (OP) (grant 1 & 2) and $1.27 million from the Survival and Flourishing Fund. They also were granted (but never received) $6.6 million from FTX Future Fund.

For context, GiveWell estimates that this much money could save perhaps four thousand lives today. What are effective altruists getting for their money? One complaint raised by this discussion is that Redwood seems to be producing surprisingly few research outputs of a standard form. The report continues:

Only two Redwood papers have been accepted into conferences: “Adversarial training for high-stakes reliability” (NeurIPS 2022) and “Interpretability in the wild” (ICLR 2023). We have heard that Redwood is planning to submit more papers in the next year though it seems like a lower priority than other projects.

To contextualize this output, in many fields a single scholar at a research university is expected to meet or even greatly exceed this output in a single year. Redwood would no doubt protest that it has produced a number of other research outputs, but has chosen to publish these outputs on internet fora and in other nontraditional venues. I will leave the cost-effectiveness of this strategy to my readers’ judgment.
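
For a rough sense of the comparison being drawn here, consider a sketch under stated assumptions: the ~$14 million operations figure and the two conference papers come from the critique quoted above, while the academic budget and output numbers are hypothetical placeholders rather than data about any particular group.

```python
# Illustrative cost-per-paper comparison (the academic figures are hypothetical).
redwood_operations_usd = 14_000_000  # Redwood's own operations, per the quoted critique
redwood_conference_papers = 2        # accepted conference papers, per the quoted critique

# Hypothetical comparison point: a well-funded academic group (assumed numbers).
academic_budget_per_year_usd = 1_000_000
academic_papers_per_year = 5

print(f"Redwood: ~${redwood_operations_usd / redwood_conference_papers:,.0f} per published paper")
print(f"Hypothetical academic group: ~${academic_budget_per_year_usd / academic_papers_per_year:,.0f} per paper")

# Under these assumptions the gap per standard research output is roughly 35x,
# though Redwood's nontraditional outputs (forum posts, internal reports) are not counted here.
```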

Another complaint raised is that Redwood’s salaries are quite high compared to academic salaries in the field:

As a non-academic lab, its staff salaries are about 2-3x as much as academic labs, but low relative to for-profit labs in SF. A junior-mid level research engineer at Redwood would have a salary in the range of $150,000-$250,000 per year as compared to an academic researcher in the Bay, who would earn around $40,000-50,000 per year.

It is worth asking why so much needs to be paid for researchers who are producing significantly fewer standard research outputs than comparable academics produce.

Some readers may be struck by the relatively low academic salary figure. This figure is low because many Redwood researchers are quite junior, so the relevant academic comparison class is also quite junior. The critique explains:

Redwood is notable for hiring almost exclusively from the EA community and having few senior ML researchers. Redwood’s most experienced ML researcher spent 4 years working at OpenAI prior to joining Redwood. This is comparable experience to someone straight out of a PhD program, which is typically the minimum experience level of research scientists at most major AI labs. CTO Buck Shlegeris has 3 years of software engineering experience and a limited ML research background. He also worked as a researcher at MIRI for two years, but MIRI’s focus is quite distinct from contemporary ML. CEO Nate Thomas has a Physics PhD and published some papers on ML during his PhD. Redwood previously employed an individual with an ML PhD, but he recently left. Jacob Steinhardt (Assistant Professor at UC Berkeley) and Paul Christiano (CEO at ARC) have significant experience but are involved only in a part-time advisory capacity.

Why is this a problem? Here I am entirely in agreement with the authors of this criticism, who continue:

Prima facie, the lack of experienced ML researchers at any ML research org is a cause for concern. We struggle to think of research organizations that have produced substantial results without strong senior leadership with ML experience (see our notes on Redwood’s team above). Redwood leadership does not seem to be attempting to address this gap. Instead, they have terminated some of their more experienced ML research staff.

Let’s take stock. The worry is that because effective altruists have (a) become increasingly concerned with AI safety, and (b) found themselves flush with cash, they have become willing to spend lavishly on AI safety without demanding strong returns on investment.

If the discussion above is on the right track, effective altruists have spent at least $20 million on a research lab that struggles to attract and retain even early-mid-career researchers; that has produced a grand total of two published academic papers; and that pays its junior staff substantially more than comparable academic staff at leading universities in the area, most of whom are substantially more successful at producing research outputs of a standard form.

Is this effective spending? I suppose that depends on your views about the likelihood of existential risk from artificial agents and the ability of relatively nontraditional research institutes to make worthwhile progress on the problem. But I think even readers sympathetic to AI safety research might have grounds to question whether money is being spent in proportion to the quality and quantity of outputs received. Recall the opportunity cost: there is broad agreement that $20 million in funding could save thousands of lives today if invested in leading short-termist causes. Is Redwood worth the equivalent of thousands of lives?

6. Conclusion

As corporations, individuals, or foundations become wealthier, they begin spending more lavishly than they did before. This is so banal an observation as to be scarcely worth making, save for the fact that effective altruists have recently become very wealthy in a short amount of time. This gives us good reason to suspect that an influx of wealthy donors may have driven excessive spending in the past, and could well be driving excessive spending today.

This suspicion should cause us to think carefully about the cost-effectiveness of many different projects, including areas such as community building and the purchase of event space. But it may also have special relevance for longtermism.

If we want to understand how the effective altruist movement took such a sharp turn from the days of rigorous cost-effectiveness estimates, randomized controlled trials, and short-termist giving towards speculative longtermist projects that are increasingly unhinged from standard forms of cost-effectiveness analysis, scientific evidence, and academic debates, we could do worse than to begin with the observation that longtermist projects became fundable as effective altruists acquired so much money that they found themselves pressured to look beyond traditional short-termist causes.

Does this mean that longtermist projects are necessarily wasteful or unjustified? That would be too hasty. But I think it is no accident that many recent funding cuts have come to longtermist causes, or that longtermists were among the primary beneficiaries of previous excesses, including the FTX Foundation. Now might be a good moment to pause and reflect on whether longtermist spending is really as demonstrably cost-effective as it has been taken to be, or whether much of the past spending, based on limited evidence and cost-effectiveness analysis, may have been a byproduct of a culture of philanthropic excess and overspending.

Comments

16 responses to “Billionaire philanthropy: (Part 6: From efficiency to extravagance)”

  1. John Halstead

    I am growing somewhat tired of these weak criticisms of the Wytham Abbey purchase. You are not going to get a good picture of the merits of purchasing Wytham Abbey or other properties if you treat it like a grant rather than what it is – an investment which makes positive returns. It’s not like the £15 million has been burned. Property prices in Oxford increase at 5-8% per year, and the investment can be realised when CEA tries to sell the property.

    This obvious point was pointed out several times in the EA Forum comments, and should be pointed out in your article. What you have done here is superficial and obviously flawed analysis to bolster your argument.

    1. David Thorstad

      My readers are aware that property can be sold, and that property values often rise. These facts are generally not accounted sufficient justification for the purchase of luxury property. It is usually thought that the purchase of luxury property needs to be supported by a specific business justification.

      1. John Halstead

        Lots of people in the comments on the Wytham Abbey post apparently did not realise this

    2. Richard Nerland

      Sounds like OP is secondly definitely investing in Malengo then.

  2. Charles Grünfeld

    There is another area where resources could be used more efficiently, even more so than in academia: the recruitment strategies in certain EA organizations seems to be heavily favoring women, regardless of the quality of their curriculum and/or intelligence.

    It is dificult not to see here the desire of some of EA members, mostly autistic men who struggle with intimate relationships, to meet and to interact with women who would otherwise reject them.

    While it might not be the most politically correct direction to look into, you should seriously investigate that

    1. David Thorstad

      I’ve approved this comment, which is in flagrant violation of the blog’s comment policy as well as of common decency and established fact, so that the willingness of this commentator to hijack an unrelated post to make life even harder for women within the effective altruist community than it already is will be plainly visible to all.

      Charles, kindly rethink your life.

      1. Evelyn

        Not to mention negatively stereotyping autistic people, who generally don’t intentionally take advantage of and harass women. (And for what it’s worth, the autistic community includes a lot of women and trans people.)

  3. River

    I’m going to respond specifically to the discussion of Redwood Research. Your argument is largely to compare Redwood to traditional academic research, and I think that is a mistake. Redwood’s goal is not to enjoy the life of the ivory tower. Redwood’s goal is quite explicitly to reduce existential risks from AI. Comparing it to academia only makes sense if you think academia makes some measurable contribution to reducing existential risks from AI. Why would you think that? My assumption is that it does not, and I don’t think you even try to explain why you think it does.

    The other thing you overlook, possibly because your training is in philosophy, which doesn’t have an analog, is that a lot of cutting edge science and technology research has always been done in industry, not academia, and industry has always paid better than academia. A generation or two ago the best stuff was coming out of Bell Labs. Currently, in the field of AI, the best stuff is coming from three organizations: Google’s Deep Mind, Anthropic, and OpenAI. If you want to make a fair comparison between Redwood and similar employers, those three organizations are the ones to compare to. None of them are academic labs. All of them pay like other industry jobs. So in terms of pay, Redwood is entirely normal. If anything, it probably underpays its staff. The other comparison you make, that Redwood is significantly overweight on more junior people, is probably true, but your point would be much stronger if you compared Redwood to the three big AI labs rather than academia.

    1. David Thorstad

      Thanks River!

      A good deal of AI-related research in both academic and non-academic contexts is directed at making artificial systems safer and more suitable for human use. It is to be hoped that interventions which make feasible systems safer in a variety of realistic contexts will also help us learn to make downstream systems safe in many contexts. Academic researchers are loath to over-promise, and I would be remiss if I claimed that most research currently being conducted will have a significant impact on far-future, low-likelihood threats. While I hope that future generations, if they do face existential threats from artificial agents, will learn from our work, they may be to some extent on their own.

      Effective altruists often stress the need to evaluate organizations on the basis of rigorous cost-effectiveness analysis. When evaluating research organizations, this means evaluating those organizations based on the quality, quantity and reach of the research that they disseminate. For these purposes, it is entirely germane to ask how many high-quality research outputs have been produced for a $20 million investment in Redwood Research, and to compare those outputs to the outputs produced by other research groups. It is also germane to ask whether researchers have the experience, education and seniority to perform well in their roles, and whether the organization is capable of attracting and retaining high-quality senior talent.

      It is certainly true that in many fields, reputable non-academic research groups produce high-quality outputs and play a significant role in advancing the field. This is especially true in resource-intensive fields such as artificial intelligence or pharmaceutical research, in which academic researchers don’t always have the resources to compete with better-funded non-academic groups. Bell Labs, as you mention, is quite famous and enjoys an excellent reputation, as do several of the AI research organizations that you mention.

      I would be happy to evaluate organizations such as Redwood Research by comparing them to research groups such as Bell Labs. Do you think that Redwood Research would stack up favorably against Bell Labs on measures such as quality and quantity of research outputs, researcher qualifications, or reputation in the broader scholarly community? I suspect that Bell Labs might come out looking better in this comparison.

      As someone who recently faced the choice of whether to stay in academia, I am keenly aware that academic salaries are often lower than non-academic salaries. However, in conducting cost-effectiveness analyses, it is important to consider all feasible options. If non-academic researchers are more expensive to hire than academic researchers are, then we should demand comparatively more and better research outputs from them in order to justify continued funding. If, by contrast, they significantly underperform available (and cheaper) junior academic researchers, then funding should be shifted to higher-performing researchers.

  4. Jason

    I appreciate this post and generally agree that there has been an unfortunate loss of fiscal restraint in certain segments of EA.

    Whenever there is discussion of salaries, I think it’s important to separate out the claim that money isn’t being spent very effectively from a potential claim that rank-and-file EAs are getting fat at the EA feeding trough. (I don’t read your post as making the latter claim, but it’s always vaguely in the background whenever the issue of salaries comes up. Thus, I think it should be differentiated.)

    Even at Redwood, it sounds like the average employee is taking a pay cut over a job they could have gotten at a for-profit lab. (EA hiring is rather competitive, so the assumption that the employee could have gotten a higher-paying job elsewhere seems fairly safe.) Plus the potential financial upsides at the for-profit labs are probably much better (possible equity down the road, more transferrable job skills to other high-paying work, etc.)

    From the perspective of deciding whether people at Redwood are getting paid an inappropriate wage, I don’t put much stock by default in the fact that academic labs are paying little more than San Francisco minimum wage (which goes to $18/hour as of next month). It’s clear the academic labs are able to attract people to work for far, far less than their market rate. This could suggest that appropriate talent would be willing to work for Redwood for less than it is paying, or might not.

    The “higher salaries” link is less clear — it references $90K/year in San Francisco for community builders, but that sounded like a grant amount (so the equivalent salary from an employer may be less). Average starting salary for a US college grad is about $55K, but doubtless higher in San Fran and for people with the qualifications of the median EA hire. Limited advancement options, long hours, unclear transferrable skills to other kinds of work, poor job security. My guess is that the average occupant of these positions could have done better for themselves financially.

    It’s definitely true that EA orgs pay better than the median non-profit. But I don’t think most non-profits would claim that they had made a conscious decision that significantly below-market salaries were best for efficiency reasons; I think most would say that they are underfunded and pay less than they would like because there isn’t enough money to go around.

    Orthodox EA thought suggests that lowballing people on salaries is actually counterproductive for effectiveness, because hiring the right person for the job is often much better than hiring a merely good candidate. I find that very hard to test . . . but it doesn’t seem implausible to me. Under most circumstances, I’d expect for-profit companies not to pay their employees more than was utility-maximizing for the company’s purposes. If they could pay their employees half as much and get almost the same productivity, they likely would. And I don’t have any reason to think the non-profit sector has a radically different wage/productivity curve than the private sector. Because non-profits can offer compensation in warm fuzzies that for-profits generally cannot, it’s likely many non-profits can be competitive with for-profits for the same level of talent while paying somewhat less . . . but how much less?

    In the end, it’s hard for me to assess what salaries should be offered based on public information alone.

    1. David Thorstad

      Thanks, as always, Jason!

      Sorry for the very late reply. It’s been a really busy week at work.

      I think you’re absolutely right to emphasize the cost of living in the Bay Area, which is very high, and constrains what it is feasible to pay unless you want to start bleeding employees or dramatically lowering productivity. As even the critique I quoted agreed, you’re also right to say that while Redwood pays more than competing academic labs, it pays less than competing non-academic labs. That’s important, and that’s a good sign.

      It might be appropriate to frame discussions about wages as cross-organizational conversations, rather than intra-organizational conversations about efficiency. If the conversation is intra-organizational (what should Redwood pay?), the salaries might look fairly reasonable. But if the conversation is cross-organizational (should we fund Redwood, or another cause instead?), they might look a bit less reasonable. For example, suppose it’s true that comparable Bay-area academic researchers produce more and better papers for less money, and that a research center could be established within a leading university (say, Berkeley) instead of funding Redwood. It gets harder to say that we should fund Redwood instead of that research center, if it’s true that Redwood employees make more and produce less.

      You’re definitely right to emphasize that the conversation about salaries should not, for the most part, be about hurling accusations that people are fattening themselves at the feeding trough. Most EAs are altruists, and indeed are more altruistic than I am, so I really wouldn’t want to make too much of these complaints. For the most part, my concerns are the ones I stated above: high salaries raise the productivity bar for investments to be cost-effective.

      At the same time, there are some isolated (but important) places where concerns about fattening at the feeding trough begin to emerge. Some of the more questionable luxury property expenses (hotels in the Bahamas, Wytham Abbey) do disproportionately benefit EAs, and I think we have all witnessed some private conversations about high salaries that looked a bit self-serving. I don’t know that I’d want to say there is anything atypically self-serving about EA’s behavior here – indeed, it might be less self-serving than average – but EAs shouldn’t think themselves immune from any such pressures, or from capture by less altruistic rent-seekers.

      1. Jason

        Some of this is touching on the decline in transparency about funding decisions. When the bulk of your money comes from billionaires, you don’t have to justify your decisions to those outside the inner circle. When the money is coming in fast, you may not feel the need to justify it very much at all.

        Although grantmaking transparency has costs and is not always the best approach, it also reduces the risk of unreasonable expenses. It’s not *implausible* to me that some of these grants were reasonable when made. Without grantmaking transparency, we’re left to wonder why these decisions were made. With appropriate transparency, we can identify the cruxes between the grantmaker’s analysis and our own. We can also assess whether the cruxes appear to be a result of plausible difference of opinion or something else.

        Maybe Part 7 should be “from transparency to . . . not”?

  5. Evelyn

    To me, complaining about higher pay at EA orgs feels out of touch, given that nonprofit employees have long been systematically underpaid and overworked. Nonprofits traditionally attract people who are willing to accept worse pay and benefits in order to do work that’s more aligned with their values. But this leads to systematically higher burnout, turnover, and loss of institutional knowledge at nonprofits, and predictably, nonprofits often lose top talent to for-profit companies that are competing for the same pool of workers and can afford to pay more. So I think it’s a great thing that EA orgs like Redwood are able to pay their employees more, even if they’re doing less than their academic counterparts. Redwood (and industry research labs before it) are simply bidding up the wages of researchers to be closer to the amount they deserve for their labor.

    There’s an SSIR article from 2004 documenting the problem of nonprofit employee pay: https://ssir.org/articles/entry/the_real_salary_scandal

    1. David Thorstad

      Thanks Evelyn!

      I’m definitely aware of the problems of overwork and underpay at nonprofits. We academics are notoriously overworked and underpaid, and the fact that we work for nonprofits is one of many explanations for this. In general, I’m quite on board with your suggestion that pay equity would be, other things equal, a good thing.

      In general, there are at least two important considerations bearing on salaries: fairness and efficiency. It is probably right to say that fairness supports paying people what they are worth in an open market. However, given that the sums at issue could save several lives each year (per employee), I suspect most effective altruists would think this fairness justification is too weak to justify the amount of extra spending on salaries.

      When we turn to efficiency, high salaries at nonacademic research orgs may raise two types of problems. The first compares spending across organizations: if academic researchers are producing more and higher-quality research outputs for less money than nonacademic researchers are, then this will tend to tell in favor of directing funding away from less-efficient nonacademic research labs to more efficient academic research groups. There are plenty of academic researchers who would be happy to run a world-class research center on less than $21 million.

      The second challenge is that if the same workers could be retained for a lower salary, then the salary difference could be reallocated to other effective causes (including, again, saving several lives per employee each year). Unless we make quite strong cost-effectiveness judgments on which making a high-salaried employee slightly happier with their total compensation is more important than saving several lives each year, this will tend to tell in favor of keeping salaries lower even if we are going to fund nonacademic research organizations.

      I must say, the timing of calls for high salaries is a bit suspicious. It’s well known that this happened in the “what are we going to do with all of this money?” era when effective altruists found themselves flush with cash and uncertain what to spend it on. That doesn’t demonstrably prove that high salaries are wasteful spending, but it doesn’t look great either.

  6. Henry Norreys

    I find the stated reasons for the Wytham Abbey purchase hard to take at face value. The building is not a ‘castle’, nor is it ‘palatial’. It is a medium-size manor house. It has no large rooms and is completely unsuitable as a conference centre. What it might be suitable for is accommodation for the EA leadership when attending Oxford events or taking holidays.

    1. David Thorstad

      Thanks Henry!

      These are important points.

      To be fair, the building might be more suitable for small- to medium-sized conferences that rely on extensive small-group discussions and/or have multiple sessions. It might also be more suitable for conferences requiring a bit more of an encompassing/isolated conference atmosphere, as opposed to one where people mostly take off for their hotels every night.

      For example, Wytham Abbey hosted a conference on Pluralisms in Existential Risk Studies that led to a consensus group statement on the need for pluralism within the field. Given the very diverse group of attendees and the need to ensure that discussions were constructive and involved listening/dialogue rather than polemic, Wytham Abbey probably did help with this.

      While I am not intimately familiar with the usage history of Wytham Abbey, I do not have any information to indicate that the majority usage of the building is currently to accommodate EA leadership when attending Oxford events or taking holidays.

      One way in which your point about venue suitability might be sharpened is that small- to medium-sized conferences are expensive, so the venue characteristics that you mentioned may tend to suggest that hosting conferences in Wytham Abbey will be fairly expensive on a per-attendee basis.
