FTX/CEA - show us your numbers!

By Jack Lewars @ 2022-04-18T12:05 (+159)

Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.

It seemed at EAG that discussions focussed on two continuums:

Neartermist <---> Longtermist

Frugal spending <---> Ambitious spending

(The labels for the second one are debatable but I'm casually aiming for ones that won't offend either camp.)

Finding common ground on the first has been an ongoing project for years.

The second is much more recent, and it seems like more transparency could really help to bring people on opposite sides closer together.

Accordingly: could FTX and CEA please publish the Back Of The Envelope Calculations (BOTECs) behind their recent grants and community building spending?

(Or, if there is no BOTEC and it's more "this seems plausibly good and we have enough money to throw spaghetti at the wall", please say that clearly and publicly.)

This would help in several ways:

  1. for sceptics of some recent spending, it would illuminate the thinking behind it. It would also let the community kick the tires on the assumptions and see how plausible they are. This could change the minds of some sceptics; and potentially improve the BOTECs/thinking
  2. it should help combat misinformation. I heard several people misrepresent (in good faith) some grants, because there is not a clear public explanation of the grants' theory of change and expected value. A shared set of facts would be useful and improve debate
  3. it will set the stage for future evaluation of whether or not this thinking was accurate. Unless we make predictions about spending now, it'll be hard to see if we were well calibrated in our predictions later

Objection: this is time consuming, and this time is better spent making more grants/doing something else

Reply: possibly true, and maybe you could have a threshold below which you don't do this, but these things have a much higher than average chance of doing harm. Most mistaken grants will just fail. These grants carry reputational and epistemic risks to EA. The dominant theme of my discussions at EAG was some combination of anxiety and scorn about recent spending. If this is too time-consuming for the current FTX advisers, hire some staff (Open Phil has ~50 for a similar grant pot and believes it'll expand to ~100).

Objection: why drag CEA into this?

[EDIT: I missed an update on this last week and now the stakes seem much lower - but thanks to Jessica and Max for engaging with this productively anyway: https://forum.effectivealtruism.org/posts/xTWhXX9HJfKmvpQZi/cea-is-discontinuing-its-focus-university-programming]

Reply: anecdata, and I could be persuaded that this was a mistake. Several students, all of whom asked not to be named because of the risk of repercussions, expressed something between anxiety and scorn about the money their own student groups had been sent. One said they told CEA they didn't need any money and were sent $5k anyway and told to spend it on dinners. (Someone from CEA please jump in if this is just false, or extremely unlikely, or similar - I do realise I'm publishing anonymous hearsay.) It'd be good to know how CEA is thinking about spending wisely as they are very rapidly increasing their spending on EA Groups (potentially to ~$50m/year).

Sidenote: I think we have massively taken Open Phil for granted, who are exceptionally transparent and thoughtful about their grant process. Well done them.


jessica_mccurdy @ 2022-04-18T16:58 (+120)

Hi Jack,

Just a quick response on the CEA groups team’s end.

We are processing many small grants and other forms of support for community building (CB) and we do not have the capacity to publish BOTECs on all of them.

However, I can give some brief heuristics that we use in the decision-making.

Institutions like Facebook, McKinsey, and Goldman spend ~$1 million per school per year at the institutions they recruit from trying to pull students into lucrative careers that probably at best have a neutral impact on the world. We would love for these students to instead focus on solving the world’s biggest and most important problems.

Based on the current amount available in EA, its projected growth, and the value of getting people working in EA careers, we currently think that spending at least as much as McKinsey does on recruiting pencils out in expected value terms over the course of a student’s career. There are other factors to consider here (i.e. double-counting some expenses) that mean we actually spend significantly less than this. However, as Thomas said - even small chances that dinners could have an effect on career changes make them seem like effective uses of money. (We do have a fair amount of evidence that dinners do in fact have positive effects on groups.)
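As a minimal sketch of how this kind of pencilling-out might look (every number below is an illustrative assumption, not a CEA figure):

```python
# Illustrative BOTEC: does McKinsey-level campus spending pencil out?
# All inputs are assumptions for this sketch, not CEA's actual figures.

spend_per_school_per_year = 1_000_000   # the ~$1M/school/year benchmark cited above
counterfactual_recruits = 0.5           # assumed counterfactual high-impact recruits per school per year
value_per_recruit = 10_000_000          # assumed lifetime value of one counterfactual EA career

expected_value = counterfactual_recruits * value_per_recruit
print(f"Expected value: ${expected_value:,.0f} per school per year")         # $5,000,000
print(f"Ratio to spend: {expected_value / spend_per_school_per_year:.0f}x")  # 5x
# The conclusion is very sensitive to counterfactual_recruits and
# value_per_recruit; halving both brings the ratio close to break-even.
```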

As for your comment on funding student groups, we haven’t sent money to any group that has not asked for it. It is plausible that one of us encouraged them to ask for more, since we do think it is a good use of money and would like groups to think ambitiously. We have a list of common group expenses with some tips at the bottom (including considerations on optics).

Given the current landscape, we think missing out on great people and great opportunities is a huge loss. This is especially true if you think there are heavy tails in the amount of impact individuals have. We have thought a lot about our funding guidelines and suggestions, and feel comfortable with our current status, though we are constantly reviewing and updating as the landscape changes.

We appreciate your concern and are always eager for feedback. If you (or others) want to expand on this post with a more in-depth, comprehensive version of this feedback, we’d be open to responding to this in more depth as well.

(The below is copied from a comment by Max Dalton further down; I am adding it here for visibility.)

"By the way, we are not planning to spend $50m on groups outreach in the near future. Our groups budget is $5.4m this year. 

Also note that our focus university program is passing to Open Philanthropy."

Lucas Lewit-Mendes @ 2022-04-19T12:45 (+42)

Hi Jessica, 

Thanks for outlining your reasoning here, and I'm really excited about the progress EA groups are making around the world. 

I could easily be missing something here, but why are we comparing the value of CEA's community building grants to the value of McKinsey etc.?

Isn't the relevant comparison CEA's community building grants vs other EA spending, for example GiveWell's marginally funded programs (around 5x the cost-effectiveness of cash transfers)? 

If CEA is getting funding from non-EA sources, however, this query would be irrelevant. 

Looking forward to hearing your thoughts :) 

Nathan_Barnard @ 2022-04-20T16:56 (+22)

I'm obviously not speaking for Jessica here, but I think the reason the comparison is relevant is that the high spend by Goldman etc. suggests that spending a lot on recruitment at unis is effective.

If this is the case, which I think is also supported by the success of well-funded groups with full- or part-time organisers, and that EA is in an adversarial relationship with these large firms, which I think is largely true, then it makes sense for EA to spend similar amounts of money trying to attract students.

The relevant comparison is then between the value of the marginal student recruited and malaria nets etc.

Lucas Lewit-Mendes @ 2022-04-22T10:46 (+3)

Thanks Nathan, that would make a lot of sense, and motivates the conversation about whether CEA can realistically attract as many people through advertising as Goldman etc.

I guess the question is then whether: 

a) Goldman's activities are actually effective at attracting students; and

b) This is a relevant baseline prior for the types of activities that local EA groups undertake with CEA's funding (e.g. dinners for EA scholars students)

Larks @ 2022-04-18T21:09 (+38)

Just a quick response on the CEA groups team’s end.

...

Institutions like Facebook, McKinsey, and Goldman spend ~$1 million per school per year at the institutions they recruit from trying to pull students into lucrative careers that probably at best have a neutral impact on the world.

I'm surprised to see CEA making such a strong claim. I think we should have strong priors against this stance, and I don't think I've seen CEA publish conclusive evidence in the opposite direction.

Firstly, note that these three companies come from very different sectors of the economy and do very different things. 

Secondly, even if you assign high credence to the problems with these firms, it seems like there is a fair bit of uncertainty in each case, and you are proposing a quite harsh upper bound - 'probably at best neutral'.

Thirdly, these are all (broadly) free market firms, which exist only because they are able to persuade people to continue using their services. It's always possible that they are systematically mistaken, and that CEA really does understand social network advertising, management consulting, trading and banking better than these customers... but I think our prior should be a little more modest than this. Usually when people want to buy something it is because they want that thing and think it will be useful for them.

Finally, for each of these firms there are in fact a bunch of concrete benefits they provide. Rarely do I see these explicitly weighed in the calculus against the problems:

  • Facebook allows people to keep in touch with friends and relatives, to share their thoughts and news about their lives, and meet like-minded new friends. Certainly I have personally made many new friends over facebook, and engaged in many good discussions. It also allows advertisers to show their products to the people who are most likely to appreciate them, saving others from having their time wasted with irrelevant ads.
  • McKinsey provides advice and allows for the diffusion of best practices from leading firms to others in the economy. They can also help management overcome internal veto players and other opposition to change by helping supply credibility to decisions. For some types of consulting (though a little different to what McKinsey mainly does) we even have RCTs showing that they improve productive efficiency.
  • Goldman's trading arm provides a wide range of services to market participants, like research, prime brokerage and market making, that are necessary to help keep markets efficient. They also provide investment banking services, allowing companies and governments to raise money to finance projects, and retail banking, giving ordinary people higher interest rates than they'd get from their legacy banks. 

It's possible that there has been some explicit analysis of these firms to support your very strong statement. I searched on the forum for 'McKinsey' to try to find it, but at least the first page or so of results were generally positive references - e.g. people quoting their work on climate change, or positively referencing how they would address a problem. 80k does have an old article with some cursory analysis of the harms of finance, but the analysis is seriously flawed, and it doesn't cover Management Consulting or Social Networks at all.

nonn @ 2022-04-19T00:34 (+60)

Curious if you disagree with Jessica's key claim, which is "McKinsey << EA for impact"? I agree Jessica is overstating the case for "McKinsey <= 0", but seems like best-case for McKinsey is still order(s) of magnitude less impact than EA.

Subpoints:

  • Current market incentives don't address large risk-externalities well, or appropriately weight the well-being of very poor people, animals, or the entire future.
  • McKinsey for earn-to-learn/give could theoretically be justified, but that doesn't contradict Jessica's point of spending money to get EAs
  • Most students expect a justification when anyone charitable spends significant amounts of money on movement building, and 'competing with McKinsey' reads favorably

Agree we should usually avoid saying poorly-justified things when it's not a necessary feature of the argument, as it could turn off smart people who would otherwise agree.

jessica_mccurdy @ 2022-04-19T12:50 (+46)

Sorry, I was trying to get a quick response to this post and I made a stronger claim than I intended. I was trying to say that I think that EA careers are doing much more good than the ones mentioned on average and so spending money is a good bet here. I wasn’t intending to make a definitive judgment about the overall social impact of those other careers, though I know my wording suggests that. I also generally want to note that this element was a personal claim and not necessarily a CEA endorsed one. 

Charles He @ 2022-04-20T09:51 (+2)

This was a great comment and thoughtful reply and the top comment was great too.

Looking at the other threads generated from the top comment, it looks like tiny turns of phrase in that top comment produced (unreasonably) large amounts of discussion.

I think we all learned a valuable lesson about the importance of clarity and precision when commenting on the EA forum.

Jeff_Kaufman @ 2022-04-22T19:39 (+7)

FYI I would have upvoted this if not for the final paragraph

MichaelStJules @ 2022-04-19T04:09 (+42)

Thirdly, these are all (broadly) free market firms, which exist only because they are able to persuade people to continue using their services. It's always possible that they are systematically mistaken, and that CEA really does understand social network advertising, management consulting, trading and banking better than these customers... but I think our prior should be a little more modest than this. Usually when people want to buy something it is because they want that thing and think it will be useful for them.

I consider this to be a pretty weak argument, so it doesn't contribute much to my priors, which although weak (and so the particulars of a company matter much more), are probably centered near neutral on net welfare effects (in the short to medium term). I think a large share of goods people buy and things they do are harmful to themselves or others before even considering the loss of income/time as a result, or worse for them than the things they compete with. It's enough that I wouldn't have a prior strongly in favour of what profitable companies are doing being good for us. Here are reasons pushing towards neutral or negative impacts:

  1. A lot of goods are mostly for signaling, especially signaling wealth, which often has negative externalities and I'd guess little positive value for the individual. Brand name versions of things, clothing, jewelry, cars.
  2. Many modern ways people spend their time (enabled by profitable companies) have probably made us less active, more indoor-bound, less close with others, and less pursuant of meaning and meaningful goals, which may conflict with people's reflective preferences, as well as generally be bad for health, mental health and other measures of wellbeing. Basically a lot of the things we do on our computers and phones.
  3. Many things are stimulating and addictive, and companies are optimizing for want, not welfare. Want and welfare can come apart when we optimize for want. So we get cigarettes, addictive video games, junk food, algorithms optimizing for clicks when we'd be better off stepping away from the internet or doing more substantial things online, and lots of salt, sugar and calories in our foods.
  4. Media companies may optimize for revenue over accurate reporting. This includes outrage, playing to our fears, demonizing and polarization.
  5. Some companies make us want their stuff for fear of missing out or social pressure, so it can be closer to coercion than providing a valuable opportunity.
  6. I'd guess relatively little is spent on advertisement for things that we have good evidence for improving our welfare, because most of those things are hard to profit from: basic healthy foods, exercise (although there are certainly exercise products and programs that get advertised, but less so just gym memberships, joining sports leagues, running outside), just spending more time with your friends and family (in cheap ways, although travel and amusement parks are advertised), pursuing meaning or meaningful goals, helping others (even charity ads are relatively rare). So, advertisement seems to push us towards things that are worse for us than the alternatives we'd have gone with. To capitalize on the things that do make us substantially better off, companies may sell us more expensive versions that aren't (much) better or things to go with them that don't substantially help.
  7. I'd expect a lot of hedonic adaptation for many goods and services, but not mental health (almost by definition), physical pain and to a lesser extent general health and mobility, which are worsened by a lot of the things companies provide, directly or indirectly by competing with the things that are better for health.
  8. Company valuations don't usually substantially reflect their externalities, and shorting companies is riskier and more costly than buying and holding shares, so this biases markets towards positively valuing companies even if their overall value for the world is negative.
  9. There are often negative externalities on nonhuman animals in particular, although the overall effects on nonhuman animals may be complicated when you also consider the effects on wild animals.

I do think it's plausible McKinsey and Goldman have done and do more good than harm for humans in the short term, based on the arguments you give, but I don't have a strong view either way. It could depend largely on whether raising people's consumption levels makes them better off overall (and how much) in the places where people are most affected by these companies. Measures of well-being do seem to positively correlate with income/wealth/consumption at the individual level, and I'd guess also at the aggregate level for developing countries, but I'd guess not for developed countries, or at best weakly so. There are negative externalities for increasing an individual's income on others' life satisfaction, although it's possible a large share is due to rescaling, not actually thinking your life is worse absolutely than otherwise. See:

  1. Haushofer, J., Reisinger, J., & Shapiro, J. (2019). Is your gain my pain? Effects of relative income and inequality on psychological well-being.
    1. Based on GiveDirectly in Kenya. They had multiple measures of wellbeing, but negative effects were only observed for life satisfaction for non-recipient households of cash transfers in the same village. See Table A5.
  2. This table from Veenhoven, R. (2019). The Origins of Happiness: The Science of Well-Being over the Life Course., reproduced in this post.
  3. This graph, reproduced in this post.
  4. Other writing on the Easterlin Paradox.

 

Some companies may also contribute to relative inequality or even counterfactually make the median or poor person absolutely poorer through their political activities.

 

The categories of things I'm optimistic about for human welfare in the short to medium term are:

  1. Things that save us time, so we can spend more time on things that actually make us better off.
  2. Things that improve or protect our health (including mental health).
  3. Things that make us (feel) safer/more secure (physically, financially, etc.).
  4. Things that make us more confident, but without substantially net negative externalities (negative externalities may come from positional goods, costly signaling, peer pressure).
  5. Things that help us make better decisions, without important negative effects.

I'm neutral to optimistic about these (possibly neutral because they just replace cheaper versions of themselves that would be just as good):

  1. In-person activities with friends/family.
  2. Things for hobbies or projects.
  3. Restaurants.

I'm about neutral and pretty uncertain about screen-based entertainment (TV, movies, video games), and recreational substances that aren't extremely addictive or harmful (alcohol, marijuana).

I'm pessimistic about:

  1. Social media.
  2. Status-signaling goods/positional goods/luxuries.
  3. Processed foods.
  4. Cigarettes.
Guy Raveh @ 2022-04-20T08:06 (+4)

There are also a lot of externalities that act at least equally on humans, like carbon emissions, promotion of ethnic violence, or erosion of privacy. Those are all examples off the top of my head for Facebook specifically.

I upvoted Larks' comment, but like you I think this particular argument, "people buy from these firms", is weak.

Charles He @ 2022-04-18T23:28 (+26)

Ok. Larks’ response seems correct.

But surely, the spirit of the original comment is correct too.

No matter which worldview you have, the value of a top leader moving into EA is overwhelmingly larger than the social value of the same leader “rowing” in these companies.

Also, at the risk of getting into politics (and really your standard internet argument) gesturing at “free market” is really complicated. You don’t need to take the view of Matt Stoller or something to notice that the benefits of these companies can be provided by other actors. The success of these companies and their resources that allow recruitment with 7 figure campus centres probably has a root source different than pure social value.

The implication that this statement requires CEA to have a strong model of these companies seems unfair. Several senior EAs, who we won’t consider activists or ideological, have deep experiences in these or similar companies. They have opinions that are consistent with the parent comment’s statement. (Being too explicit here has downsides.)

calebp @ 2022-04-19T08:23 (+17)

I think the main crux here is that even if Jessica/CEA agrees that the sign of the impact is positive, it still falls in the neutral bracket because on the CEA worldview the impact is roughly negligible relative to the programs that they are excited about. 

If you disagree with this, maybe you agree with the weaker claim of the impact being comparatively negligible weighted by the resources these companies consume? (There's some kind of nuance to 'consuming resources' in profitable companies, but I guess this is more gesturing at a 'leaving value on the table' framing, as opposed to just asking whether the organisation is locally net negative or positive.)

MichaelStJules @ 2022-04-19T00:27 (+10)

Do you think people are better off overall than otherwise because of Facebook (and social media generally)? You may have made important connections on Facebook, but many people probably invest less in each connection and have shallower relationships because of social media, and my guess is that mental health is generally worse because of social media (I think there was an RCT on getting people to quit social media, and I wouldn't be surprised if there were multiple studies. I don't have them offhand). I'd guess social media is basically addictive for a lot of people, so people often aren't making well-informed decisions about how much to use, and it's easy for it to be net negative despite widespread use. People joining social media pressures others to join, too, making it more costly to not be on it, so FB creates a problem (induces fear of missing out) and offers a solution to it. Cancel culture, bubbles/echo chambers, the spread of misinformation, and polarization may also be aggravated by social media.

That being said, maybe FB was really important for the growth of the EA community. I mostly got into EA through FB initially, although it's not where I was first exposed to EA. If we think the EA community is important enough, then this plausibly dominates. And, of course, it's where Open Phil's funding came from, but that seems to be historical luck, not really anything special about Facebook, except the growth of its market cap.

On the other hand, FB accelerated the development of AI capabilities, e.g. PyTorch was primarily built by FB. But maybe we should also consider this to be only weakly related to FB's role in social media, and more related to the fact that it's just a large tech company.

There are also multiple counterfactuals we could consider: no Facebook + people spend less time on social media, and no Facebook + people spend about as much time on social media (possibly on one similar to FB, or whatever other options there are now). In the first case, I think it's hard to make a balanced argument for FB being robustly net positive. In the second case, the impact is closer to 0, from either direction, and it's harder to evaluate its sign. Then there's the counterfactual impact of FB getting a more productive hire, or one who is otherwise more valued by FB.

I think McKinsey and Goldman would have other firms step into their spaces if they weren't around.

Linch @ 2022-04-19T15:42 (+8)

I don't think this is persuasive. I think most actions people take either increase or decrease x-risk, and you should start with a ~50% prior for which side of neutrality a specific action is on (though not clearly true; see discussion here). I agree there's some commonsensical notions that economic growth is good, including for the LT future, but I personally find arguments in the opposite direction to be slightly stronger. Your own comment to an earlier post is one interesting item on the list of arguments I'd muster in that direction.

Larks @ 2022-05-03T02:19 (+2)

Ahh, interesting argument! I wasn't thinking about the argument that these firms might (e.g.) slightly accelerate economic growth, which might then cause an increase in x-risk (if safety is not equivalently accelerated). In general I feel sufficiently unclear about such considerations - like maybe literally 50:50 equipoise is a reasonable prior - that I am loath to let them overwhelm a more concrete short-term impact story in our cost-benefit analysis, in the absence of a clear causal link to a long run impact in the opposite direction, as you suggest in the article.

In this case I think my argument still goes through, because the claim I'm objecting to is so strong - that there is in some sense a >50% probability that every reasonable scenario has all three firms being negative.

Jack Lewars @ 2022-04-18T17:25 (+18)

Thanks Jessica, this is helpful, and I really appreciate the speed at which you replied.

A couple of things that might be quick to answer and also helpful:

  • is there an expected value of someone working in an EA career that CEA uses? The rationale above suggests something like 'we want to spend as much as top tier employers' but presumably this relates to an expected value of attracting top talent that would otherwise work at those firms?
  • I agree that it's not feasible to produce, let alone publish, a BOTEC on every payout. However, is there a bar that you're aiming to exceed for the manager of a group to agree to a spending request? Or a threshold where you'd want more consideration about granting funding? I'm sure there are examples of things you wouldn't fund, or would see as very expensive and would have some rule-of-thumb for agreeing to (off-site residential retreats might be one). Or is it more 'this seems within the range of things that might help, and we haven't spent >$1m on this school yet?'
  • is there any counterfactual discounting? Obviously a lot of very talented people work in EA and/or have left jobs at the employers you mention to work in EA. So what's the thinking on how this spending will improve the talent in EA?
MaxDalton @ 2022-04-19T09:35 (+16)
  • Some non-CEA people have made estimates that we sometimes refer to. I'm not sure I have permission to share them, but they suggest significant value. Based in part on these figures, I think that the value of a counterfactual high-performing EA is in the tens of millions of dollars.
    • I think we should also expect higher willingness to pay than private firms because of the general money/people balance in the community, and because we care about their whole career (whereas BCG will in expectation only get about 4 years of their career (number made up)) - see the sketch after this list.
  • I'll let Jessica answer with more specifics if she wants to, but we're currently spending much less than $1m/school.
  • Yes, it's obviously important that figures are counterfactually discounted. But groups seem to have historically been counterfactually important to people (see OP's survey), and we think it's likely that they will be in the future too. Given the high value of additional top people, I think spending like this still looks pretty good.
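A minimal sketch of the career-span point above, with all numbers assumed for illustration (the firm's spend per counterfactual recruit is hypothetical):

```python
# Sketch: a private firm captures ~4 years of a recruit's career, while the
# EA community cares about the whole ~40-year career, so EA's break-even
# willingness to pay per counterfactual recruit can be ~10x the firm's.

firm_years_captured = 4              # "about 4 years" (made-up figure, per the comment)
career_years = 40                    # assumed full career length
firm_spend_per_recruit = 1_000_000   # hypothetical firm spend per counterfactual hire

ea_willingness_to_pay = firm_spend_per_recruit * career_years / firm_years_captured
print(f"Implied EA willingness to pay per recruit: ${ea_willingness_to_pay:,.0f}")
# -> $10,000,000, the same ballpark as the "tens of millions" figure above
```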
jessica_mccurdy @ 2022-04-19T14:20 (+17)

Overall, CEA is planning to spend ~$1.5mil on uni group support in 2022 across ~75 campuses, which is a lot less than $1mil/campus. :)

calebp @ 2022-04-22T12:28 (+10)

Fwiw, I personally would be excited about CEA spending much more on this at their current level of certainty if there were ways to mitigate optics, community health, and tail risk issues.

Jack Lewars @ 2022-04-19T16:29 (+2)

Indeed :-) I had understood from this post (https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/) that this was the destination, though, so the current rate of spending would be less relevant than having good heuristics before we get to that scale.

I see from Max below, though, that Open Phil is assuming a lot of this spending, so sorry for throwing a grenade at CEA if you're not actually going to be behind a really 'move the needle' amount of campus spending.

Sunny1 @ 2022-04-20T15:52 (+11)

Just as a casual observation, I would much rather hire someone who had done a couple of years at McKinsey than someone coming straight out of undergrad with no work experience. So I'm not sure that diverting talented EAs from McKinsey (or similar) is necessarily best in the long run for expected impact. No EA organization can compete with the ability of McK to train up a new hire with a wide array of generally useful skills in a short amount of time. 

Nathan_Barnard @ 2022-04-20T17:02 (+9)

I think the key point here is that it is unusually easy to recruit EAs at uni compared to when they're at McKinsey. I think it's unclear a) whether going to McKinsey is among the best things for a student to do and b) how much less likely it is that an EA student goes to McKinsey. I think it's pretty unlikely going to McKinsey is the best thing to do, but I also think that EA student groups have a relatively small effect on how often students go into elite corporate jobs (a bad thing from my perspective), at least in software engineering.

DavidNash @ 2022-04-20T18:02 (+14)

I'm not sure how clear it is that it's much better for people to hear about EA at university, especially given there is a lot more outreach and onboarding at the university level than for professionals.

Guy Raveh @ 2022-04-21T15:28 (+8)

Hi, thanks for your comment.

While it's reasonable not to be able to provide an impact estimate for every specific small grant, I think there are some other things that could increase transparency and accountability, for example:

  • Publishing your general reasoning and heuristics explicitly on the CEA website.
  • Publishing a list of grants, updated with some frequency.
  • Giving some statistics on which sums went to what type of activities - again, updated once in a while.
MaxRa @ 2022-04-18T18:02 (+5)

Institutions like Facebook, McKinsey, and Goldman spend ~$1 million per school per year at the institutions they recruit from trying to pull students into lucrative careers that probably at best have a neutral impact on the world.

That's really interesting to me because I'm currently thinking about potential recruitment efforts at CS departments for AI safety roles. I couldn't immediately find a source for the numbers you mention, do you remember where you got them from?

AndreaM @ 2022-04-18T20:36 (+9)

I also couldn't find much information on campus recruitment expenses for top firms. However, according to the US National Association of Colleges and Employers (NACE), in 2018 the average cost-per-hire from US universities was $6,110.

FAANG and other top tier employers are likely to spend much more than the average.

Charles He @ 2022-04-19T10:59 (+8)

For each of these companies, if you look at the publicly available website for their campus recruiting centre at one of the HYPS schools, and just look at the roster of public-facing “ambassadors”, who have significant skills and earning counterfactuals (so the fully burdened cost may be over $200K per head), it’s clear it’s a 7-figure budget for them once you include operations, physical offices, management and other oversight (which won’t appear on the P&L per se).

1 mil is the low end.

I can’t immediately pull up a link here as I am on mobile.

Holly @ 2022-04-19T15:57 (+105)

Good to see a post that loosely captures my own experience of EAG London and comes up with a concrete idea for something to do about the problem (if a little emotionally presented).

I don't have a strong view on the ideal level of transparency/communication here, but something I want to highlight is: Moving too slowly and cautiously is also a failure mode

In other words, I want to emphasise how important "this is time consuming, and this time is better spent making more grants/doing something else" can be. Moving fast and breaking things tends to lead to much more obvious, salient problems and so generally attracts a lot more criticism. On the other hand, "Ideally, they should have deployed faster" is not a headline. But if you're as consequentialist as the typical EA is, you should be ~equally worried about not spending money fast enough. Sometimes to help make this failure mode more salient, I imagine a group of chickens in a factory farm just sitting around in agony waiting for us all to get our act together (not the most relevant example in this case, but the idea is try to counteract the salience bias associated with the problems around moving fast). Maybe the best way for e.g. CEA to help these chickens overall is to invest more time reducing "reputational and epistemic risks to EA". Maybe it's to keep trying to get resources out the door according to their best judgements and accepting their predicted levels of failed grants, confused community members, and loss of potentially useful feedback that could come from more external scrutiny. It's not clear to me. But it seems like it could well be the latter. True, "these things have a much higher than average chance of doing harm", but there's also a lot more at stake if they move too slowly.

To be clear: This is not to say FTX/CEA are getting the balance right (and even if they broadly are, your suggestion for them to say something like "this seems plausibly good and we have enough money to throw spaghetti at the wall" still seems good to me). I just wanted to give more prominence to a consideration on the other side of the argument that seems to be relatively neglected in these discussions. So, à la your sidenote: Props to FTX for moving fast.

Michelle_Hutchinson @ 2022-04-20T13:06 (+18)

Thanks so much for this comment. I find it incredibly hard not to be unwarrantedly risk averse. It feels really tempting to focus on avoiding doing any harm, rather than actually helping people as much as I can. This is such an eloquent articulation of the urgency we face, and why we need to keep pushing ourselves to move faster. 

I think this is going to be useful for me to read periodically in the future - I'm going to bookmark it for myself.

MichaelDickens @ 2022-04-20T20:04 (+10)

A related thought: If an org is willing to delay spending (say) $500M/year due to reputational/epistemic concerns, then it should easily be willing to pay $50M to hire top PR experts to figure out the reputational effects of spending at different rates.

(I think delays in spending by big orgs are mostly due to uncertainty about where to donate, not about PR. But off the cuff, I suspect that EA orgs spend less than the optimal amount on strategic PR (as opposed to "un-strategic PR", e.g., doing whatever the CEO's gut says is best for PR).)

Jack Lewars @ 2022-04-19T18:19 (+8)

I like this.

I'm not sure I agree with you that I find it equally worrying as moving so fast that we break too many things, but it's a good point to raise. On a practical level, I partly wrote this because FTX is likely to have a lull after their first grant round where they could invest in transparency.

I also think a concern is what seems to be such an enormous double standard. The argument above could easily be used to justify spending aggressively in global health or animal welfare (where, notably, we have already done a serious, serious amount of research and found amazing donation options; and, as you point out, the need is acute and immediate). Instead, it seems like it might be 'don't spend money on anything below 5x GiveDirectly' in one area, and the spaghetti-wall approach in another.

Out of interest, did you read the post as emotional? I was aiming for brevity and directness but didn't/don't feel emotional about it. Kind of the opposite, actually - I feel like this could help to make us more factually aligned and less driven by emotional reactions to things that might seem like 'boondoggles'.

Holly @ 2022-04-19T20:17 (+17)

Yeah personally speaking, I don't have very developed views on when to go with Spaghetti-wall vs RCT, so feel free to ignore the following which is more of a personal story. I'd guess there's a bunch of 'Giving Now vs Giving Later' content lying around that's much more relevant.

I think I used to be a lot more RCT because:

  1. I was first motivated to take cost-effectiveness research seriously after hearing the Giving What We Can framing of "this data already exists, it's just that it's aimed at the health departments of LMICs rather than philanthropists" - that's some mad low-hanging fruit right there (OTOH I seem to remember a bunch of friends wrestling with whether to fund Animal Charity Evaluators or ACE's current best guesses - was existing cost-effectiveness research enough to go on yet?)
  2. I was basically a student trying to change the world with a bunch of other students - surely the grown-ups mostly know what they're doing and I should only expect to have better heuristics if there's a ton of evidence behind them
  3. My personality is very risk-averse

Over time, however:

  1. I became more longtermist and there's no GiveWell for longtermism
  2. We grew up, and basically the more I saw of the rest of the world the less faith I had in people generally being sensible and altruistic and having their **** together
  3. I recognised how much of my aversion to Spaghetti-wall is a personality thing [edit: maybe writing my undergrad dissertation on risk aversion in ethics made me acknowledge this more fully :P]
Holly @ 2022-04-19T20:23 (+11)

| Out of interest, did you read the post as emotional? I was aiming for brevity and directness

Ah, that might be it. I was reading the demanding/requesting tone ("show us your numbers!", "could FTX and CEA please publish" and "If this is too time-consuming...hire some staff" vs "Here's an idea/proposal") as emotional, but I can see how you were just going for brevity/directness, which I generally endorse (and have empathy for emotional writing FWIW, but generally don't feel like I should endorse it as such).

rossaokod @ 2022-04-19T10:00 (+96)

It's bugged me for a while that EA has ~13 years of community building efforts but (AFAIK) not much by way of "strong" evidence of the impact of various types of community building / outreach, in particular local/student groups. I'd like to see more by way of baking self-evaluation into the design of community building efforts, and think we'd be in a much better epistemic place if this had been at the forefront of efforts to professionalise community building 5+ years ago.

By "strong" I mean a serious attempt at causal evaluation using experimental or quasi-experimental methods - i.e. not necessarily RCTs where these aren't practical (though it would be great to see some of these where they are!), but some sort of "difference in difference" style analysis, or before-after comparisons. For example, how do groups' key performance stats (e.g. EA's 'produced', donors, money moved, people going on to EA jobs) compare in the year(s) before vs after getting a full/part time salaried group organiser? Possibly some of this already exists either privately or publicly and the relevant people know where to look (I haven't looked hard, sorry!). E.g. I remember GWWC putting together a fundraising prospectus in 2015 which estimated various counterfactual scenarios. Have there been serious self-evaluations since ? (Sincere apologies if I've missed them or could find them easily - this is a genuine question!)

In terms of what I'd like to see more of with respect to self-evaluation, and tentatively think we could have done better on this over the last 5+ years: 

Jonas Vollmer @ 2022-04-19T14:24 (+61)

I'd personally be pretty excited to see well-run analyses of this type, and would be excited for you or anyone who upvoted this to go for it. I think the reason why it hasn't happened is simply that it's always vastly easier to say that other people should do something than to actually do it yourself.

rossaokod @ 2022-04-20T09:26 (+17)

I completely agree that it is far easier to suggest an analysis than to execute one! I personally won't have the capacity to do this in the next 12-18 months, but would be happy to give feedback on a proposal and/or the research as it develops if someone else is willing and able to take up the mantle. 

I do think that this analysis is more likely to be done (and in a high quality way) if it was either done by, commissioned by, or executed with significant buy-in from CEA and other key stakeholders involved in community building and running local groups. This is partly a case of helping source data etc, but also gives important incentives for someone to do this research. If I had lots of free time over the next 6 months, I would only take this on if I was fairly confident that the people in charge of making decisions would value this research. One model would be for someone to write up a short proposal for the analysis and take it to the decision makers; another would be for the decision-makers to commission it (my guess is that this demand-driven approach is more likely to result in a well-funded, high quality study). 

To be clear, I massively appreciate the work that many, many people (at CEA and many other orgs) do and have done on community building and professionalising the running of groups (sorry if the tone of my original comment was implicitly critical). I think such work is very likely very valuable. I also think the hits-based model is the correct one as we ramp up spending and that not all expenditure should be thoroughly evaluated. But in cases where it seems very likely that we'll keep doing the same type of activity for many years and spend comparatively large resources on it (e.g. support for groups), it makes sense to bake self-evaluation into the design of programmes, to help improve their design in the future.

rossaokod @ 2022-04-20T09:36 (+4)

P.S. I've also just seen Joan's write-up of the Focus University groups in the comments below, which suggests that there is already some decent self-evaluation, experimentation and feedback loops happening as part of these programmes' designs. So it is very possible that there is a good amount of this going on that I (as a very casual observer) am just not aware of!

IanDavidMoss @ 2022-04-19T14:57 (+2)

Agreed! Note, however, that in the case of the FTX grants it will be pretty hard to do this analysis oneself without access to at the very least the list of funded projects, if not the full applications.

David_Moss @ 2022-04-19T13:48 (+22)

I also agree this would be extremely valuable. 

I think we would have had the capacity to do difference-in-difference analyses (or even simpler analyses of pre-post differences in groups with or without community building grants, full-time organisers etc.) if the outcome measures tracked in the EA Groups Survey were not changed across iterations and, especially, if we had run the EA Groups Survey more frequently (data has only been collected 3 times since 2017 and was not collected before we ran the first such survey in that year).

MichaelDickens @ 2022-04-20T20:26 (+10)

As a positive example, 80,000 Hours does relatively extensive impact evaluations. The most obvious limitation is that they have to guess whether any career changes are actually improvements, but I don't see how to fix that—determining the EV of even a single person's career is an extremely hard problem. IIRC they've done some quasi-experiments but I couldn't find them from quickly skimming their impact evaluations.

Jack Lewars @ 2022-04-19T16:20 (+5)

This would be great. It also closely aligns with what EA expects before and after giving large funding in most cause areas.

Ben Pace @ 2022-04-18T12:58 (+86)

Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.

I mean, sometimes you have reason to make titles into a simple demand, but I wish there were a less weaksauce justification than “because our standards here are no better than anywhere else”.

Ben Pace @ 2022-04-18T12:59 (+62)

To be clear I think this instance is a fairly okay request to make as a post title, but I don’t want the reasoning to imply anyone can do this for whatever reason they like.

Jack Lewars @ 2022-04-18T22:17 (+11)

Candidly, I'm a bit dismayed that the top voted comment on this post is about clickbait.

Ben Pace @ 2022-04-19T09:47 (+6)

Well, you don’t have to be any more, because now it’s Jessica McCurdy’s reply.

Jack Lewars @ 2022-04-19T16:24 (+3)

Indeed - and to be clear, I wasn't trying to suggest that you shouldn't have made the comment - just that it's very secondary to the substance of the post, and so I was hoping the meat of the discussion would provoke the most engagement.

Ben Pace @ 2022-04-19T17:05 (+2)

Yeah, pretty reasonable.

Jeff_Kaufman @ 2022-04-22T19:43 (+2)

Voting is biased toward comments that are easy to evaluate as correct/helpful/positive/valuable. With that in mind, I don't especially find this individual instance dismaying?

alexrjl @ 2022-04-19T10:39 (+66)


If this is too time-consuming for the current FTX advisers, hire some staff

Hiring is an extremely labour and time intensive process, especially if the position you're hiring for requires great judgement. I think responding to a concern about whether something is a good use of staff time with 'just hire more staff' is pretty poor form, and given the context of the rest of the post it wouldn't be unreasonable to respond to it with 'do you want to post a BOTEC comparing the cost of those extra hires you think we should make to the harms you're claiming?'

IanDavidMoss @ 2022-04-19T13:57 (+45)

The top-voted suggestion in FTX's call for megaproject ideas was to evaluate the impacts of FTX's own (and other EA) grantmaking. It's hard to conduct such an evaluation without, at some point, doing the kind of analysis Jack is calling for. I don't have  a strong opinion about whether it's better for FTX to hire in-house staff to do this analysis or have it be conducted externally (I think either is defensible), but either way, there's a strong demonstrated demand for it and it's hard to see how it happens without EA dollars being deployed to make it possible. So I don't think it's unreasonable at all for Jack to make this suggestion, even if it could have been worded a bit more politely.

Jack Lewars @ 2022-04-19T16:07 (+19)

That's right, and this was very casually phrased, so thanks for pulling me up on it. A better way of saying this would be: "if you're going to distribute billions of dollars in funding, in a way that is unusually capable of being harmful, but don't have the time to explain the reasoning behind that distribution, it's reasonable to ask you to hire people to do this for you (and hiring is almost certainly necessary for lots of other practical reasons)."

freedomandutility @ 2022-04-19T14:07 (+5)

I agree with you that it’s important to account for hiring being very expensive.

My view on more transparency is that its main benefit (which I don’t think OP mentions) is as a long-term safeguard to reduce poor but well intentioned reasoning, mistakes and nepotism around grant processes, and is likely to be worth hiring costs even if we don’t expect to identify ongoing harms.

In other words, I think the stronger case for EA grantmakers being more transparent is the potential for transparency to reduce future harms, rather than its potential to reveal possible ongoing harms.

Holly Morgan @ 2022-04-22T01:06 (+2)

Relevant comment from Sam Bankman-Fried in his recent 80,000 Hours podcast episode: "In terms of staffing, we try and run relatively lean. I think often people will try to hire their way out of a problem, and it doesn’t work as well as they’re hoping. I’m definitely nervous about that." (https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/#ftx-foundation-002022)

Peter Wildeford @ 2022-04-19T00:36 (+58)

One generic back-of-the-envelope calculation from me:

Assume that when you try to do EA outreach, you get a funnel of conversion rates.[1]

Thus we expect outreach to a particular person to produce ~0.002 EAs on average.

Now assume an EA has the same expected impact as a typical GWWC member, and assume a typical GWWC member donates ~$24K/yr for ~6 years, making the total value of an EA worth ~$126,000 in donations, discounting at 4%. I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.

Note that these numbers are pretty much made up[2] and each number ought to be refined with further research - something I'm working on and others should too. Also keep in mind that obviously these numbers will vary a lot based on the specific type of outreach being considered and so should be modified for modeling the specific thing being done. But hopefully this is a useful example.

But basically from this you get it being worth ~$252 to market effective altruism to a particular person and break even. So if a dinner markets EA to ten people that otherwise would not have been marketed to, it will be worth ~$2500 to run just that one dinner. So spending $5000 to run a bunch of dinners can make sense.
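As a sketch, that arithmetic can be reproduced directly from the stated assumptions (the only added assumption is that donations arrive at the end of each year for discounting purposes):

```python
# Reproducing the BOTEC above, using only the numbers stated in this comment.

p_becomes_ea = 0.002        # funnel output: ~0.002 EAs per person reached
annual_donation = 24_000    # typical GWWC member, per year
years = 6
discount_rate = 0.04

# Present value of the donation stream, discounted at 4% (end-of-year payments assumed)
value_of_one_ea = sum(annual_donation / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
print(f"Value of one EA: ${value_of_one_ea:,.0f}")  # ~$126,000

breakeven_per_person = p_becomes_ea * value_of_one_ea
print(f"Break-even outreach spend per person: ${breakeven_per_person:,.0f}")     # ~$252
print(f"Break-even for a ten-person dinner: ${10 * breakeven_per_person:,.0f}")  # ~$2,500
```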

Also note that of course EA marketing is not a single-touchpoint-and-then-done-forever system, so you will frequently be spending time/money on the same person multiple times. But this is hopefully made up for by the person becoming more likely to convert (both from self-selection and from the outreach).

Note: This is personal to just me, and does not reflect the views of Rethink Priorities or the Effective Altruism Infrastructure Fund or any other EA institution.


  1. According to me, using my intuition forecaster powers ↩︎

  2. Hopefully even though a lot of this is completely made up, it's useful as a scaffold/demonstration and eventually we can collect more data to try to refine these numbers. ↩︎

Fermi–Dirac Distribution @ 2022-04-19T17:10 (+30)

But basically from this you get it being worth ~$252 to market effective altruism to a particular person and break even. 

I don’t think that’s how it works. Your reasoning here is basically the same as “I value having Internet connection at $50,000/year, so it’s worth it for me to pay that much for it.” 

The flaw is that, taking the market price of a good/service as given, your willingness to pay for it only dictates whether you should get it, not how much you should pay for it. If you value people at a certain level of talent at $1M/career, that only means that, so long as it’s not impossible to recruit such talent for less than $1M, you should recruit it. But if you can recruit it for $100,000, whether you value it at $100,001 or $1M does not matter: you should pay $100,000, and no more. Foregoing consumer surplus has opportunity costs.

To put it more explicitly: suppose you value 1 EA with talent X at $1M. Suppose it is possible to recruit, in expectation, one such EA for $100,000. If you pay $1M/EA instead, the opportunity cost of doing so is 10 EAs for each person you recruit, so the expected value of the action is -9 EAs per recruit, and you are in no way breaking even.

Of course, the assumption I made in the previous paragraph, that both the value of an EA and the cost of recruiting one are constant, does not reflect reality: if we had a million EAs, the cost of an additional recruit would be higher and its value would be lower, if we hold other EA assets constant, and so the opportunity cost isn’t constant. But my main point, that you should pay no more than the market price for goods and services if you want to break even (taking into account time costs and everything), still stands.
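A minimal sketch of the opportunity-cost point, using the comment's numbers (the fixed budget is a hypothetical added for illustration):

```python
# Paying your full valuation rather than the market cost forgoes recruits.

value_per_ea = 1_000_000   # the most you'd be willing to pay for one recruit
market_cost = 100_000      # what it actually costs to recruit one, in expectation
budget = 1_000_000         # hypothetical fixed outreach budget

recruits_at_market_cost = budget / market_cost   # 10 recruits
recruits_at_valuation = budget / value_per_ea    # 1 recruit

print(f"Recruits if you pay market cost: {recruits_at_market_cost:.0f}")
print(f"Recruits if you pay your valuation: {recruits_at_valuation:.0f}")
# Paying your valuation forgoes 9 recruits per one obtained:
# the "-9 EAs per recruit" in the comment above.
```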

Peter Wildeford @ 2022-04-19T18:40 (+22)

I agree with what you are saying: yes, we ideally should rank-order all the possible ways to market EA and only take those that get the best (quality-adjusted) EAs per $ spent, regardless of our value of EAs - that is, we should maximize return on investment.

**However, in practice, as we do not yet have enough EA marketing opportunities to saturate our billions of dollars in potential marketing budget, it would be an easier decision procedure to simply fund every opportunity that meets some target ROI threshold and revise that ROI threshold over time as we learn more about our opportunities and budget.** We'd also ideally set ourselves up to learn by doing when engaging in this outreach work.

Jack Lewars @ 2022-04-20T08:00 (+4)

Absolutely. And so the questions are:

  • have we defined that ROI threshold?

  • what is it?

  • are we building ways to learn by doing into these programmes?

The discussions on this post suggest that it's at least plausible that the answers are 'no', 'anything that seems plausibly good' and 'no', which I think would be concerning for most people, irrespective of where you sit on the various debates/continuums within EA.

Peter Wildeford @ 2022-04-20T19:31 (+4)

This varies grantmaker-to-grantmaker but I personally try to get an ROI that is at least 10x better than donating the equivalent amount to AMF.

I'd really like to help programs build more learning by doing. That seems like a large gap worth addressing. Right now I find myself without enough capacity to do it, so hopefully someone else will do it, or I'll eventually figure out how to get myself or someone at Rethink Priorities to work on it (especially given that we've been hiring a lot more).

Jonas Vollmer @ 2022-04-19T14:19 (+22)

I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.

This still sounds like a strong understatement to me – it seems that some people will have vastly more impact. Quick example that gestures in this direction: assuming that there are 5000 EAs, Sam Bankman-Fried is donating $20 billion, and all other ~~1999~~ 4999 EAs have no impact whatsoever, the mean impact of EAs is $4 million, not $126k. That's a factor of 30x, so a framing like "likely vastly more valuable" would seem more appropriate to me.

Linch @ 2022-04-19T15:46 (+23)

One reason to be lower than this per recruited EA is that you might think that the people who need to be recruited are systematically less valuable on average than the people who don't need to be. Possibly not a huge adjustment in any case, but worth considering. 

Jonas Vollmer @ 2022-04-19T16:33 (+3)

Yeah I fully agree with this; that's partly why I wrote "gestures". Probably should have flagged it more explicitly from the beginning.

Linch @ 2022-04-19T15:35 (+3)

assuming that there are 5000 EAs, Sam Bankman-Fried is donating $20 billion, and all other 1999 EAs

Should be 4999

Jeff_Kaufman @ 2022-04-22T19:48 (+2)

assuming that there are 5000 EAs

I know this isn't your main point, but that's ~1/10 what I would have guessed. 5k is only 3x the people who attended EAG London this year.

Robert_Wiblin @ 2022-04-20T21:41 (+44)

My guess is this would reduce grant output a lot relative to how much I think anyone would learn (maybe it would cut grantmaking in half?), so personally I'd rather see them just push ahead and make a lot of grants, then review or write about just a handful of them from time to time.

MichaelStJules @ 2022-04-18T21:13 (+41)

I also wish all the EA Funds and Open Phil would do this/make their numbers more accessible.

MaxDalton @ 2022-04-19T09:35 (+35)

By the way, we are not planning to spend $50m on groups outreach in the near future. Our groups budget is $5.4m this year. 

Also note that our focus university program is passing to Open Philanthropy.

Jack Lewars @ 2022-04-19T16:02 (+3)

Hi Max - I took this from CEA's post here (https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/), which aims for campus centres at 17 schools controlling "a multi-million dollar budget within three years of starting", and which Alex HT suggested in the comments would top out at $3m/year. This suggested a range of $17m-$54m.

MaxDalton @ 2022-04-19T16:16 (+6)

Cool, I see where you got the figure from. But yeah, most of that work is passing to Open Philanthropy, so we don't plan to spend $50m/year.

Jack Lewars @ 2022-04-19T16:33 (+3)

Thanks - I missed that update, and wouldn't have written about CEA above if I had seen it, I think.

Benjamin_Todd @ 2022-04-29T15:03 (+22)

Just wanted to add that I did a rough cost-effectiveness estimate of the average of all past movement building efforts using the EA growth figures here. I found an average of 60:1 return for funding and 30:1 for labour. At equilibrium, anything above 1 is worth doing, so I expect that even if we 10x the level of investment, it would still be positive on average.
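As a purely illustrative check on the headroom claim - and assuming, much more crudely than the linked estimate does, that cost-effectiveness falls in proportion to scale:

```python
# Crude illustration only: assume returns fall linearly with scale
# (an assumption made here for illustration, not part of the linked estimate).
current_return = 60   # estimated $60 of value per $1 of movement-building funding
scale_up = 10
print(current_return / scale_up)  # 6.0 -> still above the break-even ratio of 1
```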

Thomas Kwa @ 2022-04-18T13:10 (+20)

I've done informal BOTECs and it seems like the current funding amounts are roughly correct, though we need to be careful with deploying this funding due to concerns like optics and epistemics. Regarding the example, spending $5k on EA group dinners is really not that much if it has even a 2% chance to cause one additional career change. This seems like a failure of communication, because funding dinners is either clearly good and students weren't doing the BOTEC, or it's bad due to some optics or other concerns that the students didn't communicate to CEA.

Jack Lewars @ 2022-04-18T17:07 (+7)

In the spirit of this post, maybe you could share these informal BOTECs?

'Here is a BOTEC' is going to help more than 'I've done a BOTEC and it checks out'.

(I appreciate the post isn't actually aimed at you)

Mauricio @ 2022-04-18T21:02 (+12)

That's fair - I'm not the earlier commenter but would suggest (as someone who's heard some of these conversations but isn't necessarily representative of others' thinking):

For dinners: Suppose offering to buy a $15 dinner for someone makes it 10% more likely that they'll go to a group dinner, and suppose that makes it 1% more likely that they'll have a very impactful career. Suppose that means counterfactually donating 10% of $100k for 40 years. Then on average the dinner costs $15 and yields $400.

For retreats: Suppose offering to subsidize a $400 flight makes someone 40% more likely to go to a retreat and that this makes them 5% more likely to have a very impactful career. Again suppose that means counterfactually donating 10% of $100k for 40 years. Then on average the flight costs $400 and yields $8,000.

(And expected returns are 100x higher than that under bolder assumptions about how much impact people will have. Although they're negative if optics costs are high enough.)
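Written out as a quick calculation, so the assumptions can be varied (every number is one of the guesses stated above, not a researched figure):

```python
# Dinner BOTEC, using the assumptions stated above (all rough guesses).
p_attend  = 0.10                    # $15 dinner -> 10% more likely to attend
p_career  = 0.01                    # attendance -> 1% more likely impactful career
donations = 0.10 * 100_000 * 40     # 10% of a $100k salary for 40 years = $400k
print(p_attend * p_career * donations)   # 400.0 -> ~$400 per $15 dinner

# Retreat BOTEC, same structure.
p_go       = 0.40                   # $400 flight subsidy -> 40% more likely to go
p_career_r = 0.05                   # attendance -> 5% more likely impactful career
print(p_go * p_career_r * donations)     # 8000.0 -> ~$8,000 per $400 flight
```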

Jack Lewars @ 2022-04-18T22:01 (+12)

Thanks - this is exactly what I think is useful to have out there, and ideally to refine over time.

My immediate reaction is that the % changes you are assigning look very generous. I doubt a $15 dinner makes someone 1% more likely to pursue an impactful career, and especially that a subsidised flight produces a 5% swing. I think these are likely orders of magnitude too high, especially when you consider that other places will also offer free dinners/retreats.

If a $400 investment in anything made someone 5% more likely to pursue an impactful career, that would be amazing.

But I guess what I'm really hoping is that CEA and FTX have exactly this sort of reasoning internally, with some moderate research into the assumptions, and could share that externally.

Mauricio @ 2022-04-18T22:36 (+7)

Thanks! Agree it's good to refine these and that these are very optimistic - I suspect the optimism is justified by the track record of these events. Anecdotally, it seems nontrivially common for early positive interactions to motivate new community members to continue/deepen their (social and/or motivational) engagement, and that seems to often lead to impactful career plan changes.

(I think there's steeply diminishing returns here--someone's first exposure to the community seems much more potentially impactful than later exposures. I tried to account for "how many participants will be having their first exposure" in the earlier estimate.)

In other words, we could (say) break down the ~1% estimate (which is already conditioned on counterfactual dinner attendance) into the following (ignoring benefits for people who are early on but not totally new):

  • 30% chance that this is their first exposure
  • conditional on the above, 10% chance that the experience kickstarts long/deep engagement
  • conditional on the above, 50% chance of an impactful career switch (although early exposures that aren't quite the first one also seem valuable)

If 1% is far too generous, which of the above factors are too high? (Maybe the second one?)

(Edited to add) And yup, I acknowledge this isn't the source you were looking for - hopefully still adds to the conversation.
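Multiplying the three factors through, as a sanity check on the ~1% figure (all three inputs are the guesses above):

```python
# Sanity check on the decomposition above; each factor is a stated guess.
p_first_exposure = 0.30  # chance the dinner is someone's first exposure
p_kickstart      = 0.10  # chance that exposure kickstarts long/deep engagement
p_switch         = 0.50  # chance deep engagement leads to an impactful career switch
print(p_first_exposure * p_kickstart * p_switch)  # 0.015 -> ~1.5%, near the ~1% estimate
```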

berglund @ 2022-04-18T14:04 (+4)

Regarding the example, spending $5k on EA group dinners is really not that much if it has even a 2% chance to cause one additional career change.

How much of the impact generated by the career change are you attributing to CEA spending here? I'm just wondering because counterfactuals run into the issue of double-counting (as discussed here). 

Thomas Kwa @ 2022-04-18T15:58 (+10)

Unsure, but probably more than 20% if the person wouldn't be found through other means. I think it's reasonable to say there are 3 parties (CEA, the group organizers, and the person), and none is replaceable, so they get 33% Shapley each. At a 2% chance of a career change, this would be a cost of $750k per career, which is still clearly good at top unis. The bigger issue is whether the career change is actually counterfactual, because often it's just a speedup.
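Spelling out that arithmetic, using only the numbers in the comment:

```python
# Cost per CEA-attributed career change under the assumptions above.
spend           = 5_000    # dinner budget
p_career_change = 0.02     # 2% chance of one additional career change
shapley_share   = 1 / 3    # equal credit to CEA, organizers, and the person

cost_per_change = spend / p_career_change          # $250k per raw career change
cost_attributed = cost_per_change / shapley_share  # ~$750k per CEA-credited career
print(cost_per_change, cost_attributed)            # 250000.0 750000.0
```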

Amalie Farestvedt @ 2022-04-18T16:32 (+4)

I do think you have to factor into that estimate the potential negative risk of spending too much, as some potential members might be turned off by what seems like inefficient use of money. I think this is especially crucial if you are in the process of explaining the EA principles, or when relating to members who are not yet committed to the movement.

Fermi–Dirac Distribution @ 2022-04-18T19:50 (+1)

Is $750k the market price for 1 expected career change from someone at a top school, excluding compensation costs? Alternatively, is there no cheaper way to cause such a career change? IMO, this is the important question here: if there is a cheaper way, then paying $750k has an opportunity cost of >1 career changes. 

Thomas Kwa @ 2022-04-18T20:42 (+3)

edit: misinterpreted what the comment above meant by "market price"

I think the market price is a bit higher than that. The mean impact from someone at a top school is worth over $750k/year, which means we should fund all interventions that produce a career change for $750k (unless they have large non-financial costs) since those have a <3 year payback period even if the students take a couple years to graduate or skill up.

In practice, dinners typically produce way more than 2% of a career change for $5k of dinners (33 dinners for 10 people at $15/serving). The situation at universities has non-monetary bottlenecks, like information transmission fidelity, qualified organizers, operational capacity, university regulations, etc., and most things that get you better use of those other resources and aren't totally extravagant are worth funding, unless they have a hidden cost like optics or attracting greedy people.
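The dinner count and the payback logic above, written out with the comment's own numbers:

```python
# Dinner budget check: 33 dinners for 10 people at $15/serving.
print(33 * 10 * 15)   # 4,950 -> roughly the $5k figure

# Payback logic: a $750k career change against $750k/year of assumed value
# pays back within a year of starting work, or under 3 years if the student
# takes a couple of years to graduate or skill up first.
value_per_year  = 750_000
cost_per_career = 750_000
print(cost_per_career / value_per_year)  # 1.0 year of work
```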

Fermi–Dirac Distribution @ 2022-04-18T22:32 (+30)

I think the market price is a bit higher than that.

Someone else in this thread found a report claiming that employers spend an average of ~$6,100 to hire someone at a US university. I also found this report saying that the average cost per hire in the United States is <$5,000, and ~$15k for an executive. At 1 career = 10 jobs, that's $150,000/career for executive-level talent, or $180,000/career adjusting for inflation since the report was released.

I'm not sure how well those numbers reflect reality (the $15k/executive number looks quite low), but it seems at least fairly plausible that the market price is substantially less than $750k/career. 
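The arithmetic behind the $150k and $180k figures (the ~1.2x inflation factor is inferred from the two numbers, not stated in the report):

```python
# Career-cost arithmetic from the cited reports.
cost_per_exec_hire = 15_000
jobs_per_career    = 10
cost_per_career    = cost_per_exec_hire * jobs_per_career
print(cost_per_career)        # 150,000
print(cost_per_career * 1.2)  # 180,000 -> implies a ~1.2x inflation adjustment
```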

The mean impact from someone at a top school is worth over $750k/year, which means we should fund all interventions that produce a career change for $750k (unless they have large non-financial costs) since those have a <2 year payback period.

This line of reasoning is precisely what I'm claiming to be misguided. Giving you a gallon of water to drink allows you to live at least two additional days (compared to you having no water), which at $750k of impact/year (~$2000/day) means, by your reasoning, that EA should fund all interventions that ensure you have 1 gallon of water for <=$4000, up to the amount you need to survive.

If water happened to be that expensive, that would be a worthwhile trade. But given the current market price of water (with the time cost of acquiring it included) being willing to pay anywhere near $4000/gallon is absurd. 

In general, if you value something at $x, and its market price is $y, x only matters for deciding whether you should pay for the thing or not, not for deciding how much you should pay for it. If x >= y, then you should pay $y, otherwise you should pay $0.
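That rule, as a one-line function (the prices in the examples are the illustrative water figures from above):

```python
# Valuation-vs-market-price rule: pay the market price y if your value x
# clears it; otherwise pay nothing. Prices below are illustrative.
def willingness_to_pay(x, y):
    return y if x >= y else 0

print(willingness_to_pay(x=4_000, y=2))  # 2 -> pay ~the market price of water, not $4,000
print(willingness_to_pay(x=1, y=2))      # 0 -> value below price: don't buy
```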

Thomas Kwa @ 2022-04-19T00:42 (+2)

It looks like I misunderstood a comment above. I meant "market price" as the rate at which CEA should currently trade between money and marginal careers, which is >$750k. I think you mean the average price at which other companies "in the market for talent" buy career changes, which is <$750k.

I think there isn't really a single price at which we can buy infinite talent. We should do activities as cost-effective as other recruiters, but these can only be scaled up to a limited extent before we run into other bottlenecks. The existence of a cheaper intervention doesn't mean we shouldn't fund a more expensive intervention once the cheaper one is exhausted. And we basically want an infinite amount of talent, so in theory the activities that buy career changes at prices between $150k and $750k are also worth funding.

I think we can agree that

  • different activities have different cost-effectiveness, some of them substantially cheaper than $750k/career
  • we can use a basically infinite amount of talent, and the supply curve for career changes slopes upwards
  • we shouldn't pay more than the market price for any intervention e.g. throw $100k at a university group for dinners when it produces the same effect as $5k spent on dinners
  • we should fund every activity that has a cost-effectiveness of better than $750k per career change (or whatever the true number is), unless we saturate our demand for talent and lower the marginal benefit of talent, or deplete much of our money and increase the marginal benefit of money
  • we are unlikely to saturate our demand for talent by throwing more money at EA groups because there are other bottlenecks
  • because most of the interventions are much cheaper than $750k/career change, our average cost will be much less than $750k/career change

Markus Amalthea Magnuson @ 2022-04-18T12:31 (+20)

Just a list of projects and organisations FTX has funded would be beneficial, and probably much less time-consuming to produce. Some of the things you mention could be deduced from that, and it would also help in evaluating current project ideas and how likely they are to get funding from FTX at some point.

Jack Lewars @ 2022-04-18T17:18 (+5)

True, and it seems like a necessary step on its own, but I'm wary of people 'deducing' too much. Right now, a lot of the anxiety seems to be coming from people trying to deduce what funders might be thinking; ideally, they'd tell people themselves.

calebp @ 2022-04-18T23:48 (+19)

I kind of like the general sentiment, but I'm a bit annoyed that it's just assumed that the burden of proof is so strongly on the funders.

Maybe you want to share your BOTEC first, particularly given the framing of the post is "I want to see the numbers because I'm concerned" as opposed to just curiosity?

Jack Lewars @ 2022-04-19T16:16 (+3)

I'm not sure why the burden wouldn't fall on the people distributing the funds? (Incidentally, I'm using this to mean that the funders could also hire external consultancies etc. to produce this.)

But, more to the point, I wrote this really hoping that both organisations would say "sure, here it is" and we could go from there. That might really have helped bring people together. (NB: I realise FTX haven't engaged with this yet.)

In many ways, if the outcome is that there isn't a clear/shared/approved expected value rationale being used internally to guide a given set of spending, that seems to validate some of the concerns that were expressed at EAG.

calebp @ 2022-04-21T08:15 (+11)

I think what I'm getting at is that burden of proof is generally an unhelpful framing, and a more helpful action you could take would be to communicate the model that makes you sceptical of their spending.

Hiring consultancies to do this seems like it's not going to go well unless it's Rethink Priorities or they have a lot of context, and on the margin I think it's reasonable for CEA to say no, they have better things to do.

I feel confused about the following, but I think that as someone who runs an EA org you could easily have reached out directly to CEA/FTX to ask this question (maybe you did; if so, apologies), and this action seems kind of like outing them more than being curious. I'm not necessarily against this (in fact I think it's helpful in lots of ways), but many forum users seem not to like these kinds of adversarial actions.

Jack Lewars @ 2022-04-22T09:19 (+1)

Like you, I'm fairly relaxed about asking people publicly to be transparent. Specifically in this context, though, someone from FTX said they would be open to doing this if the idea was popular, which prompted the post.

As a sidenote, I also think that MEL (monitoring, evaluation and learning) consultancies are adept at understanding context quickly and would be a good option (or something that EA could found itself - see Rossa's comment). My wife is an MEL consultant, which informs my view of this. But that's not to say they are necessarily the best option.

calebp @ 2022-04-22T12:30 (+1)

I as an individual would endorse someone hiring an MEL consultant to do this for the information value, and would also bet $100 that it wouldn't provide much value, because the analysis would be poor.

Terms to be worked out, of course - but if someone were interested in hiring the low-context consultant, I'd be interested in working those out.

calebp @ 2022-04-22T12:25 (+1)

Oh right, I didn't pick up on the point that FTX said they'd be open to this if the idea was popular. This resolves part of this for me (at least on the FTX side, as opposed to the CEA side).

calebp @ 2022-04-21T08:29 (+7)

Broken into a different comment so people can vote more clearly

In many ways, if the outcome is that there isn't a clear/shared/approved expected value rationale being used internally to guide a given set of spending, that seems to validate some of the concerns that were expressed at EAG.

I think there are likely different epistemic standards between cause areas, such that this is a pretty complicated question, and people underappreciate how much of a challenge this is for the EA movement.

freedomandutility @ 2022-04-19T08:20 (+2)

I think it makes sense to have the burden of proof mostly on the funders, given that they presumably have more info about all their activities. Having the burden set this way also has instrumental benefits: it encourages transparency, which could lead to useful critiques, and adds reputation-related incentives to use good reasoning and do a good job of judging which grants do and do not meet a cost-effectiveness bar.

Holly Morgan @ 2022-04-22T01:32 (+4)

Just noticed Sam Bankman-Fried's 80,000 Hours podcast episode where he sheds some light on his thinking in this regard.

I think the excerpt below is not far from the OP's request that "if there is no BOTEC and it's more 'this seems plausibly good and we have enough money to throw spaghetti at the wall', please say that clearly and publicly."

Sam:

I think that being really willing to give significant amounts is a real piece of this. Being willing to give 100 million and not needing anything like certainty for that. We’re not in a position where we’re like, “If you want this level of funding, you better effectively have proof that what you’re going to do is great.” We’re happy to give a lot with not that much evidence and not that much conviction — if we think it’s, in expectation, great. Maybe it’s worth doing more research, but maybe it’s just worth going for. I think that is something where it’s a different style, it’s a different brand. And we, I think in general, are pretty comfortable going out on a limb for what seems like the right thing to do.

Rob:

I guess you might bring a different cultural aspect here because you come from market trading, where you have to take a whole lot of risk and you’ve just got to be comfortable with that or there’s not going to be much out there for you. And also the very risk-taking attitude of going into entrepreneurship — like double-or-nothing all the time in terms of growing the business.

I’ve had a worry that’s been developing over the last year that the effective altruism community might be a bit too conservative about its giving at this point. Because many of us, including me, got our start when our style of giving was pretty cash-starved — it was pretty niche, and so we developed a frugal mindset, an “I’ve got to be careful” mindset.

And on top of that, to be honest, as a purely aesthetic matter, I like being careful and discerning, rather than moving fast and doing lots of stuff that I expect in the future is going to look foolish, or making a lot of bets that could make me look like an idiot down the road. My colleague, Benjamin Todd, estimated last year that there’s $46 billion committed to effective altruist–style philanthropy — of course that figure is flying around all the time, but it’s probably something similar now — and according to his estimates, that figure had been growing at 35% a year over the last six years. So increasingly, it’s been growing much faster than we’ve been able to disburse these funds to really valuable stuff.

So I guess me and other people might want to start thinking that maybe the big risk that we should be worried about is not about being too careless, but rather not giving enough to what look like questionable projects to us now — because the marginal project in 10 years’ time is going to be noticeably more mediocre or noticeably less promising. Or alternatively, we might all be dead from x-risk already because we missed the boat.

Sam:

Completely agree. That is roughly my instinct: that there are a lot of things that you have to go out on a limb for. I think it’s just the right thing to do, and that probably as a movement, we’ve been too conservative on that front. A lot of that is, as you said, coming from a place where there’s a lot less funding and where it made sense to be more conservative.

I also just think, as you said, most people don’t like taking risks. And especially, it’s often a really bad look to say you’re trying to do something great for the world and then you have no impact at all. I think that feels really demoralizing to a lot of people. Even if it was the right thing to do in expectation, it still feels really demoralizing. So I think that basically fighting against that instinct is the right thing to do, and trying to push us as a community to try ambitious things nonetheless.

Jack Lewars @ 2022-04-22T09:21 (+1)

Very interesting, thanks. I read this more as saying 'we need to be prepared to back unlikely but potentially impactful things', and acknowledging the uncertainty in longtermism, rather than saying 'we don't think expected value is a good heuristic for giving out grants' - but I'm not confident in that reading. Probably reflects my personal framing more than anything else.

Holly Morgan @ 2022-04-22T12:27 (+5)

Oh, I read it as more the former too!

I read your post as:

  1. Asking if FTX have done something as explicit as a BOTEC for each grant or if it's more a case of "this seems plausibly good" (where both use expected value as a heuristic)
  2. If there are BOTECs, requesting they write them all up in a publicly shareable form
  3. Implying that the larger the pot, the more certain you should be ("these things have a much higher than average chance of doing harm. Most mistaken grants will just fail. These grants carry reputational and epistemic risks to EA.")

I thought Sam's comments served as partial responses to each of these points. You seem to be essentially challenging FTX to be a lot more certain about the impact of their grants (tell us your reasoning so we can test your assumptions and help you be more sure you're doing the right thing, hire more staff like Open Phil so you can put a lot more work into these evaluations, reduce the risk of potential downsides because they're pretty bad) and Sam here essentially seems to be responding "I don't think we need to be that certain." I can't see where the expected value heuristic was ever called into question? Sorry if you thought that's how I was reading this.

[Edit: Maybe when you say "plausibly good" you mean "negative in expectation but a decent chance of being good", whereas I read it as "good in expectation but not as the result of an explicit BOTEC"? That might be where the confusion lies. If so, with my top-level comment I was trying to say "This is why FTX might be using heuristics that are even rougher than BOTECs and why they have a much smaller team than Open Phil and why they may not take the time to publish all their reasoning" rather than "This is why they might not be that bothered about expected value and instead are just funding things that might be good". Hope that makes sense.]

Bluefalcon @ 2022-04-22T00:54 (+4)

I would prefer that they be less transparent so they don't have to waste their valuable time.

Guy Raveh @ 2022-04-18T17:24 (+3)

I strongly agree we need transparency. In lieu of democracy in funding, orgs need to be accountable to the movement in some way.

Also, what's a BOTEC?

Jack Lewars @ 2022-04-18T17:31 (+6)

I've updated this now: it's a Back Of The Envelope Calculation.

David_Moss @ 2022-04-19T14:10 (+2)

Back when LEAN was a thing, we had a model of the value of local groups based on the estimated number of counterfactual actively engaged EAs, GWWC pledges and career changes, taking their value from 80,000 Hours' $ valuations of career changes of different levels.

The numbers would all be very out of date now, though, and the EA Groups Surveys post-2017 didn't gather the data that would allow this to be estimated.
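For concreteness, a minimal sketch of what such a model might look like - every per-outcome dollar value below is a placeholder, since the original LEAN figures are out of date and not reproduced here:

```python
# Hypothetical reconstruction of a LEAN-style group-value model.
# All per-outcome valuations are placeholders, not the original figures.
def group_value(engaged_eas, pledges, career_changes,
                value_per_ea=50_000, value_per_pledge=20_000,
                value_per_career_change=250_000):
    return (engaged_eas * value_per_ea
            + pledges * value_per_pledge
            + career_changes * value_per_career_change)

print(group_value(engaged_eas=3, pledges=5, career_changes=1))  # 500,000
```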

Kerkko Pelttari @ 2022-04-18T22:15 (+2)

Good questions - I have ended up thinking about many of these topics often.

Something else where I would find improved transparency valuable is the back-of-envelope calcs and statistics for denied funding applications. Reading EA Funds reports, for example, doesn't give a total view of where the current bar for interventions is, because we're only seeing the project distribution above the cutoff point.