EA Infrastructure Fund: Ask us anything!

By Jonas V, Michelle_Hutchinson, Buck, Max_Daniel @ 2021-06-03T01:06 (+70)

Hi everyone!

Managers of the EA Infrastructure Fund will be available for an Ask Me Anything session. We'll start answering questions on Friday, June 4th, though some of us will only be able to answer questions the week after. Nevertheless, if you would like to make sure that all fund managers can consider your question, you may want to post it before early Friday morning (UK time). 

What is the EA Infrastructure Fund?

The EAIF is one of the four EA Funds. While the other three Funds support direct work on various causes, this Fund supports work that could multiply the impact of direct work, including projects that provide intellectual infrastructure for the effective altruism community, run events, disseminate information, or fundraise for effective charities. 

Who are the fund managers, and why might you want to ask them questions?

The fund managers are Max Daniel, Michelle Hutchinson, and Buck Shlegeris. In addition, EA Funds Executive Director Jonas Vollmer is temporarily taking on chairperson duties, advising, and voting consultatively on grants. Ben Kuhn was a guest manager in our last grant round. They will all be available for questions, though some may have spotty availability and might post their answers as they have time throughout next week.

One particular reason why you might want to ask us questions is that we are all new in these roles: All fund managers of the EAIF have recently changed, and this was our first grant round. 

What happened in our most recent grant round?

We have made 26 grants totalling about $1.2 million. They include:

For more detail, see our payout report. It covers all grants from this round and provides more detail on our reasoning behind some of them.

The application deadline for our next grant round will be the 13th of June. After this round is wrapped up, we plan to accept rolling applications.

Ask any questions you like; we'll respond to as many as we can. 


Buck @ 2021-06-04T17:39 (+42)

A question for the fund managers: When the EAIF funds a project, roughly how should credit be allocated between the different parties involved, where those parties are:

Presumably this differs a lot between grants; I'd be interested in some typical figures.

This question is important because you need a sense of these numbers in order to make decisions about which of these parties you should try to be. E.g., if the donors get 90% of the credit, then earning to give (EtG) looks 9x better than if they get 10%.

 

(I'll provide my own answer later.)

Jonas Vollmer @ 2021-06-06T09:26 (+22)

Making up some random numbers:

  • The donors to the fund – 8%
  • The grantmakers – 10%
  • The rest of the EAIF infrastructure (eg Jonas running everything, the CEA ops team handling ops logistics) – 7%
  • The grantee – 75%

This is for a typical grant where someone applies to the fund with a reasonably promising project on their own and the EAIF gives them some quick advice and feedback. For a case of strong active grantmaking, I might say something more like 8% / 30% / 12% / 50%.

This is based on the reasoning that we're quite constrained by promising applications and have a lot of funding available.
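As a minimal sketch of the arithmetic (using only the made-up splits above and normalizing a grant's total impact to 1.0; none of this is a model the fund actually uses), here is how such fractions translate into per-party credit, and how the donor share drives comparisons like the one in Buck's question:

```python
# Illustrative only: apply the rough credit splits above to a grant whose
# total counterfactual impact is normalized to 1.0 (all numbers made up).
splits = {
    "typical grant":      {"donors": 0.08, "grantmakers": 0.10, "other EAIF infrastructure": 0.07, "grantee": 0.75},
    "active grantmaking": {"donors": 0.08, "grantmakers": 0.30, "other EAIF infrastructure": 0.12, "grantee": 0.50},
}

total_impact = 1.0  # normalized; substitute your own estimate of a grant's impact

for scenario, split in splits.items():
    assert abs(sum(split.values()) - 1.0) < 1e-9  # shares should sum to 100%
    credit = {party: round(share * total_impact, 3) for party, share in split.items()}
    print(scenario, credit)

# Buck's point about earning to give: its attractiveness scales with the donor
# share, so a 90% donor share looks 90/10 = 9x better than a 10% share.
print(0.90 / 0.10)  # 9.0
```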

Max_Daniel @ 2021-06-06T10:13 (+4)

I think my off-the-cuff numbers would be roughly similar to Jonas's, but mostly I just feel like I don't know how to think about this. I would probably need to spend 1 to 10 hours reviewing relevant theoretical concepts before being comfortable giving numbers that others might base decisions on.

Michelle_Hutchinson @ 2021-06-07T08:44 (+4)

+1

Jonas Vollmer @ 2021-06-14T10:00 (+5)

Here's another comment that goes into this a bit.

Max_Daniel @ 2021-07-21T21:58 (+11)

(I'd be very interested in your answer if you have one btw.)

Linch @ 2021-06-13T10:34 (+3)

(No need for the EAIF folks to respond; I think I would also find it helpful to get comments from other folks)

I'm curious about a set of related questions probing this at a more precise level of granularity.

For example, for the 

$248,300 to Rethink Priorities to allow Rethink to take on nine research interns (7 FTE)

Suppose for the sake of argument that the RP internship resulted in better career outcomes than the interns counterfactually would have had.* For the difference in impact between the internship and the next-best option, what fraction of the credit should be allocated to:

  • The donors to the fund
  • The grantmakers
  • The rest of the EAIF infrastructure
  • RP for selecting interns and therefore providing a signaling mechanism either to the interns themselves or for future jobs
  • RP for managing/training/aiding interns to hopefully excel
  • The work of the interns themselves


I'm interested in whether the ratio between the first 3 bullet points changes for a grant like this (for example, maybe with more $s per grant, donor $s are relatively less important and the grantmaker effort/$ ratio is lower).

I'm also interested in the appropriate credit assignment for the last 3 bullet points (breaking down all of Jonas's 75%!). For example, if most people see the value of RP's internship program to the interns as coming primarily via RP's selection methods, then it might make sense to invest more management/researcher time into designing better pre-internship work trials. 

I'm also interested in even more granular takes, but perhaps this is boring to other people.

(I work for RP. I do not speak for the org).


*(for reasons like: a) it sped up their networking, b) tangible outputs from the RP internship allowed them to counterfactually get jobs where they had more impact, c) it was a faster test for fit and made the interns correctly choose not to go into research, saving time, d) they learned actually valuable skills that made their career trajectory go more smoothly, etc.) 

Linch @ 2021-06-07T23:26 (+33)

Would the EAIF be interested in a) post hoc funding of previous salary/other expenses or b) impact certificates that account for risk taken? 


Some context: When I was thinking of running SF/Bay Area EA full-time*, one thing that was fairly annoying for me was that funders (correctly) were uninterested in funding me until there was demonstrated success/impact, or at least decent proxies for such. This intuition was correct; however, from my perspective the risk allocation seemed asymmetric. If I did a poor job, then I would eat all the costs. If I did a phenomenally good job, the best I could hope for (from a funding perspective) was a promise of continued funding for the future and maybe back payments for past work.

In the for-profit world, if you disagree with the judgement of funders, press on, and later turn out to be right, you get a greater share of the equity etc.  Nothing equivalent seemed to be true within EA's credit allocation.

It seems like if you disagree with the judgment of funders, the best you can hope to do is break even. Of course, a) I read not being funded as some signal that people didn't think me/my project was sufficiently promising and b) maybe some funders would actually prefer, for downside risk reasons, a lower number of unpromising EA projects of that calibre in the world. But at least in my case, I explicitly asked quite a few people about my project, and basically nobody said I should desist because the downside risks were too high. ** So it seems weird to be in a situation where founders of unfunded (but ex post good) projects bear all the risks but can at best hope for typical rewards.


*I've since decided to test my fit for research and a few other things. I currently think my comparative advantage is pretty far from direct community building, though I can imagine changing my mind again in 3 years.

**I expect all these signals to be even more confusing for people who want to work in areas less central than the SF Bay Area, or who know fewer funders than I do. 

Buck @ 2021-07-05T17:13 (+8)

I would personally be pretty down for funding reimbursements for past expenses.

Max_Daniel @ 2021-07-05T22:28 (+4)

I haven't thought a ton about the implications of this, but my initial reaction also is to generally be open to this.

So if you're reading this and are wondering if it could be worth it to submit an application for funding for past expenses, then I think the answer is we'd at least consider it and so potentially yes.

If you're reading this and it really matters to you what the EAIF's policy on this is going forward (e.g., if it's decision-relevant for some project you might start soon), you might want to check with me before going ahead. I'm not sure I'll be able to say anything more definitive, but it's at least possible. And to be clear, so far all that we have are the personal views of two EAIF managers, not a considered opinion or policy of all fund managers or the fund as a whole or anything like that.

Habryka @ 2021-07-05T18:55 (+4)

I would also be in favor of the LTFF doing this.

Linch @ 2021-07-06T01:01 (+2)

That's great to hear! But to be clear, not for risk adjustment? Or are you just not sure on that point? 

Buck @ 2021-07-06T04:41 (+2)

I am not sure. I think it’s pretty likely I would want to fund after risk adjustment. I think that if you are considering trying to get funded this way, you should consider reaching out to me first.

Jonas Vollmer @ 2021-07-06T09:05 (+6)

I'm also in favor of EA Funds doing generous back payments for successful projects. In general, I feel interested in setting up prize programs at EA Funds (though it's not a top priority).

One issue is that it's harder to demonstrate to regulators that back payments serve a charitable purpose. However, I'm confident that we can find workarounds for that.

MichaelA @ 2021-06-03T13:38 (+18)

Thanks for doing this AMA!

In the recent payout report, Max Daniel wrote:

My most important uncertainty for many decisions was where the ‘minimum absolute bar’ for any grant should be. I found this somewhat surprising.

Put differently, I can imagine a ‘reasonable’ fund strategy based on which we would have at least a few more grants; and I can imagine a ‘reasonable’ fund strategy based on which we would have made significantly fewer grants this round (perhaps below 5 grants between all fund managers).

This also seems to me like quite an important issue. It seems reminiscent of Open Phil's idea of making grants "when they seem better than our 'last dollar' (more discussion of the 'last dollar' concept here), and [saving] the money instead when they don't". 

Could you (any fund managers, including but not limited to Max) say more about how you currently think about this? Subquestions include:

Buck @ 2021-06-05T18:20 (+31)

I feel very unsure about this. I don't think my position on this question is very well thought through.

Most of the time, the reason I don't want to make a grant doesn't feel like "this isn't worth the money", it feels like "making this grant would be costly for some other reason". For example, when someone applies for a salary to spend some time researching some question which I don't think they'd be very good at researching, I usually don't want to fund them, but this is mostly because I think it's unhealthy in various ways for EA to fund people to flail around unsuccessfully rather than because I think that if you multiply the probability of the research panning out by the value of the research, you get an expected amount of good that is worse than longtermism's last dollar.

I think this question feels less important to me because the grants it affects are marginal anyway. I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make. And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways. And coming up with a more consistent answer to "where should the bar be" seems like a worse use of my time than those other activities.

I think I would rather make 30% fewer grants and keep the saved money in a personal account where I could disburse it later.

(To be clear, I am grateful to the people who apply for EAIF funding to do things, including the ones who I don't think we should fund, or only marginally think we should fund; good on all of you for trying to think through how to do lots of good.)

Linch @ 2021-06-08T00:14 (+2)

I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make

Am I correct in understanding that this is true for your beliefs about ex ante rather than ex post impact? (in other words, that 1/4 of grants you pre-identified as top-25% will end up accounting for more than 50% of your positive impact) 

If so, is this a claim about only the positive impact of the grants you make, or also about the absolute value of all grants you make?  See related question.

Buck @ 2021-06-08T17:54 (+2)

This is indeed my belief about ex ante impact. Thanks for the clarification.

Michelle_Hutchinson @ 2021-06-04T10:55 (+22)

Speaking just for myself: I don’t think I could currently define a meaningful ‘minimum absolute bar’. Having said that, the standard most salient to me is often ‘this money could have gone to anti-malaria bednets to save lives’. I think (at least right now) it’s not going to be that useful to think of EAIF as a cohesive whole with a specific bar, let alone explicit criteria for funding. A better model is a cluster of people with different understandings of ways we could be improving the world which are continuously updating, trying to figure out where we think money will do the most good and whether we’ll find better or worse opportunities in the future.

Here are a couple of things pushing me to have a low-ish bar for funding: 

  • I think EA currently has substantially more money than it has had in the past, but hasn’t progressed as fast in figuring out how to turn that into improving the world. That makes me inclined to fund things and see how they go.
  • As a new committee, it seems pretty good to fund some things, make predictions, and see how they pan out. 
  • I’d prefer EA to be growing faster than it currently is, so funding projects now rather than saving the money to try to find better projects in future looks good to me.  

Here are a couple of things driving up my bar:

  • EAIF gets donations from a broad range of people. It seems important for all the donations to be at least somewhat explicable to the majority of its donors. This makes me hesitant to fund more speculative things than I would be with my money, and to stick more closely to ‘central cases’ of infrastructure building than I otherwise would. This seems particularly challenging for this fund, since its remit is a bit esoteric, and not yet particularly clearly defined. (As evidenced by comments on the most recent grant report, I didn’t fully succeed in this aim this time round.)
  • Something particularly promising which I don’t fund is fairly likely to get funded by others, whereas something harmful I fund can’t be cancelled by others, so I want to be fairly cautious while I’m starting out in grant making.
Jonas Vollmer @ 2021-06-04T15:20 (+24)

Some further things pushing me towards lowering my bar:

  • It seems to me that it has proven pretty hard to convert money into EA movement growth and infrastructure improvements. This means that when we do encounter such an opportunity, we should most likely take it, even if it seems expensive or unlikely to succeed.
  • EA has a really large amount of money available (literally billions). Some EAs doing direct work could literally earn >$1,000 per hour if they pursued earning to give, but it's generally agreed that direct work seems more impactful for them. Our common intuitions for spending money don't hold anymore – e.g., a discussion about how to spend $100,000 should probably receive roughly as much time and attention as a discussion about how to spend 2.5 weeks (100 hours) of senior staff time. This means that I don't want to think very long about whether to make a grant. Instead, I want to spend more time thinking about how to help ensure that the project will actually be successful.
  • In cases where a grant might be too weird for a broad range of donors, we can always refer them to a private funder. So I try to think about whether something should be funded or not, and ignore the donor perception issue. At a later point, I can still ask myself 'should this be funded by the EAIF or a large aligned donor?'

Some further things increasing my bar:

  • If we routinely fund mediocre work, there's little real incentive for grantseekers to strive to produce truly outstanding work.
Max_Daniel @ 2021-06-04T15:39 (+15)

Basically everything Jonas and Michelle have said on this sounds right to me as well.

Maybe a minor difference:

  • I certainly agree that, in general, donor preferences are very important for us to pay attention to.
  • However, I think the "bar" implied by Michelle's "important for all the donations to be at least somewhat explicable to the majority of its donors" is slightly too high.
  • I instead think that it's important that a clear majority of donors endorses our overall decision procedure. [Or, if they don't, then I think we should be aware that we're probably going to lose those donations.] I think this would ideally be compatible with only most donations being somewhat explicable (and a decent fraction, probably a majority, to be more strongly explicable). 
    • Though I would be interested to learn if EAIF donors disagreed with this.
  • (It's a bit unclear how to weigh both donors and grants here. I think the right weights to use in this context are somewhere in between uniform weights across grants/donors and weights proportional to grant/donation size, while being closer to the latter.)
Ben_West @ 2021-06-04T18:36 (+4)

This means that when we do encounter such an opportunity, we should most likely take it, even if it seems expensive or unlikely to succeed... Some EAs doing direct work could literally earn >$1,000 per hour if they pursued earning to give, but it's generally agreed that direct work seems more impactful for them

I notice that the listed grants seem substantially below $1000/hour; e.g. Rethink getting $250,000 for seven FTEs implies ~$35,000/FTE or roughly $18/hour. *

Is this because you aren't getting those senior people applying? Or are there other constraints?

* (Maybe this is off by a factor of two if you meant that they are FTE but only for half the year etc.)

Peter_Hurford @ 2021-06-04T20:36 (+15)

I notice that the listed grants seem substantially below $1000/hour; e.g. Rethink getting $250,000 for seven FTEs implies ~$35,000/FTE or roughly $18/hour. *

 

There are two misconceptions here:

(1) We are hiring seven interns, but each will only be there for three months; I believe it is 1.8 FTE collectively.

(2) The grant is not being entirely allocated to intern compensation.

Interns at Rethink Priorities currently earn $23-25/hr. Researchers hired on a permanent basis earn more than that, currently $63K-85K/yr (prorated for part-time work).

Jonas Vollmer @ 2021-06-06T09:19 (+8)

I notice that the listed grants seem substantially below $1000/hour (…)

Is this because you aren't getting those senior people applying? Or are there other constraints?

The main reason is that the people are willing to work for a substantially lower amount than what they could make when earning to give. E.g., someone who might be able to make $5 million per year in quant trading or tech entrepreneurship might decide to ask for a salary of $80k/y when working at an EA organization. It would seem really weird for that person to ask for a $5 million / year salary, especially given that they'd most likely want to donate most of that anyway.

Ben_West @ 2021-06-23T18:34 (+6)

Cool, for what it's worth my experience recruiting for a couple EA organizations is that labor supply is elastic even above (say) $100k/year, and your comments seem to indicate that you would be happy to fund at least some people at that level.

So I remain kind of confused why the grant amounts are so small.

Jonas Vollmer @ 2021-06-24T10:11 (+5)

If you have to pay fairly (i.e., if you pay one employee $200k/y, you have to pay everyone else with a similar skill level a similar amount), the marginal cost of an employee who earns $200k/y can be >$1m/y. That may still be worth it, but less clearly so.
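As a purely hypothetical illustration of this fairness effect (made-up salaries and headcounts, not figures from any actual organization):

```python
# Hypothetical numbers: extra annual cost of raising one person's salary when
# internal pay equity means similarly skilled colleagues must be matched.
def marginal_cost(new_salary: int, old_salary: int, similar_colleagues: int) -> int:
    raise_per_person = new_salary - old_salary
    return raise_per_person * (1 + similar_colleagues)

# E.g., moving one researcher from $80k/y to $200k/y when 8 similarly skilled
# colleagues must get the same raise adds about $1.08m/y in salary costs.
print(marginal_cost(200_000, 80_000, 8))  # 1080000
```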

FWIW, I also don't really share the experience that labor supply is elastic above $100k/y, at least when taking into account whether staff have a good attitude, fit into the culture of the organization, etc. I'd be keen to hear more about that.

Jonas Vollmer @ 2021-06-04T14:43 (+6)

Because the EAIF is aiming to grow the overall resources and capacity for improving the world, one model is simply "is the growth rate greater than zero?" Some of the projects we don't fund to me look like they have a negative growth rate (i.e., in expectation, they won't achieve much, and the money and time spent on them will be wasted), and these should obviously not be funded. Beyond that, I don't think it's easy to specify a 'minimum absolute bar'.

Furthermore, one straightforward way to increase the EA community's resources is through financial investments, and any EA project should beat that bar in addition to returning more than it costs. (I don't think this matters much in practice, as we're hoping for growth rates much greater than those typical in financial markets.)

Linch @ 2021-06-08T01:55 (+15)

What % of grants you almost funded do you expect to be net negative for the world, had they counterfactually been implemented? 

See paired question about grants you actually funded. 

MichaelA @ 2021-06-04T08:33 (+13)

[I'm going to adapt some questions from myself or other people from the recent Long-Term Future Fund and Animal Welfare Fund AMAs.]

  1. How much do you think you would've granted in this recent round if the total funding available to the IF had been ~$5M? ~$10M? ~$20M?
  2. What do you think is your main bottleneck to giving more? Some possibilities that come to mind:
    • Available funding
    • Good applicants with good proposals for implementing good project ideas
      • And to the extent that this is your bottleneck, do yo
    • Grantmaker capacity to evaluate applications
      • Maybe this should capture both whether they have time and whether they have techniques or abilities to evaluate project ideas whose expected value seems particularly hard to assess
    • Grantmaker capacity to solicit or generate new project ideas
    • Fundamental, macrostrategic, basic, or crucial-considerations-like work that could aid in generating project ideas and evaluating applications 
      • E.g., it sounds like this would've been relevant to Max Daniel's views on the IIDM working group in the recent round
  3. To the extent that you're bottlenecked by the number of good applications or would be bottlenecked by that if you funded more, is that because (or do you expect it'd be because) there are too few applications in general, or too low a proportion that are high-quality?
  4. When an application isn't sufficiently high-quality, is that usually due to the quality of the idea, the quality of the applicant, or a mismatch between the idea and the applicant’s skillset (e.g., the applicant does seem highly generally competent, but lacks a specific, relevant skill)?
  5. If there are too few applicants, or too few with relevant skills, is this because there are too few of such people interested in EA infrastructure stuff, or because there probably are such people who are interested in that stuff  but they’re applying less often than would be ideal?

(It seems like answers to those questions could inform whether EAIF should focus on generating more ideas, finding more people from within EA who could execute ideas, finding more people from outside of EA who could execute ideas, or improving the match between ideas and people.)

Michelle_Hutchinson @ 2021-06-04T14:42 (+21)

Answering these thoroughly would be really tricky, but here are a few off-the-cuff thoughts: 

1. Tough to tell. My intuition is 'the same amount as I did' because I was happy with the amount I could grant to each of the recipients I granted to, and I didn't have time to look at more applications than I did. On the other hand, I could imagine that if the fund had significantly more funding, that would seem to provide a stronger mandate for trying things out and taking risks, so maybe that would have inclined me to spend less time evaluating each grant and use some money to do active grantmaking, or maybe it would have inclined me to fund one or two of the grants that I turned down. I also expect to be less time-constrained in future because we won't be doing an entire quarter's grants in one round, and because there will be less 'getting up to speed'.

2. Probably most of these are some bottleneck, and also they interact: 
- I had pretty limited capacity this round, and hope to have more in future. Some of that was also to do with not knowing much about some particular space and the plausible interventions in that space, so was a knowledge constraint. Some was to do with finding the most efficient way to come to an answer.
- It felt to me like there was some bottleneck of great applicants with great proposals. Some proposals stood out fairly quickly as being worth funding to me, so I expect to have been able to fund more grants had there been more of these. It's possible some grants we didn't fund would have seemed worth funding had the proposal been clearer / more specific. 
- There were macrostrategic questions the grantmakers disagreed over - for example, the extent to which people working in academia should focus on doing good research of their own versus encouraging others to do relevant research. There are also such questions that I think didn't affect any of our grants this time but that I expect will in future, such as how to prioritise spreading ideas like 'you can donate extremely cost-effectively to these global health charities' versus more generalised EA principles.

3. The proportion of good applications was fairly high compared to my expectation (though ofc the fewer applications we reject the faster we can give out grants, so until we're granting to everyone who applies, there's always a sense in which the proportion of good applications is bottlenecking us). The proportion of applications that seemed pretty clearly great, well thought through and ready to go as initially proposed, and which the committee agreed on, seemed maybe lower than I might have expected. 

4. I think I noticed some of each of these, and it's a little tough to say because the better the applicant, the more likely they are to come up with good ideas and also to be well calibrated on their fit with the idea. If I could dial up just one of these, probably it would be quality of idea.
 

5. One worry I have is that many people who do well early in life are encouraged to do fairly traditional things - for example they get offered good jobs and scholarships to go down set career tracks. By comparison, people who come into their own later on (eg late in university) are more in a position to be thinking independently about what to work on. Therefore my sense is that community building in general is systematically missing out on some of the people who would be best at it because it's a kind of weird, non-standard thing to work on. So I guess I lean on the side of too few people interested in EA infrastructure stuff.

Jonas Vollmer @ 2021-06-04T16:01 (+2)

(Just wanted to say that I agree with Michelle.)

Buck @ 2021-06-05T19:17 (+19)

Re 1: I don't think I would have granted more.

Re 2: Mostly "good applicants with good proposals for implementing good project ideas" and "grantmaker capacity to solicit or generate new project ideas", where the main bottleneck on the second of those isn't really generating the basic idea but coming up with a more detailed proposal and figuring out who to pitch on it etc.

Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low quality applications make my life as a grantmaker much worse; if you're reading this, please submit your EAIF application rather than worry that it is not worth our time to evaluate. 

Re 4: It varies. Mostly it isn't that the applicant lacks a specific skill.

Re 5: There are a bunch of things that have to align in order for someone to make a good proposal. There has to be a good project idea, and there has to be someone who would be able to make that work, and they have to know about the idea and apply for funding for it, and they need access to whatever other resources they need. Many of these steps can fail. Eg probably there are people who I'd love to fund to do a particular project, but no-one has had the idea for the project, or someone has had the idea for the project but that person hasn't heard about it or hasn't decided that it's promising, or doesn't want to try it because they don't have access to some other resource. I think my current guess is that there are good project ideas that exist, and people who'd be good at doing them, and if we can connect the people to the projects and the required resources we could make some great grants, and I hope to spend more of my time doing this in future.

Max_Daniel @ 2021-06-05T21:19 (+8)
  1. How much do you think you would've granted in this recent round if the total funding available to the IF had been ~$5M? ~$10M? ~$20M?

I can't think of any specific grant decision this round for which I think this would have made a difference. Maybe I would have spent more time thinking about how successful grantees might be able to utilize more money than they applied for, and on discussing this with grantees.

Overall, I think there might be a "paradoxical" effect: with much more total funding, I might have spent less time evaluating grant applications, and therefore made fewer grants this round. This is because, under this assumption, I would feel more strongly that we should frontload building the capacity to make more, larger, and higher-value grants in the future, as opposed to optimizing the decisions on the grant applications we happened to get now. E.g., I might have spent more time on:

  • Generating leads for, and otherwise helping with, recruiting additional fund managers
  • Active grantmaking
  • 'Structural' improvements to the fund - e.g., improving our discussions and voting methods
Max_Daniel @ 2021-06-05T21:27 (+6)

On 2, I agree with Buck that the two key bottlenecks - especially if we weight grants by their expected impact - were "Good applicants with good proposals for implementing good project ideas" and "Grantmaker capacity to solicit or generate new project ideas".

I think I've had a stronger sense than at least some other fund managers that "Grantmaker capacity to evaluate applications" was also a significant bottleneck, though I would rank it somewhat below the above two, and I think it tends to be a larger bottleneck for grants that are more 'marginal' anyway, which diminishes its impact-weighted importance. I'm still somewhat worried that our lack of capacity (both time and lack of some abilities) could in some cases lead to a "false negative" on a highly impactful grant, especially due to our current way of aggregating opinions between fund managers.

Max_Daniel @ 2021-06-05T21:33 (+5)

5. If there are too few applicants, or too few with relevant skills, is this because there are too few of such people interested in EA infrastructure stuff, or because there probably are such people who are interested in that stuff  but they’re applying less often than would be ideal?

I think both of these are significant effects. I suspect I might be more worried than others about "good people applying less often than would be ideal", but not sure.

Max_Daniel @ 2021-06-05T21:32 (+5)

4. When an application isn't sufficiently high-quality, is that usually due to the quality of the idea, the quality of the applicant, or a mismatch between the idea and the applicant’s skillset (e.g., the applicant does seem highly generally competent, but lacks a specific, relevant skill)?

All of these have happened. I agree with Buck that "applicant lacks a highly specific skill" seems uncommon; I think the cases of "mismatch between the idea and the applicant" are broader/fuzzier.

I don't have an immediate sense that any of them is particularly common.

Max_Daniel @ 2021-06-05T21:28 (+5)

Re 3, I'm not sure I understand the question and feel a bit confused about how to answer it directly, but I agree with Buck:

I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low quality applications make my life as a grantmaker much worse; if you're reading this, please submit your EAIF application rather than worry that it is not worth our time to evaluate. 

Max_Daniel @ 2021-06-05T21:22 (+5)
  • Fundamental, macrostrategic, basic, or crucial-considerations-like work that could aid in generating project ideas and evaluating applications
    • E.g., it sounds like this would've been relevant to Max Daniel's views on the IIDM working group in the recent round

Hmm, I'm not sure I agree with this. Yes, if I had access to a working crystal ball that would have helped - but for realistic versions of 'knowing more about macrostrategy', I can't immediately think of anything that would have helped with evaluating the IIDM grant in particular. (There are other things that would have helped, but I don't think they have to do with macrostrategy, crucial considerations, etc.)

MichaelA @ 2021-06-06T09:05 (+5)

This surprises me. Re-reading your writeup, I think my impression was based on the section "What is my perspective on 'improving institutions'?" I'd be interested to hear your take on how I might be misinterpreting that section or misinterpreting this new comment of yours. I'll first quote the section in full, for the sake of other readers:

I am concerned that ‘improving institutions’ is a hard area to navigate, and that the methodological and strategic foundations for how to do this well have not yet been well developed. I think that in many cases, literally no one in the world has a good, considered answer to the question of whether improving the decision quality of a specific institution along a specific dimension would have net good or net bad effects on the world. For instance, how to weigh the effects from making the US Department of Defense more ‘rational’ at forecasting progress in artificial intelligence? Would reducing bureaucracy at the UN Development Programme have a positive or negative net effect on global health over the next decade? I believe that we are often deeply uncertain about such questions, and that any tentative answer is liable to be overturned upon the discovery of an additional crucial consideration.

At the very least, I think that an attempt at answering such questions would require extensive familiarity with relevant research (e.g., in macrostrategy); I would also expect that it often hinges on a deep understanding of the relevant domains (e.g., specific policy contexts, specific academic disciplines, specific technologies, etc.). I am therefore tentatively skeptical about the value of a relatively broad-strokes strategy aimed at improving institutions in general.

I am particularly concerned about this because some prominent interventions for improving institutions would primarily result in making a given institution better at achieving its stated goals. For instance, I think this would often be the case when promoting domain-agnostic decision tools or policy components such as forecasting or nudging. Until alternative interventions will be uncovered, I would therefore expect that some people interested in improving institutions would default to pursuing these ‘known’ interventions.

To illustrate some of these concerns, consider the history of AI policy & governance. I know of several EA researchers who share the impression that early efforts in this area were “bottlenecked by entangled and under-defined research questions that are extremely difficult to resolve”, as Carrick Flynn noted in an influential post. My impression is that this is still somewhat true, but that there has been significant progress in reducing this bottleneck since 2017. However, crucially, my loose impression (often from the outside) is that this progress was to a large extent achieved by highly focused efforts: that is, people who focused full-time on AI policy & governance, and who made large upfront investments into acquiring knowledge and networks highly specific to the AI domain or particular policy contexts such as the US federal government (or could draw on past experience in these areas). I am thinking of, for example, background work at 80,000 Hours by Niel Bowerman and others (e.g. here) and the AI governance work at FHI by Allan Dafoe and his team.

Personally, when I think of what work in the area of ‘improving institutions’ I’m most excited about, my (relatively uninformed and tentative) answer is: Adopt a similar approach for other important cause areas; i.e., find people who are excited to do the groundwork on, e.g., the institutions and policy areas most relevant to official development assistance, animal welfare (e.g. agricultural policy), nuclear security, biosecurity, extreme climate change, etc. I think that doing this well would often be a full-time job, and would require a rare combination of skills and good networks with both ‘EA researchers’ and ‘non-EA’ domain experts as well as policymakers.

It seems to me like things like "Fundamental, macrostrategic, basic, or crucial-considerations-like work" would be relevant to things like this (not just this specific grant application) in multiple ways:

  • I think the basic idea of differential technological development or differential progress is relevant here, and the development and dissemination of that idea was essentially macrostrategy research (though of course other versions of the idea predated EA-aligned macrostrategy research)
    • Further work to develop or disseminate this idea could presumably help evaluate future grant applications like this
    • This also seems like evidence that other versions of this kind of work could be useful for grantmaking decisions
  • Same goes for the basic concepts of crucial considerations, disentanglement research, and cluelessness (I personally don't think the latter is useful, but you seem to), which you link to in that report
    • In these cases, I think there's less useful work to be done further elaborating the concepts (maybe there would be for cluelessness, but currently I think we should instead replace that concept), but the basic terms and concepts seem to have improved our thinking, and macrostrategy-like work may find more such things
  • It seems it would also be useful and tractable to at least somewhat improve our understanding of which "intermediate goals" would be net positive (and which would be especially positive) and which institutions are more likely to advance or hinder those goals given various changes to their stated goals, decision-making procedures, etc.
  • Mapping the space of relevant actors and working out what sort of goals, incentives, decision-making procedures, capabilities, etc. they already have also seems relevant
Max_Daniel @ 2021-06-06T10:10 (+4)

I think the expected value of the IIDM group's future activities, and thus the expected impact of a grant to them, is sensitive to how much relevant fundamental, macrostrategic, etc., kind of work they will have access to in the future.

Given the nature of the activities proposed by the IIDM group, I don't think it would have helped me for the grant decision if I had known more about macrostrategy. It would have been different if they had proposed a more specific or "object-level" strategy, e.g., lobbying for a certain policy.

I mean it would have helped me somewhat, but I think it pales in importance compared to things like "having more first-hand experience in/with the kind of institutions the group hopes to improve", "more relevant knowledge about institutions, including theoretical frameworks for how to think about them", and "having seen more work by the group's leaders, or otherwise being better able to assess their abilities and potential".

[ETA: Maybe it's also useful to add that, on my inside view, free-floating macrostrategy research isn't that useful, certainly not for concrete decisions the IIDM group might face. This also applies to most of the things you suggest, which strike me as 'too high-level' and 'too shallow' to be that helpful, though I think some 'grunt work' like 'mapping out actors' would help a bit, even if it's not what I typically think of when saying macrostrategy.

Neither is 'object-level' work that ignores macrostrategic uncertainty useful. 

I think often the only thing that helps is to have people do the object-level work who are both excellent at doing that work and have the kind of opaque "good judgment" that allows them to be appropriately responsive to macrostrategic considerations and to reach reflective equilibrium between the incentives suggested by proxies and high-level considerations around "how valuable is that proxy anyway?". Unfortunately, such people seem extremely rare. I also think (and here my view probably differs from that of others who would endorse most of the other things I'm saying here) that we're not nearly as good as we could be at identifying people who may already be in the EA community and have the potential to become great at this, and at identifying and 'teaching' some of the practice-able skills relevant for this. (I think there are also some more 'innate' components.)

This is all slightly exaggerated to gesture at the view I have, and I'm not sure how much weight I'd want to give that inside view when making, e.g., funding decisions.]

MichaelA @ 2021-06-06T12:14 (+4)

Thanks, these are interesting perspectives.

---

I think to some extent there's just a miscommunication here, rather than a difference in views. I intended to put a lot of things in the "Fundamental, macrostrategic, basic, or crucial-considerations-like work" bucket - I mainly wanted to draw a distinction between (a) all research "upstream" of grantmaking, and (b) things like Available funding, Good applicants with good proposals for implementing good project ideas, Grantmaker capacity to evaluate applications, and Grantmaker capacity to solicit or generate new project ideas.

E.g., I'd include "more relevant knowledge about institutions, including theoretical frameworks for how to think about them" in the bucket I was trying to gesture to.

So not just e.g. Bostrom-style macrostrategy work.

On reflection, I probably should've also put "intervention research" in there, and added as a sub-question "And do you think one of these types of research would be more useful for your grantmaking than the others?"

---

But then your "ETA" part is less focused on macrostrategy specifically, and there I think my current view does differ from yours (making yours interesting + thought-provoking).

Patrick @ 2022-01-24T06:54 (+11)

I emailed CEA with some questions about the LTFF and EAIF, and Michael Aird (MichaelA on the forum) responded about the EAIF. He said that I could post his email here. Some of the questions overlap with the contents of this AMA (among other things), but I included everything. My questions are formatted as quotes, and the unquoted passages below were written by Michael.

Here are some things I've heard about LTFF and EAIF (please correct any misapprehensions):

You can apply for a grant anytime, and a decision will be made within a few weeks.

Basically correct. Though some decisions take longer, mainly for unusually complicated, risky, and/or large grants, or grants where the applicant decides in response to our questions that they need to revisit their plans and get back to us later. And many decisions are faster. 

The application process is meant to be low-effort, with the application requiring no more than a few hours' work. 

Basically correct, though bear in mind that that doesn't necessarily include the time spent actually doing the planning. We basically just don't want people to spend >2 hours on actually writing the application, but it'll often make sense to spend >2 hours, sometimes much more than 2 hours, on actual planning.

The funds don't put many resources into evaluation, which is ad hoc and focuses on the most-controversial grants—the goal is to decide whether to make more such grants in the future. (Question: how do you decide whether a controversial grant was successful?) [Author's note: I was unclear here—I was asking about post-hoc evaluation, but Michael's answer is about evaluating grant applications.]

 These statements seem somewhat fuzzy so it's hard to say if I'd agree. Here's what I'd say:

The typical grant is small and one-off (more money requires a new application), and made to an individual. Grants are also made to organizations, and these might be a little bigger but still on the small side (probably not more than $300k).

I guess this is about right, but:

Your specific questions:

How many grants come through channels other than people applying unbidden (e.g., referrals/nominations by third parties or active grantmaking by fund managers)? What's the most common such channel?

The LTFF's fund managers all have backgrounds in AI or CS. Is the process for evaluating grants in areas outside the managers' areas of expertise any different?

What's the role of the advisers to the LTFF and EAIF listed on the website? Do managers commonly discuss grants with people not listed on the website (e.g., experts at other nonprofits)?

What's the process for a grant's being approved or rejected? E.g., can a primary grant evaluator unilaterally reject a grant? Do grants have to be unanimously approved by all managers? Do all managers have a say in all grants?

What are the motivations for having guest managers—increased capacity, identifying or training promising grantmakers, diversity of viewpoints?

I know that sometimes you give feedback to unsuccessful grant recipients. What does this feedback look like—e.g., is it a 3-sentence email, or an arbitrarily long phone conversation with the primary evaluator?

What processes do you have to learn from mistakes or sub-optimal decisions?

Linch @ 2021-06-07T23:47 (+11)

What % of your grants (either grantee- or $-weighted, but preferably specify which denominator you're using) do you expect to be net negative to the world?

A heuristic I have for being less risk-averse is 

If X (horrible thing) never happens, you spend too many resources on preventing X.

Obviously this isn't true for everything (eg a world without any existential catastrophes seems like a world that has its priorities right), but I think it's overall a good heuristic, as illustrated by Scott Aaronson's Umeshisms and Mitchell and Webb's "No One Drowned" episode. 

Max_Daniel @ 2021-06-08T00:15 (+14)

My knee-jerk reaction is: If "net negative" means "ex-post counterfactual impact anywhere below zero, but including close-to-zero cases" then it's close to 50% of grantees. Important here is that "impact" means "total impact on the universe as evaluated by some omniscient observer". I think it's much less likely that funded projects are net negative by the light of their own proxy goals or by any criterion we could evaluate in 20 years (assuming no AGI-powered omniscience or similar by then).

(I still think that the total value of the grantee portfolio would be significantly positive b/c I'd expect the absolute values to be systematically higher for positive than for negative grants.)

This is just a general view I have. It's not specific to EA Funds, or the grants this round. It applies to basically any action. That view is somewhat considered but I think also at least somewhat controversial. I have discussed it a bit but not a lot with others, so I wouldn't be very surprised if someone replied to this comment saying "but this can't be right because of X", and then I'd be like "oh ok, I think you're right, the close-to-50% figure now seems massively off to me".

--

If "net negative" means "significantly net negative" (though I'm not sure what the interesting bar for "significant" would  be), then I'm not sure I have a strong prior. Glancing over the specific grants we made I feel that for very roughly 1/4 of them I have some vague sense that "there is a higher-than-baseline risk for this being significantly net negative". But idk what that higher-than-baseline risk is as absolute probability, and realistically I think all that's going on here is that for about 1/4 of grants I can easily generate some prototypical story for why they'd turn out to be significantly net negative. I don't know how well this is correlated with the actual risk.

(NB I still think that the absolute values for 'significantly net negative' grants will be systematically smaller than for 'significantly net positive' ones. E.g., I'd guess that the 99th percentile ex-post impact grant much more than offsets the 1st percentile grant [which I'm fairly confident is significantly net negative].)
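To illustrate how both of these claims can hold at once, here is a toy simulation (with entirely made-up distributional assumptions, not an estimate of actual grant outcomes) in which roughly half of grants end up below zero ex post while the portfolio's mean impact stays clearly positive, because the positive tail is much larger:

```python
import random

random.seed(0)

# Toy model with made-up parameters: most grants land near zero (about half of
# them slightly negative), and a small fraction produce large positive "wins".
def simulated_impact():
    noise = random.gauss(0, 1)                                        # near-zero outcomes
    tail = random.expovariate(0.2) if random.random() < 0.1 else 0.0  # rare big wins (mean 5)
    return noise + tail

impacts = [simulated_impact() for _ in range(100_000)]
share_negative = sum(i < 0 for i in impacts) / len(impacts)
mean_impact = sum(impacts) / len(impacts)

print(f"share of grants with negative ex-post impact: {share_negative:.0%}")  # ~45%
print(f"mean impact per grant: {mean_impact:+.2f}")                           # ~+0.5
```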

Linch @ 2021-06-08T02:01 (+2)

Thanks a lot for this answer! After asking this, I realize I'm also interested in asking the same question about what ratio of grants you almost funded would be ex post net-negative.

Jonas Vollmer @ 2021-06-13T14:47 (+3)

This isn't what you asked, but out of all the applications that we receive (excluding desk rejections), 5-20% seem ex ante net-negative to me, in the sense that I expect someone giving funding to them to make the world worse. In general, worries about accidental harm do not play a major role in my decisions not to fund projects, and I don't think we're very risk-averse. Instead, a lot of rejections happen because I don't believe the project will have a major positive impact.

Linch @ 2021-06-13T21:08 (+2)

Are you including opportunity cost in the consideration of net harm? 

Jonas Vollmer @ 2021-06-14T08:46 (+2)

I include the opportunity cost of the broader community (e.g., the project hires people from the community who'd otherwise be doing more impactful work), but not the opportunity cost of providing the funding. (This is what I meant to express with "someone giving funding to them", though I think it wasn't quite clear.)

Max_Daniel @ 2021-06-08T00:21 (+2)

As an aside, I think that's an excellent heuristic, and I worry that many EAs (including myself) haven't internalized it enough.

(Though I also worry that pushing too much for it could lead to people failing to notice the exceptions where it doesn't apply.)

MichaelA @ 2021-06-08T08:40 (+2)

[thinking/rambling aloud] I feel like an "ideal reasoner" or something should indeed have that heuristic, but I feel unsure whether boundedly rational people internalising it more, or having it advocated for to them more, would be net positive or net negative. (I feel close to 50/50 on this and haven't thought about it much; "unsure" doesn't mean "I suspect it'd probably be bad".) 

I think this intersects with concerns about naive consequentialism and (less so) potential downsides of using explicit probabilities

If I had to choose whether to make most of the world closer to naive consequentialism than it is now, and couldn't instead choose sophisticated consequentialism, I'd probably do that. But I'm not sure for EA grantmakers. And of course sophisticated consequentialism seems better.

Maybe there's a way we could pair this heuristic with some other heuristics or counter-examples such that the full package is quite useful. Or maybe adding more of this heuristic would already  help "balance things out", since grantmakers may already be focusing somewhat too much on downside risk. I really don't know.

Linch @ 2021-06-10T03:41 (+9)

Hmm, I think this heuristic actually doesn't make sense for ideal (Bayesian) reasoners, since ideal reasoners can just multiply the EVs out for all actions and don't need weird approximations/heuristics. 

I broadly think this heuristic makes sense in a loose way in situations where the downside risks are not disproportionately high. I'm not sure what you mean by "sophisticated consequentialism" here, but I guess I'd sort of expect sophisticated consequentialism (at least in situations where explicit EV calculations are less practical) to include a variant of this heuristic somewhere.

MichaelA @ 2021-06-10T07:23 (+3)

I now think sophisticated consequentialism may not be what I really had in mind. Here's the text from the entry on naive consequentialism I linked to:

Consequentialists are supposed to estimate all of the effects of their actions, and then add them up appropriately. This means that they cannot just look at the direct and immediate effects of their actions, but also have to look at indirect and less immediate effects. Failing to do so amounts to applying naive consequentialism. That is to be contrasted with sophisticated consequentialism, which appropriately takes indirect and less immediate effects into account (cf. the discussion on “simplistic” vs. “correct” replaceability on 80,000 Hours’ blog (Todd 2015)).

As for a concrete example, a naive conception of consequentialism may lead one to believe that it is right to break rules if it seems that that would have net positive effects on the world. Such rule-breaking normally has negative side-effects, however - e.g. it can lower the degree of trust in society, and for the rule-breaker’s group in particular - which means that sophisticated consequentialism tends to be more opposed to rule-breaking than naive consequentialism.

I think maybe what I have in mind is actually "consequentialism that accounts appropriately for biases, model uncertainty, optimizer's curse, unilateralist's curse, etc." (This seems like a natural fit for the words sophisticated consequentialism, but it sounds like that's not what the term is meant to mean.) 

I'd be much more comfortable with someone having your heuristic if they were aware of those reasons why your EV estimates (whether implicit or explicit, qualitative or quantitative) should often be quite uncertain and may be systematically biased towards too much optimism for whatever choice you're most excited about. (That's not the same as saying EV estimates are useless, just that they should often be adjusted in light of such considerations.)

Neel Nanda @ 2021-06-07T00:14 (+11)

If I know an organisation is applying to EAIF, and have an inside view that the org is important, how valuable is donating $1000 to the org compared to donating $1000 to EAIF? More generally, how should medium sized but risk-neutral donors coordinate with the fund?

Max_Daniel @ 2021-06-07T20:45 (+17)

My very off-the-cuff thoughts are:

  • If it seems like you are in an especially good position to assess that org, you should give to them directly. This could, e.g., be the case if you happened to know the org's founders especially well, or if you had rare subject-matter expertise relevant to assessing that org.
  • If not, you should give to a donor lottery.
  • If you win the donor lottery, you would probably benefit from coordinating with EA Funds. Literally giving the donor lottery winnings to EA Funds would be a solid baseline, but I would hope that many people can 'beat' that baseline, especially if they get the most valuable inputs from 1-10 person-hours of fund manager time.
  • Generally, I doubt that it's a good use of donors' and fund managers' time to coordinate on $1,000 donations (except in rare and obvious cases). For a donation of $10,000, some very quick coordination may sometimes be useful - especially if it goes to an early-stage organization. For a $100,000 donation, it starts to look more likely than not that some coordination would be helpful (though in many cases the EA Funds answer may still be "we don't really have anything to say, it seems best if you make this decision independently"), but I still don't think explicit coordination should be a strong default or norm.

One underlying and potentially controversial assumption I make is that more variance in funding decisions is good at the margin. This pushes toward more independent funders being good, reducing correlation between the decisions of different funders, etc. My view on this isn't resilient, and I think I remember that some thoughtful people disagree with that assumption.

MichaelA @ 2021-06-04T09:34 (+11)

Recently I've been thinking about improving the EA-aligned research pipeline, and I'd be interested in the fund managers' thoughts on that. Some specific questions (feel free to just answer one or two, or to say things about the general topic but not these questions):

  1. In What's wrong with the EA-aligned research pipeline?, I "briefly highlight[ed] some things that I (and I think many others) have observed or believe, which I think collectively demonstrate that the current processes by which new EA-aligned research and researchers are “produced” are at least somewhat insufficient, inefficient, and prone to error." Do those observations or beliefs ring true to you? Would you diagnose the "problem(s)" differently?
  2. More recently, I "briefly discuss[ed] 19 interventions that might improve [this] situation. I discuss[ed] them in very roughly descending order of how important, tractable, and neglected I think each intervention is, solely from the perspective of improving the EA-aligned research pipeline." Do you think any of those ideas seem especially great or terrible? Would your rank ordering be different to mine? 
  3. Do you think there are promising intervention options I omitted?

(No need to read more of those posts than you have the time and interest for. I expect you'd be able to come up with interesting thoughts on these questions without clicking any of those links, and definitely if you just read the summary sections without reading the rest of the posts.)

Buck @ 2021-06-05T19:08 (+36)

Re your 19 interventions, here are my quick takes on all of them

Creating, scaling, and/or improving EA-aligned research orgs

Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.

Creating, scaling, and/or improving EA-aligned research training programs

I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor others in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, e.g. by mentoring someone for an hour or two a week on projects that the mentor finds inside-view interesting (and therefore will be actually bought in to helping with). I think that in situations like this, it's very helpful for the mentor to be judged as Andrew Grove suggests: by the output of their organization plus the output of neighboring organizations under their influence. That is, they should treat it as one of their key goals that their research interns do things the mentor actually thinks are useful. I think that not having this goal makes it much more tempting for mentors to kind of snooze on the job and not really try to make the experience useful.

Increasing grantmaking capacity and/or improving grantmaking processes

Yeah this seems good if you can do it, but I don't think this is that much of the bottleneck on research. It doesn't take very much time to evaluate a grant for someone to do research compared to how much time it takes to mentor them.

My current unconfident position is that I am very enthusiastic about funding people to do research if they have someone who wants to mentor them and be held somewhat accountable for whether they do anything useful. And so I'd love to get more grant applications from people describing their research proposal and saying who their mentor is; I can make that grant in like two hours (30 mins to talk to the grantee, 30 mins to talk to the mentor, 60 mins overhead). If the grants are for 4 months, then I can spend five hours a week and do all the grantmaking for 40 people. This feels pretty leveraged to me and I am happy to spend that time, and therefore I don't feel much need to scale this up more.
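(As a rough check of the capacity arithmetic above, here is a back-of-the-envelope sketch; the only inputs are the ~2 hours of grantmaker time per grant and the 4-month grant length already stated, so the exact numbers are illustrative.)

```python
# Back-of-the-envelope check of the grantmaking-capacity claim above.
hours_per_grant = 0.5 + 0.5 + 1.0    # grantee call + mentor call + overhead, in hours
grant_length_weeks = 4 * 52 / 12     # 4 months is roughly 17.3 weeks
hours_per_week = 5

concurrent_grants = hours_per_week * grant_length_weeks / hours_per_grant
print(f"Grants supportable at any one time: ~{concurrent_grants:.0f}")  # ~43, i.e. roughly 40
```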

I think that grantmaking capacity is more of a bottleneck for things other than research output.

Scaling Effective Thesis, improving it, and/or creating new things sort-of like it

I don't immediately feel excited by this for longtermist research; I wouldn't be surprised if it's good for animal welfare stuff but I'm not qualified to judge. I think that most research areas relevant to longtermism require high context in order to contribute to, and I don't think that pushing people in the direction of good thesis topics is very likely to produce extremely useful research.

I'm not confident.

Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.

The post doesn't seem to exist yet so idk

Increasing and/or improving research by non-EAs on high-priority topics

I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff.

Creating a central, editable database to help people choose and do research projects

I feel pessimistic; I don't think that this is the bottleneck. I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much. This database is mostly helpful in the very-little-supervision world, and so doesn't seem like the key thing to work on.

Using Elicit (an automated research assistant tool) or a similar tool

I feel pessimistic, but idk maybe elicit is really amazing. (It seems at least pretty cool to me, but idk how useful it is.) Seems like if it's amazing we should expect it to be extremely commercially successful; I think I'll wait to see if I'm hearing people rave about it and then try it if so.

Forecasting the impact projects will have

I think this is worth doing to some extent, obviously; my guess is that EAs aren't as into forecasting as they should be (including me, unfortunately). I'd need to know your specific proposal in order to have more specific thoughts.

Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)

I think that facilitating junior researchers to connect with each other is somewhat good but doesn't seem as good as having them connect more with senior researchers somehow.

Improving the vetting of (potential) researchers, and/or better “sharing” that vetting

I'm into this. I designed a noticeable fraction of the Triplebyte interview at one point (and delivered it hundreds of times); I wonder whether I should try making up an EA interview.

Increasing and/or improving career advice and/or support with network-building

Seems cool. I think a major bottleneck here is people who are extremely extroverted, have lots of background, and are willing to spend a huge amount of time talking to a huge number of people. I think that the job "spend many hours a day talking, for 30 minutes each, to EAs who aren't as well connected as would be ideal, in the hope of answering their questions and connecting them to people and encouraging them" is not as good as what I'm currently doing with my time, but it feels like a tempting alternative.

I am excited for people trying to organize retreats where they invite a mix of highly-connected senior researchers and junior researchers to one place to talk about things. I would be excited to receive grant applications for things like this.

Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers

I'm not sure that this is better than providing funding to people, though it's worth considering. I'm worried that it has some bad selection effects, where the most promising people are more likely to have money that they can spend living in closer proximity to EA hubs (and are more likely to have other sources of funding) and so the cheapo EA accommodations end up filtering for people who aren't as promising.

Another way of putting this is that I think it's kind of unhealthy to have a bunch of people floating around trying unsuccessfully to get into EA research; I'd rather they tried to get funding to try it really hard for a while, and if it doesn't go well, they have a clean break from the attempt and then try to do one of the many other useful things they could do with their lives, rather than slowly giving up over the course of years and infecting everyone else with despair.

Creating and/or improving relevant educational materials

I'm not sure; seems worth people making some materials, but I'd think that we should mostly be relying on materials not produced by EAs

Creating, improving, and/or scaling market-like mechanisms for altruism

I am a total sucker for this stuff, and would love to make it happen; I don't think it's a very leveraged way of working on increasing the EA-aligned research pipeline though.

Increasing and/or improving the use of relevant online forums

Yeah I'm into this; I think that strong web developers should consider reaching out to LessWrong and saying "hey do you want to hire me to make your site better".

Increasing the number of EA-aligned aspiring/junior researchers

I think Ben Todd is wrong here. I think that the number of extremely promising junior researchers is totally a bottleneck and we totally have mentorship capacity for them. For example, I have twice run across undergrads at EA Global who I was immediately extremely impressed by and wanted to hire (they both did MIRI internships and have IMO very impactful roles (not at MIRI) now). I think that I would happily spend ten hours a week managing three more of these people, and the bottleneck here is just that I don't know many new people who are that talented (and to a lesser extent, who want to grow in the ways that align with my interests).

I think that increasing the number of people who are eg top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity. Though I'd still love to have more of these people. I think that I want people who are between 25th and 90th percentile intellectual promisingness among top schools to try first to acquire some specific and useful skill (like programming really well, or doing machine learning, or doing biology literature reviews, or clearly synthesizing disparate and confusing arguments), because they can learn these skills without needing as much mentorship from senior researchers and then they have more of a value proposition to those senior researchers later.

Increasing the amount of funding available for EA-aligned research(ers)

This seems almost entirely useless; I don't think this would help at all.

Discovering, writing, and/or promoting positive case studies

Seems like a good use of someone's time.

 

---------------

This was a pretty good list of suggestions. I guess my takeaways from this are:

  • I care a lot about access to mentorship
  • I think that people who are willing to talk to lots of new people are a scarce and valuable resource
  • I think that most of the good that can be done in this space looks a lot more like "do a long schlep" than "implement this one relatively cheap thing, like making a website for a database of projects".

Max_Daniel @ 2021-06-05T21:07 (+9)

I wonder whether I should try making up an EA interview

I would be enthusiastic about this. If you don't do it, I might try doing this myself at some point.

I would guess the main challenge is to get sufficient inter-rater reliability; i.e., if different interviewers used this interview to interview the same person (or if different raters watched the same recorded interview), how similar would their ratings be? 

I.e., I'm worried that the bottleneck might be something like "there are only very few people who are good at assessing other people" as opposed to "people typically use the wrong method to try to assess people".

MichaelA @ 2021-06-06T08:34 (+5)

(FWIW, at first glance, I'd also be enthusiastic about one of you trying this.)

Linch @ 2021-06-06T21:18 (+4)

I think that increasing the number of people who are eg top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity.

Sorry, minor confusion about this. By "top 25%," do you mean 75th percentile? Or are you encompassing the full range here?

MichaelA @ 2021-06-06T08:47 (+3)

Increasing the amount of funding available for EA-aligned research(ers)

This seems almost entirely useless; I don't think this would help at all.

I'm pretty surprised by the strength of that reaction. Some followups:

  1. How do you square that with the EA Funds (a) funding things that would increase the amount/quality/impact of EA-aligned research(ers), and (b) indicating in some places (e.g. here) the funds have room for more funding?
    • Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?
    • Do you disagree that the funds have room for more funding?
  2. Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?
  3. Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)? 

Buck @ 2021-06-06T17:14 (+14)

Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA the limiting factor on a grant to me usually feels more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it" than "is this grant good enough to be worth the money".

I think that the funds' room for more funding (RFMF) is only slightly real - I think that giving to the EAIF has some counterfactual impact but not very much, and the impact comes from slightly weird places. For example, I personally have access to EA funders who are basically always happy to fund things that I want them to fund. So being an EAIF fund manager doesn't really increase my ability to direct money at promising projects that I run across. (It's helpful to have the grant logistics people from CEA, though, which makes the EAIF grantmaking experience a bit nicer.) The advantages I get from being an EAIF fund manager are that EAIF seeks applications and so I get to make grants I wouldn't have otherwise known about, and also that Michelle, Max, and Jonas sometimes provide useful second opinions on grants.

And so I think that if you give to the EAIF, I do slightly more good via grantmaking. But the mechanism is definitely not via me having access to more money.

  • Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?

I think that it will be easier to increase our grantmaking for things other than supporting EA-aligned researchers with salaries, because this is almost entirely limited by how many strong candidates there are, and it seems hard to increase this directly with active grantmaking. In contrast, I feel more optimistic about doing active grantmaking to encourage retreats for researchers etc.

Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?

I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.

I think that increasing available funding basically won't help at all for causing interventions of the types you listed in your post--all of those are limited by factors other than funding.

(Non-longtermist EA is more funding constrained of course--there's enormous amounts of RFMF in GiveWell charities, and my impression is that farm animal welfare also could absorb a bunch of money.)

Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?

Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too). I think that research on effective giving is particularly useless because I think that projects differ widely in their value, and my impression is that effective giving is mostly going to get people to give to relatively bad giving opportunities.

High Impact Athletes is an EAIF grantee who I feel positive about; I am enthusiastic about them not because they might raise funds but because they might be able to get athletes to influence culture in various ways (e.g. influencing public feelings about animal agriculture, etc.). And so I think it makes sense for them to initially focus on fundraising, but that's not where I expect most of their value to come from.

I am willing to fund orgs that attempt to just do fundraising, if their multiplier on their expenses is pretty good, because marginal money has more than zero value and I'd rather we had twice as much money. But I think that working for such an org is unlikely to be very impactful.

Max_Daniel @ 2021-06-06T17:44 (+12)

I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.

At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B - especially when controlled by a single donor who can flexibly deploy it - comes from 'crazy' and somewhat unlikely-to-pan-out options. I.e., things like:

  • Building an "EA city" somewhere
  • Buying a majority of shares of some AI company (or of relevant hardware companies)
  • Being able to spend tens of billions of $ on compute, at a time when few other actors are willing to do so
  • Buying the New York Times
  • Being among the first actors settling Mars

(Tbc, I think most of these things would be kind of dumb or impossible as stated, and maybe a "realistic" additional donor wouldn't be open to such things. I'm just gesturing at the rough shape of things which I suspect might contain a lot of the expected value.)

Buck @ 2021-06-06T18:08 (+4)

I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending.

Incidentally, you can't buy the New York Times on public markets; you'd have to do a private deal with the family who runs it.

Max_Daniel @ 2021-06-06T18:23 (+2)

Hmm. Then I'm not sure I agree. When I think of prototypical example scenarios of "business as usual but with more total capital" I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principles-based 'utility function' I'd be surprised if it had returns that diminish much more strongly than logarithmic. (That's at least my initial intuition - not sure I could justify it.) And if it was logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, and I feel like the former adds clearly more than 20%.

(I guess there is also the question of what exactly we're assuming. E.g., should the fact that this additional $100B donor appears also make me more optimistic about the growth and ceiling of total longtermist-aligned capital going forward? If not, i.e. if I should compare the additional $100B to the net present expected value of all longtermist capital that will ever appear, then I'm much more inclined to agree with "business as usual + this extra capital adds much less than 20%". In this latter case, getting the $100B now might simply compress the period of growth of longtermist capital from a few years or decades to a second, or something like that.)
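(As a minimal numerical check of the logarithmic intuition above - treating log utility purely as an assumed toy form and using the dollar figures already mentioned:)

```python
import math

# With log utility, each 10x increase in capital adds the same utility increment.
step_1B_to_10B = math.log(10e9) - math.log(1e9)
step_10B_to_100B = math.log(100e9) - math.log(10e9)
print(round(step_1B_to_10B, 6), round(step_10B_to_100B, 6))  # both ~2.302585, i.e. ln(10)
```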

Max_Daniel @ 2021-06-06T18:35 (+2)

OK, on a second thought I think this argument doesn't work because it's basically double-counting: the reason why returns might not diminish much faster than logarithmic may be precisely that new, 'crazy' opportunities become available.

Jonas Vollmer @ 2021-06-06T20:17 (+7)

Here's a toy model:

  • A production function roughly along the lines of utility = funding ^ 0.2 * talent ^ 0.6 (this has diminishing returns to funding*talent, but the returns diminish slowly)
  • A default assumption that longtermism will eventually end up with $30-$300B in funding, let's assume $100B

Increasing the funding from $100B to $200B would then increase utility by 15%.
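(A minimal sketch of this toy model, taking the stated production function and the $100B baseline as given; talent is held fixed at an arbitrary level since only the funding ratio matters here:)

```python
def utility(funding, talent=1.0):
    """Toy production function with slowly diminishing returns: funding^0.2 * talent^0.6."""
    return funding ** 0.2 * talent ** 0.6

baseline = utility(100e9)  # default assumption: ~$100B of longtermist funding
doubled = utility(200e9)   # an additional $100B appears

print(f"Utility gain from doubling funding: {doubled / baseline - 1:.1%}")
# -> ~14.9%, i.e. roughly the 15% figure above, since 2 ** 0.2 is about 1.149
```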

Jonas Vollmer @ 2021-07-05T11:45 (+9)

> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?

Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too).

Just wanted to flag briefly that I personally disagree with this:

  • I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really well (i.e., match or beat Open Phil's last dollar), and are truly increasing overall resources*. I think that there's a high chance that more financial resources won't be helpful at all, but some small chance that they will be, so the EV is still weakly positive.
  • I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*.

* Some models/calculations that I've seen don't do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn't deserve 100% of the credit for the money raised – some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments).
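(To make that footnote concrete, here is a rough sketch of the kind of adjustment being described. All figures below - money raised, costs, counterfactual share - are made-up placeholders rather than estimates from the fund or any grantee.)

```python
# Illustrative only: adjusting a fundraising project's naive multiplier for
# talent cost, counterfactual credit, and a ~10-15% annual discount rate.
raw_money_raised = 1_000_000   # $ raised per year (hypothetical)
financial_cost = 200_000       # $ spent on the project per year (hypothetical)
talent_cost = 300_000          # opportunity cost of staff time, valued above market rates (hypothetical)
counterfactual_share = 0.6     # share of money raised that wouldn't have been donated otherwise (hypothetical)
discount_rate = 0.12           # within the ~10-15% range mentioned above

adjusted_benefit = raw_money_raised * counterfactual_share / (1 + discount_rate)
total_cost = financial_cost + talent_cost

print(f"Naive multiplier:    {raw_money_raised / financial_cost:.1f}x")   # 5.0x
print(f"Adjusted multiplier: {adjusted_benefit / total_cost:.1f}x")       # ~1.1x
```

The point of the sketch is just that the adjusted multiplier can end up far smaller than the naive "money raised / money spent" figure once those corrections are applied.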

I still somewhat share Buck's overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization and writing a thesis paper about donation behavior. I'd rather have spent my time learning about AI policy (or, if I was a neartermist, I might say e.g. charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love if EAs generally spent less time worrying about money and more about recruiting talent, improving the trajectory of the community, and solving the problems on the object level.

Overall, I want to continue funding good fundraising organizations.

Linch @ 2021-06-07T08:45 (+9)

I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.

I'm curious how much $s you and others think that longtermist EA has access to right now/will have access to in the near future. The 20% number seems like a noticeably weaker claim if longtermist EA currently has access to 100B than if we currently have access to 100M.

Max_Daniel @ 2021-06-07T22:09 (+14)

I actually think this is surprisingly non-straightforward. Any estimate of the net present value of total longtermist $$ will have considerable uncertainty because it's a combination of several things, many of which are highly uncertain:

  • How much longtermist $$ is there now?
    • This is the least uncertain one. It's not super straightforward and requires nonpublic knowledge about the wealth and goals of some large individual donors, but I'd be surprised if my estimate on this was off by 10x.
  • What will the financial returns on current longtermist $$ be before they're being spent?
    • Over long timescales, for some of that capital, this might be 'only' as volatile as the stock market or some other 'broad' index.
    • But for some share of that capital (as well as on shorter time scale) this will be absurdly volatile. Cf. the recent fortunes some EAs have made in crypto.
  • How much new longtermist $$ will come in at which times in the future?
    • This seems highly uncertain because it's probably very heavy-tailed. E.g., there may well be a single source that increases total capital by 2x or 10x. Naturally, predicting the timing of such a single event will be quite uncertain on a time scale of years or even decades.
  • What should the discount rate for longtermist $$ be?
    • Over the last year, someone who has thought about this quite a bit told me first that they had updated from 10% per year to 6%, and then a few months later back again. This is a difference of one order of magnitude for $$ coming in in 50 years. (See the quick check below this list.)
  • What counts as longtermist $$? If, e.g., the US government started spending billions on AI safety or biosecurity, most of which goes to things that from a longtermist EA perspective are kind of but not super useful, how would that count?
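(A quick check of the 'one order of magnitude' remark in the discount-rate bullet above - a minimal sketch that simply compares the present value of $1 arriving in 50 years at 6% vs. 10% annual discounting:)

```python
# Present value of $1 arriving in 50 years under two annual discount rates.
pv_at_6 = 1 / 1.06 ** 50
pv_at_10 = 1 / 1.10 ** 50
print(f"PV at 6%:  {pv_at_6:.4f}")          # ~0.0543
print(f"PV at 10%: {pv_at_10:.4f}")         # ~0.0085
print(f"Ratio: {pv_at_6 / pv_at_10:.1f}x")  # ~6.4x, i.e. roughly an order of magnitude
```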

I think for some narrow notion of roughly "longtermist $$ as 'aligned' as Open Phil's longtermist pot" my 80% credence interval for the net present value is $30B - $1 trillion. I'm super confused how to think about the upper end because the 90th percentile case is some super weird transformative AI future. Maybe I should instead say that my 50% credence interval is $20B - $200B.

Generally my view on this isn't that well considered and probably not that resilient.

MichaelA @ 2021-06-08T08:25 (+4)

Interesting, thanks.

... my 80% credence interval for the net present value is $30B - $1 trillion. I'm super confused how to think about the upper end because the 90th percentile case is some super weird transformative AI future. Maybe I should instead say that my 50% credence interval is $20B - $200B.' [emphases added]

Shouldn't your lower bound for the 50% interval be higher than for the 80% interval? Or is the second interval based on different assumptions, e.g. including/ruling out some AI stuff?

(Not sure this is an important question, given how much uncertainty there is in these numbers anyway.)

Max_Daniel @ 2021-06-08T10:16 (+8)

Shouldn't your lower bound for the 50% interval be higher than for the 80% interval?

If the intervals were centered - i.e., spanning the 10th to 90th and the 25th to 75th percentile, respectively - then it should be, yes.

I could now claim that I wasn't giving centered intervals, but I think what is really going on is that my estimates are not diachronically consistent even if I make them within 1 minute of each other.

Max_Daniel @ 2021-06-08T10:20 (+2)

I also now think that the lower end of the 80% interval should probably be more like $5-15B.

Max_Daniel @ 2021-06-06T18:13 (+6)

I think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective.

However, I suspect I'm (perhaps significantly) more optimistic than you about 'indirect' effects from promoting good content and advice on effective giving, promoting it as a 'social norm', etc. This is roughly because of the view I state under the first key uncertainty here, i.e., I suspect that encountering effective giving can for some people be a 'gateway' toward more impactful behaviors.

One issue is that I think the sign and absolute value of these indirect effects are not that well correlated with the proxy goals such organizations would optimize, e.g., amount of money raised. For example, I'd guess it's much better for these indirect effects if the org is also impressive intellectually or entrepreneurially; if it produces "evangelists" rather than just people who'll start giving 1% as a 'hobby', are quiet about it, and otherwise don't think much about it; if it engages in higher-bandwidth interactions with some of its audience; and if, in its communications, it at least sometimes mentions other potentially impactful behaviors.

So, e.g., GiveWell by these lights looks much better than REG, which in turn looks much better than, say, buying Facebook ads for AMF.

(I'm also quite uncertain about all of this. E.g., I wouldn't be shocked if after significant additional consideration I ended up thinking that the indirect effects of promoting effective giving - even in a 'good' way - were significantly net negative.)

Jonas Vollmer @ 2021-06-06T09:51 (+3)

When I said that the EAIF and LTFF have room for more funding, I didn't mean to say "EA research is funding-constrained" but "I think some of the abundant EA research funding should be allocated here." 

Saying "this particular pot has room for more funding" can be fully consistent with the overall ecosystem being saturated with funding.

Do you think increasing available funding wouldn't help with any EA stuff

I think it definitely helps a lot with neartermist interventions. I also think it still makes a substantial* difference in longtermism, including research – but the difference you can make through direct work is plausibly vastly greater (>10x greater).

* Substantial in the sense "if you calculate the expected impact, it'll be huge", not "substantial relative to the EA community's total impact."

MichaelA @ 2021-06-06T12:17 (+4)

When I said that the EAIF and LTFF have room for more funding, I didn't mean to say "EA research is funding-constrained" but "I think some of the abundant EA research funding should be allocated here." 

Ah, good point. So is your independent impression that the very large donors (e.g., Open Phil) are making a mistake by not multiplying the total funding allocated to EAIF and LTFF by (say) a factor of 0.5-5?

(I don't think that that is a logically necessary consequence of what you said, but seems like it could be a consequence of what you said + some plausible other premises.

I ask about the very large donors specifically because things you've said elsewhere already indicate you think smaller donors are indeed often making a mistake by not allocating more funding to EAIF and LTFF. But maybe I'm wrong about that.)

Jonas Vollmer @ 2021-06-06T18:09 (+7)

I don't think anyone has made any mistakes so far, but they would (in my view) be making a mistake if they didn't allocate more funding this year.

Edit:

you've said elsewhere already indicate you think smaller donors are indeed often making a mistake by not allocating more funding to EAIF and LTFF

Hmm, why do you think this? I don't remember having said that.

MichaelA @ 2021-06-06T18:44 (+5)

Hmm, why do you think this? I don't remember having said that.

Actually I now think I was just wrong about that, sorry. I had been going off of vague memories, but when I checked your post history now to try to work out what I was remembering, I realised it may have been my memory playing weird tricks based on your donor lottery post, which actually made almost the opposite claim. Specifically, you say "For this reason, we believe that a donor lottery is the most effective way for most smaller donors to give the majority of their donations, for those who feel comfortable with it." 

(Which implies you think that that's a more effective way for most smaller donors to give than giving to the EA Funds right away - rather than after winning a lottery and maybe ultimately deciding to give to the EA Funds.)

I think I may have been kind-of remembering what David Moss said as if it was your view, which is weird, since David was pushing against what you said. 

I've now struck out that part of my comment. 

MichaelA @ 2021-06-06T08:42 (+2)

FWIW, I agree that your concerns about "Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers" are well worth bearing in mind and that they make at least some versions of this intervention much less valuable or even net negative.

MichaelA @ 2021-06-06T08:41 (+2)

I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much.

I think I agree with this, though part of the aim for the database would be to help people find mentors (or people/resources that fill similar roles). But this wasn't described in the title of that section, and will be described in the post coming out in a few weeks, so I'll leave this topic there :)

MichaelA @ 2021-06-06T08:37 (+2)

Thanks for this detailed response! Lots of useful food for thought here, and I agree with much of what you say.

Regarding Effective Thesis:

  • I think I agree that "most research areas relevant to longtermism require high context in order to contribute to", at least given our current question lists and support options. 
    • I also think this is the main reason I'm currently useful as a researcher despite (a) having little formal background in the areas I work in and (b) there being a bunch of non-longtermist specialists who already work in roughly those areas.
  • On the other hand, it seems like we should be able to identify many crisp, useful questions that are relatively easy to delegate to people - particularly specialists - with less context, especially if accompanied with suggested resources, a mentor with more context, etc. 
    • E.g., there are presumably specific technical-ish questions related to pathogens, antivirals, climate modelling, or international relations that could be delegated to people with good subject area knowledge but less longtermist context.
    • I think in theory Effective Thesis or things like it could contribute to that
    • After writing that, I saw you said the following, so I think we mostly agree here: "I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff."
      • OTOH, in terms of examples of this happening, I think at least Luke Muehlhauser seems to believe some of this has happened for Open Phil's AI governance grantmaking (though I haven't looked into the details myself), based on this post: https://www.openphilanthropy.org/blog/ai-governance-grantmaking
  • But in any case, I don't see the main value proposition as the direct impact of the theses Effective Thesis guides people towards or through writing. I see  the main value propositions as (a) increasing the number of people who will go on to become more involved in an area, get more context on it, and do useful research in it later, and (b) making it easier for people who already have good context, priorities, etc. to find mentorship and other support
    • Rather than the direct value of the theses themselves
    • (Disclaimer: This is a quick, high-level description of my thoughts, without explaining all my related thoughts of re-reading Effective Thesis's strategy, impact assessment, etc.)

MichaelA @ 2021-06-06T12:02 (+9)

FYI, someone I know is interested in applying to the EAIF, and I told them about this post, and after reading it they replied "btw the Q&A responses at the EAIF were SUPER useful!"

I mention this as one small data point to help the EAIF decide whether it's worth doing such Ask Us Anythings (AUAs?) in future and how much time to spend on them. By extension, it also seems like (even weaker) evidence regarding how useful detailed grant writeups are.

Jonas Vollmer @ 2021-06-06T18:09 (+4)

Thanks, this is useful!

MichaelA @ 2021-06-04T09:00 (+9)

Some related questions with slightly different framings: 

  1. What crucial considerations and/or key uncertainties do you think the EAIF fund operates under?
  2. What types/lines of research do you expect would be particularly useful for informing the EAIF's funding decisions?
  3. Do you have thoughts on what types/lines of research would be particularly useful for informing other funders'  funding decisions in the "EA infrastructure" space?
  4. Do you have thoughts on how the answers to questions 2 and 3 might differ?

Max_Daniel @ 2021-06-05T14:17 (+24)

Some key uncertainties for me are: 

  • What products and clusters of ideas work as 'stepping stones' or 'gateways' toward (full-blown) EA [or similarly 'impactful' mindsets]?
    • By this I roughly mean: for various products X (e.g., a website providing charity evaluations, or a book, or ...), how does the unconditional probability P(A takes highly valuable EA-ish actions within their next few years) compare to the conditional probability P(A takes highly valuable EA-ish actions within their next few years | A now encounters X)?
    • I weakly suspect that me having different views on this than other fund managers was perhaps the largest source of significant disagreements with others.
    • It tentatively seems to me that I'm unusually optimistic about the range of products that work as stepping stones in this sense. That is, I worry less if products X are extremely high-quality or accurate in all respects, or agree with typical EA views or motivations in all respects. Instead, I'm more excited about increasing the reach of a wider range of products X that meet a high but different bar of roughly 'taking the goal of effectively improving the world seriously by making a sincere effort to improve on median do-gooding by applying evidence-based reasoning, and delivering results that are impressive and epistemically useful to someone previously only exposed to median do-gooding' - or at least conveying information, or embodying a style of reasoning about the world, that is important for such endeavours.
      • To give a random example, I might be fairly excited about assigning Steven Pinker's The Better Angels of Our Nature as reading in high school, even though I think some claims in this book are false and that there are important omissions.
      • To give a different example (and one we have discussed before), I'm fairly optimistic about the impact of journalistic pieces like this one, and would be very excited if more people were exposed to it. From an in-the-weeds research perspective, this article has a number of problems, but I basically don't care about them in this context.
      • Yet another example: I'm fairly glad that we have content on the contributions of different animal products to animal suffering (e.g. this or this) even though I think that for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now).
    • (Though in some other ways I might be more pessimistic / might have a higher bar for such content. E.g., I might care more about content being well written and engaging.)
    • I think my underlying inside-view model here is roughly:
      • One of the highest-leverage effects we can have on people is to 'dislodge' them from a state of complacency or fatalism about their ability to make a big positive contribution to the world.
      • To achieve this effect, it is often sufficient to expose people to examples of other people seriously trying to make a big positive contribution to the world while being guided by roughly the 'right' methods (e.g. scientific mindset), and doing so in a way that seems impressive to the person exposed.
        • It is helpful if these efforts are 'successful' by generic lights, e.g., produce well-received output.
        • It doesn't matter that much if a 'core EA' would think that, all things considered, these efforts are worthless or net negative because they're in the wrong cause area or miss some crucial consideration or whatever.
  • How can we structure the EA community in such a way that it can 'absorb' very large numbers of people while also improving the allocation of talent or other resources?
    • I am personally quite unsatisfied with many discussions and standard arguments around "how much should EA grow?" etc. In particular, I think the way to mitigate potential negative effects of too rapid or indiscriminate growth might not be "grow more slowly" or "have a community of uniformly extremely high capability levels" but instead: "structure the community in such a way that selection/screening and self-selection push toward a good allocation of people to different groups, careers, discussions, etc.".
      • ETA: Upon rereading, I worry that the above can be construed as being too indiscriminately negative about discussions on and efforts in EA community building. I think I'm mainly reporting my immediate reaction to a diffuse "vibe" I get from some conversations I remember, not to specific current efforts by people thinking and working on community building strategy full-time (I think often I simply don't have a great understanding of these people's views).
    • I find it instructive to compare the EA community to pure maths academia, and to large political parties.
      • Making research contributions to mature fields of pure maths is extremely hard and requires highly unusual levels of fluid intelligence compared to the general population. Academic careers in pure maths are extremely competitive (in terms of, e.g., the fraction of PhDs who'll become tenured professors). A majority of mathematicians will never make a breakthrough research contribution, and will never teach anyone who makes a breakthrough research contribution. But in my experience mathematicians put much less emphasis on only recruiting the very best students, or on only teaching maths to people who could make large contributions, or on worrying about diluting the discipline by growing too fast or ... And while perhaps in a sense they put "too little" weight on this, I also think they don't need to put as much weight on this because they can rely more on selection and self-selection: a large number of undergraduates start, but a significant fraction will just realize that maths isn't for them and drop out, ditto at later stages; conversely, the overall system has mechanisms to identify top talent and allocate it to the top schools etc.
        • Example: Srinivasa Ramanujan was, by some criteria, probably the most talented mathematician of the 20th century, if not more. It seems fairly clear that his short career was only possible because (1) he went to a school that taught everyone the basics of mathematics and (2) later he had access to (albeit perhaps 'mediocre') books on advanced mathematics: "In 1903, when he was 16, Ramanujan obtained from a friend a library copy of A Synopsis of Elementary Results in Pure and Applied Mathematics, G. S. Carr's collection of 5,000 theorems. Ramanujan reportedly studied the contents of the book in detail. The book is generally acknowledged as a key element in awakening his genius."
        • I'm not familiar with Carr, but the brevity of his Wikipedia article suggests that, while he taught at Cambridge, probably the only reason we remember Carr today is that he happened to write a book which happened to be available in some library in India.
        • Would someone like Carr have existed, and would he have written his Synopsis, if academic mathematics had had an EA-style culture of fixating on the small fraction of top contributors while neglecting to build a system that can absorb people with Carr-levels of talent, and that consequently can cast a 'wide net' that exposes very large numbers of people to mathematics and an opportunity to 'rise through its ranks'?
      • Similarly, only a very small number of people have even a shot at, say, becoming the next US president. But it would probably still be a mistake if all local branches of the Democratic and Republican parties adopted an 'elitist' approach to recruitment and obsessed about only recruiting people with unusually good ex-ante chances of becoming the next president.
      • So it seems that even though these other 'communities' also face, along some metrics, very heavy-tailed ex-post impacts, they adopt a fairly different approach to growth, how large they should be, etc. - and are generally less uniformly and less overtly "elitist". Why is that? Maybe there are differences between these communities that mean their approaches can't work for EA.
        • E.g., perhaps maths relies crucially on there being a consensus of what important research questions are plus it being easy to verify what counts as their solution, as well as generally a better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too 'preparadigmatic' to allow for something like that.
        • Perhaps the key differences for political parties are that they have higher demand for 'non-elite' talent (e.g., people doing politics at a local level), and the general structural feature that in democracies there are incentives to popularize one's views to large fractions of the general population.
        • But is that it? And is it all? I'm worried that we gave up too early, and that if we tried harder we'd find a way to create structures that can accommodate both higher growth and improve the allocation of talent (which doesn't seem great anyway) within the community, despite these structural challenges.
  • How large are the returns on expected lifetime impact as we move someone from "hasn't heard of EA at all" toward "is maximally dedicated and believes all kinds of highly specific EA claims including about, e.g., top cause areas or career priority paths"?
    • E.g., very crudely, suppose I can either cause N people to move from '1% EA' to '10% EA' or 1 person from '50% EA' to '80% EA'. For which value of N should I be indifferent?
      • This is of course oversimplified - there isn't a single dimension of EA-ness. I still feel that questions roughly like this one come up relatively often.
    • A related question roughly is: if we can only transmit, say, 10% of the 'full EA package', what are the most valuable 10%? A pitch for AMF? The Astronomical Waste argument? Basic reasons for why to care about effectiveness when doing good? The basic case for worrying about AI risk?  Etc.
    • Note that it could turn out that moving people 'too far' can be bad - e.g., if common EA claims about top cause areas or careers were wrong, and we were transmitting only current 'answers' to people without giving them the ability to update these answers when it would be appropriate.
  • Should we fund people or projects? I.e., to what extent should we provide funding that is 'restricted' to specific projects or plans versus, at least for some people, give them funding to do whatever they want? If the latter, what are the criteria for identifying people for whom this is viable?
    • This is of course a spectrum, and the literal extreme of "give A money to do whatever they want" will very rarely seem correct to me.
    • It seems to me that I'm more willing than some others to move more toward 'funding people', and that when evaluating both people and projects I care less about current "EA alignment" and less about the direct, immediate impact - and more about things like "will providing funding to do X cause the grantee to engage with interesting ideas and make valuable learning experiences".
  • How can we move both individual grantees as well as the community as a whole more toward an 'abundance mindset' as opposed to 'scarcity mindset'?
    • This is a pretty complex topic, and an area that is delicate to navigate. As EA has perhaps witnessed in the past, naive ways of trying to encourage an "abundance mindset" can lead to a mismatch of expectations (e.g., people expecting that the bar for getting funding is lower than it in fact is), negative effects from poorly implemented or badly coordinated new projects, etc. - I also think there are other reasons for caution against 'being too cavalier with money', e.g., it can lead to a lack of accountability.
    • Nevertheless, I think it would be good if more EAs internalized just how much total funding/capital there would be available if only we could find robustly good ways to deploy it at large scale. I don't have a great solution, and my thoughts on the general issue are in flux, but, e.g., I personally tentatively think that on the margin we should be more willing to provide larger upfront amounts of funding to people who seem highly capable and want to start ambitious projects.
    • I think traditional "nonprofit culture" unfortunately is extremely unhelpful here b/c it encourages risk aversion, excessive weight on saving money, etc. - Similarly, it is probably not helpful that a lot of EAs happen to be students or have otherwise mostly experienced money being a relatively scarce resource in their personal lives.

MichaelA @ 2021-06-05T15:22 (+4)

Your points about "How can we structure the EA community in such a way that it can 'absorb' very large numbers of people while also improving the allocation of talent or other resources?" are perhaps particularly thought-provoking for me. I think I find your points less convincing/substantive than you do, but I hadn't thought about them before and I think they do warrant more thought/discussion/research.

On this, readers may find the value of movement growth entry/tag interesting. (I've also made a suggestion on the Discussion page for a future editor to try to incorporate parts from your comment into that entry.)

Here are some quick gestures at the reasons why I think I'm less convinced by your points than you. But I don't actually know my overall stance on how quickly, how large, and simply how the EA movement should grow. And I expect you've considered things like this already - this is maybe more for the readers' benefit, or something.

  • As you say, "perhaps maths relies crucially on there being a consensus of what important research questions are plus it being easy to verify what counts as their solution, as well as generally a better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too 'preparadigmatic' to allow for something like that."
  • I think we currently have a quite remarkable degree of trust, helpfulness, and coordination. I think that that becomes harder as a movement grows, and particularly if it grows in certain ways
    • E.g., if we threw 2000 additional randomly chosen people into an EA conference, it would no longer make sense for me to spend lots of time having indiscriminate 1-1 chats where I give career advice (currently I spend a fair amount of time doing things like this, which reduces how much time I have for other useful things). I'd either have to stop doing that or find some way of "screening" people for it, which could impose costs and awkwardness on both parties
  • Currently we have the option of either growing more, faster, or differently in future, or not doing so. But certain growth strategies/outcomes would be hard-to-reverse, which would destroy option value
    • You say "I'm worried that we gave up too early", but I don't think we've come to a final stance on how, how fast, and how large the movement should grow; we're just not now pushing for certain types or speeds of growth
      • We can push for it later
      • (Of course, there are also various costs to delaying our growth)

Max_Daniel @ 2021-06-05T16:12 (+2)

I mean I'm not sure how convinced I am by my points either. :) I think I mainly have a reaction of "some discussions I've seen seem kind of off, rely on flawed assumptions or false dichotomies, etc." - but even if that's right, I feel way less sure what the best conclusion is.

One quick reply:

I think we currently have a quite remarkable degree of trust, helpfulness, and coordination. I think that that becomes harder as a movement grows, and particularly if it grows in certain ways

I think the "particularly if it grows in certain ways" is the key part here, and that basically we should talk 90% about how to grow and 10% about how much to grow.

I think one of my complaints is precisely that some discussions seem to construe suggestions of growing faster, or aiming for a larger community, as implying "adding 2,000 random people to EAG". But to me this seems to be a bizarre strawman. If you add 2,000 random people to a maths conference, or drop them into a maths lecture, it will be a disaster as well!

I think the key question is not "what if we make everything we have bigger?" but "can we build a structure that allows separation between different subcommunities, as well as a controlled flow of talent and other resources between them?".

A somewhat grandiose analogy: Suppose that at the dawn of the agricultural revolution you're a central planner tasked with maximizing the human population. You realize that by introducing agriculture, much larger populations could be supported as far as the food supply goes. But then you realize that if you imagine larger population densities and group sizes while leaving everything else fixed, various things will break - e.g., kinship-based conflict resolution mechanisms will become infeasible. What should you do? You shouldn't conclude that, unfortunately, the population can't grow. You should think about division of labor, institutions, laws, taxes, cities, and the state.

MichaelA @ 2021-06-05T17:16 (+4)

(Yeah, this seems reasonable. 

FWIW, I used "if we threw 2000 additional randomly chosen people into an EA conference" as an example precisely because it's particularly easy to explain/see the issue in that case. I agree that many other cases wouldn't just be clearly problematic, and thus I avoided them when wanting a quick example. And I can now see how that example therefore seems straw-man-y.)

Greg_Colbourn @ 2021-11-07T14:54 (+2)

"can we build a structure that allows separation between, and controlled flow of talent and other resources, different subcommunities?"

Interesting discussion. What if there was a separate brand for a mass movement version of EA?

MichaelA @ 2021-06-05T15:11 (+4)

Thanks! This is really interesting.

Minor point: I think it may have been slightly better to make a separate comment for each of your top-level bullet points, since they are each fairly distinct, fairly substantive, and could warrant specific replies.

MichaelA @ 2021-06-05T15:12 (+2)

[The following comment is a tangent/nit-pick, and doesn't detract from your actual overall point.]

Yet another example: I'm fairly glad that we have content on the contributions of different animal products to animal suffering (e.g. this or this) even though I think that for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now).

I agree that that sort of content seems useful, and also that "for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now)". But I think the "even though" doesn't quite make sense: I think part of the target audience for at least the Tomasik article was probably also people who might use their donations or careers to reduce animal suffering. And that's more plausibly the best way for them to help farmed animals now, and such people would also benefit from analyses of the contributions of different animal products to animal suffering.

(But I'd guess that that would be less true for Galef's article, due to having a less targeted audience. That said, I haven't actually read either of these specific articles.)

Max_Daniel @ 2021-06-06T11:37 (+4)

(Ah yeah, good point. I agree that the "even though" is a bit off because of the things you say.)

MichaelA @ 2021-06-05T15:11 (+2)

To give a different example (and one we have discussed before), I'm fairly optimistic about the impact of journalistic pieces like this one, and would be very excited if more people were exposed to it. From an in-the-weeds research perspective, this article has a number of problems, but I basically don't care about them in this context.

In case any readers are interested, they can see my thoughts on that piece here: Quick thoughts on Kelsey Piper's article "Is climate change an “existential threat” — or just a catastrophic one?"

Personally, I currently feel unsure whether it'd be very positive, somewhat positive, neutral, or somewhat negative for people to be exposed to that piece or pieces like it. But I think this just pushes in favour of your overall point that "What products and cluster of ideas work as 'stepping stones' or 'gateways' toward (full-blown) EA [or similarly 'impactful' mindsets]?" is a key uncertainty and that more clarity on that would be useful.

(I should also note that, in general, I think Kelsey's work is of remarkably high quality, especially considering the pace at which she's producing it, and I'm very glad she's doing the work she's doing.)

Michelle_Hutchinson @ 2021-06-04T17:03 (+7)

Here are a few things: 

  • What proportion of the general population might fully buy in to EA principles if they came across them in the right way, and what proportion of people might buy in to some limited version (eg become happy to donate to evidence backed global poverty interventions)? I’ve been pretty surprised how much traction ‘EA’ as an overall concept has gotten. Whereas I’ve maybe been negatively surprised by some limited version of EA not getting more traction than it has. These questions would influence how excited I am about wide outreach, and about how much I think it should be optimising for transmitting a large number of ideas vs simply giving people an easy way to donate to great global development charities.
  • How much and in which cases research is translated into action. I have a hypothesis that it’s often pretty hard to translate research into action. Even in cases where someone is deliberating between actions and someone else in another corner of the community is researching a relevant consideration, I think it’s difficult to bring these together. I think maybe that inclines me towards funding more ‘getting things done’ and less research than I might naturally be tempted to. (Though I’m probably pretty far on the ‘do more research’ side to start with.) It also inclines me to fund things that might seem like good candidates for translating research into action.
  • How useful influencing academia is. On the one hand, there are a huge number of smart people in academia, who would like to spend their careers finding out the truth. Influencing them towards prioritising research based on impact seems like it could be really fruitful. On the other hand, it’s really hard to make it in academia, and there are strong incentives in place there, which don’t point towards impact. So maybe it would be more impactful for us to encourage people who want to do impactful work to leave academia and be able to focus their research purely on impact. Currently the fund managers have somewhat different intuitions on this question.
MichaelA @ 2021-06-04T17:20 (+2)

Interesting, thanks. (And all the other answers here have been really interesting too!)

What proportion of the general population might fully buy in to EA principles if they came across them in the right way, and what proportion of people might buy in to some limited version (eg become happy to donate to evidence backed global poverty interventions)?

Is what you have in mind the sort of thing the "awareness-inclination model" in How valuable is movement growth? was aiming to get at? Like further theorising and (especially?) empirical research along the lines of that model, maybe breaking things down further into particular bundles of EA ideas, particular populations, particular ways of introducing the ideas, etc.?

MichaelA @ 2021-06-04T08:47 (+9)

The Long-Term Future Fund put together a doc on "How does the Long-Term Future Fund choose what grants to make?" How, if at all, does the EAIF's process for choosing what grants to make differ from that? Do you have or plan to make a similar outline of your decision process?

Jonas Vollmer @ 2021-06-04T14:53 (+10)

We recently transferred a lot of the 'best practices' that each fund (especially the LTFF) discovered to all the other funds, and as a result, I think it's very similar and there are at most minor differences at this point.

Neel Nanda @ 2021-06-06T21:37 (+5)

What were the most important practices you transferred?

Jonas Vollmer @ 2021-06-14T09:19 (+23)
  • Having an application form that asks some more detailed questions (e.g., path to impact of the project, CV/resume, names of the people involved with the organization applying, confidential information)
  • Having a primary investigator for each grant (who gets additional input from 1-3 other fund managers), rather than having everyone review all grants
  • Using score voting with a threshold (rather than ordering grants by expected impact, then spending however much money we have); see the illustrative sketch after this list
  • Explicitly considering giving applicants more money than they applied for
  • Offering feedback to applicants under certain conditions (if we feel like we have particularly useful thoughts to share with them, or they received an unusually high score in our internal voting)
  • Asking for references in the first stage of the application form, but without requiring applicants to clear them ahead of time (so it's low-effort for them, but we already know who the references would be)
  • Having an automatically generated google doc for each application that contains all the information related to a particular grant (original application, evaluation, internal discussion, references, applicant emails, etc.)
  • Writing in-depth payout reports to build trust and help improve community epistemics; writing shorter, lower-effort payout reports once that's done and we want to save time
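
For readers unfamiliar with the term, here is a minimal sketch of what "score voting with a threshold" could look like in practice. This is purely illustrative: the scoring scale, the threshold, and the numbers are made up and are not EA Funds' actual system.

```python
# Illustrative sketch of score voting with a funding threshold.
# Each fund manager scores each application; any application whose
# average score clears the threshold is funded, in contrast to
# ranking grants by expected impact and funding down a fixed budget.

THRESHOLD = 2.0  # hypothetical cutoff on a -5..+5 scale

scores = {
    "Grant A": [4, 3, 5],    # per-manager scores (entirely made up)
    "Grant B": [1, 2, -1],
    "Grant C": [3, 2, 2],
}

for grant, votes in scores.items():
    avg = sum(votes) / len(votes)
    decision = "fund" if avg >= THRESHOLD else "decline"
    print(f"{grant}: average score {avg:.1f} -> {decision}")
```

One consequence of this design is that the number of grants funded is driven by how many applications clear the bar, rather than by a pre-set budget.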
BrianTan @ 2021-06-04T09:37 (+2)

I think you meant EAIF, not AWF :)

MichaelA @ 2021-06-04T10:04 (+2)

(Ah yes, thanks, fixed. This was a casualty of copy-pasting a bunch of questions over from other AMAs.)

Linch @ 2021-06-08T00:11 (+7)

As a different phrasing of Michael's question on forecasting, do EAIF grantmakers have implicit distributions of possible outcomes in their minds when making a grant, either a) in general, or b) for specific grants? 

If so, what shape do those distributions (usually) look like? (an example of what I mean is "~log-normal minus a constant" or "90% of the time, ~0, 10% of the time, ~power law")

If not, are your approaches usually more quantitative (eg explicit cost-effectiveness models) or more qualitative/intuitive (eg more heuristic-based and verbal-argument driven)?
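
To make the shapes mentioned above concrete, here is a minimal simulation sketch of the two candidate distributions named in the question ("~log-normal minus a constant" and "90% of the time ~0, 10% of the time ~power law"). It is purely illustrative; all parameters are made up rather than anything the fund managers have stated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Shape 1: log-normal minus a constant -- most outcomes are modest,
# a few are very large, and the shift allows net-negative outcomes.
lognormal_shifted = rng.lognormal(mean=0.0, sigma=1.5, size=n) - 1.0

# Shape 2: zero-inflated power law -- ~90% of grants have negligible
# impact, ~10% are draws from a heavy-tailed (Pareto) distribution.
is_hit = rng.random(n) < 0.10
zero_inflated_pareto = np.where(is_hit, rng.pareto(1.5, size=n) + 1.0, 0.0)

for name, samples in [("log-normal minus constant", lognormal_shifted),
                      ("zero-inflated power law", zero_inflated_pareto)]:
    print(f"{name}: mean={samples.mean():.2f}, "
          f"median={np.median(samples):.2f}, "
          f"p99={np.percentile(samples, 99):.2f}")
```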

 

Max_Daniel @ 2021-06-08T00:30 (+4)

I think I often have an implicit intuition about something like "how heavy-tailed is this grant?". But I also think most grants I'm excited about are either at least somewhat heavy-tailed or aimed at generating information for a decision about a (potentially heavy-tailed) future grant, so this selection effect will reduce differences between grants along that dimension.

But I think for fewer than 1 in 10 of the grants I think about, I will have any explicit quantitative specification of the distribution in mind. (And if I do, it will be rougher than a full distribution, e.g. a single "x% chance of no impact" intuition.)

Generally I think our approaches are more often qualitative/intuitive than quantitative. There are rare exceptions: e.g., for the children's book grant I made a crappy cost-effectiveness back-of-the-envelope calculation just to check whether the grant seemed like a non-starter on that basis. As far as I remember, that was the only such case this round.

Sometimes we will discuss specific quantitative figures, e.g., the amount of donations a fundraising org might raise within a year. But our approach for determining these figures will then in turn usually be qualitative/intuitive rather than based on a full-blown quantitative model.

Jonas Vollmer @ 2021-07-06T17:03 (+5)

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

MichaelA @ 2021-06-04T08:50 (+5)

Have you considered providing small pools of money to people who express potential interest in trying out grantmaking and who you have some reason to believe might be good at it? This could be people the fund managers already know well, people who narrowly missed out on being appointed as full fund managers, or people who go through a short application process for these small pools specifically.

Potential benefits:

Possible downsides:

(Really those last two points are "reasons the benefits may be small", rather than "downsides".)

(To be clear, I'm not necessarily saying I think you should do this.)

Jonas Vollmer @ 2021-06-04T15:02 (+9)

I have a pretty strong view that I don't fully trust any single person's judgment (including my own), and that aggregating judgments (through discussion and voting) has been super helpful for the EAIF's, Animal Welfare Fund's (AWF's), and especially the Long-Term Future Fund's (LTFF's) overall judgment ability in the past. E.g., I can recall a bunch of (in my view) net-negative grants that didn't end up being made thanks to this sort of aggregation, and also some that did end up happening – where it ultimately turned out that I was wrong.

I have also heard through the grapevine that previous experiments in this direction didn't go very well (mostly in that the 'potential benefits' you listed didn't really materialize; I don't think anything bad happened). Edit: I don't give a lot of weight to this though; I think perhaps there's a model that works better than what has been tried in the past.

I also think that having more discussion between grantmakers seems useful for improving judgment over the longer term. I think the LTFF partly has good judgment because it has discussed a lot of disagreements that generalize to other cases, has exchanged a lot of models/gears, etc.

For this reason, I'm fairly skeptical of any approach that gives a single person full discretion over some funding, and would prefer a process with more engagement with a broader range of opinions of other grantmakers. (Edit: Though others disagree somewhat, and will hopefully share their views as well.)

Our current solution is to appoint guest managers instead, as elaborated on here: https://forum.effectivealtruism.org/posts/ek5ZctFxwh4QFigN7/ea-funds-has-appointed-new-fund-managers

Appointing guest managers takes quite a lot of time, so I'm not sure how many we will have in the future.

Another idea that I think would be interesting is to implement your suggestion with teams of potential grantmakers (rather than individuals), like the Oxford Prioritisation Project. Again it would take some capacity to oversee, but could be quite promising. If someone applied for a grant for a project like this, I'd be quite interested in funding it.

Buck @ 2021-06-04T17:43 (+8)

I don't think this has much of an advantage over other related things that I do, like

  • telling people that they should definitely tell me if they know about projects that they think I should fund, and asking them why
  • asking people for their thoughts on grant applications that I've been given
  • asking people for ideas for active grantmaking strategies
Adam_Scholl @ 2021-06-05T08:10 (+4)

At one point an EA fund manager told me something like, "the infrastructure fund refuses to support anything involving rationality/rationalists as a policy." Did a policy like this exist? Does it still?

Buck @ 2021-06-05T19:18 (+9)

Like Max, I don't know about such a policy. I'd be very excited to fund promising projects to support the rationality community, eg funding local LessWrong/Astral Codex Ten groups.

Max_Daniel @ 2021-06-05T17:09 (+7)

I'm not aware of any such policy, which means that functionally it didn't exist for this round.

I don't know what policies may have existed before I joined the EAIF, and generally don't have much information about how previous fund managers made decisions. FWIW, I find it hard to believe that there was a policy like the one you suggest, at least for broad construals of 'anything involving'. For instance, I would guess that some staff members working for organizations that were funded by the EAIF in previous rounds might identify as rationalists, and so if this counted as "something involving rationalists", previous grants would be inconsistent with that policy.

It sounds more plausible to me that perhaps previous EAIF managers agreed not to fund projects that primarily aim to build the rationality community or promote standard rationality content and don't have a direct connection to the EA community or EA goals. (But again, I don't know if that was the case.)

Speaking personally, and as is evident from some grants we made this round (e.g. this one), I'm generally fairly open to funding things that don't have an "EA" branding and that contribute to "improving the work of projects that use the principles of effective altruism" (cf. official fund scope) in a rather indirect way. (See also some related thoughts in a different AMA answer.) Standard rationality/LessWrong content is not among the non-EA-branded things I'm generally most excited to promote, but I would still consider applications to that effect on a case-by-case basis rather than deciding based on a blanket policy. In addition, other fund managers might be more generically positive about promoting rationality content or building the rationality community than I am.

MichaelA @ 2021-06-04T08:46 (+4)

In the Animal Welfare Fund AMA, I asked: 

Have you considered sometimes producing longer write-ups that somewhat extensively detail the arguments you saw for and against giving to a particular funding opportunity? (Perhaps just for larger grants.)

This could provide an additional dose of the kind of benefits already provided by the current payout reports, as well as some of the benefits that having an additional animal welfare charity evaluator would provide. (Obviously there's already ACE in this space, but these write-ups could focus on funding opportunities they haven't got a write-up on, or this could simply provide an additional perspective.)

A similar idea would be to sometimes investigate a smaller funding opportunity or set of opportunities in detail as a sort of exemplar of a certain type of funding opportunity, and produce a write-up on that. Or to do things more explicitly like intervention reports or cause area reports.

Some of this probably isn't the Animal Welfare Fund's comparative advantage, but perhaps it'd be interesting to experiment with the first option, as that could mostly just use the reasoning and discussions you already have internally when making decisions?

But I think the recent EAIF report already had longer write-ups than the Animal Welfare Fund reports tend to have, and in particular Max's write-ups seemed to provide a fair amount of detail on his thinking on various issues related to the grants that were made. So the question is less applicable here.

But I'm still interested in your thoughts on this kind of thing, such as:

  1. Do you think in future you'll continue to provide write-ups as detailed as those in the recent report? 
  2. What about increasing the average level of detail, e.g. making the average write-up similarly detailed to Max's write-ups?
  3. What are your thoughts on the pros and cons of that?
  4. Do you think you might in future consciously aim to produce write-ups that serve more of the role that would be served by a write-up about a certain type of funding opportunity, or an intervention report, or a cause area report?
Jonas Vollmer @ 2021-06-04T15:34 (+14)

My take on this (others at the EAIF may disagree and may convince me otherwise):

I think EA Funds should be spending less time on detailed reports, as they're not read by that many people. Also, a main benefit is people improving their thinking based on reading them (it seems helpful for improving one's judgment ability to be able to read very concrete practical decisions and how they were reached), but there are many such reports already at this point, such that writing further ones doesn't help that much – readers can simply go back to past reports and read those instead. I think EA Funds should produce such detailed reports every 1-2 years (especially when new fund managers come on board, so interested donors can get a sense of their thinking), and otherwise focus more on active grantmaking.

In addition, I think it would make sense for us to publish reports on whichever topic seems most important to us to communicate about – perhaps an intervention report, perhaps an important but underappreciated consideration, or a cause area. I think this should probably happen on an ad-hoc basis.

Max_Daniel @ 2021-06-04T16:14 (+2)

While I produced a number of detailed reports for this round, I agree with this.

Buck @ 2021-06-04T17:47 (+10)

re 1: I expect to write similarly detailed writeups in future.

re 2: I think that would take a bunch more of my time and not clearly be worth it, so it seems unlikely that I'll do it by default. (Someone could try to pay me at a high rate to write longer grant reports, if they thought that this was much more valuable than I did.)

re 3: I agree with everyone that there are many pros of writing more detailed grant reports (and these pros are a lot of why I am fine with writing grant reports as long as the ones I wrote). By far the biggest con is that it takes more time. The secondary con is that if I wrote more detailed grant reports, I'd have to be a bit clearer about the advantages and disadvantages of the grants we made, and this would involve me having to be clearer about kind of awkward things (like my detailed thoughts on how promising person X is vs person Y); this would be a pain, because I'd have to try hard to write these sentences in inoffensive ways, which is a lot more time consuming and less fun.

re 4: Yes I think this is a good idea, and I tried to do that a little bit in my writeup about YouTubers; I think I might do it more in future.

Michelle_Hutchinson @ 2021-06-04T15:38 (+10)

Speaking for myself, I'm interested in increasing the detail in my write-ups a little over the medium term (perhaps making them typically closer to the length of the write-up for Stefan Schubert). I doubt I'll go all the way to making them as comprehensive as Max's.
Pros:

  • Particularly useful for donors to the fund and potential applicants to get to know the reasoning processes of grant makers when we've just joined and haven't yet made many grants
  • Getting feedback from others on what parts of my reasoning process in making grants seem better and worse seems more likely to be useful than simply feedback on 'this grant was one I would / wouldn't have made' 

Cons:

  • Time writing reports trades against time evaluating grants. The latter seems more important to me at the current margin. That's partly because I'd have liked to have decidedly more time than I had for evaluating grants and perhaps for seeking out people I think would make good grantees.
  • I find it hard to write up grants in great detail in a way that's fully accurate and balanced without giving grantees public negative feedback. I'm hesitant to do much of that, and when I do it, want to do it very sensitively.

I expect to try to include considerations in my write-ups of the kind that might be found in write-ups about particular types of funding opportunity. I don't expect to produce the kind of lengthy write-ups that come to mind when you mention reports.

I would guess that the length of my write ups going forward will depend on various things, including how much impact they seem to be having (eg how much useful feedback I get from them that informs my thinking, and how useful people seem to be finding them in deciding what projects to do / whether to apply to the fund etc).

Max_Daniel @ 2021-06-04T17:07 (+5)

While I'm not sure I'll produce similarly long write-ups in the future, FWIW for me some of the pros of long writeups are:

  • It helps me think and clarify my own views.
  • I would often find it more time-consuming to produce a brief writeup, except perhaps for writeups that have a radically more limited scope - e.g., just describing what the grant "buys", but not saying anything about my reasoning for why I thought the grant was worth making.
MichaelA @ 2021-06-04T08:39 (+3)
  1. What processes do you have for monitoring the outcome/impact of grants?
  2. Relatedly, do the EAIF fund managers make forecasts about potential outcomes of grants?
    • I saw and appreciated that Ben Kuhn made a forecast related to the Giving Green grant.
    • I'm interested in whether other fund managers are making such forecasts and just not sharing them in the writeup or are just not making them - both of which are potentially reasonable options.
  3. And/or do you write down in advance what sort of proxies you'd want to see from this grant after x amount of time?
    • E.g., what you'd want to see to feel that this had been a big success and that similar grant applications should be viewed (even) more positively in future, or that it would be worth renewing the grant if the grantee applied again.
    • (In the May grant recommendation report, it seems that Buck and Max shared such proxies for many grants (but not for all of them), that Ben did so for his one writeup, and that Michelle didn't. But maybe in some cases these proxies have been written down but just not shared in the report.)

(I ask this because I imagine that such forecasts and writing down of proxies could help improve decision-making both by providing another framing for thinking about whether a grant is worthwhile, and by tracking what did and didn’t go as expected in order to better train your judgement for future evaluations.

I'm adapting these questions from a thread in a Long-Term Future Fund AMA and another in the Animal Welfare Fund AMA, and those threads also contained some interesting discussion that might be somewhat relevant here too.)

Buck @ 2021-06-05T19:29 (+2)

I am planning on checking in with grantees to see how well they've done, mostly so that I can learn more about grantmaking and to know if we ought to renew funding.

I normally didn't make specific forecasts about the outcomes of grants, because operationalization is hard and scary.

I feel vaguely guilty about not trying harder to write down these proxies ahead of time. But empirically I don't, and my intuitions apparently don't feel that optimistic about working on this. I am not sure why. I think it's maybe just that operationalization is super hard and I feel like I'm going to have to spend more effort figuring out reasonable proxies than actually thinking about the question of whether this grant will be good, and so I feel drawn to a more "I'll know it when I see it" approach to evaluating my past grants.