EA Infrastructure Fund: May 2021 grant recommendations

By Jonas V, Max_Daniel, Buck, Michelle_Hutchinson, Ben Kuhn @ 2021-06-03T01:01 (+92)

Introduction

Highlights

Our grants include:

Grant recipients

Grants made during this grant application round:

Grant reports

Note: Many of the grant reports below are very detailed. If you are considering applying to the fund, but prefer a less detailed report, simply let us know in the application form. We are sympathetic to that preference and happy to take it into account appropriately. Detailed reports are not mandatory.

We run all of our payout reports by grantees, and we think carefully about what information to include to maximize transparency while respecting grantees’ preferences. If considerations around reporting make it difficult for us to fund a request, we are able to refer applicants to private donors whose grants needn’t involve public reporting. We are also able to make anonymous grants.

Grant reports by Buck Shlegeris

Emma Abele, James Aung, Bella Forristal, Henry Sleight ($84,000)

12 months' salary for a 3-FTE team developing resources and programs to encourage university students to pursue highly impactful careers

This grant is the main source of funding for Emma Abele, James Aung, Bella Forristal, and Henry Sleight to work together on various projects related to EA student groups. The grant mostly pays for their salaries. Emma and James will be full-time, while Bella and Henry will be half-time, thus totalling 3 FTE.

The main reason I’m excited for this grant is that I think Emma and James are energetic and entrepreneurial, and I think they might do a good job of choosing and executing on projects that will improve the quality of EA student groups. Emma and James have each previously run EA university groups (Brown and Oxford, respectively) that seem generally understood to be among the most successful such groups. James co-developed the EA Student Career Mentoring program, and Emma ran an intercollegiate EA projects program. I’ve been impressed with their judgment when talking to them about what kinds of projects in this space might produce value, and they have a good reputation among people I’ve talked to.

I think it would be great if EA had a thriving ecosystem of different projects which are trying to provide high-quality services and products to people who are running student groups, e.g.:

CEA is working on providing support like this to university groups (e.g., they’re hiring for this role, which I think might be really impactful). But I think that it’s so important to get this right that we should be trying many different projects in this space simultaneously. James and Emma have a strong vision for what it would be like for student groups to be better, and I’m excited to have them try to pursue these ideas for a year.

In order to evaluate an application for grant renewal, I’ll try to determine whether people who run student groups think that this team was particularly helpful for them, and I’ll also try to evaluate content they produce to see if it seems high-quality.

Emma Abele, James Aung ($55,200)

Enabling university group organizers to meet in Oxford

We're also providing funding for about ten people who work on student groups to live together in Oxford over the summer. This is a project launched by the team described in the previous report. Concretely, Emma and James will run this project (while Bella and Henry won't be directly involved). The funding itself will be used for travel costs and stipends for the participants, as Emma and James's salaries are covered by the previous grant.

I am excited for this because I think that the participants are dedicated and competent EAs, and it will be valuable for them to know each other better and to exchange ideas about how to run student groups effectively. A few of these people are from student groups that aren't yet well established but could be really great if they worked out; I think that these groups are noticeably more likely to go well given that some of their organizers are going to be living with these experienced organizers over the summer.

Zakee Ulhaq ($41,868)

6-12 months' funding to help talented teenagers apply EA concepts and quantitative reasoning to their lives

Zakee (“Zak”) is running something roughly similar to an EA Introductory Fellowship for an audience of talented high schoolers, and will also probably run a larger in-person event for the participants in his fellowships. Most of this grant will pay for Zak’s work, though he may use some of it to pay others to help with this project.

Zak has this opportunity because of a coincidental connection to a tutoring business which mostly works with high school students whose grades are in about the top 1% of their cohorts.

I think that outreach to talented high schoolers seems like a plausibly really good use of EA money and effort, because it’s cheaper and better in some ways than outreach to talented university students.

I think Zak seems like a good but not perfect fit for this project. He has teaching experience, and he has a fairly strong technical background (which in my experience is helpful for seeming cool to smart, intellectual students). I’ve heard that he did a really good job improving EA Warwick. Even if this project largely fails, I think it will likely turn out to have been worth EAIF’s money and Zak’s time. That’s because it will teach Zak useful things about how to do EA movement building and high school outreach more specifically, which could be useful if he either tries again or can give good advice to other people.

Projects from the Czech Association for Effective Altruism (CZEA)

Irena Kotikova & Jiří Nádvorník

$30,000: 6 months’ salaries for two people (0.5 FTE each) to work on development, strategy, project incubation, and fundraising for the CZEA national group

This grant funds Irena Kotikova and Jiří Nádvorník (who run CZEA) to spend more time on various other projects related to the Czech EA community, e.g., fundraising and incubating projects.

I mostly see this grant as a gamble on CZEA. In the world where this grant turns out really well, it’s probably because CZEA winds up running many interesting projects (that it wouldn’t have run otherwise), which have positive impact and teach their creators lots of useful stuff. The grant could also help Irena and Jiří acquire useful experience that other EAs can learn from.

Someone I trust had a fairly strong positive opinion of this grant, which made me more enthusiastic about it.

$25,000: 12 months’ salary for one person (0.2 FTE) and contractors to work on strategic partnership-building with EA-aligned organizations and individuals in the Czech Republic

CZEA is fairly well-connected to various organizations in Czechia, e.g., government organizations, nonprofits, political parties, and companies. They want to spend more time running events for these organizations or collaborating with them.

I think the case for this grant is fairly similar to the previous case – I’m not sure quite how the funds will lead to a particularly exciting result, but given that CZEA seems to be surprisingly well connected in Czechia (which I found cool), it seems reasonable to spend small amounts of money supporting similar work, especially because CZEA’s team might learn useful things in the process.

Jiří Nádvorník

$8,300: Creating a short Czech-language book (~130 pages) and brochure (~20 pages) with a good introduction to EA in digital and print formats

This grant pays for CZEA to make high-quality translations of articles about EA, turn those translations into a brochure and a book, and print copies of them to give away.

I think that making high-quality translations of EA content seems like a pretty good use of money. (I think that arguments like Ben Todd’s against translating EA content into other languages apply much more to Chinese than to many other languages.) I am aware of evidence suggesting that EAs who are non-native English speakers are selected for being unusually good at speaking English compared to their peers, which suggests that we’re missing out on some of their equally promising peers.

It seems tricky to ensure high translation quality, and one of the main ways this project might fail is if the translator contracted for the project does a poor job. I’ve talked about this with Jiří a little and I thought he had a reasonable plan. In general, I think CZEA is competent to do this kind of project.

Irena Kotikova

$5,000: Giving away EA-related books to people with strong results in Czech STEM competitions, AI classes, and similar

This grant provides funds for CZEA to give away copies of books related to EA to talented young people in Czechia, e.g. people who do well in STEM competitions.

I think that giving away books seems like a generally pretty good intervention:

I also think that CZEA seems to be competent at doing this kind of project. So it seems like a solid choice.

YouTube channel “Rational Animations” ($30,000)

Funding a YouTube channel recently created by two members of the EA community

Rational Animations (which I’ll abbreviate RA) is a new YouTube channel created by members of the EA community.

The case for this grant:

Jeroen Willems ($24,000)

Funding for the YouTube channel “A Happier World”, which explores exciting ideas with the potential to radically improve the world

A Happier World is a YouTube channel run by Jeroen Willems, who recently graduated with a master's degree in television directing. He wrote an EA Forum post about the project here.

The argument for this grant is similar to the argument for Rational Animations: EA-related YouTube channels might produce a bunch of value, and Jeroen seems to be capable of making good content. I was very impressed by the video about pandemics he made as part of his degree, and I hope this grant will give him the time and incentive to improve his skills further.

Alex Barry ($11,066)

2 months' salary to develop EAIF funding opportunities and run an EA group leader unconference

Alex previously worked on EA group support at CEA.

This grant funds Alex to work on some combination of two different things, chosen at his discretion:

Aaron Maiwald ($1,787)

Funding production costs for a German-language podcast devoted to EA ideas

This grant provides a little bit of funding to cover some expenses for a new podcast in German about EA ideas, by Aaron Maiwald and Lia Rodehorst. Lia has experience working on podcasts and doing science journalism.

This grant seemed like a reasonable opportunity because it wasn’t very much money and it seems plausible that they’ll be able to make some good content. In order to get a grant renewal, I’d want to see that the content they’d produced was in fact good, by asking some German speakers to review it for me.

Grant reports by Max Daniel

General thoughts on this cycle of grants

My most important uncertainty for many decisions was where the ‘minimum absolute bar’ for any grant should be. I found this somewhat surprising.

Put differently, I can imagine a ‘reasonable’ fund strategy based on which we would have made at least a few more grants; and I can imagine a ‘reasonable’ fund strategy based on which we would have made significantly fewer grants this round (perhaps fewer than 5 grants across all fund managers).

Tony Morley, ‘Human Progress for Beginners’ ($25,000)

Early-stage grant for a children’s book presenting an optimistic and inspiring history of how the world has been getting better in many ways

This is an early-stage grant to support the production of a children’s book aimed at presenting history from a ‘progress studies’ and ‘new optimist’ perspective: That is, highlighting the many ways in which the world and human well-being have arguably massively improved since the Industrial Revolution.

The prospective author was inspired by the success of the children’s book Good Night Stories for Rebel Girls, and specifically envisions a book for children below age 12 with about 100 pages, each double page featuring a large illustration on one page and 200-500 words of text on the other.

The grant’s central purpose is to pay for professional illustrator Ranganath Krishnamani to be involved in the book project. The book’s prospective author, Tony Morley, is planning to work on the book in parallel with his job and has not asked for a salary. However, I view this grant primarily as a general investment into the book’s success, and would be happy for Tony to use the grant in whatever way he believes helps most achieve this goal. This could include, for example, freeing up some of his time or paying for marketing.

The idea of funding a children’s book was initially controversial among fund managers. Stated reasons for skepticism included unclear benefits from an EA perspective; a long ‘lag time’ between educating young children and the time at which the benefits from that education would materialize; and reputational risks (e.g., if the book was perceived as objectionably influencing children, or as objectionably exposing them to controversial issues).

However, I am very excited that we eventually decided to make this grant, for the following reasons:

Since we decided to make this grant, we have become aware of additional achievements by Tony: he secured a grant from Tyler Cowen’s Emergent Ventures, and Steven Pinker tweeted about the book. These further increase my confidence in the project.

To be clear, I overall still consider this to be a ‘risky’ grant in the spirit of ‘hits-based giving’. That is, I think that base rates suggest a significant chance of the book never being completed or getting very little attention – but also there is a sufficiently large chance of a big success that the grant is a good bet in expectation.

I’m not sure whether Tony will apply for further funding. If so, I would look for signs of continued implementation progress such as draft pages, sample illustrations, and thoughts on the later stages of the project (e.g. marketing). In reviewing content, I expect I would focus on generic ‘quality’ – is it true, well written, and engaging for the intended audience? – rather than ‘alignment’ with an effective altruism perspective. This is because I think that, given its basic theme, the book’s value isn’t reliant on EA alignment, and because I think that this project will go best if the author retains editorial control and focuses on claims he deeply understands and stands behind.

Pablo Stafforini, EA Forum Wiki ($34,200)

6-month grant to Pablo for leading the EA Forum Wiki project, including pay for an assistant

This is a renewal of a previous $17,000 grant from the Long-Term Future Fund (LTFF) to allow Pablo to continue to lead the EA Forum wiki project. With the previous grant, Pablo had focused on content creation. The wiki has since launched publicly on the EA Forum, and the recent ‘Editing Festival’ was aimed at encouraging more people to contribute. While the previous grant was made from the LTFF, we made this grant through the EAIF because the wiki’s content will not be restricted to issues relevant to the long-term future and because we consider a wiki on EA topics to be a prime example of ‘EA infrastructure’.

This grant covers a 6-month period. About 55% is a salary for Pablo, while the additional funds can be used at Pablo’s discretion to pay for assistants and contractors. After the period covered by this grant, we will consider a further renewal or an ‘exit grant’.

I think that a wiki, if successful, could be highly valuable for multiple reasons.

Perhaps most notably, I think it could help improve the ‘onboarding’ experience of people who have recently encountered effective altruism and want to learn more about it online. For a couple of years, I have often encountered people – both ‘new’ and ‘experienced’ members of the EA community – who were concerned that it was hard to learn more about research and methods relevant to effectively improving the world, as well as about the EA community itself. They cited problems like a lack of ‘canonical’ sources, content being scattered across different online locations, and a paucity of accessible summaries. I believe that an actively maintained wiki with high-quality content could help address all of these problems.

Other potential benefits of a wiki:

My most significant reservation about the wiki as a project is that most similar projects seem to fail – e.g., they are barely read, don’t deliver high-quality content, or are mostly abandoned after a couple of months. This seems to be the case both for wikis in general and for similar projects related to EA, including EA Concepts, PriorityWiki, the LessWrong Wiki, and Arbital. While some of these may be ambiguous successes rather than outright failures, my impression is that they provide only limited value – they certainly fall short of what I envision as the realistic best case for the EA Forum wiki.

I think that Pablo is extremely well-placed to execute this project, for several reasons. He has been involved in the effective altruism community from its very start, and has demonstrated a broad knowledge of many areas relevant to it; he is, based on my own impression and several references, a very strong writer; and he has extensively contributed to Wikipedia for many years.

I also think that Pablo met the expectations from his previous EAIF grant (in which I was not involved) by producing a substantial amount of high-quality content (80,000 words in 6 months).

I feel less sure about Pablo’s fit for strategy development and project management. Specifically, I think there may have been a case for focusing less on extensive content production and more on getting some initial content in front of readers who could provide feedback. I also expect that someone who is especially strong in these areas would have had more developed thoughts on the wiki’s governance and strategic questions such as ‘how much to rely on paid content creators vs. volunteers?’ at this stage of the project. Lastly, I would ideally have liked to see an analysis of how past similar projects in the EA space failed, and an explicit case for why this wiki might be different.

However, I also believe that these are issues on which it would be hard for me to have a confident view from the outside, and that to some extent such projects go best if their leaders follow a strategy that they find easy to envision and motivating to follow. I also consider it an encouraging sign that I felt it was easy to have a conversation about these issues with Pablo, that he contributed several good arguments, and that he seemed very receptive to feedback.

When considering a renewed grant, I will look for a more developed strategy and data on user engagement with the wiki (including results from the ‘editing festival’). I will also be interested in the quality of content contributed by volunteers. I might also review the content produced by Pablo and potential other paid contractors in more detail, but would be surprised if the decision hinged on that.

For the wiki’s longer-term future I would also want to have a conversation about its ideal funding base. This includes questions such as: Is there a point beyond which the wiki works best without any paid contributors? If not, which medium and large funders should contribute their ‘fair share’ to its budget? Would it be good if the wiki fundraised from a broad range of potential donors, potentially after setting up a dedicated organization?

Effective Institutions Project ($20,417)

Developing a framework aimed at identifying the globally most important institutions

What is this grant?

This grant is for a project to be undertaken as part of the working group on ‘Improving Institutional Decision Making’ (IIDM). The group is led by Ian David Moss, Vicky Clayton, and Laura Green. The group’s progress includes hosting meetups at EA conferences, mapping out a strategy for their first 2–3 years, launching their working group with an EA Forum post, setting up a Slack workspace with more than 220 users, and more broadly rekindling interest in the eponymous cause area that had seen little activity since Jess Whittlestone’s problem profile for 80,000 Hours from 2017.

Specifically, the grant will enable Ian David Moss to work part-time on the IIDM group for 3–4 months. Alongside continuing to co-lead the group, Ian is going to use most of this time to develop a framework aimed at identifying the world’s key institutions – roughly, those that are most valuable to improve from the perspective of impartially improving the world. A potential later stage of the project would then aim to produce a list of these key institutions. This is highly similar to a planned project the IIDM group has described previously as one of their main priorities for this year.

The IIDM group had initially applied for a larger grant of $70,000, also mostly for buying Ian’s time. This would have covered a longer period, and would have allowed the working group to carry out additional projects. We were not comfortable making this larger upfront commitment. We are open to considering future grants (which may be larger), which we would do in part by assessing the output from this initial grant.

I think the most likely way in which this grant might not go well is that we don’t get much new evidence on how to assess the IIDM group’s potential, and find ourselves in a similarly difficult position when evaluating their future funding requests. (See below under “Why was this grant hard for us to evaluate?” for more context on why I think we were in a difficult position.)

If this grant goes well, the intermediate results of the ‘key institutions project’ will increase my confidence that the IIDM group and its leaders are able to identify priorities in the area of ‘improving institutions’. In the longer term, I would be excited if the IIDM group could help people working in different contexts to learn from each other, and if it could serve as a ‘bridge’ between EA researchers who work on higher-level questions and people who have more firsthand understanding of how institutions operate. The working group leadership told me that this vision resonates with their goals.

Why was this grant hard for us to evaluate?

We felt that this grant was challenging to evaluate for multiple reasons:

  1. The need to clear an especially high bar, given the risks and potential of early field-building
  2. Difficulty in assessing the group’s past work and future prospects
  3. Cultural differences between some of the fund’s managers and the IIDM community

Most significantly, we felt that the grant would need to clear an unusually high bar. This is because the IIDM group is engaged in early field building efforts that could have an outsized counterfactual impact on the quality and amount of total EA attention aimed at improving institutions. The group’s initial success in drawing out interest – often from people who feel like their interests or professional backgrounds make them a particularly good fit to contribute to this area rather than others – suggests the potential for significant growth. In other words, I think the group could collect a lot of resources which could, in the best case, be deployed with large positive effects – or else might be misallocated or cause unintended harm. In addition, even somewhat successful but suboptimal early efforts could discourage potential top contributors or ‘crowd out’ higher-quality projects that, if the space had remained uncrowded, would have been set up at a later point.

In addition, I feel that there were at least two reasons why it was hard for us to assess whether the IIDM group more broadly, or the specific work we’d fund Ian for, meets that high bar.

First, the group’s past output is limited. It’s certainly been successful at growing its membership, and at high-level strategic planning. However, I still found it hard to assess how well-placed the group or its leaders are to identify the right priorities within the broad field of “improving institutions”. As I explain below (“What is my perspective on improving institutions?”), I also think that correctly identifying these priorities depends on hard research questions, and I’m not sure about the group’s abilities to answer such questions.

While I found it hard to assess the group’s potential, I do think they have a track record of making solid progress, and their lack of other outputs (e.g. recommending or implementing specific interventions, or published research) is largely explained by the group having been run by volunteers. In addition, the working group’s leadership told me that, motivated by an awareness of the risks I discussed earlier, in their early efforts they had deliberately prioritized behind-the-scenes consultations and informal writing over more public outputs.

Second – and here I’m particularly uncertain whether other fund managers and advisors agree – my perception is that there might be a ‘cultural gap’ between (1) EAIF fund managers (myself included) and their networks, and (2) some of the people in the EA community most interested in improving institutions (including within the IIDM working group). I think this gap is reflected in, for instance, the intellectual foundations one draws on when thinking about institutions, preferred terminology, and professional networks. To be clear, this gap is not itself a reason to be skeptical about the group’s potential; however, it does mean that getting on the same page (about the best strategy in the space and various other considerations) would require more time and effort than otherwise.

A few further clarifications about this potential ‘gap’:

For these reasons, we may not have been in a great position to evaluate this grant ourselves. We therefore asked a number of people for their impression of the IIDM group’s past work or future potential. Most impressions from those who had substantially engaged with the IIDM group were positive. We also encountered some skeptical takes on the group’s potential, but they were disproportionately from people not very familiar with the group’s past work and plans. While these conversations were useful, they ultimately weren’t able to resolve our key uncertainties with sufficient confidence.

Overall, the reasons discussed above and my impressions more generally make me somewhat skeptical of whether the IIDM group’s leadership team and strategy are strong enough that I’d be excited for them to play a major role in shaping EA thought and practice on ‘improving institutions’ – in particular from the perspective I discuss below (under “What is my perspective on ‘improving institutions’?“). On the other hand, these reasons also make me less confident in my own ability to identify the best strategy in this space. They also make me more skeptical about my ability to adequately evaluate the group leaders’ strengths and potential. I’m therefore wary of a “false negative”, which makes me more sympathetic to giving the group the resources they need to be able to ‘prove themselves’, and more willing to spend more time to engage with the group and otherwise ‘stress test’ my view of how to best approach the area.

I would also like to emphasize that I think there are some unreservedly positive signs about the group and its leadership’s potential, including:

What is my perspective on ‘improving institutions’?

I am concerned that ‘improving institutions’ is a hard area to navigate, and that the methodological and strategic foundations for how to do this well have not yet been well developed. I think that in many cases, literally no one in the world has a good, considered answer to the question of whether improving the decision quality of a specific institution along a specific dimension would have net good or net bad effects on the world. For instance, how should we weigh the effects of making the US Department of Defense more ‘rational’ at forecasting progress in artificial intelligence? Would reducing bureaucracy at the UN Development Programme have a positive or negative net effect on global health over the next decade? I believe that we are often deeply uncertain about such questions, and that any tentative answer is liable to be overturned upon the discovery of an additional crucial consideration.

At the very least, I think that an attempt at answering such questions would require extensive familiarity with relevant research (e.g., in macrostrategy); I would also expect that it often hinges on a deep understanding of the relevant domains (e.g., specific policy contexts, specific academic disciplines, specific technologies, etc.). I am therefore tentatively skeptical about the value of a relatively broad-strokes strategy aimed at improving institutions in general.

I am particularly concerned about this because some prominent interventions for improving institutions would primarily result in making a given institution better at achieving its stated goals. For instance, I think this would often be the case when promoting domain-agnostic decision tools or policy components such as forecasting or nudging. Until alternative interventions are uncovered, I would therefore expect some people interested in improving institutions to default to pursuing these ‘known’ interventions.

To illustrate some of these concerns, consider the history of AI policy & governance. I know of several EA researchers who share the impression that early efforts in this area were “bottlenecked by entangled and under-defined research questions that are extremely difficult to resolve”, as Carrick Flynn noted in an influential post. My impression is that this is still somewhat true, but that there has been significant progress in reducing this bottleneck since 2017. However, crucially, my loose impression (often from the outside) is that this progress was to a large extent achieved by highly focused efforts: that is, people who focused full-time on AI policy & governance, and who made large upfront investments into acquiring knowledge and networks highly specific to the AI domain or particular policy contexts such as the US federal government (or could draw on past experience in these areas). I am thinking of, for example, background work at 80,000 Hours by Niel Bowerman and others (e.g. here) and the AI governance work at FHI by Allan Dafoe and his team.

Personally, when I think of what work in the area of ‘improving institutions’ I’m most excited about, my (relatively uninformed and tentative) answer is: Adopt a similar approach for other important cause areas; i.e., find people who are excited to do the groundwork on, e.g., the institutions and policy areas most relevant to official development assistance, animal welfare (e.g. agricultural policy), nuclear security, biosecurity, extreme climate change, etc. I think that doing this well would often be a full-time job, and would require a rare combination of skills and good networks with both ‘EA researchers’ and ‘non-EA’ domain experts as well as policymakers.

The Impactmakers ($17,411)

Co-financing to implement the 2021 Impact Challenge at the Dutch Ministry of Foreign Affairs, a series of workshops aimed at engaging civil servants with evidence-based policy and effective altruism

This is an extension of an earlier $13,000 EAIF grant to co-finance a series of workshops at the Dutch Ministry of Foreign Affairs (MFA). The MFA is covering the remaining part of the budget. The workshops are aimed at increasing civil servants’ ability to identify and develop high-impact policies, using content from sources on evidence-based policy as well as Doing Good Better.

The workshops are developed and delivered by a team of five that includes long-term members of EA Netherlands: Jan-Willem van Putten, Emil Iftekhar, Jason Wang, Reijer Knol, and Lisa Gotoh. The earlier grant allowed them to host a kick-off session and to recruit 35 participants. Based on feedback from the MFA, the team will structure the remaining workshop series as a ‘challenge’ in which teams of participating civil servants will address one of these three areas: 1) increasing policy impact; 2) improving decision-making processes; 3) increasing personal effectiveness. During the 10-month challenge teams will research and test ideas for a solution in these areas. This differs from their original plan, and the team thus requires a larger budget.

I am positive about this grant because it seems like a reasonably cheap way to introduce EA-related ideas to a valuable audience. Based on the written grant application, some workshop materials I reviewed, a reference from someone with both EA and policy experience, my conversation with Lisa, and the group’s track record so far, I also feel sufficiently confident that the team’s quality of execution will be high enough to make an overall positive impression on the audience.

I am not sure if this team is going to seek funding for similar projects in the future. Before making further grants to them, I would try to assess the impact of this workshop series, more carefully vet the team’s understanding of relevant research and methods, and consider whether the theory of change of potential future projects was appropriately aimed at particularly high-leverage outcomes.

Effective Environmentalism ($15,000)

Strategy development and scoping out potential projects

This is an early-stage grant to the Effective Environmentalism group led by Sebastian Scott Engen, Jennifer Justine Kirsch, Vaidehi Agarwalla, and others. I expect that most of this grant will be used for exploratory work and strategic planning by the group’s leadership team, and that they might require renewed funding for implementing their planned activities.

Similar to the grant to the IIDM group described at length above, I view this as a high-variance project:

In the best case, I think the Effective Environmentalism group could evolve into a subcommunity that builds useful bridges between the EA community on one hand, and the environmentalism and climate activism communities on the other hand. It could help these communities learn from each other, expose a large number of people to the ideas and methods of effective altruism, improve the impact of various environmentalist and climate change mitigation efforts, and help the EA community to figure out its high-level views on climate change as a cause area as well as how to orient toward the very large communities predominantly focused on climate change.

In a disappointing case, I think this group will produce work and offer advice that is poorly received by both the EA and the environmentalist or climate activism communities. A typical EA perception might be that their work is insufficiently rigorous, and that they’re unduly prioritizing climate change relative to other cause areas. A typical environmentalist perception might be that their work is unappealing, insufficiently focused on social change, insufficiently focused on grassroots activism, or even a dishonest attempt to lure climate activists into other cause areas. In the worst case, there could be other harms, such as an influx of people who are not in fact receptive to EA ideas into EA spaces, an increase of negative perceptions of EA in the general public or environmentalist communities, or tensions between the EA and climate activism communities.

I am currently optimistic that the Effective Environmentalism team is aware of these and other risks, that they’re well placed to avoid at least some of them, and that they might have a shot at achieving a best-case outcome.

I also have been impressed with the progress this recently-created team has made between the time when they first submitted their grant application and the time at which their grant was approved. I also liked that they seem to have started with a relatively broad intended scope, and then tried to identify particularly high-value ‘products’ or projects within this scope (while remaining open to pivots) – as opposed to having an inflexible focus on particular activities.

Overall, I remain highly uncertain about the future trajectory of this team and project, as I think is typical given their challenging goals and limited track record. Nevertheless, I felt that this was a relatively easy grant to recommend, since I think that the work covered by it will be very informative for decisions about future funding, and that most of it will be ‘inward-facing’, or engage with external audiences only on a small scale (e.g. for ‘user interviews’ or small pilots) and thus incur few immediate risks.

Disputas ($12,000)

Funding an exploratory study for a potential software project aimed at improving EA discussions

This is a grant to the startup Disputas aimed at funding a feasibility study for improving the digital knowledge infrastructure for EA-related discussions. This feasibility study will consist of problem analysis, user interviews, and potentially producing wireframes and sketching out a development plan for a minimum viable product.

Disputas’s proposed project bears some similarity to argument mapping software. I am generally pessimistic about argument mapping software, both from an ‘inside view’ and because I suspect that many groups have tried to develop such software, but have always failed to get wide traction.

I decided to recommend this grant anyway, for the following reasons:

Steven Hamilton ($5,000)

Extending a senior thesis on mechanism design for donor coordination

This grant will allow Steven Hamilton, who recently graduated with a BA in economics and a minor in mathematics, to extend his senior thesis on mechanism design for donor coordination. Steven will undertake this work in the period between his graduation and the start of his PhD program.

Steven’s thesis was specifically about a mechanism to avoid charity overfunding – i.e. the issue that, absent coordination, the total donations from some group of donors might exceed a charity’s room for more funding. For instance, suppose there are 10 donors who each want to donate $50. Suppose further that there is some charity MyNonProfit that can productively use $100 in additional donations, and that all 10 donors most prefer filling MyNonProfit’s funding gap. If the donors don’t coordinate, and don’t know what other donors are going to do, they might each end up giving their $50 to MyNonProfit, thus exceeding its room for more funding by $400. This $400 could have been donated to other charities if the donors had coordinated.
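To make the numbers concrete, here is a minimal sketch of the arithmetic in the example above. The figures are taken directly from the text; the code itself is just my illustration, not part of Steven’s mechanism.

```python
# Back-of-the-envelope sketch of the uncoordinated-donation example above.
donors = 10
gift_per_donor = 50       # each donor wants to give $50
room_for_funding = 100    # MyNonProfit can productively use $100 more

total_donated = donors * gift_per_donor                 # $500 without coordination
overfunding = max(0, total_donated - room_for_funding)  # $400 exceeds the gap

# With coordination, the donors could have filled MyNonProfit's $100
# funding gap and directed the remaining $400 to other charities.
print(f"Total donated: ${total_donated}; overfunding: ${overfunding}")
```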

I am personally not convinced that charity overfunding is a significant problem in practice, though I do think that there is room for useful work on donor coordination more broadly. Despite this reservation, the application was sufficiently well-executed, and the grant amount sufficiently low, that I felt comfortable recommending the grant. If it turns out well, I suspect it will be either because I was wrong about charity overfunding being unimportant in practice, or because the grant causes a potentially promising young researcher to spend more time thinking about donor coordination, thus enabling more valuable follow-up work.

Steven has since told me that his work might also apply to other issues, e.g. charity underfunding or the provision of public goods. This makes me more confident in my optimistic perspective, and makes my reservations about the practical importance of charity overfunding matter less.

Grant reports by Michelle Hutchinson

Rethink Priorities ($248,300)

Compensation for 9 research interns (7 full-time equivalents) across various EA causes, plus support for further EA movement strategy research

Rethink Priorities is a research organization working on (largely empirical) questions related to how to do the most good, including questions like what moral weights we should assign to different animal species, or understanding what the current limitations of forecasting mean for longtermism.

Roughly half of this grant supports 9 interns (7 FTE), with the main aim of training them in empirical impact-focused research. Our perception is that it would be useful to have more of this research done and that there currently aren’t many mentors who can help people learn to do it.

Rethink Priorities has some experience successfully supporting EA researchers as they skill up. The case study of Luisa Rodriguez seemed compelling to us: she started out doing full-time EA research for Rethink Priorities, went on to become a research assistant for William MacAskill's forthcoming book about longtermism, and plans to work as a researcher at 80,000 Hours. Luisa thinks it's unlikely she would have become a full-time EA researcher if she hadn't received the opportunity to train up at Rethink Priorities. My main reservation about the program was the lack of capacity at RP of people with significant experience doing this type of research (though they have a number of staff members with many years of other research experience). This was ameliorated by senior staff’s ready willingness to provide comments on research across the team, and by RP’s intention to seek external mentorship for their interns in addition to internal mentorship.

The second half of the grant goes toward growing Rethink’s capacity to conduct research on EA movement strategy. The team focuses on running surveys, aimed at both EAs and the broader public. The types of research this funding will enable include getting a better sense of how many people in the broader public are aware of and open to EA. This research seems useful for planning how much and what kinds of EA outreach to do. For example, a number of people we asked found RP’s survey on longtermism useful. The committee was split as to whether a better model for funding such research would be to have the EA organizations that do EA outreach commission the surveys themselves. That model would increase the chance of the research being acted on. Ensuring this type of research is directly action-relevant for the organizations most responsible for shaping the direction of the EA movement strikes me as pretty difficult, and decidedly easier if they’re closely involved in designing the research. The research being action-relevant is particularly important because much of it involves surveying the EA community, and the cost to the community of surveys like the EA Survey is fairly large. (I’d guess, assuming a 40-hour work-week, that filling in the survey costs about 25 weeks of EA work per EA Survey.)
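As a rough illustration of where an estimate like that could come from, the sketch below reconstructs the arithmetic; the respondent count and minutes per response are my own illustrative assumptions, not figures from Rethink Priorities or the EA Survey team.

```python
# Hedged back-of-the-envelope reconstruction of the survey-cost guess above.
respondents = 2500           # assumed number of EA Survey respondents (illustrative)
minutes_per_response = 24    # assumed time to fill in the survey (illustrative)

total_hours = respondents * minutes_per_response / 60   # 1,000 hours
weeks_of_ea_work = total_hours / 40                     # assuming a 40-hour work-week

print(f"~{total_hours:.0f} hours, or about {weeks_of_ea_work:.0f} weeks of EA work")
```

Under these assumptions the community-wide cost comes out to roughly 25 work-weeks per survey; the true figure scales linearly with both inputs.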

Collaborations often seem tricky to pull off smoothly and efficiently. For that reason, a funding model we considered suggesting was EAIF paying for specific pieces of research ‘commissioned’ by groups such as CEA. This model would have the benefit that the group commissioning the research would be responsible for applying for the funding, and so the onus would be on them to make sure they would use the research generated. On the other hand, we hope that this type of research will be useful to many different groups, including ones like local groups who typically don’t have much funding. We therefore decided in favor of approving this funding application as is. We’d still be interested in continued close collaborations between RP and the groups who will be using the research, such as CEA and local EA groups.

The Long-Term Future Fund (see payout report) and Animal Welfare Fund (see payout report) have also made grants to Rethink Priorities (for different work).

Stefan Schubert ($144,072)

Two years of funding for writing a book on the psychology of effective giving, and for conducting related research

We’ve granted $144,072 to Stefan Schubert to write a book and a number of related papers on the psychology of effective giving, alongside Lucius Caviola. The book describes some of the reasons that people donate ineffectively, followed by ideas on how to make philanthropy more effective in the future.

Stefan and Lucius have a strong track record of focusing their work on what will help others most. An important way of ensuring this type of research is impactful is to draw different considerations together into overall recommendations, as opposed to simply investigating particular considerations in isolation. (An example of this might be: It’s often suggested that asking people to donate a small amount initially, which they are happy to give, makes them happier to give more to that place in future. It’s also often suggested that it’s a good idea to make a big ask of people, because when you then present them with a smaller ask it seems more reasonable and they’re more likely to acquiesce. These psychological results are each interesting, but if you’re trying to figure out how much to ask someone to donate, it’s hard to decide if you’ve only heard each presented in isolation as a consideration.) Describing individual considerations in isolation is common in academia, because novelty is highly prized (so reiterating considerations others have investigated is not), because academics are suspicious of overreach, and because comparing considerations against each other is very difficult. This often makes it very hard to draw action-relevant implications from research, which strikes me as a major failing. My hope is that this book will do more ‘bringing together and comparing’ of considerations than is typical.

Another reason for optimism about this grant is that Stefan has a track record of making his research accessible to those for whom it’s most relevant, for example by speaking at Effective Altruism Global (EAG), posting on social media, and writing blog posts in addition to peer-reviewed articles. In general, I worry that research of this kind can fail to make much of an impact because the people for whom it would be most action-relevant might not read it, and even if they do it’s complicated to figure out the specific implications for action. It seems to me that this kind of ‘translating research into action’ is somewhat neglected in the EA community, and we could do with more of it, both for actions individuals might take and those specific organizations might take. I’d therefore be particularly excited for Stefan to accompany his research with short, easily digestible summaries including ‘here are the things I think individual EAs should do differently because of this research; here are some implications I think it might have for how we run EAG, for how 80,000 Hours runs its advising program etc’.

The Centre for Long-Term Resilience ($100,000)

Improving the UK’s resilience to existential and global catastrophic risks

The Centre for Long-Term Resilience (CLTR; previously named Alpenglow) is a non-profit set up by Angus Mercer and Sophie Dannreuther, with a focus on existential, global catastrophic, and other extreme risks. (It has also looked a bit into the UK’s global development and animal welfare policies.) CLTR’s aim is to facilitate discussions between policymakers and people doing research into crucial considerations regarding the long-run future. We’ve granted the centre a preliminary $100,000 in this round. Its scale-up plans mean it has a lot of room for more funding, so we plan to investigate further whether to fund it more significantly in our next round (should their funding gap not have been filled by then).

The organization seems to have been successful so far in getting connected to relevant researchers and policymakers, and is in the early stages of seeing concrete outputs from that. A key concern we have is that there may not be sufficient specific policy recommendations coming out of research in the longtermist space, which would be a major block on CLTR’s theory of change.

Jakob Lohmar ($88,557)

Writing a doctoral thesis in philosophy on longtermism at the University of Oxford

Recusal note: Based on an updated conflict of interest management plan for Max Daniel's position at the University of Oxford, he retroactively recused himself from this grant decision.

This is a grant for Jakob Lohmar to write a doctoral thesis in philosophy on longtermism, studying under Hilary Greaves. He currently plans for his thesis to examine the link between longtermism and different kinds of moral reasons. Examples of the type of question he expects to examine:

Whether or not we should allow considerations about the long term to dominate our actions is a choice that will make a huge difference to how we help others, as will how we weigh reducing existential risks against trajectory changes. Having more of this research therefore seems likely to be useful. We expect the primary impact of this grant to come from allowing Jakob to skill up in this area, rather than from the research itself.

We’re particularly excited about this type of research being done by people who have spent a long time thinking about how to help people most. Action relevance isn’t prized in academic philosophy, so it can be hard to keep one’s research pointed in a useful direction. Jakob’s background provides strong evidence that he will do this.

Joshua Lewis, New York University, Stern School of Business ($45,000)

Academic research into promoting the principles of effective altruism

We granted Joshua Lewis $45,000 to do academic research into promoting the principles of effective altruism over the next 6 months. Joshua Lewis is an Assistant Professor of Marketing at the NYU Stern School of Business. He has a long-term vision of setting up a research network of academics working in psychology and related fields. We intended this grant to be for a shorter time period so that we could assess initial results before long, but to be generous over that period to ensure he isn’t bottlenecked by funding.

Joshua’s research agenda seems interesting and useful, covering questions such as understanding people’s risk aversion towards interventions with high expected value but low probability of impact. These topics seem important if we’re going to increase the extent to which people are taking the highest-impact actions they know of. Some committee members were primarily excited about the research itself, while others were also excited about the flow-through effects from getting other academics excited about working on these problems.

Grant reports by Ben Kuhn

Giving Green (an initiative of IDinsight) ($50,000)

Improving research into climate activism charity recommendations

Giving Green is trying to become, more or less, GiveWell for climate change. This grant provides them with funding to hire a researcher to improve their recommendations in the grassroots activism space.

This is a relatively speculative grant, though it has high potential upside. Giving Green has shown promise in media, user experience, and fundraising, and has appeared in the New York Times, Vox, and The Atlantic. At the same time, we have serious reservations about the current quality of their research (largely along the lines laid out in alexrjl’s EA Forum post). Like several commenters on that post, we found their conclusions about grassroots activism charities, and specifically The Sunrise Movement, particularly unconvincing. That said, we think there’s some chance that, with full-time work, they’ll be able to improve the quality of their research—and they have the potential to be great at fundraising—so we’re making this grant to find out whether that’s true. This grant should not be taken as an endorsement of Giving Green’s current research conclusions or top charity recommendations.

Giving Green originally requested additional funding for a second researcher, but we gave a smaller initial grant to focus the research effort only on the area where we’re most uncertain, with the remaining funding conditional on producing a convincing update on grassroots activism (i.e., one that eligible domain experts* find convincing). I (Ben) currently think it’s moderately unlikely (~30%?) that they’ll hit this goal, but that the upside potential is worth it. (Obviously, I would be extremely happy to be wrong about this!)

*The eligible domain experts will be a set of climate researchers from other EA-affiliated organizations; we plan to ask several for their views on this research as part of our follow-up evaluation.

Grants by the previous fund managers

Note: These grants were made in December 2020 and January 2021 as off-cycle grants by the previous EAIF fund managers, and Megan Brown was the main contributor to these grants as an advisor. However, these payout reports have been written by current fund manager Max Daniel based on the written documentation on these grants (in which Max was not directly involved).

High Impact Athletes ($50,000)

Covering expenses and growth for 2021 for a new nonprofit aimed at generating donations from professional athletes

At the time of this first grant application, High Impact Athletes (HIA) was a recently launched nonprofit aimed at generating donations to effective charities from professional athletes. It had gained some initial traction, having generated over $25,000 in donations by the time of its official launch. This grant was intended to cover growth and expenses for 2020 and 2021.

HIA’s recommended charities align closely with charities that are commonly regarded as having unusually high impact in the EA community (e.g., GiveWell top charities).

CEO and founder Marcus Daniell is a professional tennis player himself. With his networks in the sporting world, as well as his long history as a donor to effective charities, we thought he was very well placed to get other professional athletes to join him in his giving. The grant includes a part-time salary for Marcus so he can continue to lead HIA.

High Impact Athletes ($60,000)

Enabling a first hire for a nonprofit aimed at generating donations from professional athletes

This grant is intended to allow High Impact Athletes (HIA) to hire a first staffer who would help CEO Marcus Daniell to focus on fundraising and growth, leveraging his unique network in the sporting world. For more context on HIA, see the previous writeup.

We were impressed that, by the time of this grant application, and not long after its launch, HIA had made substantial progress: in just one month, it had influenced $80,000 of donations, and had secured contributions from 42 athletes, including Olympic gold medalists.

Feedback

If you have any feedback, we would love to hear from you. Let us know in the comments, submit your thoughts through our feedback form, or email us at eainfrastructure@effectivealtruismfunds.org.

To comment on this payout report, please join the discussion on the Effective Altruism Forum.


MichaelA @ 2021-06-03T13:26 (+42)

Thanks for this writeup! 

I was surprised to find that reading this instilled a sense of hope, optimism, and excitement in me. I expected to broadly agree with the grant decisions and think the projects sound good in expectation, but was surprised that I had a moderate emotional reaction accompanying that. 

I think this wasn't exactly because these grants seem much better than expected, but more because I got a sense like "Oh, there are actually a bunch of people who want to focus intensely on a specific project that someone should be doing, and who are well-suited to doing it, and who may continue to cover similar things indefinitely, gradually becoming more capable and specialised. The gaps are actually gradually getting filled - the needs are gradually getting "covered" in a way that's more professional and focused and less haphazard and some-small-number-of-excellent-generalists-are-stretched-across-everything."

I often feel slightly buried under just how much there is that should be getting done, that isn't getting done, and that I in theory could do a decent job of if I focused on it (but I of course don't have time to do all the things, nor would I necessarily do the excellent job that these things deserve).[1] I already knew "we're heading in the right direction" with regards to that, but this report made that more salient, or something, which was nice.  

[1] Obviously I know that that's not all "on me", that there are also many other people who could in theory do a decent or better job of these things, etc. But it's still the case that those things aren't getting done and that I could switch to doing them, and in some minority of cases I actually should switch to doing them (i.e., I've sometimes changed my priorities or jobs in a way that I still endorse in retrospect, and before that time these were cases like this where I'm doing one useful thing but could switch to another useful thing).

Larks @ 2021-06-03T03:58 (+28)

Thanks for writing up this detailed account of your work; I'm glad the LTFF's approach here seems to be catching on!

Neel Nanda @ 2021-06-04T07:19 (+24)

Thanks a lot for the write-up! It seems like there are a bunch of extremely promising grants in here, and I'm really happy to see that the EAIF is scaling up grantmaking so much. I'm particularly excited about the grants to HIA, CLTR, and James Aung & Emma Abele's project.

And thanks for putting so much effort into the write-up; it's really valuable to see the detailed thought process behind grants, and it makes me feel much more comfortable with future donations to EAIF. I particularly appreciated this for the children's book grant: I went from being strongly skeptical to tentatively excited by the write-up.

Habryka @ 2021-06-03T06:47 (+22)

Thank you for writing these! I really like these kinds of long writeups; they really help me get a sense of how other people think about making grants like this.

Dan Stein @ 2021-06-03T20:33 (+21)

Hello everyone, Dan from Giving Green here. As noted in the explanation above, the main purpose of this grant is to deepen and improve our research into grassroots activism, hopefully coming up with something that is more aligned with research norms within the EA community. We'd love to bring an experienced EA researcher on board to help us with that, and would encourage any interested parties to apply. 

We currently have two jobs posted, one for a full-time or consultant researcher, and the second for a full-time program manager. We're also interested in hearing from people who may not exactly fit these job descriptions but can contribute productively. If interested, please submit an application at the links above or reach out at givinggreen@idinsight.org.

MichaelA @ 2021-06-03T13:43 (+16)

I really like that Ben made an explicit prediction related to the Giving Green grant, and solicited community predictions too! I currently think grantmakers (at least the EA Fund fund managers) should do this sort of thing more (as discussed here and here), so it's nice to see a first foray into that.

That said, the question as stated seems both hard to predict and hard to draw useful inferences from, since no indication is given of how many experts will be asked. The number I'd give, and what the numbers signify, would be very different if you expect to ask 3 experts versus 20, for example. Do you have a sense of roughly what that denominator will be?

Jonas Vollmer @ 2021-06-04T15:39 (+7)

My guess is 1-3 experts.

MichaelA @ 2021-06-04T17:00 (+3)

Thanks. I now realise that I have another confusion about the question: Are experts saying whether they found the research high quality and convincing in whatever conclusions it has, or saying whether the researcher strongly updated the experts specifically towards viewing grassroots activism more positively?

This is relevant if the researcher might form more mixed or negative conclusions about grassroots activism, yet still do so in a high-quality and convincing way. 

I'm gonna guess Ben either means "whether the researcher strongly updated the experts specifically towards viewing grassroots activism more positively" or he just assumes that a researcher Giving Green hires and manages is very likely to conclude that grassroots activism is quite impactful (such that the different interpretations of the question are the same in practice). (My forecast is premised on that.)

Jonas Vollmer @ 2021-06-06T11:22 (+4)

high quality and convincing in whatever conclusions it has

This.

weeatquince @ 2021-06-03T08:48 (+15)

Thank you for the write-up – super helpful. Amazing to see so much good stuff get funding.

Some feedback and personal reflections as a donor to the fund:

 

This should not take away from the fact that I think the fund has genuinely done a great job here. For example, saying that I would lean towards directly following the fund's recommendations is recognition that I trust the fund and the work you have done to evaluate these projects – so well done!

Also, I do support innovative longtermist projects (I especially love CLTR – mega-super to see them funded!!); it is just not what I expect to see this fund doing, so it leaves me a bit confused / tempted to give elsewhere.

 

Michelle_Hutchinson @ 2021-06-04T12:20 (+21)

Thanks for the feedback! 

I basically agree with the conclusion MichaelA and Ben Pace have below. I think EAIF’s scope could do with being a bit more clearly defined, and we’ll be working on that. Otoh, I see the Lohmar and CLTR grants as fitting fairly clearly into the ‘Fund scope’ as pasted by MichaelA below. Currently, grants do get passed from one fund to the other, but that happens mostly when the fund they initially applied to deems them not to fall easily into its scope, rather than when they seem to fall centrally into the scope of both the fund they applied to and another fund. My view is that CLTR, for example, is a good example of increasing the extent to which policy makers are likely to use EA principles when making decisions, which makes it seem like a good example of the kind of thing I think EAIF should be funding.

I think that there are a number of ways in which someone might disagree: One is that they might think that ‘EA infrastructure’ should be to do with building the EA _community_ specifically, rather than being primarily concerned with people outside the community. Another is that they might want EAIF to only fund organisations whose portfolio of cause activities is representative of the whole EA movement. I think it would be worse to narrow the fund’s scope in either of these ways, though I think your comment highlights that we could do with being clearer about it not being limited in that way.

Over the long run, I do think the fund should aim to support projects which represent different ways of understanding and framing EA principles, and which promote different EA principles to different extents. One way in which this fund payout looks less representative than it felt to me is that there was a grant application from an organisation mostly fundraising for global development and animal welfare, which didn’t get funded because it received funding from elsewhere while we were deliberating.

The scope of the EAIF is likely to continue overlapping in some uneasy ways with the other funds. My instinct would be not to be too worried about that, as long as we’re clear about what kinds of things we’re aiming at funding and do fund. But it would be interesting to hear other people’s hunches about the importance of the funds being mutually exclusive in terms of remit.

MichaelPlant @ 2021-06-04T16:28 (+10)

Thanks for writing this reply and, more generally, for an excellent write-up and selection of projects!

I'd be grateful if you could address a potential, related concern, namely that EAIF might end up as a sort of secondary LTFF, and that this would be to the detriment of non-longtermist applicants to the fund, as well as being, presumably, against the wishes of EAIF's current donors. I note the introduction says:

we generally strive to maintain an overall balance between different worldviews according to the degree they seem plausible to the committee.

and also that Buck, Max, and yourself are enthusiastic longtermists - I am less sure about Ben, and Jonas is a temporary member. Putting these together, combined with what you say about funding projects which could/should have applied to the LTFF, it would seem to follow that you could (/should?) put the vast majority of the EAIF towards longtermist projects.

Is this what you plan to do? If not, why not? If yes, do you plan to inform the current donors?

I emphasise I don't see any signs of this in the current round, nor do I expect you to do this. I'm mostly asking so you can set my mind at rest, not least because the Happier Lives Institute (disclosure: I am its Director) has been funded by EAIF and its forerunner, would likely apply again, and is primarily non-longtermist (although we plan to do some LT work - see the new research agenda).

If the EAIF radically changes direction, it would hugely affect us, as well as meaning more pluralistic/meta EA donors would lack an EA fund to donate to.

MichaelA @ 2021-06-04T17:08 (+8)

FWIW, a similar question was raised on the post about the new management teams, and Jonas replied there. I'll quote the question and response. (But, to be clear, I don't mean to "silence this thread", imply this has been fully covered already, or the like.)

Question:

It seems like everyone affiliated with the EA Infrastructure Fund is also strongly affiliated with longtermism. I admire that you are going to use guest managers to add more worldview diversity, but insofar as the infrastructure fund is funding a lot of the community building efforts for effective altruism writ large should we worry about the cause neutrality here?

Jonas's reply:

I agree that greater representation of different viewpoints on the EA Infrastructure Fund seems useful. We aim to add more permanent neartermist fund managers (not just guest managers). Quoting from above:

  • We’ve struggled to round out the EAIF with a balance of neartermism- and longtermism-focused grantmakers because we received a much larger number of strong applications from longtermist candidates. To better reflect the distribution of views in the EA community, we would like to add more neartermists to the EAIF and have proactively approached some additional candidates. That said, we plan to appoint candidates mostly based on their performance in our hiring process rather than their philosophical views.

Does that answer your question? Please let me know if you had already seen that paragraph and thought it didn't address your concern.

EDIT: Also note that Ben Kuhn is serving as a guest manager this round. He works on neartermist issues.

(Terminological nitpick: It seems this is not an issue of "cause neutrality" but one of representation of different viewpoints. See here –  the current fund managers are all cause-impartial; neartermist fund managers wouldn't be cause-agnostic either; and the fund is supporting cause-general and cause-divergent work either way.)

Michelle_Hutchinson @ 2021-06-05T15:42 (+9)

Thanks for finding and pasting Jonas' reply to this concern MichaelA. I don't feel I have further information to add to it. One way to frame my plans: I intend to fund projects which promote EA principles, where both 'promote' and 'EA principles' may be understood in a number of different ways. I can imagine the projects aiming at both the long-run future and at helping current beings. It's hard to comment in detail since I don't yet know what projects will apply. 

MichaelPlant @ 2021-06-05T18:23 (+9)

Hello Michelle. Thanks for replying, but I was hoping you would engage more with the substance of my question - your comment doesn't really give me any more information than I already had about what to expect.

Let me try again with a more specific case. Suppose you are choosing between projects A and B - perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF - the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good.

What would you do? I can't think of any other information you would need.

FWIW, I think you must pick A. I think we can assume donors expect the funds not to be overlapping - otherwise, why even have different ones? - and that they don't want their money to go to another fund's area - otherwise, that's where they would have put it. Hence, picking B would be tantamount to a breach of trust.

(By the same token, if I give you £50, ask you to put it in the collection box for a guide dog charity, and you agree, I don't think you should send the money to AMF, even if you think AMF is better. If you decide you want to spend my money somewhere else from what we agreed to, you should tell me and offer to return the money.)

Jonas Vollmer @ 2021-06-06T11:01 (+4)

Buck, Max, and yourself are enthusiastic longtermists (…) it would seem to follow you could (/should?) put the vast majority of the EAIF towards longtermist projects

In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them. See, e.g., Ajeya Cotra on the 80,000 Hours podcast. I personally feel excited to fund high-quality projects that develop or promote EA principles, whether they're longtermist or not. (And Michelle suggested this as well.) For the EAIF, I would evaluate a project like HLI based on whether it seems like it overall furthers the EA project (i.e., makes EA thinking more sophisticated, leads to more people making important decisions according to EA principles, etc.).

Let me try again with a more specific case. Suppose you are choosing between projects A and B - perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF - the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good.

FWIW, I think this example is pretty unrealistic, as I don't think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination. In practice, I would probably recommend a split between A and B (recommending my 'fair share' to B, and the rest to A); I would probably coordinate this explicitly with the other funds. I would probably also try to refer both A and B to other funders to ensure both get fully funded.

MichaelPlant @ 2021-06-06T14:39 (+2)

In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them

Thanks for this reply, which I found reassuring. 

FWIW, I think this example is pretty unrealistic, as I don't think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination

Okay, this is interesting and helpful to know. I'm trying to put my finger on the source of what seems to be a perspectival difference, and I wonder if it relates to the extent to which fund managers should be trying to instantiate donors' wishes vs fund managers allocating the money by their own lights of what's best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former, not least for long-term concerns about reputation, integrity, and people just taking their money elsewhere.

To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.

I suspect you would agree with this in principle: you wouldn't want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. the animal welfare fund recommended something that only benefitted humans even if they thought it was more cost-effective than something animal-focused.

However, I imagine you would disagree that this is a problem in practice, because donors expect there to be some overlap between funds and, in any case, fund managers will not recommend things wildly outside their fund's remit. (I am not claiming this is a problem in practice; my concern is that it may become one and I want to avoid that.)

I haven't thought lots about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive - this gives donors greater choice and minimises worries about permissible fund allocation. 

Jonas Vollmer @ 2021-06-06T18:02 (+6)

the extent to which fund managers should be trying to instantiate donors' wishes vs fund managers allocating the money by their own lights of what's best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former

This is a longer discussion, but I lean towards the latter, both because I think this will often lead to better decisions, and because many donors I've talked to actually want the fund managers to spend the money that way (the EA Funds pitch is "defer to experts" and donors want to go all in on that, with only minimal scope constraints).

To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.

I suspect you would agree with this in principle: you wouldn't want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. the animal welfare fund recommended something that only benefitted humans even if they thought it was more cost-effective than something animal-focused.

Yeah, I agree that all grants should be broadly in scope – thanks for clarifying.

I haven't thought lots about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive - this gives donors greater choice and minimises worries about permissible fund allocation. 

Fund scope definitions are always a bit fuzzy, many grants don't fit into a particular bucket very neatly, and there are lots of edge cases. So while I'm sympathetic to the idea in principle, I think it would be really hard to do in practice. See Max's comment.

Max_Daniel @ 2021-06-05T22:04 (+4)

I think we can assume donors expect the funds not to be overlapping - otherwise, why even have different ones?

I care about donor expectations, and so I'd be interested to learn how many donors have a preference for fund scopes to not overlap.

However, I'm not following the suggested reasoning for why we should expect such a preference to be common. I think people - including donors - choose between partly-but-not-fully overlapping bundles of goods all the time, and that there is nothing odd or bad about these choices, the preferences revealed by them, or the partial overlap. I might prefer ice cream vendor A over B even though there is overlap in flavours offered; I might prefer newspaper A over B even though there is overlap in topics covered (there might even be overlap in authors); I might prefer to give to nonprofit A over B even though there is overlap in the interventions they're implementing or the countries they're working in; I might prefer to vote for party A over B even though there is overlap between their platforms; and so on. I think all of this is extremely common, and that for a bunch of messy reasons it is not clearly the case that generally it would be best for the world or the donors/customers/voters if overlap was reduced to zero. 

I rather think it is the other way around: the only thing that would be clearly odd is if scopes were not merely overlapping but identical. (And even then there could be other reasons for why this makes sense, e.g., different criteria for making decisions within that scope.)

Larks @ 2021-06-06T03:21 (+12)

However, I'm not following the suggested reasoning for why we should expect such a preference to be common.

I definitely have the intuition the funds should be essentially non-overlapping. In the past I've given to the LTFF, and would be disappointed if it funded something that fit better within one of the other funds that I chose not to donate to.

With non-overlapping funds, donors can choose their allocation between the different areas (within the convex hull). If the funds overlap, donors can no longer donate to the extremal points. This is basically a tax on donors who want to e.g. care about EA Meta but not Longtermist things.
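To spell out the convex hull point, here is a stylized sketch (illustrative symbols of my own, not anything official). Suppose a donor splits a unit budget, giving a fraction $\alpha$ to a Meta fund and $1-\alpha$ to a Longtermist fund. If the scopes are disjoint, any allocation $(\alpha, 1-\alpha)$ with $\alpha \in [0,1]$ is achievable, including the pure allocations $(1,0)$ and $(0,1)$. But if the Meta fund regrants a fraction $\beta > 0$ of its money to longtermist projects, the donor's achievable meta share is at most

$$\alpha(1-\beta) \le 1-\beta < 1,$$

so the extremal point "100% meta" is unreachable no matter how the donor splits - that is the "tax".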

Consider the ice-cream case. Most ice-cream places will offer Vanilla, Chocolate, Strawberry, Mint etc. If instead they only offered different blends, someone who hated strawberry - or was allergic to chocolate - would have little recourse. By offering each as a separate flavour, they accommodate purists and people who want a mixture. Better for the place to offer each as a standalone option, and let donors/customers combine. In fact, for most products it is possible to buy 100% of one thing if you so desire. 

This approach is also common in finance; firms will offer e.g. a Tech Fund, a Healthcare Fund and so on, and let investors decide the relative ratio they want between them. This is also (part of) the reason for the decline of conglomerates - investors want to be able to make their own decisions about which business to invest in, not have it decided by managers.

Michelle_Hutchinson @ 2021-06-07T11:28 (+11)

I agree the finance example is useful. I would expect that in both our case and the finance case the best implementation isn't actually mutually exclusive funds, but funds with clear and explicit 'central cases' and assumptions, plus some sensible (and preferably explicit) heuristics to be used across funds like 'try to avoid multiple funds investing too much in the same thing'. 

That seems to be both because there will (as Max suggests) often be no fact of the matter as to which fund some particular company fits in, and also because the thing you care about when investing in a financial fund is in large part profit. In the case of the healthcare and tech fund, there will be clear overlaps - firms using tech to improve healthcare. If I were investing in one or other of these funds, I would be less interested in whether some particular company is more exactly described as a 'healthcare' or 'tech' company, and care more about whether they seem to be a good example of the thing I invested in. E.g., if I invested in a tech fund, presumably I think some things along the lines of 'technological advancements are likely to drive profit' and 'there are low-hanging fruit in terms of tech innovations to be applied to market problems'. If some company is doing good tech innovation and making profit in the healthcare space, I'd be keen for the tech fund to invest in it. I wouldn't be that fussed about whether the healthcare fund also invested in it. Though if the healthcare fund had invested substantially in the company, presumably the price would go up and it would look like a less good option for the tech fund and, by extension, for me. I'd expect it to be best for EA Funds to work similarly: set clear expectations around the kinds of thing each fund aims for and what assumptions it makes, and then worry about overlap predominantly insofar as there are large potential donations which aren't being made because some specific fund is missing (which might be a subset of a current fund, like 'non-longtermist EA infrastructure').

I would guess that EA Funds isn't a good option for people with very granular views about how best to do good. Analogously, if I had a lot of views about the best ways for technology companies to make a profit (for example, that technology in healthcare was a dead end), I'd often do better to fund individual companies than broad funds.

In case it doesn't go without saying, I think it's extremely important to use money in accordance with the (communicated) intentions with which it was solicited. It seems very important to me that EAs act with integrity and are considerate of others.

Max_Daniel @ 2021-06-06T11:56 (+8)

Thanks for sharing your intuition, which of course moves me toward thinking that preferences for less/no overlap are common.

I'm probably even more moved by your comparison to finance because I think it's a better analogy to EA Funds than the analogies I used in my previous comments.

However, I still maintain that there is no strong reason to think that zero overlap is optimal in some sense, or would widely be preferred. I think the situation is roughly:

  • There are first-principles arguments (e.g., your 'convex hull' argument) for why, under certain assumptions, zero overlap allows for optimal satisfaction of donor preferences.
    • (Though note that, due to standard arguments that, at least at first glance and under 'naive' assumptions, splitting small donations is suboptimal, I think it's at least somewhat unclear how significant the 'convex hull' point is in practice. I think there is some tension here, as the loss of the extremal points seems most problematic from a 'maximizing' perspective, while I think that donor preferences to split their giving across causes are better construed as the result of "intra-personal bargaining", and it's less clear to me how much that decision/allocation process cares about the 'efficiency loss' from moving away from the extremal points.)
  • However, reality is more messy, and I would guess that usually the optimum is somewhere on the spectrum between zero and full overlap, and that this differs significantly on a case-by-case basis. There are things pushing toward zero overlap, and others pushing toward more overlap (see e.g. the examples given for EA Funds below), and they need to be weighed up. It depends on things like transaction costs, principal-agent problems, the shape of market participants' utility functions, etc.
  • Here are some reasons that might push toward more overlap for EA Funds:
    • Efficiency, transaction/communication cost, etc., as mentioned by Jonas.
    • My view is that 'zero overlap' just fails to carve reality at its joints, and significantly so.
      • I think there will be grants that seem very valuable from, e.g., both a 'meta' and a 'global health' perspective, and that it would be a judgment call whether the grant fits 'better' with the scope of the GHDF or the EAIF. Examples might be pre-cause-neutral GWWC, a fundraising org covering multiple causes but de facto generating 90% of its donations in global health, or an organization that does research on both meta and global health but doesn't want to apply for 'restricted' grants.
      • If funders adopted a 'zero overlap' policy, grantees might worry that they will only be assessed along one dimension of their impact. So, e.g., an organization that does research on several causes might feel incentivized to split up, or to apply for 'restricted' grants. However, this can incur efficiency losses because sometimes it would in fact be better to have less internal separation between activities in different causes than required by such a funding landscape.
    • More generally, it seems to me that incomplete contracting is everywhere.
      • If I as a donor made an ex-ante decision that I want my donations to go to cause X but not Y, I think there realistically would be 'borderline cases' I simply did not anticipate when making that decision. Even if I wanted to, I probably could not tell EA Funds which things I do and don't want to give to based on their scope, and neither could EA Funds get such a fine-grained preference out of me if they asked me.
      • Similarly, when EA Funds provides funding to a grantee, we cannot anticipate all the concrete activities the grantee might want to undertake. The conditions implied by the grant application and any restrictions attached to the grant just aren't fine-grained enough. This is particularly acute for grants that support someone’s career – which might ultimately go in a different direction than anticipated. More broadly, a grantee will sometimes realize they might want to fund activities for which neither of us has previously thought about whether they're covered by the 'intentions' or 'spirit' of the grant, and this can include activities that would more clearly be in another fund's scope.
    • To drive home how strongly I feel about the import of the previous points, my immediate reaction to hearing "care about EA Meta but not Longtermist things" is literally "I have no idea what that's supposed to even mean". When I think a bit about it, I can come up with a somewhat coherent and sensible-seeming scope of "longtermist but not meta", but I have a harder time making sense of "meta but not longtermist" as a reasonable scope. I think if donors wanted that everything that's longtermist (whether meta or not) was handled by the LTFF, then we should clarify the LTFF's scope, remove the EAIF, and introduce a "non-longtermist EA fund" or something like that instead - as opposed to having an EAIF that funds things that overlap with some object-level cause areas but not others.
    • Some concrete examples:
      • Is 80k meta or longtermist? They have been funded by the EAIF before, but my understanding is that their organizational position is pro-longtermism, that many if not most of their staff are longtermist, and that this has significant implications for what they do (e.g., which sorts of people to advise, which sorts of career profiles to write, etc.).
      • What about Animal Advocacy Careers? If they wanted funding from EA Funds, should they get it from the AWF or the EAIF?
      • What about local EA groups? Do we have to review their activities and materials to understand which fund they should be funded by? E.g., I've heard that EA NYC is unusually focused on animal welfare (idk how strongly, and if this is still true), and I'm aware of other groups that seem pretty longtermist. Should such groups then not be funded by the EAIF? Should groups with activities in several cause areas and worldviews be co-funded by three or more funds, creating significant overhead?
      • What about CFAR? Longtermist? Meta?

--

Taking a step back, I think what this highlights is that feedback like this comment may well move me toward "be willing to incur a bit more communication cost to discuss where a grant fits best, and to move grants that arguably fit somewhat better with a different fund". But (i) I think where I'd end up is still a far cry from 'zero overlap', and (ii) I think that even if I made a good-faith effort it's unclear if I would better fulfil any particular donor's preference because, due to the "fund scopes don't carve reality at its joints" point, donors and me might make different judgment calls on 'where some grant fits best'.

In addition, I expect that different donors would disagree with each other about how to delineate scopes, which grants fits best where, etc.

This also means it would probably help me more to better satisfy donor preferences if I got specific feedback like "I feel grant X would have better fitted with fund Y" as opposed to more abstract preferences about the amount of overlap in fund scope. (Though I recognize that I'm kind of guilty of having started/fueled the discussion in more abstract terms.)

However, taking yet another step back, I think that when deciding about the best strategy for EA Funds/the EAIF going forward, there are stakeholders besides the donors whose interests matter as well: e.g., grantees, fund managers, and beneficiaries. As implied by some of my points above, I think there can be some tensions between these interests. How to navigate this is messy, and depends crucially on the answer to this question among other things.

My impression is that when the goal is to “maximize impact” – even within a certain cause or by the lights of a certain worldview – we’re less bottlenecked by funding than by high-quality applications, highly capable people ‘matched’ with highly valuable projects they’re a good fit for, etc. This makes me suspect that the optimal strategy would put somewhat less weight on maximally satisfying donor preferences – when they’re in tension with other desiderata – than might be the case in some other nonprofit contexts. So even if we got a lot of feedback along the lines of “I feel grant X would have fitted better with fund Y”, I’m not sure how much that would move the EAIF’s strategy going forward.

(Note that the above is about what ‘products’ to offer donors going forward. Separately from that, I think it’s of course very important to not be misleading, and to make a good-faith effort to use past donations in a way that is consistent with what we told them we’d do at the time. And these demands are ‘quasi-deontological’ and can’t be easily sacrificed for the sake of better meeting other stakeholders’ interests.)

weeatquince @ 2021-06-07T12:53 (+13)

Nothing I have seen makes me think the EAIF should change its decision criteria. It seems to be working very well and good stuff is getting funded. So don’t change that to address a comparatively very minor issue like this – that would be throwing the baby out with the bathwater!!
 

--
If you showed me the list here and said 'Which EA Fund should fund each of these?' I would have put the Lohmar and the CLTR grants (which both look like v good grants, and I'm glad they are getting funded) in the longtermist fund. Based on your comments above, you might have made the same call as well.

From an outside view, the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum, it could just be having someone look over the final decisions, see if any feel like they belong in a different fund, quickly double-check with the other fund's grantmakers that they have no strong objections, and then grant the money from the different pot. (You could even do that after the decision to grant has been communicated to applicants – no reason to hold things up; if the second fund objects, the grant can still be made by the first fund.)

And then all those dogmatic donors to the EAIF who don’t like longtermist stuff can go to bed happy, and all those dogmatic donors to the LTFF who don’t like meta stuff can go to bed happy, and everyone feels like their money is going where they expect it to go, etc. Which does matter a little bit, because as a donor you really need to trust that the money is going where it says on the tin and not to something else.

(But sure, if the admin costs here are actually really high or something, then it's not a big deal – it matters a little bit to some donors but is not the most important thing to get right.)

Max_Daniel @ 2021-06-07T19:25 (+7)

From an outside view, the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum, it could just be having someone look over the final decisions, see if any feel like they belong in a different fund, quickly double-check with the other fund's grantmakers that they have no strong objections, and then grant the money from the different pot. (You could even do that after the decision to grant has been communicated to applicants – no reason to hold things up; if the second fund objects, the grant can still be made by the first fund.)

Thank you for this suggestion. It makes sense to me that this is how the situation looks from the outside.

I'll think about the general issue and suggestions like this one a bit more, but currently don't expect large changes to how we operate. I do think this might mean that in future rounds there may be a similar fraction of grants that some donors perceive to fit better with another fund. I acknowledge that this is not ideal, but I currently expect it will seem best after considering the costs and benefits of alternatives.

So please view the following points as me trying to explain why I don't expect to adopt what may sound like a good suggestion, while still being appreciative of the feedback and suggestions.

I think based on my EA Funds experience so far, I'm less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between "EAIF managers think something is good to fund from a longtermist perspective" and "LTFF managers think something is good to fund from a longtermist perspective" (and vice versa for 'meta' grants) than you seem to expect. 

This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they're aligned on broad "EA principles" and other fundamental views. I have this view both because of some cases I've seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers, and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).

To be clear, I would expect decision-relevant disagreements for a minority of grants - but not a sufficiently clear minority that I'd be comfortable acting on "the other fund is going to make this grant" as a default assumption.

Your suggestion of retaining the option to make the grant through the 'original' fund would help with this, but not with the two following points. 

I think another issue is duplication of time cost. If the LTFF told me "here is a grant we want to make, but we think it fits better for the EAIF - can you fund it?", then I would basically always want to have a look at it. In maybe 50% [?, unsure] of cases this would only take me like 10 minutes, though the real attention + time cost would be higher. In the other 50% of cases I would want to invest at least another hour - and sometimes significantly more - assessing the grant myself. E.g., I might want to talk to the grantee myself or solicit additional references. This is because I expect that donors and grantees would hold me accountable for that decision, and I'd feel uncomfortable saying "I don't really have an independent opinion on this grant, we just made it b/c it was recommended by the LTFF".

(In general, I worry that "quickly double-checking" something is close to impossible between two groups of 4 or so people, all of whom are very opinionated and can't necessarily predict each other's views very well, are in parallel juggling dozens of grant assessments, and most of whom are very time-constrained and are doing all of this next to their main jobs.)

A third issue is that increasing the delay between the time of a grant application and the time of a grant payout is somewhat costly. So, e.g., inserting another 'review allocation of grants to funds' step somewhere would somewhat help with the time & attention cost by bundling all scoping decisions together; but it would also mean a delay of potentially a few days or even more given fund managers' constrained availabilities. This is not clearly prohibitive, but significant since I think that some grantees care about the time window between application and potential payments being short.

However, there may be some cases where grants could be quickly transferred (e.g., if for some reason managers from different funds had been involved in a discussion anyway), or there may be other, less costly processes for how to organize transfers. This is definitely something I will be paying a bit more attention to going forward, but for the reasons explained in this and other comments I currently don't expect significant changes to how we operate.

weeatquince @ 2021-06-07T20:30 (+9)

Thank you so much for your thoughtful and considered reply.

I think based on my EA Funds experience so far, I'm less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between "EAIF managers think something is good to fund from a longtermist perspective" and "LTFF managers think something is good to fund from a longtermist perspective" (and vice versa for 'meta' grants) than you seem to expect. 

This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they're aligned on broad "EA principles" and other fundamental views. I have this view both because of some cases I've seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers, and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).

 

Sorry to change topic, but this is super fascinating and more interesting to me than questions of fund admin time (however much I like discussing organisational design, I am happy to defer to you / Jonas / etc. on whether the admin cost is too high – ultimately only you know that).

Why would there be so much disagreement (so much that you would routinely want to veto each other's decisions if you had the option)? It seems plausible that if there is such a level of disagreement, then maybe:

  1. One fund is making quite poor decisions AND/OR
  2. There is significant potential to use consensus decisions making tools as a large group to improve decision quality AND/OR
  3. There are some particularly interesting lessons to be learned by identifying the cruxes of these disagreements.

Just curious and typing up my thoughts. Not expecting good answers to this.

Max_Daniel @ 2021-06-07T21:03 (+5)

I think all funds are generally making good decisions.

I think a lot of the effect is just that making these decisions is hard, and so that variance between decision-makers is to some extent unavoidable. I think some of the reasons are quite similar to why, e.g., hiring decisions, predicting startup success, high-level business strategy, science funding decisions, or policy decisions are typically considered to be hard/unreliable. Especially for longtermist grants, on top of this we have issues around cluelessness, potentially missing crucial considerations, sign uncertainty, etc.

I think you are correct that both of the following are true:

  • There is potential of improving decision quality by spending time on discussing diverging views, improving the way we aggregate opinions to the extent they still differ after the amount of discussion that is possible, and maybe by using specific 'decision making tools' (e.g., certain ways of a structured discussion + voting).
  • There are interesting lessons to be learned by identifying cruxes. Some of these lessons might directly improve future decisions, others might be valuable for other reasons - e.g., generating active grantmaking ideas or cruxes/results being shareable and thereby being a tiny bit epistemically helpful to many people.

I think a significant issue is that both of these cost time - both identifying how to improve in these areas and then implementing the improvements - and time is a very scarce resource for fund managers.

I don't think it's obvious whether at the margin the EAIF committee should spend more or less time to get more or fewer benefits in these areas. Hopefully this means we're not too far away from the optimum. 

I think there are different views on this within EA Funds (both within the EAIF committee, and potentially between the average view of the EAIF committee and the average view of the LTFF committee - or at least this is suggested by revealed preferences, as my loose impression is that LTFF fund managers spend more time in discussions with each other). Personally, I actually lean toward spending less time and doing less aggregation of opinions across fund managers - but I think currently this view isn't sufficiently widely shared that I expect it to be reflected in how we're going to make decisions in the future.

But I also feel a bit confused, because some people (e.g., some LTFF fund managers, Jonas) have told me that spending more time discussing disagreements seemed really helpful to them, while I feel like my experience with this and my inside-view prediction of what spending more time on discussions would look like make me expect less value. I don't really know why that is - it could be that I'm just bad at getting value out of discussions, or updating my views, or something like that.

weeatquince @ 2021-06-08T10:35 (+4)

I think a significant issue is that both of these cost time

I am always amazed at how much you fund managers all do given this isn't your paid job!
 

I don't think it's obvious whether at the margin the EAIF committee should spend more or less time to get more or fewer benefits in these areas

Fair enough. FWIW my general approach to stuff like this is not to aim for perfection but to aim for each iteration/round to be a little bit better than the last.
 

... it could be that I'm just bad at getting value out of discussions, or updating my views, or something like that.

That is possible. But also possible that you are particularly smart and have well thought-out views and people learn more from talking to you than you do from talking to them!
(And/or just that everyone is different and different ways of learning work for different people)

Larks @ 2021-06-07T19:51 (+6)

I think based on my EA Funds experience so far, I'm less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between "EAIF managers think something is good to fund from a longtermist perspective" and "LTFF managers think something is good to fund from a longtermist perspective" (and vice versa for 'meta' grants) than you seem to expect. 

This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they're aligned on broad "EA principles" and other fundamental views. I have this view both because of some cases I've seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers, and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).

Thanks for writing up this detailed response. I agree with your intuition here that 'review, refer, and review again' could be quite time consuming.

However, I think it's worth considering why this is the case. Do we think that the EAIF evaluators are as qualified to judge primarily-longtermist activities as the LTFF people, and that the differences of view are basically noise? If so, it seems plausible to me that the EAIF evaluators should be able to unilaterally make disbursements from the LTFF money. In this setup, the specific fund you apply to is really about your choice of evaluator, not about your choice of donor, and the fund you donate to is about your choice of cause area, not your choice of evaluator-delegate.

In contrast, if the EAIF people are not as qualified to judge primarily-longtermist (or primarily animal rights, etc.) projects as the specialised funds' evaluators, then they should probably refer the application early on in the process, prior to doing detailed due diligence etc.

Max_Daniel @ 2021-06-07T18:41 (+2)

If you showed me the list here and said 'Which EA Fund should fund each of these?' I would have put the Lohmar and the CLTR grants (which both look like v good grants, and I'm glad they are getting funded) in the longtermist fund. Based on your comments above, you might have made the same call as well.

Thank you for sharing - as I mentioned I find this concrete feedback spelled out in terms of particular grants particularly useful.

[ETA: btw I do think part of the issue here is an "object-level" disagreement about where the grants best fit - personally, I definitely see why among the grants we've made they are among the ones that seem 'closest' to the LTFF's scope; but I don't personally view them as clearly being more in scope for the LTFF than for the EAIF.]

weeatquince @ 2021-06-07T20:06 (+4)

[ETA: btw I do think part of the issue here is an "object-level" disagreement about where the grants best fit - personally, I definitely see why among the grants we've made they are among the ones that seem 'closest' to the LTFF's scope; but I don't personally view them as clearly being more in scope for the LTFF than for the EAIF.]

Thank you Max. I guess the interesting question then is why we think different things. Is it just a natural case of different people thinking differently, or have I made a mistake, or is there some way the funds could communicate better?

One way to consider this might be to look at just the basic info / fund scope on both the EAIF and LTFF pages and ask: "if the man on the Clapham omnibus only read this information and the descriptions of these funds, where would he think these grants would sit?"
 

Jonas Vollmer @ 2021-06-06T20:34 (+8)

A further point is donor coordination / moral trade / fair-share giving. Treating it as a tax (as Larks suggests) could often amount to defecting in an iterated prisoner's dilemma between donors who care about different causes. E.g., if the EAIF funded only one org, which raised $0.90 for MIRI, $0.90 for AMF, and $0.90 for GFI for every dollar spent, this approach would lead to it not getting funded (each single-cause donor sees only $0.90 per dollar for their own cause, although the org generates $2.70 in total), even though co-funding with donors who care about other cause areas would be a substantially better approach.

You might respond that there's no easy way to verify whether others are cooperating. I might respond that you can verify how much money the fund gets in total and can ask EA Funds about the funding sources. (Also, I think that acausal cooperation works in practice, though perhaps the number of donors who think about it in this way is too small for it to work here.)

Larks @ 2021-06-07T20:07 (+2)

I'm afraid I don't quite understand why such an org would end up unfunded. Such an organisation is not longtermist or animal rights or global poverty specific, and hence seems to fall within the natural remit of the Meta/Infrastructure fund. Indeed according to the goal of the EAIF it seems like a natural fit:

While the other three Funds support direct work on various causes, this Fund supports work that could multiply the impact of direct work, including projects that provide intellectual infrastructure for the effective altruism community, run events, disseminate information, or fundraise for effective charities. [emphasis added]

Nor would this be disallowed by weeatquince's policy, as no other fund is more appropriate than EAIF:

we aim for the funds to be mutually exclusive. If multiple funds would fund the same project we make the grant from whichever of the Funds seems most appropriate to the project in question.

MichaelPlant @ 2021-06-07T22:41 (+4)

Just a half-formed thought on how something could be "meta but not longtermist", because I thought that was a conceptually interesting issue to unpick.

I suppose one could distinguish between "meta" meaning (1) doing non-object-level work or (2) benefiting more than one value-bearer group, where the classic, not-quite-mutually-exclusive three options for value-bearer groups are (1) near-term humans, (2) animals, and (3) far-future lives.

If one is thinking the former way, something is meta to the degree it does non-object-level vs object-level work (I'm not going to define these), regardless of what domain it works towards. In this sense, 'meta' and (e.g.) 'longtermist' are independent: you could be one, or the other, both, or neither. Hence, if you did non-object-level work that wasn't focused on the long term, you would be meta but not longtermist (although it might be more natural to say "meta and not longtermist" as there is no tension between them).

If one is thinking the latter way, one might say that an org is less "meta", and more "non-meta", the greater the fraction of its resources intentionally spent to benefit just one value-bearer group. Here "meta" and "non-meta" are mutually exclusive and a matter of degree. A "non-meta" org is one that spends, say, more than 50% of its resources aimed at one group. The thought here is that, on this framework, Animal Advocacy Careers and 80k are not meta, whereas, say, GWWC is meta. Thinking this way, something is meta but not longtermist if it primarily focuses on non-longtermist stuff.

(In both cases, we will run into familiar issues about making precise what an agent 'focuses on' or 'intends'.)

MichaelPlant @ 2021-06-04T17:15 (+4)

Yes, I read that and raised this issue privately with Jonas.

weeatquince @ 2021-06-04T22:08 (+4)

Thank you Michelle.

Really useful to hear. I agree with all of this.

It seems, from what you and Jonas are saying, that the fund scopes currently overlap, so some grants could be covered by multiple funds; and even if a grant is arguably more appropriate to one fund than another, it tends to get funded by whichever fund gets to it first, as the admin burden of shifting it to another fund is currently large.

That all seems pretty reasonable.

I guess my suggestion would be that I would be excited to see these kinks minimised over time and funding come from whichever pool seems most appropriate. That overlap is seen as a bug to be ironed out, not a feature.

FWIW I think you and all the other fund managers made really, really good decisions. I am not just saying that to counteract saying something negative - I am genuinely very excited by how much great stuff is getting funded by the EAIF. Well done.

(EDIT: PS. My reply to Ben below might be useful context too: https://forum.effectivealtruism.org/posts/zAEC8BuLYdKmH54t7/ea-infrastructure-fund-may-2021-grant-recommendations?commentId=qHMosynpxRB8hjycp#sPabLWWyCjxWfrA6E

Basically, a more tightly defined fund scope could be nice and would make things easier for donors but harder for the Funds, so there is a trade-off.)


weeatquince @ 2021-06-05T01:10 (+2)

the importance of the funds being mutually exclusive in terms of remit.


I lean (as you might guess) towards the funds being mutually exclusive. The basic principle is that, in general, the narrower the scope of each fund, the more control donors have over where their funds go.

If, for anything where there is overlap, the fund that seems most appropriate pays out, then you would expect:

  • More satisfied donors. You would expect the average share of grants that donors strongly approve of to go up.
  • More donations. As well as the above satisfaction point, if donors know more precisely how their money will be spent, then they will have more confidence that giving to the fund makes sense compared to some other option.
  • Theoretically better donations? If you think donors' wishes are a good measure of expected impact, it can arguably improve the targeting of funds to ensure amounts moved are closer to donors' wishes (although maybe it makes the relationship between donors and specific fund managers weaker, as there might be crossover, with fund managers moving money across multiple of the Funds).

None of these are big improvements, so maybe not a priority, but the cost is also small. (I cannot speak for CEA, but as a charity trustee we regularly go out of our way to make sure we are meeting donors' wishes, regranting money hither and thither, and it has not been a big time cost.)

 

Dicentra @ 2021-06-05T03:23 (+15)

OTOH my impression is that the Funds aren't very funding-constrained, so it might not make sense to heavily weigh your first two reasons (though all else equal, donor satisfaction and increased donation quantity seem good).

I also think there are just a lot of grants that legitimately have both a strong meta/infrastructure benefit and also an object-level benefit, and it seems kind of unfair to grantees that provide multiple kinds of value that they can still only be considered from one funding perspective / with focus on one value proposition. If a grantee is both producing some kind of non-meta research and also doing movement-building, I think it deserves the chance to maybe get funded based on the merits of either of those value adds.

Jonas Vollmer @ 2021-06-06T11:03 (+2)

Yeah, I agree with Dicentra. Basically I'm fine if donors don't donate to the EA Funds for these reasons; I think it's not worth bothering (time cost is small, but benefit even smaller). 

There's also a whole host of other issues; Max Daniel is planning to post a comment reply to Larks' above comment that mentions those as well. Basically it's not really possible to clearly define the scope in a mutually exclusive way.

weeatquince @ 2021-06-07T12:27 (+6)

Basically it's not really possible to clearly define the scope in a mutually exclusive way.


Maybe we are talking past each other, but I was imagining something easy, like just defining the scope as mutually exclusive. You write: "we aim for the funds to be mutually exclusive. If multiple funds would fund the same project we make the grant from whichever of the Funds seems most appropriate to the project in question."

Then, before you grant money, you look over and see if anything passed by one fund looks to you like it is more for another fund. If so (unless the fund managers of the second fund veto the switch), you fund the project with money from the second fund.

Sure, it might be a very minor admin hassle, but it helps make sure donors' wishes are met and avoids the confusion of donors saying: hold on a minute, why am I funding this? I didn't expect that.

This is not a huge issue, so maybe not the top of your to-do list. And you are the expert on how much of an admin burden something like this is and whether it is worth it, but from the outside it seems very easy and the kind of action I would just naturally expect of a fund / charity.

[minor edits]

weeatquince @ 2021-06-05T22:10 (+2)

It also makes it easier for applicants to know which fund to apply to (or apply to first).

MichaelA @ 2021-06-04T17:12 (+2)

(FWIW, that all makes sense and seems like a good approach to me.)

weeatquince @ 2021-06-05T00:46 (+17)

Retracted:

Upon reflection and reading the replies, I think perhaps I was underestimating how broad this Fund's scope is (and perhaps was too keen to find fault).

I do think there could be advantages for donors of narrowing the scope of this Fund / limiting overlap between Funds (see other comments), but recognise there are costs to doing that.

All my positive comments remain, and it is great to see so much good stuff get funded.

Max_Daniel @ 2021-06-03T09:11 (+7)

Hi Sam, thank you for this feedback. Hearing such reactions is super useful. 

Could you tell us more about which specific grants you perceive as potentially "better suited to other funds"? I have some guesses (e.g. I would have guessed you'd say CLTR), but I would still find it helpful to see if our perceptions match here. Feel free to send me a PM on that if that seemed better.

MichaelA @ 2021-06-03T13:53 (+6)

FWIW, I was also confused in a similar way by:

  • The CLTR grant
  • The Jakob Lohmar grant
  • Maybe the Giving Green grant

If someone had asked me beforehand which fund would evaluate CLTR for funding, I would've confidently said LTFF.

For the other two, I'd have been uncertain, because:

  • The Lohmar grant is for a project that's not necessarily arguing for longtermism, but rather working out how longtermist we should be and when, how the implications of that differ from what we'd do for other reasons, etc.
    • But GPI and Hilary Greaves seem fairly sold on longtermism, and I expect this research to mostly push in more longtermism-y directions
    • And I'd find it surprising if the Infrastructure Fund funded something about how much to care about insects as compared to humans - that's likewise not necessarily going to conclude that we should update towards more focus on animal welfare, but it still seems a better fit for the Animal Welfare Fund
  • Climate change isn't necessarily strongly associated with longtermism within EA
  • I guess Giving Green could also be seen as aimed at bringing more people into EA by providing people who care about climate change with EA-related products and services they'd find interesting?
    • But this report doesn't explicitly state that that's why the EAIF is interested in this grant, and I doubt that that's Giving Green's own main theory of change

But this isn't to say that any of those grants seem bad to me. I was just somewhat surprised they were funded by the EAIF rather than the LTFF (at least in the case of CLTR and Jakob Lohmar).

Jonas Vollmer @ 2021-06-04T15:52 (+9)

A big part of the reason was simply that CLTR and Jakob Lohmar happened to apply to the EAIF, not the LTFF. Referring grants takes time (not a lot, but I don't think doing such referrals is a particularly good use of time if the grants are in scope for both funds). This is partly explained in the introduction of the grant report.

MichaelPlant @ 2021-06-04T16:49 (+4)

I recognise there is admin hassle. Although, as I note in my other comment, this becomes an issue if the EAIF in effect becomes a top-up for another fund.

Jonas Vollmer @ 2021-06-06T11:13 (+5)

FWIW, it's not just admin hassle but also mental attention for the fund chairs that's IMO much better spent on improving their decisions. I think there are large returns from fund managers focusing fully on whether a grant is a good use of money or on how to make the grantees even more successful. I therefore think the costs of having to take into account (likely heterogeneous) donor preferences when evaluating specific grants are quite high, and so as long as a majority of assessed grants seems to be somewhat "in scope" it's overall better if fund managers can keep their head free from scope concerns and other 'meta' issues.

I believe that we can do the most good by attracting donors who endorse the above. I'm aware this means that donors with different preferences may want to give elsewhere.

(Made some edits to the above comment to make it less disagreeable.)

Linch @ 2021-06-05T21:56 (+2)

I think of climate change (at least non-extreme climate change) as more of a global poverty/development issue, for what it's worth.

Jonas Vollmer @ 2021-06-04T15:43 (+6)

without an explanation of why they are being funded from the Infrastructure Fund

In the introduction, we wrote the following. Perhaps you missed it? (Or perhaps you were interested in a per-grant explanation, or the explanation seemed insufficient to you?)

Some of the grants are oriented primarily towards causes that are typically prioritized from a ‘non-longtermist’ perspective; others primarily toward causes that are typically prioritized for longtermist reasons. The EAIF makes grants towards longtermist projects if a) the grantseeker decided to apply to the EAIF (rather than the Long-Term Future Fund), b) the intervention is at a meta level or aims to build infrastructure in some sense, or c) the work spans multiple causes (whether the case for them is longtermist or not). We generally strive to maintain an overall balance between different worldviews according to the degree they seem plausible to the committee.

weeatquince @ 2021-06-04T21:45 (+4)

You are correct – sorry I missed that.

I agree with Michael above that a) is a legitimate administrative hassle, but it seems like the kind of thing I would be excited to see resolved when you have capacity to think about it. Maybe each fund could have some discretionary money from the other fund.
 
An explanation per grant would be super too, as and where such a thing is possible!

(EDIT: PS. My reply to Ben above might be useful context too: https://forum.effectivealtruism.org/posts/zAEC8BuLYdKmH54t7/ea-infrastructure-fund-may-2021-grant-recommendations?commentId=qHMosynpxRB8hjycp#sPabLWWyCjxWfrA6E)

Larks @ 2021-06-06T03:38 (+2)

I don't suppose you would mind clarifying the logical structure here:

The EAIF makes grants towards longtermist projects if a) the grantseeker decided to apply to the EAIF (rather than the Long-Term Future Fund), b) the intervention is at a meta level or aims to build infrastructure in some sense, or c) the work spans multiple causes (whether the case for them is longtermist or not).

My intuitive reading of this (based on the commas, the 'or', and the absence of 'and') is:

a OR b OR c 

i.e., satisfying any one of the three suffices. But I'm guessing that what you meant to write was

a AND (b OR c)

which would seem more sensible?
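
(A minimal sketch of the difference, in Python; the function names and the example values are hypothetical stand-ins for Larks' a, b, and c, not anything from the grant report:)

```python
# Hypothetical sketch of the two readings of the EAIF scope criteria.
# a = the grantseeker applied to the EAIF (rather than the LTFF)
# b = the intervention is meta-level / builds infrastructure
# c = the work spans multiple causes

def reading_one(a: bool, b: bool, c: bool) -> bool:
    """a OR b OR c: satisfying any one criterion suffices."""
    return a or b or c

def reading_two(a: bool, b: bool, c: bool) -> bool:
    """a AND (b OR c): applying to the EAIF is necessary, plus b or c."""
    return a and (b or c)

# The two readings disagree whenever a differs from (b OR c), e.g. a
# meta-level project (b) whose application went to the LTFF (not a):
print(reading_one(False, True, False))  # True  -> in scope on reading one
print(reading_two(False, True, False))  # False -> out of scope on reading two
```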

Jonas Vollmer @ 2021-06-06T11:18 (+2)

Yeah, the latter is what I meant to say, thanks for clarifying.

weeatquince @ 2021-06-07T08:16 (+4)

FWIW I had assumed the former was the case. Thank you for clarifying.

I had assumed the former because

  • it felt like the logical reading of the phrasing of the above
  • my read of the things funded in this round was that some of them don't appear to satisfy b OR c (unless b and c are interpreted very broadly).
Ben Pace @ 2021-06-03T19:39 (+6)

The inclusion of things on this list that might be better suited to other funds (e.g. the LTFF) without an explanation of why they are being funded from the Infrastructure Fund makes me slightly less likely in future to give directly to the Infrastructure Fund and slightly more likely to just give to one of the bigger meta orgs you give to (like Rethink Priorities).
 

I think that different funders have different tastes, and if you endorse their tastes you should consider giving to them. I don't really see a case for splitting responsibilities like this. If Funder A thinks a grant is good, Funder B thinks it's bad, but it's nominally in Funder B's purview, this just doesn't seem like a strong argument against Funder A doing it if it seems like a good idea to them. What's the argument here? Why should Funder A not give a grant that seems good to them?

MichaelA @ 2021-06-04T05:50 (+31)

I find this perspective (and its upvotes) pretty confusing, because:

  • I'm pretty confident that the majority of EA Funds donors choose which fund to donate to based far more on the cause area than the fund managers' tastes
    • And I think this really makes sense; it's a better idea to invest time in forming views about cause areas than in forming views about specifically the funding tastes of Buck, Michelle, Max, Ben, and Jonas, and then also the fund management teams for the other 3 funds.
  • The EA Funds pages also focus more on the cause area than on the fund managers.
  • The fund manager team regularly changes composition at least somewhat.
  • Some fund managers have not done any grantmaking before, at least publicly, so people won't initially know their fund tastes.
    • In this particular case, I think all fund managers except Jonas haven't done public grantmaking before.

I think a donation to an EA Fund is typically intended to delegate to some fund managers to do whatever is best in a given area, in line with the principles described on the EA Funds page. It is not typically intended to delegate to those fund managers to do whatever they think is best with that money, except if we assume that what they think is best will always be something that is in that area and is in line with those principles described (in which case it would still be problematic for them to donate in other ways). 

Likewise, if you pay a contractor to do X and then instead they do Y, this may well be problematic even if they think doing Y is better and even if they might have good judgement. And especially so if their ad for their services focused on X rather than on their individual track record, tastes, or good judgement. 

To be clear, this comment isn't intended as sharp criticism of any choices the EAIF made this round. I also didn't donate to the EAIF this round and lean quite longtermist myself, so I don't personally have any sense of my donation being used in a way I don't like, or something like that. I'm just responding to your comment.

---

Another way to put this is that if you don't really see "a case for splitting responsibilities like this", then I think that means you don't see a case for the current set up of the EA Funds (at least as a place for you specifically to donate), and so you're not the relevant target audience? 

(I feel like this will sound rude or something in written text without tone - apologies in advance if it does sound that way; that's not my intent.)

Ben Pace @ 2021-06-04T06:13 (+4)

Yeah, that's a good point: donors who don't look at the grants (or know the individuals on the team much) will be confused if the team does things outside its purpose (e.g. donations to GiveDirectly, or a random science grant that just sounds cool). But I guess all of these grants seem to me fairly within the purview of EA Infrastructure?

The one-line description of the fund says:

The Effective Altruism Infrastructure Fund aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.

I expect that for all of these grants the grantmakers think that they're orgs that either "use the principles of effective altruism" or help others do so.

I think I'd suggest instead that weeatquince name some specific grants and ask the fund managers the basic reason for why those grants seem to them like they help build EA Infrastructure (e.g. ask Michelle why CLTR seems to help things according to her) if that's unclear to weeatquince.

MichaelA @ 2021-06-04T06:42 (+5)

Yeah, good point that these grants do seem to all fit that one-line description. 

That said, I think that probably most or all grants from all 4 EA Funds would fit that description - I think that that one-line description should probably be changed to make it clearer what's distinctive about the Infrastructure Fund. (I acknowledge I've now switched from kind-of disagreeing with you to kind-of disagreeing with that part of how the EAIF present themselves.)

I think the rest of the "Fund Scope" section helps clarify the distinctive scope:

While the other three Funds support direct work on various causes, this Fund supports work that could multiply the impact of direct work, including projects that provide intellectual infrastructure for the effective altruism community, run events, disseminate information, or fundraise for effective charities. This will be achieved by supporting projects that:

  • Directly increase the number of people who are exposed to principles of effective altruism, or develop, refine or present such principles
  • Support the recruitment of talented people who can use their skills to make progress on important problems
  • Aim to build a global community of people who use principles of effective altruism as a core part of their decision-making process when deciding how they can have a positive impact on the world
  • Conduct research into prioritizing between or within different cause areas
  • Raise funds or otherwise support other highly-effective projects
  • Improve community health by promoting healthy norms for interaction and discourse, or assist in resolving grievances

Re-reading that, I now think Giving Green clearly does fit under EAIF's scope ("Raise funds or otherwise support other highly-effective projects"). And it seems a bit clearer why the CLTR and Jakob Lohmar grants might fit, since I think they partly target the 1st, 3rd, and 4th of those things.

Though it still does seem to me like those two grants are probably better fits for LTFF.

And I also think "Conduct research into prioritizing [...] within different cause areas" seems like a better fit for the relevant cause area. E.g., research about TAI timelines or the number of shrimp there are in the world should pretty clearly be under the scope of the LTFF and AWF, respectively, rather than EAIF. (So that's another place where I've accidentally slipped into providing feedback on that fund page rather than disagreeing with you specifically.)

Ben Pace @ 2021-06-04T08:37 (+5)

Though it still does seem to me like those two grants are probably better fits for LTFF.

But this line is what I am disagreeing with. I'm saying there's a binary of "within scope" or not, and then otherwise it's up to the fund to fund what they think is best according to their judgment about EA Infrastructure or the Long-Term Future or whatever. Do you think that the EAIF should be able to tell the LTFF to fund a project because the EAIF thinks it's worthwhile for EA Infrastructure, instead of using the EAIF's money? Alternatively, if the EAIF thinks something is worth money for EA Infrastructure reasons, if the grant is probably more naturally under the scope of "Long-Term Future", do you think they shouldn't fund the grantee even if LTFF isn't going to either?

MichaelA @ 2021-06-04T08:56 (+3)

Ah, this is a good point, and I think I understand where you're coming from better now. Your first comment made me think you were contesting the idea that the funds should each have a "scope" at all. But now I see it's just that you think the scopes will sometimes overlap, and that in those cases the grant should be able to be evaluated and funded by any fund it's within-scope for, without consideration of which fund it's more centrally within scope for. Right?

I think that sounds right to me, and that argument + re-reading the "Fund Scope" section have together made me think that the EAIF granting to CLTR and Jakob Lohmar actually makes sense. I.e., I think I've now changed my mind and become less confused about those decisions.

Though I still think it would probably make sense for Fund A to refer an application to Fund B if the project seems more centrally in-scope for Fund B, and let Fund B evaluate it first. Then if Fund B declines, Fund A could do their own evaluation and (if they want) fund the project, though perhaps somewhat updating negatively based on the info that Fund B declined funding. (Maybe this is roughly how it already works. And also I haven't thought about this until writing this comment, so maybe there are strong arguments against this approach.)

(Again, I feel I should state explicitly - to avoid anyone taking this as criticism of CLTR or Jakob - that the issue was never that I thought CLTR or Jakob just shouldn't get funding; it was just about clarity over what the EAIF would do.)

Jonas Vollmer @ 2021-06-04T15:48 (+5)

Though I still think it would probably make sense for Fund A to refer an application to Fund B if the project seems more centrally in-scope for Fund B, and let Fund B evaluate it first.

In theory, I agree. In practice, this shuffling around of grants costs some time (both in terms of fund manager work time, and in terms of calendar time grantseekers spend waiting for a decision), and I prefer spending that time making a larger number of good grants rather than on minor allocation improvements.

MichaelA @ 2021-06-04T17:11 (+2)

(That seems reasonable - I'd have to have a clearer sense of relevant time costs etc. to form a better independent impression, but that general argument + the info that you believe this would overall not be worthwhile is sufficient to update me to that view.)

Ben Pace @ 2021-06-04T09:07 (+4)

Yeah, I think you understand me better now.

And btw, I think if there are particular grants that seem out of scope for a fund, it seems totally reasonable to ask them for their reasoning and update positively/negatively on them if the reasoning does/doesn't check out. And it's also generally good to question the reasoning of a grant that doesn't make sense to you.

weeatquince @ 2021-06-04T23:36 (+5)

Tl;dr: I was to date judging the funds by the cause area rather than the fund managers' tastes, and this has left me a bit surprised. I think in future I will judge more based on the fund managers' tastes.
 

Thank you Ben – I agree with all of this.

Maybe I was just confused by the fund scope.

The fund scope is broad and that is good. The webpage says the scope includes: "Raise funds or otherwise support other highly-effective projects" which basically means everything! And I do think it needs to be broad – for example to support EAs bringing EA ideas into new cause areas.

But maybe in my mind I had classed it as something like "EA meta" or as "everything that is EA aligned that would not be better covered by one of the other 3 funds" or similar. But maybe that was me reading too much into things and the scope is just "anything and everything that is EA aligned". 

It is not bad that it has a broader scope than I had realised, and maybe the fault is mine, but I guess my reaction to seeing that the scope is different from what I had realised is to take a step back and reconsider whether my giving to date is going where I expect.

To date I have been judging the EAIF as the easy option when I am not sure where to give, and have been judging the fund mostly by the cause area it gives to.

I think taking a step back will likely involve spending an hour or two going through all of the things given in recent fund rounds, thinking about how much I agree with each one, then deciding whether I think the EAIF is the best place for me to give, or whether I think I can do better giving to one of the existing EA meta orgs that takes donations. (Probably I should have been doing this already, so maybe a good nudge.)

Does that make sense / answer your query?

– – 

If the EAIF had a slightly narrower, more well-defined scope, that could make givers slightly more confident about where their funds will go, but it has a cost in terms of admin time and flexibility for the Funds. So there is a trade-off.

My gut feeling is that in the long run the trade-off is worth it, but maybe feedback from other donors would say otherwise.

Linch @ 2021-06-05T22:50 (+9)

$5,000 to the Czech Association for Effective Altruism to give away EA-related books

Concretely, which books are they giving away? The most obvious book to give away (Doing Good Better) is more than 5 years old at this point, which is roughly half the age of the EA movement, and thus might be expected not to accurately represent 2021 EA thought.

Jiri_Nadvornik @ 2021-06-07T10:54 (+13)

It depends on the target audience. I guess that besides DGB it will also include The Precipice (both DGB and The Precipice have Czech translations), Human Compatible, Superintelligence, and maybe even Scout Mindset or Rationality from AI to Zombies.

Linch @ 2021-06-07T17:52 (+11)

Thanks for the reply!

Small note: AI safety researchers I've talked to (n~=5) have almost universally recommended Brian Christian's The Alignment Problem over Human Compatible. (I've started reading both but have not finished either.)

I also personally got a bunch of value from Parfit's Reasons and Persons and Larissa MacFarquhar's Strangers Drowning, but different people's tastes here can be quite different. Both books predated DGB I think, but because they're from fields that aren't as young or advancing as fast as EA, I'd expect them to be less outdated.

Max_Daniel @ 2021-06-07T18:53 (+7)

(FWIW, I personally love Reasons and Persons but I think it's much more "not for everyone" than most of the other books Jiri mentioned. It's just too dry, detailed, abstract, and has too small a density of immediately action-relevant content.

I do think it could make sense as a 'second book' for people who like that kind of philosophy content and know what they're getting into.)

Linch @ 2021-06-07T19:43 (+2)

I agree that it's less readable than all the books Jiri mentioned except maybe Superintelligence.

Pro-tip for any aspiring Reasons-and-Persons-readers in the audience: skip (or skim) sections I and II. Sections III (personal identity) and IV (population ethics) are where the meat is, especially section III.

Max_Daniel @ 2021-06-07T21:39 (+4)

FWIW, I actually (and probably somewhat iconoclastically) disagree with this. :P

In particular, I think Part I of Reasons and Persons is underrated, and contains many of the most useful ideas. E.g., it's basically the best reading I know of if you want to get a deep and principled understanding of why 'naive consequentialism' is a bad idea, but also why worries about naive applications of consequentialism, the demandingness objection, and many other popular objections to consequentialism don't succeed at undermining it as an ultimate criterion of rightness.

(I also expect that it is the part that would most likely be perceived as pointless hair-splitting.)

And I think the most important thought experiment in Reasons and Persons is not the teleporter, nor Depletion or Two Medical Programs, nor the Repugnant Conclusion or the Absurd Conclusion or the Very Repugnant Conclusion or the Sadistic Conclusion and whatever they're all called - I think it's Writer Kate, and then Parfit's Hitchhiker.

Part II in turn is highly relevant for answering important questions such as this one.

Part III is probably more original and groundbreaking than the previous parts. But it is also often misunderstood. I think that Parfit's "relation R" of psychological connectedness/continuity does a lot of the work we might think a more robust notion of personal identity would do - and in fact, Parfit's view helps rationalize some everyday intuitions, e.g., that it's somewhere between unreasonable and impossible to make promises that bind me forever. More broadly, I think that Parfit's view on personal identity is mostly not that revisionary, and that it mostly dispels a theoretical fiction most of our everyday intuitions neither need nor substantively rely on. (There are others, including other philosophers, who disagree with this - and think that there being no fact of the matter about questions of personal identity has, e.g., radically revisionary implications for ethics. But this is not Parfit's view.)

Part IV on population ethics is all good and well. (And in fact, I'm often disappointed by how little most later work in population ethics does to improve on Reasons and Persons.) But its key lessons are already widely appreciated within EA, and today there are more efficient introductions one can get to the field.

All of this is half-serious since I don't think there's a clear and reader-independent fact of the matter of which things in Reasons and Persons are "most important". It's also possible, especially for Part I, that what I think I got out of Reasons and Persons is quite idiosyncratic, and doesn't bear a super direct or obvious relationship to its actual content. Last but not least, it's been 5 years or so since I read Reasons and Persons, so probably some claims in this comment about content in Reasons and Persons are simply false because I misremember what's actually in there.

Linch @ 2021-06-07T22:11 (+6)

Thanks for the contrarian take, though I still tentatively stand by my original stances. I should maybe mention 2 caveats here:

  1. I also only read Reasons and Persons ~4 years ago, and my memory can be quite faulty.
    1. In particular I don't remember many good arguments against naive consequentialism. To me, it really felt like parts 1 and 2 were mainly written as justification for axioms/"lemmas" invoked in parts 3 and 4, axioms that most EAs already buy.
  2. My own context for reading the book was trying to start a Reasons and Persons book club right after Parfit passed away. Our book club dissolved in the middle of reading section 2. I kept reading on, and I distinctly remember wishing that we had continued onwards, because sections 3 and 4 would have kept the other book clubbers engaged etc. (obviously this is very idiosyncratic and particular to our own club).

HowieL @ 2021-06-08T07:16 (+4)

If I had to pick two parts of it, they would be 3 and 4, but FWIW I got a bunch out of 1 and 2 over the last year, for reasons similar to Max's.

Misha_Yagudin @ 2021-06-08T12:28 (+3)

(Hey Max, consider reposting this to goodreads if you are on the platform.)

Max_Daniel @ 2021-06-08T15:52 (+5)

(done)

MichaelA @ 2021-06-12T09:47 (+4)

(FWIW, I'm almost finished reading Strangers Drowning and currently feel I've gotten quite little out of it, and am retroactively surprised by how often it's recommended in EA circles. But I think I'm in the minority on that.)

Pablo @ 2021-06-13T13:54 (+2)

I wonder if the degree to which people like that book correlates with variation along the excited vs. obligatory altruism dimension.

MichaelA @ 2021-06-13T14:04 (+2)

Do you have a guess as to which direction the correlation might be in? Either direction seems fairly plausible to me, at first glance.

Pablo @ 2021-06-13T14:44 (+2)

I was thinking that EAs sympathetic to obligatory altruism would like it more, given the book's focus on people who appear to have a strong sense of duty and seem willing to make great personal sacrifices.

MichaelA @ 2021-06-13T15:07 (+2)

(Yeah, that seems plausible, though FWIW I'd guess my own mindset is more on the "obligatory" side than is average.)

Linch @ 2021-06-12T11:05 (+2)

Out of curiosity, do you read/enjoy any written fiction or poetry? 

MichaelA @ 2021-06-12T13:04 (+6)

Until a couple years ago, I read a lot of fiction, and also wrote poetry and sometimes short stories and (at least as a kid) had vague but strong ambitions to be a novelist. 

I now read roughly a few novels a year, mostly Pratchett. (Most of my "reading time" is now either used for non-fiction or - when it's close to bedtime for me - comedy podcasts.)

Jiri_Nadvornik @ 2021-06-11T18:42 (+1)

Thanks a lot!

MichaelA @ 2021-06-03T13:33 (+8)

My most significant reservation about the wiki as a project is that most similar projects seem to fail – e.g., they are barely read, don’t deliver high-quality content, or are mostly abandoned after a couple of months. This seems to be the case both for wikis in general and for similar projects related to EA, including EA Concepts, PriorityWiki, the LessWrong Wiki, and Arbital.

This was also my most significant reservation, both about whether the EA Wiki should get a grant (I wasn't a decision-maker on that - I just mean me thinking from the sidelines) and about whether I should spend time contributing to the wiki.

That said, I think those sentences have a quite notable omission: The new LessWrong wiki, which does seem to be fairly actively used, to have fairly high-quality content, and to not have been abandoned even after it's been around for several months. (I haven't looked at page views, nor carefully reviewed many articles, so these claims are tentative.)

This is especially notable because:

(It's still definitely plausible to me that the EA Wiki will be abandoned within a year, will never receive much traffic, or will fail/fizzle out/provide little value for some other reason. But I think it's less likely than one might think if one considered only the four projects you mentioned and not the new LessWrong wiki.)

Stefan_Schubert @ 2021-06-04T10:41 (+22)

Fwiw I think that looking at the work that's been done so far, the EA Wiki is very promising.

Max_Daniel @ 2021-06-03T19:18 (+6)

Thanks, I agree that this is an interesting data point. I had simply not been aware of a new LessWrong Wiki, which seems like an oversight.

MichaelA @ 2021-06-03T19:23 (+6)

(Just to clarify, what I meant was their tagging+concept system, which is very similar to the EA Wiki's system and is being drawn on for the EA Wiki's system. I now realise my previous comment - now edited - was misleading in that it (a) said "The new LessWrong Wiki" like that was its name and (b) didn't give a link. Google suggests that they aren't calling this new thing "the LessWrong Wiki".)

Habryka @ 2021-06-03T22:19 (+12)

Yep, the new wiki/tagging system has been going decently well, I think. We are seeing active edits, and in general I am a lot less worried about it being abandoned, given how deeply it is integrated with the rest of LW (via the tagging system, the daily page and the recent discussion feed).

Pablo @ 2021-06-04T00:43 (+11)

Also worth mentioning is that LessWrong has recently extended the karma system to wiki edits. You can see it here. I'm pretty excited about this feature, which I expect to increase participation, and look forward to its deployment for the EA Wiki.

Pablo @ 2021-06-15T01:23 (+12)

The improvements are now ported to the Wiki. Not only can you vote for individual contributions, but you can also see, for each article, a list of each contributor, and see their contributions by hovering over their names. Articles now also show a table of contents, and there may be other features I haven't yet discovered. Overall, I'm very impressed!

Max_Daniel @ 2021-06-15T09:13 (+2)

Great! I'm also intuitively optimistic about the effect of these new features on Wiki uptake, editor participation, etc.

vaidehi_agarwalla @ 2021-06-03T03:01 (+7)

Thanks for this incredibly detailed report. It's super useful to understand the rationale and thinking behind each grant. 

Are there plans to have update reports on the outcomes of these grants in the future (say 6 or 12 months)?

Michelle_Hutchinson @ 2021-06-04T12:55 (+3)

No set plans yet.

MichaelA @ 2021-06-04T17:15 (+4)

Are there plans to internally assess in future how the grants have gone / are going, without necessarily making the findings public, and even in cases where the grantees don't apply for renewal? 

(I seem to recall seeing that the EA Funds do this by default for all grants, but I can't remember for sure and I can't remember the details. Feel free to just point me to the relevant page or the like.)

Max_Daniel @ 2021-06-05T14:29 (+4)

That seems fairly important to me, and there are some loose ideas we've exchanged. However, there are a number of things that at first glance seem quite important, and we are very limited by capacity. So I'm currently not sure if and when ex-post evaluations of grants are going to happen. I would be very surprised if we thought that never doing any ex-post evaluations was the right call, but I wouldn't be that surprised if we only did them for a fraction of grants or only in a quite 'hacky' way, etc.

Jonas Vollmer @ 2021-06-06T11:27 (+5)

I think we will probably do two types of post-hoc evaluations:

  1. Specifically aiming to improve our own decision-making in ways that seem most relevant to us, without publishing the results (as they would be quite explicit about which grantees were successful in our view), driven by key uncertainties that we have
  2. Publicly communicating our track record to donors, especially aiming to find and communicate the biggest successes to date

#1 is somewhat high on my priority list (may happen later this year), whereas #2 is further down (probably won't happen this year, or if it does, it would be a very quick version). The key bottleneck for both of these is hiring more people who can help our team carry out these evaluations.

Jonas Vollmer @ 2021-07-06T17:02 (+6)

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

MaxRa @ 2021-06-03T10:22 (+4)

Thanks for the work and the transparent write-up. I’m really glad and impressed our community got the funds going and running.

Regarding the IIDM grant: I noticed some discomfort with this point, and would be interested in pointers on how to think about it, examples, how big the risk is, and how easily preventable this factor is:

In addition, even somewhat successful but suboptimal early efforts could discourage potential top contributors or ‘crowd out’ higher-quality projects that, if the space had remained uncrowded, would have been set up at a later point.

MichaelA @ 2021-06-03T13:46 (+7)

For the issue in general (not specific to the area of IIDM or how EAIF thinks about things), there's some discussion from 80k here and in other parts of that article. Probably also in some other posts tagged accidental harm.

(Though note that 80k include various caveats and counterpoints, and conclude the article with:

We hope this discussion of ways to do bad hasn’t been demotivating. We think most projects that have gone forward in the EA community have had a positive expected value and when we hear about new projects we’re typically excited, not wary. Even projects that are ill-conceived to start with typically improve over time as the founders get feedback and learn from experience. So whenever you consider these risks (and our advice for mitigating them) make sure to weigh them against the potentially massive benefits of working on some of the world’s most pressing problems.

I say that just to avoid people being overly discouraged by reading a single section from the middle of that article, without the rest of the context. I don't say this to imply I disagree with Max's comments on the IIDM grant.)

MaxRa @ 2021-06-03T16:22 (+2)

Thanks! The 80,000 Hours article kind of makes it sound like it's not supposed to be a big consideration and can be addressed by things IIDM has clearly done, right?

Get advice from people you trust to be honest about whether you’re a reasonable fit for the project you’re considering. Ask around to see if anybody else in your field has similar plans; maybe you should merge projects, collaborate, or coordinate on which project should move forward.

My impression is that the IIDM group is happy to welcome anyone interested in collaborating and called for collaboration a year ago or so, and the space of improving institutions also seems very big (in comparison to 80k's examples of career advice for EAs and local EA chapters).

IanDavidMoss @ 2021-06-03T19:29 (+18)

(Disclaimer: speaking for myself here, not the IIDM group.)

My understanding is that Max is concerned about something fairly specific here, which is a situation in which we are successful in capturing a significant share of the EA community's interest, talent, and/or funding, yet failing to either imagine or execute on the best ways of leveraging those resources.

While I could imagine something like this happening, it's only really a big problem if either a) the ways in which we're falling short remain invisible to the relevant stakeholders, or b) our group proves to be difficult to influence. I'm not especially worried about a) given that critical feedback is pretty much the core competency of the EA community and most of our work will have some sort of public-facing component. b) is something we can control and, while it's not always easy to judge how to balance external feedback against our inside-view perspectives, as you've pointed out we've been pretty intentional about trying to work well with other people in the space and cede responsibility/consider changing direction where it seems appropriate to do so.