What Does a Marginal Grant at LTFF Look Like? Funding Priorities and Grantmaking Thresholds at the Long-Term Future Fund

By Linch, calebp, Daniel_Eth @ 2023-08-10T20:11 (+175)

The Long-Term Future Fund (LTFF) makes small, targeted grants with the aim of improving the long-term trajectory of humanity. We are currently fundraising to cover our grantmaking budget for the next 6 months. We would like to give donors more insight into how we prioritize different projects, so they have a better sense of how we plan to spend their marginal dollar. Below, we’ve compiled fictional but representative grants to illustrate what sort of projects we might fund depending on how much we raise for the next 6 months, assuming we receive grant applications at a similar rate and quality to the recent past. 

Our motivations for presenting this information are a) to provide transparency about how the LTFF works, and b) to move the EA and longtermist donor communities towards a more accurate understanding of what their donations are used for. Sometimes, when people donate to charities (EA or otherwise), they may wrongly assume that their donations go towards funding the average (or, more optimistically, the best) work of those charities. However, it is usually more useful to consider the marginal impact for the world that additional dollars would buy. By offering illustrative examples of the sort of projects we might fund at different levels of funding, we hope to give potential donors a better sense of what their donations might buy, depending on how much funding has already been committed. We hope that this post will help improve the quality of thinking and discussions about charities in the EA and longtermist communities.

For donors who believe that the current marginal LTFF grants are better than marginal funding of all other organizations, please consider donating! Compared to the last 3 years, we now have both a) an unusually high quality and quantity of applications and b) an unusually low amount of donations, which means we’ll have to raise our bar substantially if we do not receive additional donations. This is an especially good time to donate, as donations are matched 2:1 by Open Philanthropy (OP donates $2 for every $1 you donate). That said, if you instead believe that marginal funding of another organization is better than current marginal LTFF grants (by between 1x and 3x, depending on how you view marginal OP money), then please do not donate to us; instead, donate to them and/or save the money for later.

Background on the LTFF

Methodology for this analysis

At the LTFF, we assign each grant application to a Principal Investigator (PI) who assesses its potential benefits, drawbacks, and financial cost. The PI scores the application from -5 to +5. Subsequently, other fund managers may also score it. The grant gets approved if its average score surpasses the funding threshold, which historically varied from 2.0 to 2.5, but is currently at 2.9. 
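As a rough illustration of these mechanics (and not our actual internal tooling), the approval decision boils down to averaging the scores and comparing the result against the current threshold. Below is a minimal sketch; all names, scores, and example numbers are hypothetical.

```python
# Minimal sketch of the scoring mechanics described above; purely illustrative,
# not LTFF's actual tooling. All scores and names here are hypothetical.
from statistics import mean

CURRENT_THRESHOLD = 2.9  # historically between 2.0 and 2.5

def is_approved(scores: list[float], threshold: float = CURRENT_THRESHOLD) -> bool:
    """Approve a grant if the average of fund managers' scores (-5 to +5)
    surpasses the funding threshold."""
    return mean(scores) > threshold

# Example: the PI scores +4; two other fund managers score +3 and +2.
print(is_approved([4, 3, 2]))  # mean = 3.0 > 2.9 -> True
print(is_approved([4, 1, 2]))  # mean ~= 2.33 -> False
```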

Here's how we created the following list of fictional grants:

This process is highly qualitative and is intended to demonstrate the types of projects we'd fund at various donation levels. The final ranking likely does not represent the views of any individual fund manager very well.

This analysis has weaknesses, including that:

Caveat for grantseekers

This article is primarily aimed at donors, not grantees. We believe that the compatibility between an applicant and their proposed project, including personal interest and enthusiasm, plays a crucial role in the project’s success. Therefore, we discourage tailoring your applications to match the higher tiers of this list; we do not expect this to increase either your probability of getting funded or the project’s eventual impact conditional upon funding.

Grant tiers

Our primary aim in awarding grants is to optimize the trajectory of the long-term future. To that end, grantmakers try to evaluate each grant according to their subjective worldviews of whether spending $X on the grant is a sufficiently good use of limited resources given that we only have $Y total to spend for our longtermist goals. 

In the tiers below, we illustrate the types of projects (and corresponding grant costs[1] in brackets) we'd potentially finance if our fundraising over the next six months reaches that tier. For each tier, we list only projects we likely wouldn't finance if our fundraising only met the preceding tier's total. For example, if we raised $1.2 million, we would likely fund everything in the $100,000 and $1M tiers, but only a small subset (up to $200,000) of projects in the $5M tier, and nothing in the $10M tier.

To put it differently, as the funding amount for the LTFF increases, the threshold for applications we would consider funding decreases, as there is more funding to go around. 
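As a minimal sketch of that relationship (with entirely hypothetical grants, scores, and costs), you can think of a budget as filling a score-ranked list of applications from the top down, so that a larger budget reaches applications with lower scores:

```python
# Minimal sketch of how a budget fills a score-ranked list of applications.
# All grants, scores, and costs below are hypothetical illustrations.

def fund_down_the_list(applications, budget):
    """Fund applications in descending score order, skipping any that no longer
    fit in the remaining budget. A larger budget reaches further down the list,
    i.e. corresponds to a lower effective funding bar."""
    funded = []
    for name, score, cost in sorted(applications, key=lambda a: a[1], reverse=True):
        if cost <= budget:
            funded.append(name)
            budget -= cost
    return funded

applications = [
    ("grant A", 4.1, 60_000),
    ("grant B", 3.4, 500_000),
    ("grant C", 2.7, 900_000),
    ("grant D", 2.2, 150_000),
]

print(fund_down_the_list(applications, 100_000))    # ['grant A']
print(fund_down_the_list(applications, 1_200_000))  # ['grant A', 'grant B', 'grant D']; C no longer fits
```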

If LTFF raises $100,000

These are some fictional projects that we might fund if we had roughly $100,000 of funding over the next 6 months. Note that this is not a very realistic hypothetical: in worlds where we actually only have ~$100,000 of funding over 6 months, a) many LTFF grantmakers would likely quit, and b) the remaining staff and volunteers would likely think that referring grants to other grantmakers was a more important part of our job than allocating the remaining $100k. Still, these are projects that would meet our bar even if our funding was severely constrained in this way.

$1M

Below are some hypothetical projects we might additionally fund if we had roughly $1M of funding over the next 6 months (roughly 1/5 - 1/6 of our past spending rate). This is roughly how much money we would have if we only account for our current reserves and explicit promises of additional funding we’ve received. 

$5M

$5M over 6 months is our current target, and roughly how much we want to raise to cover our grantmaking budget going forward. Note that our current threshold (2.9) is in between the $1M and $5M bars.

Should we secure roughly $5M in funding for the next six months, corresponding to our funding threshold from November 2022 to July 2023 (2.5), we might additionally fund the following hypothetical grants:

An aside from Linch: 

To add more color to these examples, I’d like to discuss the sort of applications that are relatively close to the current LTFF funding bar – that is, the kind of applications we’ll neither obviously accept nor obviously reject. Hopefully, this will both demystify some of the inner workings of LTFF and help donors make more informed decisions. 

Some grant applications to the LTFF look like the following: a late undergraduate or recent graduate from an Ivy League university or a comparable institution requests a grant to conduct independent research or comparable work in a high-impact field, but we don’t find the specific proposal particularly compelling. For example, the mentee of a fairly prominent AI safety or biosecurity researcher may request 6-12 months’ stipend to explore a particular research project that their mentor(s) are excited about, but LTFF fund managers and some of our advisors are unexcited about. Alternatively, they may want to take an AGISF course, or to read and think enough to form a detailed world model about which global catastrophic risks are the most pressing, in the hopes of then transitioning their career towards combating existential risk. 

In these cases, the applicant often shows some evidence of interest and focus (e.g., participation in EA local groups/EA Global or existential risk reading groups) and some indications of above-average competence or related experience, but nothing exceptional. Factors that would positively influence my impression include additional signs of dedication, a more substantial track record in relevant areas, indications of exceptional talent, or other signs of potential for a notably successful early-career investment. Conversely, evidence of deceitfulness, problematic unilateral actions or inclinations, rumors or indications of sketchiness not quite severe enough to be investigated by Community Health, or other signs that the grant could carry significant downside risk would negatively influence my assessment.

I think the median grant application of this kind (without extenuating evidence) would be a bit below the funding bar we used until July 2023 (2.5), and just above our pre-November 2022 bar (2.0).

$7.5M

If we accumulate $7.5M in funds over the next six months, we might additionally support the following hypothetical grants. This aligns with our pre-November 2022 grantmaking threshold (2.0). However, we never actually spent as much as $7.5M in any six-month period before November 2022: back then, there were not enough applications above the old bar to fund $7.5M worth of projects. The increase in both the quantity and quality of applications this year is what makes the old bar now correspond to roughly this level of spending. 

$10M

Below are some hypothetical grants that we might additionally fund if we had $10M to spend over the next 6 months. This would correspond to a lower grantmaking bar than at any point in LTFF’s history. That said, should we actually receive such a substantial influx, we might instead opt to carry out proactive grantmaking projects we deem more impactful, and/or reconsider our general policy against saving funds.

We will always refrain from funding projects we believe are net harmful in expectation, regardless of the funds raised.

If you’ve read this far, please don’t hesitate to comment if you have additional questions, clarifications, or feedback!

If you think grants above the $1M tier are valuable, please consider donating to us! If we do not receive more money soon, we will have to raise our bar again, resulting in what is (by my lights) a significant misallocation of longtermist resources.

Acknowledgements

This post was written by Linch Zhang and Caleb Parikh, with considerable help from Daniel Eth. Thanks to Lizka Vaintrob, Nuño Sempere, Amber Dawn and GPT-4 for helpful feedback and suggestions.

Appendix A: Donation smoothing/saving

The LTFF saves money/smooths donations on the timescale of months (e.g. if we have unexpectedly high donations in August, we might want to ‘smooth out’ our grantmaking so that we award similar amounts in September, October, etc). However, we generally do not attempt to smooth donations on the timescale of years. That is, if we receive an unexpectedly high windfall in 2023, we would not by default plan to “save up” donations for future years. Instead, we would aim both to solicit grant applications more aggressively and to lower the bar for funding. Similarly, if we receive unexpectedly little in donations, we will likely raise the bar for funding and/or refer grant applicants to other donors. 

This is in contrast to Open Philanthropy, which tries to optimize for making the best grants over the timescale of decades, and the Patient Philanthropy Fund, which tries to optimize for making the best grants over the timescale of centuries. 

There are several considerations in favor of not attempting to do too much donation smoothing:

However, this policy is not set in stone. If donors or the community have strong opinions, we welcome engagement here!

  1. See this appendix in the payout report for how we set grant and stipend amounts.

  2. Note that this grant would be controversial within the fund at a $100k funding bar, as some fund managers and advisers would say we shouldn’t fund any biosecurity grants at that level of funding.


Linch @ 2023-08-11T00:40 (+40)

Would donors/other members of the community find it helpful if I were to repeat this process and write such a post for EAIF? Note that as I do not do grantmaking for EAIF, my attempts at doing the analogous "modifying and blending grants to form representative fictitious grants" might be missing some key nuances.

"Agree"-vote if helpful relative to the counterfactual, "Disagree" if not helpful; assume my nearest counterfactual is writing some other posts drawn from the same distribution as my past posts or comments, particularly LTFF-related ones.

Elizabeth @ 2023-08-13T22:08 (+27)

This is hard to answer without knowing the exact counterfactual. I'd value you going deeper on topics you have the most information on, and my guess is EAIF is not your comparative advantage, but if there isn't a specific other post you're excited about, I'd much rather have EAIF than nothing. I thought it might be helpful to give ideas of posts I'd be interested in from you, specifically:

  • what do you want to see in the impact or theories of change section? (related)
  • the practicalities of living off of grants as an independent. do people ask for enough? how bad is it if you ask for too much? how do you structure work to avoid gaps between grants? 
  • how do you evaluate results from independent researchers?
  • how do you evaluate the success of grants for upskilling or exploration?
  • how do you evaluate work from other kinds of independent grant recipients (AXRP and Rob Miles's youtube channel come to mind, but probably there are more grants that are even harder to categorize)? 
  • what do you regret not funding?
Yonatan Cale @ 2023-08-24T15:20 (+2)

Writing such a post for EAIF (even a 5x shorter version) would help me get an idea of what the bar is for a community project to be ~worthwhile, and especially to easily say "no, this isn't worthwhile".

I'm saying this because even this LTFF post updated my opinion about that.

Yonatan Cale @ 2023-08-13T09:41 (+23)

I really liked this post, and specifically the framing of "what will a marginal donation be" (as opposed to "what's the best thing we ever did" or so). 

 

[ramblings from my subjective view point of EA-software]

  1. It reminds me of how developers consider joining an EA org and think "well, seems like all your stuff is already built, no?". I think writing about the marginal things the org wants to build and needs help with would go a long way for many job posts
  2. This somewhat updated me towards "it's a bad idea to fund me, my work isn't as important as all this" and also towards "maybe I better do some E2G so you can fund more things like this"
Vasco Grilo @ 2023-08-17T16:49 (+9)

Thanks for sharing! I confess I had been wondering about moving my donations elsewhere due to a lack of knowledge about LTFF's processes, but this and other recent posts mean that I will probably continue donating to LTFF in the near future.

We are committed to improving the long-term trajectory of civilization, with a particular focus on reducing global catastrophic risks.

Which definition of global catastrophic risks are you considering? I think global catastrophes were originally defined by Nick Bostrom and Milan Ćirković as "events that cause roughly 10 million deaths or $10 trillion in damages or more". Maybe it would be better to be explicit about the severity of the events on the website?

Note that this grant [in bio] would be controversial within the fund at a $100k funding bar, as some fund managers and advisers would say we shouldn’t fund any biosecurity grants at that level of funding.

I would be curious to know how you compare grants in different areas. For example, could you share which fraction of grants in each area (e.g. AI, bio, nuclear, or other) are successful? I understand you consider AI and bio to be the most pressing areas (emphasis mine):

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.

You also only mentioned grants in AI and bio in the OP. However, even if applications in other areas were as likely as those in AI to be funded, they would still not be (randomly) selected to be in the OP, because applications outside of AI and bio only represent a small fraction of the total.

calebp @ 2023-08-17T17:45 (+7)

Which definition of global catastrophic risks are you considering? I think global catastrophes were originally defined by Nick Bostrom and Milan Ćirković as "events that cause roughly 10 million deaths or $10 trillion in damages or more". Maybe it would be better to be explicit about the severity of the events on the website?

I don’t think that as an organisation we have a specific definition in mind. I think it’s still worth saying that we are most focussed on reducing global catastrophic risks, as opposed to pursuing other goals like instilling concern for future generations as a societal value or promoting economic growth.

In practice we direct funding towards activities that we think reduce catastrophic risks, but are most focussed on existential risks.

Jason @ 2023-08-10T21:55 (+8)

On mobile, "roughly ⅕ -⅙ of our past spending rate" doesn't display correctly -- that's one fifth to one sixth for my fellow mobile users.

[Edit for Forum management: these images displayed as white Xs in a black box on my Android / Samsung S22; I saved a screenshot if helpful. It looks like they have been edited to 1/5 and 1/6 in ordinary characters now.]

Linch @ 2023-08-10T23:16 (+4)

Thank you! I believe it should be fixed now! (I changed ⅕ to 1/5).

JJ Hepburn @ 2023-08-11T07:16 (+3)

This change also helps with text-to-speech.

Linch @ 2023-08-12T03:27 (+2)

Yay!

calebp @ 2023-08-10T22:40 (+2)

It’s working on mobile for me (iPhone - safari)

Vasco Grilo @ 2024-04-11T18:13 (+7)

Caleb and Linch randomly selected grants from each group.

I think your procedure to select the grants was great. However, would it become even better by making the probability of each grant being selected proportional to its size? In theory, donors should care about the impact per dollar (not impact per grant), which justifies weighting by grant size. This may matter because there is significant variation in grant size. The 5th and 95th percentile amounts granted by LTFF are 2.00 k$ and 169 k$, so, especially if one is picking just a few grants as you did (as opposed to dozens of grants), there is a risk of picking unrepresentatively small grants.

Linch @ 2024-04-12T21:19 (+4)

Thank you! This is a good point; your analysis makes a lot of sense to me.

Stephen McAleese @ 2023-08-18T20:48 (+7)

Thanks for the post. Until now, I used to learn about what LTFF funds by manually reading through its grants database. It's helpful to know what the funding bar looks like and how it would change with additional funding.

I think increased transparency is helpful because it's valuable for people to have some idea of how likely their applications are to be funded if they're thinking of making major life decisions (e.g. relocating) based on them. More transparency is also valuable for funders who want to know how their money would be used.

Jason @ 2023-08-11T01:09 (+5)

The PI scores the application from -5 to +5. 

Does the zero point have any specific meaning? Specifically, does a negative score convey a belief that the proposal has net-negative EV?

Daniel_Eth @ 2023-08-11T01:30 (+9)

In principle, the zero point is supposed to signify being equivalent to burning the money, and a negative score signifies net-negative EV (neglecting the financial cost of the grant). In practice, speaking personally, if I weakly think a grant is a bit net negative, but it's not particularly worrying nor something I feel confident about, I usually give it a score that's well below the funding threshold but still positive (so that if other grantmakers are more confidently in favor of the grant, they can more likely outvote me here). If I were to confidently believe that a grant was of zero net value, I would give it a vote of zero.

Linch @ 2023-08-11T03:02 (+6)

I personally give a negative value and (when I have low certainty) flag that I'm willing to change/delete my votes if other people feel strongly, so as to not unduly tank the results. I think LTFF briefly experimented with weighted voting in the past but we've moved against it (I forgot why). 

JP Addison @ 2023-08-28T17:34 (+4)

I'm curating this. Along with other commenters, I really like the focus on the marginal grant. If I were to write a post that would help donors understand the impact of their donations to the Long Term Future Fund, it would look a lot like this. 

While I'm sympathetic to the reasoning, I was sad to hear that EA Funds would stop sharing publicly all its grants. To my mind, this post goes a long way towards remedying that, and makes me much more likely to recommend the Long Term Future Fund to others. (That strikes me as a surprisingly large update, but I stand by it.)

Thanks a bunch for writing this!

calebp @ 2023-08-28T23:18 (+1)

Thanks for curating it :)

vin @ 2023-08-13T19:09 (+4)

I really appreciate your transparency about how you allocate funding! Thank you for this post!

Alexandra Bos @ 2023-08-25T13:41 (+3)

Thanks for the post!

A related question: Is LTFF more likely to fund a small AI safety research group than to fund individual independent AI Safety researchers?

So could we see a scenario where, if person A, B or C apply individually for an independent research grant, they might not meet your funding bar. But where, if similarly impressive people with a similarly good research agenda applied as a research group, they would be a more attractive funding opportunity for you?

Linch @ 2023-08-25T23:06 (+8)

(Giving my own professional opinion, not speaking for anybody else/employers.) This seems unlikely to me, unless there's a different substantive reason to believe that the research group is better for either research qua research or upskilling, e.g. having access to better mentors, or demonstrated evidence that the group is better at keeping each other on track. 

Plausibly I'm wrong here. Being an independent researcher kinda sucks in a variety of ways, and I can imagine having a group to work with to be good even if you can't point to a specific reason. But I don't currently think we have a bias towards groups and against independent researchers, and if anything I'd guess our revealed preferences are a bit in the other direction.