A Qualitative Case for LTFF: Filling Critical Ecosystem Gaps

By Linch @ 2024-12-03T21:57 (+84)

The longtermist funding ecosystem needs certain functions to exist at a reasonable scale. I argue LTFF should continue to be funded because we're currently one of the only organizations comprehensively serving these functions. Specifically, we:

Getting these functions right takes meaningful resources - well over $1M annually. This figure isn't arbitrary: $1M funds roughly 10 person-years of work, split between supporting career transitions and independent research. Given what we're trying to achieve - from maintaining independent AI safety voices to seeding new fields like x-risk focused information security - this is arguably a minimum viable scale.

While I'm excited to see some of these functions being taken up by others (especially in career development), I'm skeptical we'll see comprehensive replacements anytime soon. My experience has been that proposed alternatives often die quickly or serve narrower functions than initially planned.

This piece fits into a broader conversation about LTFF's effectiveness during the Forum’s Marginal Funding Week. I hope to publish more pieces in the coming weeks based on reader interest. Meanwhile, you can check out our 2024 and 2022-2023 payout reports, our past analyses of marginal grants, and (anonymized) grants we've narrowly rejected to better understand what LTFF does and why it matters.

Core Argument

To elaborate:

Something like LTFF ought to exist at a reasonable scale. We need funders who can:

No other organization currently serves these functions at scale. While other funders do important work in adjacent spaces, they often have constraints that prevent them from filling these specific needs:

Therefore, LTFF should be funded. Until other organizations step up to fill these gaps (which I'd genuinely welcome), LTFF needs to continue serving these functions. And to serve them effectively, we need meaningful resources – as I'll explain later, likely well over $1M annually.

I want to be clear: this isn't meant to be a complete argument for funding LTFF. For that, you'd want to look at our marginal grants, create cost-effectiveness analyses, and/or make direct comparisons with other funding opportunities. Rather, it's an attempt to convey the intuition that something needs to fill these ecosystem gaps, and right now, LTFF is one of the only organizations positioned to do so.

In the following sections, I'll walk through each of these key functions in detail, examining why they matter and how LTFF approaches them.

Key Functions Currently (Almost) Unique to LTFF

Technical AI Safety Funding

We have consistently been one of the biggest funders of GCR-focused technical AI safety, particularly for early-stage nonacademic researchers. Our rotating staff of part-time grantmakers always includes multiple people who are highly engaged with and knowledgeable about AI safety, usually with some actively working in the field.

Why aren't other funders investing in GCR-focused technical AI Safety?

It's somewhat surprising, but despite GCR-focused technical AI safety being arguably one of EA's most important subcause areas, no other funder really specializes in funding it. For a significant fraction of technical AI safety research work, we are the most obvious, and sometimes the only, place people apply to.

For example:

(This list is non-exhaustive; for example, I do not cover SFF or some of the European funders, which I know less about. In a later article I'd like to explicitly contrast LTFF's approach to AI alignment and AI safety with that of other funders in the broader ecosystem.)

Career Transitions and Early Researcher Funding

We've historically funded many people for career transitions: people who are interested in working in fields that we think are very important, but who are not currently able to work in those fields directly and productively.

Given how long we've been around, and how young many of the fields we work in are, if our fieldbuilding efforts were successful, we should be able to observe some macro-level effects. And I think we do: I believe many people now productively working on existential risk owe a sizable fraction of their career start to LTFF. As an experiment, think of (especially junior) people whom you respect in longtermism, existential risk reduction, or AI safety, and do an internet search for their name plus "LTFF".

There are many components to successful field-building for a new research field. In particular, the user journey I'm imagining looks like: "somebody talented wants to work on x-risk (especially AI safety)" -> ??? -> "they are productively contributing useful work (especially research) toward making the world better via x-risk reduction."

Both historically and today, LTFF tries to make that "???" step of the transition easier.

Why aren't other groups investing in improving career transitions in existential risk reduction?

The happy answer to this question is "now they do!" For example:

In many ways, this is all really good from my perspective. Obviously, it alleviates our load and means we can focus on other pressing problems or bottlenecks. Further, to be frank, the experience of being a grantee can often be quite poor. I'm sad it's not better, but at the same time I'm optimistic that having more mentorship and a structured organizational umbrella probably adds a lot of value for trainees.

Going Forwards

We will continue to fund transition grants and early research grants. However, in the past 1-2 years we have increasingly funded, and I suspect in the future will increasingly fund:

A good example of the latter would be MATS scholars/trainees. In a typical MATS cohort, more than two-thirds of the scholars apply to us for funding, many or most with competitive applications. This allows the scholars both to do directly useful research and to build more of a track record as they apply to other jobs.

Providing (Some) Counterbalance to AI Companies on AI Safety

Right now, AI companies substantially influence the public conversation on AI risk, as well as the conversation on AI risk within the AI safety field itself. Via their internal safety teams, AI companies also pay for a large fraction of current work in technical AI safety. This is great in some ways, but it also limits the ability of independent voices to raise concerns, both directly and through various indirect channels (e.g., people angling for a job at XYZ AI company may become less inclined to criticize XYZ).

I think it is important to fund communication from a range of experts, to ensure that more perspectives are represented. It is arguably especially important to fund some technical safety research outside of the major AI labs, as a check on these labs: external researchers may be more comfortable voicing dissent with lab orthodoxy or pointing out if the major labs are acting irresponsibly.

Unfortunately, it is quite hard to form a sufficient counterbalance to them. AI labs are of course extremely well-funded, and most of the other big funders in this space have real or perceived conflicts of interest, such as their donors or project leads having large investments in AI companies or strong social connections with lab leadership.

I don't think the ability to be funded by LTFF is anywhere close to enough counterbalance. But having one funder stake out this position is much better than having zero, and I want us to take this responsibility seriously.

Going Forwards

In the medium term (say, 3 to 5 years), my hope is that governments will take AI safety seriously enough to a) conduct their own studies and evaluations, b) hire their own people, c) fund academia and other independent efforts, and broadly attempt to serve as a check on corporate AI labs in the private sector, as governments are supposed to do anyway. Preventing or reducing regulatory capture seems pretty important here, of course.

In the meantime, I want both LTFF and other nonprofit funders in AI safety to be aware of the dynamic of labs influencing AI safety conversations, and to take at least some active measures to correct for it.

Funding New Project Areas and Approaches

LTFF is often able to move relatively quickly to fund new project areas that have high expected impact. In many of those cases, other funders either aren't interested or will take 6-24 months to ramp up.

As long as a project has a plausible case that it might be really good for the long-term future, we will read it and attempt to make an informed decision about whether it's worth funding. This means we've funded rather odd projects in the past, and will likely continue to do so.

Importantly, when other funders abandon project areas and subcause areas for reasons other than expected impact, as Open Phil/Good Ventures has recently announced, a well-funded LTFF can step in and take on some of the relevant funding needs.

Additionally, we have a higher tolerance for PR risks than most, and are thus able to fund a broader range of projects with higher expected impact.

Going Forwards

With sufficient funding, we can (and likely will) try to do active grantmaking to fund other areas that currently either do not have any funding or are operating at very small scales. For example, Caleb Parikh (EA Funds Project Lead and LTFF fund manager) is one of the few people working very actively on fieldbuilding for global catastrophic risk-focused information security. We can potentially expand a bunch of active grantmaking there.

We'd also potentially be excited to do a bunch of active grantmaking into AI Control.

We will continue to try to fund high-impact projects and subcause areas that other funders have temporarily abandoned.

Broader Funding Case

Why Current Funding Levels Matter

Let me try to give a rough sense of why LTFF likely needs >>$1M/year to fulfill its key functions, rather than loosely gesturing at "some number". I acknowledge this is a large number with serious opportunity costs (about 200,000-250,000 delivered bednets, for example). But here's the basic case:

Putting this all together, it seems easy to see how effectively fulfilling all of LTFF's functions takes >$1M/year.
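As a rough illustration of the arithmetic (a back-of-the-envelope sketch, not an official LTFF model): the ~$100k per funded person-year follows from the "$1M funds roughly 10 person-years" figure earlier in the post, while the $4-5 cost per delivered bednet is my own assumption, inferred from the 200,000-250,000 bednet comparison above.

```python
# Back-of-the-envelope check of the ">$1M/year" figure.
# Assumptions (illustrative, not official LTFF numbers):
#   - ~$100k per funded person-year, implied by "$1M funds roughly 10 person-years"
#   - ~$4-5 per delivered bednet, inferred from the 200,000-250,000 bednet comparison

budget = 1_000_000  # dollars per year

cost_per_person_year = 100_000
print(f"{budget / cost_per_person_year:.0f} person-years")  # -> 10 person-years

for cost_per_net in (4, 5):
    print(f"~{budget / cost_per_net:,.0f} bednets at ${cost_per_net}/net")
# -> ~250,000 bednets at $4/net; ~200,000 bednets at $5/net
```

The point is simply that roughly ten person-years of work, split between career transitions and independent research, is close to the minimum viable scale for the functions described above.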

Going Forwards

I genuinely hope that many of LTFF's current functions will eventually be taken over by other organizations or become unnecessary. For example:

But I'm not confident this will happen soon, and I’m especially pessimistic that they will all happen soon in a comprehensive way. My experience has been that when other projects propose to replace LTFF's functions:

In January 2022 (when I first joined LTFF) there were serious talks of shutting LTFF down and having another organization serve our main functions, like maintaining continuously open applications for smaller grants. Nearly 3 years later, LTFF is still around, and many of the proposed replacement projects haven't materialized.

Thus, I tentatively think we should:

Conclusion

We need something like LTFF to exist and be reasonably well-funded. This need stems from critical ecosystem gaps - supporting early-stage technical AI safety research, helping talent transition into existential risk work, providing independent voices outside major labs, and funding promising new areas quickly.

Right now, no other organization comprehensively serves these functions at scale. While I'm excited to see new programs emerging in specific areas, much of the full set of capabilities we need still primarily exists within LTFF. Until that changes - which I'd welcome! - LTFF needs meaningful resources to continue this work.

This isn't a complete case for funding LTFF - that would require detailed cost-effectiveness analysis and comparison with other opportunities. But I hope I've conveyed why having these ecosystem functions matters, and why right now, LTFF is one of the few organizations positioned to provide them.

Appendix: LTFF's Institutional Features

While our core case focuses on key ecosystem functions LTFF provides, there are several institutional features that enable us to serve these functions effectively:

Transparency and Communication

We aim to be a consistently candid funder, with frequent, public communications about our grants and decision-making processes. We often don't hit this goal to a degree that satisfies us or the community. But we do more public communication than most longtermist funders and organizations, through:

Operational Features

We maintain several operational practices that support our core functions:

Continuous Open Applications

Institutional Memory

We've been around longer than most institutional funders in this space (though the space itself is quite young). This gives us valuable context about:

Risk Tolerance

Diverse Worldviews


Neel Nanda @ 2024-11-20T11:16 (+17)

Thanks for the post! This seems broadly reasonable to me and I'm glad for the role LTFF plays in the ecosystem; you're my default place to donate to if I don't find a great specific opportunity.

I'm curious how you see your early career/transition stuff (including MATS) compared to OpenPhil's early career/transition grant making? In theory, it seems to me like that should ideally be mostly left to OpenPhil, and LTFF be left to explore stuff OpenPhil is unwilling to fund, or otherwise to LTFF's comparative advantage (eg speed maybe?)

Jason @ 2024-11-18T02:17 (+13)

Has there been any consideration of creating sub-funds for some or all of the critical ecosystem gaps? Conditioned on areas A, B, and C being both critical and ~not being addressed elsewhere, it would feel a bit unexpected if donors have no way to give monies to A, B, or C exclusively. 

If a donor values A, B, and C differently -- and yet the donor's only option is to defer to LTFF's allocation of their marginal donation between A, B, and C -- they may "score" LTFF less well than they would score an opportunity to donate to whichever area they rated most highly by their own lights.

The best reason to think this might not make a difference: If enough donors wanted to defer to LTFF's allocation among the three areas, then donor choice of a specific cause would have no practical effect due to funging.

Linch @ 2024-12-03T03:12 (+4)

Hi Jason. Yeah this makes a lot of sense. I think in general I don't have a very good sense of how much different people want to provide input into our grantmaking vs defer to LTFF; in practice I think most people want to defer, including the big(ish) donors; our objective is usually to try to be worthy of that trust. 

That said, I think we haven't really broken down the functions as cleanly before; maybe with increased concreteness/precision/clarity donors do in fact have strong opinions about which things they care about more on the margin? I'm interested in hearing more feedback, nonymously and otherwise. 

Important caveat: A year or so ago when I floated the idea of earmarking some donations for anonymous vs non-anonymous purposes, someone (I think it was actually you? But I can't find the comment) rightly pointed out that this is difficult to do in practice because of fungibility concerns (basically if 50% of the money is earmarked "no private donations" there's nothing stopping us from increasing the anonymous donations in the other 50%). I think a similar issue might arise here, as long as we both have a "general LTFF" fund and specific "ecosystem subfunction" funds. 

I don't think the issue is dispositive, especially if most money eventually goes to the subfunction funds, but it does make the splits more difficult in various ways, both practically and as a matter of communication.

OscarD🔸 @ 2024-11-18T22:13 (+9)

How come LTFF isn't in the donation election? Maybe it is too late to be added now though.

JJ Hepburn @ 2024-12-07T07:09 (+8)

What is the current response time for the LTFF?

niplav @ 2024-12-06T16:17 (+5)

Since this is turning out to be basically an AMA for LTFF, another question:

How high is the bar for giving out grants to projects trying to increase human intelligence[1]? Has the LTFF given out grants in the area[2], and is this something you're looking for?

(A short answer without justification, or a simple yes/no, would be highly appreciated, to help me know whether this is a gap I should be trying to fill.)


  1. Or projects trying to create very intelligent animals that can collaborate with and aid humans. ↩︎

  2. Looking around in the grants database CSV I didn't find anything obviously relevant. ↩︎

Oscar Sykes @ 2024-11-18T19:24 (+4)


Great post. I'm wondering if you could expand on this statement?

Additionally, we have a higher tolerance for PR risks than most, and are thus able to fund a broader range of projects with higher expected impact.

Could you provide examples of grants with PR risks (hypothetical or real) that LTFF would fund but OpenPhil wouldn't?

Habryka @ 2024-11-18T20:05 (+30)

Due to what I understand to be trickiness in communicating Dustin's relationship to PR risk to the EA community, there isn't a ton of clarity on what things OP would fund via GV, but some guesses on stuff where I expect OP to be hesitant for PR-ish reasons, but which the LTFF would definitely consider: 

  • A grant to Manifold Markets (who I expect Dustin would not be in favor of funding due to hosting certain right-leaning intellectuals at their conferences)
  • A grant to Nick Bostrom to work on FHI-ish stuff
  • A grant to a right-leaning AI think tank
  • A grant to rationalist community building, in as much as it would be effective for improving the long term future
  • A grant to work on digital sentience
  • Grants to various high-school programs like Atlas
  • AI pause or stop advocacy
  • Distributing copies of HPMoR to various promising people around the world

Again, there isn't much clarity on what things OP might or might not fund via GV, but my current best guess is none of the things on the list above could currently get GV funding.

gergo @ 2024-11-22T09:57 (+1)

All of these map onto my understanding of what they wouldn't fund, but note that they have funded Atlas in the past, and also provide funding for Non-trivial, which engages with young people including high-schoolers. They also fund ML4Good for in-person bootcamps.

Habryka @ 2024-11-22T17:02 (+4)

Open Phil’s funding interests and priorities and constraints have drastically changed in the last year or two. I agree they funded many things like this in the past.

OscarD🔸 @ 2024-11-18T22:12 (+2)

How does LTFF relate to https://www.airiskfund.com/about?

I am confused given the big overlap in people and scope. 

calebp @ 2024-11-18T23:04 (+7)

This fund was spun out of the Long-Term Future Fund (LTFF), which makes grants aiming to reduce existential risk. Over the last five years, the LTFF has made hundreds of grants, specifically in AI risk mitigation, totalling over $20 million. Our team includes AI safety researchers, expert forecasters, policy researchers, and experienced grantmakers. We are advised by staff from frontier labs, AI safety nonprofits, leading think tanks, and others.

More recently, ARM Fund has been doing active grantmaking in AIS areas; we'll likely write more about this soon. I expect the funds to become much more differentiated in staff in the next few months (though that's not a commitment). Longer term, I'd like them to be pretty separate entities, but for now they share roughly the same staff.

Neel Nanda @ 2024-11-20T11:13 (+7)

Is there a difference in philosophy, setup, approach etc between the two funds?

Linch @ 2024-12-05T00:53 (+4)

I think ARM Fund is still trying to figure out its identity, but roughly, the fund was created to be something you should be happy to refer your non-EA, non-longtermist friends (e.g. in tech) to check out, if they are interested in making donations to organizations working on reducing catastrophic AI risk but aren't willing (or in some cases able) to put in the time to investigate specific projects.

Philosophically, I expect it (including the advisors and future grant evaluators) to care moderately less than LTFF about e.g. the exact difference between catastrophic risks and extinction risks, though it will still focus only on real catastrophic risks and not safetywash other near-term issues.

calebp @ 2024-12-05T07:11 (+4)

The main difference in actions so far is that the ARM Fund has focussed on active grantmaking (e.g. in AI x information security fieldbuilding). In contrast, the LTFF has a more democratic and passive grantmaking focus. I also don't think that ARM Fund has reached product-market fit yet; it's done a few things reasonably well, but I don't think it has a scalable product (unless we decide to do a lot more active grantmaking, but so far that has been more opportunistic).

Charlie_Guthmann @ 2024-12-04T21:15 (+1)

My two cents on why I am not giving to any effective altruism funds: I have no political representation in this movement.

Linch @ 2024-12-05T00:10 (+6)

Can you clarify more what you mean by "political representation?" :) Do you mean EA Funds/EA is too liberal for you, or our specific grants on AI policy do not fit your political perspectives, or something else?

Charlie_Guthmann @ 2024-12-05T00:31 (+3)

Nope I think the grants you are doing seem good, I don’t mean it like that.

I mean the idea of giving these semi central ea orgs/ funds money doesn’t make sense to me. EA is supposed to be a cause agnostic decentralized movement. If it was called the utilitarian fund this would probably be about 70% of the way towards me being willing to donate. When my dad asks why you guys did x and I have to respond by saying ea isn’t a monolith and then he asks me where the Chicago donation fund is and then I have to say well dad that’s not effective because (insert large moral circle utilitarian calculus) and then he asks me “oh so is ea a utilitarian movement” … maybe you see where I am going with this? It’s a bit of a meta point and perhaps now is not the right time to bring it up.

I mean political representation like if there is an organization with ea in its name then I should be able to vote for the board as a member of ea.

Linch @ 2024-12-05T00:47 (+5)

That makes sense. We've considered dropping "EA" from our name before, at least for LTFF specifically. Might still do it; I'm not as sure. Manifund might be a more natural fit for your needs: there, individuals make decisions about their own donations (or sometimes delegate them to specific regranters), rather than having decisions made by a non-democratic group.

Charlie_Guthmann @ 2024-12-05T00:55 (+1)

Yea I think I can feel better about giving to manifund so that’s a good shout. Functionally giving money to them still feels like I’m contributing to the larger ea political oligopoly though. I want to enrich a version of ea with real de jure democratic republic institutions

Linch @ 2024-12-05T00:59 (+2)

I think the donation election on the forum was trying to get at that earlier.

Charlie_Guthmann @ 2024-12-05T01:02 (+1)

It was such a token effort though. I’m literally giving that much away myself. How about every single person at an ea org steps down and we have an election for the new boards, or they can drop the ea name? (I’m only half joking)

Linch @ 2024-12-05T01:12 (+2)

Tangent, but do you have a writeup somewhere of why you think democracy is a more effective form of governance for small institutions or movements? Most of the arguments for democracy I've seen (e.g. peaceful transfer of power) seem much less relevant here, even as analogy.  

Charlie_Guthmann @ 2024-12-05T01:14 (+1)

No, I don't, but effective altruism should not be a small movement. I think about 1/3 of all people could get on board. Applied utilitarianism should be a small movement, and probably not democratic. I'll just write up a more coherent version of my vision and make a quick take or post though. I would agree democracy is not great for a small movement, though I'm no expert.

Daniel_Friedrich @ 2024-12-04T12:27 (+1)
MichaelDickens @ 2024-12-04T16:13 (+6)
  • $100 million/year and 600 people = $167,000 per person-year
  • $1M buys 10 person-years = $100,000 per person-year

These numbers are approximately the same. I don't understand how you get that 5/6 of the work comes from volunteering / voluntary underpayment. Did I do it wrong?