Future Fund June 2022 Update

By Nick_Beckstead, leopold, ab, William_MacAskill, ketanrama @ 2022-07-01T00:50 (+279)

This is a linkpost to https://ftxfuturefund.org/future-fund-june-2022-update/

Summary

Background

The FTX Foundation’s Future Fund publicly launched in late February. We're a philanthropic fund that makes grants and investments to improve humanity's long-term prospects. For information about some of the areas we've been funding, see our Areas of Interest page.

This is our first public update on the Future Fund’s grantmaking. The purpose of this post is to give an update on what we’ve done and what we're learning about the funding models we're testing. (It does not cover a range of other FTX Foundation activities.)

We’ve also published a new grants page and regrants page with our public grants so far.

Our focus on testing funding models

We are trying to learn as much as we can about how to deploy funding at scale to improve humanity’s long-term prospects. Our primary objective for 2022 is to perform bold and decisive tests of new funding models. The main funding models we have tested so far are our regranting program and our open call for applications. 

In brief, these models worked as follows:

Grantmaking by funding model

So far we have made 262 grants and investments, totaling ~$132M. These break down as follows:

There are also ~$25M of grants we are likely to make soon, but for which some relevant aspects are TBD.

Some example grants and investments

Below are some grants and investments that we find interesting and/or representative of what we are trying to fund.

Regranting

Open call

Staff-led

See this grants page and this regrants page for all of our public grants and investments so far; further example grants appear in later sections.

Key stats

These numbers are for our grantmaking overall. (The sections below with more detail on regranting, the open call, and staff-led grants give the corresponding stats for each funding stream.)

Areas of interest

| Area | Count | Volume |
|---|---|---|
| Total | 262 | $132M |
| Artificial Intelligence | 76 | $20M |
| Biorisk and Recovery from Catastrophe | 30 | $30M |
| Economic Growth | 9 | $7M |
| Effective Altruism | 61 | $34M |
| Empowering Exceptional People | 18 | $10M |
| Epistemic Institutions | 21 | $8M |
| Great Power Relations | 6 | $2M |
| Other | 17 | $16M |
| Research That Can Help Us Improve | 4 | $1M |
| Space Governance | 7 | <$1M |
| Values and Reflective Processes | 13 | $3M |

Grant size

| Grant size | Count | Volume |
|---|---|---|
| Total | 262 | $132M |
| <$50k | 119 | $2M |
| $50k - $500k | 102 | $20M |
| >=$500k | 41 | $109M |
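As a rough illustration of the skew these buckets imply, here is a minimal Python sketch that computes the approximate average grant size per bucket from the (rounded) table totals above. These are back-of-the-envelope figures derived from the table, not official Future Fund numbers.

```python
# Approximate average grant size per bucket, from the rounded totals
# in the table above (so these are back-of-the-envelope figures).
buckets = {                     # label: (count, total volume in $M)
    "<$50k":      (119, 2),
    "$50k-$500k": (102, 20),
    ">=$500k":    (41, 109),
}
for label, (count, total_m) in buckets.items():
    avg_k = total_m * 1000 / count   # average grant size in $k
    print(f"{label:>12}: {count:3d} grants, ~${avg_k:,.0f}k average")
```

Run as-is, this gives averages of roughly $17k, $196k, and $2.7M per bucket: most of the dollar volume sits in a small number of large grants.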

Some takeaways on funding models so far

While trying out these funding models, we've been trying to learn how cost-effective they are, how much of our team's time (and others' time) is required to operate them per unit benefit, how scalable they are, and whether they produce grants and investments that we otherwise wouldn't have known about.

Below are some of the main things we've learned over the last couple of months as we’ve been trying out these funding models.

Other activities

Apart from the grantmaking described above, here are some of the other things going on.

Priorities for the rest of the year

We will continue the regranting program's 6-month trial (until October) and staff-led grantmaking. We currently don’t plan to run another open call in the next couple of months. We will revisit this when we have more capacity and can find a way to run it more efficiently.

Consistent with our original plan for this year, our additional priorities for bold and decisive tests of new funding models include:

Separately, we will also more thoroughly estimate the expected cost-effectiveness of our grants and investments. We are working on a standardized process for this that will help us more robustly evaluate our programs.

Regranting program in more detail

Background

We launched a pilot version of the regranting program with 20 people in late February, and then scaled up the program to include >100 regrantors and >60 grant recommenders in early April. Our hope was to empower a range of interesting, ambitious, and altruistic people to drive funding decisions through a rewarding, low-friction process. We have set aside >$100M for this initial test, which will last until the end of September, at which point we will evaluate how it has gone and decide whether to continue it and what changes to make. As of mid-June, regrantors have made 168 grants and investments, totaling ~$31M.

The basic structure is that regrantors have been given discretionary budgets ranging from $100k to a few million dollars. (More regrantors are towards the lower end and fewer towards the higher end; there is wide variation in budget sizes.) Regrantors submit grant recommendations to the Future Fund, which we screen primarily for downsides, conflicts of interest, and community effects. We typically review and approve regranting submissions within 72 hours.
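To make the workflow concrete, here is a minimal, purely hypothetical Python sketch of that screening step. None of these names, fields, or rules come from the Future Fund; they are illustrative assumptions about what "screening primarily for downsides, conflicts of interest, and community effects, within ~72 hours" might look like in code.

```python
# Hypothetical sketch of the regrant screening step described above.
# All names and fields are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class RegrantRecommendation:
    regrantor: str
    recipient: str
    amount_usd: float
    submitted_at: datetime
    downside_flags: list[str] = field(default_factory=list)
    coi_flags: list[str] = field(default_factory=list)
    community_flags: list[str] = field(default_factory=list)

def screen(rec: RegrantRecommendation) -> str:
    """Approve unless a screen fires; anything flagged gets a closer look."""
    if rec.downside_flags or rec.coi_flags or rec.community_flags:
        return "escalate for closer review"
    return "approve"

# Example: an unflagged recommendation reviewed within the ~72h target.
rec = RegrantRecommendation("Regrantor A", "Example Project", 50_000,
                            submitted_at=datetime(2022, 6, 1, 9, 0))
review_time = datetime(2022, 6, 3, 9, 0)
assert review_time - rec.submitted_at <= timedelta(hours=72)
print(screen(rec))  # -> "approve"
```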

Grant recommenders have access to a streamlined grant recommendation form, and we give their recommendations some deference, but they don't have a discretionary budget. (We wanted to try out multiple versions, and in part randomized participation in the different programs.)

We compensate regrantors for the volume and quality of their grantmaking, including an element based on whether we ourselves later fund the projects they seeded. We also unlock additional discretionary funding when we'd like to see more of what a regrantor has been doing.

Some example regrants

≥$500k grants

Some of the largest grants made so far include:

$50k-$500k grants

Some example grants we found exciting from this category include:

<$50k grants

Some examples of grants that we found exciting in this category: 

Key stats

Areas of interest

| Area | Count | Volume |
|---|---|---|
| Total | 168 | $31M |
| Artificial Intelligence | 60 | $11M |
| Biorisk and Recovery from Catastrophe | 11 | $1M |
| Economic Growth | 5 | $5M |
| Effective Altruism | 41 | $7M |
| Empowering Exceptional People | 10 | $3M |
| Epistemic Institutions | 12 | $4M |
| Great Power Relations | 3 | <$1M |
| Other | 12 | <$1M |
| Research That Can Help Us Improve | 2 | <$1M |
| Space Governance | 5 | <$1M |
| Values and Reflective Processes | 7 | $1M |

Grant size

| Grant size | Count | Volume |
|---|---|---|
| Total | 168 | $31M |
| <$50k | 110 | $2M |
| $50k - $500k | 47 | $7M |
| >=$500k | 11 | $23M |

Expectations vs. reality

Some outcomes we were interested in, and thoughts on how they went:

  1. Finding new promising things to fund that weren’t on our radar
    1. Better than expected. A majority of our regrants seem like opportunities that we wouldn’t have been aware of by default. They also seem about as good as projects we're funding through other mechanisms.
  2. Launching new projects in our areas of interest
    1. Promising signs, but too early to tell. We were quite unsure how many new projects we'd expect to see launched via the regranting program. The main update is that projects are getting launched and the founders look impressive based on their previous work. We haven't seen enough of their new work yet (now much more closely related to our areas of interest) to say whether things are likely to go in a good direction.
  3. Bringing in new people who weren't on our radar and supporting them to work on our areas of interest
    1. Promising signs, but too early to tell. A lot of the movement here is coming from many <$50k grants to people who are learning or developing their skills in our areas of interest, from career transition grants, and from some of the larger projects being launched by founders who weren’t on our radar. We are tentatively excited about this.
  4. Avoiding spending lots of money on things that seem wasteful
    1. Better than expected, but too early to tell. Some of these grants may look clearly low-EV in retrospect, but few look that way to us now. One measure is that ~80% of grants (by dollar volume) are grants we probably would have been happy to make even if we weren't extending significant deference to the regrantor.
  5. Avoiding approving grants that seem ill-advised or net negative
    1. Better than expected. Our screening process weeds out some grants that look harmful or otherwise inappropriate, but there haven't been many we'd describe that way. Our process may also have weeded out worthwhile grants where we were unsure about downsides and chose to proceed cautiously.
  6. Avoiding interpersonal drama (over who was and wasn't selected, what their discretionary budget was, and so on)
    1. Better than expected. We haven't had a lot of drama, though we were pretty careful to set things up to minimize that. (For example, by in part randomizing participation in the program and providing careful communication guidance to regrantors.)
  7. Doing all of the above without a massive time commitment
    1. About as expected (and going well). Our team time spent per dollar moved is less than half that of our open call. We expect this to improve even further in the future because a lot of the time cost here was the fixed cost of designing and setting up the program.

Some more general reflections:

Going forward

We are going to continue with this experiment until October and then more systematically review the process and the quality of the grantmaking. We may also have a more developed sense at that point of how some of the new projects are going. Our current guess is that this program should probably continue in some form. 

Open call in more detail

Background

Our open call for applications was launched on February 28, 2022. We gave people three weeks to submit applications, and we received over 1,700 applications.

As explained above, the basic idea of the open call was: "Let's tell people what we're trying to do, what kinds of things we might be interested in funding, give them a lot of examples of projects they could launch, have an easy and fast application process, and then get the word out with a Twitter blitz." We wrote about the review process here.

We funded 69 applications, totaling $27M. Some stats on acceptance rate:

Some example grants

Key stats

Areas of interest

| Area | Count | Volume |
|---|---|---|
| Total | 69 | $27M |
| Artificial Intelligence | 15 | $5M |
| Biorisk and Recovery from Catastrophe | 16 | $12M |
| Economic Growth | 1 | <$1M |
| Effective Altruism | 10 | $3M |
| Empowering Exceptional People | 5 | $2M |
| Epistemic Institutions | 9 | $3M |
| Great Power Relations | 3 | $1M |
| Other | 4 | $1M |
| Research That Can Help Us Improve | 1 | <$1M |
| Space Governance | 2 | <$1M |
| Values and Reflective Processes | 3 | <$1M |

Grant size

| Grant size | Count | Volume |
|---|---|---|
| Total | 69 | $27M |
| <$50k | 6 | <$1M |
| $50k - $500k | 49 | $12M |
| >=$500k | 14 | $15M |

Expectations vs. reality

Some outcomes we were interested in, and thoughts on how they went:

  1. Getting sympathetic founders from adjacent networks to launch new projects related to our areas of interest
    1. Worse than expected. We thought that maybe there was a range of people who aren't on our radar yet (e.g., tech founder types who have read The Precipice) who would be interested in launching projects in our areas of interest if we had accessible explanations of what we were hoping for, distributed the call widely, and made the funding process easy. But we didn’t really get much of that. Instead, most of the applications we were interested in came from people who were already working in our areas of interest and/or from the effective altruism community. So this part of the experiment performed below our expectations.
  2. Getting people from the effective altruism community to submit ambitious proposals that we wouldn't otherwise have considered funding
    1. Somewhat better than expected. An outcome that would have been at or slightly below expectations would be one where the applications we received were highly duplicative with the applications received by other effective altruism funders (e.g. EA Funds). However, we also received some new, interesting, and large proposals, for example from Ray Amjad, Sage, Global Guessing, Nathan Young, Manifold Markets, and Kevin Esvelt (including two pending applications we're excited about where we're waiting for further details).
  3. Getting people to launch massively scalable projects
    1. Worse than expected. Our encouragement toward massively scalable projects did not seem to have the intended effect. We got some very large requests for funding, sometimes tens or even hundreds of millions of dollars. We appreciate the boldness and ambition. However, we are much more interested in funding projects that start out no larger than they need to be, but can scale massively (without too great a fall in cost-effectiveness) once they show sufficient signs of traction. In short, this got us massive project applications, but not really the massively scalable project applications we were hoping for. There is continued energy toward brainstorming projects of this kind on the EA Forum, and we are excited that the Atlas Fellowship has been founded as one project that clearly meets this description. We'd love to see more in this area!
  4. Getting people to launch projects from our project ideas lists
    1. About as expected (going OK). We funded a number of applications that were closely related to our project ideas lists. We think our project ideas list played a major role in shaping the project in the cases of Apollo Academic Surveys (expert polling for everything), Forecasting Our World In Data (Good Judgment Inc), and a couple of other cases.
  5. Introducing us to projects related to our project ideas and areas of interest that we otherwise may not have considered funding
    1. As expected in biosecurity, worse in other areas. Some biosecurity grants meeting this description include grants for better PPE (Michael Robkin, Greg Liu/Virginia Tech), pathogen sterilization (Justin Mares, Dr. Emilio Alarcon/University of Ottawa), and strengthening the BWC (Michael Jabob/MITRE, Council on Strategic Risks). In AI alignment, a notable grant in this category was an application from Lionel Levine to focus on AI alignment research.
  6. Doing all of the above without it taking a ton of time.
    1. Worse than expected. We think it took us about twice as long as we were hoping for, and the impact per unit time was lower than for the other programs we've experimented with.

Some other reflections:

Going forward

If we were doing this again, there are a number of changes we would consider, including:

Overall the project went somewhat worse than expected, but we still think the ROI on both team time and capital was reasonable, and we’re excited about some of the new projects we funded. 

We currently don’t plan to run another open call in the next few months. We will revisit this when we have more capacity and see if we can find a way to run it more efficiently.

Staff-led grantmaking in more detail

Background

Unlike the open call and regranting, these grants and investments are not a test of a particular potentially highly scalable funding model. These are projects we funded because we became aware of them and thought they were good ideas.

Some example grants

We made 25 grants in this category, totaling ~$73M.

Five of our largest grants/investments were:

Key stats

Areas of interest

| Area | Count | Volume |
|---|---|---|
| Total | 25 | $73M |
| Artificial Intelligence | 1 | $5M |
| Biorisk and Recovery from Catastrophe | 3 | $18M |
| Economic Growth | 3 | $2M |
| Effective Altruism | 10 | $24M |
| Empowering Exceptional People | 3 | $5M |
| Epistemic Institutions | 0 | $0M |
| Great Power Relations | 0 | $0M |
| Other | 1 | $15M |
| Research That Can Help Us Improve | 1 | $1M |
| Space Governance | 0 | $0M |
| Values and Reflective Processes | 3 | $2M |

Grant size

| Grant size | Count | Volume |
|---|---|---|
| Total | 25 | $73M |
| <$50k | 3 | <$1M |
| $50k - $500k | 6 | $2M |
| >=$500k | 16 | $71M |

Expectations and reflections

Coming in, our expectation was that there would be some low-hanging fruit to pick here, that the grants would be pretty good, that the best ones would largely get funded anyway, that the funding stream wouldn't be massively scalable, and that the return on time from these grants would be pretty good. Our experience has generally been pretty consistent with that. (This is unsurprising because it's continuous with things that Nick had a lot of experience with at Open Philanthropy.)

Probably the most distinctive grant from this set is our grant to Longview Philanthropy, which is using the funds for its grantmaking in global priorities research, nuclear weapons policy, and other areas. (This is another regranting experiment, in this case regranting via a grantmaking organization rather than via individuals as in our main regranting program.) We're interested to see how the experiment goes!

Going forward

We'll continue with staff-led grantmaking in the background, but most of our focus will be on testing new funding models.

Conclusion

There's much in the above update we find exciting, including:

We feel like we're learning a lot from the process, and we are also looking forward to seeing what else we learn as we test prizes and try new approaches to proactively launching new projects. 

Finally: thank you to everybody who applied to us or otherwise engaged with one of our programs! We’re grateful for your work to help humanity flourish. We also deeply appreciate the help we are getting from other folks at FTX, colleagues at Open Philanthropy, expert reviewers, our regrantors, and other collaborators, whom we rely on extensively. It is a privilege to work with all of you!


Locke_USA @ 2022-07-01T19:05 (+79)

Thanks for the detailed update!

There was one expectation / takeaway that I was surprised about.

Getting sympathetic founders from adjacent networks to launch new projects related to our areas of interest - Worse than expected. We thought that maybe there was a range of people who aren't on our radar yet (e.g., tech founder types who have read The Precipice) who would be interested in launching projects in our areas of interest if we had accessible explanations of what we were hoping for, distributed the call widely, and made the funding process easy. But we didn’t really get much of that. Instead, most of the applications we were interested in came from people who were already working in our areas of interest and/or from the effective altruism community. So this part of the experiment performed below our expectations.

You mentioned the call was open for three weeks. Would that have been sufficient for people who are not already deeply embedded in EA networks to formulate a coherent and fundable idea (especially if they currently have full-time jobs)? It seems likely that this kind of "get people to launch new projects" effect would require more runway. If so, the data from this round shouldn't update one's priors very much on this question.

Minh Nguyen @ 2022-07-04T17:17 (+8)

I agree. I only found out about the FTX Future Fund in mid-April, despite being connected to EA, being connected to the crypto community (following SBF on Twitter), and working on an application for the 776 Fellowship myself. I finally found out about it because Alexis Ohanian randomly retweeted a Future Fund tweet.

Clearly, they still had almost 2k applications, but I don't think it was an easy find for anyone not actively looking. I would never have found it if I had been busy actively working on a project, because I don't check most of the channels this was advertised on.

DonyChristie @ 2022-07-01T03:18 (+59)

We appreciate you! ❤️

Parker_Whitfill @ 2022-07-01T17:06 (+35)

Since it seems like a major goal of the Future Fund is to experiment and gain information on types of philanthropy: how much data collection and causal inference are you doing, or planning to do, on the grant evaluations?

Here are some ideas I quickly came up with that might be interesting. 

  1. If you decide whether to fund marginal projects by votes or some scoring system, you could later assess the impact of funding projects using a regression discontinuity design.
  2. You mentioned that there is some randomness in who you used as regranters. This has some similarities to the random assignment of judges that is frequently used in applied econ. You could use this to infer whether certain features of grantmakers cause better grants (e.g., some grantmakers might tend to believe in providing higher amounts of funding, so you could assess whether this more gung-ho attitude leads to better grants, etc.)
  3. Explicitly introduce some randomness into whether you approve a grant or not.

In all these cases, you'd need to assess grant applications on impact ex post a few years later, including the ones you didn't fund. These strategies would then let you assess the causal impact of your grants. (A minimal sketch of idea 1 appears below.)
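To make idea 1 concrete, here is a minimal Python sketch of a regression discontinuity design on simulated data. Everything here is an assumption for illustration: the score scale, the funding cutoff, the bandwidth, and the outcome model are invented, and nothing reflects the Future Fund's actual process or data.

```python
# Regression discontinuity sketch: applications are scored, funded above
# a cutoff, and re-assessed for impact a few years later. All simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
score = rng.uniform(0, 10, n)           # reviewer score at application time
cutoff = 6.0                             # hypothetical funding threshold
funded = (score >= cutoff).astype(float)
true_effect = 1.5                        # simulated causal effect of funding
# Ex-post impact: varies smoothly with score, jumps if funded.
impact = 0.8 * score + true_effect * funded + rng.normal(0, 1, n)

# Local linear regression within a bandwidth around the cutoff,
# allowing different slopes on each side.
bw = 1.5
near = np.abs(score - cutoff) <= bw
centered = score[near] - cutoff
X = sm.add_constant(np.column_stack([funded[near],
                                     centered,
                                     centered * funded[near]]))
fit = sm.OLS(impact[near], X).fit()
print(f"Estimated effect of funding at the cutoff: {fit.params[1]:.2f}")
```

The key design assumption is that applications just above and just below the cutoff are comparable, so the jump in ex-post impact at the cutoff estimates the causal effect of funding for marginal projects.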

Luke Freeman @ 2022-07-01T07:54 (+28)

Thank you for such a detailed and transparent post! It's really exciting to see experimentation in funding models as Future Fund enters the ecosystem. (It's also great to see a bunch of promising things getting the resources they need!)

I've found that the project ideas, areas of interest and grants/regrants databases are also especially useful resources in helping people to think about how they might best contribute! I've shared these multiple times when speaking with very promising people who are relatively cause neutral and just want to do as much good as they can given their specific skills & context.

BrianTan @ 2022-07-01T11:39 (+2)

Thanks for sharing the database links Luke! I wasn't aware FTX had that, but it definitely makes sense that they do.

DominikPeters @ 2022-07-01T09:09 (+19)

"Our sense is we’re able to generate >2x more value per time with our other activities [than with open calls]": does this number include an estimate of the time spent by regrantors? (Even if it doesn't, the 2x figure is still interesting.)

Owen Cotton-Barratt @ 2022-07-01T21:10 (+8)

Either way it looks pretty hard to have a real apples-to-apples comparison, since presumably the open call takes significantly more time from prospective grantees (but you wouldn't want to count that the same as grantmaker time).

lukeprog @ 2022-07-01T07:19 (+18)

Very exciting!

jtm @ 2022-07-01T05:59 (+12)

Thanks for taking the time to write this up!

quinn @ 2022-07-03T01:24 (+5)

Is there a way to access a list of regrantors, maybe indexed by problem area? Any reason I can't just query "show me the email address of every FTX regrantor who is interested in epistemic institutions" for instance? 

Jeff Kaufman @ 2022-07-03T02:12 (+41)

My guesses:

  1. Regranting is intended as a way to let people with local knowledge apply it to directing funds. This is different from just deputizing grantmakers.

  2. If you made the list public I'd expect the regranters to be overwhelmed by people seeking grants, and generally find it pretty frustrating. For example, would your next step be to send emails to each of those addresses? ;)

RyanCarey @ 2022-07-03T09:49 (+27)

A public list of regranters makes the system very gameable and vulnerable to individual granters unilaterally funding negative value projects.

aogara @ 2022-07-03T05:18 (+6)

This explanation makes sense to me, but I wonder if there is a better middle ground where regrantors benefit from a degree of publicity.

This comes from personal experience. I received an FTX regrant for upskilling in technical AI safety research, as did several other students in positions similar to mine. I did not know my regrantor personally, but rather messaged them on the EA Forum and hopped on a call to discuss careers in AI safety. They saw that I fit the profile of “potential AI safety technical researcher” and very quickly funded me without an extended vetting process. I would not have received my grant if (a) I didn’t often message people on the EA Forum or (b) I didn’t get on a call with a stranger without a clear goal in mind, both of which seem like poor screening criteria.

Perhaps it was an effective screen for “entrepreneurial” candidates, but I expect that an EA Forum post requesting applications could have produced several more grants of similar quality without overwhelming my regrantor. Regranting via personal connections reduces the pool of potential grantees to people who have thoroughly networked themselves within EA, which privileges paths like “move to the Bay” at the expense of paths like “go to your cheap state school with no EA group and study hard”. It’s a difficult line to walk and I’m not a grantmaker, but I think more public access might improve both the equity and quality of FTX regrants.

Edited to add: Given LTFF’s history of funding similar people and the drawbacks of regrantor publicity, FTX’s anonymity policy does seem reasonable to me. Appreciate the pushback.

Linch @ 2022-07-03T21:31 (+33)

People in that position (or who know people who are): please consider applying to the Long-term Future Fund*. LTFF is excited to receive upskilling applications from people who are potentially great at technical AI safety research and/or other longtermist priority areas, and they have more institutional capacity (including a network of advisors) to evaluate such proposals across the board than many regrantors individually have.

* For newer onlookers, please note that LTFF is under EA Funds and is not directly affiliated with the FTX Future Fund, despite the (perhaps confusingly) similar names.

Gavin @ 2022-07-03T14:53 (+18)

Besides the huge downsides Ryan mentioned (imagine someone reading your whole blog to better craft the perfect adversarially fundable project), publicity would have some toxic effects for the regrantor. 

For instance, all new social interactions would have an ulterior interpretation ("they're sucking up for cash"). In a personal/professional soup like EA that could be maddening. One former grantmaker told me that the degree of sucking-up they got was part of why they moved on. I'm unusually sensitive to such things; I would probably decline to be a public grantmaker. 

Privacy also has risks (nepotism, the excess zero-sum social investment in the bloody Bay you mention, insufficient accountability), but those seem smaller to me. But private regrantors were previously balanced out by the open call channel, so it'd be good to hear from FF about how they intend to seek new or peripheral applicants.

Software makes compromise pretty easy though. I quite like the idea of a regrantor publishing an anon post explaining what they're looking for, with a form attached. 

Linch @ 2022-07-03T21:56 (+2)

I share this fear but I don't know if this is clearly stronger than other dynamics in EA when one party has something the other wants (e.g. prestige, network, advice, employment).

Gavin @ 2022-07-03T23:57 (+5)

Also don't know but I guess worse here, since it's your explicit job to listen to applicants, where the usual requests for introductions and attention are rarely part of anyone's job description.

quinn @ 2022-07-03T13:57 (+2)

and generally find it pretty frustrating. For example, would your next step be to send emails to each of those addresses? ;)

I guess it's not realistic to litmus test individuals about their cold-emailing practices and their seriousness about the problem area they claim to be working in, before giving them access to the list.  

I would expect the cold emailing advice given by Y Combinator to result in emails that do not frustrate regrantors. 

Florence @ 2022-07-01T22:33 (+3)

I love the transparency of this post! 

Also, I particularly like how regranting utilizes the value of local knowledge.

"regrantors and grant recommenders could exploit local knowledge and diverse networks to make promising projects move forward that we might not have known about or had time to investigate ourselves."


kris.reiser @ 2022-07-04T17:03 (+1)

Hi! Thank you for the detailed update; it is very helpful. Quick question: if an application was submitted to the Open Call, with confirmation, and there has not been any communication at this point, has the application been denied? Thank you for any further clarification you can give.

Khorton @ 2022-07-04T18:59 (+3)

If they haven't responded yet, they either lost it or they responded and the reply got caught in your spam filters. You should definitely re-email; it's been months since they gave decisions.

kris.reiser @ 2022-07-05T07:37 (+1)

Good advice. I have been checking spam; the confirmation didn't go there when it was originally sent, but I have been checking just in case. The difficult part is that there is no way to re-email: the submission was done via a Google form that has a no-reply email...  We don't expect feedback, but would like to close the chapter on the submission, so to speak. I have been reading the comments and assumed rejections were sent to others, so I was wondering where ours may be. Thank you for the advice all the same.

ketanrama @ 2022-07-05T23:03 (+1)

Hi Kris - I've sent you a DM to figure out what's going on.