Long-Term Future Fund: Ask Us Anything!

By AdamGleave @ 2020-12-03T13:44 (+89)

The Long-Term Future Fund (LTFF) is one of the EA Funds. Between Friday Dec 4th and Monday Dec 7th, we'll be available to answer any questions you have about the fund – we look forward to hearing from all of you!

The LTFF aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.

Grant recommendations are made by a team of volunteer Fund Managers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave and Asya Bergal. We are also fortunate to be advised by Nick Beckstead and Nicole Ross. You can read our bios here. Jonas Vollmer, who is heading EA Funds, also provides occasional advice to the Fund.

You can read about how we choose grants here. Our previous grant decisions and rationale are described in our payout reports. We'd welcome discussion and questions regarding our grant decisions, but to keep discussion in one place, please post comments related to our most recent grant round in this post.

Please ask any questions you like about the fund, including but not limited to:

We'd also welcome more free-form discussion, such as:

We look forward to hearing your questions and ideas!


riceissa @ 2020-12-04T04:27 (+61)

I am wondering how the fund managers are thinking more long-term about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn't been much renewed granting to independent individuals and projects (i.e. granting for a second or third time to grantees who have previously already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so what do they tend to do?

I think LTFF is doing something valuable by giving people the freedom to not "sell out" to more traditional or mass-appeal funding sources (e.g. academia, established orgs, Patreon). I'm worried about a situation where receiving a grant from LTFF isn't enough to be sustainable, so that people go back to doing more "safe" things like working in academia or at an established org.

Any thoughts on this topic?

AdamGleave @ 2020-12-04T20:00 (+22)

The LTFF is happy to renew grants so long as the applicant has been making strong progress and we believe working independently continues to be the best option for them. Examples of renewals in this round include Robert Miles, who we first funded in April 2019, and Joe Collman, who we funded in November 2019. In particular, we'd be happy to be the #1 funding source of a new EA org for several years (subject to the budget constraints Oliver mentions in his reply).

Many of the grants we make to individuals are for career transitions, such as someone retraining from one research field to another, or for one-off projects. So I would expect most grants to not be renewals. That said, the bar for renewals does tend to be higher. This is because we pursue a hits-based giving approach, so are willing to fund projects that are likely not to work out -- but of course will not want to renew the grant if it is clearly not working.

I think being a risk-tolerant funder is particularly valuable since most employers are, quite rightly, risk-averse. Firing people tends to be harmful to morale; internships or probation periods can help, but take a lot of supervisory time. This means people who might be a great hire but are high-variance often don't get hired. Funding them for a period of time to do independent work can de-risk the grantee, since they'll have a more substantial portfolio to show.

The level of excitement about long-term independent work varies between fund managers. I tend to think it's hard for people to do great work independently. I'm still open to funding it, but I want to see a compelling case that there's not an organisation that would be a good home for the applicant. Some other fund managers are more concerned by perverse incentives in established organisations (especially academia), so are more willing to fund independent research.

I'd be interested to hear thoughts on how we could better support our grantees here. We do sometimes forward applications on to other funders (with the applicant's permission), but don't have any systematic program to secure further funding (beyond applying for renewals). We could try something like "demo days" popular in the VC world, but I'm not sure there's a large enough ecosystem of potential funders for this to be worth it.

Linda Linsefors @ 2021-02-04T12:02 (+2)

I want to see a compelling case that there's not an organisation that would be a good home for the applicant.


My impression is that it is not possible for everyone who wants to help with the long term to get hired by an org, for the simple reason that there are not enough openings at those orgs. At least in AI Safety, all entry-level jobs are very competitive, meaning that not getting in is not a strong signal that one could not have done well there.

Do you disagree with this?

abergal @ 2021-03-01T08:31 (+3)

I can't respond for Adam, but just wanted to say that I personally agree with you, which is one of the reasons I'm currently excited about funding independent work.

AdamGleave @ 2021-03-01T21:26 (+6)

Thanks for picking up the thread here Asya! I think I largely agree with this, especially about the competitiveness in this space. For example, with AI PhD applications, I often see extremely talented people get rejected who I'm sure would have got an offer a few years ago.

I'm pretty happy to see the LTFF offering effectively "bridge" funding for people who don't quite meet the hiring bar yet, but I think are likely to in the next few years. However, I'd be hesitant about heading towards a large fraction of people working independently long-term. I think there are huge advantages from the structure and mentorship an org can provide. If orgs aren't scaling up fast enough, then I'd prefer to focus on trying to help speed that up.

The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers. Efforts like LessWrong and the Alignment Forum help in terms of providing infrastructure. But right now it still seems much worse than working for an org, especially if you want to go down any of the more traditional career paths later. But I'd love to be proven wrong here.

Linda Linsefors @ 2021-03-04T03:15 (+8)

But I'd love to be proven wrong here.

I claim we have proof of concept. The people who started the existing AI Safety research orgs did not have AI Safety mentors. Current independent researchers have more support than they did. In a way, an org is just a crystallized collaboration of previously independent researchers.

I think that there are some PR reasons why it would be good if most AI Safety researchers were part of academia or other respectable orgs (e.g. DeepMind). But I also think it is good to have a minority of researchers who are disconnected from the particular pressures of that environment.

However, being part of academia is not the same as being part of an AI Safety org. MIRI people are not part of academia, and someone doing AI Safety research as part of their PhD in a "normal" (not AI Safety focused) program is sort of an independent researcher.
 

The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers.

We are working on that. I'm not optimistic about current orgs keeping up with the growth of the field, and I don't think it is healthy for the career path to be too competitive, since this will lead to Goodharting on career incentives. But I do think a looser structure, built on personal connections rather than formal org employment, can grow in a much more flexible way, and we are experimenting with various methods to make this happen.

Habryka @ 2020-12-04T19:19 (+20)

Yeah, I am also pretty worried about this. I don't think we've figured out a great solution to this yet. 

I think we don't really have sufficient capacity to evaluate organizations on an ongoing basis and provide good accountability. Like, if a new organization were to be funded by us and then grow to a budget of $1M a year, I don't feel like we have the capacity to evaluate their output and impact sufficiently well to justify giving them $1M each year (or even just $500k). 

Our current evaluation process feels pretty good for smaller projects, and for granting to established organizations that have other active evaluators looking into them that we can talk to, but it doesn't feel well-suited to larger organizations that haven't had existing evaluations done on them (there is a lot of due diligence work there that I think requires more staff capacity than we have).

I also think the general direction of the LTFF specializing in something more like venture funding, with other funders stepping in for more established organizations, feels pretty good to me. I do think the current process has a lot of unnecessary uncertainty and risk in it, and I would like to work on that. So one thing I've been trying to get better at is predicting which projects could get long-term funding from other funders, and trying to help projects get to a place where they can receive long-term funding from more than just the LTFF.

Capital-wise, I also think that we don't really have the funding to support organizations over longer periods of time. I.e., supporting 3 organizations at $500k a year would take up almost all of our budget, and I think it's not worth trading that off against the other smaller grants we've historically been making. But it is one of the most promising ways I would want to use additional funds we could get.

AdamGleave @ 2020-12-04T20:05 (+10)

I agree with @Habryka that our current process is relatively lightweight, which is good for small grants but doesn't provide adequate accountability for large grants. I think I'm more optimistic about the LTFF being able to grow into this role. There's a reasonable number of people who we might be excited about working as fund managers -- the main thing that's held us back from growing the team is the coordination overhead as you add more individuals. But we could potentially split the fund into two sub-teams that specialize in smaller and larger grants (with different evaluation processes), or even create a separate fund in EA Funds that focuses on more established organisations. Nothing is certain yet, but it's a problem we're interested in addressing.

Habryka @ 2020-12-04T20:45 (+7)

Ah yeah, I also think that if the opportunity presents itself we could grow into this role a good amount. Though on the margin I think it's more likely we'll invest even more in early-stage expertise and maybe do more active early-stage grantmaking.

gavintaylor @ 2020-12-06T18:48 (+3)

Just to add a comment regarding sustainable funding for independent researchers. There haven't previously been many options available for this; however, there is a growing number of virtual research institutes through which affiliated researchers can apply to academic funding agencies. The virtual institute can then administer the grant for a researcher (usually for much lower overheads than a traditional institution), while they effectively still do independent work. The Ronin Institute administers funding from US granters, and I am a Board member at IGDORE, which can receive funding from some European granters. That said, it may still be quite difficult for individuals to secure academic funding without having some traditional academic credentials (PhD, publications, etc.).

Linda Linsefors @ 2021-02-04T11:52 (+3)

What do you mean by "There haven't previously been many options available"? What is stopping you from just giving people money? Why do you need an institute as an intermediary?

steve2152 @ 2021-02-04T12:39 (+4)

My understanding is that (1) to deal with the paperwork etc. for grants from governments or government-like bureaucratic institutions, you need to be part of an institution that's done it before; (2) if the grantor is a nonprofit, they have regulations about how they can use their money while maintaining nonprofit status, and it's very easy for them to forward the money to a different nonprofit institution, but may be difficult or impossible for them to forward the money to an individual. If it is possible to just get a check as an individual, I imagine that that's the best option. Unless there are other considerations I don't know about.

Btw Theiss is another US organization in this space.

gavintaylor @ 2021-03-02T18:38 (+3)

If it is possible to just get a check as an individual, I imagine that that's the best option.

 

One other benefit of a virtual research institute is that they can act as formal employers for independent researchers, which may be desirable for things like receiving healthcare coverage or welfare benefits.

 

Thanks for mentioning Theiss, I didn't know of them before. Their website doesn't look so active now, but it's good to know about the  history of the independent research scene.

steve2152 @ 2021-03-02T22:41 (+2)

Theiss was very much active as of December 2020. They've just been recruiting so successfully through word-of-mouth that they haven't gotten around to updating the website.

I don't think healthcare and taxes undermine what I said, at least not for me personally. For healthcare, individuals can buy health insurance too. For taxes, self-employed people need to pay self-employment tax, but employees and employers both have to pay payroll tax which adds up to a similar amount, and then you lose the QBI deduction (this is all USA-specific), so I think you come out behind even before you account for institutional overhead, and certainly after. Or at least that's what I found when I ran the numbers for me personally. It may be dependent on income bracket or country so I don't want to over-generalize...

That's all assuming that the goal is to minimize the amount of grant money you're asking for, while holding fixed after-tax take-home pay. If your goal is to minimize hassle, for example, and you can just apply for a bit more money to compensate, then by all means join an institution, and avoid the hassle of having to research health care plans and self-employment tax deductions and so on.

I could be wrong or misunderstanding things, to be clear. I recently tried to figure this out for my own project but might have messed up, and as I mentioned, different income brackets and regions may differ. Happy to talk more. :-)

Jonas Vollmer @ 2021-02-04T15:28 (+2)

+1.

riceissa @ 2020-12-04T04:43 (+52)

In the April 2020 payout report, Oliver Habryka wrote:

I’ve also decided to reduce my time investment in the Long-Term Future Fund since I’ve become less excited about the value that the fund can provide at the margin (for a variety of reasons, which I also hope to have time to expand on at some point).

I'm curious to hear more about this (either from Oliver or any of the other fund managers).

anonymous_ea @ 2020-12-05T23:29 (+18)

Regardless of whatever happens, I've benefited greatly from all the effort you've put into your public writing on the fund, Oliver.

Habryka @ 2020-12-06T04:46 (+17)

Thank you! 

I am planning to respond to this in more depth, but it might take me a few days longer, since I want to do a good job with it. So please forgive me if I don't get around to this before the end of the AMA.

WilliamKiely @ 2021-02-11T00:08 (+7)

Any update on this?

Habryka @ 2021-02-11T06:47 (+11)

I wrote a long rant that I shared internally that was pretty far from publishable, but then a lot of things changed, and I tried editing it for a bit, but more things kept changing. Enough that at some point I gave up on trying to edit my document to keep up with the new changes, and decided to instead just wait until things settle down, so I can write something that isn't going to be super confusing.

Sorry for the confusion here. At any given point it seemed like things would settle down soon, so that I would have a more consistent opinion.

Overall, a lot of the changes have been great, and I am currently finding myself more excited about the LTFF than I have in a long time. But a bunch of decisions are still to be made, so I will hold off on writing a bit longer. Sorry again for the delay. 

RyanCarey @ 2020-12-04T15:57 (+34)

If you had $1B, and you weren't allowed to  give it to other grantmakers or fund prioritisation research, where might you allocate it? 

Habryka @ 2020-12-06T05:07 (+38)

$1B is a lot. It also gets really hard if I don't get to distribute it to other grantmakers. Here are some really random guesses. Please don't hold me to this, I have thought about this topic some, but not under these specific constraints, so some of my ideas will probably be dumb.

My guess is I would identify the top 20 people who seem to be doing the best work around long-term-future stuff, and give each of them at least $10M, which would allow each of them to reliably build an exoskeleton around themselves and increase their output.

My guess is that I would then invest a good chunk more into scaling up LessWrong and the EA Forum, and make it so that I could distribute funds to researchers working primarily on those forums (while building a system for peer evaluation to keep researchers accountable). My guess is this could consume another $100M over the next 10 years or so. 

I expect it would take me at least a decade to distribute that much money. I would definitely continue taking in applications for organizations and projects from people and kind of just straightforwardly scale up LTFF spending of the same type, which I think could take another $40M over the next decade.

I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to Sci-Hub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class of people who seem to me to have done heroic things, but haven't even been remotely well enough rewarded (like, it seems obvious that I would have wanted Einstein to die having at least a few million in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel prize). My guess is one could spend another $100M this way.

It seems pretty plausible that one should consider buying a large newspaper with that money and optimizing it for actual careful analysis without the need for ads. This seems pretty hard, but also, I really don't like the modern news landscape, and it doesn't take that much money to run even a large newspaper like the Washington Post, so I think this is pretty doable. But I do think it has the potential to take a good chunk of the $1B, so I am pretty unsure whether I would do it, even if you were to force me to make a call right now (for reference, the Washington Post was acquired for $250M).

I would of course just pay my fair share of funding all the existing good organizations that currently get funded by Open Phil. My guess is that would take about $100M over the next decade.

I would probably keep a substantial chunk in reserve for worlds where some kind of quick pivotal action is needed that requires a lot of funds. Like, I don't know, a bunch of people pooling money for a last-minute acquisition of DeepMind or something to prevent an acute AI risk threat.

If I had the money right now I would probably pay someone to run a $100K-$1M study of the effects of Vitamin D on COVID. It's really embarrassing that we don't have more data on that yet, even though it has such a large effect.

Maybe I would try to do something crazy like try to get permission to establish a new city in some U.S. state that I would try to make into a semi-libertarian utopia and get all the good people to move there? But like, that sure doesn't seem like it would straightforwardly work out. Also, seems like it would cost substantially more money than $1B.

anonymous_ea @ 2020-12-11T02:53 (+2)

I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to Sci-Hub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class of people who seem to me to have done heroic things, but haven't even been remotely well enough rewarded (like, it seems obvious that I would have wanted Einstein to die having at least a few million in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel prize). My guess is one could spend another $100M this way.

 

I'm really surprised by this; I think things like the Future of Life award are good, but if I got $1B I would definitely not think about spending potentially $100m on similar awards as an EA endeavor. Can you say more about this? Why do you think this is so valuable? 

Habryka @ 2020-12-11T03:20 (+7)

It seems to me that one of the biggest problems with the world is that only a small fraction of people who do a really large amount of good get much reward for it. It seems likely that this prevents many people from pursuing paths that do a lot of good with their lives.

My favorite way of solving this kind of issue is with Impact Certificates, which have a decent amount of writing on them, and you can think of the above as just buying about $100M of impact certificates for the relevant people (in practice I expect that if you get a good impact certificate market going, which is a big if, you could productively spend substantially more than $1B).

AdamGleave @ 2020-12-05T13:59 (+25)

The cop-out answer of course is to say we'd grow the fund team or, if that isn't an option, we'd all start working full-time on the LTFF and spend a lot more time thinking about it.

If there's some eccentric billionaire who will only give away their money right now to whatever I personally recommend, then off the top of my head:

  1. For any long-termist org who (a) I'd usually want to fund at a small scale; and (b) whose leadership's judgement I'd trust, I'd give them as much money as they can plausibly make use of in the next 10 years. I expect that even organisations that are not usually considered funding constrained could probably produce 10-20% extra impact if they invested twice as much in their staff (let them rent really close to the office, pay for PAs or other assistants to save time, etc).

    I also think there can be value in having an endowment: it lets the organisation make longer-term plans, can raise the organisation's prestige, and some things (like creating a professorship) often require endowments.

    However, I do think there are some cases where it can be negative: some organisations benefit a lot from the accountability of donors, and being too well-funded can attract the wrong people, like with the resource curse. So I'd be selective here, but more in terms of "do I trust the board and leadership with a blank cheque" than "at a detailed level, do I think this org is doing the most valuable work?"

  2. I'd also be tempted to throw a lot of money at interventions that seem shovel-ready and robustly positive, even if they wouldn't normally be something I'd be excited about. For example, I'd feel reasonably good about funding the CES for $10-20m, and probably similar sized grants to Nuclear Threat Initiative, etc.

  3. This is more speculative, but I'd be tempted to try and become the go-to angel investor or VC fund for AI startups. I think I'm in a reasonably good position for this now, being an AI researcher and also having a finance background, and having a billion dollars would help out here.

    The goal wouldn't be to make money (which is good, since most VCs don't seem to do that well!). But being an early investor gives a lot of leverage over a company's direction. Industry is a huge player in fundamental AI research, and in particular I would 85% predict the first transformative AI is developed by an industry lab, not academia. Having a board seat and early insight into a start-up that is about to develop the first transformative AI seems hugely valuable. Of course, there's no guarantee I'd manage this -- perhaps I miss that startup, or a national lab or pre-existing industrial lab (Google/Facebook/Huawei/etc.) develops the technologies first. But start-ups are responsible for a big fraction of disruptive technology, so it's a reasonable bet.

Linch @ 2020-12-06T20:01 (+9)

But start-ups are responsible for a big fraction of disruptive technology, so it's a reasonable bet.

What's your all-things-considered view on the probability that the first transformative AI (defined by your lights) will be developed by a company that, as of December 2020, either a) does not exist or b) has not gone through Series A?

(Don't take too much time on this question, I just want to see a gut check plus a few sentences if possible).

AdamGleave @ 2020-12-07T09:50 (+15)

About 40%. This includes startups that later get acquired, provided the parent company would not have been the first to develop transformative AI if the acquisition had not taken place. I think this is probably my modal prediction: the big tech companies are effectively themselves huge VCs, and their infrastructure provides a comparative advantage over a startup trying to do it entirely solo.

I think I put around 40% on it being a company that does already exist, and 20% on "other" (academia, national labs, etc).

Conditioning on transformative AI being developed in the next 20 years, my probability for a new company developing it is a lot lower -- maybe 20%? So part of this is just me not expecting transformative AI particularly soon, and tech company half-lives being plausibly quite short. Google is only 21 years old!

Linch @ 2020-12-08T20:00 (+6)

Thanks a lot, really appreciate your thoughts here!

Cullen_OKeefe @ 2020-12-04T00:52 (+32)

What processes do you have for monitoring the outcome/impact of grants, especially grants to individuals?

AdamGleave @ 2020-12-04T19:58 (+17)

As part of CEA's due diligence process, all grantees must submit progress reports documenting how they've spent their money. If a grantee applies for renewal, we'll perform a detailed evaluation of their past work. Additionally, we informally look back at past grants, focusing on grants that were controversial at the time, or seem to have been particularly good or bad.

I’d like us to be more systematic in our grant evaluation, and this is something we're discussing. One problem is that many of the grants we make are quite small: so it just isn't cost-effective for us to evaluate all our grants in detail. Because of this, any more detailed evaluation we perform would have to be on a subset of grants.

I view there being two main benefits of evaluation: 1) improving future grant decisions; 2) holding the fund accountable. Point 1) would suggest choosing grants we expect to be particularly informative: for example, those where fund managers disagreed internally, or those which we were particularly excited about and would like to replicate. Point 2) would suggest focusing on grants that were controversial amongst donors, or where there were potential conflicts of interest.

It's important to note that other things help with these points, too. For 1), improving our grant-making process, we are working on sharing best practices between the different EA Funds. For 2), we are seeking to increase transparency about our internal processes, such as in this doc (which we will soon add as an FAQ entry). Since evaluation is time-consuming, in the short term we are likely to evaluate only a small percentage of our grants, though we may scale this up as fund capacity grows.

MichaelA @ 2020-12-05T03:17 (+7)

Interesting question and answer!

Do  the LTFF fund managers make forecasts about potential outcomes of grants? 

And/or do you write down in advance what sort of proxies you'd want to see from this grant after x amount of time? (E.g., what you'd want to see to feel that this had been a big success and that similar grant applications should be viewed (even) more positively in future, or that it would be worth renewing the grant if the grantee applied again.)

One reason that that first question came to mind was that I previously read a 2016 Open Phil post that states:

  • Both the Open Philanthropy Project and GiveWell recently began to make probabilistic forecasts about our grants. For the Open Philanthropy Project, see e.g. our forecasts about recent grants to Philip Tetlock and CIWF. For GiveWell, see e.g. forecasts about recent grants to Evidence Action and IPA. We also make and track some additional grant-related forecasts privately. The idea here is to be able to measure our accuracy later, as those predictions come true or are falsified, and perhaps to improve our accuracy from past experience. So far, we are simply encouraging predictions without putting much effort into ensuring their later measurability.
  • We’re going to experiment with some forecasting sessions led by an experienced “forecast facilitator” - someone who helps elicit forecasts from people about the work they’re doing, in a way that tries to be as informative and helpful as possible. This might improve the forecasts mentioned in the previous bullet point.

(I don't know whether, how, and how much Open Phil and GiveWell still do things like this.)

Habryka @ 2020-12-05T03:23 (+8)

We haven't historically done this. As someone who has tried pretty hard to incorporate forecasting into my work at LessWrong, my sense is that it actually takes a lot of time until you can get a group of 5 relatively disagreeable people to agree on an operationalization that makes sense to everyone, and so this isn't really super feasible to do for lots of grants. I've made forecasts for LessWrong, and usually creating a set of forecasts that actually feels useful in assessing our performance takes me at least 5-10 hours.

It's possible that other people are much better at this than I am, but this makes me kind of hesitant to use at least classical forecasting methods as part of LTFF evaluation. 

MichaelA @ 2020-12-05T03:39 (+9)

Thanks for that answer.

It seems plausible to me that a useful version of forecasting grant outcomes would be too time-consuming to be worthwhile. (I don't really have a strong stance on the matter currently.) And your experience with useful forecasting for LessWrong work being very time-consuming definitely seems like relevant data.

But this part of your answer confused me:

my sense is that it actually takes a lot of time until you can get a group of 5 relatively disagreeable people to agree on an operationalization that makes sense to everyone, and so this isn't really super feasible to do for lots of grants

Naively, I'd have thought that, if that was a major obstacle, you could just have a bunch of separate operationalisations, and people can forecast on whichever ones they want to forecast on. If, later, some or all operationalisations do indeed seem to have been too flawed for it to be useful to compare reality to them, assess calibration, etc., you could just not do those things for those operationalisations/that grant. 

(Note that I'm not necessarily imagining these forecasts being made public in advance or afterwards. They could be engaged in internally to the extent that makes sense - sometimes ignoring them if that seems appropriate in a given case.)

Is there a reason I'm missing for why this doesn't work? 

Or was the point about difficulty of agreeing on an operationalisation really meant just as evidence of how useful operationalisations are hard to generate, as opposed to the disagreement itself being the obstacle?

Linch @ 2021-02-06T00:03 (+4)

I think the most lightweight-but-still-useful forecasting operationalization I'd be excited about is something like
 

12/24/120 months from now,  will I still be very excited about this grant? 

12/24/120 months from now, will I be extremely excited about this grant?

This gets at whether people think it's a good idea ex post, and also (if people are well-calibrated) can quantify whether people are insufficiently or too risk/ambiguity-averse, in the classic sense of the term.

Jonas Vollmer @ 2021-02-12T15:12 (+4)

This seems helpful to assess fund managers' calibration and improve their own thinking and decision-making. It's less likely to be useful for communicating their views transparently to one another, or to the community, and it's susceptible to post-hoc rationalization. I'd prefer an oracle external to the fund, like "12 months from now, will X have a ≥7/10 excitement about this grant on a 1-10 scale?", where X is a person trusted by the fund managers who will likely know about the project anyway, such that the cost to resolve the forecast is small.

I plan to encourage the funds to experiment with something like this going forward.

Linch @ 2021-02-12T19:34 (+2)

I agree that your proposed operationalization is better for the stated goals, assuming similar levels of overhead.

MichaelA @ 2020-12-05T03:43 (+2)

Just to make sure I'm understanding, are you also indicating that the LTFF doesn't write down in advance what sort of proxies you'd want to see from this grant after x amount of time? And that you think the same challenges with doing useful forecasting for your LessWrong work would also apply to that?

These two things (forecasts and proxies) definitely seem related, and both would involve challenges in operationalising things. But they also seem meaningfully different.

I'd also think that, in evaluating a grant, I might find it useful to partly think in terms of "What would I like to see from this grantee x months/years from now? What sorts of outputs or outcomes would make me update more in favour of renewing this grant - if that's requested - and making similar grants in future?"

Habryka @ 2020-12-05T07:17 (+6)

We've definitely written informally things like "this is what would convince me that this grant was a good idea", but we don't have a more formalized process for writing down specific objective operationalizations that we all forecast on.

Jonas Vollmer @ 2020-12-07T11:26 (+7)

I'm personally actually pretty excited about trying to make some quick forecasts for a significant fraction (say, half) of the grants that we actually make, but this is something that's on my list to discuss at some point with the LTFF. I mostly agree with the issues that Habryka mentions, though.

AdamGleave @ 2020-12-05T13:26 (+6)

Do the LTFF fund managers make forecasts about potential outcomes of grants?

To add to Habryka's response: we do give each grant a quantitative score (on -5 to +5, where 0 is zero impact). This obviously isn't as helpful as a detailed probabilistic forecast, but I think it does give a lot of the value. For example, one question I'd like to answer from retrospective evaluation is whether we should be more consensus driven or fund anything that at least one manager is excited about. We could address this by scrutinizing past grants that had a high variance in scores between managers.
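
For concreteness, here's a minimal sketch of what that kind of retrospective check could look like; the grant names and scores below are made up purely for illustration:

```python
# Hypothetical example: per-grant scores from four fund managers on the -5 to +5 scale.
# Grants where managers disagreed most are listed first, as candidates for retrospective review.
from statistics import mean, stdev

scores = {
    "Grant A": [4, 3, 4, 5],
    "Grant B": [-2, 5, 1, 4],
    "Grant C": [0, 1, -1, 0],
}

# Sort by disagreement (standard deviation across managers), highest first.
by_disagreement = sorted(scores.items(), key=lambda kv: stdev(kv[1]), reverse=True)

for grant, s in by_disagreement:
    print(f"{grant}: mean={mean(s):+.2f}, stdev={stdev(s):.2f}")
```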

I think it might make sense to start doing forecasting for some of our larger grants (where we're willing to invest more time), and when the key uncertainties are easy to operationalize.

Cullen_OKeefe @ 2020-12-04T22:44 (+2)

Thank you!

jackmalde @ 2020-12-04T09:50 (+23)

I notice that all but one of the November 2020 grants were given to individuals as opposed to organisations. What is the reason for this?

To clarify I'm certainly not criticising - I guess it makes quite a bit of sense as individuals are less likely than organisations to be able to get funding from elsewhere, so funding them may be better at the margin. However I would still be interested to hear your reasoning.

I notice that the animal welfare fund gave exclusively to organisations rather than individuals in the most recent round. Why do you think there is this difference between LTFF and AWF? 

Habryka @ 2020-12-04T19:28 (+27)

Speaking just for myself on why I tend to prefer the smaller individual grants: 

Currently when I look at the funding landscape, it seems that without the LTFF there would be a pretty big hole in available funding for projects to get off the ground and for individuals to explore interesting new projects or enter new domains. Open Phil very rarely makes grants smaller than ~$300k, and many donors don't really like giving to individuals and early-stage organizations because they often lack established charity status, which makes donations to them non-tax-deductible.

CEA has set up infrastructure to allow tax-deductible grants to individuals and organizations without charity status, and the fund itself seems well-suited to evaluating applications from individuals, since we all have pretty wide networks and can pretty quickly gather good references on individuals who are working on projects that don't yet have an established track record.

I think in a world without Open Phil or the Survival and Flourishing Fund, much more of our funding would go to established organizations. 

Separately, I also personally think that a lot of the intellectual work to be done on the long-term future is quite compatible with independent researchers asking for grants for just themselves, or maybe for small teams around them. This feels kind of similar to how academic funding is often distributed, and I think it makes sense for domains where a lot of people should explore a lot of different directions and where we have set up infrastructure so that researchers and distillers can make contributions without necessarily needing a whole organization around them (which I think the EA Forum enables pretty well).

In addition to both of those points, I also think evaluating organizations requires a somewhat different skillset than evaluating individuals and small team projects, and we are currently better at the second than the first (though I think we would reskill if organizational grants seemed likely to become more important again).

jackmalde @ 2020-12-04T19:59 (+1)

Thanks for this detailed answer. I think that all makes a lot of sense. 

AdamGleave @ 2020-12-04T20:35 (+19)

I largely agree with Habryka's comments above.

In terms of the contrast with the AWF in particular, I think the funding opportunities in the long-termist vs animal welfare spaces look quite different. One big difference is that interest in long-termist causes has exploded in the last decade. As a result, there's a lot of talent interested in the area, but there's limited organisational and mentorship capacity to absorb this talent. By contrast, the animal welfare space is more mature, so there's less need to strike out in an independent direction. While I'm not sure on this, there might also be a cultural factor -- if you're trying to perform advocacy, it seems useful to have an organisation brand behind you (even if it's just a one-person org). This seems much less important if you want to do research.

Tangentially, I see a lot of people debating whether EA is talent constrained, funding constrained, vetting constrained, etc. My view is that most orgs, at least in the AI safety space, can only grow at a relatively small rate (10-30% per year) while still providing adequate mentorship. This is talent constrained in the sense that having a larger applicant pool will help the orgs select even better people. But adding more talent won't necessarily increase the number of hires.

While 10-30% is a relatively small growth rate, if it is sustained then I expect it to eventually outstrip growth in the longtermist talent pipeline: my median guess would be sometime in the next 3-7 years. I see the LTFF's grants to individuals in part trying to bridge the gap while orgs scale up, giving talented people space to continue to develop, and perhaps even found an org. So I'd expect our proportion of individual grants to decline eventually. This is a personal take, though, and I think others on the fund are more excited about independent research on a more long-term basis.

Ozzie Gooen @ 2020-12-09T22:22 (+14)

My view is that most orgs, at least in the AI safety space, can only grow at a relatively small rate (10-30% per year) while still providing adequate mentorship

 

This might be a small point, but while I would agree, I imagine that strategically there are some possible orgs that could grow more quickly and, by growing, could eventually dominate the funding.

I think one thing that's going on is that right now, due to funding constraints, individuals are encouraged to create organizations that are efficient when small, as opposed to efficient when large. I've made this decision myself. Doing the latter would require a fair amount of trust that large funders would later be interested in the org at that scale. Right now it seems like we only have one large funder, which makes things tricky.

AdamGleave @ 2020-12-10T12:48 (+4)

This is a good point, and I do think having multiple large funders would help with this. If the LTFF's budget grew enough I would be very interested in funding scalable interventions, but it doesn't seem like our comparative advantage now.

I do think possible growth rates vary a lot between fields. My hot take is new research fields are particularly hard to grow quickly. The only successful ways I've seen of teaching people how to do research involve apprenticeship-style programs (PhDs, residency programs, learning from a team of more experienced researchers, etc). You can optimize this to allow senior researchers to mentor more people (e.g. lots of peer advice assistants to free up senior staff time, etc), but that seems unlikely to yield more than a 2x increase in growth rate.

Most cases where orgs have scaled up successfully have drawn on a lot of existing talent. Tech startups can grow quickly but they don't teach each new hire how to program from scratch. So I'd love to see scalable ways to get existing researchers to work on priority areas like AI safety, biosecurity, etc.

It can be surprisingly hard to change what researchers work on, though. Researchers tend to be intrinsically motivated, so right now the best way I know is to just do good technical work to show that problems exist (and are tractable to solve), combined with clear communication. Funding can help here a bit: make sure the people doing the good technical work are not funding constrained.

One other approach might be to build better marketing: DeepMind, OpenAI, etc. are great at getting their papers a lot of attention. If we could promote relevant technical work, that might help draw more researchers to these problems. Although a lot of people in academia really hate these companies' self-promotion, so it could backfire if done badly.

The other way to scale up is to get people to skill-up in areas with more scalable mentorship: e.g. just work on any AI research topic for your PhD where you can get good mentorship, then go work at an org doing more impactful work once you graduate. I think this is probably our best bet to absorb most additional junior talent right now. This may beat the 10-30% figure I gave, but we'd still have to wait 3-5 years before the talent comes on tap unfortunately.

Ozzie Gooen @ 2020-12-10T18:03 (+5)

I agree that research organizations of the type that we see are particularly difficult to grow quickly.

My point is that we could theoretically focus more on other kinds of organizations that are more scalable. I could imagine there being more scalable engineering-heavy or marketing-heavy paths to impact on these problems. For example, setting up an engineering/data organization to manage information and metrics about bio risks. These organizations might have rather high upfront costs (and marginal costs), but are ones where I could see investing $10-100mil/year if we wanted. 

Right now it seems like our solution to most problems is "try to solve it with experienced researchers", which seems to be a tool we have a strong comparative advantage in, but not the only tool in the possible toolbox. It is a tool that's very hard to scale, as you note (I know of almost no organizations that have done this well). 
 

Separately,

The other way to scale up is to get people to skill-up in areas with more scalable mentorship: e.g. just work on any AI research topic for your PhD where you can get good mentorship, then go work at an org doing more impactful work once you graduate. I think this is probably our best bet to absorb most additional junior talent right now. This may beat the 10-30% figure I gave, but we'd still have to wait 3-5 years before the talent comes on tap unfortunately.

I just want to flag that I think I agree, but also feel pretty bad about this. I get the impression that for AI many of the grad school programs are decent enough, but for other fields (philosophy, some of econ, bio-related things), grad school can be quite long-winded and demotivating, occasionally the cause of serious long-term psychological problems, and often distracting or actively harmful for alignment. It definitely feels like we should eventually be able to do better, but it might be a while.

abergal @ 2020-12-05T20:54 (+1)

Just want to say I agree with both Habryka's comments and Adam's take that part of what the LTFF is doing is bridging the gap while orgs scale up (and increase in number) and don't have the capacity to absorb talent.

jackmalde @ 2020-12-05T10:19 (+1)

Thanks for this reply, makes a lot of sense!

Jonas Vollmer @ 2020-12-07T11:19 (+2)

I agree with Habryka and Adam.

Regarding the LTFF (Long-Term Future Fund) / AWF (Animal Welfare Fund) comparison in particular, I'd add the following:

  • The global longtermist community is much smaller than the global animal rights community, which means that the animal welfare space has a lot more existing organizations and people trying to start organizations that can be funded.
  • Longtermist cause areas typically involve a lot more research, which often implies funding individual researchers, whereas animal welfare work is typically more implementation-oriented.

jackmalde @ 2020-12-07T17:58 (+1)

Also makes sense, thanks.

Peter_Hurford @ 2020-12-04T02:18 (+21)

What do you think has been the biggest mistake by the LTF fund (at least that you can say publicly)?

Jonas Vollmer @ 2020-12-06T18:47 (+28)

(I’m not a Fund manager, but I’ve previously served as an advisor to the fund and now run EA Funds, which involves advising the Fund.)

In addition to what Adam mentions, two further points come to mind:

1. I personally think some of the April 2019 grants weren’t good, and I thought that some (but not all) of the critiques the LTFF received from the community were correct. (I can’t get more specific here – I don’t want to make negative public statements about specific grants, as this might have negative consequences for grant recipients.) The LTFF has since implemented many improvements that I think will prevent such mistakes from occurring again.

2. I think we could have communicated better around conflicts of interest. I know of some 2019 grants that donors perceived to be subject to a conflict of interest, but where there actually wasn’t one, or where it was dealt with appropriately. (I can also recall one case where I think a conflict of interest may not have been dealt with well, but our improved policies and practices will prevent a similar potential issue from occurring again.) I think we’re now dealing appropriately with COIs (not in the sense that we refrain from any grants with a potential COI, but that we have appropriate safeguards in place that prevent the COI from impairing the decision). I would like to publish an updated policy once I get to it.

AdamGleave @ 2020-12-05T13:20 (+21)

Historically I think the LTFF's biggest issue has been insufficiently clear messaging, especially for new donors. For example, we received feedback from numerous donors in our recent survey that they were disappointed we weren't funding interventions on climate change. We've received similar feedback from donors surprised by the number of AI-related grants we make. Regardless of whether or not the fund should change the balance of cause areas we fund, it's important that donors have clear expectations regarding how their money will be used.

We've edited the fund page to make our focus areas more explicit, and EA Funds also added the Founders Pledge Climate Change Fund for donors who want to focus on that area (and Jonas emailed donors who made this complaint, encouraging them to switch their donations to the climate change fund). I hope this will help clarify things, but we'll have to be attentive to donor feedback, both via things like this AMA and our donor survey, so that we can proactively correct any misconceptions.

Another issue I think we have is that we currently lack the capacity to be more proactively engaged with our grantees. I'd like us to do this for around 10% of our grant applications, particularly those where we are a large proportion of an organisation's budget. In these cases it's particularly important that we hold the organisation accountable, and provide strategic advice. In around a third of these cases, we've chosen not to make the grant because we feel unexcited about the organisation's current direction, even though we think it could be a good donation opportunity for a more proactive philanthropist. We're looking to grow our capacity, so we can hopefully pursue more active philanthropy in the future.

AnonymousEAForumAccount @ 2020-12-09T13:25 (+7)

Historically I think the LTFF's biggest issue has been insufficiently clear messaging, especially for new donors. For example, we received feedback from numerous donors in our recent survey that they were disappointed we weren't funding interventions on climate change. We've received similar feedback from donors surprised by the number of AI-related grants we make. Regardless of whether or not the fund should change the balance of cause areas we fund, it's important that donors have clear expectations regarding how their money will be used.

We've edited the fund page to make our focus areas more explicit 

 

I agree unclear messaging has been a big problem for the LTFF, and I’m glad to see the EA Funds team being responsive to feedback around this. However, the updated messaging on the fund page still looks extremely unclear and I’m surprised you think it will clear up the misunderstandings donors have.

It would probably clear up most of the confusion if donors saw the clear articulation of the LTFF’s historical and forward-looking priorities that is already on the fund page (emphasis added):

“While the Long-Term Future Fund is open to funding organizations that seek to reduce any type of global catastrophic risk — including risks from extreme climate change, nuclear war, and pandemics — grants so far have prioritized projects addressing risks posed by artificial intelligence, and the grantmakers expect to continue this at least in the short term.” 

The problem is that this text is buried in the 6th subsection of the 6th section of the page. So people have to read through ~1500 words, the equivalent of three single spaced typed pages, to get an accurate description of how the fund is managed. This information should be in the first paragraph (and I believe that was the case at one point).

Compounding this problem, aside from that one sentence the fund page (even after it has been edited for clarity) makes it sound like AI and pandemics are prioritized similarly, and not that far above other LT cause areas. I believe the LTFF has only made a few grants related to pandemics, and would guess that AI has received at least 10 times as much funding. (Aside: it’s frustrating that there’s not an easy way to see all grants categorized in a spreadsheet so that I could pull the actual numbers without going through each grant report and hand entering and classifying each grant.)

In addition to clearly communicating that the fund prioritizes AI, I would like to see the fund page (and other communications) explain why that’s the case. What are the main arguments informing the decision? Did the fund managers decide this? Did whoever selected the fund managers (almost all of whom have AI backgrounds) decide this? Under what conditions would the LTFF team expect this prioritization to change? The LTFF has done a fantastic job providing transparency into the rationale behind specific grants, and I hope going forward there will be similar transparency around higher-level prioritization decisions.

Jonas Vollmer @ 2020-12-10T09:54 (+28)

The very first sentence on that page reads (emphasis mine):

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics.

I personally think that's quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn't mention pandemics in that sentence? Perhaps you think "especially" is not strong enough?

An important reason why we don't make more grants to prevent pandemics is that we get only a few applications in that area. The page serves a dual purpose: it informs both applicants and donors. Emphasizing pandemics less could be good for donor transparency, but might further reduce the number of biorisk-related applications we receive. As Adam mentions here, he’s equally excited about AI safety and biosecurity at the margins, and I personally mostly agree with him on this.

Here's a spreadsheet with all EA Funds grants (though without categorization). I agree a proper grants database would be good to set up at some point; I have now added this to my list of things we might work on in 2021.

We prioritize AI roughly for the reasons that have been elaborated on at length by others in the EA community (see, e.g., Open Phil's report), plus additional considerations regarding our comparative advantage. I agree it would be good to provide more transparency regarding high-level prioritization decisions; I personally would find it a good idea if each Fund communicated its overall strategy for the next two years, though this takes a lot of time. I hope we will have the resources to do this sometime soon.

AnonymousEAForumAccount @ 2020-12-10T15:30 (+7)

I personally think that's quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn't mention pandemics in that sentence? Perhaps you think "especially" is not strong enough?

I don’t think it’s appropriate to discuss pandemics in that first sentence. You’re saying the fund makes grants that “especially” address pandemics, and that doesn’t seem accurate. I looked at your spreadsheet (thank you!) and tried to do a quick classification. As best I can tell, AI has gotten over half the money the LTFF has granted, ~19x the amount granted to pandemics (5 grants for $114,000). Forecasting projects have received 2.5x as much money as pandemics, and rationality training has received >4x as much money. So historically, pandemics aren’t even that high among non-AI priorities. 

If pandemics will be on equal footing with AI going forward, then that first sentence would be okay. But if that’s the plan, why is the management team skillset so heavily tilted toward AI?

An important reason why we don't make more grants to prevent pandemics is that we only get few applications in that area. The page serves a dual purpose: it informs both applicants and donors. Emphasizing pandemics less could be good for donor transparency, but might further reduce the number of biorisk-related applications we receive. As Adam mentions here, he’s equally excited about AI safety and biosecurity at the margins, and I personally mostly agree with him on this.

I’m glad there’s interest in funding more biosecurity work going forward. I’m pretty skeptical that relying on applications is an effective way to source biosecurity proposals though, since relatively few EAs work in that area (at least compared to AI) and big biosecurity funding opportunities (like Open Phil grantees Johns Hopkins Center for Health Security and Blue Ribbon Study Panel on Biodefense) probably aren’t going to be applying for LTFF grants. 

Regarding the page’s dual purpose, I’d say informing donors is much more important than informing applicants: it’s a bad look to misinform people who are investing money based on your information. 

We prioritize AI roughly for the reasons that have been elaborated on at length by others in the EA community (see, e.g., Open Phil's report), plus additional considerations regarding our comparative advantage. I agree it would be good to provide more transparency regarding high-level prioritization decisions; I personally would find it a good idea if each Fund communicated its overall strategy for the next two years, though this takes a lot of time. I hope we will have the resources to do this sometime soon.

There’s been plenty of discussion (including that Open Phil report) on why AI is a priority, but there’s been very little explicit discussion of why AI should be prioritized relative to other causes like biosecurity. 

Open Phil prioritizes both AI and biosecurity. For every dollar Open Phil has spent on biosecurity, it’s spent ~$1.50 on AI. If the LTFF had a similar proportion, I’d say the fund page’s messaging would be fine. But for every dollar LTFF has spent on biosecurity, it’s spent ~$19 on AI. That degree of concentration warrants an explicit explanation, and shouldn’t be obscured by the fund’s messaging.

Jonas Vollmer @ 2020-12-10T18:18 (+7)

Thanks, I appreciate the detailed response, and agree with many of the points you made. I don't have the time to engage much more (and can't share everything), but we're working on improving several of these things.

AnonymousEAForumAccount @ 2020-12-11T14:10 (+12)

Thanks Jonas, glad to hear there are some related improvements in the works. For whatever it’s worth, here’s an example of messaging that I think accurately captures what the fund has done, what it’s likely to do in the near term, and what it would ideally like to do:

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks or promote the adoption of longtermist thinking. While many grants so far have prioritized projects addressing risks posed by artificial intelligence (and the grantmakers expect to continue this at least in the short term), the Fund is open to funding, and welcomes applications from, a broader range of activities related to the long-term future.

Jonas Vollmer @ 2020-12-11T19:19 (+2)

Thanks!

jackmalde @ 2020-12-10T13:35 (+1)

I personally think that's quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn't mention pandemics in that sentence? Perhaps you think "especially" is not strong enough?

I agree with you that that's pretty clear. Perhaps you could just have another sentence explaining that most grants historically have been AI-related because that's where you receive most of your applications?

On another note, I can't help but feel that "Global Catastrophic Risk Fund" would be a better name than "Long-term Future Fund". This is because there are other ways to improve the long-term trajectory of civilisation than by mitigating global catastrophic risks. Also, if you were to make this change, it may help distinguish the fund from the long-term investment fund that Founders Pledge may set up.

Jonas Vollmer @ 2020-12-10T18:24 (+6)

Some of the LTFF grants (forecasting, long-term institutions, etc.) are broader than GCRs, and my guess is that at least some Fund managers are pretty excited about trajectory changes, so I'd personally think the current name seems more accurate.

jackmalde @ 2020-12-10T18:56 (+1)

Ah OK. The description below does make it sound like it's only global catastrophic risks.

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics.

Perhaps include the word 'predominantly' before the word "making"?

Jonas Vollmer @ 2020-12-11T10:44 (+2)

The second sentence on that page (i.e. the sentence right after this one) reads:

In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.

"Predominantly" would seem redundant with "in addition", so I'd prefer leaving it as-is.

jackmalde @ 2020-12-11T11:12 (+4)

OK sorry this is just me not doing my homework! That all seems reasonable.

Linch @ 2020-12-10T08:09 (+3)

Compounding this problem, aside from that one sentence the fund page (even after it has been edited for clarity) makes it sound like AI and pandemics are prioritized similarly, and not that far above other LT cause areas. I believe the LTFF has only made a few grants related to pandemics, and would guess that AI has received at least 10 times as much funding.

Adam has mentioned elsewhere here that he would prefer to make more biosecurity grants. An interesting question here is how much the messaging should be descriptive of past donations, vs. aspirational about where they want to donate more in the future.
 

AnonymousEAForumAccount @ 2020-12-10T15:35 (+1)

Good point! I'd say ideally the messaging should describe both forward- and backward-looking donations, and, if they differ, why. I don't think this needs to be particularly lengthy; a few sentences could do it.

Habryka @ 2020-12-05T21:46 (+5)

I agree that both of these are among our biggest mistakes.

Linch @ 2021-01-26T03:17 (+19)

(Not sure if this is the best place to ask this. I know the Q&A is over, but on balance I think it's better for EA discourse for me to ask this question publicly rather than privately, to see if others concur with this analysis, or if I'm trivially wrong for boring reasons and thus don't need a response). 
 

Open Phil's Grantmaking Approaches and Process has the 50/40/10 rule, where (in my mediocre summarization) 50% of a grantmaker's grants have to have the core stakeholders (Holden Karnofsky from Open Phil and Cari Tuna from Good Ventures) on board, 40% have to be grants where Holden and Cari are not clearly on board, but can imagine being on board if they knew more, and up to 10% can be more "discretionary."
 
Reading between the lines, this suggests that up to 10% of funding from Open Phil will go to places Holden Karnofsky and Cari Tuna are not inside-view excited about, because they trust the grantmakers' judgements enough. 
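To make my (possibly lossy) summary concrete, here's a toy sketch of the rule as a constraint on how a pool of funding is split across the three buckets. The bucket names, exact checks, and example numbers are illustrative assumptions, not Open Phil's actual policy or implementation.

```python
# Toy sketch of a 50/40/10-style rule as summarized above. The buckets,
# thresholds, and example numbers are illustrative assumptions, not
# Open Phil's actual policy or implementation.

def within_50_40_10(on_board: float, maybe_on_board: float, discretionary: float) -> bool:
    """Dollar amounts granted in each bucket; returns True if the split
    respects a >=50% / <=40% / <=10% share structure."""
    total = on_board + maybe_on_board + discretionary
    return (
        on_board / total >= 0.50
        and maybe_on_board / total <= 0.40
        and discretionary / total <= 0.10
    )

print(within_50_40_10(600_000, 300_000, 100_000))  # True: 60% / 30% / 10%
print(within_50_40_10(400_000, 400_000, 200_000))  # False: under 50% on board, over 10% discretionary
```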

Is there a similar (explicit or implicit) process at LTFF?

I ask because 

AdamGleave @ 2021-01-26T20:35 (+10)

This is an important question. It seems like there's an implicit assumption here that the highest-impact path for the fund is to make grants that the fund managers' inside view considers highest impact, regardless of whether we can explain the grant. This is a reasonable position -- and thank you for your confidence! -- however, I think the fund being legible does have some significant advantages:

  1. Accountability generally seems to improve how organisations function. It'd be surprising if the LTFF were a complete exception to this, and legibility seems necessary for accountability.
  2. There's asymmetric information between us and donors, so less legibility will tend to mean fewer donations (and I think this is reasonable). So, there's a tradeoff between greater counterfactual impact from scale and greater impact per $ moved.
  3. There may be community-building value in having a fund that is attractive to people without deep context or trust in the fund managers.

I'm not sure what the right balance of legibility vs inside view is for the LTFF. One possibility would be to split into a more inside view / trust-based fund, and a more legible and "safer" fund. Then donors can choose what kind of worldview they want to buy into.

That said, personally I don't feel like I vote any differently with LTFF money vs. my own donations. The main difference would be that I am much more cautious about conflicts of interest with LTFF money than with my personal money, but I don't think I'd want to change that. However, I do think I tend to have a more conservative taste in grants than some others in the long-termist community.

One thing to flag is that we do occasionally (with applicant's permission) make recommendations to private donors rather than providing funding directly from the LTFF. This is often for logistical reasons, if something is tricky for CEA to fund, but it's also an option if a grant requires a lot of context to understand (which we can provide to an individual highly engaged donor, but not in a brief public write-up). I think this further decreases the number of grant decisions that are influenced by any legibility considerations.

MaxRa @ 2021-01-27T13:30 (+8)

Re: Accountability

I'm not very familiar with the funds, but wouldn't retrospective evaluations like Linch's be more useful than legible reasoning? I feel like grantees, and institutions like EA Funds with sufficiently long horizons, want to stay trusted actors in the longer run and so are sufficiently motivated to justify being trusted with some more inside-view decisions.

  • trust from donors can still be gained by explaining a meaningful fraction of decisions
  • less legible bets may have higher EV
  • I imagine funders will always be able to meaningfully explain at least some factors that informed them, even if some factors are hard to communicate
  • some donors may still not trust judgement sufficiently
  • maybe funded projects have measurable outcomes only far in the future (though probably there are useful proxies on the way)
  • evaluation of funded projects takes effort (but I imagine you want to do this anyway)

Habryka @ 2021-01-26T21:25 (+3)

So, there's a tradeoff between

(Looks like this sentence got cut off in the middle) 

AdamGleave @ 2021-01-27T13:43 (+3)

Thanks, fixed.

Linch @ 2021-02-12T23:33 (+2)

there's an implicit assumption here that the highest-impact path for the fund is to make grants that the fund managers' inside view considers highest impact

To be clear, I think this is not my all-things-considered position. Rather, I think this is a fairly significant possibility, and I'd favor an analogue of Open Phil's 50/40/10 rule (or something a little more aggressive) over, e.g., whatever the socially mediated equivalent of full discretionary control by the specific funders would be.

I'm not sure what the right balance of legibility vs inside view is for the LTFF. One possibility would be to split into a more inside view / trust-based fund, and a more legible and "safer" fund

This seems like a fine compromise that I'm in the abstract excited about, though of course it depends a lot on implementation details.

One thing to flag is that we do occasionally (with applicant's permission) make recommendations to private donors rather than providing funding directly from the LTFF[..] an option if a grant requires a lot of context to understand (which we can provide to an individual highly engaged donor, but not in a brief public write-up). I think this further decreases the number of grant decisions that are influenced by any legibility considerations.

This is really good to hear!

Habryka @ 2021-01-26T03:34 (+6)

I do indeed think there has been a pressure towards lower-risk grants, am not very happy about it, and think it reduced the expected value of the fund by a lot. I am reasonably optimistic about that changing again in the future, but it's one of the reasons why I've become somewhat less engaged with the fund. In particular, Alex Zhu leaving the fund was, I think, a really great loss on this dimension.

Jonas Vollmer @ 2021-01-27T11:05 (+2)

I think you, Adam, and Oli covered a lot of the relevant points.

I'd add that the LTFF's decision-making is based on the average score vote from the different fund managers, which allows grants to go through in scenarios where one person is very excited, and the others aren't very excited or are even somewhat against the grant. I.e., the mechanism allows an excited minority to make a grant that wouldn't be approved by the majority of the committee. Overall, the mechanism strikes me as near-optimal. (Perhaps we should lower the threshold for making grants a bit further.)
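As a toy illustration of how averaging enables this (a sketch only: the score scale and example votes are hypothetical, and ~2.5 is the funding threshold Adam mentions elsewhere in this thread):

```python
# Minimal sketch of average-score voting. The score scale and the example
# votes are hypothetical; ~2.5 is the funding threshold mentioned elsewhere
# in this thread.

FUNDING_THRESHOLD = 2.5

def approved(votes: list[float], threshold: float = FUNDING_THRESHOLD) -> bool:
    """A grant goes through if the mean of the fund managers' scores
    clears the threshold."""
    return sum(votes) / len(votes) >= threshold

# One very excited manager can carry a grant past the threshold even though
# the rest of the committee is lukewarm; a simple majority vote would likely
# reject the same grant.
print(approved([5.0, 2.0, 2.0, 1.5]))  # mean = 2.625 -> True
```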

I do think the LTFF might be slightly too risk-averse, and splitting the LTFF into a "legible longtermist fund" and a "judgment-driven longtermist fund", to remove the pressure from donors towards more legible grantmaking, seems like a good idea and is tentatively on the roadmap.

G Gordon Worley III @ 2020-12-03T21:57 (+18)

How much room for additional funding does LTF have? Do you have an estimate of how much money you could take on and still achieve your same ROI on the marginal dollar donated?

abergal @ 2020-12-04T18:27 (+24)

Really good question! 

We currently have ~$315K in the fund balance.* My personal median guess is that we could use $2M over the next year while maintaining this year's bar for funding. This would be:

  • $1.7M more than our current balance
  • $500K more per year than we’ve spent in previous years
  • $800K more than the total amount of donations received in 2020 so far
  • $400K more than a naive guess for what the total amount of donations received will be in all of 2020. (That is, if we wanted a year of donations to pay for a year of funding, we would need  $400K more in donations next year than what we got this year.)

Reasoning below:

Generally, we fund anything above a certain bar, without accounting explicitly for the amount of money we have. According to this policy, for the last two years, the fund has given out ~$1.5M per year, or ~$500K per grant round, and has not accumulated a significant buffer. 

This round had an unusually large number of high-quality applicants. We spent $500K, but we pushed two large grant decisions to our next payout round, and several of our applicants happened to receive money from another source just before we communicated our funding decision. This makes me think that if this increase in high-quality applicants persists, it would be reasonable to have $600K - $700K per grant round, for a total of ~$2M over the next year.
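As a rough arithmetic sketch of how these figures fit together (the three-rounds-per-year figure is inferred from ~$1.5M/year at ~$500K per round; the other numbers are quoted above):

```python
# Back-of-the-envelope check of the budget figures above. "Three rounds per
# year" is inferred from ~$1.5M/year at ~$500K per round; the other numbers
# are quoted in this comment.

current_balance = 315_000                  # ~$315K currently in the fund
rounds_per_year = 3                        # inferred from past spending
per_round = (600_000 + 700_000) / 2        # midpoint of the $600K-$700K guess

yearly_need = rounds_per_year * per_round  # ~$1.95M, i.e. roughly the $2M median guess
print(f"~${yearly_need:,.0f} per year; ~${2_000_000 - current_balance:,.0f} above the current balance")
```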

My personal guess is that the increase in high-quality applications will persist, and I'm somewhat hopeful that we will get even more high-quality applications via a combination of outreach and potentially some active grantmaking. This makes me think that we could use $2M over the next year without going below the ROI of the last marginal dollar we granted this year, though I'm not certain. (Of the two other fund managers who have made quantitative guesses on this so far, one fund manager also had $2M as their median guess, while another thought slightly above $1.5M was more likely.)

I also think there’s a reasonable case for having slightly more than our median guess available in the fund. This would both act as a buffer in case we end up with more grants above our current bar than expected, and would let us proactively encourage potential grantees to apply for funding without being worried that we’ll run out of money.

If we got much more money than applications that meet our current bar, we would let donors know. I think we would also consider lowering our bar for funding, though this would only happen after checking in with the largest donors.

* This is less than the amount displayed in our fund page, which is still being updated with our latest payouts.

Ozzie Gooen @ 2020-12-04T16:30 (+17)

Do you have a vision for what the Long-Term Future Fund will look like in 3 to 10 years? Do you expect it to stay mostly the same (perhaps with more revenue), or to undergo large structural changes?

Jonas Vollmer @ 2020-12-07T19:00 (+19)

As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds, and I’ve also been thinking about the longer-term strategy for EA Funds as a whole.

Some thoughts on this question:

  • LTFF strategy: There is no official 3-10 year vision or strategy for the LTFF yet, but I hope we will get there sometime soon. My own best guess for the LTFF’s vision (which I haven’t yet discussed with the LTFF) is: ‘Thoughtful people have the resources they need to successfully implement highly impactful projects to improve the long-term future.’ My best guess for the LTFF’s mission/strategy is ‘make judgment-driven grants to individuals and small organizations and proactively seed new longtermist projects.’ A plausible goal could be to allocate $15 million per year to effective longtermist projects by 2025 (where ‘effective’ means something like ‘significantly better than Open Phil’s last dollar, similar to the current quality of grants’).
  • Grantmaking capacity: To get there, we need 1) more grantmaking capacity (especially for active grantmaking), 2) more ideas that would be impactful if implemented well, and 3) more people capable of implementing these ideas. EA Funds can primarily improve the first factor, and I think this is the main limiting factor right now (though this could change within a few months). I am currently implementing the first iteration of a fund manager appointment process, where we invite potential grantmakers to apply as Fund managers, and are also considering hiring a full-time grantmaking specialist. Hopefully, this will allow the LTFF to increase the number of grants it can evaluate, and its active grantmaking capacity in particular.
  • Types of grants: Areas in which I expect the LTFF to be able to substantially expand its current grantmaking include academic teaching buy-outs, scholarships and top-up funding for poorly paid academics, research assistants for academics, and proactively seeding new longtermist organizations and research projects (active grantmaking).
  • Structural changes: I think having multiple fund managers on a committee rather than a single decision-maker leads to improved diversity of networks and opinions, and increased robustness in decision-making. Increasing the number of committee members on a single committee leads to disproportionately larger coordination overhead, so the way to scale this might be to create multiple committees. I also think a committee model would benefit from having one or more full-time staff who can dedicate their full attention to EA Funds or the LTFF and collaborate with a committee of part-time/volunteer grantmakers, so I may want to look into hiring for such positions.
  • Legible longtermist fund: Donating to the LTFF currently requires a lot of trust in the Fund managers because many of the grants are speculative and hard to understand for people less involved in EA. While I think the current LTFF grants are plausibly the most effective use of longtermist funding, there is significant donor demand for a more legible longtermist donation option (i.e., one that isn't subject to massive information asymmetry and thus doesn't rely on trust as much). This may speak in favor of setting up a second, more 'mainstream' long-term future fund. That fund might give to most longtermist institutes and would have a lot of fungibility with Open Phil's funding, but would likely be a better way to introduce interested donors to longtermism.
  • Perhaps EA Funds shouldn’t focus on grantmaking as much: At a higher level, I’m not sure whether EA Funds’ strategy should be to build a grantmaking organization, or to become the #1 website on the internet for giving effectively, or something else. Regarding the LTFF and longtermism in particular, Open Phil has expanded its activities, Survival And Flourishing (SAF) has launched, and other donors and grantmakers (such as Longview Philanthropy) continue to be active in the area to some degree, which means that effective projects may get funded even if the LTFF doesn’t expand its grantmaking. It’s pretty plausible to me that EA Funds should pursue a strategy that’s less focused on grantmaking than what I wrote in the above paragraphs, which would mean that I might not dedicate as much attention to expanding the LTFF in the ways suggested above. I’m still thinking about this; the decision will likely depend on external feedback and experiments (e.g., how quickly we can make successful active grants).

If anyone has any feedback, thoughts, or questions about the above, I’d be interested in hearing from you (here or via PM).

Neel Nanda @ 2020-12-28T20:55 (+15)

Perhaps EA Funds shouldn’t focus on grantmaking as much: At a higher level, I’m not sure whether EA Funds’ strategy should be to build a grantmaking organization, or to become the #1 website on the internet for giving effectively, or something else

 

I found this point interesting, and have a vague intuition that EA Funds (and especially the LTFF) are really trying to do two different things:

  1. Have a default place for highly engaged EAs to donate: one that is willing to take on large risks, fund things that seem weird, and rely heavily on social connections, the community, and grantmaker intuitions
  2. Have a default place to donate for risk-neutral donors who feel value-aligned with EA but don't necessarily have high trust in the community

Having something doing (1) seems really valuable, and I would feel sad if the LTFF reined back the kinds of things it funded to have a better public image. But I also notice that, e.g., when giving donation advice to friends who broadly agree with EA ideas but aren't really part of the community, I don't feel comfortable recommending EA Funds. And I think a bunch of the grants would seem weird to anyone with moderately skeptical priors. (This is partially an opinion formed from the April 2019 grants, and I feel this less strongly for more recent grants).

And it would be great to have a good, default place to recommend my longtermist friends donate to, analogous to being able to point people to GiveWell top charities.

The obvious solution to this is to have two separate institutions, trying to do these two different things? But I'm not sure how workable that is here (and I'm not sure what a 'longtermist fund that tries to be legible and public-facing, but without OpenPhil scale of money' would actually look like!)

MichaelA @ 2021-03-25T04:42 (+2)

This sounds right to me. 

The obvious solution to this is to have two separate institutions, trying to do these two different things?

Do you mean this as distinct from Jonas's suggestion of:

setting up a second, more 'mainstream' long-term future fund. That fund might give to most longtermist institutes and would have a lot of fungibility with Open Phil's funding, but would likely be a better way to introduce interested donors to longtermism.

It seems to me that that could address this issue well. But maybe you think the other institution should have a more different structure or be totally separate from EA Funds?

But I'm not sure how workable that is here (and I'm not sure what a 'longtermist fund that tries to be legible and public-facing, but without OpenPhil scale of money' would actually look like!)

FWIW, my initial reaction is "Seems like it should be very workable? Just mostly donate to organisations that have relatively easy to understand theories of change, have already developed a track record, and/or have mainstream signals of credibility or prestige (e.g. affiliations with impressive universities). E.g., Center for Health Security, FHI, GPI, maybe CSET, maybe 80,000 Hours, maybe specific programs from prominent non-EA think tanks." 

Do you think this is harder than I'm imagining? Or maybe that the ideal would be to give to different types of things?

Neel Nanda @ 2021-03-25T14:57 (+1)

Do you mean this as distinct from Jonas's suggestion of:

Nah, I think Jonas' suggestion would be a good implementation of what I'm suggesting. Though as part of this, I'd want the LTFF to be less public facing and obvious - if someone googled 'effective altruism longtermism donate' I'd want them to be pointed to this new fund.

Hmm, I agree that a version of this fund could be implemented pretty easily - eg just make a list of the top 10 longtermist orgs and give 10% to each. My main concern is that it seems easy to do in a fairly disingenuous and manipulative way, if we expect all of its money to just funge against OpenPhil. And I'm not sure how to do it well and ethically.

Jonas Vollmer @ 2021-03-25T16:33 (+2)

Yeah, we could simply explain transparently that it would funge with Open Phil's longtermist budget.

gavintaylor @ 2020-12-04T13:39 (+17)

Are there any areas covered by the fund's scope where you'd like to receive more applications?

abergal @ 2020-12-06T22:44 (+20)

I’d overall like to see more work that has a solid longtermist justification but isn't as close to existing longtermist work. It seems like the LTFF might be well-placed to encourage this, since we provide funding outside of established orgs. This round, we received many applications from people who weren’t very engaged with the existing longtermist community. While these didn’t end up meeting our bar, some of the projects were fairly novel and good enough to make me excited about funding people like this in general.

There are also lots of particular less-established directions where I’d personally be interested in seeing more work, e.g.:

  • Work on structured transparency tools for detecting risks from rogue actors
  • Work on information security’s effect on AI development
  • Work on the offense-defense balance in a world with many advanced AI systems
  • Work on the likelihood and moral value of extraterrestrial life
  • Work on increasing institutional competence, particularly around existential risk mitigation
  • Work on effectively spreading longtermist values outside of traditional movement-building

These are largely a reflection of what I happen to have been thinking about recently and definitely not my fully-endorsed answer to this question-- I’d like to spend time talking to others and coming to more stable conclusions about specific work the LTFF should encourage more of.

AdamGleave @ 2020-12-06T12:03 (+18)

These are very much a personal take, I'm not sure if others on the fund would agree.

  1. Buying extra time for people already doing great work. A lot of high-impact careers pay pretty badly: many academic roles (especially outside the US), some non-profit and think-tank work, etc. There's certainly diminishing returns to money, and I don't want the long-termist community to engage in zero-sum consumption of Veblen goods. But there's also plenty of things that are solid investments in your productivity, like having a comfortable home office, a modern computer, ordering takeaway or having cleaners, enough runway to not have financial insecurity, etc.

    Financial needs also vary a fair bit from person to person. I know some people who are productive and happy living off Soylent and working on a laptop on their bed, whereas I'd quickly burn out doing that. Others might have higher needs than me, e.g. if they have financial dependents.

    As a general rule, if I'd be happy to fund someone for $Y/year if they were doing this work by themselves, and they're getting paid $X/year by their employer to do this work, I think I should be happy to pay the difference $(Y-X)/year provided the applicant has a good plan for what to do with the money. If you think you might benefit from more money, I'd encourage you to apply. Even if you don't think you'll get it: a lot of people underestimate how much their time is worth.

  2. Biosecurity. At the margins I'm about equally excited by biosecurity as I am about mitigating AI risks, largely because biosecurity currently seems much more neglected from a long-termist perspective. Yet the fund makes many more grants in the AI risk space.

    We have received a reasonable number of biosecurity applications in recent rounds (though we still receive substantially more for AI), but our acceptance rate has been relatively low. I'd be particularly excited about seeing applications with a relatively clear path to impact. Many of our applications have been for generally trying to raise awareness, and I think getting the details right is really crucial here: targeting the right community, having enough context and experience to understand what that community would benefit from hearing, etc.

sbehmer @ 2020-12-06T03:12 (+14)

What is the LTFF's position on whether we're currently at an extremely influential time for direct work? I saw that there was a recent grant on research into patient philanthropy, but most of the grants seem to be made from the perspective of someone who thinks that we are at "the hinge of history". Is that true?

Habryka @ 2020-12-06T04:44 (+13)

At least for me the answer is yes, I think the arguments for the hinge of history are pretty compelling, and I have not seen any compelling counterarguments. I think the comments on Will's post (which is the only post I know arguing against the hinge of history hypothesis) are basically correct and remove almost all basis I can see for Will's arguments. See also Buck's post on the same topic.

abergal @ 2020-12-07T03:50 (+10)

I think this century is likely to be extremely influential, but there's likely important direct work to do at many parts of this century.  Both patient philanthropy projects we funded have relevance to that timescale-- I'd like to know about how best to allocate longtermist resources between direct work, investment, and movement-building over the coming years, and I'm interested in how philanthropic institutions might change.

I also think it's worth spending some resources thinking about scenarios where this century isn't extremely influential.

Jonas Vollmer @ 2020-12-07T11:39 (+7)

Whether we are at the "hinge of history" is a matter of degree; different moments in history have different degrees of influentialness. I personally think the current moment is likely very influential, such that I want to spend a significant fraction of the resources we have now, and I think on the current margin we should probably be spending more. I think this could change over the coming years, though.

Peter_Hurford @ 2020-12-04T02:17 (+13)

What are you not excited to fund?

AdamGleave @ 2020-12-05T18:07 (+29)

Of course there's lots of things we would not want to (or cannot) fund, so I'll focus on things which I would not want to fund, but which someone reading this might have been interested in supporting or applying for.

  1. Organisations or individuals seeking influence, unless they have a clear plan for how to use that influence to improve the long-term future, or I have an exceptionally high level of trust in them

    This comes up surprisingly often. A lot of think-tanks and academic centers fall into this trap by default. A major way in which non-profits sustain themselves is by dealing in prestige: universities selling naming rights being a canonical example. It's also pretty easy to justify to oneself: of course you have to make this one sacrifice of your principles, so you can do more good later, etc.

    I'm torn on this because gaining leverage can be a good strategy, and indeed it seems hard to see how we'll solve some major problems without individuals or organisations pursuing this. So I wouldn't necessarily discourage people from pursuing this path, though you might want to think hard about whether you'll be able to avoid value drift. But there's a big information asymmetry as a donor: if someone is seeking support for something that isn't directly useful now, with the promise of doing something useful later, it's hard to know if they'll follow through on that.

  2. Movement building that increases quantity but reduces quality or diversity. The initial composition of a community has a big effect on its long-term composition: people tend to recruit people like themselves. The long-termist community is still relatively small, so we can have a substantial effect on the current (and therefore long-term) composition now.

    So when I consider whether to fund a movement-building intervention, I don't just ask if it'll attract enough good people to be worth the cost, but also whether the intervention is sufficiently targeted. This is a bit counterintuitive, and certainly in the past (e.g. when I was running student groups) I tended to assume that bigger was always better.

    That said, the details really matter here. For example, AI risk is already in the public consciousness, but most people have only been exposed to terrible low-quality articles about it. So I like Robert Miles' YouTube channel, since it's a vastly better explanation of AI risk than most people will have come across. I still think most of the value will come from a small percentage of people who seriously engage with it, but I expect it to be positive or at least neutral for the vast majority of viewers.

Habryka @ 2020-12-05T21:48 (+6)

I agree that both of these are among the top 5 things that I've encountered that make me unexcited about a grant.

abergal @ 2020-12-05T22:04 (+18)

Like Adam, I’ll focus on things that someone reading this might be interested in supporting or applying for. I want to emphasize that this is my personal take, not representing the whole fund, and I would be sad if this response stopped anyone from applying -- there’s a lot of healthy disagreement within the fund, and we fund lots of things where at least one person thinks it’s below our bar. I also think a well-justified application could definitely change my mind.

  1. Improving science or technology, unless there’s a strong case that the improvement would differentially benefit existential risk mitigation (or some other aspect of our long-term trajectory). As Ben Todd explains here, I think this is unlikely to be as highly-leveraged for improving the long-term future as trajectory changing efforts. I don’t think there’s a strong case that generally speeding up economic growth is an effective existential risk intervention.
  2. Climate change mitigation. From the evidence I’ve seen, I think climate change is unlikely to be either directly existentially threatening or a particularly highly-leveraged existential risk factor. (It’s also not very neglected.) But I could be excited about funding research work that changed my mind about this.
  3. Most self-improvement / community-member-improvement type work, e.g. “I want to create materials to help longtermists think better about their personal problems.” I’m not universally unexcited about funding this, and there are people who I think do good work like this, but my overall prior is that proposals here won’t be very good.

    I am also unexcited about the things Adam wrote.

Jonas Vollmer @ 2020-12-07T18:47 (+6)

(I drafted this comment earlier and feel like it's largely redundant by now, but I thought I might as well post it.)

I agree with what Adam and Asya said. I think many of those points can be summarized as ‘there isn’t a compelling theory of change for this project to result in improvements in the long-term future.’ 

Many applicants have great credentials, impressive connections, and a track record of getting things done, but their ideas and plans seem optimized for some goal other than improving the long-term future, and it would be a suspicious convergence if they were excellent for the long-term future as well. (If grantseekers don’t try to make the case for this in their application, I try to find out myself if this is the case, and the answer is usually ‘no.’) 

We’ve received applications from policy projects, experienced professionals, and professors (including one with tens of thousands of citations), but ended up declining largely for this reason. It’s worth noting that these applications aren’t bad – often, they’re excellent – but they’re only tangentially related to what the LTFF is trying to achieve.

Peter_Hurford @ 2020-12-04T02:17 (+11)

What are you excited to fund?

AmritSidhu-Brar @ 2020-12-04T10:41 (+16)

A related question: are there categories of things you'd be excited to fund, but haven't received any applications for so far?

AdamGleave @ 2020-12-07T09:27 (+26)

I think the long-termist and EA communities seem too narrow on several important dimensions:

  • Methodologically there are several relevant approaches that seem poorly represented in the community. A concrete example would be having more people with a history background, which seems critical for understanding long-term trends. In general I think we could do better interfacing with the social sciences and other intellectual movements.

    I do think there are challenges here. Most fields are not designed to answer long-term questions. For example, history is often taught by focusing on particular periods, whereas we are more interested in trends that persist across many periods. So the first people joining from a particular field are going to need to figure out how to adapt their methodology to the unique demands of long-termism.

    There are also risks from spreading ourselves too thin. It's important we maintain a coherent community whose members are able to communicate with each other. Having too many different methodologies and epistemic norms could make this hard. Eventually I think we're going to need to specialize: I expect different fields will benefit from different norms and heuristics. But right now I don't think we know what the right way to split long-termism is, so I'd be hesitant to specialize too early.

  • I also think we are currently too centered in Europe and North America, and see a lot of value in having a more active community in other countries. Many long-term problems require some form of global coordination, which will benefit significantly from having people in a variety of countries.

    I do think we need to take care here. First impressions count a lot, so poorly targeted initial outreach could hinder long-term growth in a country. Even seemingly simple things like book translations can be quite difficult to get right. For example, the distinction in English between "safety" and "security" is absent in many languages, which can make translating AI safety texts quite challenging!

    More fundamentally, EA ideas arose out of quite a specific intellectual tradition around questions of how to lead a good life, what meaning looks like, and so on, so figuring out how our ideas do or don't resonate with people in places with very different intellectual traditions is a serious challenge.

  • Of course, our current demographic breakdown is not ideal for a community that wants to exist for many decades to come, and I think we're missing out on some talented people because of this. It doesn't help that many of the fields and backgrounds we are drawing from tend to be unrepresentative, especially in terms of gender balance. So improving this seems like it would dovetail well with drawing people from a broader range of academic backgrounds.

  • I also suspect that the set of motivations we're currently tapping into is quite narrow. The current community is mostly utilitarian. But the long-termist case stands up well under a wide range of moral theories, so I'd like to see us reaching people with a wider range of moral views.

  • Related to this, I think we currently appeal only to a narrow range of personality types. This is inevitable to a degree: I'd expect individuals higher in conscientiousness or neuroticism to be more likely to want to work to protect the long-term future, for example. But I also think we have so far disproportionately attracted introverts, which seems more like an accident of the communities we've drawn upon and how we message things. Notably, extraversion vs. introversion does not seem to correlate with pro-environmental behaviours, for example, whereas agreeableness and openness do (Walden, 2015; Hirsh, 2010).

I would be excited about projects that work towards these goals.

Jonas Vollmer @ 2020-12-08T20:45 (+7)

(As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds.)

I agree with Adam and Asya. Some quick further ideas off the top of my head:

  • More academic teaching buy-outs. I think there are likely many longtermist academics who could get a teaching buy-out but aren’t even considering it.
  • Research into the long-term risks (and potential benefits) of genetic engineering.
  • Research aimed at improving cause prioritization methodology. (This might be a better fit for the EA Infrastructure Fund, but it’s also relevant to the LTFF.)
  • Open access fees for research publications relevant to longtermism, such that this work is available to anyone on the internet without any obstacles, plausibly increasing readership and citations.
  • Research assistants for academic researchers (and for independent researchers if they have a track record and there’s no good organization for them).
  • Books about longtermism-relevant topics.

Neel Nanda @ 2020-12-29T11:34 (+3)
  • Open access fees for research publications relevant to longtermism, such that this work is available to anyone on the internet without any obstacles, plausibly increasing readership and citations.

How important is this in the context of eg scihub existing?

Jonas Vollmer @ 2020-12-29T12:10 (+2)

Not everyone uses Sci-Hub, and even for those who do, open access still removes trivial inconveniences. But yeah, Sci-Hub, and the fact that PDFs (often preprints) are usually easy to find even without open access, make me a bit less excited.

AmritSidhu-Brar @ 2020-12-07T16:40 (+4)

That's really interesting to read, thanks very much! (Both for this answer and for the whole AMA exercise)

AdamGleave @ 2020-12-06T12:05 (+13)

I've already covered in this answer areas where we don't make many grants but I would be excited about us making more grants. So in this answer I'll focus on areas where we already commonly make grants, but would still like to scale this up further.

I'm generally excited to fund researchers when they have a good track record, are focusing on important problems and when the research problem is likely to slip through the cracks of other funders or research groups. For example, distillation style research, or work that is speculative or doesn't neatly fit into an existing discipline.

Another category, which is a bit harder to define, is grants that we have a comparative advantage in evaluating. This could be because one of the fund managers happens to already be an expert in the area and has a lot of context. Or maybe the application is time-sensitive and we're just about to start evaluating a grant round. In these cases the counterfactual impact is higher: these grants are less likely to be made by other donors.

G Gordon Worley III @ 2020-12-03T21:58 (+11)

LTF covers a lot of ground. How do you prioritize between different cause areas within the general theme of bettering the long term future?

AdamGleave @ 2020-12-04T20:08 (+14)

The LTFF chooses grants to make from our open application rounds. Because of this, our grant composition depends a lot on the composition of applications we receive. Although we may of course apply a different bar to applications in different areas, the proportion of grants we make certainly doesn't represent what we think is the ideal split of total EA funding between cause-areas.

In particular, I tend to see more variance in our scores between applications in the same cause-area than I do between cause-areas. This is likely because most of our applications are for speculative or early-stage projects. Given this, if you're reading this and are interested in applying to the LTFF but haven't seen us fund projects in your area before -- don't let that put you off. We're open to funding things in a very broad range of areas provided there's a compelling long-termist case.

Because cause prioritization isn't actually that decision relevant for most of our applications, I haven't thought especially deeply about it. In general, I'd say the fund is comparably excited about marginal work in reducing long-term risks from AI, biosafety, and general longtermist macrostrategy and capacity building. I don't currently see promising interventions in climate change, which already attracts significant funding from other sources, although we'd be open to funding something that seemed neglected, especially if it focused on mitigating or predicting extreme risks.

One area where there's active debate is the degree to which we should support general governance improvements. For example, we made a $50,000 grant to the Center for Election Science (CES) in our September 2020 round. CES has significantly more room for funding, so the main thing holding us back was uncertainty regarding the long-termist case for impact compared to more targeted interventions.

Naomi N @ 2020-12-04T12:32 (+10)

What are the most common reasons for rejection for applications of the Long-Term Future Fund?

abergal @ 2020-12-05T16:16 (+28)

Filtering out obvious misfits, I think the most common reason is that I don't think the project proposal will be sufficiently valuable for the long-term future, even if executed well. The less common reason is that there isn't strong enough evidence that the project will be executed well.

Sorry if this is an unsatisfying answer-- I think our applications are different enough that it’s hard to think of common reasons for rejection that are more granular. Also, often the bottom line is "this seems like it could be good, but isn't as good as other things we want to fund". Here are some more concrete kinds of reasons that I think have come up at least more than once:

  • Project seems good for the medium-term future, but not for the long-term future
  • Applicant wants to learn the answer to X, but X doesn't seem like an important question to me
  • Applicant wants to learn about X via doing Y, but I think Y is not a promising approach for learning about X
  • Applicant proposes a solution to some problem, but I think the real bottleneck in the problem lies elsewhere
  • Applicant wants to write something for a particular audience, but I don’t think that writing will be received well by that audience
  • Project would be good if executed exceptionally well, but applicant doesn't have a track record in this area, and there are no references that I trust to be calibrated to vouch for their ability
  • Applicant wants to do research on some topic, but their previous research on similar topics doesn't seem very good
  • Applicant wants money to do movement-building, but several people have reported negative interactions with them

PabloAMC @ 2021-02-19T19:28 (+3)

Hey Asya! I've seen that you've received a comment prize on this. Congratulations! I have found it interesting. I was wondering: you give these two reasons for rejecting a funding application

  • Project would be good if executed exceptionally well, but applicant doesn't have a track record in this area, and there are no references that I trust to be calibrated to vouch for their ability.
  • Applicant wants to do research on some topic, but their previous research on similar topics doesn't seem very good.

My question is: what method would you use to evaluate the track record of someone who has not done a Ph.D. in AI safety, but rather in something like physics (my case :) )? Do you expect the applicant to have some track record in AI safety research? I do not plan on applying for funding in the short term, but I think I would find some intuition on this valuable. I also ask because I find it hard to calibrate myself on the quality of my own research.

abergal @ 2021-02-20T01:17 (+15)

Hey! I definitely don't expect people starting AI safety research to have a track record doing AI safety work-- in fact, I think some of our most valuable grants are paying for smart people to transition into AI safety from other fields. I don't know the details of your situation, but in general I don't think "former physics student starting AI safety work" fits into the category of "project would be good if executed exceptionally well". In that case, I think most of the value would come from supporting the transition of someone who could potentially be really good, rather than from the object-level work itself.

In the case of other technical Ph.D.s, I generally check whether their work is impressive in the context of their field, whether their academic credentials are impressive, and what their references have to say. I also place a lot of weight on whether their proposal makes sense and shows an understanding of the topic, and on my own impressions of the person after talking to them.

I do want to emphasize that "paying a smart person to test their fit for AI safety" is a really good use of money from my perspective-- if the person turns out to be good, I've in some sense paid for a whole lifetime of high-quality AI safety research. So I think my bar is not as high as it is when evaluating grant proposals for object-level work from people I already know.

Habryka @ 2020-12-05T03:36 (+9)

Most common is definitely that something doesn't really seem very relevant to the long-term future (concrete example: "Please fund this local charity that helps people recycle more"). This is probably driven by people applying with the same project to lots of different grant opportunities; at least, that's how the applications often read.

I would have to think a bit more about patterns that apply to the applications that pass the initial filter (i.e. are promising enough to be worth a deeper investigation).

jackmalde @ 2020-12-04T10:56 (+10)

Do you think it's possible that, by only funding individuals/organisations that actually apply for funding, you are missing out on even better funding opportunities for individuals or organisations that didn't apply for some reason?

If yes, one possible remedy might be putting more effort into advertising the fund so that you get more applications. Alternatively, you could just decide that you won't be limited by the applications you receive and that you can give money to individuals/organisations who don't actually apply for funding (but could still use it well). What do you think about these options?

AdamGleave @ 2020-12-05T16:51 (+5)

Yes, I think we're definitely limited by our application pool, and it's something I'd like to change.

I'm pretty excited about the possibility of getting more applications. We've started advertising the fund more, and in the latest round we got the highest number of applications we rated as good (score >= 2.0, where 2.5 is the funding threshold). This is about 20-50% more than the long-term trend, though it's a bit hard to interpret (our scores are not directly comparable across time). Unfortunately the percentage of good applications also dropped this round, so we do need to avoid overly indiscriminate outreach to keep the review burden manageable.

I'm most excited about more active grant-making. For example, we could post proposals we'd like to see people work on, or reach out to people in particular areas to encourage them to apply for funding. Currently we're bottlenecked on fund manager time, but we're working on scaling that.

I'd be hesitant about funding individuals or organisations that haven't applied -- our application process is lightweight, so if someone chooses not to apply even after we prompt them, that seems like a bad sign. A possible exception would be larger organisations that already make the information we need available for assessment. Right now I'm not excited about funding more large organisations, since I think the marginal impact there is lower, but if the LTFF had a lot more money to distribute then I'd want to scale up our organisation grants.

jackmalde @ 2020-12-06T10:24 (+1)

Thanks for this reply. Active grant-making sounds like an interesting idea!

MarisaJurczyk @ 2020-12-05T20:32 (+3)

Good question! Relatedly, are there common characteristics among people/organizations who you think would make promising applicants but often don't apply? Put another way, who would you encourage to apply who likely hasn't considered applying?

AdamGleave @ 2020-12-06T13:37 (+11)

A common case is people who are just shy to apply for funding. I think a lot of people feel awkward about asking for money. This makes sense in some contexts - asking your friends for cash could have negative consequences! And I think EAs often put additional pressure on themselves: "Am I really the best use of this $X?" But of course as a funder we love to see more applications: it's our job to give out money, and the more applications we have, the better grants we can make.

Another case is people (wrongly) assuming they're not good enough. I think a lot of people underestimate their abilities, especially in this community. So I'd encourage people to just apply, even if you don't think you'll get it.

alexrjl @ 2020-12-06T14:08 (+14)

Do you feel that someone who had applied unsuccessfully and then re-applied for a similar project (but perhaps having gathered more evidence) would be more likely, less likely, or equally likely to get funding than someone submitting an application identical to that second one, but who had not been rejected before, having chosen not to apply the first time?

It feels easy to get into the mindset of "Once I've done XYZ, my application will be stronger, so I should do those things before applying", and if that's a bad line of reasoning to use (which I suspect it might be), some explicit reassurance might result in more applications.

abergal @ 2020-12-07T19:03 (+9)

I think definitely more or equally likely. :) Please apply!

Jonas Vollmer @ 2020-12-07T17:29 (+6)

Another one is that people assume we are inflexible in some way (e.g., constrained by maximum grant sizes or fixed application deadlines), but we can often be very flexible in working around those constraints, and have done that in the past.

G Gordon Worley III @ 2020-12-03T21:56 (+7)

Do you have any plans to become more risk tolerant?

Without getting too much into details, I disagree with some things you've chosen not to fund, and as an outsider I view the fund as being too unwilling to take risks on projects (especially projects where you don't know the requesters well) and to truly pursue a hits-based model. I really like some of the big bets you've taken in the past, for example funding people doing independent research who then produce what I consider useful or interesting results. But I'm somewhat hesitant about donating to LTF because I'm not sure it takes enough risks to represent a clearly better choice, for someone like me who's fairly risk-tolerant with their donations, than donating to other established projects or just donating directly (though donating directly has the disadvantage of making it hard for me to give something like seed funding and still get tax advantages).

AdamGleave @ 2020-12-04T20:10 (+18)

From an internal perspective I'd view the fund as being fairly close to risk-neutral. We hear around twice as many complaints that we're too risk-tolerant as complaints that we're too risk-averse, although of course the people who reach out to us may not be representative of our donors as a whole.

We do explicitly try to be conservative around things with a chance of significant negative impact to avoid the unilateralist's curse. I'd estimate this affects less than 10% of our grant decisions, although the proportion is higher in some areas, such as community building, biosecurity and policy.

It's worth noting that, unless I see a clear case for a grant, I tend to predict a low expected value -- not just a high-risk opportunity. This is because I think most projects aren't going to positively influence the long-term future -- otherwise the biggest risks to our civilization would already be taken care of. Based on that prior, it takes significant evidence to update me in favour of a grant having substantial positive expected value. This produces similar decisions to risk-aversion with a more optimistic prior.

Unfortunately, it's hard to test this prior: we'd need to see how good the grants we didn't make would have been. I'm not aware of any grants we passed on that turned out to be really good. But I haven't evaluated this systematically, and we'd only know about those which someone else chose to fund.

An important case where donors may be better off making donations themselves rather than donating via us is when they have more information than we do about some promising donation opportunities. In particular, you likely hear disproportionately about grants we rejected from people already in your network. You may be in a much better position to evaluate these than we are, especially if the impact of the grant hinges on the individual's abilities, or requires a lot of context to understand.

It's unfortunate that individual donors can't directly make grants to individuals in a tax efficient manner. You could consider donating to a donor lottery -- these will allow you to donate the same amount of money (in expectation) in a tax efficient manner. While grants can only be made within CEA's charitable objects, this should cover the majority of things donors would want to support, and in any case the LTFF also faces this restriction. (Jonas also mentioned to me that EA Funds is considering offering Donor-Advised Funds that could grant to individuals as long as there’s a clear charitable benefit. If implemented, this would also allow donors to provide tax-deductible support to individuals.)

G Gordon Worley III @ 2020-12-05T00:17 (+9)

Jonas also mentioned to me that EA Funds is considering offering Donor-Advised Funds that could grant to individuals as long as there’s a clear charitable benefit. If implemented, this would also allow donors to provide tax-deductible support to individuals.

This is pretty exciting to me. Without going into too much detail, I expect to have a large amount of money to donate in the near future, and LTF is basically the best option I know of (in terms of giving based on what I most want to give to) for the bulk of that money short of having the ability to do exactly this. I'd still want LTF as a fall back for funds I couldn't figure out how to better allocate myself, but the need for tax deductibility limits my options today (though, yes, there are donor lotteries).

Jonas Vollmer @ 2020-12-07T13:47 (+2)

Interested in talking more about this – sent you a PM!

EDIT: I should mention that this is generally pretty hard to implement, so there might be a large fee on such grants, and it might take a long time until we can offer it.

Ozzie Gooen @ 2020-12-04T16:34 (+6)

Can you clarify your models of which kinds of projects could cause net harm? My impression is that there's some thinking that funding many things would be actively harmful, but I don't feel like I have a great picture of the details here.

If there are such models, are there possible structural solutions to identifying particularly scalable endeavors? I'd hope that we could eventually identify opportunities for long-term impact that aren't "find a small set of particularly highly talented researchers", but things more like, "spend X dollars advertising Y in a way that could scale" or "build a sizeable organization of people that don't all need to be top-tier researchers".

abergal @ 2020-12-07T18:30 (+15)

Some things I think could actively cause harm:

  • Projects that accelerate technological development of risky technologies without corresponding greater speedup of safety technologies
  • Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along
  • Projects that engage with policymakers in an uncareful way, making them less willing to engage with longtermism in the future, or causing them to make bad decisions that are hard to reverse
  • Movement-building projects that give a bad first impression of longtermists
  • Projects that risk attracting a lot of controversy or bad press
  • Projects with ‘poisoning the well’ effects where if it’s executed poorly the first time, someone trying it again will have a harder time-- e.g., if a large-scale project doing EA outreach to highschoolers went poorly, I think a subsequent project would have a much harder time getting buy-in from parents.

More broadly, I think, as Adam notes above, that the movement grows as a function of its initial composition. I think that even if the LTFF had infinite money, this pushes against funding every project where we expect the EV of the object-level work to be positive-- if we want the community to attract people who do high-quality work, we should fund primarily high-quality work. Since the LTFF does not have infinite money, I don’t think this has much of an effect on my funding decisions, but I’d have to think about it more explicitly if we end up with much more money than our current funding bar requires. (There are also other obvious reasons not to fund all positive-EV things, e.g. if we expected to be able to use the money better in the future.)

I think it would be good to have scalable interventions for impact. A few thoughts on this:

  • At the org-level, there’s a bottleneck in mentorship and organizational capacity, and loosening it would allow us to take on more inexperienced people. I don’t know of a good way to fix this other than funding really good people to create orgs and become mentors. I think existing orgs are very aware of this bottleneck and working on it, so I’m optimistic that this will get much better over time.
  • Personally, I’m interested in experimenting with trying to execute specific high-value projects by actively advertising them and not providing significant mentorship (provided there aren’t negative externalities to the project not being executed well). I’m currently discussing this with the fund.
  • Overall, I think we will always be somewhat bottlenecked by having really competent people who want to work on longtermist projects, and I would be excited for people to think of scalable interventions for this in particular. I don’t have any great ideas here off the top of my head.

Jonas Vollmer @ 2020-12-07T21:02 (+10)

I agree with the above response, but I would like to add some caveats because I think potential grant applicants may draw the wrong conclusions otherwise:

If you are the kind of person who thinks carefully about these risks, are likely to change your course of action if you get critical feedback, and proactively sync up with the main people/orgs in your space to ensure you’re not making things worse, I want to encourage you to try risky projects nonetheless, including projects that have a risk of making things worse. Many EAs have made mistakes that caused harm, including myself (I mentioned one of them here), and while it would have been good to avoid them, learning from those mistakes also helped us improve our work.

My perception is that “taking carefully calculated risks” won’t lead to your grant application being rejected (perhaps it would even improve your chances of being funded because it’s hard to find people who can do that well) – but “taking risks without taking good measures to prevent/mitigate them” will.

Ozzie Gooen @ 2020-12-09T22:02 (+7)

Thanks so much for this, that was informative. A few quick thoughts:

“Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along”

I’ve heard this one before and I can sympathize with it, but it strikes me as a red flag that something is going a bit wrong. (I’m not saying that this is your fault, but am flagging it as an issue for the community more broadly.) Big companies often don’t have the ideal teams for new initiatives. Often urgency is very important, so they put something together relatively quickly. If it doesn’t work out, it’s not that big of a deal: they disband the team and have its members go to other projects, and perhaps find better people to take their place.

In comparison, with nonprofits it’s much more difficult. My read is that we sort of expect the nonprofits to never die, which means we need to be *very very* sure about them before setting them up. But if this is the case it would obviously be severely limiting. The obvious solution to this would be to have bigger orgs with more possibilities. Perhaps if specific initiatives were going well and demanded independence it could happen later on, but hopefully not for the first few years.

“I think it would be good to have scalable interventions for impact.” In terms of money, I’ve been thinking about this too. If this were a crucial strategy it seems like the kind of thing that could get a lot more attention. For instance, new orgs that focus heavily on ways to decently absorb a lot of money in the future.

Some ideas I’ve had:

- Experiment with advertising campaigns that could be clearly scaled up. Some of them seem linearly useful up to millions of dollars.

- Add additional resources to make existing researchers more effective.

- Buy the rights to books and spend on marketing for the key ones.

- Pay for virtual assistants and all other things that could speed researchers up.

- Add additional resources to make nonprofits more effective, easily.

- Better budgets for external contractors.

- Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank and do strategy for more funding.

While it might be a strange example, the wealthy, and in particular the Saudi government, are examples of how to spend lots of money semi-successfully with relatively few trusted people.

Having come from the tech sector in particular, it feels like much stingier expectations are often placed on EA researchers.

abergal @ 2020-12-10T22:10 (+9)

In comparison, with nonprofits it’s much more difficult. My read is that we sort of expect the nonprofits to never die, which means we need to be *very very* sure about them before setting them up. But if this is the case it would obviously be severely limiting.

To clarify, I don’t think that most projects will be actively harmful-- in particular, the “projects that result in a team covering a space that is worse than the next person who could have come along” case seems fairly rare to me, and would mostly apply to people who’d want to do certain movement-facing work or engage with policymakers. From a purely hits-based perspective, I think there’s still a dearth of projects that have a non-trivial chance of being successful, and this is much more limiting than projects being not as good as the next project to come along.

The obvious solution to this would be to have bigger orgs with more possibilities.

I agree with this. Maybe another thing that could help would be to have safety nets such that EAs who overall do good work could start and wind down projects without being worried about sustaining their livelihood or the livelihood of their employees? Though this could also create some pretty bad incentives.

 Some ideas I’ve had:

Thanks for these, I haven’t thought about this much in depth and think these are overall very good ideas that I would be excited to fund. In particular:

 - Experiment with advertising campaigns that could be clearly scaled up.  Some of them seem linearly useful up to millions of dollars.

I agree with this; I think there’s a big opportunity to do better and more targeted marketing in a way that could scale. I’ve discussed this with people and would be interested in funding someone who wanted to do this thoughtfully.

 - Add additional resources to make existing researchers more effective.

 - Pay for virtual assistants and all other things that could speed researchers up.

 - Add additional resources to make nonprofits more effective, easily.

Also super agree with this. I think an unfortunate component here is that many altruistic people are irrationally frugal, including me-- I personally feel somewhat weird about asking for money to have a marginally more ergonomic desk set-up or an assistant, but I generally endorse people doing this and would be happy to fund them (or other projects making researchers more effective).

 - Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank and do strategy for more funding.

I think historically, people have found it pretty hard to outsource things like this to non-EAs, though I agree with this in theory.

---

One total guess at an overarching theme for why we haven’t done some of these things already is that people implicitly model longtermist movement growth on the growth of academic fields, which grow via slowly accruing prestige and tractable work to do over time, rather than modeling them as a tech company the way you describe. I think there could be good reasons for this-- in particular, putting ourselves in the reference class of an academic field might attract the kind of people who want to be academics, which are generally the kinds of people we want-- people who are very smart and highly-motivated by the work itself rather than other perks of the job. For what it’s worth, though, my guess is that the academic model is suboptimal, and we should indeed move to a more tech-company like model on many dimensions.

Jonas Vollmer @ 2020-12-11T10:45 (+3)

Again, I agree with Asya. A minor side remark:

- Pay for virtual assistants and all other things that could speed researchers up.

As someone who has experience with hiring all kinds of virtual and personal assistants for myself and others, I think the problem here is not the money, but finding assistants who will actually do a good job, and organizing the entire thing in a way that’s convenient for the researchers/professionals who need support. More than half of the assistants I’ve worked with cost me more time than they saved me. Others were really good and saved me a lot of time, but it’s not straightforward to find them. If someone came up with a good proposal for this, I’d want to fund them and help them.

Similar points apply to some of the other ideas. We can’t just spend money on these things; we need to receive corresponding applications (which generally hasn’t happened) or proactively work to bring such projects into existence (which is a lot of work).

Jonas Vollmer @ 2020-12-07T13:58 (+4)

There will likely be a more elaborate reply, but these two links could be useful.

Ozzie Gooen @ 2020-12-07T21:09 (+2)

Thanks!

Peter_Hurford @ 2020-12-04T02:18 (+5)

What crucial considerations and/or key uncertainties do you think the EA LTF fund operates under?

MichaelA @ 2020-12-04T03:29 (+5)

Some related questions with slightly different framings: 

  • What types/lines of research do you expect would be particularly useful for informing the LTFF's funding decisions?
  • Do you have thoughts on what types/lines of research would be particularly useful for informing other funders'  funding decisions in the longtermism space?
  • Do you have thoughts on how the answers to those two questions might differ?

AdamGleave @ 2020-12-07T21:04 (+5)

What types/lines of research do you expect would be particularly useful for informing the LTFF's funding decisions?

I'd be interested in better understanding the trade-off between independent vs. established researchers. Relative to other donors, we fund a lot of independent research. My hunch here is that most independent researchers are less productive than they would be working at organisations -- although, of course, for many of them that's not an option (geographical constraints, organisational capacity, etc.). This makes me set a somewhat higher bar for funding independent research. Some other fund managers disagree with me and think independent researchers tend to be more productive, e.g. due to bad incentives in academic and industry labs.

I expect distillation-style work to be particularly useful. I expect there's already relevant research here: e.g. case studies of the most impressive breakthroughs, studies looking at different incentives in academic funding, etc. There probably won't be a definitive answer, so it'd also be important that I trust the judgement of the people involved, or that a variety of people with different priors going in come to similar conclusions.

Do you have thoughts on what types/lines of research would be particularly useful for informing other funders' funding decisions in the longtermism space?

While larger donors can suffer from diminishing returns, there are sometimes also increasing returns to scale. One important thing larger donors can do that isn't really possible at the LTFF's scale is to found new academic fields. More clarity into how to achieve this and have the field go in a useful direction would be great.

It's still mysterious to me how academic fields actually come into being. Equally importantly, what predicts whether they have good epistemics, whether they have influence, etc? Clearly part of this is the domain of study (it's easier to get rigorous results in category theory than economics; it's easier to get policymakers to care about economics than category theory). But I suspect it's also pretty dependent on the culture created by early founders and the impressions outsiders form of the field. Some evidence for this is that some very closely related fields can end up going in very different directions: e.g. machine learning and statistics.

Do you have thoughts on how the answers to those two questions might differ?

A key difference between the LTFF and some other funders is that we receive donations on a rolling basis, and I expect these donations to continue to increase over time. By contrast, many major donors have an endowment to spend down. So for them, timing is a really important question: how much should they give now vs. donate later? Whereas for us, I think the case for just donating every $ we receive seems pretty strong (except for keeping enough of a buffer to even out short-term fluctuations in application quality and donation revenue).

abergal @ 2020-12-06T23:10 (+1)

Edit: I really like Adam's answer

There are a lot of things I’m uncertain about, but I should say that I expect most research aimed at resolving these uncertainties not to provide strong enough evidence to change my funding decisions (though some research definitely could!) I do think weaker evidence could change my decisions if we had a larger number of high-quality applications to choose from. On the current margin, I’d be more excited about research aimed at identifying new interventions that could be promising.

Here's a small sample of the things that feel particularly relevant to grants I've considered recently. I'm not sure if I would say these are the most crucial:

  • What sources of existential risk are plausible?

    • If I thought that AI capabilities were perfectly entangled with their ability to learn human preferences, I would be unlikely to fund AI alignment work.
    • If I thought institutional incentives were such that people wouldn’t create AI systems that could be existentially threatening without taking maximal precautions, I would be unlikely to fund AI risk work at all.
    • If I thought our lightcone was overwhelmingly likely to be settled by another intelligent species similar to us, I would be unlikely to fund existential risk mitigation outside of AI.
       
  • What kind of movement-building work is effective?

    • Adam writes above how he thinks movement-building work that sacrifices quality for quantity is unlikely to be good. I agree with him, but I could be wrong about that. If I changed my mind here, I’d be more likely to fund a larger number of movement-building projects.
    • It seems possible to me that work that’s explicitly labeled as ‘movement-building’ is generally not as effective for movement-building as high-quality direct work, and could even be net-negative. If I decided this was true, I’d be less likely to fund movement-building projects at all.
       
  • What strands of AI safety work are likely to be useful?

    • I currently take a fairly unopinionated approach to funding AI safety work-- I feel willing  to fund anything that I think a sufficiently large subset of smart researchers would think is promising. I can imagine becoming more opinionated here, and being less likely to fund certain kinds of work.
    • If I believed that it was certain that very advanced AI systems were coming soon and would look like large neural networks, I would be unlikely to fund speculative work focused on alternate paths to AGI.
    • If I believed that AI systems were overwhelmingly unlikely to look like large neural networks, this would have some effect on my funding decisions, but I’d have to think more about the value of near-term work from an AI safety field-building perspective.

gavintaylor @ 2020-12-06T19:18 (+4)

Several comments have mentioned that CEA provides good infrastructure for making tax-deductible grants to individuals, and also that the LTF often does make, and is well suited to making, grants to individual researchers. Would it make sense for either the LTF or CEA to develop some further guidelines about the practicalities of receiving and administering grants, aimed at individuals (or even non-charitable organisations) that are not familiar with this sort of income, to help funds get used effectively?
 
As a motivating example, when I recently received an LTF grant, I sought legal advice in my tax jurisdiction and found out the grant was tax-exempt. However, prior to that, CEA staff said that many grantees do pay tax on grant funds and that they would consider it reasonable for me to do so. I have been paid on scholarships and fellowships for nearly 10 years and had the strong expectation that such funding is typically tax-free, which led me to follow this up with a taxation lawyer; still, I wonder whether other people who haven't previously received grant income come into this with different expectations and end up paying tax unnecessarily. While specifics vary between tax jurisdictions, having the right set of expectations as a grantee helped me a lot. Maybe there are also other general areas of grant receipt/administration that it would be useful to provide advice on.

Jonas Vollmer @ 2020-12-07T14:17 (+4)

Thanks for the input, we'll take this into account. We do provide tax advice for the US and UK, but we've also looked into expanding this. Edit: If you don't mind, could you let me know which jurisdiction was relevant to you at the time?

gavintaylor @ 2020-12-08T19:30 (+3)

I received my LTF grant while living in Brazil (I forwarded the details of the Brazilian tax lawyer I consulted to CEA staff). However, I built up my grantee expectations while doing research in Australia and Sweden, and was happy they were also valid in Brazil. 
My intuition is that most countries that allow either PhD students or postdocs to receive tax-free income for doing research at universities will probably also allow CEA grants to individuals to be declared in a tax-free manner, at least if the grant is for a research project.

Jonas Vollmer @ 2020-12-08T20:13 (+2)

Makes sense, thanks!

andyljones @ 2020-12-11T22:37 (+1)

Is that tax advice published anywhere? I'd assumed any grants I received in the UK would be treated as regular income, and if that's not the case it's a pleasant surprise!

Jonas Vollmer @ 2020-12-13T14:23 (+4)

It's not public. If you like, you can PM me your email address and I can try asking someone to get in touch with you.

GMcGowan @ 2020-12-04T14:53 (+3)

What would you like to fund, but can't because of organisational constraints? (e.g. investing in private companies is IIRC forbidden for charities).

AdamGleave @ 2020-12-06T13:42 (+5)

It's actually pretty rare that we've not been able to fund something; I don't think this has come up at all while I've been on the fund (2 rounds), and I can only think of a handful of cases before.

It helps that the fund knows some other private donors we can refer grants to (with the applicant's permission), so in the rare cases where something is out of scope, we can often still get it funded.

Of course, people who know we can't fund them because of the fund's scope may choose not to apply, so the true proportion of opportunities we're missing may be higher. A big class of things the LTFF can't fund is political campaigns. I think that might be high-impact in some high-stakes elections, though I've not donated to campaigns myself, and I'm generally pretty nervous of anything that could make long-termism perceived as a partisan issue (which it obviously is not).

I don't think we'd often want to invest in private companies. As discussed elsewhere in this thread, we tend to find grants to individuals better than to orgs. Moreover, one of the attractive points of investing in a private company is that you may get a return on your investment. But I think the altruistic return on our current grants is pretty high, so I wouldn't want to lock up capital. If we had 10-100x more money to distribute and so had to invest some of it to grant out later, then investing some proportion of it in companies where there's an altruistic upside might make more sense.

Jonas Vollmer @ 2020-12-07T17:31 (+2)

If a private company applied for funding to the LTFF and they checked the "forward to other funders" checkbox in their application, I'd refer them to private donors who can directly invest in private companies (and have done so once in the past, though they weren't funded).

Linda Linsefors @ 2021-02-04T13:25 (+1)

What do you think is a reasonable amount of time to spend on an application to the LTFF?

Jonas Vollmer @ 2021-02-04T15:31 (+6)

If you're applying for funding for a project that's already well-developed (i.e. you have thought carefully about its route to value, what the roadmap looks like, etc.), 30-60 minutes should be enough, and further time spent polishing likely won't improve your chances of getting funding.

If you don't have a well-developed project, it seems reasonable to add whichever amount of time it takes to develop the project in some level of detail on top of that.

Linda Linsefors @ 2021-02-09T13:41 (+9)

That's surprisingly short, which is great by the way. 

I think most grant processes are not like this. That is, you can usually increase your chance of funding by spending a lot of time polishing an application, which leads to a sort of arms race among applicants where more and more time is wasted on polishing applications.

I'm happy to hear that the LTFF does not reward such behavior. On the other hand, the same dynamic will still happen as long as people don't know that more polish will not help.

You can probably save a lot of time on the side of the applicants by:

  • Stating how much time you recommend people spend on the application
  • Share some examples of successful applications (with the permission of the applicant) to show others what level and style of writing to aim for.

I understand that no one application will be perfectly representative, but even just one example would still help, and several examples would help even more. Preferably the examples would be of good-enough rather than optimal writing, assuming that you want people to be satisficers rather than maximisers with regard to application-writing quality.

Jonas Vollmer @ 2021-02-10T11:21 (+3)

On reflection I actually think 1-4 hours seems more correct. That's still pretty short, and we'll do our best to keep it as quick and simple as possible.

We're just updating the application form and had been planning to make the types of changes you're suggesting (though not sharing successful applications - but that could be interesting, too).

Linda Linsefors @ 2021-02-04T13:19 (+1)

What percentage of people who apply for a transition grant from something else to AI Safety get approved? Anything you want to add to put this number in context?

What percentage of people who apply for funding for independent AI Safety research get approved? Anything you want to add to put this number in context?

For example, if there is a clear category of people who don't get funding because they clearly want to do something different from saving the long-term future, then this would be useful contextual information.

Jonas Vollmer @ 2021-02-04T15:42 (+3)

This isn't exactly what you asked, but the LTFF's acceptance rate of applications that aren't obvious rejections is ~15-30%.