Long-Term Future Fund Ask Us Anything (September 2023)
By Linch, calebp, abergal, Habryka, Thomas Larsen, Daniel_Eth, Clara Collier, Lauro Langosco, Lawrence Chan @ 2023-08-30T23:02 (+64)
LTFF is running an Ask Us Anything! Most of the grantmakers at LTFF have agreed to set aside some time to answer questions on the Forum.
I (Linch) will make a soft commitment to answer one round of questions this coming Monday (September 4th) and another round the Friday after (September 8th).
We think that right now could be an unusually good time to donate. If you agree, you can donate to us here.
About the Fund
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas and to otherwise increase the likelihood that future generations will flourish.
In 2022, we disbursed ~250 grants worth ~$10 million. You can see our public grants database here.
Related posts
- LTFF and EAIF are unusually funding-constrained right now
- EA Funds organizational update: Open Philanthropy matching and distancing
- Long-Term Future Fund: April 2023 grant recommendations
- What Does a Marginal Grant at LTFF Look Like?
- Asya Bergal’s Reflections on my time on the Long-Term Future Fund
- Linch Zhang’s Select examples of adverse selection in longtermist grantmaking
About the Team
- Asya Bergal: Asya is the current chair of the Long-Term Future Fund. She also works as a Program Associate at Open Philanthropy. Previously, she worked as a researcher at AI Impacts and as a trader and software engineer for a crypto hedge fund. She's also written for the AI alignment newsletter and been a research fellow at the Centre for the Governance of AI at the Future of Humanity Institute (FHI). She has a BA in Computer Science and Engineering from MIT.
- Caleb Parikh: Caleb is the project lead of EA Funds. Caleb has previously worked on global priorities research as a research assistant at GPI, EA community building (as a contractor to the community health team at CEA), and global health policy.
- Linchuan Zhang: Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.
- Oliver Habryka: Oliver runs Lightcone Infrastructure, whose main product is LessWrong. LessWrong has significantly influenced conversations around rationality and AGI risk, and its community is often credited with having realized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk, and crypto much earlier than other comparable communities.
You can find a list of our fund managers in our request for funding here.
Ask Us Anything
We’re happy to answer any questions – marginal uses of money, how we approach grants, questions/critiques/concerns you have in general, what reservations you have as a potential donor or applicant, etc.
There’s no real deadline for questions, but let’s say we have a soft commitment to focus on questions asked on or before September 8th.
Because we’re unusually funding-constrained right now, I’m going to shill again for donating to us.
If you have projects relevant to mitigating global catastrophic risks, you can also apply for funding here.
AnonymousAccount @ 2023-08-31T22:03 (+41)
What fraction of the best projects that you currently can't fund have applied for funding from Open Philanthropy directly? Reading this, it seems that many would qualify.
Why doesn't Open Philanthropy fund these hyper-promising projects if, as one grantmaker writes, they are "among the best historical grant opportunities in the time that I have been active as a grantmaker?" Open Philanthropy writes that LTFF "supported projects we often thought seemed valuable but didn’t encounter ourselves." But since the chair of the LTFF is now a Senior Program Associate at Open Philanthropy, I assume that this does not apply to existing funding opportunities.
Habryka @ 2023-09-05T02:32 (+10)
I have many disagreements with the funding decisions of Open Philanthropy, so some divergence here is to be expected.
Separately, my sense is Open Phil really isn't set up to deal with the grant volume that the LTFF is dealing with, in addition to its existing grantmaking. My current guess is that the Open Phil longtermist community building team makes like 350-450 grants a year, in total, with 7-8 full-time staff [edit: previously said 50-100 grants on 3-4 staff, because I forgot about half of the team, I am sorry. I also clarified that I was referring to the Open Phil longtermist community building team, not the whole longtermist part]. The LTFF makes ~250 grants per year, on around 1.5 full-time equivalents, which, if Open Phil were to try to take them on additionally, would require more staff capacity than they have available.
Also, Open Phil has already been having a good amount of trouble getting back to its current grantees in a timely manner, at least based on conversations I've had with various OP grantees, so I don't think there is a way Open Phil could fill the relevant grant opportunities without just directly making a large grant to the LTFF (and also, honestly, the LTFF itself isn't that well set up to take advantage of the opportunities presenting themselves to us, given our very limited staff capacity and even longer response times, hence my endorsement for a somewhat modest but not huge amount of funding).
I currently expect the default thing that would happen if most of our grantees were to apply to Open Phil is that Open Phil either wouldn't get back to them for many months, or they would get a response quickly saying that Open Phil doesn't have time to evaluate their grant request, but that they are encouraged to apply to other grantmakers like the LTFF [edit: or they might fund a small fraction of these new applications and reject the rest, though my guess is Open Phil would prefer to refer them to us on the margin].
Linch @ 2023-09-05T03:10 (+3)
I suspect your figures for Open Phil are pretty off, on both the number of people and the number of grants. I would guess (only counting people with direct grantmaking authority) OP longtermism would have:
- 5-6 people on Claire's team (longtermist CB)
- 1-2 people on alignment
- (Yes, this feels shockingly low to me as well)
- 2-5 people on biosecurity
- 3-6 people on AI governance
- probably other people I'm missing
Also, looking at their website, it looks like there's a lag before grants are reported (similar to us), but before May 2023 there appear to be 10-20 public grants reported per month (just looking at their grants database and filtering on longtermism). I don't know how many non-public grants they give out, but I'd guess it's ~10-40% of the total.
To first order, I think it's reasonable to think that OP gives out roughly a similar number of grants to us, but at 10-20 times the dollar amount per grant.
This is not accounting for how some programs that OP would classify as a single program would be counted as multiple grants by our ontology, e.g. Century Fellowship.
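A rough sketch of the arithmetic behind that guess, using the ranges above (these are the guessed figures from this comment, not official Open Phil numbers, and the variable names are purely illustrative):

```python
# Rough BOTEC using the guessed ranges above (not official Open Phil figures).
public_per_month_low, public_per_month_high = 10, 20   # public longtermist grants reported per month
nonpublic_frac_low, nonpublic_frac_high = 0.10, 0.40   # guessed share of all grants that are non-public

# If non-public grants are a fraction f of the total, then total = public / (1 - f).
total_low = public_per_month_low * 12 / (1 - nonpublic_frac_low)     # ~133 grants/year
total_high = public_per_month_high * 12 / (1 - nonpublic_frac_high)  # ~400 grants/year

print(f"Implied OP longtermist grants per year: ~{total_low:.0f} to ~{total_high:.0f} (vs ~250 for LTFF)")
```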
Habryka @ 2023-09-05T05:48 (+2)
Sorry, I meant to just refer to the Open Phil longtermist community building team, which felt like the team that would most likely be able to take over some of the grant load, and I know much less about the other teams. Edited to correct that.
Agree that I underestimated things here. Agree that OP grants are vastly larger, which makes up a good amount of the difference in grant capacity per staff member. It's additionally the case that OP seems particularly low on AI alignment grant capacity, which is where most of the grants I am most excited about would fall, and which formed a bunch of my aggregate impression.
abergal @ 2023-09-05T03:25 (+8)
[Speaking for myself, not Open Philanthropy]
Empirically, I've observed some but not huge amounts of overlap between higher-rated applicants to the LTFF and applicants to Open Philanthropy's programs; I'd estimate around 10%. And my guess is the "best historical grant opportunities" that Habryka is referring to[1] are largely in object-level AI safety work, which Open Philanthropy doesn’t have any open applications for right now (though it’s still funding individuals and research groups sourced through other means, and I think it may fund some of the MATS scholars in particular).
More broadly, many grantmakers at Open Philanthropy (including myself, and Ajeya, who is currently the only person full-time on technical AI safety grantmaking), are currently extremely capacity-constrained, so I wouldn't make strong inferences that a given project isn't cost-effective purely on the basis that Open Philanthropy hasn't already funded it.
- ^
I don’t know exactly which grants this refers to and haven’t looked at our current highest-rated grants in-depth; I’m not intending to imply that I necessarily agree (or disagree) with Habryka’s statement.
AnonymousAccount @ 2023-09-08T14:05 (+1)
I'd estimate around 10%.
Thank you for the detailed reply. That seems like surprisingly little; I hope more apply.
Also really glad to hear that OP may fund some of the MATS scholars, as the original post mentioned that "some of [the unusual funding constraint] is caused by a large number of participants of the SERI MATS program applying for funding to continue the research they started during the program, and those applications are both highly time-sensitive and of higher-than-usual quality".
Thank you again for taking the time to reply given the extreme capacity constraints.
calebp @ 2023-09-05T05:58 (+2)
Responding specifically to
Open Philanthropy writes that LTFF "supported projects we often thought seemed valuable but didn’t encounter ourselves." But since the chair of the LTFF is now a Senior Program Associate at Open Philanthropy, I assume that this does not apply to existing funding opportunities.
Having a chair who works at Open Phil has helped less than one might naively think. My impression is that Open Phil doesn't want to commit to evaluating LTFF applications that the LTFF thinks are good but doesn't have the ability to fund. We are working out how to more systematically share applications going forward in a way that doesn't create an obligation for Open Phil to evaluate them (or the impression that Open Phil has this obligation to the public), but I think that this will look more like Open Phil having the option to look at some grant applications we think are good, as opposed to Open Phil actually checking every application that we share with them.
Austin @ 2023-08-31T16:07 (+31)
How is the search going for the new LTFF chair? What kind of background and qualities would the ideal candidate have?
Linch @ 2023-09-04T01:59 (+17)
Here are my guesses for the most valuable qualities:
- Deep technical background and knowledge in longtermist topics, particularly in alignment.
- Though I haven't studied this area myself, my understanding of the history of good funding for new scientific fields (and other forms of research "leadership"/setting strategic direction in highly innovative domains) is that usually you want people who are quite good at the field you want to advance or fund, even if they aren't the very top scientists.
- Basically you might not want the best scientists at the top, but for roles that require complex/nuanced calls in a deeply technical field, you want second-raters who are capable of understanding what's going on quickly and broadly. You don't want your research agenda implicitly set by mediocre scientists, or worse, non-technical people.
- Because we give more grants in alignment than other technical fields, I think a deep understanding of alignment and other aspects of technical AI safety should be prioritized over (eg) technical biosecurity or nuclear security or forecasting or longtermist philosophy.
- The other skillsets are still valuable ofc, and would be a plus in a fund manager.
- Consistency and reliability.
- Because the LTFF chair can easily be a bottleneck, we want someone who is quite reliable and good at unblocking problems, rather than having things slow down while waiting for their approval.
- This means both fairly high consistency during "normal" weeks and a decent degree of emotional resiliency during more stressful times, plus either a lack of frequent other professional/life commitments or a process for working around such commitments, etc.
- I think some of LTFF's past dysfunctions can be attributed to Asya (our current fund chair) genuinely being unsure whether LTFF work is higher EV than her day job at Open Phil.[1]
- This is also the main reason I don't think I should be chair. Otherwise I'm an okay fit (though not an amazing fit, because I think my technical understanding isn't sufficiently deep). But for health etc. reasons (and also just looking at my own empirical track record), I don't think I have the reliability that the LTFF deserves.
- Good generalist judgment.
- Good grantmaking relies on a lot of judgment calls, some obvious, some more subtle. The LTFF chair will need to set policies, hire people, etc, in ways that result in consistently good judgment calls. Most likely, this means the chair ought to have rather good judgment themselves.
Here are some other qualities that I think are also valuable, but lower priority:
- High professional integrity.
- It is very very important that we hire someone who, in their role as LTFF chair, consistently prioritizes the common good over personal gain.
- More than in most other roles, there are a number of situations where someone in the LTFF chair role could sacrifice the LTFF's goals for short-term personal gain. We need someone who consistently tries their best to choose the higher-integrity option, whether for altruistic or integrity-based reasons.
- This can be both obvious stuff (COIs, incentives or personal biases compromising professional judgment) and more subtle effects (eg overlooking mistakes or potential negatives of LTFF when talking to funders, chasing shiny and more prestigious proxies rather than being grounded in serving the good).
- I mention this below the top 3 not because I think this quality is less important than the other qualities, but because my guess is that it's less rare, especially conditional upon good judgment. My guess is that most people who fulfill the above criteria would have sufficiently high professional integrity, though it'd be very bad if we hire a chair who is below that bar.
- Vision
- Ideally they can come in with a vision of how to make LTFF great, instead of being in reactive mode and just trying to do locally good and reasonable things.
- Good project and people management
- Being a good manager is probably important for running a good fund, though I actually think running the LTFF requires this less than leading most multi-person longtermist projects does; LTFF grantmakers and grantees are both fairly autonomous. Still, the bare minimum of good management you need is higher for being the LTFF fund chair than for (eg) being an independent longtermist researcher.
- Good stakeholder management.
- You should be the type of person who can understand and try to work with the priorities of funders, grantees, fund managers, advisors, etc. You don't need to be stellar at it, and you don't need to be loved, but at least you need most stakeholders to (correctly) believe that you care about their perspective and you're willing to work with them.
- Good communications ability
- As with the above, a "nice-to-have" is for people to easily be able to understand your perspective. That said, this isn't a critical ability as others on the fund (eg myself) can cover for a chair who is otherwise great but is not very good at written communication.
- Solid professional network in areas of interest for the LTFF. You can probably come in without much of a network, but I imagine things will go more smoothly if the new chair has connections they can use to promote new active grantmaking projects and new jobs, ask for advice from experts on tricky grants, etc. But I think this is something someone can build up over time as well, and shouldn't be a pre-requisite.
- ^
I'm pretty confused about how the numbers add up; naively, the 5th hour on LTFF has to be more important than the 40th hour at OP, given the relative scales of the two organizations. But I don't really know which projects Asya is responsible for, how much I'm underestimating OP giving due to anonymous donations, etc.
BrownHairedEevee @ 2023-08-31T01:15 (+27)
How does the team weigh the interests of non-humans (such as animals, extraterrestrials, and digital sentience) relative to humans? What do you folks think of the value of interventions to help non-humans in the long-term future specifically relative to that of interventions to reduce x-risk?
Linch @ 2023-09-05T01:03 (+7)
I don't think there is a team-wide answer, and there certainly isn't an institutional answer that I'm aware of. My own position is a pretty-standard-within-EA form of cosmopolitanism, where a) we should have a strong prior in favor of moral value being substrate-independent, and b) we should naively expect people to (wrongly) underestimate the moral value of beings that look different from ourselves. Also as an empirical belief, I do expect the majority of moral value in the future to be held in minds that are very different from my own. The human brain is just such a narrow target in the space of possible designs, it'd be quite surprising to me if a million years from now the most effective way to achieve value is via minds-designed-just-like-2023-humans, even by the lights of typical 2023-humans.
There are some second-order concerns like cooperativeness (I have a stronger presumption in favor of believing it's correct to cooperate with other humans than with ants, or with aliens), but I think cosmopolitanism is mostly correct.
However, I want to be careful in distinguishing the moral value or moral patiency of other beings from their interests. It is at least theoretically possible to imagine agents (eg designed digital beings) with strong preferences and optimization ability but not morally relevant experiences. In those cases, I think there are cooperative reasons to care about their preferences, but not altruistic reasons. In particular, I think the case for optimizing for the preferences of non-existent beings is fairly weak, but the case for optimizing for their experiences (eg making sure future beings aren't tortured) is very strong.
That said, in practice I don't think we often (ever?) get competitive grant applications that specialize in helping non-humans in the LT future; most of our applications are about reducing risks of extinction or other catastrophic outcomes, with a smattering of applications that are about helping individuals and organizations think better (eg via forecasting or rationality training or improving mechanism/institutional design), with flow-through effects that I expect to be both positive for reducing global catastrophic risks and for other long-term outcomes.
calebp @ 2023-09-05T06:31 (+4)
I don't think we often (ever?) get competitive grant applications that specialize in helping non-humans in the LT future
I think we've funded some work on digital sentience before. I would personally be excited about seeing some more applications in this area. I think marginal work in this area could be competitive with AIS grants if our bar lowers (as I expect it to).
BrownHairedEevee @ 2023-09-07T04:28 (+2)
Thanks for the responses, @Linch and @calebp!
There are several organizations that work on helping non-humans in the long-term future, such as Sentience Institute and Center on Long-Term Risk; do you think that their activities could be competitive with the typical grant applications that LTFF gets?
Also, in general, how do you folks decide how to prioritize between causes and how to compare projects?
Linch @ 2023-09-07T21:39 (+4)
I'm confused about the prudence of publicly discussing specific organizations in the context of being potential grantees, especially ones that we haven't (AFAIK) given money to.
Linch @ 2023-09-08T10:50 (+2)
Okay, giving entirely my own professional view as I see it, absolutely not speaking for anybody else or the fund writ large:
There are several organizations that work on helping non-humans in the long-term future[...]; do you think that their activities could be competitive with the typical grant applications that LTFF gets?
To be honest, I'm not entirely sure what most of these organizations actually do research on, on a day-to-day basis. Here are some examples of what I understand to be the one-sentence pitches for many of these projects:
- figure out models of digital sentience
- research on cooperation in large worlds
- how to design AIs to reduce the risk that unaligned AIs will lead to hyperexistential catastrophes
- moral circle expansion
- etc.
Intuitively, they all sound plausible enough to me. I can definitely imagine projects in those categories being competitive with our other grants, especially if and when our bar lowers to where I think the longtermist bar overall "should" be. That said, the specific details of those projects, individual researchers, and organizational structure and leadership matter as well[1], so it's hard to give an answer writ large.
From a community building angle, I think junior researchers who try to work on these topics have a reasonably decent hit rate of progressing to doing important work in other longtermist areas. So I can imagine a reasonable community-building case to fund some talent development programs as well[2], though I haven't done a BOTEC and again the specific details matter a lot.
- ^
For example, I'm rather hesitant to recommend funding to organizations where I view the leadership as having a substantially higher-than-baseline rate of being interpersonally dangerous.
- ^
I happen to have a small COI with one of the groups, so were they to apply, I would likely recuse myself from the evaluation.
NunoSempere @ 2023-09-05T19:33 (+26)
I've heard that you have a large delay between when someone applies to the fund and when they hear back from you. How large is this delay right now? Are you doing anything in particular to address it?
Linch @ 2023-09-06T08:47 (+10)
I think last time we checked, it was ~a month in the median and ~2 months on average, with moderately high variance. This is obviously very bad. Unfortunately, our current funding constraints probably make things worse[1], but I'm tentatively optimistic that with a) new guest fund managers, b) more time to come up with better processes (now that I'm on board ~full-time, at least temporarily), and c) hopefully incoming money (or at least greater certainty about funding levels), we can do somewhat better going forwards.
(Will try to answer other parts of your question/other unanswered questions on Friday).
- ^
Because we are currently doing a mix of a) holding on to grants that are above our old bar but below our current bar while waiting for further funding, and b) trying to refer them to other grantmakers, both of which take up calendar time. Also, the lower level of funding means we are, or at least I am, prioritizing other aspects of the job (eg fundraising, public communications) over getting back to applicants quickly.
RyanCarey @ 2023-09-06T23:21 (+38)
The level of malfunctioning that is going on here seems severe:
- The two month average presumably includes a lot of easy decisions, not just hard ones.
- The website still says LTFF will respond in 8 weeks (my emphasis)
- The website says they may not respond within an applicant's preferred deadline. But what it should actually say is that LTFF also may not respond within their own self-imposed deadline.
- And then the website should indicate when, statistically, it does actually tend to give a response.
- Moreover, my understanding is that weeks after these self-imposed deadlines, you still may have to send multiple emails and wait weeks longer to figure out what is going on.
Given all of the above, I would hope you could aim to get more than "somewhat better", and have a more comprehensive plan for how to get there. I get that LTFF is pretty broke rn and that we need an OpenPhil alternative, and that there's a 3:1 match going on, so it probably makes sense for LTFF to receive some funding for the time being. I also get that you guys are trying hard to do good, and are probably currently shopping around unfunded grants, etc., but there's a part of me that thinks if you can't even get it together on a basic level, then to find that OpenPhil alternative, we should be looking elsewhere.
Linch @ 2023-09-06T23:46 (+17)
The website still says LTFF will respond in 8 weeks (my emphasis)
Oof. Apologies, I thought we'd fixed that everywhere already. Will try to fix ASAP.
but there's a part of me that thinks if you can't even get it together on a basic level, then to find that OpenPhil alternative, we should be looking elsewhere.
Yeah, I think this is very fair. I do think the funding ecosystem is pretty broken in a bunch of ways, and of course we're a part of that; I'm reminded of Luke Muehlhauser's old comment about how MIRI's operations got a lot better after he read Nonprofit Kit for Dummies.
We are trying to hire a new LTFF chair, so if you or anybody you know is excited to try to right the ship, please encourage them to apply! There are a number of ways we suck, and a new chair could prioritize speed in getting back to grantees as the first thing to fix.
I can also appreciate wanting a new solution rather than via fixing LTFF. For what it's worth people have been consistently talking about shutting down LTFF in favor of a different org[1] approximately since I started volunteering here in early 2022; over the last 18 months I've gotten more pessimistic about replacements, which is one of the reasons why I decided to join ~full-time to try to fix it.
I think Manifund is faster than us (and iiuc the slowness of LTFF was a key reason they decided to make something new); donors reading this comment may be interested in donating to them.
- ^
Including a different arm at existing orgs like Open Phil for handling LTFF-like work.
calebp @ 2023-09-07T00:44 (+7)
Fwiw I think that this
but there's a part of me that thinks if you can't even get it together on a basic level, then to find that OpenPhil alternative, we should be looking elsewhere.
Is not "very fair". Whilst I agree that we are slower than I'd like and slower than our website indicates I think it's pretty unclear that Open Phil is generally faster than us, I have definitely heard similar complaints about Open Phil, SFF and Longview (all the current EA funders with a track record > 1 year). My sense is that Ryan has the impression that we are slower than the average funder, but I don't have a great sense of how he could know this. If we aren't particularly bad relative to some average of funders that have existed for a while, I think the claim "we don't have it together on a basic level is" pretty unfair.
(After some discussion with Linch, I think we disagree on what "get it together on a basic level" means. One thing that Linch and I both agree on is that we should be setting more accurate expectations with grantees (e.g. in some of the ways Ryan has suggested), and that even if we had set more accurate expectations, we would not be having more than 10% more impact.)
Here we say that the LTFF between Jan 22 - April 23:
- had a median response time of 29 days
- evaluated >1000 applications
- recommended ~$13M of funding across > 300 grants
Whilst using mostly part-time people (meaning our overheads are very low), dealing with complications from the FTX crash, running always-open general applications (which aim to be more flexible than round-based funds or specialised programs), and making grants in complex areas that don't just directly funge with Open Phil (unlike, for example, Longview's longtermism fund). It was pretty hard to get a sense of how much grantmaking Open Phil, SFF, Founders Pledge, and Longview have done over a similar timeframe (and a decent amount of what I do know isn't sharable), but I currently think we stack up pretty well.
I'm aware that my general tone could leave you with the impression that I am not taking the delays seriously, when I do actually directionally agree. I do think we could be much quicker, and that it’s important. Primarily we‘ll be improving some of our internal processes and increasing capacity.
abrahamrowe @ 2023-09-08T20:03 (+15)
Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I'd guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn't the right comparison). Given that LTFF is funding a lot of research, 2 months is almost certainly better than most academic grants.
My impression, from what I think is a pretty large sample of EA funders and grants, is also that EA Funds has the fastest turnaround time on average compared to the list you mention (with exceptions in some cases, in both directions, for EA Funds and other funders).
RyanCarey @ 2023-09-07T09:18 (+2)
I think the core of the issue is that there's unfortunately somewhat of a hierarchy of needs for a grantmaking org. That you're operating at size, in diverse areas, with always-open applications, and using part-time staff is impressive, but people will still judge you harshly if you're struggling to perform your basic service.
Regarding these basics, we seem to agree that an OpenPhil alternative should accurately represent their evaluation timelines on the website, and should give an updated timeline when the stated grant decision time passes (at least on request).
With regard to speed, just objectively, LTFF is falling short of the self-imposed standard - "within eight weeks, and typically in just four weeks". And I don't think that standard is an inappropriate one, given that LTFF is a leaner operation than OpenPhil, and afaict, past-LTFF, past SFF, and Fast Grants all managed to be pretty quick.
That you're struggling with the basics is what leads me to say that LTFF doesn't "have it together".
Habryka @ 2023-09-07T09:23 (+15)
That you're struggling with the basics is what leads me to say that LTFF doesn't "have it together".
Just FWIW, this feels kind of unfair, given that, like, if our grant volume hadn't increased by like 5x over the past 1-2 years (and especially the last 8 months), we would probably be totally rocking it in terms of "the basics".
Like, yeah, the funding ecosystem is still recovering from a major shock, and it feels kind of unfair to judge the LTFF performance on the basis of such an unprecedented context. My guess is things will settle into some healthy rhythm again when there is a new fund chair, and the basics will be better covered again, when the funding ecosystem settles into more of an equilibrium again.
RyanCarey @ 2023-09-07T10:03 (+4)
Ok, it makes sense that a temporary 5x in volume can really mess you up.
Rebecca @ 2023-09-08T17:59 (+18)
If someone told me about a temporary 5x increase in volume that understandably messed things up, I would think they were talking about a couple month timeframe, not 8 months to 2 years. Surely there’s some point at which you step back and realise you need to adapt your systems to scale with demand? E.g. automating deadline notifications.
It’s also not clear to me that either supply or demand for funding will go back to previous levels, given the increased interest in AI safety from both potential donors and potential recipients.
Linch @ 2023-09-08T18:02 (+2)
We already have automated deadline notifications; I'm not sure why you think it's especially helpful.
It’s also not clear to me that either supply or demand for funding will go back to previous levels, given the increased interest in AI safety from both potential donors and potential recipients.
One potential hope is that other funders will step up in the longer term, which could reduce LTFF's load; as an empirical matter, I've gotten more skeptical about the short-term viability of such hopes over the last 18 months. [1]
- ^
Not long after I started, there were talks about sunsetting LTFF "soon" in favor of a dedicated program to do LTFF's work hosted in a larger longtermist org. Empirically this still hasn't happened and LTFF's workload has very much increased rather than decreased.
Rebecca @ 2023-09-08T18:11 (+1)
Partially based on Asya’s comment in her reflections post that there was difficulty keeping track of deadlines, and partially an assumption that the reason there was, in some cases, no communication with an applicant by their stated time-sensitive deadline was that those deadlines weren't being tracked. It’s good to hear you were keeping track of this, although it's confusing to me that it didn’t help.
Linch @ 2023-09-08T19:36 (+4)
There are probably process fixes needed in addition to fixing personnel constraints; for example, once you ignore the first deadline, it becomes a lot easier to ignore future deadlines, both individually and as a cultural matter.
This is why I agreed with Ryan on "can't even get it together on a basic level"; certainly as a fund manager I often felt like I didn't have it together on a basic level, and I doubt that this opinion is unique. I think Caleb disagreed because, from his vantage point, other funders weren't clearly doing better given the higher load across the board (and there's some evidence they do worse); we ended up not settling the question of whether "basic level" should be defined in relation to peer organizations or in relation to how we internally feel about whether and how much things have gone wrong.
Probably the thing we want to do (in addition to having more capacity) is clearing out a backlog first and then assigning people to be responsible for other people's deadlines. Figuring this out is currently one of our four highest priorities (but not the highest).
Rebecca @ 2023-09-14T11:00 (+1)
By ‘the above’ I meant my comment rather than your previous one. Have edited to make this clearer.
dan.pandori @ 2023-09-07T01:32 (+1)
I deeply appreciate the degree to which this comment acknowledges issues and provides alternative organizations that may be better in specific respects. It has given me substantial respect for LTFF.
abergal @ 2023-09-07T16:02 (+7)
Hey Ryan:
- Thanks for flagging that the EA Funds form still says that the funds will definitely get back in 8 weeks; I think that's real bad.
- I agree that it would be good to have a comprehensive plan-- personally, I think that if the LTFF fails to hire additional FT staff in the next few months (in particular, a FT chair), the fund should switch back to a round-based application system. But it's ultimately not my call.
NunoSempere @ 2023-09-07T10:15 (+4)
looking elsewhere
This blogpost of mine, Quick thoughts on Manifund’s application to Open Philanthropy, might be of interest here.
Daniel_Eth @ 2023-09-06T22:43 (+4)
Another reason that the higher funding bar is likely increasing delays – borderline decisions are higher stakes, as we're deciding between higher EV grants. It seems to me like this is leading to more deliberation per grant, for instance.
Esben Kran @ 2023-09-01T07:31 (+22)
Thank you for hosting this! I'll repost a question on Asya's retrospective post regarding response times for the fund.
our median response time from January 2022 to April 2023 was 29 days, but our current mean (across all time) is 54 days (although the mean is very unstable)
I would love to hear more about the numbers and information here. For instance, how did the median and mean change over time? What does the global distribution look like? The disparity between the mean and median suggests there might be significant outliers; how are these outliers addressed? I assume many applications become desk rejects; do you have the median and mean for the acceptance response times?
porby @ 2023-09-02T20:16 (+20)
Continuing my efforts to annoy everyone who will listen with this genre of question, what value of X would make this proposition seem true to you?
It would be better in expectation to have $X of additional funding available in the field in the year 2028 than an additional full-time AI safety researcher starting today.
Feel free to answer based on concrete example researchers if desired. Earlier respondents have based their answer on people like Paul Christiano.
I'd also be interested in hearing answers for a distribution of different years or different levels of research impact.
(This is a pretty difficult and high variance forecast, so don't worry, I won't put irresponsible weight on the specifics of any particular answer! Noisy shrug-filled answers are better than none for my purposes.)
Lauro Langosco @ 2023-09-06T20:04 (+5)
This is a hard question to answer, in part because it depends a lot on the researcher. My wild guess for a 90% interval is $500k-$10M.
Daniel_Eth @ 2023-09-06T23:59 (+4)
Annoy away – it's a good question! Of course, standard caveats to my answer apply, but there's a few caveats in particular that I want to flag:
- It's possible that by 2028 there will be one (or more) further longtermist billionaires who really open up the spigot, significantly decreasing the value of marginal longtermist money at that time
- It's possible that by 2028, AI would have gotten "weird" in ways that affect the value of money at that time, even if we haven't reached AGI (e.g., certain tech stocks might have skyrocketed by then, or it might be possible to turn money into valuable research labor via AI)
- You might be considering donation opportunities that significantly differ in value from other large funders in the field
- This is all pretty opinionated and I'm writing it on the fly, so others on the LTFF may disagree with me (or I might disagree with myself if I thought about it at another time).
In principle, we could try to assign probability distributions to all the important cruxes and Monte Carlo this out. Instead, I'm just going to give my answer based on simplifying assumptions that we still have one major longtermist donor who prioritizes AI safety to a similar amount as today, things haven't gotten particularly weird, your donation opportunities don't look that different from others' and roughly match donation opportunities now,[1] etc.
One marginal funding opportunity to benchmark the value of donations against would be funding the marginal AI alignment researcher, which probably costs ~$100k/yr. Assuming a 10% yearly discount rate (in line with the long-term, inflation-adjusted returns to equities within the US), funding this in perpetuity is equivalent to a lump-sum donation now of $1M, or a donation in 2028 of ($1M)*(1.1^5) = $1.6M.[2]
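Spelling out that arithmetic, here is a minimal sketch assuming the ~$100k/yr cost and 10% discount rate above (figures and variable names are illustrative only):

```python
# Minimal sketch of the perpetuity arithmetic above (illustrative figures only).
yearly_cost = 100_000      # ~cost of funding the marginal alignment researcher per year
discount_rate = 0.10       # assumed yearly discount rate

# Present value of funding that researcher in perpetuity: cost / rate
lump_sum_now = yearly_cost / discount_rate                 # $1.0M

# Equivalent donation made in 2028, i.e. 5 years of compounding at the discount rate
lump_sum_2028 = lump_sum_now * (1 + discount_rate) ** 5    # ~$1.61M

print(f"${lump_sum_now:,.0f} now is roughly equivalent to ${lump_sum_2028:,.0f} in 2028")
```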
Then the question becomes, how valuable is the marginal researcher (and how would you expect to compare against them)? Borrowing from Linch's piece on the value of marginal grants to the LTFF, the marginal alignment grant often is a bit better than something like the following:
a late undergraduate or recent graduate from an Ivy League university or a comparable institution requests a grant to conduct independent research or comparable work in a high-impact field, but we don’t find the specific proposal particularly compelling. For example, the mentee of a fairly prominent AI safety... researcher may request 6-12 months’ stipend to explore a particular research project that their mentor(s) are excited about, but LTFF fund managers and some of our advisors are unexcited about. Alternatively, they may want to take an AGISF course, or to read and think enough to form a detailed world model about which global catastrophic risks are the most pressing, in the hopes of then transitioning their career towards combating existential risk.
In these cases, the applicant often shows some evidence of interest and focus (e.g., participation in EA local groups/EA Global or existential risk reading groups) and some indications of above-average competence or related experience, but nothing exceptional.
[Note: I think this sort of grant is well below the current funding threshold for the LTFF, given that we're currently in a funding crunch. But I would generally expect, for the longtermist community as a whole over longer timespans, the marginal grant would be only a bit higher than that.]
Note that for many of the people funded on that kind of grant, the main expected benefit would come not from the direct work of the initial grant, but instead on the chance that the researcher winds up being surprisingly good at alignment research; so, in considering the value of the "marginally-funded researcher," just note that it would be someone with stronger signals for alignment research than described above.
So from this, if you think you'd be around equivalent to the marginally-funded alignment researcher (where it's worth it to keep funding them based on their research output), I'd think your labor would be worth about $1-2M in donations in 2028.[3] It's harder to estimate the value for people doing substantially better work than that. I think values 10x that would probably apply to a decent number of people, and numbers 100x higher would be rare but not totally unimaginable.
- ^
Or that current donors are appropriately weighing how much they should spend now vs. invest, so that even if the nature of donation opportunities differs, the (investment-adjusted) value of the donations is comparable. Note that I'm not trying to claim they are actually doing this, but the assumption makes the analysis easier.
- ^
There's a few simplifying assumptions here: I'm neglecting to consider how the cost of living/wages may raise this cost, and I'm also neglecting to consider how the labor wouldn't last in perpetuity but only for the remainder of the person's career (presumably either until they reach retirement, or until AI forces them into retirement or takes over).
- ^
In principle, a person who is only very marginally worth funding may be (on net) worth much less than this, since the value of their work would be largely offset by the money needed to fund them. In practice, I think this rough calculation still gives us a good general ballpark for people to index on, as very few people are presumably almost exactly at the point of indifference.
porby @ 2023-09-07T22:08 (+3)
Thanks for breaking down details! That's very helpful. (And thanks to Lauro too!)
Mckiev @ 2023-09-05T10:15 (+18)
I'd love to see a database of waitlisted grant applications publicly posted and endorsed by LTFF, ideally with the score that LTFF evaluators have assigned. Would you consider doing it?
By waitlisted, I mean those that LTFF would have funded if it wasn't funding constrained.
calebp @ 2023-09-05T18:36 (+5)
Is it important to see identifiable information (so that a donor could fund that grant specifically), or are you more interested in the types of projects/grantees we'd like to fund? Here is a fictional example of the thing I have in mind.
Funding Request for AI Safety Community Events
Location: New York
Project Goals:
Strengthen communication and teamwork among AI safety researchers via workshops, social gatherings, and research retreats. Promote academic understanding and reach in the realm of AI safety through seminars and workshops. Enhance the skills of budding researchers and students via reading groups, workshops, and tutorials.
Applicant Background:
Ongoing 3rd-year PhD with multiple publications in AI safety/alignment. Part of the Quantum Strategies Group and has connections with leading innovators in Quantum Computing. Mentored several students on AI safety projects. Key roles in organizing several AI safety events and workshops. Conducted lectures and tutorials on ethical considerations and technical facets of AI.
Budget: Between £5,000 - £20,000. Costs encompass event necessities like food, venue, travel, and recognition awards for achievements.
Current Funding: A grant of $4000 from FLI designated for the STAI workshop with an additional £400 from other sources. No intention to use the funds for personal expenses.
Alternative Funding Sources: Considering application to the nonlinear network.
Mckiev @ 2023-09-07T08:01 (+4)
I meant real projects, so that potential donors could fund them directly. Both Manifund and Nonlinear Network gathered applications, but evaluating them remains a challenging task. Having a project publicly endorsed by LTFF would be a strong signal to potential funders, in my opinion.
Neel Nanda @ 2023-09-02T17:08 (+18)
What are some types of grant that you'd love to fund, but don't tend to get as applications?
Lawrence Chan @ 2023-09-05T18:51 (+8)
I'd personally like to see more well-thought out 1) AI governance projects and 2) longtermist community building projects that are more about strengthening the existing community as opposed to mass recruitment.
Grumpy Squid @ 2023-09-02T01:05 (+18)
Why did the LTFF/EAIF chairs step down before new chairs were recruited?
Habryka @ 2023-09-05T02:34 (+4)
The LTFF chair at least hasn't stepped down yet! Asya is leaving in October, IIRC, and by then we hope to have found a new chair.
I can't comment much on the EAIF. It does seem kind of bad that they didn't find a replacement chair before the current one resigned, but I don't know the details.
calebp @ 2023-09-05T06:17 (+3)
(Re the EAIF chair specifically)
We are hoping to publish some more posts about the EAIF soon; this is just an AMA for the LTFF. I am currently acting as the interim EAIF chair.
I am trying to work out what the strategy of the EAIF should be over the next few months. It's plausible to me that we'll want to make substantive changes, in part due to FTX and shifts in resources between cause areas. Before we hire a chair (or potentially as part of hiring a chair), I am planning to spend some time thinking about this, whilst keeping the EAIF moving with its current strategy.
Grumpy Squid @ 2023-09-05T21:24 (+1)
Thanks for the clarificaton!
Neel Nanda @ 2023-09-02T17:08 (+17)
What kinds of grants tend to be most controversial among fund managers?
Habryka @ 2023-09-05T05:57 (+4)
Somewhat embarrassingly we've been overwhelmed enough with grant requests in the past few months that we haven't had much time to discuss grants, so there hasn't been much opportunity for things to be controversial among the fund managers.
But guessing about what kinds of things I disagree most with other people on, my sense is that grants that are very PR-risky, and grants that are more oriented around a theory of change that involves people getting better at thinking and reasoning (e.g. "rationality development"), instead of directly being helpful with solving technical problems or acquiring resources that could be used by the broader longtermist community, tend to be the two most controversial categories. But again, I want to emphasize that I don't have a ton of data here, since the vast majority of grants are currently just evaluated by one fund manager and then sanity-checked by the fund chair, so there aren't a lot of contexts in which disagreements like this could surface.
calebp @ 2023-09-05T06:24 (+2)
I am not sure these are the most controversial, but I have had several conversations when evaluating AIS grants where I disagreed substantively with other fund managers. I think there are some object-level disagreements (what kinds of research do we expect to be productive) as well as meta-level disagreements (like "what should the epistemic process look like that decides what types of research get funded" or "how do our actions change the incentives landscape within EA/rationality/AIS").
Linch @ 2023-09-05T03:57 (+2)
I've answered both you and Quadratic Reciprocity here.
rileyharris @ 2023-08-31T22:58 (+17)
How should applicants think about grant proposals that are rejected? I find that newer members of the community especially can be heavily discouraged by rejections; is there anything you would want to communicate to them?
Linch @ 2023-09-05T04:23 (+7)
I don't know how many points I can really cleanly communicate to such a heterogeneous group, and I'm really worried about anything I say in this context being misunderstood or reified in unhelpful ways. But here goes nothing:
- First of all, I don't know man, should you really listen to my opinion? I'm just one guy, who happened to have some resources/power/attention vested in me; I worry that people (especially the younger EAs) vastly overestimate how much my judgment is worth, relative to their own opinions and local context.
- Thank you for applying, and for wanting to do the right thing. I genuinely appreciate everybody who applies, whether for a small project or a large one, in the hopes that their work can make the world a better place. It's emotionally hard and risky, and I have a lot of appreciation for the very small number of people who try to take a step toward making the world better.
- These decisions are really hard, and we're likely to screw up. Morality is hard and longtermism by its very nature means worse feedback loops than normal. I'm sure you're familiar with how selection/rejections can often be extremely noisy in other domains (colleges, jobs, etc). There aren't many reasons to think we'll do better, and some key reasons to think we'd do worse. We tried our best to make the best funding decisions we could, given limited resources, limited grantmaker time, and limited attention and cognitive capabilities. It's very likely that we have and will continue to consistently fuck up.
- This probably means that if you continue to be excited about your project in the absence of LTFF funding, it makes sense to continue to pursue it either under your own time or while seeking other funding.
- Funding is a constraint again, at least for now. So earning-to-give might make sense. The wonderful thing about earning-to-give is that money is fungible; anybody can contribute, and probabilistically our grantees and would-be grantees are likely to be people with among the highest earning potentials in the world. So if you haven't found a good match for direct work (whether due to personal preferences or external factors like not receiving funding), earning-to-give can be a great option for both impact and other desiderata.
- Please don't work on capabilities at a scaling lab, or otherwise contribute to ending humanity. I don't know how much you care about my opinion, or even if you should. But while some people find it surprisingly comforting to work on projects that are destructive for the world when they suffer a temporary setback in attempting to save it, I suspect this will end up being the type of thing they'd regret, in addition to being straightforwardly[1] altruistically bad.
- ^
assuming that you agree with the object-level assessment that working in scaling labs hastens the world ending. Obviously there are reasonable object-level disagreements to be had here! (And I'm far from certain about that claim myself).
rileyharris @ 2023-08-31T22:56 (+17)
If a project is partially funded by e.g. Open Philanthropy, would you take that as a strong signal of the project's value (e.g. not worth funding at higher levels)?
Linch @ 2023-09-05T00:36 (+5)
Nah, at least in my own evaluation I don't think Open Phil's evaluations play a large role in my evaluation qua evaluation. That said, LTFF has historically[1] been pretty constrained on grantmaker time, so if we think an OP evaluation can save us time, obviously that's good.
A few exceptions I can think of:
- I think OP is reasonably good at avoiding types-of-downside-risks-that-I-model-OP-as-caring-about (eg reputational harm), so I tend to spend less time vetting grants for that downside risk vector when OP has already funded them.
- For grants into technical areas I think OP has experience in (eg biosecurity), if a project has already been funded by OP (or sometimes rejected) I might ask OP for a quick explanation of their evaluation. Often they know key object-level facts that I don't.
- In the past, OP has given grants to us. I think OP didn't want to both fund orgs and to fund us to then fund those orgs, so we reduced evaluation of orgs (not individuals) that OP has already funded. I think switching over from a "OP gives grants to LTFF" model to a "OP matches external donations to us" model hopefully means this is no longer an issue.
Another factor going forwards is that we'll be trying to increase epistemic independence and decrease our reliance on OP even further, so I expect to try to actively reduce how much OP judgments influence my thinking.
- ^
And probably currently as well, though at this very moment funding is a larger concern/constraint. We did make some guest fund manager hires recently so hopefully we're less time-bottlenecked now. But I won't be too surprised if grantmaker time becomes a constraint again after this current round of fundraising is over.
Greg_Colbourn @ 2023-09-05T14:53 (+16)
What are your AI timelines and p(doom)? Specifically:
1. What year do you think there is a 10%[1] chance that we will have AGI by? (P(AGI by 20XX)=10%).
2. What chance of doom do we have on our current trajectory given your answer to 1? P(doom|AGI in year 20XX).
[I appreciate that your answers will be subject to the usual caveats about definitions of AGI and doom, spread of probability distributions and model uncertainty, so no need to go into detail on these if pushed for time. Also feel free to be to give more descriptive, gut feel answers.]
Daniel_Eth @ 2023-09-06T01:38 (+12)
Presumably this will differ a fair bit for different members of the LTFF, but speaking personally, my p(doom) is around 30%,[1] and my median timelines are ~15 years (though with high uncertainty). I haven't thought as much about 10% timelines, but it would be some single-digit number of years.
- ^
Though a large chunk of the remainder includes outcomes that are much "better" than today but which are also very suboptimal – e.g., due to "good-enough" alignment + ~shard theory + etc, AI turns most of the reachable universe into paperclips but leaves humans + our descendants to do what we want with the Milky Way. This is arguably an existential catastrophe in terms of opportunity cost, but wouldn't represent human extinction or disempowerment of humanity in the same way as "doom."
Greg_Colbourn @ 2023-09-06T10:25 (+2)
Interesting that you give significant weight to non-extinction existential catastrophes (such as the AI leaving us the Milky Way). By what mechanism would that happen? Naively, all or (especially) nothing seem much more likely. It doesn't seem like we'd have much bargaining power with not perfectly-aligned ASI. If it's something analogous to us preserving other species, then I'm not optimistic that we'd get anything close to a flourishing civilisation confined to one galaxy. A small population in a "zoo"; or grossly distorted "pet" versions of humans; or merely being kept, overwhelmingly inactive, in digital storage, seem more likely.
Daniel_Eth @ 2023-09-06T23:00 (+2)
So I'm imagining, for instance, AGIs with some shards of caring about human ~autonomy, but also other (stronger) shards that are for caring about (say) paperclips (also this was just meant as an example). I was also thinking that this might be what "a small population in a 'zoo'" would look like – the Milky Way is small compared to the reachable universe! (Though before writing out my response, I almost wrote it as "our solar system" instead of "the Milky Way," so I was imagining a relatively expansive set within this category; I'm not sure if distorted "pet" versions of humans would qualify or not.)
Greg_Colbourn @ 2023-09-07T10:30 (+2)
Why wouldn't the stronger shards just overpower the weaker shards?
Greg_Colbourn @ 2023-09-06T10:28 (+1)
I haven't thought as much about 10% timelines, but it would be some single-digit number of years.
Please keep this in mind in your grantmaking.
Daniel_Eth @ 2023-09-06T22:48 (+4)
FWIW, I think specific changes here are unlikely to be cruxy for the decisions we make.
[Edited to add: I think if we could know with certainty that AGI was coming in 202X for a specific X, then that would be decision-relevant for certain decisions we'd face. But a shift of a few years for the 10% mark seems less decision relevant]
Greg_Colbourn @ 2023-09-07T10:32 (+2)
I think it's super decision-relevant if the shift leads you to 10%(+) in 2023 or 2024. Basically I think we can no longer rely on having enough time for alignment research to bear fruit, so we should be shifting the bulk of resources toward directly buying more time (i.e. pushing for a global moratorium on AGI).
Linch @ 2023-09-06T22:26 (+2)
Do you have specific examples of mistakes you think we're making, eg (with permission from the applicants) grants we didn't make that we would have made if we had shorter 10% timelines, or grants that we made that we shouldn't have?
Greg_Colbourn @ 2023-09-07T10:38 (+4)
I don't know specifics on who has applied to LTFF, but I think you should be funding orgs and people like these:
All of these are new (post-GPT-4): Centre for AI Policy, Artificial Intelligence Policy Institute, PauseAI, Stop AGI, Campaign for AI Safety, Holly Elmore, Safer AI, Stake Out AI.
Also, pre-existing: Centre for AI Safety, Future of Life Institute.
(Maybe there is a bottleneck on applications too.)
Quadratic Reciprocity @ 2023-08-31T11:52 (+16)
What disagreements do the LTFF fund managers tend to have with each other about what's worth funding?
Linch @ 2023-09-05T03:56 (+5)
I'm answering both this question and Neel Nanda's question
What kinds of grants tend to be most controversial among fund managers?
in the same comment. As usual, other fund managers are welcome to disagree. :P
A few cases that come to mind:
- When a grant appears to have both high upside and high downside risks (eg red/yellow flags in the applicant, work in a naturally sensitive space, etc).
- Fund managers often have disagreements with each other on how to weigh upside and downside risks.
- Sometimes research projects that are exciting according to one (or a few) fund managers are object-level useless for saving the world according to other fund managers.
- Sometimes a particular fund manager champions a project and inside-view believes it has world-saving potential while other fund managers disagree. Sometimes nobody inside-view believes it has world-saving potential, but the project has outside-view indications of being valuable (eg the grantee has done useful work in the past, or has endorsements from people who have), and different fund managers weigh the outside-view evidence more or less strongly.
- Grants with unusually high stipend asks.
- Sometimes a better-than-average grant application will ask for a stipend (or other expense) that's unusually high by our usual standards.
- We have internal disagreements both on the object-level question of whether approving such grants is a good idea and on which set of policies or values we ought to use to set salaries ("naive EV" vs "fairness among grantees" vs "having an 'equal sacrifice' perspective between us and the grantee" vs "not wanting to worsen power dynamics" vs "wanting to respect local norms in other fields", etc).
- For example, academic stipends for graduate students in technical fields are often much lower than salaries in the corporate world. A deliberation process might look like:
- A. Our normal policy of "paying 70% of counterfactual" would suggest a very high stipend (see the sketch after this list)
- B. But this would be very "out of line" with prevailing academic norms; it's also a lot more expensive than following normal academic norms, where grad students are often partly compensated in non-monetary value
- C. But our grantees often want the non-monetary gains from a graduate degree much less than the typical grad student. Most of our grantees in academia don't really want to be professors, and care less about academic prestige than typical. They often sought grad school primarily because it's one of the few places with good technical mentorship in AI safety-adjacent fields.
- D. But if we pay them our normal level, then this is sort of "defecting" on prevailing academic norms. It'd look weird, draw attention, etc
- E. But if the stipends are as low as normal in CS academia (especially in some locations/universities), this is also creating a disparity between our academic grantees and other grantees, even if the former are actually higher impact...
- F. etc...
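To make the tension concrete, here's a minimal, hypothetical sketch of the "70% of counterfactual" calculation referenced in step A; the salary figures are illustrative placeholders, not actual LTFF numbers:

```python
# Hypothetical illustration of the stipend deliberation above.
# All salary figures are made-up placeholders, not actual LTFF policy numbers.

def stipend_under_counterfactual_policy(counterfactual_salary: float,
                                        fraction: float = 0.7) -> float:
    """Stipend implied by a 'pay X% of counterfactual earnings' policy."""
    return counterfactual_salary * fraction

# A grad student who could otherwise earn, say, $150k/year in industry (placeholder):
policy_stipend = stipend_under_counterfactual_policy(150_000)  # -> $105,000
typical_academic_stipend = 40_000                              # placeholder

print(f"Policy-implied stipend:   ${policy_stipend:,.0f}")
print(f"Typical academic stipend: ${typical_academic_stipend:,.0f}")
print(f"Gap driving steps B-E:    ${policy_stipend - typical_academic_stipend:,.0f}")
```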
Daniel_Eth @ 2023-09-05T06:36 (+2)
To add to what Linch said, anecdotally, it seems like there are more disagreements when the path to impact of the grant is less direct (as opposed to, say, AI technical research), such as with certain types of governance work, outreach, or forecasting.
Quadratic Reciprocity @ 2023-08-31T09:23 (+15)
What projects to reduce existential risk would you be excited to see someone work on (provided they were capable enough) that don't already exist?
Linch @ 2023-09-08T20:57 (+6)
One thing I'd be interested in seeing is more applications from people outside of the Anglosphere and Western Europe, both for intellectual-diversity reasons and for fairly naive reasons (lower cost of living means we can fund more projects, technical talent in those countries might be less tapped, etc). Sometimes people ask me why we haven't funded many projects by people from developing countries, and (at least in my view) the short answer is that we haven't received that many relevant applications.
Daniel_Eth @ 2023-09-05T06:58 (+4)
Personally, I'd like to see more work being done to make it easier for people to get into AI alignment without becoming involved in EA or the rationality community. I think there are lots of researchers, particularly in academia, who would potentially work on alignment but who for one reason or another either get rubbed the wrong way by EA/rationality or just don't vibe with it. And I think we're missing out on a lot of these people's contributions.
To be clear, I personally think EA and rationality are great, and I hope EA/rationality continue to be on-ramps to alignment; I just don't want them to be the ~only on-ramps to alignment.
[I realize I didn't answer your question literally, since there are some people working on this, but I figured you'd appreciate an answer to an adjacent question.]
Neel Nanda @ 2023-09-02T17:07 (+10)
Can grantees return money if their plans change, eg they get hired during a period of upskilling? If so, how often does this happen?
Linch @ 2023-09-04T04:44 (+4)
Yep, grantees are definitely allowed to do so and it sometimes happens!
I'll let someone who knows the numbers better answer with stats.
NunoSempere @ 2023-09-05T19:34 (+9)
How do you internally estimate you compare against OP/SFF/Habryka's new thing/etc.?
Linch @ 2023-09-06T23:56 (+38)
So I wish the EA funding ecosystem was a lot more competent than we currently are. Like if we were good consequentialists, we ought to have detailed internal estimates of the value of various grants and grantmakers, models for under which assumptions one group or another is better, detailed estimates for marginal utility, careful retroactive evaluations, etc.
But we aren't very competent. So here's some lower-rigor takes:
- My current guess is that of the reasonably large longtermist grantmakers, solely valued at expected longtermist impact/$, our marginal grants are at or above the quality of all other grantmakers, for any given time period.
- Compared to Open Phil longtermism, before ~2021 LTFF was just pretty clearly more funding constrained. I expect this means more triaging for good grants (though iiuc the pool of applications was also worse back then; however I expect OP longtermism to face similar constraints).
- In ~2021 and 2022 (when I joined) LTFF was to some degree trying to adopt something like a "shared longtermist bar" across funders, so in practice we were trying to peg our bar to be like Open Phil's.
- So during that time I'm not sure there's much difference; naively I'd guess LTFF to do better than OP per $ by the lights of LTFF fund managers, and OP to do better by the lights of the median OP longtermist grant associate.
- In 2023 (especially after June), the bars have gotten quite out of skew because of our liquidity issues. So I expect LTFF marginal grants to be noticeably better than OP's at the current margin (and I moderately expect the median longtermist grantmaker at OP to agree with this assessment).
- However, if LTFF fundraising goes as well as I currently expect, I think by ~October or so we'll roughly recalibrate to having a bar similar to OP's. We (or at least I) aren't trying very hard to substantially exceed OP's bar in the ideal case.
- Note that I'm trying my best to compare LTFF marginal grants to actual OP marginal grants. Unlike us OP also has a very large warchest, and I happen to be very confused about the value of OP's last dollar, which might be a more salient comparison for the in-practice counterfactual.
- My understanding is that OP will give more grants if they have more grantmaker capacity
- I know less about SFF, but my guess is that we're noticeably better than them. My reasoning is that I think their grants are somewhat high variance, and some of their grants are rather bad by my lights; while I haven't seen much evidence for SFF having a higher proportion of positive "hits" to justify the high variance. So my guess is the heavier left tail without a correspondingly higher right tail means that SFF grants have a lower mean, and maybe a lower median as well.
- I think we did better than the now-defunct Future Fund (both the main team and the regranting program) per $. I think Future Fund was trying to move a lot of money on fast timescales, and their deal flow was somewhat limited by applications that OP didn't pick up (which is much less of a problem at LTFF's scale). Though to be fair they were founded in 2022 when cost-effectiveness with money was much less of a concern[1].
- I also have the same "fatter left tail, not much evidence in favor of a fatter right tail" objection as I did with SFF.
- Some potential relevant thoughts here are in my adversarial selection in longtermist grantmaking post.
- I would draw attention to this self-quote "Most, although not all, of the examples are personal experiences from working on the LTFF. Many of these examples are grants that have later been funded by other grantmakers or private donors."
- I think a number of other medium-sized grantmakers (Longview, Effective Giving, GWWC's Longtermist Fund, etc) are trying to explicitly or implicitly funge with OP's last dollar while adding a few more constraints, so naively I'd guess that LTFF's better than them to the extent we're better than OP (plus a little bit more to account for the additional cost of those constraints).
- A weak piece of outside-view evidence for LTFF being better than other longtermist funders for the marginal dollar is that other funders (OP, SFF) have given money to us to regrant, whereas we have not afaict given money to other funders to regrant.
- Though the same evidence can adequately be explained by us being more power-hungry and/or insufficiently humble, of course.
- A weak piece of outside-view evidence for LTFF being worse than other longtermist funders for the marginal dollar is that other institutional funders haven't directly offered to cover our funding gap (yet), though to some degree we're also actively seeking more independence from them.
- I'm less sure about Habryka's new thing (Lightspeed Grants), Manifund, and other new grantmakers. I think "too soon to tell" is my current stance.
- In Lightspeed's case, my impression is that they share a number of both applicants and evaluators with LTFF, so if they otherwise have a better process, I wouldn't be surprised if their grants are competitive with or better than ours.
- Otoh my impression is that they were more swamped with work than anticipated, so presumably this means some sacrifice in decision quality.
- Of course, "quality of grants evaluations/$" isn't the only thing that matters in a grantmaking organization. I think we do worse on other desiderata:
- I think we are solidly middle-of-the-pack or above average in terms of $s moved/grantmaker time or impact/grantmaker time.
- I think we do a pretty shitty job in terms of grantee experience:
- We are rather slow in getting back to applicants.
- I think Manifund is better? Lightspeed was trying to be better as well, not sure if they succeeded.
- As many people on the forum have complained, we rarely give feedback to rejected applicants, and give only limited feedback to approved applicants as well.
- Our "brand" is less solid than Open Phil's. I suspect this limits our applicant pool some.
- I suspect we do very poorly on donor experience as well.
- To some degree our donor experience is non-existent, eg until recently we didn't even offer to talk to our largest donors.
- That said, one plus of the current model is that we aren't really Goodharting on or otherwise optimizing for non-consequentialist donor preferences, simply because to a large extent we aren't even aware of them!
- I suspect we're leaving a ton of value on the table by not trying to engage with new donors, who might counterfactually not have given anything to longtermist orgs.
- I think we do reasonably well on transparency, making informative posts, etc, especially recently.
- But this is compared to a relatively mediocre baseline.
- Though far from the most important priority, I wouldn't be surprised if LTFF is less fun to volunteer or work at than at other grantmaking organizations, especially as a grant evaluator.
- Asya: "I also suspect that the lack of active discussion about grants has made the fund a worse experience for fund managers— I might describe the overall shift in the culture of the fund to have gone from 'lively epistemic forum' to 'solitary grantmaking machine'."
- In comparison, I expect being a Future Fund regrantor to be more fun, SFF's S-process and Manifund to have more productive discussions, etc.
- SFF changes their grant evaluators regularly to minimize goodharting by grantees; this is not something LTFF directly optimizes for nearly as much, so I suspect we're worse at it.
- If anything, I'd expect our level of public transparency (eg this Q&A, my posts) to make Goodharting even easier than baseline; this is a conscious tradeoff we're making in favor of greater transparency.
- ^
My day job at the time was trying to do research to identify good "longtermist megaprojects" lmao.
NunoSempere @ 2023-09-07T10:18 (+1)
Awesome reply, thanks
NunoSempere @ 2023-09-05T19:34 (+9)
My sense is that many of the people working on this fund are doing this part time. Is this the case? Why do that rather than hiring a few people to work full time?
Lauro Langosco @ 2023-09-06T19:41 (+1)
Yes, everyone apart from Caleb is part-time. My understanding is that LTFF is looking to make more full-time hires (most importantly a fund chair to replace Asya).
Linch @ 2023-09-06T21:48 (+5)
I'm currently spending ~95% of my work time on EA Funds stuff (and paid to do so), so effectively full-time. We haven't decided how long I'll stay on, but I want to keep working on EA Funds at least until it's in a more stable position (or, less optimistically, we make a call to wind it down).
But this is a recent change, historically Caleb was the only full-time person.
Wei Dai @ 2023-09-01T07:01 (+9)
Any thoughts on Meta Questions about Metaphilosophy from a grant maker perspective? For example have you seen any promising grant proposals related to metaphilosophy or ensuring philosophical competence of AI / future civilization, that you rejected due to funding constraints or other reasons?
Linch @ 2023-09-04T05:34 (+4)
(Speaking for myself) It seems pretty interesting. If I understand your position correctly, I'm also worried about developing and using AGI before we're a philosophically competent civilization, though my own framing is more like "man it'd be kind of sad if we lost most of the value of the cosmos because we sent von Neumann probes before knowing what to load the probes with."
I'm confused about how it's possible to know whether someone is making substantive progress on metaphilosophy; I'd be curious if you have pointers.
As a practical matter, I don't recall any applications related to metaphilosophy coming across my desk, or voting on metaphilosophy grants that other people investigated. The closest I can think of are a few applicants proposing fairly esoteric applications of decision theory. I'll let others at the fund speak about their experiences.
Wei Dai @ 2023-09-14T21:13 (+2)
I’m confused about how it’s possible to know whether someone is making substantive progress on metaphilosophy; I’d be curious if you have pointers.
I guess it's the same as any other philosophical topic, either use your own philosophical reasoning/judgement to decide how good the person's ideas/arguments are, and/or defer to other people's judgements. The fact that there is currently no methodology for doing this that is less subjective and informal is a major reason for me to be interested in metaphilosophy, since if we solve metaphilosophy that will hopefully give us a better methodology for judging all philosophical ideas, assuming the correct solution to metaphilosophy isn't philosophical anti-realism (i.e., philosophical questions don't have right or wrong answers), or something like that.
Lizka @ 2023-09-04T00:08 (+8)
How do you think about applications to start projects/initiatives that would compete with existing projects?
Linch @ 2023-09-05T03:36 (+2)
From my perspective, they seem great! If there is an existing project in a niche, this usually means that the niche is worth working on. And of course it seems unlikely that any of the existing ways of doing things are close to optimal, so more experimentation is often worthwhile! That said, 3 caveats I can think of:
- If you are working in a space that's already well-trodden, I expect that you're already familiar with the space and can explain why your project is different (if it is different). For example, if you're working in adversarial robustness for AI safety, then you should be very aware that this is a subject that's well-studied both in and outside of EA (eg in academia). So from my perspective, applicants not being aware of prior work is concerning, as is people being aware of prior work but not having a case for why their project is different/better.
- If your project isn't aiming to be different/better, that's also okay! For example, your theory of change might be "a total of 2 FTE-years have been spent on this research area, I think humanity should spend at least 10+ years on it to mine for more insights; I'm personally unusually excited about this area."
- But if that's the case, you should say so explicitly.
- I'm more hesitant to fund projects entering a space with natural monopolies. For example, if your theory of change is "persuade the Californian government to set standards for mandatory reporting of a certain class of AI catastrophic failures by talking to policymakers[1]", this is likely not something that several different groups can realistically pursue in parallel without stepping on each other's toes.
- I'm wary of new projects that try to carve out a large space for themselves in their branding and communications, especially when there isn't a good reason to do so. I'm worried about this both when there are already other players in a similar niche and when there aren't. For example, I think "80,000 Hours" is a better name than "Longtermist Career Consulting": the former can spin down naturally and allows other orgs (like Probably Good) to enter the space, while the latter name is somewhat uninviting.
- Also see earlier comment by me here; I still stand by it.
- ^
Note also that, IIUC, as a fiscally sponsored project of EV, we may not legally be allowed to regrant to legislative political projects.
Siao Si @ 2023-09-02T19:49 (+7)
How many evaluators typically rate each grant application?
Linch @ 2023-09-05T03:32 (+3)
Right now, ~2-3
Daniel_Eth @ 2023-09-05T06:16 (+3)
[personal observations, could be off]
I want to add that the number tends to be higher for grants that are closer to the funding threshold or where the grant is a "bigger deal" to get right (eg larger, more potential for both upside and downside) than for those that are more obvious yes/no or where getting the decision wrong seems lower cost.
Neel Nanda @ 2023-09-02T17:08 (+7)
What are some past LTFF grants that you disagree with?
Daniel_Eth @ 2023-09-05T06:24 (+2)
In my personal opinion, the LTFF has historically funded too many bio-related grants and hasn't sufficiently triaged in favor of AI-related work.
calebp @ 2023-09-05T06:38 (+2)
Hmm, I think most of these grants were made when EA had much more money (pre-FTX crash), which made funding bio work much more reasonable than funding bio work rn, by my lights. I think on the current margin, we probably should fund stellar bio work.
Also, I want to note that talking negatively about specific applications might be seen as "punching down" or make applying to the LTFF seem higher-risk than an applicant could have reasonably expected, so fund managers may be unwilling to give concrete answers here.
Daniel_Eth @ 2023-09-05T06:47 (+2)
Hmm, I think most of these grants were made when EA had much more money
I think that's true, but I also notice that I tend to vote lower on bio-related grants than do others on the fund, so I suspect there's still somewhat of a strategic difference of opinion between me and the fund average on that point.
Linch @ 2023-09-05T06:53 (+2)
Yeah I tend to have higher uncertainty/a flatter prior about the EV of different things compared to many folks in a similar position; it's also possible I haven't sufficiently calibrated to the new funding environment.
Siao Si @ 2023-09-02T19:48 (+6)
Is there a place to donate to the operations / running of LTFF or the funds in general?
Linch @ 2023-09-04T05:29 (+4)
Not a specific place yet! In the past we've asked specific large donors to cover our costs (both paid grantmaker time and operational expenses); going forwards we'd like to move towards a model where all donors pay a small percentage, but this is not yet in place.
In the meantime, you can make a donation to EA Funds and email us to say you want the donation earmarked for operational expenses. :)
Michael Simm @ 2023-09-07T05:31 (+4)
Given the rapid changes to the world that we're expecting to happen in the next few decades, how important do you feel it is to spend money sooner rather than later?
Do you think there is a possibility of money becoming obsolete, which would make spending it now make much more sense than sitting on it and not being able to use it?
This could apply to money in general, with AI concerns, or to any particular currency or store of value.
Daniel_Eth @ 2023-09-07T07:04 (+5)
Speaking personally, I think there is a possibility of money becoming obsolete, but I also think there's a possibility of money mattering more, as (for instance) AI might allow for an easier ability to turn money into valuable labor. In my mind, it's hard to know how this all shakes out on net.
I think there are reasons to expect the value of spending to be approximately logarithmic in total spending for many domains, and spending on research seems to fit this general pattern pretty well, so I suspect that it's prudent to generally plan to spread spending out a fair bit over the years.
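As a minimal sketch of that intuition (a standard diminishing-returns assumption, not a claim about LTFF's actual model): if the value of cumulative spending S in a domain is roughly logarithmic, then

```latex
V(S) = \log S \quad \Rightarrow \quad \frac{dV}{dS} = \frac{1}{S},
```

so the marginal value of a dollar falls in proportion to how much has already been spent, and concentrating all spending in a single year buys less total value than spreading it across years with roughly comparable opportunities.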
I also just want to note that I wouldn't expect this whole question to affect behavior of the LTFF much, as we decide what grants to fund, but we don't make plans to save/invest money for future years anyways (though, of course, it could affect behavior of potential donors to the LTFF).
Linch @ 2023-09-06T08:31 (+4)
On LessWrong, jacquesthibs asks:
If someone wants to become a grantmaker (perhaps with an AI risk focus) for an organization like LTFF, what do you think they should be doing to increase their odds of success?
Linch @ 2023-09-08T11:23 (+3)
On LessWrong, Lauro said:
IMO a good candidate is anything that is object-level useful for X-risk mitigation. E.g. technical alignment work, AI governance / policy work, biosecurity, etc.
To add to that, I'd expect practice with communication and reasoning transparency and having a broad (not just deep) understanding of other work in your cause area to be quite helpful. Also, to the extent that this is trainable, it's probably good to model yourself as training to become a high-integrity and reasonably uncompromising person now, because of course integrity failures "on the job" are very costly. My thoughts on who could make a good LTFF fund chair might also be relevant.
Greg_Colbourn @ 2023-09-05T15:17 (+4)
In light of this (worries about contributing to AI capabilities and safetywashing) and/or general considerations around short timelines, have you considered funding work directly aimed at slowing down AI, as opposed to the traditional focus on AI Alignment work? E.g. advocacy work focused on getting a global moratorium on AGI development in place (examples). I think this is by far the highest-impact thing we could be funding as a community (as there just isn't enough time for Alignment research to bear fruit otherwise), and I would be very grateful if a fund or funding circle could be set up that is dedicated to this (this is what I'm personally focusing my donations on; I'd like to be joined by others).
Lawrence Chan @ 2023-09-05T18:48 (+4)
Newbie fund manager here, but:
I strongly agree that governance work along these lines is very important; in fact, I'm currently working on governance full time instead of technical alignment research.
Needless to say, I would be interested in funding work that aims to buy time for alignment research. For example, I did indeed fund this kind of AI governance work in the Lightspeed Grants S-process. But since LTFF doesn't currently do much if any active solicitation of grants, we're ultimately bottlenecked by the applications we receive.
calebp @ 2023-09-05T20:14 (+8)
Fwiw I'm pretty unsure of the sign of governance interventions like the above, both at the implementation level and the strategic level. I'd guess that I am more concerned about overhangs than most LTFF members, whilst thinking that the slow-down plans that don't create compute overhangs are pretty intractable.
I don’t think my views are common on the LTFF, though I’ve only discussed substantially with one other member (Thomas Larsen).
Greg_Colbourn @ 2023-09-06T10:15 (+2)
One way of dealing with overhangs is a taboo accompanying the moratorium and regulation (we don't constantly need to shut down underground human cloning labs). This is assuming that any sensible moratorium will last as long as is necessary - i.e. until there is a global consensus on the safety of running more powerful models (FLI's 6 month suggestion was really just a "foot in the door").
Greg_Colbourn @ 2023-09-06T10:29 (+2)
Thank you. This is encouraging. Hopefully there will be more applications soon.
Lizka @ 2023-09-04T00:07 (+4)
Do you know how/where people usually find out about the LTFF (to apply for funding and to donate)? Are some referral/discovery pathways particularly successful?
calebp @ 2023-09-05T18:48 (+4)
From a brief look at the data from our "How applicant heard about us" question, I think the breakdown over the top 200 or so applications we have received is something like:
- EA community (EA Forum, local groups, events, personal connections): 50-60%
- AI safety/EA programs (SERI-MATS, GovAI, etc.): 10-15%
- Direct LTFF outreach: 5-10%
- Recommendations from experienced members: 5-10%
- LessWrong, 80K Hours: 5-10%
- Career advising services: <5%
- Previous applicants/grantees: <5%
- Online searches: <5%
The EA community seems to be the dominant source, accounting for around half or more of referrals. Focused AI safety/EA programs and direct LTFF outreach collectively account for 15-25%. The remaining sources are more minor, each likely representing less than 10%. But this is an approximate estimate given the limitations of the data. The overall picture is that most people hear about LTFF through being part of the broader community.
Linch @ 2023-09-06T08:32 (+3)
On LessWrong, jacquesthibs asks:
Are there any plans to fundraise from high net-worth individuals, companies or governments? If so, does LTFF have the capacity/expertise for this? And what would be the plan beyond donations through the donation link you shared in the post?
Linch @ 2023-09-08T10:57 (+3)
We do have some plans to fundraise from high net-worth individuals, including doing very basic nonprofit things like offering to chat with some of our biggest past donors, as well as more ambitious targets like actively sourcing and reaching out to HNWs who have (eg) expressed concerns about AGI x-risk/GCRs but have never gotten around to actually donating to any AI x-safety projects. I don't know if we have the expertise for this; to some degree this is an empirical question.
We have no current plans to raise money from companies, governments, or (non-OP) large foundations.
I haven't thought about this much at all, but my current weakly-held stance is that I think a longtermist grantmaking organization is just a pretty odd project for governments and foundations to regrant to. I'd be more optimistic about fundraising and grantwriting efforts from organizations which are larger and have an easier-to-explain direct impact case: ARC, Redwood, FAR, CHAI, MIRI(?), etc.
I think raising money from companies is relatively much more tractable. But before we go down that route, I'd need to think a bit more about effects on moral licensing, safetywashing, etc. I don't want to (eg) receive money from Microsoft or OpenAI now on the grounds that it's better for us to have money to spend on safety than for them to spend those $s on capabilities, and then in a few years regret the decision because the nebulous costs of being tied to AI companies[1] ended up being much higher than I initially modeled.
- ^
One advantage of our current ignorance re:donors is that fund managers basically can't be explicitly or subtly pressured to Goodhart on donor preferences, simply because we don't actually know what donor preferences are (and in some cases don't even know who the donors are).
NunoSempere @ 2023-09-05T19:30 (+3)
What does your infrastructure look like? In particular, how much are you relying on Salesforce?
calebp @ 2023-09-05T20:08 (+6)
We use Paperform for the application form and Airtable and Google Docs for evaluation infrastructure (making decisions on what we want to fund), along with many Airtable and Zapier automations. EV then uses some combination of Salesforce, Xero, etc. to conduct due diligence, make the payments to grant recipients, etc.
My impression is that we are pretty Salesforce-reliant on the grant admin side, and moving away from this platform would be very costly. We are not Salesforce-reliant at all on the evaluation side.
We don't have any internal tooling for making BOTECs (back-of-the-envelope calculations); people tend to use whatever system they like for this. I have been using Squiggle recently and quite like it, though I do think it's still fairly high-friction and slow for me for some reason.
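For readers unfamiliar with the term, a BOTEC along these lines, written in Python rather than Squiggle, might look something like the minimal sketch below; every distribution and number is a hypothetical placeholder rather than anything LTFF actually uses:

```python
import random

# Purely illustrative back-of-the-envelope calculation (BOTEC): rough expected
# value per dollar of a hypothetical 6-month research grant. All numbers and
# distributions are made-up placeholders, not actual LTFF figures.

def value_per_dollar_sample() -> float:
    cost = random.uniform(20_000, 60_000)                 # grant size in $
    p_useful_output = random.uniform(0.2, 0.6)            # chance of useful output
    value_if_useful = random.lognormvariate(11.5, 1.0)    # $-equivalent value (median ~$100k)
    counterfactual_adjustment = random.uniform(0.3, 0.8)  # how much wouldn't happen otherwise
    return p_useful_output * value_if_useful * counterfactual_adjustment / cost

samples = sorted(value_per_dollar_sample() for _ in range(100_000))
print(f"mean value per $:   {sum(samples) / len(samples):.2f}")
print(f"median value per $: {samples[len(samples) // 2]:.2f}")
```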
Heramb Podar @ 2023-09-05T19:29 (+2)
What kind of criteria or plans do you look for in people who are junior in the AI governance field and looking for independent research grants? Is this a kind of application you would want to see more of?
Linch @ 2023-09-08T08:34 (+3)
Past experience with fairly independent research and access to high-quality mentors (so they are less likely to become directionless and/or depressed) are positives for me.
Lauro Langosco @ 2023-09-06T20:16 (+2)
Speaking for myself: it depends a lot on whether the proposal or the person seems promising. I'd be excited about funding promising-seeming projects, but I also don't see a ton of low-hanging fruit when it comes to AI gov research.
Callum Hinchcliffe @ 2023-09-04T16:33 (+2)
Can applicants update their application after submitting?
This was an extremely useful feature of Lightspeed Grants, because the strength of my application significantly improved every couple of weeks.
If it’s not a built-in feature, can applicants link to a google doc?
Thank you for answering our questions!
Linch @ 2023-09-05T01:07 (+5)
There's no built-in feature, but you can email us or link to Google Docs. As a practical matter, I think it's much more common for applications to be updated because funding needs have changed or the applicants have decided to pursue different projects (whether or not their new project needs funding) than because the applicant now looks significantly stronger.
You can also reapply after being rejected if your application is now substantially more competitive.
I'm hoping that LTFF will work towards being much more efficient going forwards, so that applicants feel less of a practical need to update their applications mid-evaluation. But this is aspirational; in the meantime I can totally see the value in doing this.
Dawn Drescher @ 2023-09-03T22:17 (+2)
I’ll phrase this as a question to not be off-vibe: Would you like to create accounts with AI Safety Impact Markets so that you’ll receive a regular digest of the latest AI safety projects that are fundraising on our platform?
That would save them time, since they wouldn't have to apply to you separately. If their project descriptions leave open any questions you have, you can ask them in the Q&A section. You can also post critiques there, which may be helpful for the project developers and other donors.
Conversely, you can also send any rejected projects our way, especially if you think they’re net-positive but just don’t meet your funding bar.
Linch @ 2023-09-08T10:53 (+2)
Thanks for the offer! I think we won't have the capacity (or, tbh, money) to really work on soliciting new grants in the next few weeks, but feel free to ping Caleb or me again in, say, a month from now!
Dawn Drescher @ 2023-09-11T18:48 (+2)
Will do, thanks!
trevor1 @ 2023-08-31T02:29 (+2)
How small and short can a grant be? Is it possible for a grant to start out small, and then gradually get bigger and source more people if the research area turns out to be significantly more valuable than it initially appeared? If there are very few trustworthy math/quant/AI people in my city, could you help me source some hours from reliable AI safety people in the Bay Area if the research area clearly ends up being worth their time?
Linch @ 2023-09-05T04:02 (+2)
How small and short can a grant be? Is it possible for a grant to start out small, and then gradually get bigger and source more people if the research area turns out to be significantly more valuable than it initially appeared?
In general, yes, it can be arbitrarily short and small. In practice, I think EV, who does our operations, has said that they prefer we don't make grants <$2,000 (? can't remember the exact number), because the per-grant operational overhead might be too high to justify the benefits.
Johannes B @ 2023-09-07T05:25 (+1)
In relation to EA-related content (photography, YouTube videos, documentaries, podcasts, TikTok accounts), what type of projects would you like to see more of?
Linch @ 2023-09-08T08:33 (+3)
I don't have strong opinions here on form; naively I'd prefer some combination of longform work (so the entire message gets across without sacrificing nuance), popularity, and experimentation value.
In terms of content, I suspect there's still value left in detailed and nuanced explanations of various aspects of the alignment problem, as well as distillation of the best current work on partial progress towards solutions (including by some of our grantees!)
In general I expect this type of communication to be rather tail-heavy, so the specific person and their fit with the specific project matter heavily. Ideally I think I'd want someone who
- a) has experience (and preferably success) with their target type of communications,
- b) who has or can easily acquire a fairly deep understanding of the relevant technical subjects (at all levels of abstraction),
- c) who actually likes the work,
- d) and has some form of higher-than-usual-for-the-field integrity (so they won't eg Goodhart on getting more people to sign up for 80k by giving unnuanced but emotionally gripping pitches).
Note that I haven't been following the latest state-of-the-art in either pitches or distillations, so it's possible that ideas that I think are good are already quite saturated.
trevor1 @ 2023-08-31T02:21 (+1)
Is there any way for me to print out and submit a grant application on paper, in non-digital form, and also without mailing it? E.g. I send an intermediary to meet one of your intermediaries at some Berkeley EA event or something, and they hand over an envelope containing several identical paper copies of the grant proposal. No need for any conversation, or fuss, or awkwardness, and the papers can be disposed of afterwards; normal communication would take place if the grant is accepted. I know it sounds weird, but I'm pretty confident that this mitigates risks of a specific class.
calebp @ 2023-09-05T06:43 (+2)
I know it sounds weird, but I'm pretty confident that this mitigates risks of a specific class.
I'd be interested in hearing what specific class of risks this mitigates. I'd be open to doing something like this, but my guess is that the plausible upside won't be high enough to justify the operational overhead.
If a project is very sensitive, fund managers have been happy to discuss things in person with applicants (e.g. at EA/rationality events), but we don't have a systematic way to make this happen rn, and it's needed infrequently enough that I don't plan to set one up.