Long-Term Future Fund: November 2019 short grant writeups
By Habryka @ 2020-01-05T00:15 (+46)
Since we’ve been dealing with a larger-than-usual set of commitments for the Long-Term Future Fund, including some internal restructuring, discussion of fund scope, and coordination of fundraising initiatives, we did not end up having enough time to produce a set of writeups with as much detail as those written for past rounds.
As a result, the following report consists of a relatively straightforward list of the grants we made, with short explanations of the reasoning behind them. I (Oliver Habryka) am planning to follow this up in a few weeks with more detailed explanations of my reasoning, and other fund members might do the same. I will still be available to respond to comments and questions in the comment section.
All the writeups here were written by me (Oliver Habryka), but do in some cases represent more of the fund team consensus than usual.
Grant Recipients
Grants Made By the Long-Term Future Fund
Each grant recipient is followed by the size of the grant and their one-sentence description of their project. All of these grants have been made.
- Damon Pourtahmaseb-Sasi ($40,000): Subsidized therapy/coaching/mediation for those working on the future of humanity.
- Tegan McCaslin ($40,000): Conducting independent research into AI forecasting and strategy questions.
- Vojtěch Kovařík ($43,000): Research funding for a year, to enable a transition to AI safety work.
- Jaspreet Pannu ($18,000): Surveying the neglectedness of broad-spectrum antiviral development.
- John Wentworth ($30,000): Build a theory of abstraction for embedded agency using real-world systems for a tight feedback loop.
- Elizabeth E. Van Nostrand ($19,000): Create a toolkit to bootstrap from zero to competence in ambiguous fields.
- Daniel Demski ($30,000): Independent research on agent foundations.
- Sam Hilton ($62,000): Supporting the rights of future generations in UK policy and politics.
- Topos Institute ($52,000): A summit for the world's leading applied category theorists to engage with human flourishing experts.
- Jason Crawford ($25,000): Tell the story of human progress to the world, and promote progress as a moral imperative.
- Kyle Fish ($30,000): Identifying white space opportunities for technical projects to improve biosecurity.
- AI Safety Camp Toronto ($29,000): AISC Toronto brings together aspiring researchers to work on concrete problems in AI safety.
- Miranda Dixon-Luinenburg ($20,000): Writing fiction to convey EA and rationality-related topics.
- Roam Research ($20,000): A note-taking tool for networked thought, actively used by many EA researchers.
- Joe Collman ($10,000): Investigation of AI Safety Via Debate and ML training.
Total distributed: $471,000
Writeups by Oliver Habryka
Damon Pourtahmaseb-Sasi ($40,000)
Subsidized therapy/coaching/mediation for those working on the future of humanity.
We are aware of a significant number of people (including many full-time employees) within the EA and longtermist communities who struggle with depression, anxiety, and other mental health problems. I think it makes sense to provide members of those communities with therapy and coaching sessions, which seem to be relatively effective at helping with those problems (the exact effect sizes are highly disputed, but my sense is that, on net, therapy and coaching help a good amount). Another major benefit is that some EAs are unwilling to see therapists who they expect not to understand their values or beliefs; they may be more willing to pursue therapy, and make more progress, with someone familiar with those values and beliefs.
Damon is a licensed therapist who has been offering services to people working in high-impact areas for the past year. This grant allows him to spend a larger fraction of his time over the next year helping people working on high-impact projects, and to relocate to California so that he can offer his services to a larger number of people (he is currently based in Florida, where none of his current clients live).
We’ve received a very large number of overwhelmingly positive testimonials sent to us by his current clients via an independent channel (i.e. Damon did not filter the testimonials for positive ones). This was one of the key pieces of evidence that led me to recommend this grant.
Tegan McCaslin ($40,000)
Conducting independent research into AI forecasting and strategy questions.
This is in significant part a continuation of our previous grant to Tegan for research into AI forecasting and strategy questions. Since then, Tegan has worked with other researchers I trust and has received enough positive testimonials to make me comfortable with this grant. She sent us some early drafts of research comparing evolutionary optimization processes with current deep learning systems, which she is planning to publish soon and which I think is promising enough to be worth funding. She also sent us some early draft work on long-term technological forecasting (10+ years into the future) that I likewise thought was promising.
Vojtěch Kovařík ($43,000)
Research funding for a year, to enable a transition to AI safety work.
Vojtěch previously did research in mathematics and game theory. He just finished an internship at FHI and is now interested in exploring a full-time career in AI Safety. To do so, he plans to spend a year doing research visits at various organizations and exploring some research directions he is excited about.
According to an FHI researcher we spoke to, Vojtěch seems to have performed well during his time at FHI, so it seemed good to allow him to try transitioning into a full-time AI safety role.
Jaspreet Pannu ($18,000)
Surveying the neglectedness of broad-spectrum antiviral development.
Jaspreet just finished her FHI summer fellowship. She’s now interested in translating an internal report on broad-spectrum antivirals (which she wrote during the fellowship) into two peer-reviewed publications.
She received positive testimonials from the people she worked with at FHI, and the development of broad-spectrum antivirals seems like a promising direction for reducing the chance of bioengineering-related catastrophes.
John Wentworth ($30,000)
Building a theory of abstraction for embedded agency using real-world systems for a tight feedback loop.
John participated in the recent MIRI Summer Fellows Program, where he proposed research directions that other MIRI researchers were excited about. In addition to receiving multiple strong testimonials from AI alignment researchers, he has been very actively posting his ideas to the AI Alignment Forum, where he has received substantive engagement and positive comments from several top researchers; this is one of the main reasons for this grant.
Elizabeth E. Van Nostrand ($19,000)
Creating a toolkit to bootstrap from zero to competence in ambiguous fields.
Elizabeth has a long track record of writing online about various aspects of effective altruism, rationality and cause prioritization, and also has a track record of doing high-quality independent research for a variety of clients.
Elizabeth is planning to investigate how people can quickly come to orient themselves in complicated fields like history and other social sciences, particularly in domains relevant to the long-term future (such as the structure of the Scientific and Industrial Revolutions, and the factors behind civilizational collapse).
Daniel Demski ($30,000)
Independent research on agent foundations.
Daniel attended the MIRI Summer Fellows Program in 2017 and 2018, as well as the AI Summer Fellows program in 2018. During those periods, he developed some research directions that multiple researchers I contacted were excited about, and he received positive testimonials from the people he worked with at MIRI.
From his application:
My main focus for the first few months will be completing a collaborative paper on foundations of decision theory which began as discussions at MSFP 2018. The working title is "Helpful Foundations", and a very rough working draft can be seen here. The overall strategy is to first assume an agent given a specific scenario (world) would have some preferences over its actions. We then use VNM axioms to represent its preferences in each possible world as utilities. Pareto improvements are used to aggregate preferences across possible worlds, and a version of the Complete Class Theorem is used to derive a prior and utilities. However, because of the weight pulled by the CCT, it looks like we will be able to remove one or more VNM axioms and still arrive at our result.
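For readers less familiar with this setup, here is a minimal sketch of the kind of representation the strategy above is aiming at, in my own illustrative notation (not taken from the draft): per-world VNM utilities aggregated, via Pareto improvements and a complete-class-style argument, into a single prior and expected-utility criterion.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Illustrative sketch only; the notation is mine, not taken from the draft.
% For each possible world $w$ in a set $W$, preferences over actions
% $a \in A$ are assumed to satisfy the VNM axioms, so they can be
% represented by a utility function $u_w : A \to \mathbb{R}$. Aggregating
% across worlds via Pareto improvements, a complete-class-style argument
% then yields a prior $p$ over $W$ such that the agent's overall
% preferences are represented by expected utility:
\[
  a \succsim a'
  \quad\Longleftrightarrow\quad
  \sum_{w \in W} p(w)\, u_w(a) \;\ge\; \sum_{w \in W} p(w)\, u_w(a').
\]
\end{document}
```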
Sam Hilton ($62,000)
Supporting the rights of future generations in UK policy and politics.
Sam Hilton runs the All-Party Parliamentary Group (APPG) for Future Generations in the British Parliament, and seems to have found significant traction with this project; many members of Parliament have engaged with the APPG and found their inputs valuable. This funding will support staff and other costs of the APPG’s secretariat, enabling the group to work more effectively.
Topos Institute ($52,000)
A summit for the world's leading applied category theorists to engage with human flourishing experts.
David Spivak and Brendan Fong, the two co-founders of the Topos Institute, are applying category theory to problems in AI alignment and other areas I think are important, and are organizing a conference to facilitate exchange between the category theory community and people currently working on technical problems around the long-term future.
More recently, several researchers in AI alignment I have talked to have found aspects of category theory quite useful when trying to solve certain technical problems, and David Spivak has a strong track record of academic and educational contributions.
Jason Crawford ($25,000)
Telling the story of human progress to the world, and promoting progress as a moral imperative.
All of Jason's work is in the domain of Progress Studies. He works on understanding what the primary causes of historical technological progress were, and what the broad effects of different types of technological progress have been. Since I consider most catastrophic risks to be the result of badly controlled emerging technologies, understanding the historical causes of technological progress, and humanity’s track record in controlling those technologies, is an essential part of thinking about global catastrophic risk and the long-term future.
I also consider this grant to be valuable because Jason seems to be a very capable researcher who has attracted the attention of multiple people whose thinking I respect a lot (he also received a grant from Tyler Cowen's Emergent Ventures). I think there is a good chance he could become a highly influential public writer, and having him collaborate with researchers and thinkers working on global catastrophic risks could be very valuable in the long run.
I also think that his current research is going to be highly and directly relevant in worlds where catastrophic risks are not the primary type of issue that turns out to be important, and where human flourishing through technological progress may be the most important cause area. (This reasoning is similar to that behind last round’s grant toward improving reproducibility in science.)
Kyle Fish ($30,000)
Identifying white space opportunities for technical projects to improve biosecurity.
From his application:
I plan to produce a technical report on opportunities for science and engineering projects to improve biosecurity and pandemic preparedness. Biosecurity is an established cause area in the EA community, and a variety of resources provide high-level overviews of potential paths to impact (careers in policy, direct work in synthetic biology, public health, etc.). However, there is a need for a clearer and deeper understanding of how technical expertise in relevant science and engineering disciplines can best be leveraged in this space. The report will cover three core topics: 1) technical analysis of relevant science and engineering subfields (i.e. vaccine development and vaccine alternatives, novel pathogen detection systems, emerging synthetic biology techniques); 2) the current landscape of organizations, academic labs, companies, and individuals working on technical problems in biosecurity, along with summaries of the projects already underway; and 3) an analysis of the white space opportunities where additional science and engineering innovation ought to be prioritized to mitigate biorisks.
I hope this project will ultimately reduce the risk of catastrophic damage from natural or engineered pathogens. This impact will likely be realized through a variety of different uses of the report:
- As a guide for scientists and engineers interested in working on biosecurity, by providing a clear summary of the current state of the space and the technical project types they should consider pursuing
- As a resource for the current biosecurity community to better understand the landscape of technical projects already underway
- As a resource for grantmakers to inform funding decisions and prioritization
- As a means of deepening my own understanding of opportunities in the biorisk space as I consider a more substantive shift toward a biosecurity-focused career trajectory
Given the difficulty of assessing biosecurity threats, it is unlikely that direct connections between this report and quantifiable reductions in biorisk will be possible. There are, however, a variety of proxy metrics that can be used to measure impact. Potential metrics include the number of individuals who use this report to inform a partial or complete career change or shift in technical focus, relative impact estimates for such changes, number of technical projects launched that align with the white spaces identified, and dollar amounts of funding allocated to such projects. Subjective evaluations of potential impact by current experts in the biosecurity space may also be useful. The best measurement strategy will depend in large part on the manner in which this report is ultimately distributed.
We also reached out to a variety of researchers we trust in the domain of biosecurity, who gave strong positive feedback about Kyle’s project and his skills. He has also spoken at EA events in the past about biotech initiatives in clean meat, and has been working as a clean meat researcher for the last few years, which gives him much of the relevant biotech background to work in this space.
AI Safety Camp #5 ($29,000)
Bringing together aspiring researchers to work on concrete problems in AI safety.
This is our third grant to the AI Safety Camp, so the basic reasoning from past rounds is mostly the same. This round, I reached out to more past participants and received responses that were, overall, quite positive. I’ve also started thinking that the reference class of things like the AI Safety Camp is more important than I had originally thought.
Miranda Dixon-Luinenburg ($20,000)
Writing fiction to convey EA and rationality-related topics.
This is a continuation of a grant we made last round, so our reasoning remains effectively the same. Miranda sent us some drafts and documents that seem promising enough to be worth further funding, though we think that after this round she should likely seek independent funding; we hope her book will be far enough along by then to get more of an outside perspective on the project and potentially attract other funders.
Roam Research ($20,000)
A note-taking tool for networked thought, actively used by many EA researchers.
We have previously made a grant to Roam Research. Since then, a large number of researchers and other employees at organizations working in priority areas have started using Roam and seem to have benefited a lot from it. We received a large number of positive testimonials, and I’ve also found the product to be well-designed.
Despite that, our general sense is that Roam should try to attract external funding after this round, and we are not planning to recommend future grants to Roam (mostly because it is well-suited to seeking broader funding).
Joe Collman ($10,000)
Investigation of AI Safety Via Debate and ML training.
From the application:
I aim to work on a solo project with the guidance of David Krueger, with two main purposes:
The first is to learn and upskill in AI safety related areas.
The second is to explore AI safety questions focused on AI safety via debate (https://arxiv.org/abs/1805.00899), and connected ideas.
I think that David Krueger is doing good work in the space of AI alignment, and that funding Joe to work on things David thinks are important seems worth the small amount requested. We recommended this grant mostly on the basis of referrals and testimonials. David has been collaborating with many people I trust quite a bit over the past few years (at FHI, DeepMind, CHAI and 80k), so that’s where a lot of my trust comes from.
Tetraspace Grouping @ 2020-01-05T14:14 (+21)
In the list at the top, Sam Hilton's grant summary is "Writing EA-themed fiction that addresses X-risk topics", rather than being about the APPG for Future Generations.
Miranda Dixon-Luinenburg's grant is listed as being $23,000, when lower down it's listed as $20,000 (the former is the amount consistent with the total being $471k).
aarongertler @ 2020-01-07T02:46 (+6)
Thanks for this note! I've fixed the grant amount in this Forum post, and Sam's description in this post and on the Funds site.
Ozzie Gooen @ 2020-01-11T10:00 (+16)
Kudos for another lengthy writeup!
I know some of the people here, so I don't want to comment on individuals. I would say, though, that I'm particularly excited about collaborations with the Topos Institute; they seem like one of the most exciting academic groups/movements right now, and experimenting with working with them seems pretty valuable to me. Even if the only result is that they point smart people toward EA problems, it could be quite beneficial.
One uncertainty: I noticed that very few of these grantees were large or repeat recipients, the main one being Roam Research, which you say you plan to stop funding. Who do you think should fund small-to-medium-sized groups working on the long-term future? My impression was that the Long-Term Future Fund was the place for people who wanted to put money into the most cost-effective long-term future projects, but it seems like it may be somewhat limited to very small projects, which could be a very reasonable decision, but a bit non-obvious to outsiders.
Some obvious subquestions here:
1. Are these small interventions mostly more cost-effective than larger ones?
2. If (1) is not true, then what are the best current strategies for funding a mix of small and larger interventions? Is the expectation that large donors set up individual relationships with these larger groups, and just use the Long-Term Future Fund for the smaller groups?
3. If (1) is not true, do you think it's possible that EA Funds could eventually be adjusted to also handle more of the larger interventions?
Habryka @ 2020-01-11T21:19 (+15)
Are these small interventions mostly more cost-effective than larger ones?
I do think that, right now, at the margin, small interventions are particularly underfunded. I think that's mostly because there are a variety of funders in the space (like Open Phil and SFF) which are focusing on larger organizational grants, so a lot of the remaining opportunities are more in the space of smaller and early-stage projects.
Larks also brought up another argument for the LTFF focusing on smaller projects in his last AI Alignment Review:
I can understand why the fund managers gave over a quarter of the funds to major organisations – they thought these organisations were a good use of capital! However, to my mind this undermines the purpose of the fund. (Many) individual donors are perfectly capable of evaluating large organisations that publicly advertise for donations. In donating to the LTFF, I think (many) donors are hoping to be funding smaller projects that they could not directly access themselves. As it is, such donors will probably have to consider such organisation allocations a mild ‘tax’ – to the extent that different large organisations are chosen than they would have picked themselves.
I find this argument reasonably compelling, and I also consider it one of the reasons why I want us to focus on smaller projects (though I don't think it's the only, or even primary, reason).
In particular, I think the correct pipeline for larger projects is that the LTFF funds them initially, until they have demonstrated enough traction that funders like Open Phil, Ben Delo and SFF can more easily evaluate them.
I am not fundamentally opposed to funding medium-sized organizations for a longer period. My model is that there is a size of organization, roughly 2-6 employees, that Open Phil and other funders are unlikely to fund, due to such organizations being too small to really be worth their attention. I expect we have a good chance of providing long-term support for such organizations if the opportunities arise (though I don't think we have so far found opportunities in that category that I would be excited about; maybe the Topos grant ends up being one).
One clarification on Roam Research: the primary reason we didn't want to continue funding Roam was that we thought it very likely that Roam could get profit-oriented VC funding. I did indeed receive an email from Conor White-Sullivan a few days ago saying that they successfully raised their seed round, partly due to having enough runway and funding security because of our grant, so I think saying that we wouldn't fund them further was the correct call. For more purely charity-oriented projects, it's more likely that we would want to support them for a longer period of time.
Ozzie Gooen @ 2020-01-12T12:35 (+16)
Thanks so much for the thoughtful response. My guess is that you have more information than I do and are correct here, but just in case, I wanted to share some thoughts that provide some counter-evidence.
Don't feel the need to respond to any or all of this, especially if it's not useful to you; the LTFF would be the group hypothetically taking this advice (even if I were fully convinced of your side, it wouldn't make a big difference, because I'm not doing much grantmaking myself).
First, the clarification about Roam is useful; my main comment wasn't really about them in particular, but about them as an example. I'd definitely agree that if they are getting VC funding, then LTFF funding is much less important. I'm quite happy to hear that they raised their seed round![1]
I do think that, right now, at the margin, small interventions are particularly underfunded.
From my position, I don't quite get this sense, just FYI. My impression is that the "big donor" funding space is somewhat sporadic, and that these donors are not very forthcoming about their decisions and analysis with possible donees. There are several mid-sized EA-related organizations I know of that are getting very little or very sporadic (i.e. one-off grant) funding from Open Phil or similar, without a very clear picture of how that will translate to the long term. Open Phil itself generally donates at most 50% of an organization's revenue, and it's not always clear who is best placed to make up the rest. It's also, of course, not preferable to rely on one or two very large donors.
Having a few dependable medium-sized donors (like Larks) is really useful to a bunch of mid-sized EA orgs. Arguably the EA Funds could effectively be like these medium-sized donors, and I could see that being similarly useful.
I hate to bring up anonymous anecdotal data, but around half of the EA orgs I'm close with (of around 7) seem to have frustrations around the existing funding situation. These are 3-40 person organizations.
Some other thoughts:
1. The Meta Fund seems to primarily give to medium-sized groups. I'm not convinced they are making a significant mistake, and I don't feel like they are so different from the Long-Term Future Fund that this decision should obviously be different.
2. Many of the Meta Fund's payouts (and likely those of other funds; theirs just come to mind) are to groups that then re-grant that money. The EA Community Building Grants work that way, and arguably Charity Entrepreneurship is similar. To me, having an EA fund give money directly to individuals, rather than having it first go through a more locally-respected group that can focus on specific types of experiments, is a bit of an anti-pattern; it's important when no such group yet exists, but is generally sub-ideal. I would personally guess that $40k going to either of those two groups would be better than the Meta Fund giving that money directly to individuals in similar reference classes (community builders and potential entrepreneurs). Having the in-between group allows for more grantmaker expertise and more specialized mentorship, resources, and reputability.
3. Similar to (2), I'm in general more comfortable with the idea of "give money to an org to hire a person" than the idea of "give money to one person to do independent work." The former comes with a bunch of benefits, similar to those mentioned in (2).
I guess, backing up a bit, that there are quite a few reasons for organizations (even small ones) to generally be more efficient than individuals on their own. That said, it's quite possible that, while this may be the case, the organizations are far less neglected, and that right now many of the individual projects are neglected enough to make them the more worthwhile grants.
I think I could buy this up to some amount of money; the exact amount isn't clear. Maybe if the LTFF gets to $4 million/year it should start moving to larger things?
Relatedly, I just checked, and it seems like the three other EA Funds have all primarily been giving to existing medium-sized orgs. It's not obvious to me that long-term future topics are dramatically different. There are big donors for those three categories as well, though maybe with less funding. Of course, an inconsistency could mean that those funds should consider changing their approach instead.
Larks also brought up another argument for the LTFF focusing on smaller projects in his last AI Alignment Review:
Larks seems like the ideal model of a medium-sized donor who's willing to spend time evaluating the best nonprofits. I'd imagine most other donors at their contribution level are less familiar with the space than they are. Personally, if I were in Larks's position, I'd prefer my money to go through an interim body like the Long-Term Future Fund even when it goes to medium-sized organizations, because I wouldn't trust my own biases, but Larks may be particularly good at this. Another possible benefit of the LTFF is that dealing with fewer donors could simply be easier for organizations.
I imagine that there are many other donors who would trust a dedicated group more, and/or who just don't have enough money for it to be really worth the time to select grantees themselves (and who decide against the donor lotteries for various reasons). Another obvious possible solution here would be to have a separate long-term future fund for donors who do want to use it for medium-sized groups.
I could of course be wrong here! This is an empirical question about what the current and possible donors would want to see.
[1] I'd separately note, though, that I'm in general a bit uncomfortable with "tech things being used by EAs" trying to go the VC route. The levels of scale needed for such things may require sacrifices that would make the tools much less useful for EA purposes; I definitely got this sense when working on Guesstimate. It may be that Roam is one of the super fortunate ones that can both do good for EA groups and have a reasonable time raising money, but I think this is really tough in general, and I don't recommend that others try to copy this approach.
Habryka @ 2020-01-12T20:59 (+14)
I ended up messaging Ozzie via PM to discuss some of the specific examples more concretely.
I think my position on all of this is better summarized by: "We are currently not receiving many applications from medium-sized organizations, and I don't think the ones that we do receive are competitive with more individual and smaller-project grants".
For me personally, the exception here is Rethink Priorities, who have applied, who I am pretty positive on funding, and who I would strongly consider giving to in future rounds, though I can't speak for the other fund members on that.
Overall, I think we ended up agreeing more on the value of medium-sized orgs, and we both think the value there is pretty high, though my experience has been that not that many orgs in that reference class have actually applied. We have in fact funded a significant fraction of the ones that did apply (both the AI Safety Camp and CFAR come to mind, and we would likely have funded Rethink Priorities two rounds ago if another funder hadn't stepped in first).
Ben_West @ 2020-01-14T02:31 (+6)
Thanks for writing this up despite all your other obligations Oli! If you have time either now or when you do the more in-depth write up, I would still be curious to hear your thoughts on success conditions for fiction.
Habryka @ 2020-01-14T03:51 (+2)
Thanks! I do really hope I can get around to this. I've had it on my to-do list for a while, but other things have continued to be higher priority. I expect things to die down in January, though, and I have some more time set aside in February for writing up LTFF stuff.
Ben_West @ 2020-01-14T04:02 (+2)
Great – I appreciate your dedication to transparency even though you have so many other commitments!