Long-Term Future Fund: April 2019 grant recommendations

By Habryka @ 2019-04-23T07:00 (+142)

Please note that the following grants are only recommendations, as all grants are still pending an internal due diligence process by CEA.

This post contains our allocation and some explanatory reasoning for our Q1 2019 grant round. Earlier this year we opened an application for grant requests, which stayed open for about one month. Shortly after it closed, we received an unexpectedly large donation of about $715k, which led us to reopen the application for another two weeks. We then used a mixture of independent voting and consensus discussion to arrive at our current grant allocation.

What is listed below is only a set of grant recommendations to CEA, which will run them through a set of due-diligence checks to ensure that they are compatible with CEA's charitable objectives and that making these grants is logistically feasible.

Grant Recipients

Each grant recipient is followed by the size of the grant and their one-sentence description of their project.

Total distributed: $923,150

Grant Rationale

Here we explain the purpose of each grant and summarize our reasoning behind recommending it. Each summary is written by the fund member who was most excited about recommending the relevant grant (subject to some constraints on who had time available to write up their reasoning). The summaries differ a lot in length, depending on how much time each fund member had available to explain their reasoning.

Writeups by Helen Toner

Alex Lintz ($17,900)

A two-day, career-focused workshop to inform and connect European EAs interested in AI governance

Alex Lintz and some collaborators from EA Zürich proposed organizing a two-day workshop for EAs interested in AI governance careers, with the goals of giving participants background on the space, offering career advice, and building community. We agree with their assessment that this space is immature and hard to enter, and believe their suggested plan for the workshop looks like a promising way to help participants orient to careers in AI governance.

Writeups by Matt Wage

Tessa Alexanian ($26,250)

A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers

We are funding Tessa Alexanian to run a one-day biosecurity summit immediately following the SynBioBeta industry conference. We have also put Tessa in touch with some experienced people in the biosecurity space who we think can help make sure the event goes well.

Shahar Avin ($40,000)

Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers

We are funding Shahar Avin to help him hire an academic research assistant and to cover other miscellaneous research expenses. We think highly of Shahar’s past work (for example, this report), and multiple people we trust recommended that we fund him.

Lucius Caviola ($50,000)

Conducting postdoctoral research at Harvard on the psychology of EA/long-termism

We are funding Lucius Caviola for a 2-year postdoc at Harvard working with Professor Joshua Greene. Lucius plans to study the psychology of effective altruism and long-termism, and an EA academic we trust had a positive impression of him. We are splitting the cost of this project with the EA Meta Fund because some of Caviola’s research (on effective altruism) is a better fit for the Meta Fund while some of his research (on long-termism) is a better fit for our fund.

Ought ($50,000)

We funded Ought in our last round of grants, and our reasoning for funding them in this round is largely the same. Additionally, we wanted to help Ought diversify its funding base because it currently receives almost all its funding from only two sources and is trying to change that.

Our comments from last round:

Ought is a nonprofit aiming to implement AI alignment concepts in real-world applications. We believe that Ought’s approach is interesting and worth trying, and that they have a strong team. Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant. Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future.

Writeups by Alex Zhu

Nikhil Kunapuli ($30,000)

A study of safe exploration and robustness to distributional shift in biological complex systems

Nikhil Kunapuli is doing independent deconfusion research for AI safety. His approach is to develop better foundational understandings of various concepts in AI safety, like safe exploration and robustness to distributional shift, by exploring these concepts in complex systems science and theoretical biology, domains outside of machine learning to which these concepts also apply. To quote an illustrative passage from his application:

When an organism within an ecosystem develops a unique mutation, one of several things can happen. At the level of the organism, the mutation can either be neutral in terms of fitness, maladaptive and leading to reduced reproductive success and/or death, or adaptive. For an adaptive mutation, the upgraded fitness of the organism will change the fitness landscape for all other organisms within the ecosystem, and in response, the structure of the ecosystem will either be perturbed into a new attractor state or destabilized entirely, leading to ecosystem collapse. Remarkably, most mutations do not kill their hosts, and most mutations also do not lead to ecosystem collapse. This is actually surprising when one considers the staggering complexity present within a single genome (tens of thousands of genes deeply intertwined through genomic regulatory networks) as well as an ecosystem (billions of organisms occupying unique niches and constantly co-evolving). One would naïvely think that a system so complex must be highly sensitive to change, and yet these systems are actually surprisingly robust. Nature somehow figured out a way to create robust organisms that could respond to and function in a shifting environment, as well as how to build ecosystems in which organisms could be free to safely explore their adjacent possible new forms without killing all other species.

Nikhil spent a summer doing research for the New England Complex Systems Institute. He also spent 6 months as the cofounder and COO of an AI hardware startup, which he left because he decided that direct work on AI safety is more urgent and important.

I recommended that we fund Nikhil because I think Nikhil’s research directions are promising, and because I personally learn a lot about AI safety every time I talk with him. The quality of his work will be assessed by researchers at MIRI.

Anand Srinivasan ($30,000)

Formalizing perceptual complexity with application to safe intelligence amplification

Anand Srinivasan is doing independent deconfusion research for AI safety. His angle of attack is to develop a framework that will allow researchers to make provable claims about what specific AI systems can and cannot do, based on factors like their architectures and their training processes. For example, AlphaGo can “only have thoughts” about patterns on Go boards and lookaheads, which aren’t expressive enough to encode thoughts about malicious takeover.

In principle, AI researchers can build safe and extremely powerful AI systems by relying on intuitive judgments of those systems’ capabilities. However, these intuitions are non-rigorous and prone to error, especially since powerful optimization processes can generate solutions that are totally novel and unexpected to humans. Furthermore, competitive dynamics will incentivize rationalization about which AI systems are safe to deploy. Under fast-takeoff assumptions, a single rogue AI system could lead to human extinction, which makes relying exclusively on intuitive judgments about which AI systems are safe particularly risky. Anand’s goal is to develop a framework that formalizes these intuitions well enough to permit future AI researchers to make provable claims about what future AI systems can and can’t internally represent.

Anand was the CTO of an enterprise software company that he cofounded with me, where he managed a six-person engineering team for two years. Upon leaving the company, he decided to refocus his efforts toward building safe AGI. Before dropping out of MIT, Anand worked on Ising models for fast image classification and fuzzy manifold learning (which was later independently published as a top paper at NIPS).

I recommended that we fund Anand because I think Anand’s research directions are promising, and I personally learn a lot about AI safety every time I talk with him. The quality of Anand’s work will be assessed by researchers at MIRI.

David Girardo ($30,000)

A research agenda rigorously connecting the internal and external views of value synthesis

David Girardo is doing independent deconfusion research for AI safety. His angle of attack is to elucidate the ontological primitives for representing hierarchical abstractions, drawing from his experience with type theory, category theory, differential geometry, and theoretical neuroscience.

I recommended that we fund David because I think David’s research directions are very promising, and because I personally learn a lot about AI safety every time I talk with him. Tsvi Benson-Tilsen, a MIRI researcher, has also recommended that David get funding. The quality of David’s work will be assessed by researchers at MIRI.

Writeups by Oliver Habryka

I have a broad sense that funders in EA tend to give little feedback to the organizations they fund, or to organizations they explicitly decided not to fund (usually due to time constraints). So in my writeups below, I tried to be as transparent as possible about what actually caused me to believe each grant was a good idea and what my biggest hesitations were, and I took many opportunities to explain background models of mine that might help others better understand my future decisions in this space.

For some of the grants below, I think there exist more publicly defensible (or easier-to-understand) arguments for the grants that I recommended. However, I tried to explain the actual models that drove my decisions, which are often hard to compress into a few paragraphs of text, so I apologize in advance that some of the explanations below will almost certainly be a bit hard to understand.

Note that when I’ve written about how I hope a grant will be spent, this was in aid of clarifying my reasoning and is in no way meant as a restriction on what the grant should be spent on. The only restriction is that it should be spent on the project they applied for in some fashion, plus any further legal restrictions that CEA requires.

Mikhail Yagudin ($28,000)

Giving copies of Harry Potter and the Methods of Rationality to the winners of EGMO 2019 and IMO 2020

From the application:

EA Russia has the oral agreements with IMO [International Math Olympiad] 2020 (Saint Petersburg, Russia) & EGMO [European Girls’ Mathematical Olympiad] 2019 (Kyiv, Ukraine) organizers to give HPMORs [copies of Harry Potter and the Methods of Rationality] to the medalists of the competitions. We would also be able to add an EA / rationality leaflet made by CFAR (I contacted Timothy Telleen-Lawton on that matter).

My thoughts and reasoning

[Edit & clarification: The books will be given out by the organisers of the IMO and EGMO as prizes for the 650 people who got far enough to participate, all of whom are "medalists".]

My model for the impact of this grant roughly breaks down into three questions:

  1. What effects does reading HPMOR have on people?
  2. How good of a target group are Math Olympiad winners for these effects?
  3. Is the team competent enough to execute on their plan?

What effects does reading HPMOR have on people?

My models of the effects of HPMOR stem from my empirical observations and my inside view on rationality training.

How good of a target group are Math Olympiad winners for these effects?

I think that Math Olympiad winners are a very promising demographic within which to find individuals who can contribute to improving the long-term future. I believe Math Olympiads select strongly on IQ as well as (weakly) on conscientiousness and creativity, which are all strong positives. Participants are young and highly flexible; they have not yet made too many major life commitments (such as which university they will attend), and are in a position to use new information to systematically change their lives’ trajectories. I view handing them copies of an engaging book that helps teach scientific, practical and quantitative thinking as a highly asymmetric tool for helping them make good decisions about their lives and the long-term future of humanity.

I’ve also visited and participated in a variety of SPARC events, and found the culture there (which is likely to be at least somewhat representative of Math Olympiad culture) very healthy in a broad sense. Participants displayed high levels of altruism, a lot of willingness to help one another, and an impressive amount of ambition to improve their own thinking and affect the world in a positive way. These observations make me optimistic about efforts that build on that culture.

I think it’s important when interacting with minors, and attempting to improve (and thus change) their life trajectories, to make sure to engage with them in a safe way that is respectful of their autonomy and does not put social pressures on them in ways they may not yet have learned to cope with. In this situation, Mikhail is working with/through the institutions that run the IMO and EGMO, and I expect those institutions to (a) have lots of experience with safeguarding minors and (b) have norms in place to make sure that interactions with the students are positive.

Is the team competent enough to execute on their plan?

I don’t have a lot of information on the team, don’t know Mikhail, and have not received any major strong endorsement for him and his team, which makes this the weakest link in the argument. However, I know that they are coordinating both with SPARC (which also works to give books like HPMOR to similar populations) and the team behind the highly successful Russian printing of HPMOR, two teams who have executed this kind of project successfully in the past. So I felt comfortable recommending this grant, especially given its relatively limited downside.

Alex Turner ($30,000)

Building towards a “Limited Agent Foundations” thesis on mild optimization and corrigibility

From the application:

I am a third-year computer science PhD student funded by a graduate teaching assistantship; to dedicate more attention to alignment research, I am applying for one or more trimesters of funding (spring term starts April 1).

[…]

Last summer, I designed an approach to the “impact measurement” subproblem of AI safety: “what equation cleanly captures what it means for an agent to change its environment, and how do we implement it so that an impact-limited paperclip maximizer would only make a few thousand paperclips?”. I believe that my approach, Attainable Utility Preservation (AUP), goes a long way towards answering both questions robustly, concluding:

> By changing our perspective from “what effects on the world are ‘impactful’?” to “how can we stop agents from overfitting their environments?”, a natural, satisfying definition of impact falls out. From this, we construct an impact measure with a host of desirable properties […] AUP agents seem to exhibit qualitatively different behavior […]

Primarily, I aim both to output publishable material for my thesis and to think deeply about the corrigibility and mild optimization portions of MIRI’s machine learning research agenda. Although I’m excited by what AUP makes possible, I want to lay the groundwork of deep understanding for multiple alignment subproblems. I believe that this kind of clear understanding will make positive AI outcomes more likely.
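To give readers unfamiliar with impact measures a feel for the idea being described, here is a minimal toy sketch of an attainable-utility-style penalty. This illustrates the general mechanism only, not Alex's actual AUP formalism: the auxiliary goals, Q-values, action names, and trade-off constant below are made-up assumptions.

```python
# Toy sketch of an attainable-utility-style impact penalty (illustrative only,
# not the actual AUP formalism). The agent picks the action that maximizes its
# primary reward minus a penalty for how much the action changes its ability
# to attain a set of auxiliary goals, relative to doing nothing ("noop").

# Hypothetical attainable values Q[auxiliary_goal][action] for a single state.
AUX_Q = {
    "aux_collect_blue": {"noop": 5.0, "make_paperclips": 0.0, "small_step": 4.8},
    "aux_reach_corner": {"noop": 3.0, "make_paperclips": 0.0, "small_step": 2.9},
}
PRIMARY_REWARD = {"noop": 0.0, "make_paperclips": 10.0, "small_step": 2.0}
LAMBDA = 3.0  # trade-off between primary reward and impact penalty


def impact_penalty(action: str) -> float:
    """Average absolute change in attainable auxiliary value vs. doing nothing."""
    diffs = [abs(q[action] - q["noop"]) for q in AUX_Q.values()]
    return sum(diffs) / len(diffs)


def choose_action() -> str:
    """Pick the action with the best penalized value."""
    return max(PRIMARY_REWARD, key=lambda a: PRIMARY_REWARD[a] - LAMBDA * impact_penalty(a))


for a in PRIMARY_REWARD:
    print(f"{a:15} reward={PRIMARY_REWARD[a]:5.1f} penalty={impact_penalty(a):.2f}")
print("chosen:", choose_action())  # the low-impact "small_step" wins here
```

The real research problem is defining and scaling such penalties so that they behave sensibly in rich environments; the sketch is only meant to convey what "penalizing changes in attainable utility" means.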

My thoughts and reasoning

I’m excited about this because:

Potential concerns

These intuitions, however, are a bit in conflict with some of the concrete research that Alex has actually produced. My inside views on AI Alignment make me think that work on impact measures is very unlikely to result in much concrete progress on what I perceive to be core AI Alignment problems, and I have talked to a variety of other researchers in the field who share that assessment. I think it’s important that this grant not be viewed as an endorsement of the concrete research direction that Alex is pursuing, but only as an endorsement of the higher-level process that he has been using while doing that research.

As such, I think a necessary component of this grant was that I talked to other people in AI Alignment whose judgment I trust and who do seem excited about Alex’s work on impact measures. I think I would not have recommended this grant, or at least not this large a grant amount, without their endorsement. Without it, I would have been worried about a risk of diverting attention from what I think are more promising approaches to AI Alignment, and about a potential dilution of the field by introducing a set of (to me) somewhat dubious philosophical assumptions.

Overall, while I try my best to form concrete and detailed models of the AI Alignment research space, I don’t currently devote enough time to it to build detailed models that I trust enough to put very large weight on my own perspective in this particular case. Instead, I am mostly deferring to other researchers in this space that I do trust, a number of whom have given positive reviews of Alex’s work.

In aggregate, I have a sense that the way Alex went about working on AI Alignment is a great example for others to follow; I’d like to see him continue, and I am excited about the LTF Fund giving out more grants to others who try to follow a similar path.

Orpheus Lummis ($10,000)

Upskilling in contemporary AI techniques, deep RL and AI safety, before pursuing an ML PhD

From the application:

Notable planned subprojects:

My thoughts and reasoning

We funded Orpheus in our last grant round to run an AI Safety Unconference just after NeurIPS. We’ve gotten positive testimonials from the event, and I am overall happy about that grant.

I do think that of the grants I recommended this round, this is probably the one I feel least confident about. I don’t know Orpheus very well, and while I have received generally positive reviews of their work, I haven’t yet had the time to look into any of those reviews in detail, and haven’t seen clear evidence about the quality of their judgment. However, what I have seen seems pretty good, and if I had even a tiny bit more time to spend on evaluating this round’s grants, I would probably have spent it reaching out to Orpheus and talking with them more in person.

In general, I think time for self-study and reflection can be exceptionally important for people starting to work in AI Alignment. This is particularly true for people following a more conventional academic path, which could easily lead them to immediately work on contemporary AI capabilities research; I generally think such work has negative value even for people concerned about safety (though I do have some uncertainty here). I think giving people working on more classical ML research the time and resources to explore the broader safety implications of their work, if they are already interested in doing so, is a good use of resources.

I am also excited about building out the Montreal AI Alignment community, and having someone there who has both the time and the skills to organize events, and who can understand the technical safety work, seems likely to have good effects.

This grant is also the smallest we are making this round, which makes me more comfortable with a bit less due diligence than for the other grants, especially since this grant seems unlikely to have any large negative consequences.

Tegan McCaslin ($30,000)

Conducting independent research into AI forecasting and strategy questions

From the application:

1) I’d like to independently pursue research projects relevant to AI forecasting and strategy, including (but not necessarily limited to) some of the following:

I am actively pursuing opportunities to work with or under more senior AI strategy researchers [..], so my research focus within AI strategy is likely to be influenced by who exactly I end up working with. Otherwise I expect to spend some short period of time at the start generating more research ideas and conducting pilot tests on the order of several hours into their tractability, then choosing which to pursue based on an importance/tractability/neglectedness framework.

[..]

2) There are relatively few researchers dedicated full-time to investigating AI strategy questions that are not immediately policy-relevant. However, there nonetheless exists room to contribute to the research on existential risks from AI with approaches that fit into neither technical AI safety nor AI policy/governance buckets.

My thoughts and reasoning

Tegan has been a member of the X-risk network for several years now, and recently left AI Impacts. She is now looking for work as a researcher. Two considerations made me want to recommend that the LTF Fund make a grant to her.

  1. It’s easier to relocate someone who has already demonstrated trust and skills than to find someone completely new.
    1. This is (roughly) advice given by Y Combinator to startups, and I think it’s relevant to the X-risk community. It’s cheaper for Tegan to move around and find the place where she can do her best work than it would be for an outsider who has not already worked within the X-risk network. A similarly skilled individual who is not already part of the network would need to spend a few years understanding the community and demonstrating that they can be trusted. So I think it is a good idea to help Tegan explore other parts of the community to work in.
  2. It’s important to give good researchers runway while they find the right place.
    1. For many years, the X-risk community has been funding-bottlenecked, keeping salaries low. A lot of progress has been made on this front, and I hope that we’re able to fix this. Unfortunately, the current situation means that when a hire does not work out, the individual often doesn’t have much runway while reorienting, updating on what didn’t work out, and subsequently trialling at other organizations.
    2. This moves them much more quickly into an emergency mode, where everything must be optimized for short-term income, rather than long-term updating, skill building, and research. As such, I think it is important for Tegan to have a comfortable amount of runway while doing solo research and trialling at various organizations in the community.

While I haven’t spent the time to look into Tegan’s research in any depth, the small amount I did read looked promising. The methodology of this post is quite exciting, and her work there and on other pieces seems very thorough and detailed.

That said, my brief assessment of Tegan’s work was not the reason why I recommended this grant, and if Tegan asks for a new grant in 6 months to focus on solo research, I will want to spend significantly more time reading her output and talking with her, to understand how these questions were chosen and what precise relation they have to forecasting technological progress in AI.

Overall, I think Tegan is in a good place to find a valuable role in our collective X-risk reduction project, and I’d like her to have the runway to find that role.

Anthony Aguirre ($70,000)

A major expansion of the Metaculus prediction platform and its community

From the application:

The funds would be used to expand the Metaculus prediction platform along with its community. Metaculus.com is a fully-functional prediction platform with ~10,000 registered users and >120,000 predictions made to date on more than 1,000 questions. The goals of Metaculus are:

There are two major high-priority expansions possible with funding in place. The first would be an integrated set of extensions to improve user interaction and information-sharing. This would include private messaging and notifications, private groups, a prediction “following” system to create micro-teams within individual questions, and various incentives and systems for information-sharing.

The second expansion would link questions into a network. Users would express links between questions, from very simple (“notify me regarding question Y when P(X) changes substantially”) to more complex (“Y happens only if X happens, but not conversely”, etc.). Information can also be gleaned from what users actually do. The strength and character of these relations can then generate different graphical models that can be explored interactively, with the ultimate goal of a crowd-sourced quantitative graphical model that could structure event relations and propagate new information through the network.
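To make the linked-questions idea concrete, here is a minimal sketch of how an update to one question's probability might propagate to another question through a stated conditional relationship. The data structures, numbers, and function names are hypothetical assumptions for illustration, not Metaculus's actual design.

```python
# Hypothetical sketch of propagating a forecast update through a question link
# (illustration only, not Metaculus's actual design or data model).
# Each link stores P(child | parent) and P(child | not parent); when a parent's
# probability changes, the child's implied probability is recomputed.

probs = {"X": 0.30, "Y": 0.25}  # current community probabilities
links = {"Y": {"parent": "X", "p_if": 0.60, "p_if_not": 0.10}}  # "Y depends on X"


def update(question: str, new_prob: float) -> None:
    """Set a question's probability and propagate it to linked child questions."""
    probs[question] = new_prob
    for child, link in links.items():
        if link["parent"] == question:
            p = probs[question]
            implied = link["p_if"] * p + link["p_if_not"] * (1.0 - p)
            print(f"P({child}) implied by link: {implied:.2f} (was {probs[child]:.2f})")
            probs[child] = implied


update("X", 0.80)  # news arrives: X now looks much more likely
# prints: P(Y) implied by link: 0.50 (was 0.25)
```

A real system would also need to reconcile such implied probabilities with forecasters' direct predictions on the child question, which is part of what makes the proposed graphical-model work nontrivial.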

My thoughts and reasoning

For this grant, and also the grants to Ozzie Gooen and Jacob Lagerros, I did not have enough time to write up my general thoughts on forecasting platforms and communities. I hope to later write a post with my thoughts here. But for a short summary, see my thoughts on Ozzie Gooen’s grant.

I am generally excited about people building platforms for coordinating intellectual labor, particularly on topics that are highly relevant to the long-term future. I think Metaculus has been providing a valuable service for the past few years, both in improving our collective ability to forecast a large variety of important world events and in allowing people to train and demonstrate their forecasting skills, which I expect to become more relevant in the future.

I am broadly impressed with how cooperative and responsive the Metaculus team has been in helping organizations in the X-risk space get answers to important questions, and in providing software services to them (e.g. I know that they are helping Jacob Lagerros and Ben Goldhaber set up a private Metaculus instance focused on AI).

I don’t know Anthony well, and overall I am quite concerned that there is no full-time person on this project. My model is that projects like this tend to go a lot better if they have one core champion who has the resources to fully dedicate themselves to the project, and it currently doesn’t seem that Anthony is able to do that.

My current model is that Metaculus will struggle as a platform without a fully dedicated team or at least individual champion, though I have not done a thorough investigation of the Metaculus team and project, so I am not very confident of this. One of the major motivations for this grant is to ensure that Metaculus has enough resources to hire a potential new champion for the project (who ideally also has programming skills or UI design skills to allow them to directly work on the platform). That said, Metaculus should use the money as best they see fit.

I am also concerned about the overlap of Metaculus with the Good Judgment Project, and currently have a sense that Metaculus suffers from being in competition with it while having access to substantially fewer resources and people.

The requested grant amount was for $150k, but I am currently not confident enough in this grant to recommend filling the whole amount. If Metaculus finds an individual new champion for the project, I can imagine strongly recommending that it gets fully funded, if the new champion seems competent.

Lauren Lee ($20,000)

Working to prevent burnout and boost productivity within the EA and X-risk communities

From the application:

(1) After 2 years as a CFAR instructor/researcher, I’m currently in a 6-12 month phase of reorienting around my goals and plans. I’m requesting a grant to spend the coming year thinking about rationality and testing new projects.

(2) I want to help individuals and orgs in the x-risk community orient towards and achieve their goals.

(A) I want to train the skill of dependability, in myself and others.

This is the skill of a) following through on commitments and b) making prosocial / difficult choices in the face of fear and aversion. The skill of doing the correct thing, despite going against incentive gradients, seems to be the key to virtue.

One strategy I’ve used is to surround myself with people with shared values (CFAR, Bay Area) and trust the resulting incentive gradients. I now believe it is also critical to be the kind of person who can take correct action despite prevailing incentive structures.

Dependability is also related to thinking clearly. Your ability to make the right decision depends on your ability to hold and be with all possible realities, especially painful and aversive ones. Most people have blindspots that actively prevent this.

I have some leads on how to train this skill, and I’d like both time and money to test them.

(B) Thinking clearly about AI risk

Most people’s decisions in the Bay Area AI risk community seem model-free. They themselves don’t have models of why they’re doing what they’re doing; they’re relying on other people “with models” to tell them what to do and why. I’ve personally carried around such premises. I want to help people explore where their ‘placeholder premises’ are and create safety for looking at their true motivations, and then help them become more internally and externally aligned.

(C) Burnout

Speaking of “not getting very far.” My personal opinion is that most ex-CFAR employees left because of burnout; I’ve written what I’ve learned here, see top 2 comments: [https://forum.effectivealtruism.org/posts/NDszJWMsdLCB4MNoy/burnout-what-is-it-and-how-to-treat-it#87ue5WzwaFDbGpcA7]. I’m interested in working with orgs and individuals to prevent burnout proactively.

(3) Some possible measurable outputs / artifacts:

My thoughts and reasoning

Lauren worked as an instructor at CFAR for about 2 years, until Fall 2018. I review CFAR’s impact as an institution below; in general, I believe it has helped set a strong epistemic foundation for the community and has been successful in recruitment and training. I have great appreciation for everyone who helps with its work.

Lauren is currently in a period of reflection and reorientation around her life and the problem of AGI, in part due to experiencing burnout in the months before she left CFAR. To my knowledge, CFAR has never been well-funded enough to offer high salaries to its employees, and I think it is valuable to ensure that people who work at EA orgs and burn out have the support to take the time for self-care after quitting due to long-term stress. Ideally, I think this should be addressed by higher salaries that allow employees to build significant runway to deal with shocks like this, but the current equilibrium of salary levels in EA does not make that easy. Overall, I think it’s likely that staff at highly valuable EA orgs will continue burning out, and I don’t currently see preventing this entirely as an achievable target (though I am in favor of people working on solving the problem).

I do not know Lauren well enough to evaluate the quality of her work on the art of human rationality, but multiple people I trust have given positive reviews (e.g. see Alex Zhu above), so I am also interested to read her output on the subjects she is thinking about.

I think it’s very important that people who work on developing an understanding of human rationality take the time to add their knowledge into our collective understanding, so that others can benefit from and build on top of it. Lauren has begun to write up her thoughts on topics like burnout, intentions, dependability, circling, and curiosity, and her having the space to continue to write up her ideas seemed like a significant additional positive outcome of this grant.

I think that she should probably aim to make whatever she does valuable enough that individuals and organizations in the community wish to pay her directly for her work. It’s unlikely that I would recommend renewing this grant for another 6-month period in the absence of a relatively exciting new research project or direction, and if Lauren were to reapply, I would want to have a much stronger sense that the projects she was working on were producing lots of value before deciding to recommend funding her again.

In sum, this grant hopefully helps Lauren to recover from burning out, get the new rationality projects she is working on off the ground, potentially identify a good new niche for her to work in (alone or at an existing organization), and write up her ideas for the community.

Ozzie Gooen ($70,000)

Build infrastructure for the future of effective forecasting efforts

From the application:

What I will do

I applied a few months ago and was granted $20,000 (thanks!). My purpose for this money is similar but greater in scope to the previous round. The previous funding has given me the security to be more ambitious, but I’ve realized that additional guarantees of funding should help significantly more. In particular, engineers can be costly and it would be useful to secure additional funding in order to give possible hires security.

My main overall goal is to advance the use of predictive reasoning systems for purposes most useful for Effective Altruism. I think this is an area that could eventually make use of a good deal of talent, so I have come to see my work at this point as foundational.

This work is in a few different areas that I think could be valuable. I expect that after a while a few parts will emerge as the most important, but think it is good to experiment early when the most effective route is not yet clear.

I plan to use additional funds to scale my general research and development efforts. I expect that most of the money will be used on programming efforts.

Foretold

Foretold is a forecasting application that handles full probability distributions. I have begun testing it with users and have been asked for quite a bit more functionality. I’ve also mapped out the features that I expect people will eventually desire, and think there is a significant amount of work that would be significantly useful.

One particular challenge is figuring out the best way to handle large numbers of questions (1000 active questions plus, at a time.) I believe this requires significant innovations in the user interface and backend architecture. I’ve made some wireframes and have experimented with different methods, and believe I have a pragmatic path forward, but will need to continue to iterate.

I’ve talked with members of multiple organizations at this point who would like to use Foretold once it has a specific set of features, and cannot currently use any existing system for their purposes. […]

Ken

Ken is a project to help organizations set up and work with structured data, in essence allowing them to have private versions of Wikidata. Part of the project is Ken.js, a library which I’m beginning to integrate with Foretold.

Expected Impact

The main aim of EA forecasting would be to better prioritize EA actions. I think that if we could have a powerful system set up, it could make us better at predicting the future, better at understanding what things are important and better at coming to a consensus on challenging topics.

Measurement

In the short term, I’m using heuristics like metrics regarding user activity and upvotes on LessWrong. I’m also getting feedback by many people in the EA research community. In the medium to long term, I hope to set up evaluation/estimation procedures for many projects and would include this one in that process.

My thoughts and reasoning

This grant is to support Ozzie Gooen in his efforts to build infrastructure for effective forecasting. Ozzie requested $70,000 to hire a software engineer to support his work on the prediction platform www.foretold.io.

Johannes Heidecke ($25,000)

Supporting aspiring researchers of AI alignment to boost themselves into productivity

From the application:

(1) We would like to apply for a grant to fund an upcoming camp in Madrid that we are organizing. The camp consists of several weeks of online collaboration on concrete research questions, culminating in a 9-day intensive in-person research camp. Participants will work in groups on tightly-defined research projects in strategy and technical AI safety. Expert advisors from AI Safety/Strategy organizations will help refine proposals to be tractable and relevant. This allows for time-efficient use of advisors’ knowledge and research experience, and ensures that research is well-aligned with current priorities. More information: https://aisafetycamp.com/

(2) The field of AI alignment is talent-constrained, and while there is a significant number of young aspiring researchers who consider focussing their career on research on this topic, it is often very difficult for them to take the first steps and become productive with concrete and relevant projects. This is partially due to established researchers being time-constrained and not having time to supervise a large number of students. The goals of AISC are to help a relatively large number of high-talent people to take their first concrete steps in research on AI safety, connect them to collaborate, and efficiently use the capacities of experienced researchers to guide them on their path.

(3) We send out evaluation questionnaires directly after the camp and in regular intervals after the camp has passed. We measure impact on career decisions and collaborations and keep track of concrete output produced by the teams, such as blog posts or published articles.

We have successfully organized two camps before and are in the preparation phase for the third camp taking place in April 2019 near Madrid. I was the main organizer for the second camp and am advising the core team of the current camp, as well as organizing funding.

An overview of previous research projects from the first 2 camps can be found here:

https://aisafetycamp.com/2018/06/05/aisc-1-research-summaries/

https://aisafetycamp.com/2018/12/07/aisc2-research-summaries/

We have evaluated the feedback from participants of the first two camps in the following two documents:

https://docs.google.com/document/d/1f8wvsvQTv4wdBaggCaK8aKC5gFdIHUDcihnmVkZPM6I/edit?usp=sharing

https://docs.google.com/document/d/18v2e-S3iZrOPbE7d9n26sUs1K6CkUAvezRvRj_xlcj8/edit?usp=sharing

My thoughts and reasoning

I’ve talked with various participants of past AI Safety camps and heard broadly good things across the board. I also generally have a positive impression of the people involved, though I don’t know any of the organizers very well.

The material and testimonials that I’ve seen so far suggest that the camp successfully points participants towards a technical approach to AI Alignment, focusing on rigorous reasoning and clear explanations, which seems good to me.

I am not really sure whether I’ve observed significant positive outcomes of camps in past years, though this might just be because I am less connected to the European community these days.

I also have a sense that there is a lack of opportunities for people in Europe to productively work on problems related to AI Alignment, so I am particularly interested in investing in infrastructure and events there. This does, however, make this a higher-risk grant: the camp and the people surrounding it might become the main hub for AI Alignment in Europe, and if their quality isn’t high enough, that could cause long-term problems for the AI Alignment community in Europe.

Concerns

I also coordinated with Nicole Ross from CEA’s EA Grants project, who had considered also making a grant to the camp. We decided it would be better for the LTF Fund team to make this grant, though we wanted to make sure that some of the concerns Nicole had with this grant were summarized in our announcement:

This seems to roughly mirror my concerns above.

I would want to engage with the organizers a fair bit more before recommending a renewal of this grant, but I am happy about the project as a space for Europeans to get engaged with alignment ideas and work on them for a week together with other technical and engaged people.

Broadly, the effects of the camp seem very likely to be positive, while the (financial) cost of the camp seems small compared to the expected size of the impact. This makes me relatively confident that this grant is a good bet.

Vyacheslav Matyuhin ($50,000)

An offline community hub for rationalists and EAs

From the application:

Our team is working on the offline community hub for rationalists and EAs in Moscow called Kocherga (details on Kocherga are here).

We want to make sure it keeps existing and grows into the working model for building new flourishing local EA communities around the globe.

Our key assumptions are:

  1. There’s a gap between the “monthly meetup” EA communities and the larger (and significantly more productive/important) communities. That gap is hard to close for many reasons.
  2. Solving this issue systematically would add a lot of value to the global EA movement and, as a consequence, the long-term future of humanity.
  3. Closing the gap requires a lot of infrastructure, both organizational and technological.

So we work on building such an infrastructure. We also keep in mind the alignment and goodharting issues (building a big community of people who call themselves EAs but who don’t actually share EA virtues would be bad, obviously).

[..]

Concretely, we want to:

  1. Add 2 more people to our team.
  2. Implement our new community building strategy (which includes both organizational tasks such as new events and processes for seeding new working groups, and technological tasks such as implementing a website which allows people from the community to announce new private meetups or team up for coaching or mastermind groups)
  3. Improve our rationality workshops (in terms of scale and content quality). Workshops are important for attracting new community members, for keeping the high epistemic standards of the community and for making sure that community members can be as productive as possible.

To be able to do this, we need to cover our current expenses somehow until we become profitable on our own.

My thoughts and reasoning

The Russian rationality community is surprisingly big, which suggests both a certain level of competence from some of its core organizers and potential opportunities for more community building. The community has:

This grant is to the team that runs the Kocherga anti-cafe.

Their LessWrong write-up suggests:

I find myself having slightly conflicted feelings about the Russian rationality community trying to identify and integrate more with the EA community. I think a major predictor of how excited I have historically been about community-building efforts has been a group’s emphasis on improving members’ judgement and thinking skills, and the degree to which it maintains high epistemic standards and careful thinking. I am quite excited about how Kocherga seems to have focused on those issues so far, and I am worried that this integration and change of identity will reduce that focus (as I think it has for some local and student groups that made a similar transition). That said, I think the Kocherga group has shown quite good judgement on this dimension (see here), which addresses many of my concerns, though I am still interested in thinking and talking about these issues further.

I’m somewhat concerned that I’m not aware of any major insights or unusually talented people from this community, but I expect the language barrier to be a big part of what is preventing me from hearing about those things. And I am somewhat confused about how to account for interesting ideas that don’t spread to the projects I care most about.

I think there are benefits to having an active Russian community that can take opportunities that are only available for people in Russia, or at least people who speak Russian. This particularly applies to policy-oriented work on AI alignment and other global catastrophic risks, which is also a domain that I feel confused about and have a hard time evaluating.

For a lot of the work that I do feel comfortable evaluating, I expect the vast majority of intellectual progress to be made in the English-speaking world, and as such, the question of how talent can flow from Russia to the existing communities working on the long-term future seems quite important. I hope this grant can facilitate a stronger connection between the rest of the world and the Russian community, to improve that talent and idea flow.

This grant seemed like a slightly better fit for the EA Meta Fund. They decided not to fund it, so we made the grant instead, since it still seemed like a strong proposal to us.

What I have seen so far makes me confident that this grant is a good idea. However, before we make more grants like this, I would want to talk more to the organizers involved and generally get more information on the structure and culture of the Russian EA and rationality communities.

Jacob Lagerros ($27,000)

Building infrastructure to give x-risk researchers superforecasting ability with minimal overhead

From the application:

Build a private platform where AI safety and policy researchers have direct access to a base of superforecaster-equivalents, and where aspiring EAs with smaller opportunity costs but excellent calibration perform useful work.

[…]

I previously received two grants to work on this project: a half-time salary from EA Grants, and a grant for direct project expenses from BERI. Since then, I dropped out of a Master’s programme to work full-time on this, seeing that was the only way I could really succeed at building something great. However, during that transition there were some logistical issues with other grantmakers (explained in more detail in the application), hence I applied to the LTF for funding for food, board, travel and the runway to make more risk-neutral decisions and capture unexpected opportunities in the coming ~12 months of working on this.

My thoughts and reasoning

There were three main factors behind my recommending this grant:

  1. My object-level reasons for recommending this grant are quite similar to my reasons for recommending Ozzie Gooen’s and Anthony Aguirre’s.
  2. Jacob has been around the community for about 3 years. The output of his that I’ve seen has included (amongst other things) competently co-directing EAGxOxford 2016, and some thoughtful essays on LessWrong (e.g. 1, 2, 3, 4).
  3. Jacob’s work seems useful to me, and is being funded on the recommendation of the FHI Research Scholars Programme and the Berkeley Existential Risk Initiative. He is also collaborating with others I’m excited about (Metaculus and Ozzie Gooen).

However, I did not assess the grant in detail, as Jacob only applied to us because of logistical complications with other grantmakers. Since FHI and BERI have already investigated the project in more detail, I was happy to suggest we pick up the slack to ensure Jacob has the runway to pursue his work.

Connor Flexman ($20,000)

Perform independent research in collaboration with John Salvatier

I am recommending this grant with more hesitation than most of the other grants in this round. The reasons for hesitation are as follows:

However, despite these reservations, I think this grant is a good choice. The two primary reasons are:

  1. Connor himself has worked on a variety of research and community-building projects, and both by my own assessment and that of other people I talked to, he has significant potential to become a strong generalist researcher, which I think is an axis on which a lot of important projects are bottlenecked.
  2. This grant was strongly recommended to me by John Salvatier, who is funded by an EA Grant and whose work I am generally excited about.

John did some very valuable community organizing while he lived in Seattle and is now working on developing techniques to facilitate skill transfer between experts in different domains. I think it is exceptionally hard to develop effective techniques for skill transfer, and more broadly techniques to improve people’s rationality and reasoning skills, but am sufficiently impressed with John’s thinking that I think he might be able to do it anyway (though I still have some reservations).

John is currently collaborating with Connor and requested funding to hire him to collaborate on his projects. After talking to Connor I decided it would be better to recommend a grant to Connor directly, encouraging him to continue working with John but also allowing him to switch towards other research projects if he finds he can’t contribute as productively to John’s research as he expects.

Overall, while I feel some hesitation about this grant, I think it’s very unlikely to have any significant negative consequences, and I assign significant probability to it helping Connor develop into an excellent generalist researcher, a type of person that I feel EA is currently quite bottlenecked on.

Eli Tyre ($30,000)

Broad project support for rationality and community building interventions

Eli has worked on a large variety of interesting and valuable projects over the last few years, many of them too small to have much payment infrastructure, resulting in him doing a lot of work without appropriate compensation. I think his work has been a prime example of picking low-hanging fruit by using local information and solving problems that aren’t worth solving at scale, and I want him to have resources to continue working in this space.

Concrete examples of projects he has worked on that I am excited about:

I think Eli has exceptional judgment, and the goal of this grant is to allow him to take actions with greater leverage by hiring contractors, paying other community members for services, and paying for other varied expenses associated with his projects.

Robert Miles ($39,000)

Producing video content on AI alignment

From the application:

My goals are:

  1. To communicate to intelligent and technically-minded young people that AI Safety:
    1. is full of hard, open, technical problems which are fascinating to think about
    2. is a real existing field of research, not scifi speculation
    3. is a growing field, which is hiring
  2. To help others in the field communicate and advocate better, by providing high quality, approachable explanations of AIS concepts that people can share, instead of explaining the ideas themselves, or sharing technical documents that people won’t read
  3. To motivate myself to read and internalise the papers and textbooks, and become a technical AIS researcher in future

My thoughts and reasoning

I think video is a valuable medium for explaining a variety of different concepts (for the best examples of this, see 3Blue1Brown, CGP Grey, and Khan Academy). While there are a lot of people working directly on improving the long-term future by writing explanatory content, Rob is the only person I know who has invested significantly in getting better at producing video content. I think this opens up a unique set of opportunities for him.

The videos on his YouTube channel pick up an average of ~20k views. His videos on the official Computerphile channel often pick up more than 100k views, including videos on topics like logical uncertainty and corrigibility (incidentally, a term Rob came up with).

More things that make me optimistic about Rob’s broad approach:

Rob is the first skilled person in the X-risk community working full-time on producing video content. Being the very best we have in this skill area, he is able to help the community in a number of novel ways (for example, he’s already helping existing organizations produce videos about their ideas).

Rob made a grant request during the last round, in which he explicitly requested funding for a collaboration with RAISE to produce videos for them. I currently don’t think that working with RAISE is the best use of Rob’s talent, and I’m skeptical of the product RAISE is currently trying to develop. I think it’s a better idea for Rob to focus his efforts on producing his own videos and supporting other organizations with his skills, though this grant doesn’t restrict him to working with any particular organization and I want him to feel free to continue working on RAISE if that is the project he thinks is currently most valuable.

Overall, Rob is developing a new and valuable skill within the X-risk community, and executing on it in a very competent and thoughtful way, making me pretty confident that this grant is a good idea.

MIRI ($50,000)

My thoughts and reasoning

In sum, I think MIRI is one of the most competent and skilled teams attempting to improve the long-term future, I have a lot of trust in their decision-making, and I’m strongly in favor of ensuring that they’re able to continue their work.

Thoughts on funding gaps

Despite all of this, I have not actually recommended a large grant to MIRI.

However, this is all complicated by a variety of countervailing considerations, such as the following three:

  1. Power law distributions of impact only really matter in this way if we can identify which interventions we expect to be in the right tail of impact, and I have a lot of trouble properly bounding my uncertainty here.
  2. If we are faced with significant uncertainty about cause areas, and we need organizations to have worked in an area for a long time before we can come to accurate estimates about its impact, then it’s a good idea to invest in a broad range of organizations in an attempt to get more information. This is related to common arguments around “explore/exploit tradeoffs”.
  3. Sometimes, making large amounts of funding available to one organization can have negative consequences for the broader ecosystem of a cause area. Also, giving an organization access to more funding than it can use productively may cause it to make too many hires or lose focus by trying to scale too quickly. Having more funding often also attracts adversarial actors and increases competitive stakes within an organization, making it a more likely target for attackers.

I can see arguments that we should expect additional funding for the best teams to be spent well, even accounting for diminishing margins, but on the other hand I can see many meta-level concerns that weigh against extra funding in such cases. Overall, I find myself confused about the marginal value of giving MIRI more money, and will think more about that between now and the next grant round.

CFAR ($150,000)

[Edit: It seems relevant to mention that LessWrong currently receives operational support from CFAR, in a way that makes me technically an employee of CFAR (similar to how ACE and 80K were/are part of CEA for a long time). However, LessWrong operates as a completely separate entity with its own fundraising and hiring procedures, and I don't feel any hesitation or pressure about critiquing CFAR openly because of that relationship. I do find myself a tiny bit more hesitant to speak harshly of specific individuals, simply because I work only a floor away from the CFAR offices, and that does have some psychological effect on me. However, the same was true for CEA while LessWrong was located in the CEA office for a few months, and for the residents of my group house while LessWrong was located in its living room for most of the past two years, so I don't think this effect is particularly large.]

I think that CFAR’s intro workshops have historically had a lot of positive impact. I think they have done so via three pathways.

  1. Establishing epistemic norms: I think CFAR workshops are quite good at helping the EA and rationality community establish norms about what good discourse and good reasoning look like. As a concrete example of this, the concept of Double Crux has gotten traction in the EA and rationality communities, which has improved the way ideas and information spread throughout the community, how ideas get evaluated, and what kinds of projects get resources. More broadly, I think CFAR workshops have helped in establishing a set of common norms about what good reasoning and understanding look like, similar to the effect of the sequences on LessWrong.
    1. I think that it’s possible that the majority of the value of the EA and rationality communities comes from having that set of shared epistemic norms that allows them to reason collaboratively in a way that most other communities cannot (in the same way that what makes science work is a set of shared norms around what constitutes valid evidence and how new knowledge gets created).
    2. As an example of the importance of this: I think a lot of the initial arguments for why AI risk is a real concern were “weird” in a way that was not easily compatible with a naive empiricist worldview that I think is pretty common in the broader intellectual world.
      1. In particular, the arguments for AI risk are hard to test with experiments or empirical studies, but hold up from the perspective of logical and philosophical reasoning and are generated by a variety of good models of broader technological progress, game theory, and related areas of study. But for those arguments to find traction, they required a group of people with the relevant skills and habits of thought for generating, evaluating, and having extended intellectual discourse about these kinds of arguments.
  2. Training: A percentage of intro workshop participants (many of whom were already working on important problems within X-risk) saw significant improvements in competence and, as a result, became substantially more effective in their work.
  3. Recruitment: CFAR has helped many people move from passive membership in the EA and rationality community to having strong social bonds in the X-risk network.

While I do think that CFAR has historically caused a significant amount of impact, I feel hesitant about this grant because I am unsure whether CFAR can continue to create the same amount of impact in the future, for a few reasons.

However, there are some additional considerations that led me to recommend this grant.

In the last year, I had some concerns about the way CFAR communicated a lot of its insights, and I sensed an insufficient emphasis on a kind of robust and transparent reasoning that I don't have a great name for. I don't think the communication style I was advocating for is always the best way to make new discoveries, but it is very important for establishing broader community-wide epistemic norms, and it enables a kind of long-term intellectual progress that I think is necessary for solving the intellectual challenges we'll need to overcome to avoid global catastrophic risks. From my perspective, CFAR is likely to respond to last year's events by improving their communication and reasoning style in this respect.

My overall read is that CFAR is performing a variety of valuable community functions and has a strong enough track record that I want to make sure that it can continue existing as an institution. I didn’t have enough time this grant round to understand how the future of CFAR will play out; the current grant amount seems sufficient to ensure that CFAR does not have to take any drastic action until our next grant round. By the next grant round, I plan to have spent more time learning and thinking about CFAR’s trajectory and future, and to have a more confident opinion about what the correct funding level for CFAR is.


Ben_Kuhn @ 2019-04-10T09:45 (+130)

I think we should think carefully about the norm being set by the comments here.

This is an exceptionally transparent and useful grant report (especially Oliver Habryka's). It's helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.

But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.

If you value transparency in EA and want to see more of it (and you're not a donor to the LTF fund), it seems to me like you should chill out here. That doesn't mean don't question the grants, but it does mean you should:

Michelle_Hutchinson @ 2019-04-10T14:02 (+48)

I strongly agree with this. EA Funds seems to have had a tough time finding grantmakers who were both qualified and had sufficient time, and I would expect that to be partly because of the harsh online environment previous grantmakers faced. The current team seems to have impressively addressed the worries people had, by donating to smaller and more speculative projects and providing detailed write-ups on them. I imagine that in-depth, harsh attacks on each grant decision will make it still harder to recruit great people for these committees, and mean those serving on them are likely to step down sooner. That's not to say we shouldn't be discussing the grants - presumably it's useful for the committee to hear other people's views on the grants to get more information about them. But following Ben's suggestions seems crucial to EA Funds continuing to be a useful way of donating into the future. In addition, to try to engage more in collaborative truthseeking rather than adversarial debate, we might try to:

  • Focus on constructive information / suggestions for future grants rather than going into depth on what's wrong with grants already given.
  • Spend at least as much time describing which grants you think are good and how, so that they can be built on, as on things you disagree with.
Milan_Griffes @ 2019-04-10T15:14 (+28)

+1

I think it's great that the Fund is trending towards more transparency & a broader set of grantees (cf. November 2018 grant report, cf. July 2018 concerns about the Fund).

And I really appreciate the level of care & attention that Oli is putting towards this thread. I've found the discussion really helpful.

Milan_Griffes @ 2019-04-10T15:54 (+25)

Relatedly, is Oli getting compensated for the work he's putting into the Long-Term Future Fund?

Seems good to move towards a regime wherein:

  • The norm is to write up detailed, public grant reports
  • Community members ask a bunch of questions about the grant decisions
  • The norm is that a representative of the grant-making staff fields all of these questions, and is compensated for doing so
Habryka @ 2019-04-10T19:04 (+41)

I don't get compensated, though I also don't think compensation would make much of a difference for me or anyone else on the fund (except maybe Alex).

Everyone on the fund is basically dedicating all of their resources towards EA stuff, and is generally giving up most of their salary potential by working in EA. I don't think it would make much sense for us to get more money, given that we are already de facto donating everything above a certain threshold (either literally, in the case of the two Matts, or indirectly, by taking a pay cut to work in EA).

I think if people give more money to the fund because they come to trust the decisions of the fund more, then that seems like it would incentivize more things like this. Also if people bring up strong arguments against any of the reasoning I explained above, then that is a great win, since I care a lot about our fund distributions getting better.

Milan_Griffes @ 2019-04-10T19:21 (+14)

Got it.

The reason compensation seems good is that it formalizes the duty of engaging with the community's discourse, which probably pushes us further towards the above regime.

Right now, the community is basically banking on you & other fund managers caring a lot about engaging with the community. This is great, and it's great that you do.

Layering on compensation seems like a way of bolstering this engagement. If someone is compensated for this engagement, there's an increased incentive for them to do it. (Though there's probably some weirdness around Goodharting here.)

cf. Role of ombudsperson in public governance

Khorton @ 2019-04-10T19:40 (+7)

Compensation is also good in case you ever retire and someone else with different financial needs takes over (but it doesn't seem super important - there are other things you could solve first).

Raemon @ 2019-04-10T20:23 (+5)

I think that makes sense, but in practice it's something that makes more sense to handle through their day jobs. (If they went the route of hiring someone for whom managing the fund was their actual day job, I'd agree that generally higher salaries would be good, for mostly the same reasons they'd be good across the board in EA.)

Milan_Griffes @ 2019-06-02T05:11 (+19)

Now that the dust has settled a bit, I'm curious what Habryka & the other fund managers think of the level of community engagement that occurred on this report...

  • What kinds of engagement seemed helpful?
  • What kinds of engagement seemed unnecessary?
  • What kinds of engagement were emotionally expensive to address?
  • Does it seem sustainable to write up grantmaker reasoning at this level of detail, for each grantmaking round going forward?
  • Does it seem sustainable to engage with questions & comments from the community at this level of detail, for each grantmaking round going forward?
Habryka @ 2019-06-05T02:05 (+4)

I have a bunch of complicated thoughts here. Overall I have been quite happy with the reception to this, and think the outcomes of the conversations on the post have been quite good.

I am a bit more time-strapped than usual, so I will probably wait on writing a longer retrospective until I set aside a bunch of time to answer questions on the next set of writeups.

Stefan_Schubert @ 2019-04-10T10:29 (+14)

Agree with this, especially the comments about rudeness. This also means that I disagree with Oli's comment elsewhere in this thread:

that people should feel free to express any system-1 level reactions they have to these grants.

In line with what Ben says, I think people should apply a filter to their system-1 level reactions, and not express them whatever they are.

Habryka @ 2019-04-10T22:09 (+27)

I think that people should feel comfortable sharing their system-1 expressions, in a way that does not immediately imply judgement.

I am thinking of stuff like the non-violent communication patterns, where you structure your observation in the following steps:

1. List a set of objective observations

2. Report your experience upon making those observations

3. Share your personal interpretations of those experiences and what they imply about your model of the world

4. State any requests that follow from those models

I think it's fine to stop part-way through this process, but that it's generally a good idea to not skip any steps. So I think it's fine to just list observations, and it's fine to just list observations and then report how you feel about those things, as long as you clearly indicate that this is your experience and doesn't necessarily involve judgement. But it's a bad idea to immediately skip to the request/judgement step.

Stefan_Schubert @ 2019-04-10T23:05 (+4)

OK, that is clarifying. Maybe your original comment could have been clearer, since this framing is quite different.

The issue that you raise in this comment is a big debate, and this is maybe not the place to discuss it in detail. In any case, as stated my view is that people should think carefully before they comment, and not run with their immediate feelings on sensitive topics.

Davis_Kingsley @ 2019-04-08T21:13 (+87)

I don't agree with all of the decisions being made here, but I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka. Seeing this type of documentation has caused me to think significantly more favorably of the fund as a whole.

Will there be an update to this post with respect to which projects actually get funded following these recommendations? One aspect that I'm not clear on is to what extent CEA will "automatically" follow these recommendations and to what extent there will be significant further review.

Habryka @ 2019-04-08T21:21 (+14)

I will make sure to update this post with any new information about whether CEA can actually make these grants. My current guess is that maybe 1-2 grants will not be logistically feasible, but the vast majority should have no problem.

Elityre @ 2019-04-11T01:08 (+10)
I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka

Hear, hear.

I feel proud of the commitment to epistemic integrity that I see here.

Peter_Hurford @ 2019-04-08T21:16 (+75)

Thanks Habryka for raising the bar on the amount of detail given in grant explanations.

Habryka @ 2019-04-26T02:10 (+61)

This is the feedback that I sent to Greg about his EA Hotel application, published with his permission. (He also provided some good responses, which I hope he will post as a reply.)

Thoughts on the EA Hotel:

The EA Hotel seems broadly pretty promising, though I do also have a good amount of concerns. First, the reasons why I am excited about the EA Hotel:

Providing a safety net: I think psychological safety matters a lot for people being able to take risks and have creative thoughts. Given that I think most of the value of the EA community comes from potentially pursuing high-risk projects and proposing new unconventional ideas, improving things on this dimension strikes me as pretty key for the success of the overall community.

I expect the EA Hotel has a chance to serve as a cheap, distributed safety net for a lot of people who are worried that if they start working on EA stuff, they will run out of money soon and will then potentially end up having to take drastic actions. The EA Hotel can both significantly extend those people's runway and soften the costs of running out of money for anyone who is working on EA-related ideas.

Acting on historical interest: There has been significant historical interest in creating an EA hub in a location with much lower living expenses, and from a process perspective I think we should very strongly reward people who feel comfortable acting on that level of community interest. Even if the EA Hotel turns out to be a bad idea, it strikes me as important that community members can take risks like this and have at least their expenses reimbursed afterwards (even if it turns out that the idea doesn't work out when implemented), as long as they went about pursuing the project in a broadly reasonable way.

Building high-dedication cultures: I generally think that developing strong cultures of people with high levels of dedication is a good way of multiplying the efforts of the people involved, and is something that should be encouraged. I think the EA Hotel has a chance to develop a strong high-dedication culture because moving to it requires a level of sacrifice (moving to Blackpool) that will only cause people above a pretty high dedication threshold to show up. I do also think this can backfire (see the later section on concerns).

I do, however, also have a set of concerns about the hotel. Over the past few weeks, as more things have been written about the hotel, I have started feeling more positive towards it, and would likely recommend a grant to the EA Hotel in the next LTF-Fund grant round, though I am not certain.

I think the EA Hotel is more likely to be net negative than most other projects I have recommended grants to, though I don't think it has a significant chance of being hugely negative.

Here are the concrete models around my concerns:

1. I think the people behind the EA Hotel were initially overeager to publicize the EA Hotel via broad media outreach in things like newspapers and other media outlets with broad reach. I think interaction with the media is well-modeled as a unilateralist's-curse-like scenario in which many participants individually have the choice to create a media narrative, and whoever moves first frames a lot of the media conversation. In general, I think it is essential for organizations in the long-term future space to recognize this kind of dynamic and be hesitant to take unilateral action in cases like this.

I think the EA Hotel does not benefit much from media attention, and the community at large likely suffers from the EA Hotel being widely discussed in the media (not because it's weird, which is a dimension on which I think EA is broadly far too risk-averse, but instead because it communicates the presence of free resources that are for the taking of anyone vaguely associated with the community, which tends to attract unaligned people and cause adversarial scenarios).

Note: Greg responded to this and I now think this point is mostly false, though I still think something in this space went wrong.

2. I think there is a significant chance of the culture of the EA Hotel becoming actively harmful for the people living there, and of it sparking unnecessary conflict in the broader community. There are two reasons why I am more worried about this for the EA Hotel than for most other locations.

3. I don't have a sense that Greg wants to really take charge of the logistics of running the hotel, and don't have a great candidate for someone else to run it. Though it seems pretty plausible that we could find someone to run it if we invest some time into finding someone.

Summary:

Overall, I think all of my concerns can be overcome, at which point I would be quite excited about supporting the hotel. It seems easy to change the way the hotel relates to the media, I think there are a variety of things one could do to avoid cultural problems, and I think we could find someone who can take charge of the logistics of running the hotel.

At the moment, I think I would be in favor of giving a grant that covers the runway of the hotel for the next year. (There is the further question of whether they should get enough money to buy the hotel next door, which is something I am much less certain about)

Greg_Colbourn @ 2019-04-26T17:46 (+52)

My response (edited from my email to Habryka)

I think I would be in favor of giving a grant that covers the runway of the hotel for the next year.

Wow this is awesome, thanks!

Thoughts on the EA Hotel:

Thanks for your detailed response.

First, the reasons why I am excited about the EA Hotel:
Providing a safety net ...
Acting on historical interest ...
Building high-dedication cultures ...

All good reasons, eloquently put!

1. I think the people behind the EA Hotel were initially overeager to publicize the EA Hotel via broad media outreach in things like newspapers and other media outlets with broad reach.

I think this is based on an unfortunate misconception. The whole thing with the media interest has been quite surprising to us. We have never courted the media - quite the opposite in fact. It started with The Economist approaching us. This was whilst I was on holiday and out of communication. The first I heard about it was 3 days before they went to press (the piece appeared in print whilst I was still away). The journalist was told not to come to Blackpool. I spoke to them on the phone and said I wanted more time to think about it and discuss it with people. They went ahead anyway and were told “no comment” by a resident when they knocked on the door. They picked up the story from Slate Star Codex originally and decided — whether we liked it or not — to run a piece on it. I don’t think there was anything we could’ve done to prevent it.

After that, The Times (and many other media outlets) picked it up. The Times journalist booked a call with me via my Calendly. At the exact time I was expecting the call, there was a knock on the door instead and she was there with a photographer - we got doorstepped! I had a panicked 5 minute talk “off the record” with her outside, where she explained that they were going to write about us anyway (whether we liked it or not) so I might as well let her in to interview some people so we could have at least some control over the narrative (we had none with the Economist piece).

I thought it went pretty well, and the piece could’ve been worse. However, they printed some errors, despite me sending clarifications - see the “For the record” here - which made me lose more faith in the journalistic process. It seems that even when you send corrections/clarifications they don’t factor them in if it doesn’t fit their narrative. And of course you have no right of reply (or at least no right of reply at the same level of visibility).

After The Times we had another national newspaper showing up at the door unannounced the next day. Gave them nothing despite them being very persistent.

In the next couple of weeks we were inundated with media requests. We discussed the issue with many people in EA and at CEA and 80k and decided against embracing the media (we could’ve been on prime-time TV and radio with millions of viewers/listeners). The decision was largely based around considerations encapsulated by the fidelity model of spreading ideas and the Awareness/Inclination Model of movement growth. We have turned down something like 20 media requests since The Times. Most were in October following the initial media interest. But we still get some every now and again. The outside view from my friends and family is that I’m completely crazy not to accept any of these offers. I think it’s probably the right call for the EA movement, but I’m still not 100% sure given that there is basically no data on the impact of mass media appearances on movement growth/talent discovery for EA - as far as I can tell, there hasn’t been any since the launch of GWWC and 80k (I’m talking appearances in national/international media with millions of viewers/readers here).

To try and avoid this misconception being perpetuated, I have added a disclaimer to the media page on our website saying that we have never courted the media. Also, journalistic ethics are such that requesting the media cover you is not something you can easily do or be successful with. You can write a press release and send it out, but they don’t generally do requests (note we definitely did not post a press release, nor do anything to publicise the project really, apart from me posting my initial EA Forum piece and sharing it on a couple of EA Facebook groups).

because it communicates the presence of free resources that are for the taking of anyone vaguely associated with the community

I’ve actually been surprised at how few applicants from outside the movement we’ve had, even after the media.

2. I expect the EA Hotel to attract a kind of person who is pretty young, highly dedicated and looking for some guidance on what to do with their life. I think this makes them a particularly easy and promising target for people who tend to abuse that kind of trust relationship and who are looking for social influence.

Yes, one thing I’m wary of is anyone looking to gain too much social influence at the hotel. Note that the average age is actually reasonably high at around 28 though (range 20-40) (i.e. there are a fair few people changing the trajectory of their careers).

the EA Hotel could form a geographically and memetically isolated group that is predisposed for conflict with the rest of the EA community in a way that could result in a lot of negative-sum conflict.

I don’t think we are especially memetically isolated - most of us keep up with the EA Forum and EA Facebook groups etc. There is generally a high level of shared memetic culture/jargon etc that is general to the broader movement. Geographically, many guests have travelled to EA events in continental Europe, and have visited other UK EA hubs like London and Oxford.

3. I don't have a sense that Greg wants to really take charge of the logistics of running the hotel, and don't have a great candidate for someone else to run it. Though it seems pretty plausible that we could find someone to run it if we invest some time into finding someone.

Yes, it’s not something that I want to do long term (although I have been doing a lot). And it’s taking a lot longer than I initially thought it would take to get things fully set up (especially setting up a charity and fundraising). There are two main aspects to the job really - logistics and guest mentoring/vetting. Currently one of the guests is taking on most of the cooking/dishes/food monitor work, and we have a rota for weekends (this could potentially be outsourced with more funds available). And we have a cleaner doing the cleaning/room changes. (Interim Manager) Toon has been doing checkins with guests to discuss their work. He’s only been working on the hotel part time though (he also runs RAISE) and is leaving in a couple of months. We haven’t been able to start the process for hiring a full time manager to take over due to funding insecurity. Would be great if you could help us find someone, thanks!

casebash @ 2019-04-26T10:42 (+20)

I thought I'd share my impressions as someone who has spent significant time at the EA Hotel.

I think this makes them a particularly easy and promising target for people who tend to abuse that kind of trust relationship and who are looking for social influence.

Most of the people at the EA Hotel have been involved in the movement for a while, so they already have reasonably well-developed positions.

it's plausible that the EA Hotel could form a geographically and memetically isolated group that is predisposed for conflict with the rest of the EA community in a way that could result in a lot of negative-sum conflict.

The EA Hotel has a limit of 2 years of free accommodation (although it is possible exceptions might be made). Most people tend to stay only a certain number of months, given that it is Blackpool and not the most desirable location. Further, there are regular visitors and there is frequent changeover in the guests. I actually feel more memetically isolated in Australia than when I was at the EA Hotel, especially since visiting London is relatively easy.

Generally high-dedication cultures are more likely to cause people to overcommit or take drastic actions that they later regret

None of the projects that I am aware of being undertaken at the EA Hotel seemed especially high-risk. But beyond this, whoever is running the check-ins will have an opportunity to steer people away from high-risk projects.

RobBensinger @ 2019-04-26T03:26 (+9)

Thanks for continuing to write up your thoughts in so much detail, Oliver; this is super interesting and useful stuff.

When you say "Note: Greg responded to this and I now think this point is mostly false", I assume that "this" refers to the previous point (1) rather than the subsequent point (2)?

Habryka @ 2019-04-26T04:27 (+4)

Yes, that corresponds to point (1), not point (2)

Jonas Vollmer @ 2019-04-27T16:09 (+6)

A point I'd personally want to add to Habryka's list: I'm currently unsure whether there is sufficiently good vetting of guests. Since the EA Hotel provides valuable services (almost) for free, it kind of acts as a de facto grantmaker, and runs the risk of funding people who are accidentally doing harm. There are reasons to think that harmful projects will be overrepresented in the application pool (Habryka also made some similar points). As I understand it, the EA Hotel is currently improving their vetting, which I think will be a step in the right direction, and could potentially resolve this issue.

Habryka @ 2019-04-27T21:43 (+18)

I am hesitant about this. I think a high barrier to entry for the EA Hotel might drastically reduce the psychological safety it could provide to many people, and thereby undermine its ability to serve as a functional social safety net that allows people to take high-risk actions (including in the social domain, in the form of criticisms of high-status people or institutions).

Jonas Vollmer @ 2019-05-01T00:28 (+4)

Interesting! I agree with the points you make, but I was hoping that good vetting wouldn't suffer from these problems.

Habryka @ 2019-05-01T01:19 (+8)

I think that if we had a vetting process that people could trust to reliably identify good people, even if those people had recently made a bunch of critical statements about high-status institutions or something in that reference class (or had their most recent project fail dramatically, etc.), then that might be fine.

But I think having such a vetting process, having it achieve a very low false-negative rate, and making it transparent that the process really is that good are all difficult enough that the cost seems too high.

Jonas Vollmer @ 2019-05-01T14:44 (+6)

There already is a basic vetting process; I'd mostly welcome fairly gradual improvements to lower downside risk. (I think my initial comment sounded more like the bar should be fairly high, similar to that of, e.g., the LTFF. This is not what I intended to say; I think it should still be considerably lower.)

I think even just explicitly saying something like "we welcome criticism of high-status people or institutions" would go a long way for both shaping people's perception of the vetting process and shaping the vetters' approach.

That said, your arguments did update me in the direction "small changes to the vetting process seem better than large changes."

Jonas Vollmer @ 2019-04-27T16:00 (+6)

My impression of how the EA Hotel crew dealt with media attention was something like "better than many did in the early stages (including myself in the early stages of EAF) but (due to lack of experience or training) considerably worse than most EA orgs would do these days." There are many counterintuitive lessons to be learnt, many of which I still don't fully understand, either.

However, since the initial media interest has abated, I think this isn't really relevant for current grants anyway.

MichaelPlant @ 2019-04-27T16:08 (+5)

Can you say what these lessons are? It would be good to have a write-up of advice, and I would like to see an EA Forum post on this.

Jonas Vollmer @ 2019-04-27T17:03 (+13)

CEA's semi-internal media advice contains some valuable lessons. I was going to post a write-up on the EA Forum at some point, but given that media attention has been de-emphasized as an EA priority since, I decided against pursuing that (I also have some old "EA media strategy" presentation slides but unfortunately, they're in German). If lots of people thought this would be valuable, or if we learned that EA-Hotel-type issues occur on a regular basis, I'd consider it, though. (I also think much of my experience is only relevant to global poverty and animal welfare, not to AI or other cause areas.)

aarongertler @ 2019-04-29T22:53 (+10)

Personally, I'd be interested to see this writeup, and I'd definitely chip in with some of my thoughts if you posted it.

Khorton @ 2019-04-08T22:33 (+47)

I'm happy that people are pushing back on some of these grants, and even happier that Habryka is responding so graciously. However, I'm concerned that some comments are bordering on unhelpfully personal.

I'd suggest that, when criticising a particular project, commenters should try to explain the rule or policy that would help grant-makers avoid the same problem in the future. That should also help us avoid making things personal.

Examples I stole from other comments and reworded:

-"I'm skeptical of the grant to X because I think grantmakers should recuse themselves from granting to their friends." (I saw this criticism but don't actually know who it's referring to.)

-"I don't think any EA Funds should be given to printing books that haven't been professionally edited."

-"I think that people like Lauren should have funds available after they burn out, but I don't think the Long-Term Future Fund is the right source of post-burnout funds."

Habryka @ 2019-04-09T00:01 (+24)

I agree with this, but also think that people should feel free to express any system-1 level reactions they have to these grants. In my experience it can often be quite hard to formalize a critique into a concrete, operationalized set of policy changes, even if the critique itself is good and valid, and I don't think I want to force all commenters to fully formalize their beliefs before they can express them here.

I do think the end goal of the conversation should be a set of policies that the LTF-Fund can implement.

Raemon @ 2019-04-09T19:33 (+31)

I have a weird mix of feelings and guesses here.

I think it's good on the margin for people to be able to express opinions without needing to formalize them into recommendations for the reason stated here. I think the overall conversation happening here is very important.

I do still feel pretty sad looking at the comments here — some of the commenters seem to not have a model of what they're incentivizing.

They remind me of the stereotype of a parent whose kid has moved away and grown up and doesn't call very often. And periodically the kid does call, but the first thing they hear is the parent complaining "why don't you ever call me?", which makes the kid less likely to call home.

EA is vetting constrained.

EA is network constrained.

These are actual hard problems, that we're slowly addressing by building network infrastructure. The current system is not optimal or fair, but progress won't go faster by complaining about it.

It can potentially go faster via improvements in strategy and re-allocation of resources. But each of those improvements comes with a tradeoff. You could hire more grantmakers full-time, but those grantmakers are generally working full-time on something else comparably important.

This writeup is unusually thorough, and Habryka has been unusually willing to engage with comments and complaints. But I think Habryka has higher-than-average willingness to deal with that.

When I imagine future people considering

a) whether to be a grantmaker,

b) whether to write up their reasons publicly

c) whether to engage with comments on those reasons

I predict that some of the comments on this thread will make all of those less likely (in escalating order). It also potentially makes grantees less likely to consent to public discussion of their evaluation, since it might get ridiculed in the comments.

Because EA is vetting constrained, I think public discussion of grant-reasoning is particularly important. It's one of the mechanisms that'll give people a sense of what projects will get funded and what goes into a grantmaking process, and get a lot of what's currently 'insider knowledge' more publicly accessible.

toonalfrink @ 2019-04-10T13:38 (+10)

As a potential grant recipient (not in this round) I might be biased, but I feel like there is a clear answer to this. No one is able to level up without criticism, and the quality of your decisions will often be bottlenecked by the amount of feedback you receive.

Negative feedback isn't inherently painful. It is only painful if there is an alief that failure is not acceptable. Of course, the truth is that failure is necessary for progress, and if you truly understand this, negative feedback feels good. Even if it's in bad faith.

Given that grantmakers are essentially at the steering wheel of EA, we can't afford for those people not to internalize this. They need to know all the criticism to make a good decision, so they should cherish it.

Of course, we can help them get into this state of mind by celebrating their willingness to open up to scrutiny, along with the scrutiny itself.

Khorton @ 2019-04-10T19:11 (+17)

I think on a post with 100+ comments the quality of decisions is more likely to be bottlenecked by the quality of feedback than the quantity. Being able to explain why you think something is a bad idea usually results in higher quality feedback, which I think will result in better decisions than just getting a lot of quick intuition-based feedback.

RyanCarey @ 2019-04-08T13:45 (+43)

This is a strong set of grants, much stronger than the EA community would've been able to assemble a couple of years ago, which is great to see.

When will you be accepting further applications and making more grants?

Habryka @ 2019-04-08T18:41 (+15)

I don't know yet. My guess is in around two months.

Habryka @ 2019-06-30T18:20 (+3)

Answer turned out to be closer to 3 months.

Evan_Gaensbauer @ 2019-04-17T08:14 (+41)

Summary: This is the most substantial round of grant recommendations from the EA Long-Term Future Fund to date, so it is a good opportunity to evaluate the performance of the Fund after changes to its management structure in the last year. I am measuring the performance of the EA Funds on the basis of what I am calling 'counterfactually unique' grant recommendations, i.e., grant recommendations that, without the Long-Term Future Fund, neither individual donors nor larger grantmakers like the Open Philanthropy Project would have identified or funded.

Based on that measure, 20 of 23 grant recommendations (87%), worth $673,150 of $923,150 (~73% of the money to be disbursed), are counterfactually unique. Having read all the comments, I saw multiple concerns raised about a few specific grants, based on uncertainty or controversy about the estimated value of those recommendations. Even if we exclude those grants to make a 'conservative' estimate, 16 of 23 grant recommendations (69.5%), worth $535,150 of $923,150 (~58% of the money to be disbursed), are counterfactually unique and fit into a more risk-averse approach that would have ruled out the more uncertain or controversial successful applicants.

These numbers represent an extremely significant improvement, relative to a year ago, in the quality and quantity of unique grantmaking opportunities the Long-Term Future Fund has identified. This grant report generally succeeds at the goal of coordinating donations through the EA Funds to unique recipients who would otherwise have been overlooked for funding by individual donors and larger grantmakers. This report is also the most detailed of its kind, and it lays the groundwork for a detailed assessment of the Long-Term Future Fund's track record going forward. I hope the other EA Funds emulate and build on this approach.

General Assessment

In his 2018 AI Alignment Literature Review and Charity Comparison, Larks had the following to say about changes in the management structure of the EA Funds.

I’m skeptical this will solve the underlying problem. Presumably they organically came across plenty of possible grants – if this was truly a ‘lower barrier to giving’ vehicle than OpenPhil they would have just made those grants. It is possible, however, that more managers will help them find more non-controversial ideas to fund.

To clarify, the purpose of the EA Funds has been to allow individual donors who are relatively small compared to grantmakers like the Open Philanthropy Project (which includes all donors in EA except other professional, private, non-profit grantmaking organizations) to identify higher-risk grants for projects that are still small enough that they would be missed by an organization like Open Phil. So, for a respective cause area, an EA Fund functions like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.

Of the $923,150 of grant recommendations made to the Centre for Effective Altruism for the EA Long-Term Future Fund in this round of grantmaking, only $250,000 went to the kind of projects or organizations that the Open Philanthropy Project tends to fund. To clarify, there isn't a rule or practice of the EA Funds not making those kinds of grants. It's at the discretion of the fund managers to decide whether they should recommend grants at a given time to more typical grant recipients in their cause area, or to newer, smaller, and/or less-established projects/organizations. At the time of this grantmaking round, recommendations to better-established organizations like MIRI, CFAR, and Ought were considered the best proportional use of the marginal funds allotted for disbursement.

20 grant recommendations (~87% of the total number) totalling $673,150 = ~73%

+ 3 grant recommendations (~13% of the total number) totalling $250,000 = ~27%

= 23 grant recommendations (in total) totalling $923,150 = 100%


Since this is the most extensive round of grant recommendations from the Long-Term Future Fund to date under the EA Funds' new management structure, this is the best available opportunity for evaluating the success of the changes made to how the EA Funds are managed. In this round of grantmaking, 87% of the grant recommendations, totalling 73% of the money to be disbursed, were for efforts that would otherwise have been missed by individual donors or larger grantmaking bodies.

In other words, the Long-Term Future (LTF) Fund is directly responsible for 20 of the 23 grant recommendations made (87%), totalling $673,150 (73% of the $923,150 disbursed) in unique grants that presumably would not have been identified had individual donors not been able to pool and coordinate their donations through the LTF Fund. I keep highlighting these numbers because they can essentially be thought of as the LTF Fund's current rate of efficiency in fulfilling the purposes it was set up for.

Criticisms and Conservative Estimates

Above is the estimate for the number of grants, and the amount of donations to the EA Funds, that are counterfactually unique to the EA Funds, which can be thought of as a measure of how effective the impact of the Long-Term Future Fund in particular is. That is the estimate for the grants that donors to the EA Funds very probably could not have identified by themselves. Yet another question is whether they would opt to donate to the grant recommendations that have just been made by the LTF fund managers. Part of the basis for the EA Funds thus far is to trust the fund managers' individual discretion, based on their years of expertise or professional experience working in the respective cause area. My above estimates are based on the assumption that all the counterfactually unique grant recommendations the LTF Fund makes are indeed effective. We can think of those numbers as a 'liberal' estimate.

I've at least skimmed or read all 180+ comments on this post thus far, and a few persistent concerns with the grant recommendations have stood out. These were concerns that the evidence base on which some grant recommendations were made wasn't sufficient to justify the grant, i.e., that they were 'too risky.' If we exclude the grant recommendations that are subject to multiple unresolved concerns, we can make a 'conservative' estimate of the percentage and dollar value of counterfactually unique grant recommendations made by the LTF Fund.

In total, there are 4 grants worth $138,000 that multiple commenters have raised concerns with, on the basis that the uncertainty around these grants means the recommendations don't seem justified. To clarify, I am not making an assumption about what the value of these grants is. All I would say about these particular grants is that they are unconventional, but that insofar as the EA Funds are intended to be a kind of index fund willing to back more experimental efforts, these projects fit within the established expectations of how the EA Funds are to be managed. Reading all the comments, the one helpful, concrete suggestion was for the LTF Fund to follow up in the future with grant recipients and publish their takeaways from the grants.

Of the 20 recommendations made for unique grant recipients worth $673,150, if we exclude these 4 recommendations worth $138,000, that leaves 16 of 23 recommendations (69.5% of the total), worth $535,150 of $923,150 (~58% of the total value of grant recommendations), uniquely attributable to the EA Funds. Again, the grant recommendations excluded from this 'conservative' estimate are ruled out based on the uncertainty or lack of confidence expressed by commenters, not necessarily by the fund managers themselves. While presumably the value of any grant recommendation could be disputed, these are the only grant recipients for which multiple commenters have raised still-unresolved concerns so far. These grants are only now being made, so whether the fund managers' best hopes for the value of each of these grants will be borne out is something to follow up on in the future.
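
For readers who want to double-check these figures, here is the same arithmetic written out as a short Python snippet. The inputs are taken directly from the numbers above, and any differences from the quoted percentages are just rounding:

```python
# A small script to check the 'liberal' and 'conservative' tallies above.
# All figures come from this comment; only the variable names are mine.

TOTAL_GRANTS = 23
TOTAL_AMOUNT = 923_150

# Liberal estimate: all counterfactually unique grant recommendations.
liberal_grants, liberal_amount = 20, 673_150

# Conservative estimate: additionally exclude the 4 contested grants.
contested_grants, contested_amount = 4, 138_000
conservative_grants = liberal_grants - contested_grants    # 16
conservative_amount = liberal_amount - contested_amount    # 535,150

for label, n, amount in [
    ("Liberal", liberal_grants, liberal_amount),
    ("Conservative", conservative_grants, conservative_amount),
]:
    print(f"{label}: {n}/{TOTAL_GRANTS} grants ({n / TOTAL_GRANTS:.1%}), "
          f"${amount:,} ({amount / TOTAL_AMOUNT:.1%} of the total)")

# Output:
# Liberal: 20/23 grants (87.0%), $673,150 (72.9% of the total)
# Conservative: 16/23 grants (69.6%), $535,150 (58.0% of the total)
```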

Conclusion

While these numbers don't address suggestions for how the management of the Long-Term Future Fund can still be improved, overall I would say they show the Long-Term Future Fund has improved extremely significantly since last year at achieving a high rate of counterfactually unique grants to more nascent or experimental projects that are typically missed in EA donations. I think that with some suggested improvements, like hiring professional clerical assistance with managing the Long-Term Future Fund, the Long-Term Future Fund is employing a successful approach to making unique grants. I hope the other EA Funds try emulating and building on this approach. The EA Funds are still relatively new, so measuring their track record of success with their grants remains to be done, but this report provides a great foundation to start doing so.

John_Maxwell_IV @ 2019-04-17T19:25 (+4)
So, for a respective cause area, an EA Fund functions like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.

You mean it functions like a venture capital fund or angel investor?

Milan_Griffes @ 2019-04-17T16:58 (+2)

This is great! Thank you for the care & attention you put into creating this audit.

Jess_Whittlestone @ 2019-04-09T19:20 (+39)

I'd be keen to hear a bit more about the general process used for reviewing these grants. What did the overall process look like? Were participants interviewed? Were references collected? Were there general criteria used for all applications? Reasoning behind specific decisions is great, but it also risks giving the impression that the grants were made just based on the opinions of one person, and that different applications might have gone through somewhat different processes.

Habryka @ 2019-04-09T20:34 (+66)

Here is a rough summary of the process. It's hard to explain spreadsheets in words, so this might end up sounding a bit confusing:

  • We added all the applications to a big spreadsheet, with a column for each fund member and advisor (Nick Beckstead and Jonas Vollmer) in which they would be encouraged to assign a number from -5 to +5 for each application
  • Then there was a period in which everyone individually and mostly independently reviewed each grant, abstaining if they had a conflict of interest, or voting positively or negatively if they thought the grant was a good or a bad idea
  • We then had a number of video-chat meetings in which we tried to go through all the grants that had at least one person who thought the grant was a good idea, and had pretty extensive discussions about those grants. During those meetings we also agreed on next actions for follow-ups (scheduling meetings with some of the potential grantees, reaching out to references, etc.), the results of which we would then discuss at the next all-hands meeting
  • Interspersed with the all-hands meetings I also had a lot of 1-on-1 meetings (with both other fund-members and grantees) in which I worked in detail through some of the grants with the other person, and hashed out deeper disagreements we had about some of the grants (like whether certain causes and approaches are likely to work at all, how much we should make grants to individuals, etc.)
  • As a result of these meetings there was significant updating of the votes everyone had on each grant, with almost every grant we made having at least two relatively strong supporters and having a total score of above 3 in aggregate votes

However, some fund members weren't super happy about this process, and I also think it encouraged too much consensus-based decision making: a lot of the grants with the highest vote scores were ones that everyone thought were vaguely a good idea, but that nobody was necessarily strongly excited about.

We then revamped our process towards the latter half of the one-month review period and experimented with a new spreadsheet that allowed each individual fund member to suggest grant allocations for 15% and 45% of our total available budget. In the absence of a veto from another fund member, grants in the 15% category would be made mostly at the discretion of the individual fund member, and we would add up grant allocations from the 45% budget until we ran out of our allocated budget.

Both processes actually resulted in roughly the same grant allocation, with one additional grant being made under the second allocation method and one grant not making the cut. We ended up going with the second allocation method.
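
For concreteness, here is a minimal sketch of how the second allocation method could work in code. Everything in it (member names, grantees, amounts, the veto, and the ordering details) is a made-up illustration; the actual process was run in a shared spreadsheet rather than a script:

```python
# A deliberately simplified, hypothetical sketch of the second allocation method
# described above. Names, amounts, the veto, and the ordering/tie-breaking details
# are illustrative assumptions, not the fund's actual implementation.

TOTAL_BUDGET = 923_150

# Each fund member proposes two lists of (grantee, amount): a "15%" list made
# largely at their own discretion (absent a veto), and a "45%" list whose
# entries are added up until the total budget runs out.
allocations = {
    "member_a": {
        "15%": [("Grantee X", 20_000)],
        "45%": [("Grantee Y", 50_000), ("Grantee Z", 150_000)],
    },
    "member_b": {
        "15%": [("Grantee W", 15_000)],
        "45%": [("Grantee Y", 40_000)],
    },
}
vetoed = {"Grantee W"}  # suppose another fund member vetoed this one


def allocate(allocations, vetoed, budget):
    grants, remaining = {}, budget
    for category in ("15%", "45%"):          # discretionary grants first
        for member in allocations.values():
            for grantee, amount in member[category]:
                if grantee in vetoed or amount > remaining:
                    continue                  # skip vetoed or unaffordable grants
                grants[grantee] = grants.get(grantee, 0) + amount
                remaining -= amount
    return grants, remaining


grants, leftover = allocate(allocations, vetoed, TOTAL_BUDGET)
print(grants, f"unallocated: ${leftover:,}")
```

The key property the sketch tries to capture is that 15%-category grants go through unless vetoed, while 45%-category allocations are pooled and simply added up until the budget is exhausted.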

Risto_Uuk @ 2019-04-08T08:56 (+39)

You received almost 100 applications as far as I'm aware, but were able to fund only 23 of them. Some other projects were promising according to you, but you didn't have time to vet them all. What other reasons did you have for rejecting applications?

Habryka @ 2019-04-08T23:46 (+36)

Hmm, I don't think I am super sure what a good answer to this would look like. Here are some common reasons why I think a grant was not a good idea to recommend:

  • The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)
  • The mainline outcome of the grant was good, but there were potential negative consequences that the applicant did not consider or properly account for, and I did not feel like I could cause the applicant to understand the downside risk they have to account for without investing significant effort and time
  • The grant was only tenuously EA-related, and the application seemed to have been sent to a lot of different funders relatively indiscriminately
  • I was unable to understand the goals, implementation or other details of the grant
  • I simply expected the proposed plan to not work, for a large variety of reasons. Here are some of the most frequent:
    • The grant was trying to achieve something highly ambitious while seeming to allocate very few resources to achieving that outcome
    • The grantee had a track record of work that I did not consider to be of sufficient quality to achieve what they set out to do
  • In some cases the applicant asked for less than our minimum grant amount of $10,000
Peter_Hurford @ 2019-04-09T05:34 (+59)

Thanks for the transparent answers.

The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)

This in particular strikes me as understandable but very unfortunate. I'd strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?

In some cases the applicant asked for less than our minimum grant amount of $10,000

This also strikes me as unfortunate and may lead to inefficiently inflated grant requests in the future, though I guess I can understand why the logistics behind this may require it. It feels intuitively weird though that it is easier to get $10K than it is to get $1K.

Habryka @ 2019-04-09T17:39 (+46)
This in particular strikes me as understandable but very unfortunate. I'd strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant.

I personally have never interacted directly with the grantees of about 6 of the 14 grants that I have written up, so it is not really about knowing the grantmakers in person. What does matter a lot are the second-degree connections I have to those people (and that someone on the team had for the large majority of applications), as well as whether the grantees had participated in some of the public discussions we've had over the past years and demonstrated good judgement (e.g. EA Forum & LessWrong discussions).

I don't think you should model the situation as relying on knowing a grantmaker in person, but you should think that testimonials and referrals from people that the grantmakers trust matter a good amount. That trust can be built in a variety of indirect ways, some of which are about knowing them in person and having a trust relationship that has been built via personal contact, but a lot of the time that trust comes from the connecting person having made a variety of publicly visible good judgements.

As an example, one applicant came with a referral from Tyler Cowen. I have only interacted directly with Tyler once in an email chain around EA Global 2015, but he has written up a lot of valuable thoughts online and seems to have generally demonstrated broadly good judgement (including in the granting domain with his Emergent Ventures project). This made his endorsement factor positively into my assessment for that application. (Though because I don't know Tyler that well, I wasn't sure how easily he would give out referrals like this, which reduced the weight that referral had in my mind)

The word interact above is meant in a very broad way, which includes second degree social connections as well as online interactions and observing the grantee to have demonstrated good judgement in some public setting. In the absence of any of that, it's often very hard to get a good sense of the competence of an applicant.

Habryka @ 2019-04-09T17:19 (+20)
This also strikes me as unfortunate and may lead to inefficiently inflated grant requests in the future, though I guess I can understand why the logistics behind this may require it. It feels intuitively weird though that it is easier to get $10K than it is to get $1K.

A rough fermi estimate I made a few days ago suggests that each grant we make comes with about $2,000 of overhead for CEA, in terms of labor cost plus some other risks (this is my own number, not CEA's estimate). So given that overhead, it makes some amount of sense that it's hard to get $1k grants.

Ben_Kuhn @ 2019-04-09T23:28 (+18)

Wow! This is an order of magnitude larger than I expected. What's the source of the overhead here?

Habryka @ 2019-04-10T00:14 (+12)

Here is my rough fermi:

My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person lost in overhead, communications, technology (EA Funds platform) and needing to manage them.

Since people's competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252k per year [Edit: Updated wrong calculation]. EA Funds has made less than 100 grants a year, so a total of about $2k - $3k per grant in overhead seems reasonable.

To be clear, this is average overhead. Presumably marginal overhead is smaller than average overhead, though I am not sure by how much. I randomly guessed it would be about 50%, resulting in something around $1k to $2k overhead.
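
Written out as a quick script, so the arithmetic is easy to check (all inputs are the assumptions stated above, not measured figures):

```python
# The fermi above, spelled out. All inputs are the stated assumptions.

fte = 1.5                        # ~1 person on logistics + ~0.5 in overhead/management
counterfactual_salary = 150_000  # estimated counterfactual earnings per person
cea_salary = 60_000              # CEA salary, taxed at roughly 30%
tax_rate = 0.30

annual_overhead = (counterfactual_salary + tax_rate * cea_salary) * fte  # $252,000
grants_per_year = 100            # "less than 100 grants a year"; using 100 gives a
                                 # lower bound on the per-grant overhead

average_per_grant = annual_overhead / grants_per_year   # ~$2,520 (hence "$2k-$3k")
marginal_per_grant = 0.5 * average_per_grant            # rough 50% guess: ~$1,260

print(f"${annual_overhead:,.0f}/year, ~${average_per_grant:,.0f} average, "
      f"~${marginal_per_grant:,.0f} marginal per grant")
```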

Ben_Kuhn @ 2019-04-10T13:27 (+13)

If one person-year is 2000 hours, then that implies you're valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.

This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I'm sure there are other overheads that I don't know about, but I'm curious if you (or someone from CEA) knows what they are?

[Not trying to imply that CEA is failing to optimize here or anything—I'm mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]
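
For what it's worth, here is a quick reconstruction of the figures implied here, reusing the assumptions from the fermi above, so it is only as reliable as those assumptions:

```python
# Back-of-envelope check of the hourly rate and person-hours implied above,
# reusing the $252k/year and 1.5-person assumptions from the earlier fermi.

annual_overhead = 252_000
people = 1.5
hours_per_person_year = 2_000

hourly_rate = annual_overhead / (people * hours_per_person_year)  # ~$84/hour

for marginal_cost in (1_000, 2_000):
    hours = marginal_cost / hourly_rate
    print(f"${marginal_cost:,} marginal overhead ≈ {hours:.0f} person-hours per grant")
# -> roughly 12 and 24 person-hours, matching the estimate above
```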

Jonas Vollmer @ 2019-04-10T14:00 (+18)

I actually think the $10k grant threshold doesn't make a lot of sense even if we assume the details of this "opportunity cost" perspective are correct. Grants should fulfill the following criterion:

"Benefit of making the grant" ≥ "Financial cost of grant" + "CEA's opportunity cost from distributing a grant"

If we assume that there are large impact differences between different opportunities, as EAs generally do, a $5k grant could easily have a benefit worth $50k to the EA community, and therefore easily be worth the $2k of opportunity cost to CEA. (A potential justification of the $10k threshold could argue in terms of some sort of "market efficiency" of grantmaking opportunities, but I think this would only justify a rigid threshold of ~$2k.)
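
As a toy illustration of that criterion (the specific numbers are made up):

```python
# Toy version of the funding criterion:
# fund iff expected benefit >= grant size + fixed per-grant opportunity cost.
PER_GRANT_OPPORTUNITY_COST = 2_000  # assumed fixed cost to CEA of processing one grant

def worth_funding(expected_benefit: float, grant_size: float) -> bool:
    return expected_benefit >= grant_size + PER_GRANT_OPPORTUNITY_COST

# A $5k grant whose benefit is worth ~$50k to the community easily clears the bar,
# even though it falls below a rigid $10k minimum.
print(worth_funding(expected_benefit=50_000, grant_size=5_000))  # True
```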

IMO, a more desirable solution would be to have the EA Fund committees factor in the opportunity cost of making a grant on a case-by-case basis, rather than having a rigid "$10k" rule. Since EA Fund committees generally consist of smart people, I think they'd be able to understand and implement this well.

Michelle_Hutchinson @ 2019-04-10T15:56 (+11)

This sounds pretty sensible to me. On the other hand, if people are worried about it being harder for people who are already less plugged in to networks to get funding, you might not want an additional dimension on which these harder-to-evaluate grants could lose out compared to easier-to-evaluate ones (where the latter end up with a lower minimum threshold).

It also might create quite a bit of extra overhead for granters to decide the opportunity cost case by case, which could reduce the number of grants they can make, or again push towards easier-to-evaluate ones.

Jonas Vollmer @ 2019-04-11T07:39 (+4)

I tend to think that the network constraints are better addressed by solutions other than ad-hoc fixes (such as more proactive investigations of grantees), though I agree it's a concern and it updates me a bit towards this not being a good idea.

I wasn't suggesting deciding the opportunity cost case by case. Instead, grant evaluators could assume a fixed cost of e.g. $2k. In terms of estimating the benefit of making the grant, I think they do that already to some extent by providing numerical ratings to grants (as Oliver explains here). Also, being aware of the $10k rule already creates a small amount of work. Overall, I think the additional amount of work seems negligibly small.

ETA: Setting a lower threshold would allow us to a) avoid turning down promising grants, and b) remove an incentive to ask for too much money. That seems pretty useful to me.

cole_haus @ 2019-04-10T01:44 (+5)

It's not at all clear to me why the whole $150k of a counterfactual salary would be counted as a cost. The most reasonable (simple) model I can think of is something like: ($150k * .1 + $60k) * 1.5 = $112.5k where the $150k*.1 term is the amount of salary they might be expected to donate from some counterfactual role. This then gives you the total "EA dollars" that the positions cost whereas your model seems to combine "EA dollars" (CEA costs) and "personal dollars" (their total salary).

Habryka @ 2019-04-10T03:00 (+6)

Hmm, I guess it depends a bit on how you view this.

If you model this in terms of "total financial resources going to EA-aligned people", then the correct calculation is ($150k * 1.5) plus whatever CEA loses in taxes for 1.5 employees.

If you want to model it as "money controlled directly by EA institutions" then it's closer to your number.

I think the first model makes more sense, which does still suggest a lower number than what I gave above, so I will update.
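
For concreteness, here is how the two accounting conventions compare (the 10% counterfactual-donation rate is cole_haus's assumption; all other figures are my rough guesses):

```python
# Two ways of accounting for the cost of ~1.5 staff-equivalents (all figures rough).
counterfactual_earnings = 150_000
cea_salary = 60_000
tax_rate = 0.30
donation_rate = 0.10      # cole_haus's assumed counterfactual donation rate
staff_equivalents = 1.5

# Model 1: total financial resources going to EA-aligned people
# (full counterfactual earnings, plus what is lost to taxes on the CEA salary).
total_resources = (counterfactual_earnings + tax_rate * cea_salary) * staff_equivalents

# Model 2: money controlled directly by EA institutions
# (CEA salary, plus the donations the person would counterfactually have made).
ea_dollars = (donation_rate * counterfactual_earnings + cea_salary) * staff_equivalents

print(f"Model 1 (total resources): ${total_resources:,.0f}")  # ~$252,000
print(f"Model 2 (EA dollars):      ${ea_dollars:,.0f}")       # ~$112,500
```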

cole_haus @ 2019-04-10T05:23 (+1)

I don't particularly want to try to resolve the disagreement here, but I'd think value per dollar is pretty different for dollars at EA institutions and for dollars with (many) EA-aligned people [1]. It seems like the whole filtering/selection process of granting is predicated on this assumption. Maybe you believe that people at CEA are the type of people that would make very good use of money regardless of their institutional affiliation?

[1] I'd expect it to vary from person to person depending on their alignment, commitment, competence, etc.

cole_haus @ 2019-04-10T01:42 (+5)

I think you have some math errors:

  • $150k * 1.5 + $60k = $285k rather than $295k
  • Presumably, this should be ($150k + $60k) * 1.5 = $315k ?

Habryka @ 2019-04-10T02:51 (+4)

Ah, yes. The second one. Will update.

Jonas Vollmer @ 2019-04-10T13:59 (+5)

(moved this comment here)

John_Maxwell_IV @ 2019-04-09T07:10 (+9)
This in particular strikes me as understandable but very unfortunate. I'd strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?

I agree this creates unfortunate incentives for EAs to burn resources living in high cost-of-living areas (perhaps even while doing independent research which could in theory be done from anywhere!). However, if I were a grantmaker, I can see why this arrangement would be preferable: Evaluating grants feels like work and costs emotional energy. Talking to people at parties feels like play and creates emotional energy. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.

I suspect there's low-hanging fruit in having the grantmaking team be geographically distributed. To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social network. If the goal is to select the minimum number of supernetworkers to cover as much of the EA social network as possible, I think you'd want each person to be located in a different geographic EA hub. (Perhaps you'd want supernetworkers covering disparate online communities devoted to EA as well.)

This also provides an interesting reframing of all the recent EA Hotel discussion: Instead of "Fund the EA Hotel", maybe the key intervention is "Locate grantmakers in low cost-of-living locations. Where grant money goes, EAs will follow, and everyone can save on living expenses." (BTW, the EA Hotel is actually a pretty good place to be if you're an aspiring EA supernetworker. I met many more EAs during the 6 months I spent there than my previous 6 months in the Bay Area. There are always people passing through for brief stays.)

Habryka @ 2019-04-09T16:23 (+40)
To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social network.

That is incorrect. The current grant team was actually explicitly chosen on the basis of having non-overlapping networks. Besides me nobody lives in the Bay Area (at least full time). Here is where I think everyone is living:

  • Matt Fallshaw: Australia (but also travels a lot)
  • Helen Toner: Georgetown (I think)
  • Alex Zhu: No current permanent living location, travels a lot, might live in Boulder starting a few weeks from now
  • Matt Wage: New York

I was also partially chosen because I used to live in Europe and still have pretty strong connections to a lot of European communities (plus my work on online communities makes my network less geographically centralized).

John_Maxwell_IV @ 2019-04-09T19:36 (+5)

Good to know!

RyanCarey @ 2019-04-10T22:54 (+4)

Isn't Matt in HK?

Habryka @ 2019-04-10T23:37 (+4)

He sure was on weird timezones during our meetings, so I think he might be both? (as in, flying between the two places)

Habryka @ 2019-07-15T20:51 (+3)

Update: I was just wrong, Matt is indeed primarily HK

jpaddison @ 2019-07-16T00:42 (+1)

Boy, there are two Matts in that list.

Habryka @ 2019-04-09T17:51 (+30)
Evaluating grants feels like work and costs emotional energy. Talking to people at parties feels like play and creates emotional energy. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.

At least for me this doesn't really resonate with how I am thinking about grantmaking. The broader EA/Rationality/LTF community is in significant chunks a professional network, and so I've worked with a lot of people on a lot of projects over the years. I've discussed cause prioritization questions on the EA Forum, worked with many people at CEA, tried to develop the art of human rationality on LessWrong, worked with people at CFAR, discussed many important big picture questions with people at FHI, etc.

The vast majority of my interactions with people do not come from parties, but come from settings where people are trying to solve some kind of problem, and seeing how others solve that problem is significant evidence about whether they can solve similar problems.

It's not that I hang out with lots of people at parties, make lots of friends and then that is my primary source for evaluating grant candidates. I basically don't really go to any parties (I actually tend to find them emotionally exhausting, and only go to parties if I have some concrete goal to achieve at one). Instead I work with a lot of people and try to solve problems with them and then that obviously gives me significant evidence about who is good at solving what kinds of problems.

I do find grant interviews more exhausting than other kinds of work, but I think that has to do with the directly adversarial setting, in which the applicant is trying their best to seem competent and good while I am trying my best to get an accurate judgement of their competence. I think that dynamic usually makes that kind of interview a much worse source of evidence of someone's competence than having worked with them on some problem for a few hours (which is also why work-tests tend to be much better predictors of future job-performance than interview-performance).

Jess_Whittlestone @ 2019-04-09T14:00 (+33)
The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I had not available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)

I'm pretty concerned about this. I appreciate that there will always be reasonable limits to how long someone can spend vetting grant applications, but I think EA funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already know - being able to do this should be a requirement of the job, IMO. Seconding Peter's question below, I'd be keen to hear if there are any plans to make progress on this.

If you really don't have time to vet applicants, then maybe grant decisions should be made blind, purely on the basis of the quality of the proposal. Another option would be to have a more structured/systematic approach to vetting applicants themselves, which could be anonymous-ish: based on past achievements and some answers to questions that seem relevant and important.

Habryka @ 2019-04-09T17:01 (+37)
but I think EA funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already know

To be clear, we did invest time into vetting applications from people we didn't know, we just obviously have limits to how much time we can invest. I expect this will be a limiting factor for any grant body.

My guess is that if you don't have any information besides the application info, and the plan requires a significant level of skill (as the vast majority of grants do), you have to invest at least an additional 5, often 10, hours of effort into reaching out to them, performing interviews, getting testimonials, analyzing their case, etc. If you don't do this, I expect the average grant to be net negative.

Our review period lasted about one month. At 100 applications, assuming that you create an anonymous review process, this would have resulted in around 250-500 hours of additional work, which would have made this the full-time job for 2-3 of the 5 people on the grant board, plus the already existing ~80 hours of overhead this grant round required from the board. You likely would have filtered out about 50 of them at an earlier stage, so you can maybe cut that in half, resulting in ~2 full-time staff for that review period.
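
As a rough sketch of that staffing arithmetic (assuming ~160 working hours per person-month; all of these numbers are approximations):

```python
# Rough staffing implication of fully vetting unknown applicants (approximate numbers).
extra_hours_low, extra_hours_high = 250, 500  # additional vetting hours for the round
hours_per_person_month = 160                  # assumed working hours in the ~1-month review period

fte_low = extra_hours_low / hours_per_person_month
fte_high = extra_hours_high / hours_per_person_month
print(f"Roughly {fte_low:.1f}-{fte_high:.1f} people working full-time on vetting")  # ~1.6-3.1
```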

I don't think that level of time-investment is possible for the EA Funds, and if you make it a requirement for being on an EA Fund board, the quality of your grant decisions will go down drastically because there are very few people who have a track record of good judgement in this domain, who are not also holding other full-time jobs. That level of commitment would not be compatible with holding another full-time job, especially not in a leadership position.

I do think that at our current grant volume, we should invest more resources into building infrastructure for vetting grant applications. It might make sense for us to hire a part-time staff member to help with evaluations and do background research as well as interviews for us. It's currently unclear to me how such a person would be managed and whether their salary would be worth the benefit, but it seems like plausibly the correct choice.

Jess_Whittlestone @ 2019-04-09T19:14 (+26)

Thanks for your detailed response, Ollie. I appreciate there are tradeoffs here, but based on what you've said I do think that more time needs to be going into these grant reviews.

I don't think it's unreasonable to suggest that it should require 2 people full-time for a month to distribute nearly $1,000,000 in grant funding, especially if the aim is to find the most effective ways of doing good/influencing the long-term future (though I recognise that this decision isn't your responsibility personally!). Maybe it is very difficult for CEA to find people with the relevant expertise who can do that job. But if that's the case, then I think there's a bigger problem (the job isn't being paid well enough, or being valued highly enough by the community), and maybe we should question the case for EA Funds distributing so much money.

Habryka @ 2019-04-09T20:03 (+35)

I strongly agree that I would like there to be more people who have the competencies and resources necessary to assess grants like this. With the Open Philanthropy Project having access to ~10 billion dollars, the case for needing more people with that expertise is pretty clear, and my current sense is that there is a broad consensus in EA that finding more people for those roles is among the top priorities, if not the top priority.

I think giving less money to EA Funds would not clearly improve this situation from this perspective at all, since most other granting bodies that exist in the EA space have an even higher (funds-distributed)/staff ratio than this.

The Open Philanthropy Project has about 15-20 people assessing grants, and gives out at least $100 million a year, and likely aims to give closer to $1 billion a year given their reserves.

BERI has maybe 2 people working full-time on grant assessment, and my current guess is that they give out about $5 million in grants a year.

My guess is that GiveWell also has about 10 staff assessing grants full-time, making grants of about $20 million.

Given the current level of team-member involvement, the significant judgement component to evaluating grants (which allows the average LTF-Fund team member to act with higher leverage), and the time that anyone involved in the LTF landscape has to invest to build models and keep up to speed with recent developments, I actually think that the LTF-Fund team is able to make more comprehensive grant assessments per dollar granted than almost any other granting body in the space.

I do think that having more people who can assess grants and help distribute resources like this is key, and think that investing in training and recruiting those people should be one of the top priorities for the community at large.

Milan_Griffes @ 2019-04-09T20:09 (+4)
BERI has maybe 2 people working full-time on grant assessment, and my current guess is that they give out about $5 million dollars of grants a year

Note that BERI has only existed for a little over 2 years, and their grant-making has been pretty lumpy, so I don't think they've yet reached any equilibrium grant-making rate (one which could be believably expressed in terms of $X dollars / year).

Habryka @ 2019-04-09T20:14 (+10)

I agree. Though I expect the ratio of funds-distributed/staff to stay roughly the same, at least for a while, and probably to go up a bit.

I think older and larger organizations will have smaller funds-distributed/staff ratios, but I think that's mostly because coordinating people is hard and the marginal productivity of a hire goes down a lot after the initial founders, so you need to hire a lot more people to produce the same quality of output.

Khorton @ 2019-04-10T11:03 (+28)

I would be in favour of this fund using ~5% of its money to pay for staff costs, including a permanent secretariat. The secretariat would probably decrease pressure on grantmakers a little, and improve grant/feedback quality a little, which makes the costs seem worth it. (I know you've already considered this and I want to encourage it!)

I imagine the secretariat would:

-Handle the admin of opening and advertising a funding round

-Respond to many questions on the Forum, Facebook, and by email, and direct more difficult questions to the correct person

-Coordinate the writing of Forum posts like this

-Take notes on what additional information grantmakers would like from applicants, contact applicants with follow-up questions, and suggest iterations of the application form

-(potentially) Manage handover to new grantmakers when current members step down

-(potentially) Sift through applications and remove those which are obviously inappropriate for the Long Term Future Fund

-(potentially) Provide a couple of lines of fairly generic but private feedback for applicants

Evan_Gaensbauer @ 2019-04-17T04:28 (+6)

This strikes me as a great, concrete suggestion. As I tell a lot of people, great suggestions in EA only go somewhere if something is done with them. I would strongly encourage you to develop this suggestion into its own article on the EA Forum about how the EA Funds can be improved. Please let me know if you are interested in doing so, and I can help out. If you don't think you'll have time to develop this suggestion, please let me know, as I would be interested in doing it myself.

Evan_Gaensbauer @ 2019-04-17T03:57 (+2)

The way the management of the EA Funds is structured makes sense to me given the goals set for the EA Funds. So I think the situation in which paying 2 people full-time for one month to evaluate EA Funds applications makes sense is one where 2 of the 4 volunteer fund managers take a month off from their other positions to evaluate the applications. Finding 2 people out of the blue to evaluate applications for one month, without continuity with how the LTF Fund has been managed, seems like it'd be too difficult to accomplish effectively in the timeframe of a few months.

In general, one issue the EA Funds face that other granting bodies in EA don't is that the donations come from many different donors. This means that how much the EA Funds receive and distribute, and how it's distributed, is much more complicated than what CEA or a similar organization typically faces.

Milan_Griffes @ 2019-04-09T17:24 (+17)

Thanks for the care & attention you're putting towards all of these replies!

I do think that at our current grant volume, we should invest more resources into building infrastructure for vetting grant applications.

Strong +1.

Evan_Gaensbauer @ 2019-04-17T03:47 (+4)

One issue with this is that the fund managers are unpaid volunteers who have other full-time jobs, so being a fund manager isn't a "job" in the most typical sense. Of course, a lot of people think it should be treated like one. When this came up in past discussions about how the EA Funds could be structured better, suggestions like hiring a full-time fund manager ran into trade-offs with other priorities for the EA Funds, like not spending too much overhead on them, or having the diversity of perspectives that comes with multiple volunteer fund managers.

Denkenberger @ 2019-04-26T04:59 (+35)

I applaud the explanations of the decisions for the grants and also the responses to the questions. Now that things have calmed down, since the EA Long Term Future Fund team suggested that requests for feedback on unsuccessful grants be made publicly, I am doing that.

My proposal was to further investigate a new cause area, namely resilience to catastrophes that could disable electricity regionally or globally, including extreme solar storm, high-altitude electromagnetic pulses (caused by nuclear detonations), or a narrow AI computer virus. Since nearly everything is dependent on electricity, including pulling fossil fuels out of the ground, industrial civilization could grind to a halt. Many people have suggested hardening the grid to these catastrophes, but this would cost tens of billions of dollars. However, getting prepared for quickly providing food, energy, and communications needs in a catastrophe would cost much less money and provide much of the present generation (lifesaving) and far future (preservation of anthropological civilization) benefits. I have made a Guesstimate model assessing the cost-effectiveness of work to improve long-term future outcomes given one of these catastrophes. Both my inputs and Anders Sandberg’s inputs yield >95% confidence that work now on losing electricity/industry is more cost-effective than marginal work on AI safety (Oxford Prioritisation Project/ Owen Cotton-Barratt and Daniel Dewey did the AI section, except I truncated distributions and made AI more cost effective). There is also a blank (to avoid anchoring) Guesstimate model.

The specific proposal was to buy out my teaching and/or fund a graduate student to research projects with particularly high value of information and submit papers. I think that feedback would be particularly helpful because it is not just about the particular proposal, but also about whether the new cause area is worth investigating further.

For more background, see the three papers involving losing electricity/industry: feeding everyone with the loss of industry, providing nonfood needs with the loss of industry, and feeding everyone losing industry and half of sun. We are still working on the paper for the cost-effectiveness from the long-term future perspective of preparing for these catastrophes funded by an EA grant, so input can influence that paper.

Habryka @ 2019-06-11T20:10 (+9)

(Note: I am currently more time-constrained than I had hoped to be when writing these responses, so the above was written a good bit faster and with less reflection than my other pieces of feedback. This means errors and miscommunication are more likely than usual. I apologize for that.)

I ended up writing some feedback to Jeffrey Ladish, which covered a lot of my thoughts on ALLFED. 

My response to Jeffrey

Building off of that comment, here are some additional thoughts: 

  • As I mentioned in the response linked above, I currently feel relatively hesitant about civilizational collapse scenarios and so find the general cause area of most of ALLFED's work to be of comparatively lower importance than the other areas I tend to recommend grants in
  • Most of ALLFED's work does not seem to help me resolve the confusions I listed in the response linked above, or provide much additional evidence for any of my cruxes, but instead seems to assume that the intersection of civilizational collapse and food shortages is the key path to optimize for. At this point, I would be much more excited about work that tries to analyze civilizational collapse much more broadly, instead of assuming such a specific path. 
  • I have some hesitations about the structure of ALLFED as an organization. I've had relatively bad experiences interacting with some parts of your team and heard similar concerns from others. The team also appears to be partially remote, which I think is a major cost for research teams, and has its primary location in Alaska, where I expect it will be hard for you to attract talent and engage with other researchers on this topic (some of these models are based on conversations I've had with Finan, who used to work at ALLFED but left because of it being located in Alaska).
  • I generally think ALLFED's work is of decent quality, helpful to many, and made with well-aligned intentions; I just don't find its core value proposition compelling enough to be excited about grants to it

Denkenberger @ 2019-06-13T03:29 (+14)

Thank you for you recent post and your ALLFED feedback.

I have made my request for such publicly so also responding publicly, as such openness can only be beneficial to the investigation and advancing of the causes we are passionate about.

We appreciate your view of ALLFED’s work being of “decent quality, helpful to many and made with well-aligned intention”.

We also appreciate many good points raised in your feedback, and would like to comment on them as follows.

As I mentioned in the response linked above, I currently feel relatively hesitant about civilizational collapse scenarios and so find the general cause area of most of ALLFED's work to be of comparatively lower importance than the other areas I tend to recommend grants in

People’s intuition on the long-term future impact of these type of catastrophes and the tractability of reducing that impact with money varies tremendously.

One possible mechanism for extinction from nuclear winter is as follows. It is tempting to think that if there is enough stored food to keep 10% of the population alive for five years until agriculture recovers, then 10% of people will survive. However, if that food is distributed evenly, everyone will die after about six months (food for 10% of people for five years is food for everyone for only half a year). It is not clear to me that the food will be so well protected from the masses that many people will survive. Similarly, there could be some continuous food production in these scenarios if managed sustainably, such as fish that could relocate to the tropics. However, again, if there are many desperate people, they might eat all the fish, so everyone would starve. Similarly, hunter-gatherers generally don't have stored food and could starve. Even if agrarian societies managed to have some people survive on stored food, if there were a collapse of anthropological civilization, the people might not be able to figure out how to become hunter-gatherers again. Even if there is not extinction, it is not clear we would recover civilization, because we have had a stable climate for the last 10,000 years and we would not have easily recoverable fossil fuels for industrial civilization. And even if we did not lose civilization, worse values from the nastiness of the die-off could result in totalitarianism or end up in AGI (though you point out that it is possible we could be more careful with dangerous technologies the second time around).

As for the tractability, people have pointed out that many of the interventions we talk about have already been done at small scale. So it is possible that they would be adopted without further ALLFED funding (and we have a parameter for this in the Guesstimate models). However, there is some research that takes calendar time and cannot be parallelized (such as animal research). Furthermore, if there is panic before people find out that we could actually feed everyone, then the chaos that results probably means the interventions won't get adopted.

Given the large variation in intuitions, we have tried to do surveys to get a variety of opinions. For the agricultural catastrophes (nuclear winter, abrupt climate change, etc.) we got eight GCR researcher opinions. The results varied by nearly four orders of magnitude. The most pessimistic found marginal funding of ALLFED now to be the same order of magnitude in cost-effectiveness as AI at the margin; the most optimistic found it four orders of magnitude more cost-effective than AI (considering future work that will likely be done). I know you in particular are short on time, but I would encourage anyone interested in this issue to put their own values into the blank model (to avoid anchoring) and see what they produce for agricultural catastrophes. Of course, even if it does not turn out to be more cost-effective than AI, it could still be competitive with work on engineered pandemics.

This particular EA Long Term Future Fund application focused on a different class of catastrophes: those that could disrupt electricity/industry (including solar storms, high-altitude electromagnetic pulses, or a narrow AI computer virus). In this case, a poll was taken at EAG San Francisco 2018, so the data are less detailed. There appear to be fewer orders of magnitude of variation. Since the mean cost-effectiveness ratio to AI is similar, this likely means even the most pessimistic person would judge preparations for losing electricity/industry at the margin to be more cost-effective than AI. Again, here is a blank model for this cause area.

Most of ALLFED's work does not seem to help me resolve the confusions I listed in the response linked above, or provide much additional evidence for any of my cruxes, but instead seems to assume that the intersection of civilizational collapse and food shortages is the key path to optimize for. At this point, I would be much more excited about work that tries to analyze civilizational collapse much more broadly, instead of assuming such a specific path.

As for the specific path to optimize for improving the long-term future, in the book Feeding Everyone No Matter What, we did go through a number of problems associated with nuclear winter and food shortage was clearly the most important (and this has been recognized by others, including Alan Robock). However, for catastrophes that disable electricity/industry, it is true that issues such as water, shelter, communications and transportation are very important, which is why we have developed interventions for those as well.

I have some hesitations about the structure of ALLFED as an organization. I've had relatively bad experiences interacting with some parts of your team and heard similar concerns from others. The team also appears to be partially remote, which I think is a major cost for research teams, and have its primary location be in Alaska where I expect it will be hard for you to attract talent and also engage with other researchers on this topic (some of these models are based on conversations I've had with Finan who used to work at ALLFED, but left because of it being located in Alaska).

This has been an interesting one for both myself and the team to consider.

One of the unique features of ALLFED is our structure, which corresponds to our work on *both* research and preparedness. As such, we have opted for a small, flexible, multi-location organization, which allows us to get to places and collaborate globally.

While I am myself indeed based in Alaska, we also have a strong UK team based in London and Oxford, busy developing collaborations with academia (e.g. UCL), finance and industry and attending European events (just back from Geneva and the United Nations Global Platform for DRR and heading to Combined Dealing with Disasters International Conference next month). As for attracting talent, we have built alliances with researchers at Michigan Technological University, Penn State, Tennessee State University, and the International Food Policy Research Institute who are ready to do ALLFED projects once we get funding. This is why our room for more funding in the next 12 months is more than $1 million. We have also co-authored papers with people at CSER, GCRI, and Rutgers University.

Overall, we feel the geographical spread has been beneficial to us and has certainly contributed to a greater diversity within the team and allowed access to a greater body of knowledge, contacts, connections. As a sideline, we feel all individuals with passion for the GCR work and with relevant talents should be able to contribute to it, regardless of their location, family/personal demands or physical abilities. Facilitating and enabling this via remote working has seemed an obvious benefit to the organization and the right thing to do.

We have read this EA forum post on local/remote teams with great interest and find its conclusions and recommendations consistent with our experience. Working across continents has certainly contributed to the development of robust internal organizational structures, clarity in goals, objectives, accountability, communications and such.

As for my personal experience of being based in Alaska, I don’t feel that my interaction with the team here has been significantly different than with remote team members (referring back to this: the people in Alaska are not in the same hallway, though we do have in-person meetings). So basically we can recruit students for projects that are routed through the University, but then other researchers can be remote.

The exceptions of course are if an experiment requires significant facilities and is not done by a student (as was the case with Finan) or if one’s personal preferences are for more social interactions.

We are of course concerned and have noted your comment on “relatively bad experiences interacting with some parts of (our) team”. We would very much like to learn more about this (if you don’t mind perhaps in private this time, to ensure people’s privacy/confidentiality).

We cannot help but wonder whether our commitment to diversity - including neurodiversity - may have had some unintended consequences… We do have individuals on the team whose communications needs and style may at times present something of a challenge, particularly to those unaware of such considerations. Thank you for alerting us of possible impacts of this; we will certainly look at this, and any other “team interactions” matters, and see how they can be managed better. We are hopeful that, overall, there have been many more positive interactions than dubious ones and would like to take this opportunity to thank you (and anybody else who may have experienced issues around this) for your patience and understanding.

Going forward - and this relates as much to this particular response and any future ALLFED team interaction at all, with anyone reading this - if any such interaction does not quite work out, please let me know (so we may either make good or provide context).

All in all, we are grateful for your feedback and pleased with our decision to engage in this publicly. Hopefully this will be of use not only to ALLFED as an organization but to the broader EA community.

Cullen_OKeefe @ 2019-04-10T04:52 (+32)

Regarding the donation to Lauren Lee:

To the extent that one thinks that funding the runways of burnt-out and/or transitioning EAs is a good idea to enable risk-neutral career decisions (which I do!), I'd note that funding (projects like) the EA Hotel seems like a promising way to do so. The marginal per-EA cost of supplying runway is probably lower with shared overhead and low COL like that.

Cullen_OKeefe @ 2019-04-10T05:19 (+24)

This could also help free up a significant amount of donation money. My guess is that a central entity that could be (more) risk-neutral than individual EAs would be a more efficient insurer of EA runway needs than individual EAs. Many EAs will never use their runways, and this will mean, at best, significantly delayed donations, which is a high opportunity cost. If runway-saving EAs would otherwise donate (part of) their runways (which I would if I knew the EA community would provide one if needed), there could be net gains in EA cashflow due to the efficiency of a central insurer.

I'm not super confident in this, and I could be wrong for a lot of reasons. Obviously, runways aren't purely altruistic, so one shouldn't expect all runway money to go to donations. And it might be hard or undesirable for EA to provide certain kinds of runway due to, e.g., moral hazard. It might also be hard for EA as a community to provide runways with any reasonable assurance that the outcome will be altruistic (I take this to be one of the main objections to the EA Hotel). Still, I think the idea of insuring EA runway needs could be promising.
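
As a toy illustration of why pooling could free up money (every number here is hypothetical, purely for illustration):

```python
# Toy comparison: everyone holds a personal runway vs. a pooled "runway insurance" fund.
n_eas = 100             # hypothetical number of people who want runway coverage
runway_cost = 20_000    # hypothetical cost of one person's runway
p_need_per_year = 0.05  # hypothetical chance a given person needs the runway in a year
safety_margin = 3       # pooled fund holds a multiple of expected annual payouts

individual_capital = n_eas * runway_cost                                # $2,000,000 sitting idle
pooled_capital = safety_margin * n_eas * p_need_per_year * runway_cost  # $300,000

print(f"Capital tied up individually:  ${individual_capital:,}")
print(f"Capital held by a pooled fund: ${pooled_capital:,}")
print(f"Potentially freed for donations: ${individual_capital - pooled_capital:,}")
```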

toonalfrink @ 2019-04-11T19:37 (+6)

Am certainly open to considering this business model for the hotel.

Milan_Griffes @ 2019-04-15T20:35 (+4)

This is interesting, though the moral hazard / free-riding consideration seems like a big problem.

Cullen_OKeefe @ 2019-04-16T03:42 (+3)

I agree that moral hazard is a problem, but you could also imagine an excludable EA insurance scheme that reduced free-riding. E.g., pay $X/month and if you lose your job you can live here for up to a year.

But since the employed EA community is not as diversified as the whole market, employed EAs may be more liable to systemic shocks that render the insurer insolvent. But of course, there's reinsurance...

toonalfrink @ 2019-04-11T19:32 (+7)

The hotel did apply.

The marginal per-EA cost of supplying runway is probably lower with shared overhead and low COL like that.

It's about $7500 per person per year

Milan_Griffes @ 2019-04-09T05:21 (+31)

CFAR:

I’ve gotten a sense that the staff isn’t interested in increasing the number of intro workshops, that the intro workshops don’t feel particularly exciting for the staff, and that most staff are less interested in improving the intro workshops than other parts of CFAR. This makes it less likely that those workshops will maintain their quality and impact, and I currently think that those workshops are likely one of the best ways for CFAR to have a large impact.
...
CFAR is struggling to attract top talent, partially because some of the best staff left, and partially due to a general sense of a lack of forward momentum for the organization. This is a bad sign, because I think CFAR in particular benefits from having highly talented individuals teach at their workshops and serve as a concrete example of the skills they’re trying to teach.

Why a large, unrestricted grant to CFAR, given these concerns? Would a smaller grant catalyze changes such that the organization becomes cash-flow positive?

By the next grant round, I plan to have spent more time learning and thinking about CFAR’s trajectory and future, and to have a more confident opinion about what the correct funding level for CFAR is.

What is going to happen between now & then that will help you learn enough to have a higher-credence view about CFAR?

Seems like a large, unrestricted grant permits further "business-as-usual" operations. Are "business-as-usual" operations the best state for driving your learning as a grant-maker?

PeterMcCluskey @ 2019-04-11T16:37 (+14)

I assume that by "cash-flow positive", you mean supported by fees from workshop participants?

I don't consider that to be a desirable goal for CFAR.

Habryka's analysis focuses on CFAR's track record. But CFAR's expected value comes mainly from possible results that aren't measured by that track record.

My main reason for donating to CFAR is the potential for improving the rationality of people who might influence x-risks. That includes mainstream AI researchers who aren't interested in the EA and rationality communities. The ability to offer them free workshops seems important to attracting the most influential people.

Milan_Griffes @ 2019-04-15T20:31 (+2)
I assume that by "cash-flow positive", you mean supported by fees from workshop participants?

Yes, that's roughly what I mean.

I'm gesturing towards "getting to a business structure where it's straightforward to go into survival mode, wherein CFAR maintains core staff & operations via workshop fees."

Seems like in that configuration, the org wouldn't be as buffeted by the travails of a 6-month or 12-month fundraising cycle.

I agree that being entirely supported by workshop fees wouldn't be a desirable goal-state for CFAR. But having a "survival mode" option at the ready for contingencies seems good.

Habryka @ 2019-04-09T19:10 (+10)
Why a large, unrestricted grant to CFAR, given these concerns? Would a smaller grant catalyze changes such that the organization becomes cash-flow positive?

I have two interpretations of what your potential concerns here might be, so might be good to clarify first. Which of these two interpretations is closer to what you mean?

1. "Why give CFAR such a large grant at all, given that you seem to have a lot of concerns about their future"

2. "Why not give CFAR a grant that is conditional on some kind of change in the organization?"

Milan_Griffes @ 2019-04-09T19:27 (+3)

I'm curious about both (1) and (2), as they both seem like plausible alternatives that you may have considered.

Habryka @ 2019-04-09T23:31 (+18)

Seems good.

1. "Why give CFAR such a large grant at all, given that you seem to have a lot of concerns about their future"

I am overall still quite positive on CFAR. I have significant concerns, but the total impact CFAR had over the course of its existence strikes me as very large and easily worth the resources it has taken up so far.

I don't think it would be the correct choice for CFAR to have to take irreversible action right now simply because they (correctly, I think) decided not to run a fall fundraiser, and I still assign significant probability to CFAR actually being on the right track to continue having a large impact. My model here is mostly that whatever allowed CFAR to have a historical impact did not break, and so will continue producing value of the same type.

2. "Why not give CFAR a grant that is conditional on some kind of change in the organization?"

I considered this for quite a while, but ultimately decided against it. I think grantmakers should generally be very hesitant to make earmarked or conditional grants to organizations, without knowing the way that organization operates in close detail. Some things that might seem easy to change from the outside often turn out to be really hard to change for good reasons, and this also has the potential to create a kind of adversarial relationship where the organization is incentivized to do the minimum amount of effort necessary to meet the conditions of the grant, which I think tends to make transparency a lot harder.

Overall, I much more strongly prefer to recommend unconditional grants with concrete suggestions for what changes would cause future unconditional grants to be made to the organization, while communicating clearly what kind of long-term performance metrics or considerations would cause me to change my mind.

I expect to communicate extensively with CFAR over the coming weeks, talk to most of its staff members, generally get a better sense of how CFAR operates and think about the big-picture effects that CFAR has on the long-term future and global catastrophic risk. I think I am likely to then either:

  • make recommendations for a set of changes with conditional funding,
  • decide that CFAR does not require further funding from the LTF,
  • or be convinced that CFAR's current plans make sense and that they should have sufficient resources to execute those plans.

Milan_Griffes @ 2019-04-15T20:34 (+7)

This is super helpful, thanks!

My model here is mostly that whatever allowed CFAR to have a historical impact did not break, and so will continue producing value of the same type.

Perhaps a crux here is whether whatever mechanism historically drove CFAR's impact has already broken or not. (Just flagging, doesn't seem important to resolve this now.)

Habryka @ 2019-04-15T23:18 (+4)

Yeah, that's what I intended to say. "In the world where I come to the above opinion, I expect my crux will have been that whatever made CFAR historically work, is still working"

andzuck @ 2019-04-09T20:18 (+29)

Was wondering if you can explain more about the reasoning for funding Connor Flexman. Right now, the write-up doesn't explain much and makes me curious what "independent research" means. Also would be interested in learning what past projects Connor has worked on that led to this grant.

Habryka @ 2019-04-10T03:30 (+11)

The primary thing I expect him to do with this grant is to work together with John Salvatier on doing research on skill transfer between experts (which I am partially excited about because that's the kind of thing that I see a lot of world-scale model building and associated grant-making being bottlenecked on).

However, as I mentioned in the review, if he finds that he can't contribute to that as effectively as he thought, I want him to feel comfortable pursuing other research avenues. I don't currently have a short-list of what those would be, but would probably just talk with him about what research directions I would be excited about, if he decides not to collaborate with John. One of the research projects he suggested was related to studying historical social movements and some broader issues around societal coordination mechanisms, which seemed decent.

I primarily know about the work he has so far produced with John Salvatier, and also know that he demonstrated general competence in a variety of other projects, including making money managing a small independent hedge fund, running a research project for the Democracy Defense Fund, doing some research at Brown University, and participating in some forecasting tournaments and scoring well.

Igor Terzic @ 2019-04-08T21:16 (+28)

I'd like to challenge the downside estimate re: HPMoR distribution funding.

So I felt comfortable recommending this grant, especially given its relatively limited downside

I think that funding this project comes with potentially significant PR and reputational risk, especially considering the goals for the fund. It seems like it might be a much better fit for the Meta fund, rather than for the fund that aims to: "support organizations that work on improving long-term outcomes for humanity".

Habryka @ 2019-04-10T01:29 (+10)

Could you say a bit more about what kind of PR and reputational risks you are imagining? Given that the grant is done in collaboration with the IMO and EGMO organizers, who seem to have read the book themselves and seem to be excited about giving it out as a prize, I don't think I understand what kind of reputational risks you are worried about.

cole_haus @ 2019-04-10T01:55 (+23)

I am not OP but as someone who also has (minor) concerns under this heading:

  • Some people judge HPMoR to be of little artistic merit/low aesthetic quality
  • Some people find the subcultural affiliations of HPMoR off-putting (fanfiction in general, copious references to other arguably low-status fandoms)

If the recipients have negative impressions of HPMoR for reasons like the above, that could result in (unnecessarily) negative impressions of rationality/EA.

Clearly, there are also many people who like HPMoR and don't have the above concerns. The key question is probably what fraction of recipients will have positive, neutral, and negative reactions.

Habryka @ 2019-04-10T02:50 (+16)

Hmm, so my model is that the books are given out without significant EA affiliation, together with a pamphlet for SPARC and ESPR. I also know that HPMoR is already relatively widely known among math olympiad participants. Those together suggest that it's unlikely this would cause much reputational damage to the EA community, given that none of this contains an explicit reference to the EA community (and shouldn't, as I have argued below).

The outcome might be that some people start disliking HPMoR, but that doesn't seem super bad and seems like a relatively small downside. Maybe some people will start disliking CFAR, though I think CFAR on net benefits a lot more from having additional people who are highly enthusiastic about it than it suffers from people who kind-of dislike it.

I have some vague feeling that there might be some more weird downstream effects of this, but I don't think I have any concrete models of how they might happen, and would be interested in hearing more of people's concerns.

kbog @ 2019-04-12T07:27 (+3)

Not the book giveaway itself, but posting grant information like this can be very bad PR.

Khorton @ 2019-04-12T07:33 (+1)

I think I agree, but why do you think so?

Habryka @ 2019-04-08T21:37 (+9)

(Responding to the second point about which fund is a better fit for this, will respond to the first point separately)

I am broadly confused how to deal with the "which fund is a better fit?" question. Since it's hard to influence the long-term future I expect a lot of good interventions to go via the path of first introducing people to the community, building institutions that can improve our decision-making, and generally opting for building positive feedback loops and resources that we can deploy as soon as concrete opportunities show up.

My current guess is that we should check in with the Meta fund and their grants to make sure that we don't make overlapping grants and that we communicate any concerns, but that as soon as there is an application that we think is worth it from the perspective of the long-term-future that the Meta fund is not covering, that we should feel comfortable filling it, independently of whether it looks a bit like EA-Meta. But I am open to changing my mind on this.

Milan_Griffes @ 2019-04-08T23:39 (+3)

Could this be straightforwardly simplified by bracketing out far future meta work as within the remit of the Long Term Future Fund, and all other meta work (e.g. animal welfare institution-building, global development institution-building) as within the remit of the Meta Fund?

Not sure if that would cleave reality at the joints, but seems like it might.

Habryka @ 2019-04-08T23:51 (+10)

I actually think that as long as you communicate potential downside risks, there is a lot of value in having independent granting bodies look over the same pool of applications.

I think a single granting body is likely to end up missing a large number of good opportunities, and general intuitions around hits-based giving make me think that encouraging independence here is better than splitting up every grant into only one domain (this does rely on those granting bodies being able to communicate clearly around downside risk, which I think we can achieve).

rohinmshah @ 2019-04-09T16:15 (+9)

Is this different from having more people on a single granting body?

Possibly with more people on a single granting body, everyone talks to each other more and so can all get stuck thinking the same thing, whereas they would have come up with more / different considerations had they been separate. But this would suggest that granting bodies would benefit from splitting into halves, going over grants individually, and then merging at the end. Would you endorse that suggestion?

Habryka @ 2019-04-09T16:18 (+9)

I don't think you want to go below three people for a granting body, to make sure that you can catch all the potential negative downsides of a grant. My guess is that if you have 6 or more people it would be better to split it into two independent grant teams.

Peter_Hurford @ 2019-04-09T05:36 (+8)
I actually think that as long as you communicate potential downside risks, there is a lot of value in having independent granting bodies look over the same pool of applications.

Yes, this is a great idea to help reduce bias in grantmaking.

MorganLawless @ 2019-04-08T20:36 (+28)

Mr. Habryka,

I do not believe the $28,000 grant to buy copies of HPMOR meets the evidential standard demanded by effective altruism. “Effective altruism is about answering one simple question: how can we use our resources to help others the most? Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.” With all due respect, it seems to me that this grant feels right but lacks evidence and careful analysis.

The Effective Altruism Funds are "for maximizing the effectiveness of your donations" according to the homepage. This grant's claim that buying copies of HPMOR is among the most effective ways to donate $28,000 by way of improving the long-term future rightly demands a high standard of evidence.

You make two principal arguments in justifying the grant. First, the books will encourage the Math Olympiad winners to join the EA community. Second, the books will teach the Math Olympiad winners important reasoning skills.

If the goal is to encourage Math Olympiad winners to join the Effective Altruism community, why are they being given a book that has little explicitly to do with Effective Altruism? The Life You Can Save, Doing Good Better, and 80,000 Hours are three books much more relevant to Effective Altruism than Harry Potter and the Methods of Rationality. Furthermore, they are much cheaper than the $43 per copy of HPMOR. Even if one is to make the argument that HPMOR is more effective at encouraging Effective Altruism — which I doubt and is substantiated nowhere — one also has to go further and provide evidence that the difference in cost of each copy of HPMOR relative to any of the other books I mentioned is justified. It is quite possible that sending the Math Olympiad winners a link to Peter Singer’s TED Talk, “The why and how of effective altruism”, is more effective than HPMOR in encouraging effective altruism. It is also free!

If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.

I also want to point out that the fact that EA Russia has made oral agreements to give copies of the book before securing funding is deeply unsettling, if I understand the situation correctly. Why are promises being made in advance of having funding secured? This is not how a well-run organization or movement operates. If EA Russia did have funding to buy the books and this grant is displacing that funding, then what will EA Russia spend the original $28,000 on? This information is necessary to evaluate the effectiveness of this grant and should not be absent.

I have no idea who Mikhail Yagudin is so have no reason to suspect anything untoward, but the fact that you do not know him or his team augments this grant’s problems, as you are aware.

I understand that the EA Funds are thought of as vehicles to fund higher risk and more uncertain causes. In the words of James Snowden and Elie Hassenfeld, “some donors give to this fund because they want to signal support for GiveWell making grants which are more difficult to justify and rely on more subjective judgment calls, but have the potential for greater impact than our top charities.” They were referring to GiveWell and the Global Health and Development Fund, but I think you would agree that this appetite for riskier donations applies to the other funds, including this Long Term Future Fund.

However, higher risk and uncertainty does not mean no evidentiary standards at all. In fact, uncertain grants such as this one should be accompanied with an abundance of strong intuitive reasoning if there is no empirical evidence to draw from. The reasoning outlined in the forum post does not meet the standard in my view for the reasons I gave in the prior paragraphs.

More broadly, I think this grant would hurt the EA community. Returning to the quote I began with, “Effective altruism is about answering one simple question: how can we use our resources to help others the most? Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.” If I were a newcomer to the EA community and I saw this grant and the associated rationale, I would be utterly disenchanted by the entire movement. I would rightly doubt that this is among the most effective ways to spend $28,000 to improve the long term future and notice the absence of “evidence and careful analysis”. If effective altruism does not demand greater rigor than other charities, then there is no reason for a newcomer to join the effective altruism movement.

So what should be done?

  1. This grant should be directed elsewhere. EA Russia can find other funding to meet its oral promise that should not have been given without already having funding.

  2. EA Funds cannot be both a vehicle for riskier donations and the go-to recommendation for effective donations, as is stated in the Introduction to Effective Altruism. This flies in the face of transparency for what a newcomer would expect when donating. This is not the fault of this grant, but the grant is emblematic of this broader problem. I also want to reiterate that I think this grant still does not meet the evidentiary standard, even when it is considered under the view of EA Funds as a vehicle for riskier donations.

Misha_Yagudin @ 2019-04-08T22:22 (+36)

Dear Morgan,

In this comment I want to address the following paragraph (#3).

I also want to point out that, if I understand the situation correctly, it is deeply unsettling that EA Russia made oral agreements to give copies of the book before securing funding. Why are promises being made before funding is secured? This is not how a well-run organization or movement operates. If EA Russia did have funding to buy the books and this grant is displacing that funding, then what will EA Russia spend the original $28,000 on? This information is necessary to evaluate the effectiveness of this grant and should not be absent.

I think that it is a miscommunication on my side.

EA Russia has the oral agreements with [the organizers of math olympiads]...

We contacted the organizers of math olympiads and asked them whether they would like to have HPMoRs as a prize (conditional on us finding a sponsor). We didn't promise them anything, and they do not expect anything from us. Also, I would like to say that we didn't approach them as EAs (as I am mindful of the reputational risks).

Misha_Yagudin @ 2019-04-08T22:55 (+22)

Dear Morgan,

In this comment I want to address the following paragraph (related to #2).

If the goal is to encourage Math Olympiad winners to join the Effective Altruism community, why are they being given a book that has little explicitly to do with Effective Altruism? The Life You Can Save, Doing Good Better, and 80,000 Hours are three books much more relevant to Effective Altruism than Harry Potter and the Methods of Rationality. Furthermore, they are much cheaper than the $43 per copy of HPMOR. Even if one is to make the argument that HPMOR is more effective at encouraging Effective Altruism — which I doubt and is substantiated nowhere — one also has to go further and provide evidence that the difference in cost of each copy of HPMOR relative to any of the other books I mentioned is justified. It is quite possible that sending the Math Olympiad winners a link to Peter Singer’s TED Talk, “The why and how of effective altruism”, is more effective than HPMOR in encouraging effective altruism. It is also free!

a. While I agree that the books you've mentioned are more directly related to EA than HPMoR, I think it would not be possible to give them as a prize. I think the fact that the organizers whom we contacted had read HPMoR significantly contributed to the possibility of giving anything at all.

b. I share your concern about HPMoR not being EA enough. We hope to mitigate this via a leaflet + SPARC/ESPR.

Ben Pace @ 2019-04-09T19:41 (+20)

I think this comment suggests there's a wide inferential gap here. Let me see if I can help bridge it a little.

If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well-developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.

I feel fairly strongly that this goal is still important. I think that the most valuable resource that the EA/rationality/LTF community has is the ability to think clearly about important questions. Nick Bostrom advises politicians, tech billionaires, and the founders of the leading AI companies, and it's not because he has the reasoning skills of a typical math olympiad winner. There are many levels of skill, and Nick Bostrom's is much higher[1].

It seems to me that these higher level skills are not easily taught, even to the brightest minds. Notice how society's massive increase in the number of scientists has failed to produce anything like linearly more deep insights. I have seen this for myself at Oxford University, where many of my fellow students could compute very effectively but could not then go on to use that math in a practical application, or even understand precisely what it was they'd done. The author, Eliezer Yudkowsky, is a renowned explainer of scientific reasoning, and HPMOR is one of his best works for this. See the OP for more models of what HPMOR does especially right here.

In general I think someone's ability to think clearly, in spite of the incentives around them, is one of the main skills required for improving the world, much more so than whether they have a community affiliation with EA [2]. I don't think that any of the EA materials you mention helps people gain this skill. But I think for some people, HPMOR does.

I'm focusing here on the claim that the intent of this grant is unfounded. To help communicate my perspective here, when I look over the grants this feels to me like one of the 'safest bets'. I am interested to know whether this perspective makes the grant's intent feel more reasonable to anyone reading who initially felt pretty blindsided by it.

---

[1] I am not sure exactly how widespread this knowledge is. Let me just say that it’s not Bostrom’s political skills that got him where he is. When the future-head-of-IARPA decided to work at FHI, Bostrom’s main publication was a book on anthropics. I think Bostrom did excellent work on important problems, and this is the primary thing that has drawn people to work with and listen to him.

[2] Although I think being in these circles changes your incentives, which is another way to get someone to do useful work. Though again I think the first part is more important to get people to do the useful work you've not already figured out how to incentivise - I don't think we've figured it all out yet.

Habryka @ 2019-04-08T20:55 (+18)

Thanks for your long critique! I will try to respond to as much of it as I can.

As I see it, there are four separate claims in your comment, each of which warrants a separate response:

1. The Long-Term Future Fund should make all of its giving based on a high standard of externally transparent evidence

2. Receiving HPMoRs is unlikely to cause the math olympiad participants to start working on the long-term future, or engage with the existing EA community

3. EA Russia has made an oral promise of delivering HPMoRs without having secured external funding first

4. If the Long-Term Future Fund is making grants that are this risky, they should not be advertised as the go-to vehicle for donations

I will start responding to some of them now, but please let me know if the above summary of your claims seems wrong.

Igor Terzic @ 2019-04-08T21:05 (+17)

I don't think that 2) really captures the objection the way I read it. It seems that, on the margin, there are much more cost-effective ways of engaging math olympiad participants, and that the content distributed could be much more directly EA/AI-related at lower cost than distributing 2000 pages of hard-copy HPMoR.

Jan_Kulveit @ 2019-04-08T23:16 (+40)

I don't think anyone should be trying to persuade IMO participants to join the EA community, and I also don't think giving them "much more directly EA content" is a good idea.

I would prefer Math Olympiad winners to think about the long term, think better, and think independently, rather than to "join the EA community". HPMoR seems ok because it is not a book trying to convince you to join a community, but mostly a book about how to think, and a good read.

(If the readers eventually become EAs after reasoning independently, that's likely good; if they, for example, come to the conclusion that there are major flaws in EA and it's better to engage with the movement critically, that's also good.)

Habryka @ 2019-04-08T23:26 (+9)

Agree with this.

I do think there is value in showing them that there exists a community that cares a lot about the long-term future, and I do think there is some value in them collaborating with that community instead of going off and doing their own thing, but the first priority should be to help them think better and about the long term at all.

I think none of the other proposed books achieve this very well.

MorganLawless @ 2019-04-08T21:05 (+16)

Hello, first of all, thank you for engaging with my critique. I have some clarifications for your summary of my claims.

  1. Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.

  2. I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don't make the claim that it won't be effective at all. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.

  3. I'm not sure about this, but this was the impression the forum post gave me. If this is not the case, then, as I said, this grant displaces some other $28k in funding. What will that other $28k go to?

  4. Not necessarily that risky funds shouldn't be recommended as the go-to option, although that would be one way of resolving the issue. My main problem is that it is not abundantly clear that the Funds often make risky grants, so there is a lack of transparency for an EA newcomer. And while this particularly applies to the Long-Term Future Fund, given that it is harder to have evidence concerning the long-term future, it does apply to all the other funds.

Habryka @ 2019-04-10T00:08 (+15)

Sorry for the delay; others seem to have given a lot of good responses in the meantime, but here are my current responses to those concerns:

1. Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.

By word-count the HPMOR writeup is (I think) among the three longest writeups that I produced for this round of grant proposals. I think my reasoning is sufficiently strong, though it is obviously difficult for me to comprehensively explain all of my background models and reasoning in a way that allows you to verify that.

The core arguments that I provided in the writeup above seem sufficiently strong to me; they are not necessarily enough to convince a completely independent observer, but for someone with context on community building and on the general work being done on the long-term future, I expect them to successfully communicate the actual reasons why I think the grant is a good idea.

I generally think grantmakers should give grants to whatever interventions they think are likely to be most effective, while not constraining themselves to only account for evidence that is easily communicable to other people. They then should also invest significant resources into communicating whatever can be communicated about their reasons and intuitions and actively seek out counterarguments and additional evidence that would change their mind.

2. I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don't make the claim that it won't be effective at all. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.

This one has mostly been answered by other people in the thread, but here is my rough summary of my thoughts on this objection:

  • I don't think the aim of this grant should be "to recruit IMO and EGMO winners into the EA community". I think membership in the EA community is of relatively minor importance compared to helping them get traction in thinking about the long-term future, teaching them basic thinking tools, and giving them opportunities to talk to others who have similar interests.
    • I think from an integrity perspective it would be actively bad to try to persuade young high-school students to join the community. HPMoR is a good book to give because some of the IMO and EGMO organizers have read the book and found it interesting on its own merits, and would be glad to receive it as a gift. I don't think any of the other books you proposed would be received in the same way; I think they are much more likely to be received as advocacy material that is trying to recruit them to some kind of in-group.
    • Jan's comment summarized the concerns I have here reasonably well.
  • As Misha said, this grant is possible because the IMO and EGMO organizers are excited about giving out HPMoRs as prizes. It is not logistically feasible to give out other material that the organizers are not excited about (and I would be much less excited about a grant that would not go through the organizers of these events).
  • As Ben Pace said, I think HPMoR teaches skills that math olympiad winners lack. I am confident of this both because I have participated in SPARC events that tried to teach those skills to math olympiad winners, and because impact via intellectual progress is very heavy-tailed and the very best people tend to have a massively outsized impact with their contributions. Improving the reasoning and judgement ability of some of the best people on the planet strikes me as quite valuable.
3. I'm not sure about this, but this was the impression the forum post gave me. If this is not the case, then, as I said, this grant displaces some other $28k in funding. What will that other $28k go to?

Misha responded to this. There is no $28k that this grant is displacing; the counterfactual is likely that there simply wouldn't be any books given out at IMO or EGMO. All the organizers did was to ask whether they would be able to give out prizes, conditional on them finding someone to sponsor them. I don't see any problems with this.

4. Not necessarily that risky funds shouldn't be recommended as the go-to option, although that would be one way of resolving the issue. My main problem is that it is not abundantly clear that the Funds often make risky grants, so there is a lack of transparency for an EA newcomer. And while this particularly applies to the Long-Term Future Fund, given that it is harder to have evidence concerning the long-term future, it does apply to all the other funds.

My guess is that most of our donors would prefer us to feel comfortable making risky grants, but I am not confident of this. Our grant page does list the following under the section "Why might you choose to not donate to this fund?":

First, donors who prefer to support established organizations. The fund managers have a track record of funding newer organizations and this trend is likely to continue, provided that promising opportunities continue to exist.

This is the first and most prominent reason we list for why someone might not want to donate to this fund. This doesn't necessarily translate directly into risky grants, but I think it does communicate that we are trying to identify early-stage opportunities that are not necessarily associated with proven interventions and strong track records.

From a communication perspective, one of the top reasons why I invested so much time into this grant writeup is to be transparent about what kind of intervention we are likely to fund, and to help donors decide whether they want to donate to this fund. In any case, I will continue advocating for early-stage and potentially weird-looking grants for as long as I am part of the LTF board, and donors should know about that. If you have any specific proposed wording, I am also open to suggesting to the rest of the fund team that we update our fund page with that wording.

MorganLawless @ 2019-04-10T17:51 (+3)

Thanks for the response. I don’t have the time to draft a reply this week but I’ll get back to you next week.

jpaddison @ 2019-04-08T14:40 (+27)

Forgive me if you've written it up elsewhere, but do you have a plan for follow-ups? In particular what success looks like in each case.

Thanks for the detailed writeups and for investigating so many grants.

Habryka @ 2019-04-08T18:40 (+13)

I would quite like us to do follow-ups, but the LTF-Fund is primarily time-constrained and solid follow-ups require a level of continuous engagement that I think currently would be quite costly for any of the current fund members.

I do think we might want to look into adding some additional structure to the fund, where we maybe employ someone half-time to follow up with grantees, perform research, help with the writeups, etc. But I haven't thought that through yet.

For now, I expect to perform follow-up evaluations when the same people re-apply for a new grant, in which case I will want to look in detail into how the past grants we gave them performed. I expect a lot of our grantees to reapply, so I do expect this to result in a good amount of coverage. This way there are also real stakes to the re-evaluations, which overall makes me think that I (as well as anyone else who might take them on) would be more likely to do a good job at them.

Elityre @ 2019-04-11T00:52 (+24)

A small correction:

Facilitating conversations between top people in AI alignment (I’ve in particular heard very good things about the 3-day conversation between Eric Drexler and Scott Garrabrant that Eli facilitated)

I do indeed facilitate conversations between high-level people in AI alignment. I have a standing offer to help with difficult conversations / intractable disagreements between people working on x-risk or other EA causes.

(I'm aiming to develop methods for resolving the most intractable disagreements in the space. The more direct experience I have trying my existing methods against hard, "real" conversations, the faster that development process can go. So, at least for the moment, it actively helps me when people request my facilitation. And also, a number of people, including Eric and Scott, have found it to be helpful for the immediate conversation.)

However, I co-facilitated that particular conversation between Eric and Scott. The other facilitators were Eliana Lorch, Anna Salamon, and Owen Cotton-Barratt.

Habryka @ 2019-04-11T21:14 (+3)

Will update to say "help facilitate". Thanks for the correction!

Moses @ 2019-04-11T18:34 (+3)

Is there any resource (eg blogpost) for people curious about what "facilitating conversations" involves?

Elityre @ 2019-04-12T15:55 (+16)

At the moment, not really.

There's the classic Double Crux post. Also, here's a post I wrote that touches on one sub-skill (out of something like 50 to 70 sub-skills that I currently know). Maybe it helps give the flavor.

If I were to say what I'm trying to do in a sentence: "Help the participants actually understand each other." Most people generally underestimate how hard this is, which is a large part of the problem.

The good thing that I'm aiming for in a conversation is when "that absurd / confused thing that X-person was saying clicks into place, and it doesn't just seem reasonable, it seems like a natural way to think about the situation".

Another frame is, "Everything you need to do to make Double Crux actually work."

A quick list of things conversational facilitation, as I do it, involves:

  • Tracking the state of mind of the participants. Tracking what's at stake for each person.
  • Noticing when Double Illusion of Transparency, or talking past each other, is happening, and having the participants paraphrase or operationalize. Or in the harder cases, getting each view myself, and then acting as an intermediary.
  • Identifying Double Cruxes.
  • Helping the participants to track what's happening in the conversation and how this thread connects to the higher-level goals. Cleaving to the query.
  • Keeping track of conversational threads, and promising conversational tacks.
  • Drawing out and helping to clarify a person's inarticulate objections, when they don't buy an argument but can't say why.
  • Ontological translation: getting each participant's conceptual vocabulary to make natural sense to you, and then porting models and arguments back and forth between the differing conceptual vocabularies.

I don't know if that helps. (I have some unpublished drafts on these topics. Eventually they're to go on LessWrong, but I'm likely to publish rough versions on my musings and rough drafts blog first.)

Moses @ 2019-04-12T16:24 (+5)

Yes, that helps, thanks. "Mediating" might be a word which would convey the idea better.

Habryka @ 2019-05-29T02:35 (+20)

Feedback that I sent to Jeffrey Ladish about his application:

Excerpts from the application

I would like to spend five months conducting a feasibility analysis for a new project that has the potential to be built into an organization. The goal of the project would be to increase civilizational resilience to collapse in the event of a major catastrophe -- that is, to preserve essential knowledge, skills, and social technology necessary for functional human civilization.

The concrete results of this work would include an argument for why or why not a project aimed at rebuilding after collapse would be feasible, and at what scale.

Several scholars and EAs have investigated this question before, so I plan to build off existing work to avoid reinventing the wheel. In particular, [Beckstead 2014](https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf) investigates whether bunkers or shelters might help civilization recover from a major catastrophe. He enumerates many scenarios in which shelters would *not* be helpful, but concludes with two scenarios worthy of deeper analysis: “global food crisis” and “social collapse”. I plan to focus on “social collapse”, noting that a global food crisis may also lead to social collapse.

I expect my feasibility investigation to cover the following questions:

- Impact: what would it take for such a project to actually impact the far future?

- Tractability: what (if any) scope and scale of project might be both feasible *and* useful?

- Neglectedness: what similar projects already exist?

Example questions:

Impact:

- How fragile is the global supply chain? For example, how might humans lose the ability to manufacture semiconductors?

- What old manufacturing technologies and skills (agricultural insights? steam engine-powered factories?) would be most essential to rebuilding key capacities?

- What social structures would facilitate both survival through major catastrophes and coordination through rebuilding efforts?

Neglectedness:

- What efforts exist to preserve knowledge into the future (seed banks, book archives)? Human lives (private & public bunkers, civil defense efforts)?

Tractability:

- What funding might be available for projects aimed at civilizational resilience?

- Are there skilled people who would commit to working on such a project? Would people be willing to relocate to a remote location if needed?

- What are the benefits of starting a nonprofit vs. other project structures?

(3)

I believe the best feedback for measuring the impact of this research will be to solicit personal feedback on the quality of the feasibility argument I produce. I would like to present my findings to Anders Sandberg, Carl Shulman, Nick Beckstead, & other experts.

If I can present a case for a civilizational resilience project which those experts find compelling, I would hope to launch a project with that goal. Conversely, if I can present a strong case that such a project would not be effective, my work could deter others from pursuing an ineffective project.

My thoughts

I feel broadly confused about the value of working on improving the recovery from civilizational collapse, but overall feel more hesitant than enthusiastic. I have so far not heard of a civilization collapse scenario that seems likely to me and in which we have concrete precautions we can take to increase the likelihood of recovery.

Since I initially read your application, I have had multiple in-person conversations with both you and Finan Adamson, who used to work at ALLFED, and you both have much better models of the considerations around civilizational collapse than I do. This has made me understand your models a lot better, but has so far not updated me much towards civilizational collapse being both likely and tractable. However, I have updated upwards on the value of looking into this cause area in more depth and writing up the considerations around it, since I think there is enough uncertainty and potential value in this domain that getting more clarity would be worth quite a bit.

I think at the moment, I would not be that enthusiastic about someone building a whole organization around efforts to improve recovery chances from civilizational collapse, but do think that there is potentially a lot of value in individual researchers making a better case for that kind of work and mapping out the problem space more.

I think my biggest cruxes in this space are something like the following:

  • Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?
  • Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)
  • Are there major risks that have a chance to wipe out more than 90% of the population, but not all of it? My models of biorisk suggest it's quite hard to get to 90% mortality, and I think most nuclear winter scenarios also have less than a 90% food reduction impact
  • Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?
  • Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)

I think answering any mixture of these affirmatively could convince me that it is worth investing significantly more resources into this, and that it might make sense to divert resources from catastrophic (and existential) risk prevention to working on improved recovery from catastrophic events, which I think is the tradeoff I am facing with my recommendations.

I do think that a serious investigation into the question of recovery from catastrophic events is an important part of something like "covering all the bases" in efforts to improve the long-term future. However, the field is currently still resource-constrained enough that I don't think that is sufficient for me to recommend funding for it.

Overall, I think I am more positive on making a grant like this than when I first read the application, though not necessarily that much more. I have, however, updated positively on you in particular, and think that if we want someone to write up and perform research in this space, you are a decent candidate for it. This was partially a result of talking to you, reading some of your unpublished writing, and having some people I trust vouch for you, though I still haven't really investigated this whole area enough to be confident that the kind of research you are planning to do is really what is needed.

landfish @ 2020-02-10T06:07 (+9)

I want to give a brief update on this topic. I spent a couple of months researching civilizational collapse scenarios and came to some tentative conclusions. At some point I may write a longer post on this, but I think some of my other upcoming posts will address some of my reasoning here.

My conclusions after investigating potential collapse scenarios:

1) There are a number of plausible (>1% probability) scenarios in the next hundred years that would result in a "civilizational collapse", where an unprecedented number of people die and key technologies are (temporarily) lost.

2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years.

3) The highest-leverage point for intervention in a potential post-collapse environment would be at the state level. Individuals, even wealthy individuals, lack the infrastructure and human resources at the scale necessary to rebuild effectively. There are some decent mitigations possible in the space of information archival, such as seed banks and internet archives, but these are far less likely to have long-term impacts compared to state efforts.

Based on these conclusions, I decided to focus my efforts on other global risk analysis areas, because I felt I didn't have the relevant skills or resources to embark on a state-level project. If I did have those skills & resources, I believe (low to medium confidence) it would be a worthwhile project, and if I found a person or group who did possess those skills / resources, I would strongly consider offering my assistance.

joshjacobson @ 2020-11-01T04:29 (+3)

1) There are a number of plausible (>1% probability) scenarios in the next hundred years that would result in a "civilizational collapse", where an unprecedented number of people die and key technologies are (temporarily) lost.

Are you saying here that you believe the scenarios add up to a greater than 1% probability of collapse in the next hundred years, or that you believe there are multiple scenarios that each have greater than 1% probability?

landfish @ 2020-02-10T09:47 (+8)

Some quick answers to your questions based on my current beliefs:

  • Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?

I think the answer in the short term is no, if "completely collapses" means something like "is unable to get back to at least 1950s-level technology in 500 years". I do think there are a number of things that could reduce humanity's "technological carrying capacity". I'm currently working on explicating some of these factors, but some examples would be drastic climate change, long-lived radionuclides, and an increase in persistent pathogens.

  • Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)

I think we can. I'm not sure we can get very confident about exactly which potential bottlenecks will prove most significant, but I think we can narrow the search space and put forth some good hypotheses, both by reasoning from the best reference class examples we have and by thinking through the economics of potential scenarios.

  • Are there major risks that have a chance to wipe out more than 90% of the population, but not all of it? My models of biorisk suggest it's quite hard to get to 90% mortality, and I think most nuclear winter scenarios also have less than a 90% food reduction impact

I'm not sure about this one. I can think of some scenarios that would wipe out 90%+ of the population, but none of them seem very likely. Engineered pandemics seem like one candidate (I agree with Denkenberger here), and the worst-case nuclear winter scenarios might also do it, though I haven't read the nuclear winter papers in a while, and there have been several new papers and comments in the last year, including real disagreement in the field (yay, finally!)

  • Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?

Population seems like one important variable in our technological carrying capacity, but I expect some of the others are just as important. The one I mentioned in my other post, which I think is a huge one, is state planning & coordination capacity. I think post-WWII Germany and Japan illustrate this quite well. However, I don't have a very good sense of what might cause most states to fail without also destroying a large part of the population at the same time. But what I'm saying is that the population factor might not be the most important one in those scenarios.

  • Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)

I'm very uncertain about this. I do think there is a good case for interventions aimed at improving the existential risk profile of post-disaster civilization being competitive with interventions aimed at improving the existential risk profile of our current civilization. The gist is that there is far less competition for the former interventions. Of course, given the huge uncertainties about both the circumstances of global catastrophes and the potential intervention points, it's hard to say whether it would be possible to actually alter the post-disaster civilization's profile at all. However, it's also hard to say whether we can alter the current civilization's profile at all, and it's not obvious to me that this latter task is easier.

HowieL @ 2020-02-10T15:28 (+7)

You say no to "Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?" and say "2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years."


I feel like I'd much better understand what you mean if you were up for giving some probabilities here even if there's a range or they're imprecise or unstable. There's a really big range within "likely" and I'd like some sense of where you are on that range.

Denkenberger @ 2019-06-12T04:24 (+7)

It is very helpful to see your reasoning and cruxes. I reply to the ALLFED-related issues above, but I thought I would reply to the pandemic issue here. Here is one mechanism that could result in greater than 90% mortality from a pandemic: multiple diseases at the same time, i.e. a multipandemic.

Habryka @ 2019-05-16T03:23 (+20)

This is the (very slightly edited) feedback that I sent to GCRI based on their application (caveat that GCR-policy is not my expertise and I only had relatively weak opinions in the discussion around this grant, so this should definitely not be seen as representative of the broader opinion of the fund):

I was actually quite positive on this grant, so the primary commentary I can provide is a summary of what would have been sufficient to move me to be very excited about the grant.
Overall, I have to say that I was quite positively surprised after reading a bunch of GCRI's papers, which I had not done before (in particular the paper that lists and analyzes all the nuclear weapon close-calls).
I think the biggest thing that made me hesitant about strongly recommending GCRI is that I don't have a great model of who GCRI is trying to reach. I am broadly not super excited about reaching out to policy makers at this stage of the GCR community's strategic understanding, and am confused enough about policy capacity-building that I feel uncomfortable making strong recommendations based on my models there. I do have some models of capacity-building that suggest some concrete actions, but those have more to do with building functional research institutions that are focused on recruiting top-level talent to think more about problems related to the long-term future.
I noticed that while I ended up being quite positively surprised by the GCRI papers, I hadn't read any of them up to that point, and neither had any of the other fund members. This made me think that we are likely not the target audience of those papers. And while I did find them useful, I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.
I think the key thing that I would need in order to be very excited about GCRI is to understand and be excited by the target group that GCRI is trying to communicate to. My current model suggests that GCRI is primarily trying to reach existing policy makers, which seems unlikely to contribute much to furthering the conceptual progress around global catastrophic risks.

Seth wrote a great response that I think he is open to posting to the forum.

SethBaum @ 2019-05-17T04:34 (+12)

Oliver Habryka's comments raise some important issues, concerns, and ideas for future directions. I elaborate on these below. First, I would like to express my appreciation for his writing these comments and making them available for public discussion. Doing this on top of the reviews themselves strikes me as quite a lot of work, but also very valuable for advancing grant-making and activity on the long-term future.

My understanding of Oliver's comments is that while he found GCRI's research to be of a high intellectual quality, he did not have confidence that the research is having sufficient positive impact. There seem to be four issues at play: GCRI’s audience, the value of policy outreach on global catastrophic risk (GCR), the review of proposals on unfamiliar topics, and the extent to which GCRI’s research addresses fundamental issues in GCR.

(1) GCRI’s audience

I would certainly agree that it is important for research to have a positive impact on the issues at hand and not just be an intellectual exercise. To have an impact, it needs an audience.

Oliver's stated impression is that GCRI's audience is primarily policy makers, and not the EA long-term future (EA-LTF) community or global catastrophic risk (GCR) experts. I would agree that GCRI's audience includes policy makers, but I would disagree that our audience does not include the EA-LTF community or GCR experts. I would add that our audience also includes scholars who work on topics adjacent to GCR and can make important contributions to GCR, as well as people in other relevant sectors, e.g. private companies working on AI. We try to prioritize our outreach to these audiences based on what will have the most positive impact on reducing GCR given our (unfortunately rather limited) resources and our need to also make progress on the research we are funded for. We very much welcome suggestions on how we can do this better.

The GCRI paper that Oliver described ("the paper that lists and analyzes all the nuclear weapon close-calls") is A Model for the Probability of Nuclear War. This paper is indeed framed for policy audiences, which was in part due to the specifications of the sponsor of this work (the Global Challenges Foundation) and in part because the policy audience is the most important audience for work on nuclear weapons. It is easy to see how reading that paper could suggest that policy makers are GCRI's primary audience. Nonetheless, we did manage to embed some EA themes into the paper, such as the question of how much nuclear war should be prioritized relative to other issues. This is an example of us trying to stretch our limited resources in directions of relevance to wider audiences, including EA.

Some other examples: Long-term trajectories of human civilization was largely written for audiences of EA-LTF, GCR experts, and scholars of adjacent topics. Global Catastrophes: The Most Extreme Risks was largely written for the professional risk analysis community. Reconciliation between factions focused on near-term and long-term artificial intelligence was largely written for… well, the title speaks for itself, and is a good example of GCRI engaging across multiple audiences.

The question of GCRI’s audience is a detail for which an iterative review process could have helped. Had GCRI known that our audience would be an important factor in the review, we could have spoken to this more clearly in our proposal. An iterative process would increase the workload, but perhaps in some cases it would be worth it.

(2) The value of policy outreach

Oliver writes, “I am broadly not super excited about reaching out to policy makers at this stage of the GCR community's strategic understanding, and am confused enough about policy capacity-building that I feel uncomfortable making strong recommendations based on my models there.”

This is consistent with comments I've heard expressed by other people in the EA-LTF-GCR community, and some colleagues report hearing things like this too. The general trend has been that people within this community who are not active in policy outreach are much less comfortable with it than those who are. This makes sense, but it also is a problem that holds us back from having a larger positive impact on policy. This includes GCRI’s funding and the work that the funding supports, but it is definitely bigger than GCRI.

This is not the space for a lengthy discussion of policy outreach. For now, it suffices to note that there is considerable policy expertise within the EA-LTF-GCR community, including at GCRI and several other organizations. There are some legitimately tricky policy outreach issues, such as in drawing attention to certain aspects of risky technologies. Those of us who are active in policy outreach are very attentive to these issues. A lot of the outreach is more straightforward, and a nontrivial portion is actually rather mundane. Improving awareness about policy outreach within the EA-LTF-GCR community should be an ongoing project.

It is also worth distinguishing between policy outreach and policy research. Much of GCRI's policy-oriented work is the latter. The research can and often does inform the outreach. Where there is uncertainty about what policy outreach to do, policy research is an appropriate investment. While I'm not quite sure what is meant by "this stage of the GCR community's strategic understanding", there's a good chance that this understanding could be improved by research by groups like GCRI, if we were funded to do so.

(3) Reviewing proposals on unfamiliar topics

We should in general expect better results when proposals are reviewed by people who are knowledgeable of the domains covered in the proposals. Insofar as Oliver is not knowledgeable about policy outreach or other aspects of GCRI's work, then arguably someone else should have reviewed GCRI’s proposal, or at least these aspects of GCRI’s proposal.

This makes me wonder if the Long-Term Future Fund may benefit from a more decentralized review process, possibly including some form of peer review. It seems like an enormous burden for the fund’s team to have to know all the nuances of all the projects and issue areas that they could be funding. I certainly would not want to do all that on my own. It is common for funding proposal evaluation to include peer review, especially in the sciences. Perhaps that could be a way for the fund’s team to lighten its load while bringing in a wider mix of perspectives and expertise. I know I would volunteer to review some proposals, and I'm confident at least some of my colleagues would too.

It may be worth noting that the sciences struggle to review interdisciplinary funding proposals. Studies report a perceived bias against interdisciplinary proposals: “peers tend to favor research belonging to their own field” (link), so work that cuts across fields is funded less. Some evidence supports this perception (link). GCRI’s work is highly interdisciplinary, and it is plausible that this creates a bias against us among funders. Ditto for other interdisciplinary projects. This is a problem because a lot of the most important work is cross-cutting and interdisciplinary.

(4) GCRI’s research on fundamental issues in GCR

As noted above, GCRI does work for a variety of audiences. Some of our work is not oriented toward fundamental issues in GCR. But here is some that is:

* Long-term trajectories of human civilization is on (among other things) the relative importance of extinction vs. sub-extinction risks.

* The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives is on strategy for how to reduce GCR in a world that is mostly not dedicated to reducing GCR.

* Towards an integrated assessment of global catastrophic risk outlines an agenda for identifying and evaluating the best ways of reducing the entirety of global catastrophic risk.

See also our pages on Cross-Risk Evaluation & Prioritization, Solutions & Strategy, and perhaps also Risk & Decision Analysis.

Oliver writes “I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.” He can speak for himself on what he sees the fundamental confusions as being, but I find it hard to conclude that GCRI’s work is not substantially oriented toward fundamental issues in GCR.

I will note that GCRI has always wanted to focus primarily on the big cross-cutting GCR issues, but we have never gotten significant funding for it. Instead, our funding has gone almost exclusively to more narrow work on specific risks. That is important work too, and we are grateful for the funding, but I think a case can be made for more support for cross-cutting work on the big issues. We still find ways to do some work on the big issues, but our funding reality prevents us from doing much.

Habryka @ 2019-05-17T05:51 (+8)

The question of GCRI’s audience is a detail for which an iterative review process could have helped. Had GCRI known that our audience would be an important factor in the review, we could have spoken to this more clearly in our proposal. An iterative process would increase the workload, but perhaps in some cases it would be worth it.

I want to make sure that there isn't any confusion about this: When I do a grant writeup like the one above, I am definitely only intending to summarize where I am personally coming from. The LTF-Fund had 5 voting members last round (and will have 4 in the coming rounds), and so my assessment is necessarily only a fraction of the total assessment of the fund.

I don't currently know whether the question of the target audience would have been an important consideration for the other fund members, and given that I already gave a positive recommendation, their cruxes and uncertainties would actually have been more important to address than my own.

Habryka @ 2019-05-17T06:13 (+11)

On the question of whether we should have an iterative process: I do view publishing these LTF responses as part of an iterative process. Given that we are planning to review applications every few months, your responding to what I wrote allows us to update on your responses for the next round, which will be relatively soon.

SethBaum @ 2019-05-18T05:03 (+5)

I do view publishing these LTF responses as part of an iterative process.

That makes sense. I might suggest making this clear to other applicants. It was not obvious to me.

SethBaum @ 2019-05-18T04:59 (+2)

Thanks, this is good to know.

Habryka @ 2019-05-17T05:47 (+8)

(Breaking things up into multiple replies, to make things easier to follow, vote on, and reply to)

As noted above, GCRI does work for a variety of audiences. Some of our work is not oriented toward fundamental issues in GCR. But here is some that is:
* Long-term trajectories of human civilization is on (among other things) the relative importance of extinction vs. sub-extinction risks.
* The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives is on strategy for how to reduce GCR in a world that is mostly not dedicated to reducing GCR.
* Towards an integrated assessment of global catastrophic risk outlines an agenda for identifying and evaluating the best ways of reducing the entirety of global catastrophic risk.
See also our pages on Cross-Risk Evaluation & Prioritization, Solutions & Strategy, and perhaps also Risk & Decision Analysis.
Oliver writes “I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.” He can speak for himself on what he sees the fundamental confusions as being, but I find it hard to conclude that GCRI’s work is not substantially oriented toward fundamental issues in GCR.

Of those, I had read "Long-term trajectories of human civilization" and "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" before I made my recommendation (which I want to clarify was a broadly positive recommendation, just not a very positive recommendation).

I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I've read and I predict that other people who have thought about global catastrophic risks for a while would feel the same. My impression was that they were mostly retreading and summarizing old ground, while being more difficult to read and of lower quality than most of the writing that already exists on this topic (a lot of it published by FHI, and a lot of it written on LessWrong and the EA Forum).

I also generally found the arguments in them not particularly compelling (in particular, I found the arguments in "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" relatively weak, and thought that it failed to really make a case for significant convergent benefits of long-term and short-term concerns. The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. Happy to go into more depth on that).

I highlighted "A model for the probability of nuclear war" not because it was the only paper I read (I read about 6 GCRI papers when doing the review and two more since then), but because it was the paper that did actually feel to me like it was helping me build a better model of the world, and something that I expect to be a valuable reference for quite a while. I actually don't think that applies to any of the three papers you linked above.

I don't currently have a great operationalization of what I mean by "fundamental confusions around global catastrophic risks", so I am sorry for not being able to be more clear on this. One kind of bad operationalization might be "research that would give the best people at FHI, MIRI and Open Phil a concrete sense of being able to make better decisions in the GCR space". It seems plausible to me that you are currently aiming to write some papers with a goal like this in mind, but I don't think most of GCRI's papers achieve that. "A model for the probability of nuclear war" did feel like a paper that might actually achieve that, though from what you said it might not actually have had that goal.

SethBaum @ 2019-05-18T05:23 (+7)

I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I've read and I predict that other people who have thought about global catastrophic risks for a while would feel the same.

That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts. The integrated assessment paper in particular describes an agenda and is not intended to have much in the way of original conclusions.

The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. Happy to go into more depth on that.

I would be quite interested in further thoughts you have on this. I’ve actually found that the central ideas of the far future argument paper have held up quite well, possibly even better than I had originally expected. Ditto for the primary follow-up to this paper, “Reconciliation between factions focused on near-term and long-term artificial intelligence”, which is a deeper dive on this theme in the context of AI. Some examples of work that is in this spirit:

· Open Philanthropy Project’s grant for the new Georgetown CSET group, which pursues “opportunities to inform current and future policies that could affect long-term outcomes” (link)

· The study The Malicious Use of Artificial Intelligence, which, despite being led by FHI and CSER, is focused on near-term and sub-existential risks from AI

· The paper Bridging near- and long-term concerns about AI by Stephen Cave and Seán S. ÓhÉigeartaigh of CSER/CFI

All of these are more recent than the GCRI papers, though I don’t actually know how influential GCRI’s work was in any of the above. The Cave and ÓhÉigeartaigh paper is the only one that cites our work, and I know that some other people have independently reached the same conclusion about synergies between near-term and long-term AI. Even if GCRI’s work was not causative in these cases, these data points show that the underlying ideas have wider currency, and that GCRI may have been (probably was?) ahead of the curve.

One kind of bad operationalization might be "research that would give the best people at FHI, MIRI and Open Phil a concrete sense of being able to make better decisions in the GCR space".

That’s fine, but note that those organizations have much larger budgets than GCRI. Of them, GCRI has closest ties to FHI. Indeed, two FHI researchers were co-authors on the long-term trajectories paper. Also, if GCRI was to be funded specifically for research to improve the decision-making of people at those organizations, then we would invest more in interacting with them, learning what they don't know / are getting wrong, and focusing our work accordingly. I would be open to considering such funding, but that is not what we have been funded for, so our existing body of work may be oriented in an at least somewhat different direction.

It may also be worth noting that the long-term trajectories paper functioned as more of a consensus paper, and so I had to be more restrained with respect to bolder and more controversial claims. To me, the paper’s primary contributions are in showing broad consensus for the topic, integrating the many co-authors’ perspectives into one narrative, breaking ground especially in the empirical analysis of long-term trajectories, and providing entry points for a wider range of researchers to contribute to the topic. Most of the existing literature is primarily theoretical/philosophical, but the empirical details are very important. (The paper also played a professional development role for me in that it gave me experience leading a massively-multi-authored paper.)

Given the consensus format of the paper, I was intrigued that the co-author group was able to support the (admittedly toned down) punch-line in the conclusion “contrary to some claims in the catastrophic risk literature, extinction risks may not be categorically more important than large subextinction risks”. A bolder/more controversial idea that I have a lot of affinity for is that the common emphasis on extinction risk is wrong, and that a wider—potentially much wider—set of risks merits comparable concern. Related to this is the idea that “existential risk” is either bad terminology or not the right thing to prioritize. I have not yet had the chance to develop these ideas exactly as I see them (largely due to lack of funding for it), but the long-term trajectories paper does cover a lot of the relevant ground.

(I have also not had the chance to do much to engage the wider range of researchers who could contribute to the topic, again due to lack of funding for it. These would mainly be researchers with expertise on important empirical details. That sort of follow-up is a thing that funding often goes toward, but we didn't even have dedicated funding for the original paper, so we've instead focused on other work.)

Overall, the response to the long-term trajectories paper has been quite positive. Some public examples:

· The 2018 AI Alignment Literature Review and Charity Comparison, which wrote: “The scope is very broad but the analysis is still quite detailed; it reminds me of Superintelligence a bit. I think this paper has a strong claim to becoming the default reference for the topic.”

· A BBC article on the long-term future, which calls the paper “intriguing and readable” and then describes it in detail. The BBC also invited me to contribute an article on the topic for them, which turned into this.

Raemon @ 2019-05-21T01:21 (+8)

That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts.

Just wanted to make a quick note that I also felt the "overview"-style posts weren't very useful to me (since they mostly encapsulate things I had already thought about).

At some point I was researching some aspects of nuclear war and reading up on a relevant GCRI paper, and what I found myself really wishing was that the paper had just drilled deep into whatever object-level, empirical data was available, rather than being a high-level summary.

SethBaum @ 2019-05-21T15:28 (+1)

Thanks, that makes sense. This is one aspect in which audience is an important factor. Our two recent nuclear war model papers (on the probability and impacts) were written to be accessible to wider audiences, including audiences less familiar with risk analysis. This is of course a factor for all research groups that work on topics of interest to multiple audiences, not just GCRI.

Habryka @ 2019-05-17T04:59 (+8)

Thanks for posting the response! Some short clarifications:

We should in general expect better results when proposals are reviewed by people who are knowledgeable of the domains covered in the proposals. Insofar as Oliver is not knowledgeable about policy outreach or other aspects of GCRI's work, then arguably someone else should have reviewed GCRI’s proposal, or at least these aspects of GCRI’s proposal.

My perspective only played a partial role in the discussion of the GCRI grant, since I am indeed not the person with the most policy expertise on the fund. It just so happens that I am also the person who had the most resources available for writing things up for public consumption, so I wouldn't update too much on my specific feedback. That said, my perspective might still be useful for understanding the experience of people closer to my level of expertise, of whom there are many, and I do obviously think there is important truth to it (and sharing it obviously helps me build better models of the policy space, which I do think is valuable).

It may be worth noting that the sciences struggle to review interdisciplinary funding proposals. Studies report a perceived bias against interdisciplinary proposals: “peers tend to favor research belonging to their own field” (link), so work that cuts across fields is funded less. Some evidence supports this perception (link). GCRI’s work is highly interdisciplinary, and it is plausible that this creates a bias against us among funders. Ditto for other interdisciplinary projects. This is a problem because a lot of the most important work is cross-cutting and interdisciplinary.

I strongly agree with this, and also think that a lot of the best work is cross-cutting and interdisciplinary. I think the degree to which things are interdisciplinary is part of the reason why there is a shortage of EA grantmaking expertise. Part of my hope with facilitating public discussion like this is to help me and other people in grantmaking positions build better models of domains where we have less expertise.

SethBaum @ 2019-05-18T05:29 (+3)

All good to know, thanks.

I'll briefly note that I am currently working on a more extended discussion of policy outreach suitable for posting online, possibly on this site, that is oriented toward improving the understanding of people in the EA-LTF-GCR community. It's not certain I'll have the chance to complete it given my other responsibilities, but hopefully I will.

Also if it would help I can provide suggestions of people at other organizations who can give perspectives on various aspects of GCRI's work. We could follow up privately about that.

Peter_Hurford @ 2019-07-13T00:12 (+18)

Question: How funding constrained do you feel like the Long-Term Future Fund is? Do you feel like you get to make essentially every grant you think you'd reasonably want to make or are there more awesome grants you would've made if only the fund had raised more money?

Habryka @ 2019-07-14T18:31 (+6)

Stefan Torges from REG recently asked me about our room for more funding, and I sent him the following response:

About the room for funding question, here are my rough estimates (this is for money in addition to our expected donations of about $1.6M per year): 
75% confidence threshold: ~$1M
50%: ~$1.5M
25%: ~$3M 
10%: ~$5M
Happy to provide more details on what kind of funding I would expect in the different scenarios. 

The value of these marginal grants doesn't feel like it would be more than 20% lower than that of our current worst grants, since in every round I feel like there is a large number of grants that are highly competitive with the lowest-ranked grants we do make.

In other words, I think we have significant room for funding at about the quality level of grants we are currently making.

tessa @ 2021-06-01T02:29 (+17)

The other organizers of Catalyst (the eventual name of "A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers") and I recently wrote up a retrospective on the project, which may be of interest to people trying to understand how our LTFF funding was put to use.

tcheasdfjkl @ 2019-04-08T05:28 (+17)

"Mikhail Yagudin ($28,000): Giving copies of Harry Potter and the Methods of Rationality to the winners of EGMO 2019 and IMO 2020"

Why does this cost so much?

Habryka @ 2019-04-08T05:36 (+11)

It's a pretty large number of books, from the application:

Giving HPMoRs out would allow EA or Rationalist communities to establish initial contact with about 650 gifted students (~200 for EGMO and ~450 for IMO)
matthew.vandermerwe @ 2019-04-08T13:31 (+16)

$43/unit is still quite high - could you elaborate a bit more?

Misha_Yagudin @ 2019-04-08T16:50 (+22)

Hi Matthew,

1. $43/unit is an upper bound. While submitting the application, I was uncertain about the price of on-demand printing. My current best guess is that EGMO book sets will cost $34–40 each. I expect the printing cost for IMO to be lower (economies of scale).

2. HPMOR is quite long (~2007 pages according to Goodreads). Each EGMO book set consists of 4 hardcover books.

3. There is an opportunity to trade off money against prestige by printing only the first few chapters.

matthew.vandermerwe @ 2019-04-09T09:31 (+25)

Thanks for clarifying, that seems reasonable.

FWIW I share the view that sending all 4 volumes might not be optimal. I think I'd find it a nuisance to receive such a large/heavy item (~3 litres/~2kg by my estimate) unsolicited.

RyanCarey @ 2019-04-08T17:17 (+18)

It's a bit surprising to me that you'd want to send all four volumes.

alexlintz @ 2019-04-09T07:26 (+21)

Yeah, I tend to agree that sending the whole thing is unnecessary. The printed version of the first 17 chapters distributed at CFAR workshops (I think; I haven't actually been to one) is enough to get people engaged enough to move to the online medium. I'm guessing that sending just that small-looking book will make people more likely to read it, as seeing a 2,000-page book would definitely be intimidating enough to stop many from actually starting.

I do tend to think giving the print version is useful, as it creates some sort of reciprocity which should incentivize reading it.

Habryka @ 2019-04-08T18:36 (+12)

I think it's worth trying. My model is that making a good first impression on a top IMO performer is easily worth $50+, and the logistics play out such that you pay roughly $30 more to make a significantly better impression than you would by handing out a small 16-chapter booklet, which seems worth it.

RobBensinger @ 2019-04-09T06:21 (+42)

Money-wise this strikes me as a fine thing to try. I'm a little worried that sending people the entire book set might cause some people to not read it who would have read a booklet, because they're intimidated by the size of the thing.

Psychologically, people generally need more buy-in to decide "I'll read the first few chapters of this 1800-page multi-volume book and see what I think" than to decide "I'll read the first few chapters of this 200-page book that has five sequels and see what I think", and even if the intended framing is the latter one, sending all 1800 pages at once might cause some people to shift to the former frame.

One thing that can help with this is to split HPMoR up into six volumes rather than four, corresponding to the book boundaries Eliezer proposed (though it seems fine to me if they're titled 'HPMoR Vol. 1' etc.). Then the first volume or two will be shorter, and feel more manageable. Then perhaps just send the first 3 (or 2?) volumes, and include a note saying something like 'If you like these books, shoot us an email at [email] and we'll ship you the second half of the story, also available on hpmor.com.'

This further carves up the reading into manageable subtasks in a physical, perceptual way. It does carry the risk that some people might stop when they get through the initial volumes. It might be a benefit in its own right to cause email conversations to happen, though, since a back-and-forth can lead to other useful things happening.

Habryka @ 2019-04-09T17:12 (+13)

The thing that makes me more optimistic here is that the organizers of IMO and EGMO have themselves read HPMoR, and that the books are (as far as I understand it) handed out as part of the prize packages of IMO and EGMO.

I think this makes it more natural to award a large significant-seeming prize, and also comes with a strong encouragement to actually give the books a try.

My model is that only awarding the first book would feel a lot less significant. My current models of human psychology also suggest that, while some people will feel intimidated by the length of the book, the combined effect of being given a much smaller-seeming gift plus the inconvenience of having to send an email, fill out a form, or go to a website to continue reading is larger than the effect of the book's size being overwhelming.

The other thing that having full physical copies enables is book-lending. I printed a full copy of HPMoR a few years ago and have lent it out to at least 5 people, maybe one of whom would have read the book if I had just sent them a link or lent them only the first few chapters (I have given out the small booklets and generally had less success with that than with lending parts of my whole printed book series).

However, I am not super confident of this, and the tradeoff strikes me as relatively close. Yesterday I also had a longer conversation about this on the EA Corner Discord, and after chatting with me for a while a lot of people seemed to think that giving out the whole book was a better idea, though it did take a while, which is some evidence of inferential distance.

RobBensinger @ 2019-04-09T19:26 (+3)

That all makes sense. In principle I like the idea of trying both options at some point, in case one turns out to be obviously better. I do think that splitting things up into 6 books is better than 4, if costs allow, so that the first chunk of effort feels smaller.

Habryka @ 2019-04-09T20:06 (+11)

I do agree with that, and this also establishes a canonical way of breaking the books up into parts. @Misha: Do you think that's an option?

Misha_Yagudin @ 2019-04-11T17:40 (+11)

Oliver, Rob, and others: thank you for your thoughts.
1. I don't think that experimenting with the variants is an option for EGMO [severe time constraints].
2. For IMO we have more than enough time, and I will incorporate the feedback and considerations into my decision-making.

BryonyMC @ 2019-04-21T19:19 (+1)

Food for thought, in terms of maximizing the value of experimenting with distribution: an alternative approach would be to print only the first book and distribute it at the math olympiads, then invest the rest of the money into converting HPMOR into a podcast/audiobook that can be shared more widely, and into outlining a “next steps” resource to guide readers. If distributing the books fails (depending on your definition of distribution being a “success”), you avoid sinking $28k into books sitting on shelves at home, and you now have a widely available podcast (accessible for free or for a small donation) that can increase HPMOR’s reach over time. (FYI, the funds raised through small donations for access could be used to sponsor future printings for youth competitions.)

A podcast or a revamped online version becomes a renewable resource, whereas once those books are distributed, they (and the money) are gone. For those interested, the model that comes to mind is HP and the Sacred Text. Using Harry Potter to convey certain ideas or messages is not uncommon given its global reach. HPST is using it for different reasons, obviously, but HOW they are distributing the idea might be worth pursuing with HPMOR too. HP Alliance is another group using HP to convey a message (their focus is on political and social activism). HPMOR could have greater long-term value if there were alternative methods for accessing it beyond a 2,000-page series.

Ben Pace @ 2019-04-21T21:02 (+11)

A high-quality podcast has already been made (for free, by the excellent fanbase). It’s at www.hpmorpodcast.com.

BryonyMC @ 2019-04-21T22:33 (+6)

This is great, thank you! Surprised I haven't stumbled across this before... Even better that it's already an available resource; it seems worth sharing with the IMO students and other relevant groups (which was the essence of my suggestion above).

Denkenberger @ 2019-04-08T06:42 (+7)

And why so much focus on math rather than science/engineering?

Habryka @ 2019-04-08T07:08 (+7)

I've considered grants to give books out at more engineering-focused competitions (the same group receiving the current grant also asked whether we would be interested in giving out books to other competition communities), but I currently think the value of math olympiads is likely to be the highest, for the following reasons:

1. There are positive feedback loops in having other institutions in place to serve as a point of contact for people who end up being inspired by the books. For math olympiad winners we have SPARC and ESPR as well as a broader existing network of people engaged with the math olympiad community. This is less the case for other competitions.

2. My sense is that, of the olympiad and competition communities, the math olympiad community is the largest and tends to attract the best people.

3. I think mathematical skill is more directly predictive of general intelligence than other skills, and also seems more relevant to some of the problems that I am most concerned about solving, like technical problems in AI alignment.

I am thinking about recommending grants to additionally give books to be handed out at other competitions, but I think we should wait and see how these grants play out before we invest more resources into giving out books in this way.

Misha_Yagudin @ 2019-04-08T17:12 (+17)

A bit of a tangent to #3: it seems to me that solving AI alignment requires breakthroughs, and the demographic we are targeting is potentially very well equipped to produce them.

According to “Invisible Geniuses: Could the Knowledge Frontier Advance Faster?” (Agarwal & Gaule 2018), IMO gold medalists are 50x more likely to win a Fields Medal than PhD graduates of US top-10 math programs. (h/t Gwern)

Jonas Vollmer @ 2019-04-08T14:39 (+10)

On #3, this goes in a similar direction.

Milan_Griffes @ 2019-04-09T00:47 (+15)
Overall, I think it’s likely that staff at highly valuable EA orgs will continue burning out, and I don’t currently see it as an achievable target to not have this happen (though I am in favor of people working on solving the problem).

Very curious to read more about your view on this at some point (perhaps would be best as a standalone post).

From my present vantage point, if it's likely that staff at EA orgs will continue burning out in a nonstochastic way, working to address that seems incredibly leveraged.

Broadly, poor mental health & burnout seem quite tractable. See:

And perhaps there are tractable things that could be changed about the organizational & social cultures in which employees of these orgs exist.

Habryka @ 2019-04-09T02:33 (+18)

I agree that I might want to write a top-level post about this at some point. Here is a super rough version of my current model:

To do things as difficult as the things EAs are trying to do, you usually need someone to throw basically everything they have behind it, similar to my model of early-stage startups. At the same time, your success rates won't be super high, because the problems we are trying to solve are often of massive scale, often lack concrete feedback loops, and don't have many proven solutions.

And even if you succeed to some degree, it's unlikely that you will be rewarded with an amount of status or resources comparable to what you would get from building a successful startup. My model is that EA org success tends to look weird and not really translate into wealth or status in the broader world. This puts a large cognitive strain on you, in particular given the tendency toward high scrupulosity in the community, by introducing cognitive dissonance between your personal benefit and your moral ideals.

This is combined with an environment that is starved of management capacity, and so has very little room to give people feedback on their plans and actions.

Overall I expect a high rate of burnout to be inevitable for quite a while to come, and even in the long-run I don't expect that we can do much better than startup founders, at least for a lot of the people who join early-stage organizations.

Milan_Griffes @ 2019-04-09T05:46 (+6)

Thanks for this.

Overall I expect a high rate of burnout to be inevitable for quite a while to come, and even in the long-run I don't expect that we can do much better than startup founders, at least for a lot of the people who join early-stage organizations.

There's more to say here, but for now I'll just note that everything in the model above this paragraph is compatible with a world where burnout & mental health are very tractable & very leveraged (and also compatible with a world where they aren't):

  • "throwing everything you have towards the problem" – nudge work norms, group memes, and group myths toward more longterm thinking (e.g. Gwern's interest in Long Content and the Long Now)
  • "massive scale problems" – put more effort towards chunking the problems into easy-to-operationalize chunks
  • "lack of concrete feedback loops" – build more concrete feedback loops, and/or build work methodologies that don't rely on concrete feedback loops (e.g. Wiles' proof of Fermat's Last Theorem)
  • "lack of proven solutions" – prove out solutions, and study what has worked for longterm-thinking cultures in the past. (Some longterm-thinking cultures: China, the Catholic Church, most of Mahayana Buddhism, Judaism)
  • "high-scrupulosity culture" – nudge the culture towards a lower-neuroticism equilibrium
  • "starved on management capacity" – study what has worked for great managers & great institutions in the past, distill lessons from that, then build a culture that trains up strong managers internally and/or attracts great managers from the broader world

Also there's the more general strategy of learning about cultures where burnout isn't a problem (of which there are many), and figuring out what can be brought from those cultures to EA.

Dale @ 2019-06-06T00:58 (+14)

I found the article impressively detailed in laying out your reasoning, and it gives me significantly more confidence that the fund will be funding the sort of smaller opportunities that individual donors might have trouble accessing otherwise. It provides much more detail than I would have expected, on a wide variety of generally good projects. I'm also pleased about the geographic spread. So nice one!

In contrast to some other commenters, I have no objection to the HPMOR project. While I can see some potential downsides, it seems like it plausibly could be quite good if implemented sensitively, and shouldn't be dismissed out of hand.

I am a little more skeptical of the Lauren Lee grant, however. There could be value in supporting promising new people trying something new, like many of Alex Zhu's grants. However, that doesn't seem to apply to someone who has already worked in the sector for two years. At this point we should be expecting significantly more concrete evidence, and the evidence we do have here (burning out at CFAR, lack of ability to finish projects to completion) does not seem entirely positive.

We might also look for a set of highly impactful planned outputs. However, the actual list does not seem to meet these criteria:

A program where I do 1-on-1 sessions with individuals or orgs; I’d create reports based on whether they self-report improvements

X-risk orgs (e.g. FHI, MIRI, OpenPhil, BERI, etc.) deciding to spend time/money on my services may be a positive indicator, as they tend to be thoughtful with how they spend their resources

Writings or talks

Workshops with feedback forms

A more effective version of myself (notable changes = gaining the ability to ride a bike / drive a car / exercise—a PTSD-related disability, ability to finish projects to completion, others noticing stark changes in me)

These seem to be a mixture of CFAR-like things (raising the question of why an ex-CFAR employee is better placed to provide them than CFAR) and activities that, while good, are not something that I would expect the fund to support (feedback forms, learning to ride a bike).

I think this is an especially big issue given the history of organizations having a lower bar for giving money (essentially sinecures) to members of the Bay Area community.


oliverbramford @ 2019-04-10T14:28 (+14)

Would you be able to provide any further information regarding the reasons for not recommending the proposal I submitted for an 'X-Risk Project Database'? Ask: $12,375 for user research, setup, and feature development over 6 months.

Project summary:

Create a database of x-risk professionals and their work, starting with existing AI safety/x-risk projects at leading orgs, to improve coordination within the field.

The x-risk field and subfields are globally distributed and growing rapidly, yet x-risk professionals still have no simple way to find out about each other’s current work and capabilities. This results in missed opportunities for prioritisation, feedback and collaboration, thus retarding progress. To improve visibility and coordination within the x-risk field, and to expedite exceptional work, we will create a searchable database of leading x-risk professionals, organisations and their current work.

Application details

p.s. applause for the extensive explanations of grant recommendations!!

cstx @ 2019-04-10T21:20 (+9)

This database from Issa Rice seems relevant to your proposal: https://aiwatch.issarice.com

Habryka @ 2019-05-16T03:20 (+7)

Sorry for the long delay on this. I am still planning to get back to you; some other things ended up taking up all of my LTF-Fund-allocated time, but those are now resolved, so I should be able to write up my thoughts soon.

riceissa @ 2019-08-25T21:04 (+1)

Hi Oliver, are you still planning to reply to this? (I'm not involved with this project, but I was curious to hear your feedback on it.)

Habryka @ 2019-08-26T16:23 (+2)

Yes! Due to a bunch of other LTFF things taking up my time I was planning to post my reply to this around the same time as the next round of grant announcements.

Habryka @ 2019-04-10T19:27 (+6)

I will get back to you, but it will probably be a few days. It seems fairer to first send feedback to the people I said I would send private feedback to, and then come back to the public feedback requests.

Milan_Griffes @ 2019-04-08T23:19 (+12)

Many of these grants seem to fall under the remit of the EA Meta Fund.

Could you expand more about how Long Term Future Fund grant-making is differentiated from Meta Fund grant-making?

Habryka @ 2019-04-08T23:32 (+8)

I made a short comment here about this, though obviously there is more to be said on this topic.

Milan_Griffes @ 2019-04-09T00:36 (+10)

Ozzie's grant: How is Foretold differentiated from Ought's Mosaic? From a quick look, they appear to be attacking a similar problem-space.

Does Ozzie have a go-to-market strategy? Seems like a lot of what he's doing would be very profitable & desired by many companies, if executed well.

Relatedly, why not take an equity stake in Ozzie's project, rather than structure this as a donation?

Habryka @ 2019-04-09T02:23 (+11)

Ozzie was the main developer behind the initial version of Mosaic, so I do expect some of the overlap to be Ozzie's influence.

I don't think I want Ozzie to commit at this point to being a for-profit entity with equity to be given out. It might turn out that the technology he is developing is best built on a non-profit basis. It also seems legally quite confusing/difficult to have the LTF-Fund own a stake in someone else's organization (I don't even know whether that's compatible with being a 501(c)(3)).

Ozzie is better placed to talk about his own go-to-market strategy than I am to guess at his intentions. I obviously have my own models of what I expect Ozzie to do, but in this case it seems better for Ozzie to answer that question.

Milan_Griffes @ 2019-04-09T05:24 (+2)
Ozzie was the main developer behind the initial version of Mosaic, so I do expect some of the overlap to be Ozzie's influence.

Right, I'm wondering about the bull case for simultaneously funding the two projects.

Habryka @ 2019-04-09T19:16 (+4)

Hmm, since I was relatively uninvolved with the Ought grant, I have some difficulty giving a concrete answer to that. From an all-things-considered view (given that Matt was interested in funding it), I think both grants are likely worth funding, and I expect the two organizations to coordinate in a way that mostly avoids unnecessary competition and duplicated effort.

Milan_Griffes @ 2019-04-09T19:28 (+2)
I expect the two organizations to coordinate in a way that mostly avoids unnecessary competition and duplicated effort.

Curious to hear more about what that will look like, though probably Ozzie's better positioned to reply.

Ozzie Gooen @ 2019-04-09T21:58 (+42)

Happy to chime in here.

I've previously worked at Ought for a few months and helped them make Mosaic, and I've been talking a decent amount with different Ought team members. We share a broad interest in how to break down reasoning, but are executing on it in very different ways. Mosaic works by breaking down a very large space of problems into tiny text subproblems. I'm working on a prediction application, which works by having people predict probabilities of future events and separately share information about their thinking. I think that essentially no one who sees both applications would consider them to be equivalent.

I'm doing very similar work as part of my research at FHI. The plan is not to attempt to become a business or monetize in the foreseeable future. I've been around the startup scene a lot before, and have come to better understand the limitations of raising money through business. In almost all cases, from what I can tell, experimental and charitable desires get pushed aside in order to optimize for profits. I've considered this with Guesstimate: originally I thought I could make a business that would also be useful to EAs, but later realized that it would be exceptionally difficult. Most realistic business strategies for sales looked like domain-specific tools, for instance, a real-estate-specific distribution application, which would sell a lot more but be quite useless for EA causes.

In this case, my first and main priority is to experiment/innovate in the space. I think that doing this in the research setting at this point will be the best way to ensure that the work stays focussed on the long-term benefits.

Hypothetically, if in a few years we wind up with something that was optimized for EA uses, but happens to be easily monetizable with low effort, then that could be a useful way of helping to fund things. However, I really don't want to commit to a specific path at this stage, which is really early on.

Milan_Griffes @ 2019-04-09T22:51 (+4)

Awesome, thanks for jumping in!

Most realistic business strategies for sales looked like domain-specific tools, for instance, a real-estate-specific distribution application, which would sell a lot more but be quite useless for EA causes.

What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

Raemon @ 2019-04-09T22:58 (+6)
What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

Is there a particular reason to assume that'd be a good idea?

Milan_Griffes @ 2019-04-09T23:05 (+4)
Raemon @ 2019-04-10T02:35 (+8)

I'm familiar with good things coming out of those places, but not sure why they're the appropriate lens in this case.

Popping back to this:

What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

This makes more sense to me when you actually have a company large enough to theoretically have multiple arms. AFAICT there are no arms here; there are just 1-3 people working on a thing. And I'd expect that getting to the point where you could have multiple arms requires at least 5-10 years of work.

What's the good thing that happens if Ozzie first builds a profitable company and only later works in a research arm of that company, that wouldn't happen if he just became "the research arm of that company" right now?

Milan_Griffes @ 2019-04-10T02:45 (+5)
What's the good thing that happens if Ozzie first builds a profitable company and only later works in a research arm of that company, that wouldn't happen if he just became "the research arm of that company" right now?
  • More social capital & prestige
  • More robust revenue situation
  • More freedom to act opportunistically to support other projects that you care about (as a small-scale angel funder; as a mentor)
  • (probably) More learning about how organizations work from setting up an org with diverse stakeholders

Case study: Matt Fallshaw & how Bellroy enabled TrikeApps, which supported a lot of good stuff (e.g. LessWrong 1.0). But Bellroy just sells (nice) wallets.

Milan_Griffes @ 2019-04-10T02:49 (+2)

Also to clarify: I'm imagining Ozzie + co-founders build a company, and Ozzie dedicates a fair bit of his capacity to research all along the way.

Raemon @ 2019-04-10T03:19 (+10)

Part of my thinking here is that this would be a mistake: focus and attention are some of the most valuable things, and splitting your focus is generally not good.

Milan_Griffes @ 2019-04-10T04:08 (+7)

This seems highly person-dependent: definitely true for some people, definitely not true for others.

Also, effective administrators & executives tend to multitask heavily, e.g. Robert Moses.

Ozzie Gooen @ 2019-04-14T14:11 (+11)

I feel pretty flattered to be even vaguely categorized with all of those folks, but I think it's pretty unlikely to work out that well (it almost always is). If I were pretty sure (>30%) I could make a company as large as Apple/Twitter/Tesla/YC, I'd be pretty happy to go that route.

I've chatted with hundreds of entrepreneurs and have, arguably, tried this twice before. That said, if it later looks like going the more direct business route would be better for total expected value, I could definitely be open to changing course.

Ozzie Gooen @ 2019-04-14T14:18 (+7)

Another thing to note: I'm optimizing on a time-horizon of around 10-30 years. Making a business first could easily take 6-20 years.

Milan_Griffes @ 2019-04-14T16:38 (+3)
I'm optimizing on a time-horizon of around 10-30 years.

Does this flow from your AGI timeline estimate?

Ozzie Gooen @ 2019-04-17T17:05 (+6)

Basically, though it's a bit extra short when weighted for what we can change. Transformative narrow AI or other transformative technologies could also apply.

Milan_Griffes @ 2019-04-08T21:16 (+10)

Could you also publish a list of runners-up (i.e. applicants that were closely considered but didn't make the cut)?

I think that'd be helpful as the community thinks through the decision-making process here.

Habryka @ 2019-04-08T21:23 (+26)

I currently don't feel comfortable publishing who applied and did not receive a grant without first checking in with the applicants. I can imagine that in future rounds there would be a checkbox that applicants can check to indicate that they are comfortable with their application being shared publicly even if they do not receive a grant.

Milan_Griffes @ 2019-04-08T23:13 (+4)

Got it.

Do you plan to check with the applicants from this round? Seems quick to do, and could surface a lot of helpful information.

Habryka @ 2019-04-08T23:27 (+7)

I have told all applicants that I would be interested in giving public feedback on their application, and will do so if they comment on this thread.

Milan_Griffes @ 2019-04-09T00:14 (+4)

Huh, I submitted two applications but didn't see your note re: public feedback. Perhaps you missed me?

Habryka @ 2019-04-09T00:19 (+7)

I sent you a different email which indicated that I was already planning on sending you feedback directly within the next two weeks. The email which will include that feedback will then also include a request to share it publicly.

There was a small group of people (~7) for whom I had a sense that direct feedback would be particularly valuable, and you were part of that group, so I sent them a different email indicating that I am going to give them additional feedback in any case. It was difficult to fit in a sentence also encouraging them to ask for feedback publicly, since I had already told them I would send them feedback.

Milan_Griffes @ 2019-04-09T00:20 (+3)

Got it, thanks.

baleparalysis @ 2019-04-08T20:10 (+6)

Skeptical about the cost-effectiveness of several of these.

Ought - 50k. "Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future." Is that really your aim now, being a grant dispenser for random AI companies? What happened to saving lives?

"Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant." If they have enough money and this is a token grant, why is it 50k? Why not reduce to 15-20k and spend the rest on something else?

Metaculus - 70k, Ozzie Gooen - 70k, Jacob Lagerros - 27k. These are small companies that need funding; why are you acting as grant-givers here rather than as special interest investors?

Robert Miles, video content on AI alignment - 39k. Isn't this something you guys and/or MIRI should be doing, and could do quickly, for a lot less money, without having to trust that someone else will do it well enough?

Fanfiction handouts - 28k. What's the cost breakdown here? And do you really think this will make you be taken more seriously? If you want to embrace this fanfic as a major propaganda tool, it certainly makes sense to get it thoroughly edited, especially before doing an expensive print run.

CFAR - 150k(!). If they're relying on grants like this to survive, you should absolutely insist that they downsize their staff. This funding definitely shouldn't be unrestricted.

Connor Flexman - 20k. "Techniques to facilitate skill transfer between experts in different domains" is very vague, as is "significant probability that this grant can help Connor develop into an excellent generalist researcher". I would define this grant much more concretely before giving it.

Lauren Lee - 20k. This is ridiculous, I'm sure she's a great person but please don't use the gift you received to provide sinecures to people "in the community".

Nikhil Kunapul's research - 30k and Lucius Caviola's postdoc - 50k. I know you guys probably want to go in a think-tanky direction but I'm still skeptical.

The large gift you received should be used to expand the influence of EA as an entity, not as a one-off. I think you should reconsider grants vs investment when dealing with small companies, the CFAR grant also concerns me, and of course in general I support de-emphasizing AI risk in favor of actual charity.

Peter_Hurford @ 2019-04-08T21:15 (+57)

This comment strikes me as quite uncharitable, but asks really good questions that I do think would be good to see more detail on.

Habryka @ 2019-04-08T23:56 (+17)

I would be interested in other people creating new top-level comments with individual concerns or questions. I think I have difficulty responding to this top-level comment, and expect that other people stating their questions independently will overall result in better discussion.

aarongertler @ 2019-04-08T22:38 (+37)
The large gift you received should be used to expand the influence of EA as an entity, not as a one-off [...] and of course in general I support de-emphasizing AI risk in favor of actual charity.

While I'm not involved in EA Funds donation processing or grantmaking decisions, I'd guess that anyone making a large gift to the Far Future Fund does, in fact, support emphasizing AI risk, and considers funding this branch of scientific research to be "actual charity".

It could make sense for people with certain worldviews to recommend that people not donate to the fund for many reasons, but this particular criticism seems odd in context, since supporting AI risk work is one of the fund's explicit purposes.

--

I work for CEA, but these views are my own.

baleparalysis @ 2019-04-08T23:29 (+1)

If the donation was specifically earmarked for AI risk, that aside isn't relevant, but most of the comment still applies. Otherwise, AI risk is certainly not the only long-term problem.

Habryka @ 2019-04-08T23:35 (+3)

I was not informed of any earmarking, so I don't think there were any stipulations around that donation.

aarongertler @ 2019-04-08T22:41 (+34)
Robert Miles, video content on AI alignment - 39k. Isn't this something you guys and/or MIRI should be doing, and could do quickly, for a lot less money, without having to trust that someone else will do it well enough?

Creating good video scripts is a rare skill. So is being able to explain things on a video in a way many viewers find compelling. And a large audience of active viewers is a rare resource (one Miles already has through his previous work).

I share some of your questions and concerns about other grants here, but in this case, I think it makes a lot of sense to outsource this tricky task, which most organizations do badly, to someone with a track record of doing it well.

--

I work for CEA, but these views are my own.

Ozzie Gooen @ 2019-04-09T22:04 (+49)

I honestly think this was one of the more obvious ones on the list. 39k for one full year of work is a bit of a steal, especially for someone who already has the mathematical background, video production skills, and audience. I imagine if CEA were to try to recreate that it would have a pretty hard time, plus the recruitment would be quite a challenge.

Cullen_OKeefe @ 2019-04-10T21:06 (+27)

I second this analysis and agree that this was a great grant. I was considering donating to Miles' Patreon but was glad to see the Fund step in to do so instead. It's more tax-efficient to do it that way. Miles is a credible, entertaining, informative source on AI Safety and could be a real asset to beginners in the field. I've introduced people to AIS using his videos.

RyanCarey @ 2019-04-08T20:34 (+32)

It would be really useful if this was split up into separate comments that could be upvoted/downvoted separately.

Milan_Griffes @ 2019-04-08T20:45 (+9)

+1. I have pretty different thoughts about many of the points you raise.

Jan_Kulveit @ 2019-04-09T00:02 (+6)

I don't think the karma/voting system should be given that much attention or used as highly visible feedback on project funding.

Habryka @ 2019-04-09T00:10 (+23)

I do think that it would help independently of that by allowing more focused discussion on individual issues.

Jan_Kulveit @ 2019-04-09T00:24 (+6)

To clarify: I agree with the benefits of splitting the discussion into threads for readability, but I was unenthusiastic about voting being the motivation.

Milan_Griffes @ 2019-04-09T00:29 (+5)

{Made this a top-level comment at Oli's request.}

Habryka @ 2019-04-09T01:40 (+4)

(Will reply to this if you make it a top-level comment, like the others)

Milan_Griffes @ 2019-04-09T05:22 (+4)

K, it's now top-level.

Milan_Griffes @ 2019-04-09T00:22 (+3)

Ought: why provide $50,000 to Ought rather than ~$15,000, given that they're not funding constrained?

Habryka @ 2019-04-09T00:30 (+3)

(Top-level seems better, but will reply here anyway)

The Ought grant was one of the grants I was least involved in, so I can't speak super much to the motivation behind that one. I think you will want to get Matt Wage's thoughts on that.

Milan_Griffes @ 2019-04-09T01:39 (+2)

Cool, do you know if he's reading & reacting to this thread?

Habryka @ 2019-04-09T02:35 (+3)

Don't know. My guess is he will probably read it, but I don't know whether he will have the time to respond to comments.

Ozzie Gooen @ 2019-04-09T22:09 (+24)

"Metaculus - 70k, Ozzie Gooen - 70k, Jacob Lagerros - 27k. These are small companies that need funding; why are you acting as grant-givers here rather than as special interest investors?"

I'm not sure why you think all of these are companies. Metaculus is a company, but the other two aren't.

Personally, I think it would be pretty neat if this group (or a similar one) were to later set up the legal infrastructure to properly invest in groups where that would make sense. But this would take quite a bit of time (both fixed and marginal costs), and with only a few such groups per year (one, in this case, I believe), it is probably not worth it.

Cullen_OKeefe @ 2019-04-10T21:01 (+5)

I'd like to also echo others' comments thanking the team for responding and engaging with questioning of these decisions.

A question I have as a consistent donor to the fund: under which circumstances, if any, would the team consider regranting to, e.g., the EA Meta Fund? Under some facts (e.g., very few good LTF-specific funding opportunities but many good meta/EA Community funding opportunities), couldn't that fund do more good for the LTF than projects more classically appropriate to the LTF Fund?* Or would you always consider meta causes as potential recipients of the LTF Fund, and therefore see no value regranting since the Meta Fund would not be in a better position than you to meet such requests?

I ask because, though I still think these grants have merit, I can also imagine a future in which donations to the Meta Fund would have more value for the LTF than donations to the LTF Fund. But I imagine the LTF Fund could be better positioned than me to make that judgment, and I would prefer it to do so in my stead. But if the LTF Fund would not consider regranting to the next-best fund, then I would have to scrutinize grants more closely to see which fund is creating more value for the LTF, and this defeats the purpose of the LTF Fund.

*The same might be said of the other Funds too, but Meta seems like the next best for the LTF specifically IMO.

Milan_Griffes @ 2019-04-09T01:31 (+5)

Did the EA Hotel apply?

If so, are they open to the reasoning about why they didn't get a grant being made public?

Habryka @ 2019-04-09T02:34 (+20)

I don't feel comfortable disclosing who has applied and who hasn't applied without the relevant person's permission.

Greg_Colbourn @ 2019-04-09T09:02 (+13)

We applied. Judging by the email I received, I think we are also part of the small group of ~7 mentioned here. Awaiting the follow-up email.