Update from Open Philanthropy’s Longtermist EA Movement-Building team
By ClaireZabel @ 2022-03-10T19:37 (+200)
Summary
- Open Philanthropy’s Longtermist EA Movement-Building team aims to grow and support the pool of people who are well-positioned to work on longtermist priority projects.
- This post outlines our recent work and strategic updates as a team, and isn’t meant to represent the work or views of other teams at Open Phil.
- We think this is a very promising space, and we’re hiring for several roles so that we can move faster and deploy more funding.
- Over time, we have become more confident in the value of the grants we’ve already made, since our grantees mostly seem to be bringing in promising people to work on longtermist projects at a good rate.
- This has led us to begin spending our time differently:
- Less time evaluating opportunities (since we’ve come to think that most of the things we want to fund will probably be above our “bar” for impact)
- More time trying to generate additional opportunities (e.g. by creating different programs where people can apply for funding, like our scholarships or course development grants).
- More time trying to better understand the field and share our findings.
- We’ve also come to prioritize “time-effectiveness” over “cost-effectiveness” in most cases (that is, aiming to achieve our goals while conserving EA time/labor, even if that means spending more money).
- I think we should have made those changes faster than we did, and I see it as a mistake that I didn’t (a) hire more quickly and (b) advocate more forcefully for certain opportunities that were promising but difficult to evaluate.
- For our team’s future grantmaking, I’m concerned about avoiding measurability bias (prioritizing grants that come with impressive numbers/credentials attached) and certain forms of motivated reasoning.
- There are many kinds of projects we hope to fund in the future that could allow us to sharply scale up our total grantmaking.
For much more detail on all of this, see the rest of the post.
Introduction
This post is a report and update on the Open Philanthropy Longtermist Effective Altruism Movement-Building team’s thinking and goals. It’s written by me, Claire, and mostly represents my perspective. I’m writing this in my role as an Open Phil staff member, but I take sole responsibility for the angsty commentary near the bottom.
Our team currently consists of me, Asya Bergal, Bastian Stern, and Eli Rose. We are supported by Open Phil’s “longtermist budget” (funding to support projects motivated by the longtermist view), but unlike the other longtermist cause areas, we aren’t aiming to make progress on longtermist priorities directly. Instead, our goal is to grow and support the pool of people motivated and well-positioned to work on longtermist priority projects (e.g. reducing existential risk and aiming to improve the far future). [1]
I think our team’s grantmaking has high expected value, because (1) in my experience, most of the relevant object-level longtermist projects are bottlenecked by the dearth of aligned people who are good fits (so our goals are aimed at a core problem), (2) there’s a lot of funding to direct (which is bottlenecked by the number of grantmakers working to direct it), and (3) we have a reasonably high number of potential grantmaking projects we are working as fast as we can to implement (i.e. there’s a feeling of traction) and could implement faster if we had more capacity on the team. Not coincidentally, we are currently hiring for several roles.
What’s happened so far
I took over this area from Nick Beckstead (who was working on it part-time) in early 2019. Bastian started working with me and Eli joined in 2020, and Asya joined in 2021.
I think a lot has changed since then, and a lot of important changes are ongoing. We committed funds equaling ~$17M in the area in 2019, ~$26M in 2020, and ~$60M in 2021. So far in 2022, we’ve already committed >$65M (though some key aspects of the relevant grants are still TBD[2]), so we are on track to continue to vastly increase our giving. I hope and believe that if we hire more strong grantmakers, we can double giving in this area several more times (i.e., there’s sufficient funder interest, and there are or will be worthy opportunities).
As I’m going to discuss a bit below, I’ve shifted further away from focusing on “money moved” figures, and I think they can be misleading proxies for impact. Even among funding that is “above the bar” from a financial perspective[3], I think the top decile is at least an order of magnitude more impactful (per dollar) than the bottom decile of the grantmaking we’re doing. In other words, seeing that more money has been moved doesn’t tell you much unless you have a sense of where it falls along the cost-effectiveness spectrum (or time-effectiveness spectrum).
Right now, the time of aligned longtermists working on high-priority projects seems to be the scarcer resource and perhaps the more useful metric to focus on. Still, “money moved” is one of the easier figures to report, and I reported it above because I think it mostly tracks the more meaningful but less measurable growth in projects and funding opportunities my team is working with.
Over the last few years, my thinking about my role and the role of other longtermist grantmakers has shifted significantly. In the past, I spent a lot more time working on the question: “How can I tell if a funding opportunity in the longtermist meta space meets the bar?” Nowadays, we spend less time on evaluation and more time creating new funding opportunities to achieve our main goal (growing and supporting people who help with longtermist priority projects).[4]
There were a few reasons for this change:
- Over time, I got to know and understand many of the relevant grantees more, and developed a better sense of how they were working, what their core competencies were, and what their own research and metrics were suggesting about their impact. The fifth time you evaluate a grantee’s work is likely to teach you much less than the first time.
- Research we did (e.g. this and some unpublished analysis) suggested that our previous grants were mostly succeeding at recruiting people we thought were promising, at reasonable rates.
- There’s a lot of nuance here, but basically, enough people (mostly doing object-level work our advisors think is promising in longtermist priority areas) reported, including when we didn’t prompt them by mentioning specific projects, that the projects we are funding more or less helped them significantly on their path to their current work.
- It also suggested to me that high-quality object-level work can be as effective at achieving “meta” goals as meta work for a variety of reasons. Such work often opens up exciting “surface area” for people who are good fits (e.g. one conceptual insight can create opportunities for valuable research on several better-scoped sub-questions, a new org aimed at a key goal can often create roles for people who are ready to contribute but not be founders). It also demonstrates at a gut level that progress is plausible, and it showcases that there are talented teams doing this important work (and that working with them might be fun and a valuable learning experience).
- Tangentially, I think a lot of community-builders in EA-land underestimate the importance of engaging with object-level EA causes themselves, in terms of their ability to in turn do successful outreach to people who are good fits for working in those causes. In my experience, it’s hard to convince someone to go into a cause area when you don’t really understand the cause area or why it’s important (and you risk potentially putting them off by giving them clumsy or inaccurate explanations). And on the other hand, it’s really helpful for one’s own intellectual development to try to understand a few different causes, and the different dynamics they face trying to solve core problems.
- The amount of longtermist-motivated funding available rose substantially
- As Ben Todd at 80,000 Hours noted, EA-motivated funding available has risen vastly in the last few years. (Note that only some of this funding will go to longtermist projects, though I expect it to be a substantial fraction).
- Also, based on conversations with some other funders and their representatives, I strongly suspect that we could mobilize more funding than is currently clearly aimed at longtermist goals if there were clearer funding gaps than the ones that exist and less of an apparent funding overhang.
Metrics of impact
The above points led me to think that, going forward, grants of the kind we had been making would likely be substantially “above the (new) bar”.
That updated my views in several ways:
- It seemed much less likely that additional time spent evaluating those opportunities more thoroughly would lead to changes in our decision to fund them.
- It seemed more likely that less cost-effective-seeming opportunities we hadn’t previously been exploring would “meet the bar”.
- And if more opportunities were “above the bar” and there were more funding, it seemed more plausible that the bottleneck for distributing funding as well as possible would be grantmaker time available for opening up opportunities.
- It generally seemed more helpful to spend time doing various forms of research and thinking that would help people identify and aim at impactful and neglected activities that might be strong fits for them.
So, we’re shifting to prioritize our and our grantees’ time more highly. And we’ve been creating different kinds of open calls[4] for people to request different kinds of fairly short-term support[5] (in contrast to grants where we support existing organizations). For these, we’ve started thinking in terms of how much quality-weighted longtermist output we think they’ll produce per hour we put into them, rather than focusing primarily on output per dollar. (In an ideal world, we’d have a conversion factor we trusted between longtermist time and dollars and be able to get an aggregate longtermist resource cost estimate; this is more of a heuristic about which factor will tend to dominate for the kinds of decisions we’re making, given the situation we find ourselves in).
On the one hand, I think giving in this category (the short-term support, including for less EA-engaged individuals) tends to be less impactful per dollar compared to many other outreach activities aimed at less EA-engaged people.[6] But, I think it can have more positive impact per hour of EA (grantmaker and grantee) labor used.
For example, when we fund 80,000 Hours, we (amongst other activities) support their full-time advisors to advise interested people about how to have more impactful careers. With our scholarship programs, we’re also trying to cause people to spend more time on more impactful activities. But rather than do this via the 80k advisors, our scholarship programs use money “directly” (without much intermediating EA labor) to try to make impactful careers more accessible and attractive. In general, we think we get less impact per dollar from interventions that consume money “directly” like this. Since EA labor is the scarcer resource in many contexts, these types of interventions can make sense for grantmakers to prioritize.
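As a purely illustrative sketch of the distinction (the numbers below are hypothetical and are not estimates for 80,000 Hours, our scholarship programs, or any real grant), a back-of-the-envelope comparison of the two framings might look like this:

```python
# Hypothetical back-of-the-envelope comparison of "cost-effectiveness"
# (career shifts per dollar) vs. "time-effectiveness" (career shifts per
# hour of dedicated EA/grantmaker labor). All figures are made up for
# illustration only.

programs = {
    "advising-style grant": {
        "dollars": 1_000_000,   # funding from the grantmaker
        "ea_hours": 5_000,      # advisor + grantmaker labor consumed
        "career_shifts": 100,   # valuable career changes produced
    },
    "scholarship-style program": {
        "dollars": 3_000_000,   # more money spent "directly"
        "ea_hours": 1_000,      # much less intermediating EA labor
        "career_shifts": 100,
    },
}

for name, p in programs.items():
    per_dollar = p["career_shifts"] / p["dollars"]
    per_hour = p["career_shifts"] / p["ea_hours"]
    print(f"{name}: {per_dollar:.2e} shifts per dollar, "
          f"{per_hour:.3f} shifts per EA-hour")

# With these made-up numbers, the advising-style grant wins on shifts per
# dollar, while the scholarship-style program wins on shifts per hour of
# scarce EA labor. That is the sense in which the two metrics can rank
# the same pair of options differently.
```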
I think it’s good for people starting projects of various kinds to think through not just monetary costs, but also the amount of aligned EA labor required to make a project work well. However, I expect most important longtermist projects to consume a ton of EA labor (including high-opportunity-cost labor), and I’m worried many newer EAs are already too hesitant to ask for support and advice because of personal and professional underconfidence, so it’s a confusing balance to strike.
Other changes that seem important to me
I’m not going to try to justify these now, just share my impressions sans evidence or explanation.
- It seems like, relative to a few years ago, the rate of new meta projects spinning up has increased substantially. I’m extremely interested in whether early indicators of EA/longtermist community growth start to pick up again in the next year or two in response (and I’ll be a lot more pessimistic about the value of much of our work if that doesn’t happen).
- My sense is that, relative to when this post was written, it’s substantially easier to get an EA job, and in fact there tends to be substantial competition over the most promising-seeming hires (though there are still many more applicants than jobs). This is probably healthier for the pool of people who want to work at EA organizations, though it’s also a potentially worrying indicator that the number of projects (and resulting labor needs) is growing faster than the pool of people who want to join those projects (despite many of the jobs now offering meaningfully higher salaries and better benefits, and the field writ large having more roles and thus higher effective job security).
Our mistakes
By which I mostly mean “my mistakes”, given the relative recency of my teammates joining and getting up to speed, and my responsibility for final calls about team strategy and direction.
I think:
- I should have hired more people, more quickly. And, had a slightly lower bar for hiring in terms of my confidence that someone would be a good fit for the work, with corresponding greater readiness to part ways if it wasn’t a good fit.
- I should have been faster to reorient towards valuing time and creating opportunities, relative to evaluating existing grants and their impact per dollar, and more intense about communicating to grantees and others about this change. Relatedly, I probably asked for too much information from grantees for too long, and I wish I’d been more comfortable advocating forcefully for high-EV bets even when evidence was very sparse.
- I generally tend towards spreading myself and my team too thin and taking on too many projects, rather than either critically evaluating which projects are most important and focusing on them (including potentially, just focusing on one to the temporary exclusion of all else), or implementing meta-level changes that make it so we can take on more projects without becoming overstretched (like hiring or streamlining our processes, which we’ve also been doing). When I’ve tried to make these meta-level changes, I’ve generally felt like the time was very well spent. I’ve often found people on my team unable to put a lot of focused full-time effort into a project that might have deserved it because of other/ongoing responsibilities, and I think that’s caused me to avoid considering particularly time-consuming projects.
On a meta level, I think most of my mistakes revolve around being unnecessarily slow to reorient around a change. I’m trying to address that pattern: when I notice myself thinking that we might be erring slightly in some direction, I try to more quickly evaluate the hypothesis that we might actually be erring substantially and that fixing it should be a top priority.
The other, weaker pattern I noticed was being bottlenecked on emotional pain tolerance and reputational concerns (e.g. related to advocating for very uncertain grants for which I have little evidence when they have a reasonable probability of going poorly, or making riskier hiring decisions which might end in mutual unhappiness).
Mistakes I’m worried we will make
- Our area is rife with opportunities to fall prey to measurability bias, either directly, or mediated by status gradients where we’re socially rewarded for reporting measurable, impressive results. I think that if we aren’t careful, that could cause us to focus on ambitious grantmaking projects that spend a lot of money and/or involve impressive-sounding figures (e.g. funding very large numbers of people, sponsoring very popular content, or working with prestigious/high-profile people). That could come at the cost of the kind of work that seems most promising to us when we’re at our most reflective (which tends slightly more towards work aimed at helping exceptionally promising people become more involved and resolve key cruxes in a more targeted way), or lead us to do things that end up being net negative. It’s complicated, though, because I think all of the kinds of projects above have the potential to be really impactful.
- The above concern seems especially relevant as we hire more people; I think I’ll have to rely more on trust and metrics relative to really developing my own inside view about particular projects.
- I find myself flinching away from pessimistic models of how transformative artificial intelligence (TAI) could unfold, e.g. along the lines of what I hear from people at MIRI about very high odds of unaligned TAI being developed by default and most or all AI alignment agendas having little hope of success, with catastrophic results. Large parts of those models tend to make sense to me when I try to evaluate the arguments for myself, but they also frighten me. It seems like in those worlds, it’s harder for people in my position to have a big positive impact, which is demotivating. I’m worried about motivated reasoning causing me to underestimate how likely this kind of situation is.
- I think it’s often useful for longtermists to act as though they are in plausible-seeming worlds where they can have an unusually big impact and ignore worlds they can’t affect very much, because the expected value of one’s actions is likely dominated by actions taken in worlds where you’re well-positioned to have a big impact. But, I’m still concerned about (a) missing hard-but-possible routes to substantial positive impact in these worlds and (b) accidentally having substantially negative impact in those worlds, if it’s easy to have negative impact but hard to have positive impact.
- I (and probably other grantmakers) can end up spending lots of time on the most borderline fund vs. not-fund decisions. Deciding whether to recommend funding is often the most clear and salient decision a grantmaker can make, and grants that are near the bar are the most challenging to decide on. But, those decisions generally have pretty low stakes, somewhat by definition (if you’re correct that the EV of the grant is right around the bar for funding, the decision to fund or not will lead to only small gains or losses in expectation[7]). I think it’s better to think about options besides funding or not funding — like creating new programs, seeing whether you can somehow help a promising grantee, or sharing information and insights with other funders or grantees — but it’s a bit less intuitive to do so.
Looking forward
Over the next few years, I expect us to spend more time on projects engaging with high-school students (largely for the reasons listed here) as well as working more directly with community-building efforts aimed at undergraduates.
If we found the right people (we’re hiring!), I could also imagine us spending tens of millions of dollars more on the following projects, which could easily end up seeming similarly cost-effective to our previous grantmaking:
- AI Safety-focused meta work, i.e. aiming specifically at causing more people who are good fits for AI safety research to work on it (via projects like EA Cambridge’s AGI Safety Fundamentals or supporting AI safety-focused groups at universities).
- Supporting the production of more excellent content on EA, longtermism, and transformative technology (e.g. books, web content, YouTube videos).
- Rationality-and-epistemics-focused community-building. Right now, I think the EA community is growing much faster than the rationalist community, even though a lot of the people I think are most impactful report being really helped by some rationalist-sphere materials and projects. Also, it seems like there are a lot of projects aimed at sharing EA-related content with newer EAs, but much less in the way of support and encouragement for practicing the thinking tools I believe are useful for maximizing one’s impact (e.g. making good expected-value and back-of-the-envelope calculations, gaining facility with probabilistic reasoning and fast Bayesian updating, identifying and mitigating one’s personal tendencies towards motivated or biased reasoning). I’m worried about a glut of newer EAs adopting EA beliefs but not being able to effectively evaluate and critique them, nor push the boundaries of EA thinking in truth-tracking directions.
- Trying to make EA ideas and discussion opportunities more accessible outside current EA hubs, especially outside the Anglophone West (e.g. via translating content and supporting groups at relevant universities). I think that in the English-speaking Western world, there are or soon will be somewhat diminishing returns to additional recruiting efforts in the most recruiting-saturated contexts; this doesn’t seem as true for other locations.
- Supporting marketing and advertising for high-quality content that discusses ideas important to EA or longtermist projects. I think this could be good because, while writing strong original content generally takes deep understanding of the relevant ideas (which is in short supply), this isn’t as much the case for spreading existing content (this relies more on funding and marketing skills).
- ^
Sometimes, our team supports projects that aren't directly aimed at these priorities, often because we think their value from a movement-building perspective is sufficiently high that it justifies supporting them (i.e. in those cases we might have different motives for supporting a project than the people who work on it have for working on it.)
- ^
Two caveats about this though:
- A relatively small fraction of the funding is for different regranting programs, and so has not in fact yet “bottomed out”, and will absorb more grantmaker labor before that happens (and might be reported by another entity as part of their money moved in the future).
- This is somewhat driven by unusually large outliers. However, grants that were previously outliers in terms of size are becoming more common. I’m left pretty uncertain about how much we should expect to give this year.
- ^
I.e. in expectation a better use of funding than the longtermist last dollar. See here for a discussion about last dollars in the global health and wellbeing space.
- ^
So far, this includes our RFP for outreach projects, course development program, early-career funding, and undergraduate scholarship, with more to come. The FTX Foundation Future Fund also currently has an open round.
- ^
I’d love to have a better name for this category; suggestions welcome.
- ^
There are also programs that support highly EA-engaged individuals, which I think can be really impactful per dollar and hour, but there’s a limited number of such people and so only so much financial support to provide.
- ^
Occasionally, spending more time can lead one to realize that a grant is actually really promising or really net negative, but I think that’s pretty rare.
Akash @ 2022-03-12T00:01 (+21)
Thank you for this write-up, Claire! I will put this in my "posts in which the author does a great job explaining their reasoning" folder.
I noticed that you focused on mistakes. I appreciate this, and I'm also curious about the opposite:
- What are some of the things that went especially well over the last few years? What decisions, accomplishments, or projects are you most proud of?
- If you look back in a year, and you feel really excited/proud of the work that your team has done, what are some things that come to mind? What would a 95th+ percentile outcome look like? (Maybe the answer is just "we did everything in the Looking Forward section", but I'm curious if some other things come to mind.)
ClaireZabel @ 2022-03-15T03:14 (+21)
Thanks Akash. I think you're right that we can learn as much from successes and well-chosen actions as mistakes, and also it's just good to celebrate victories. A few things I feel really pleased about (on vacation so mostly saying what comes to mind, not doing a deep dive):
- My sense is that our (published and unpublished) research has been useful for clarifying my picture of the meta space, and helpful to other organizations (and led to some changes I think are pretty promising, like increased focus on engaging high schoolers who are interested in longtermist-related ideas, and some orgs raising salaries), though I think some of that is still TBD and I wish I had a more comprehensive picture.
- We've funded a bunch of new initiatives I'm quite excited about, and I'm happy we were there to find worthy projects with funding needs, to encourage the founding of new projects in the space, and to support their growth. My best guess is that projects we fund will lead to substantial growth in the EA/longtermist community.
- When I look back at both my portfolio of grants made, and my anti-portfolio (grants explicitly considered but not made), I mostly feel very satisfied. As far as I can tell, there were far more false positives (grants we made that had meh results) than false negatives (grants I think we should have made but didn't), but roughly similar numbers of false-negatives-that-seem-like-big-misses and false-positives-that-were-actively-meaningfully-harmful (the sample size in both of those categories is pretty small).
- I like and respect everyone on my team, they are all sincerely aimed at the real goals we share, and I think they all bring different important focuses and strengths to the table.
If you look back in a year, and you feel really excited/proud of the work that your team has done, what are some things that come to mind? What would a 95th+ percentile outcome look like? (Maybe the answer is just "we did everything in the Looking Forward section", but I'm curious if some other things come to mind.)
A mixture of "not totally sure" and "don't want to do a full reveal" but the "Looking Forward" section above lists a bunch of components. In addition:
- We or other funders seize most of the remaining obvious-and-important-seeming opportunities for impactful giving (that I currently know of in our space) that are lying fallow.
- We complete a few pieces of research/analysis I think could give us a better sense of how overall-effective EA/LT "recruiting" work has been over the last few years and how it compares to more object-level work (and we do indeed get a better sense and disseminate it to people who will find it useful).
- We gather and vet more resources for offering non-financial support to grantees that want it (e.g. referrals for various kinds of legal advice, or executive and management coaching).
Miranda_Zhang @ 2022-03-11T02:39 (+17)
After your talk at the SERI Conference, I really enjoyed reading this more detailed write-up of your recent updates. I'd be keen to see an update on how the Longtermist EA Movement-Building team ends up trying to address the concerns you're worried about!
In particular, I share concerns around the possibility that grant evaluation could become increasingly affected by more visible signals like certain names or reputations. To me, this is a concern by default, given the small size of the EA community, but it also seems like more of a risk with longtermist causes or other projects where EA alignment seems especially important and it is cheaper to rely on reputation (rather than investigating whether an unknown applicant is sufficiently aligned).
ClaireZabel @ 2022-03-17T00:47 (+2)
Thanks Miranda, I agree these are things to watch really closely for.
weeatquince @ 2022-03-15T16:29 (+11)
Hi Claire,
Thank you for the write-up. I have a question I would love to hear your (and other people's) thoughts on. You said:
I should have hired more people, more quickly. And, had a slightly lower bar for hiring in terms of my confidence that someone would be a good fit for the work, with corresponding greater readiness to part ways if it wasn’t a good fit.
This is really interesting, as it goes against the general tone of advice I hear, which suggests being cautious about hiring. That said, I do feel at times that the EA community is perhaps more cautious and puts more effort into hiring than other places I have worked.
I wondered if you had any elaboration, such as: advice on how someone at an EA org can tell if they are being too cautious? When you felt you should have taken more risks? What things it is worth taking risks on and what things it is not worth taking risks on? How you plan to change what you do going forward?
No worries if there's nothing to add, but it would be helpful to hear (I am involved right now in hiring decisions at a few EA orgs).
ClaireZabel @ 2022-03-17T05:32 (+15)
So to start, that comment was quite specific to my team and situation, and I think historically we've been super cautious about hiring (my sense is, much more so than the average EA org, which in turn is more cautious than orgs in the next-most-specific reference class).
Among the most common and strongest pieces of advice I give grantees with inexperienced executive teams is to be careful about hiring (generally, more careful than I think they'd have been otherwise), and more broadly to recognize that differences in people's skills and interests lead to huge differences in their ability to produce high-quality versions of various relevant outputs. Often I find that new founders underestimate those differences and so e.g. underestimate how much a given product might decline in quality when handed from one staff member to a new one.
They'll say things like "oh, to learn [the answer to complicated question X] we'll have [random-seeming new person] research [question X]" in a way that feels totally insensitive to the fact that the question is difficult to answer, that it'd take even a skilled researcher in the relevant domain a lot of time and trouble, that they have no real plan to train the new person or evidence the new person is unusually gifted at the relevant kind of research, etc., and I think that dynamic is upstream of a lot of project failures I see. I.e. I think a lot of people have a kind of magical/non-gears-level view of hiring, where they sort of equate an activity being someone's job with that activity being carried out adequately and in a timely fashion, which seems like a real bad assumption with a lot of the projects in EA-land.
But yeah, I think we were too cautious nonetheless.
Cases where hiring more aggressively seems relatively better:
- The upside is large (an important thing is bottlenecked on person-power, and that bottleneck is otherwise excessively challenging to overcome)
- The work you need done is:
- Well-scoped
- Easy to evaluate
- Something people train in effectively outside your org
- Trainable
- Subject to short feedback loops
- You are:
- An experienced manager
- Proficient with the work in question
- Emotionally ready to fire an employee if that seems best
- This is taking place in a country where it's legally and culturally easier to fire people
- Your team culture and morale is such that a difficult few months with someone who isn't working out is unlikely to deal permanent damage.
weeatquince @ 2022-03-17T08:48 (+2)
Really helpful. Good to get this broader context. Thank you!!
James Ozden @ 2022-03-11T01:53 (+11)
Thanks for writing this up, I found the transparency around your perceived mistakes and future uncertainty incredibly refreshing and inspiring!
ClaireZabel @ 2022-03-12T01:42 (+2)
Thanks for the kind words, James!
michelle_ma @ 2022-03-11T02:52 (+10)
Thanks for posting! Your discussion of mistakes and rationality-and-epistemics-focused community-building reminded me of this post, particularly Will's comment about funding/supporting a red team to criticize EA/longtermism. Is Open Phil open to doing something like this?
ClaireZabel @ 2022-03-12T01:41 (+12)
Thoughtful and well-informed criticism is really useful, and I'd be delighted for us to support it; criticism that successfully changes minds and points to important errors is IMO among the most impactful kinds of writing.
In general, I think we'd evaluate it similarly to other kinds of grant proposals, trying to gauge how relevant the proposal is to the cause area and how good a fit the team is to doing useful work. In this case, I think part of being a good fit for the work is having a deep understanding of EA/longtermism, having really strong epistemics, and buying into the high-level goal of doing as much good as possible.
MaxRa @ 2022-03-11T17:30 (+8)
Thanks for sharing your thoughts so transparently! :)
I'm particularly interested in this point:
Sometimes, our team supports projects that are directly aimed at [making object level progress], often because we think their value from a movement-building perspective is sufficiently high that it justifies supporting them (i.e. in those cases we might have different motives for supporting a project than the people who work on it have for working on it.)
a) I have the impression that we urgently need more smart people working on longtermist issues, particularly AI safety, governance and strategy
b) What do you think about the idea of encouraging longtermist researchers in general, and AI researchers in particular, to see their impact more than they currently do in terms of growing a field vs. making direct object-level progress?
- as you say, both direct progress and getting more people on board are far from mutually exclusive, but I'd be surprised if it didn't change what people are working on if we deliberately prioritized the latter more
- concrete examples: we might encourage them to do more things like contributing to course curricula, networking with and doing outreach to top CS departments, organizing workshops, and developing prizes and benchmarks
Ben Pace @ 2022-03-10T20:56 (+8)
Great post.
I didn't quite parse this paragraph:
For example, when we fund e.g. 80,000 Hours, we (amongst other activities) support their full-time advisors to advise interested people about how to have more impactful careers. With our scholarship programs, we’re also trying to cause people to spend more time on more impactful activities. But rather than do this via the 80k advisors, our scholarship programs use money “directly” (without much intermediating EA labor) to try to make impactful careers more accessible and attractive. In general, we think we get less impact per dollar from interventions that consume money “directly” like this. Since EA labor is the scarcer resource in many contexts, these types of interventions can make sense for grantmakers to prioritize.
I think you're saying that your scholarships seem good to you, and that this has something to do with the value of your time versus the value of 80k staff time, but I'm not quite sure how you're connecting these variables, and exactly whose time you're saving with the scholarships (I imagine it takes you a lot of time to make the scholarship decisions, but maybe not).
ClaireZabel @ 2022-03-10T23:30 (+18)
Hm yeah, I can see how this was confusing, sorry!
I actually wasn't trying to stake out a position about the relative value of 80k vs. our time. I was saying that with 80k advising, the basic inputs per career shift are a moderate amount of funding from us and a little bit of our time and a lot of 80k advisor time, while with scholarships, the inputs per career shift are a lot of funding and a moderate amount of our time, and no 80k time. So the scholarship model is, according to me, more expensive in dollars per career shift, but less time-consuming of dedicated longtermist time per career shift.
I think the scholarships are more time-consuming for us per dollar disbursed than giving grants to 80k, but less time-consuming in aggregate because there's effectively no grantee "middle man" also spending time.
Of course, some of the scholarships directly fund people to do object-level valuable things, this argument just concerns their role in making certain career paths more attractive and accessible.
Does that make more sense?
Ben Pace @ 2022-03-11T22:14 (+2)
Thanks! The core thing I'm hearing you say is that the scholarships are the sort of thing you wouldn't fund on a cost-effectiveness metric and 80k is, but that switching to a time-effectiveness metric changes this so that the scholarships are now competitive.
ClaireZabel @ 2022-03-11T22:45 (+11)
No, that's not what I'd say (and again, sorry that I'm finding it hard to communicate about this clearly). This isn't necessarily making a clear material difference in what we're willing to fund in many cases (though it could in some), it's more about what metrics we hold ourselves to and how that leads us to prioritize.
I think we'd fund at least many of the scholarships from a pure cost-effectiveness perspective. We think they meet the bar of beating the last dollar, despite being on average less cost-effective than 80k advising, because 80k advising doesn't have enough room for more funding. If 80k advising could absorb a bunch more orders of magnitude of funding with no diminishing returns, then I could imagine us not wanting to fund these scholarships from a cost-effectiveness perspective but wanting to fund them from a time-effectiveness perspective.
A place where it could make a material difference is if I imagine a hypothetical generalist EA asking what they should work on. I can imagine them noting that a given intervention (e.g. mentoring a few promising people while taking a low salary) is more cost-effective (and I think cost-effectiveness is often the default frame EAs think in), and me encouraging them to investigate whether a different intervention allows them to accomplish more with their time while being less cost-effective (e.g. setting up a ton of digital advertising of a given piece of written work), and saying that right now, the second intervention might be better.
Linch @ 2022-03-10T23:15 (+12)
My personal reading of the post is that they think the scholarship decisions don't take up a lot of time, relative to 80k advisory stuff.
Luise @ 2022-03-24T12:13 (+2)
Hi Claire,
what are your thoughts on "going one meta-level up" and trying to build the meta space? Specifically creating opportunities like UGAP, the GCP internships, or running organisers' summits to get more and better community builders? I'm unsure but I thought this might be at odds with some of the points you raised, e.g., that we might neglect object-level work and its community-building effect. I'd love to hear your thoughts!
ClaireZabel @ 2022-03-25T07:07 (+6)
I'm interested in and supportive of people running different experiments with meta-meta efforts, and I think they can be powerful levers for doing good. I'm pretty unsure right now if we're erring too far in the meta and meta-meta direction (potentially because people neglect the meta effects of object-level work) or should go farther, but hope to get more clarity on that down the road.