lukeprog's Quick takes

By lukeprog @ 2024-11-14T10:11 (+7)

lukeprog @ 2024-11-14T10:12 (+161)

Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:

I hope these clarifications are helpful, and lead to fruitful discussion, though I don't expect to have much time to engage with comments here.

Jason @ 2024-11-14T14:22 (+52)

Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.

Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US "right-of-center"[1] policy work to GV, I would be somewhat surprised that this otherwise well-written post didn't say so.

Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It's generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and that seems to be the case here. However, where a funder is as critical to an ecosystem as GV is here, I think fairly high transparency about its unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to clearly understand whether the 800-pound gorilla funder will be closed to them.

  1. ^

    I place this in quotes because the term is ambiguous.

lukeprog @ 2024-11-19T02:47 (+4)

Good Ventures did indicate to us some time ago that they don't think they're the right funder for some kinds of right-of-center AI policy advocacy, though (a) the boundaries are somewhat fuzzy and pretty far from the linked comment's claim about an aversion to opportunities that are "even slightly right of center in any policy work," (b) I think the boundaries might shift in the future, and (c) as I said above, OP regularly recommends right-of-center policy opportunities to other funders.

Also, I don't actually think this should affect people's actions much, because my team has been looking for right-of-center policy opportunities for years (and is continuing to do so), and the bottleneck is "available opportunities that look high-impact from an AI GCR perspective," not "available funding." If you want to start or expand a right-of-center policy group aimed at AI GCR mitigation, you should do it and apply here! I can't guarantee we'll think it's promising enough to recommend to the funders we advise, but there are millions (maybe tens of millions) available for this kind of work; we've simply found only a few opportunities that seem above our bar for expected impact on AI GCR, despite years of searching.

David Mathers🔸 @ 2024-11-19T07:49 (+4)

Can you say what the "some kinds" are? 

Habryka @ 2024-11-14T18:12 (+50)

I think it might be a good idea to taboo the phrase "OP is funding X" (at least when talking about present-day Open Phil).

Historically, OP used the phrase "OP is funding X" to mean "OP referred a grant for X to GV" (and such referrals were approximately never rejected). One could also roughly assume that if OP decided not to recommend a grant to GV, most OP staff did not think that grant was more cost-effective than other grants referred to GV (and as such, the words people used to describe OP not referring a grant to GV were "rejecting X" or "defunding X").

Of course, now that the relationship between OP and GV has substantially changed, and the trust has broken down somewhat, the term "OP is funding X" is confusing (including, IMO, in your comment: in your last few bullet points you say "OP has given far more to global health than AI," when, to avoid confusing people, it would be better to say "OP has recommended far more grants to global health," since OP itself has not actually given away any money directly, and in the rest of your comment you use "recommend").

I think the key thing for people to understand is why it no longer makes sense to talk about "OP funding X", and where it makes sense to model OP grant-referrals to GV as still closely matching OP's internal cost-effectiveness estimates.[1]

For organizations and funders trying to orient towards the funding ecosystem, the most important thing is understanding what GV is likely to fund on behalf of an OP recommendation. So when people talk about "OP funding X" or "OP not funding X", that is usually what they refer to (and it is also how OP has historically used those words, and how you use them in your comment). I expect this usage to change over time, but it will take a while (and I would ask you to be gracious and charitable when trying to understand what people mean when they conflate OP and GV in discussions).[2]

Now, having gotten that clarification out of the way, my guess is that most of the critiques you have seen about OP funding are basically accurate when seen through this lens (though I don't know which critiques you are referring to, since you aren't being specific). As an example, as Jason says in another comment, it does look like GV has a very limited appetite for grants to right-of-center organizations, and since (as you say yourself) the external funders reject the majority of grants you refer to them, this de facto leads to a large reduction in funding, and a large negative incentive for founders and organizations who are considering working more with the political right.

I think your comment is useful, and helps people understand some of how OP is trying to counteract the ways GV's withdrawal from many crucial funding areas has affected things, which I am glad about. I do also think your comment has far too much of the vibe of "nothing has changed in the last year" and "you shouldn't worry too much about which areas GV does or doesn't want to fund". De facto, GV was, and is likely to continue to be, 95%+ of the giving that OP influences, and the dynamics between OP and non-GV funders are drastically different from the dynamics historically between OP and GV.

I think a better intuition pump for people trying to understand the funding ecosystem would be a comment that is scope-sensitive in the relevant ways. I think it would start with saying:

Yes, over the last 1-2 years our relationship to GV has changed, and I think it no longer really makes sense to think about OP 'funding X'. These days, especially in the catastrophic risk space, it makes more sense to think of OP as a middleman between grantees and other foundations and large donors. This is a large shift, and I think understanding how that shift has changed funding allocation is of crucial importance when trying to predict which projects in this space are underfunded, and what new projects might be able to get funding.

95%+ of the recommendations we make are to GV. When GV does not want to fund something, it is up to a relatively loose set of external funders, with whom we have weaker relationships, to make the grant, and whether it gets made will hinge on whether those external funders have appetite for that kind of grant, which depends heavily on their more idiosyncratic interests and preferences. Most grants that we do not refer to GV, but would like to see funded, do not ultimately get funded by other funders.[3]

[Add the rest of your comment, ideally explaining how GV might differ from OP here[4]]

  1. ^

    And another dimension to track is "where OP's cost-effectiveness estimates are likely to be wrong". Due to the tricky nature of the OP/GV relationship, I expect OP to systematically be worse at making accurate cost-effectiveness estimates where GV has strong reputation-adjacent opinions, because of course it is of crucial importance for OP to stay "in sync" with GV, and repeated prolonged disagreements are the kind of thing that tends to cause people and organizations to get out of sync.

  2. ^

    Of course, people might also care about the opinions of OP staff, as people who have been thinking about grantmaking for a long time, but my sense is that insofar as those opinions do not translate into funding, they are of lesser importance when trying to identify neglected niches and funding approaches (though still important).

  3. ^

    I don't know how true this is, and of course you should write what seems true to you here. I currently think this is true, but "60% of grants referred get made" would not be that surprising either. And of course this is a two-sided game: OP will take into account whether there are any interested funders even before deciding whether to evaluate a grant at all, so the ground truth here is tricky to establish.

  4. ^

    For example, you say that OP is happy to work with people who are highly critical of OP. That does seem true! However, my honest best guess is that it's much less true of GV, and that being publicly critical of GV and Dustin is the kind of thing that could very much influence whether OP ends up successfully referring a grant to GV; to a lesser degree, being critical of OP also makes receiving funding from GV less likely. That is of crucial importance for people to know when deciding how open and transparent to be about their opinions.

lukeprog @ 2024-11-19T02:48 (+4)

Replying to just a few points…

I agree about tabooing "OP is funding…"; my team is undergoing that transition now, leading to some inconsistencies in our own usage, let alone that of others.

Re: "large negative incentive for founders and organizations who are considering working more with the political right." I'll note that we've consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding. Plus, GV can and does directly fund lots of work that "engages with the right" (your phrasing), e.g. Horizon fellows and many other GV grantees regularly engage with Republicans, and seem likely to do even more of that on the margin given the incoming GOP trifecta.

Re: "nothing has changed in the last year." No, a lot has changed, but my quick-take post wasn't about "what has changed," it was about "correcting some misconceptions I'm encountering."

Re: "De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing." This isn't true, including specifically for my team ("AI governance and policy").

I also don't think this was ever true: "One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV." There's plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.

Habryka @ 2024-11-19T04:15 (+2)

Re: "nothing has changed in the last year." No, a lot has changed, but my quick-take post wasn't about "what has changed," it was about "correcting some misconceptions I'm encountering."

Makes sense. I think it's easy to point out ways things are off, but in this case, IMO the most important thing that needs to happen in the funding ecosystem is for people to grapple with the huge changes that have occurred, and I think a lot of OP communication has been actively pushing back on that. Not necessarily intentionally; I just think it's a tempting and recurring error mode for established institutions to react to people freaking out with a "calm down" attitude, even when that's inappropriate (cf. the CDC and pandemics, and many past instances of similar dynamics).

In particular, I am confident the majority of readers of your original comment interpreted what you said as meaning that GV has no substantial dispreference for right-of-center grants, which I think was substantially harmful to the epistemic landscape (though I am glad that further prodding by me and Jason cleared that up).

Habryka @ 2024-11-19T04:04 (+2)

Re: "De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing." This isn't true, including specifically for my team ("AI governance and policy").

I would take bets on this! It is of course important to assess the counterfactualness of recommendations from OP. If you recommend a grant a funder would have made anyway, it doesn't make any sense to count that as something OP "influenced".

With that adjustment, I would take bets that more than 90% of influence-adjusted grants from OP in 2024 will have been made by GV. (I don't think it's true in "AI governance and policy", where I can imagine it being substantially lower; I have much less visibility into that domain. My median for all of OP is 95%, but that doesn't imply my betting odds, since I want at least a bit of profit margin.)

Happy to refer this to some trusted third-party arbiter for adjudication.

lukeprog @ 2024-11-19T06:34 (+3)

I'd rather not spend more time engaging here, but see e.g. this.

Rebecca @ 2024-11-19T06:27 (+2)

I’m confused by the wording of your bet - I thought you had been arguing that more than 90% are by GV, not ‘more than 90% are by a non-GV funder’

Habryka @ 2024-11-19T06:40 (+2)

Sorry, just a typo!

Habryka @ 2024-11-19T04:01 (+2)

I also don't think this was ever true: "One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV." There's plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.

I used the double negative here very intentionally. Funding recommendations don't get made by majority vote, and there isn't such a thing as "the Open Phil view" on a grant, but up until 2023 I had long and intense conversations with staff at OP who said that it would be very weird and extraordinary if OP rejected a grant that most of its staff considered substantially more cost-effective than your average grant. 

That of course stopped being true recently (and I also think past OP staff somewhat overstated the degree to which it was true previously, but it sure was something that OP staff actively reached out to me about and claimed was true when I disputed it). You saying "this was never true" is in direct contradiction to statements made by OP staff to me up until late 2023 (bar what people claimed were very rare exceptions).

Will Aldred @ 2024-11-14T15:51 (+10)

I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages

(Fwiw, the crowd prediction on the Metaculus question ‘Will there be another donor on the scale of 2020 Good Ventures in the Effective Altruist space in 2026?’ currently sits at 43%.)

MichaelDickens @ 2024-11-19T19:59 (+5)

1. Several of our grantees regularly criticize leading AI companies in their official communications
2. Organizations we've directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose

Could you give examples of these?