There should be more AI safety orgs

By mariushobbhahn @ 2023-09-21T14:53 (+117)

Ajeya @ 2023-09-26T02:54 (+59)

(Cross-posted to LessWrong.)

I’m a Senior Program Officer at Open Phil, focused on technical AI safety funding. I’m hearing a lot of discussion suggesting funding is very tight right now for AI safety, so I wanted to give my take on the situation.

At a high level: AI safety is a top priority for Open Phil, and we are aiming to grow how much we spend in that area. There are many potential projects we'd be excited to fund, including some potential new AI safety orgs as well as renewals to existing grantees, academic research projects, upskilling grants, and more. 

At the same time, it is also not the case that someone who reads this post and tries to start an AI safety org would necessarily have an easy time raising funding from us. This is because:

EvanMcVail @ 2023-10-12T03:14 (+23)

By the way, Open Philanthropy is actively hiring for roles on Ajeya’s team in order to build capacity to make more TAIS grants! You can learn more and apply here.

Ajeya @ 2023-10-12T18:33 (+4)

And a quick note that we've also added an executive assistant / operations role since Evan wrote this comment! 

Tom Barnes @ 2023-09-28T12:42 (+13)

Thanks Ajeya, this is very helpful and clarifying!

I am the only person who is primarily focused on funding technical research projects ... I began making grants in November 2022

Does this mean that prior to November 2022 there were ~no full-time technical AI safety grantmakers at Open Philanthropy? 

OP (prev. GiveWell Labs) has been evaluating grants in the AI safety space for over 10 years. In that time the AI safety field and Open Philanthropy have both grown, with OP making over $300m in grants on AI risk. Open Phil has also done a lot of research on the problem. So, to someone on the outside, it seems surprising that the number of people making grants has been consistently low.

OllieBase @ 2023-09-28T12:52 (+3)

Daniel Dewey was a Program Officer for potential risks from advanced AI at OP for several years. I don't know how long he was there for, but he was there in 2017 and left before May 2021.

Thomas Kwa @ 2023-09-21T17:14 (+43)

I think funding is a bottleneck. Everything I've heard suggests the funding environment is really tight: CAIS is not hiring due to lack of funding. FAR is only hiring one RE in the next few months due to lack of funding. Less than half of this round of MATS scholars were funded for independent research. I think this is because there are not really 5-10 EA funders able to fund at large scale, just OP and SFF; OP is spending less than they were pre-FTX. At LTFF the bar is high, LTFF's future is uncertain, and they tend not to make huge grants anyway. So securing funding should be a priority for anyone trying to start an org.

Edit: I now think the impact of these orgs is uncertain enough that one should not conclude with certainty there is a funding bottleneck.

mariushobbhahn @ 2023-09-21T17:27 (+21)

I have heard mixed messages about funding. 

From the many people I interact with and also from personal experience it seems like funding is tight right now. However, when I talk to larger funders, they typically still say that AI safety is their biggest priority and that they want to allocate serious amounts of money toward it. I'm not sure how to resolve this but I'd be very grateful to understand the perspective of funders better. 

I think the uncertainty around funding is problematic because it makes it hard to plan ahead. It's hard to do independent research, start an org, hire, etc. If there were clarity, people could at least consider alternative options. 

Linch @ 2023-09-22T05:22 (+32)

(My own professional opinions; other LTFF fund managers etc. might have other views.) 

Hmm I want to split the funding landscape into the following groups:

  1. LTFF
  2. OP
  3. SFF
  4. Other EA/longtermist funders
  5. Earning-to-givers
  6. Non-EA institutional funders
  7. Everybody else

LTFF

At LTFF our two biggest constraints are funding and strategic vision. Historically it was some combination of grantmaking capacity and good applications but I think that's much less true these days. Right now we have enough new donations to fund what we currently view as our best applications for some months, so our biggest priority is finding a new LTFF chair to help (among others) address our strategic vision bottlenecks.

Going forwards, I don't really want to speak for other fund managers (especially given that the future chair should feel extremely empowered to shepherd their own vision as they see fit). But I think we'll make a bid to try to fundraise a bunch more to help address the funding bottlenecks in x-safety. Still, even if we double our current fundraising numbers or so[1], my guess is that we're likely to prioritize funding more independent researchers etc below our current bar[2], as well as supporting our existing grantees, over funding most new organizations. 

(Note that in $ terms LTFF isn't a particularly large fraction of the longtermist or AI x-safety funding landscape; I'm talking about it the most because it's the group I'm most familiar with.)

Open Phil

I'm not sure what the biggest constraints are at Open Phil. My two biggest guesses are grantmaking capacity and strategic vision.  As evidence for the former, my impression is that they only have one person doing grantmaking in technical AI Safety (Ajeya Cotra). But it's not obvious that grantmaking capacity is their true bottleneck, as a) I'm not sure they're trying very hard to hire, and b) people at OP who presumably could do a good job at AI safety grantmaking (eg Holden) have moved on to other projects. It's possible OP would prefer conserving their AIS funds for other reasons, eg waiting on better strategic vision or to have a sudden influx of spending right before the end of history.

SFF

I know less about SFF. My impression is that their problems are a combination of a) structural difficulties preventing them from hiring great grantmakers, and b) funder uncertainty.

Other EA/Longtermist funders

My impression is that other institutional funders in longtermism either don't really have the technical capacity or don't have the gumption to fund projects that OP isn't funding, especially in technical AI safety (where the tradeoffs are arguably more subtle and technical than in eg climate change or preventing nuclear proliferation). So they do a combination of saving money, taking cues from OP, and funding "obviously safe" projects.

Exceptions include new groups like Lightspeed (which I think is more likely than not to be a one-off thing), and Manifund (which has a regranters model).

Earning-to-givers

I don't have a good sense of how much latent money there is in the hands of earning-to-givers who are at least in theory willing to give a bunch to x-safety projects if there's a sufficiently large need for funding. My current guess is that it's fairly substantial. I think there are roughly three reasonable routes for earning-to-givers who are interested in donating:

  1. pooling the money in a (semi-)centralized source
  2. choosing for themselves where to give to
  3. saving the money for better projects later.

If they go with (1), LTFF is probably one of the most obvious choices. But LTFF does have a number of dysfunctions, so I wouldn't be surprised if either Manifund or some newer group ends up being the Schelling donation source instead.

Non-EA institutional funders

I think as AI Safety becomes mainstream, getting funding from government and non-EA philanthropic foundations becomes an increasingly viable option for AI Safety organizations. Note that direct-work AI Safety organizations have a comparative advantage in seeking such funds. In comparison, it's much harder for both individuals and grantmakers like LTFF to seek institutional funding.[3]

I know FAR has attempted some of this already.

Everybody else

As worries about AI risk become increasingly mainstream, we might see people at all levels of wealth become more excited to donate to promising AI safety organizations and individuals. It's harder to predict what either non-Moskovitz billionaires or members of the general public will want to give to in the coming years, but plausibly the plurality of future funding for AI Safety will come from individuals who aren't culturally EA or longtermist or whatever.

  1. ^

    Which will also be harder after OP's matching expires.

  2. ^

    If the rest of the funding landscape doesn't change, the tier which I previously called our 5M tier (as in 5M/6 months or 10M/year) can probably absorb on the order of 6-9M over 6 months, or 12-18M over 12 months. This is in large part because the lack of other funders means more projects are applying to us.

  3. ^

    Regranting is pretty odd outside of EA; I think it'd be a lot easier for e.g. FAR or ARC Evals to ask random foundations or the US government for money directly for their programs than for LTFF to ask for money to regrant according to our own best judgment. My understanding is that foundations and the US government also often have long forms and application processes, which would be a burden for individuals to fill out; it makes more sense for institutions to pay that cost.

JoshuaBlake @ 2023-09-25T06:45 (+1)

There's some really useful information here. Getting it out in a more visible way would be useful.

Linch @ 2023-09-26T04:33 (+2)

Thanks! I've crossposted the comment to LessWrong. I don't think it's polished enough to repost as a frontpage post (and I'm unlikely to spend the effort to polish it). Let me know if there are other audiences that would find this comment useful.

Vaidehi Agarwalla @ 2023-09-22T05:51 (+18)

"Less than half of this round of MATS scholars were funded for independent research."

-> It's not clear to me what exactly the bar for independent research should be. It seems like it's not a great fit for a lot of people, and I expect it to be incredibly hard to do well as a relatively junior person. So it doesn't have to be a bad thing that some MATS scholars didn't get funding.

Also, I don't necessarily think that orgs being unable to hire is in and of itself a sign of a funding bottleneck. I think you'd first need to make the case that these organisations are crossing a certain impact threshold.

(I do believe AIS lacks diversity of funders and agree with your overall point).

Thomas Kwa @ 2023-09-22T06:20 (+1)

Fair point about the independent research funding bar. I think the impact of CAIS and FAR is hard to deny, simply because they both have several impressive papers.

Ben_West @ 2023-09-26T00:09 (+11)

Thanks for writing this! It seems like a valuable point to consider, and one that I have been thinking about myself recently.

My guess is that most of the people who are capable of founding an organization are also capable of being middle or senior managers within existing organizations, and my intuition is that they would probably be more impactful there. I'm curious if you have the opposite intuition?

mariushobbhahn @ 2023-09-26T07:37 (+2)

I touched on this a little bit in the post. I think it really depends on a couple of assumptions.
1. How much management would they actually get to do in that org? At the current pace of hiring, it's unlikely that someone could build a team as quickly as you can with a new org.
2. How different is their agenda from existing ones? If they have an agenda that differs from anything currently pursued in an existing org, it seems hard or impossible to use those management skills there.
3. How fast do we think the landscape has to grow? If we think a handful of orgs with 100-500 members in total is sufficient to address the problem, this is probably the better path. If we think this is not enough, starting and scaling new orgs seems better. 

But as I said in the post, for many (probably most) people, starting a new org is not the best move. For some it is, though, and I don't think we're supporting this enough as a community. 

Ben_West @ 2023-09-26T18:51 (+4)

At the current pace of hiring, it's unlikely that someone could build a team as quickly as you can with a new org.

Can you say more about why this is?

The standard assumption is that the proportional rate of growth is independent of absolute size, i.e. a large company is as likely to grow 10% as a small company is. As a result, large companies are much more likely to grow in absolute terms than small companies are.[1]
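
For concreteness, a minimal illustration with made-up numbers: if the expected proportional growth rate $g$ is the same at every size $S$, then expected absolute growth is

$$\mathbb{E}[\Delta S] = g \cdot S,$$

so with $g = 0.1$ over a given period, a 200-person org adds roughly 20 people while a 10-person org adds only 1.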

I could imagine various reasons why AI safety might deviate from the norm here, but am not sure which of them you are arguing for. (Sorry if this was in the post and I'm not able to find it.)

  1. ^

    My understanding is that there is dispute about whether these quantities are actually independent, but I'm not aware of anything suggesting that small companies will generally grow in absolute terms faster than large companies (and I understand there is substantial empirical evidence suggesting the opposite).

ShayBenMoshe @ 2023-09-26T07:06 (+1)

Many people live in an area (or country) where there isn't even a single AI safety organization, and can't or don't want to move. In that sense, no, they can't even join an existing organization (at any level).

(I think founding an organization has other advantages over joining an existing one, but this is my top disagreement.)

Prometheus @ 2023-09-25T02:57 (+6)

(crossposted from lesswrong)

I created a simple Google Doc for anyone interested in joining/creating a new org to put down their names, contact info, what research they're interested in pursuing, and what skills they currently have. Over time, I think a network can be fostered, where relevant people start forming their own research directions and then begin building their own orgs/getting funding. https://docs.google.com/document/d/1MdECuhLLq5_lffC45uO17bhI3gqe3OzCqO_59BMMbKE/edit?usp=sharing 

Denis @ 2023-09-28T11:33 (+2)

Great article. Identifying the problem is half the solution. You have provided one provocative answer, which challenges us to either agree with you or propose a better solution!

Your description mirrors my experience in looking to move into this area (AI Safety / Governance) and talking to others wanting to do the same. There is so much goodwill from organisations and from individuals, but my feeling is that they are just overwhelmed: by the extent of the work needed, by the number of applications for each role, by the logistics and the funding challenges. Even if the money is "available", it requires quite a lot of investigation and paperwork to actually get it, which itself consumes valuable resources. 

This week in the BlueDot AI Safety/Governance course, the topic was "Career Advice". People spoke of applying for roles and discovering that there were more than 100 (even more than 500) applicants for individual roles in some cases. Which then means organisations with limited resources spend a lot of these resources on the hiring process. 

And yet, you can't just take an organisation, double the workforce in a month, and expect it to maintain the same quality and culture that has made it so valuable in the first place. At the same time, one of the lessons I've been learning is that organisations that want to make an impact often need a lot of time to build credibility. You can do great work, but if decision-makers have never heard of you, it may not be very impactful. 

I know that there are organisations (e.g. Rethink Priorities) who are actively looking for potential founders. The problem is that being a founder requires a quite specific skill-set, commitment and energy-level. 

I think that an interesting, alternative way to address this would mirror what tends to happen in the corporate world if rapid expansion is needed. Let's say you have a company of 100 and you realise you want to become 200 by the end of the year. Here's what you might do:

  1. Identify the very most critical work that you're currently doing, and make sure the right people continue to work on that. (first and foremost, don't make things worse!)
  2. With that caveat, think about who from your organisation would have the skillset to recruit, manage, coach, train, mentor new people. Maybe pick a team of 20, including a range of levels, but, if anything, tending towards more senior. 
  3. Treat the growth like a project, with stages - planning, sourcing funding, strategy, recruitment, ... This "project" will be the full-time work of these people for the next few years. 
  4. Create a clear long-term vision for how the new organisation will look in a few years, and recruit towards that. (Don't just recruit 100 new-graduates and expect to have a functional organisation). 
  5. Maybe the first round of recruiting might be 10-20 relatively senior people who will become part of the leadership team and who will take over some of the work of recruiting. In this first phase, each new person will have an experienced mentor who has worked for the organisation for some time, and after 3-6 months, these new people will be in a position to coach and mentor new hires themselves. 
  6. Do not fully "merge" the new and old organisations until you are very confident that it will work well. At the same time, ensure that the new organisation has all the benefits of the existing one, including access to people for networking, advice, contacts, name-recognition. 
     

Obviously this is a vastly oversimplified scheme. But the point is: if you can make this work, then instead of a new organisation which may struggle for recognition, for resources, for purpose, and so on, you can vastly increase the potential of an already existing, successful organisation which is currently resource-limited. As you say, the talent is there, ready to work. The problems are there, ready to be solved. 

Walt @ 2023-09-25T10:16 (+1)

(Cross-posted from LW)

and Kaarel’s work on DLK

@Kaarel is the research lead at Cadenza Labs (previously called NotodAI), our research group, which started during the first part of SERI MATS 3.0. (There will be more information about Cadenza Labs soon, hopefully!) 

Our team members broadly agree with the post! 

Currently, we are looking for further funding to continue to work on our research agenda. Interested funders (or potential collaborators) can reach out to us at info@cadenzalabs.org.

Roman Leventov @ 2023-09-24T10:03 (+1)

(Cross-posted from LW)

@Nathan Helm-Burger's comment made me think it's worthwhile to reiterate here the point that I periodically make:

Direct "technical AI safety" work is not the only way for technical people (who think that governance & politics, outreach, advocacy, and field-building work doesn't fit them well) to contribute to the larger "project" of "ensuring that the AI transition of the civilisation goes well".

Now that powerful LLMs are available, it is the golden age to build innovative systems and tools to improve[1]:

I believe that if such projects are approached with integrity, thoughtful planning, and AI safety considerations at heart, rather than with short-term thinking (specifically, not considering how the project will play out if or when AGI is developed and unleashed on the economy and the society) and profit-extraction motives, they could shape the trajectory of the AI transition in a positive way, and the impact may be comparable to some direct technical AI safety/alignment work.

In the context of this post, it's important that the verticals and projects mentioned above could either be conventionally VC-funded, because they could promise direct financial returns to investors, or could receive philanthropic or government funding that wouldn't otherwise go to technical AI safety projects. Also, there are a number of projects in these areas that are already well-funded and hiring.

Joining such projects might also be a good fit for software engineers and other IT and management professionals who don't feel they are smart enough or have the right intellectual predispositions to do good technical research anyway, even if there were enough well-funded "technical AI safety research orgs". There should be some people who do science and some people who do engineering.

  1. ^

    I didn't do serious due diligence and impact analysis on any of the projects mentioned. The mentioned projects are just meant to illustrate the respective verticals, and are not endorsements.