There should be more AI safety orgs
By mariushobbhahn @ 2023-09-21T14:53 (+117)
This is a crosspost from LessWrong.
Ajeya @ 2023-09-26T02:54 (+59)
(Cross-posted to LessWrong.)
I’m a Senior Program Officer at Open Phil, focused on technical AI safety funding. I’m hearing a lot of discussion suggesting funding is very tight right now for AI safety, so I wanted to give my take on the situation.
At a high level: AI safety is a top priority for Open Phil, and we are aiming to grow how much we spend in that area. There are many potential projects we'd be excited to fund, including some potential new AI safety orgs as well as renewals to existing grantees, academic research projects, upskilling grants, and more.
At the same time, it is also not the case that someone who reads this post and tries to start an AI safety org would necessarily have an easy time raising funding from us. This is because:
- All of our teams whose work touches on AI (Luke Muehlhauser’s team on AI governance, Claire Zabel’s team on capacity building, and me on technical AI safety) are quite understaffed at the moment. We’ve hired several people recently, but across the board we still don’t have the capacity to evaluate all the plausible AI-related grants, and hiring remains a top priority for us.
- And we are extra-understaffed for evaluating technical AI safety proposals in particular. I am the only person who is primarily focused on funding technical research projects (sometimes Claire’s team funds AI safety related grants, primarily upskilling, but a large technical AI safety grant like a new research org would fall to me). I currently have no team members; I expect to have one person joining in October and am aiming to launch a wider hiring round soon, but I think it’ll take me several months to build my team’s capacity up substantially.
- I began making grants in November 2022, and spent the first few months full-time evaluating applicants affected by FTX (largely academic PIs as opposed to independent organizations started by members of the EA community). Since then, a large chunk of my time has gone into maintaining and renewing existing grant commitments and evaluating grant opportunities referred to us by existing advisors. I am aiming to reserve remaining bandwidth for thinking through strategic priorities, articulating what research directions seem highest-priority and encouraging researchers to work on them (through conversations and hopefully soon through more public communication), and hiring for my team or otherwise helping Open Phil build evaluation capacity in AI safety (including separately from my team).
- As a result, I have deliberately held off on launching open calls for grant applications similar to the ones run by Claire’s team (e.g. this one); before onboarding more people (and developing or strengthening internal processes), I would not have the bandwidth to keep up with the applications.
- On top of this, in our experience, providing seed funding to new organizations (particularly organizations started by younger and less experienced founders) often leads to complications that aren't present in funding academic research or career transition grants. We prefer to think carefully about seeding new organizations, and have a different and higher bar for funding someone to start an org than for funding that same person for other purposes (e.g. career development and transition funding, or PhD and postdoc funding).
- I’m very uncertain about how to think about seeding new research organizations and many related program strategy questions. I could certainly imagine developing a different picture upon further reflection — but having low capacity combines poorly with the fact that this is a complex type of grant we are uncertain about on a lot of dimensions. We haven’t had the senior staff bandwidth to develop a clear stance on the strategic or process level about this genre of grant, and that means that we are more hesitant to take on such grant investigations — and if / when we do, it takes up more scarce capacity to think through the considerations in a bespoke way rather than having a clear policy to fall back on.
EvanMcVail @ 2023-10-12T03:14 (+23)
By the way, Open Philanthropy is actively hiring for roles on Ajeya’s team in order to build capacity to make more TAIS grants! You can learn more and apply here.
Ajeya @ 2023-10-12T18:33 (+4)
And a quick note that we've also added an executive assistant / operations role since Evan wrote this comment!
Tom Barnes @ 2023-09-28T12:42 (+13)
Thanks Ajeya, this is very helpful and clarifying!
I am the only person who is primarily focused on funding technical research projects ... I began making grants in November 2022
Does this mean that prior to November 2022 there were ~no full-time technical AI safety grantmakers at Open Philanthropy?
OP (previously GiveWell Labs) has been evaluating grants in the AI safety space for over 10 years. In that time the AI safety field and Open Philanthropy have both grown, with OP granting over $300m on AI risk. Open Phil has also done a lot of research on the problem. So, from someone on the outside, it seems surprising that the number of people making grants has been consistently low.
OllieBase @ 2023-09-28T12:52 (+3)
Daniel Dewey was a Program Officer for potential risks from advanced AI at OP for several years. I don't know how long he was there for, but he was there in 2017 and left before May 2021.
Thomas Kwa @ 2023-09-21T17:14 (+43)
I think funding is a bottleneck. Everything I've heard suggests the funding environment is really tight: CAIS is not hiring due to lack of funding. FAR is only hiring one RE in the next few months due to lack of funding. Less than half of this round of MATS scholars were funded for independent research. I think this is because there are not really 5-10 EA funders able to fund at large scale, just OP and SFF; OP is spending less than they were pre-FTX. At LTFF the bar is high, LTFF's future is uncertain, and they tend not to make huge grants anyway. So securing funding should be a priority for anyone trying to start an org.
Edit: I now think the impact of these orgs is uncertain enough that one should not conclude with certainty there is a funding bottleneck.
mariushobbhahn @ 2023-09-21T17:27 (+21)
I have heard mixed messages about funding.
From the many people I interact with and also from personal experience it seems like funding is tight right now. However, when I talk to larger funders, they typically still say that AI safety is their biggest priority and that they want to allocate serious amounts of money toward it. I'm not sure how to resolve this but I'd be very grateful to understand the perspective of funders better.
I think the uncertainty around funding is problematic because it makes it hard to plan ahead. It's hard to do independent research, start an org, hire, etc. If there was clarity, people could at least consider alternative options.
Linch @ 2023-09-22T05:22 (+32)
(My own professional opinions, other LTFF fund managers etc might have other views)
Hmm I want to split the funding landscape into the following groups:
- LTFF
- OP
- SFF
- Other EA/longtermist funders
- Earning-to-givers
- Non-EA institutional funders.
- Everybody else
LTFF
At LTFF our two biggest constraints are funding and strategic vision. Historically it was some combination of grantmaking capacity and good applications but I think that's much less true these days. Right now we have enough new donations to fund what we currently view as our best applications for some months, so our biggest priority is finding a new LTFF chair to help (among others) address our strategic vision bottlenecks.
Going forwards, I don't really want to speak for other fund managers (especially given that the future chair should feel extremely empowered to shepherd their own vision as they see fit). But I think we'll make a bid to try to fundraise a bunch more to help address the funding bottlenecks in x-safety. Still, even if we double our current fundraising numbers or so[1], my guess is that we're likely to prioritize funding more independent researchers etc below our current bar[2], as well as supporting our existing grantees, over funding most new organizations.
(Note that in $ terms LTFF isn't a particularly large fraction of the longtermist or AI x-safety funding landscape, I'm only talking about it most because it's the group I'm the most familiar with).
Open Phil
I'm not sure what the biggest constraints are at Open Phil. My two biggest guesses are grantmaking capacity and strategic vision. As evidence for the former, my impression is that they only have one person doing grantmaking in technical AI Safety (Ajeya Cotra). But it's not obvious that grantmaking capacity is their true bottleneck, as a) I'm not sure they're trying very hard to hire, and b) people at OP who presumably could do a good job at AI safety grantmaking (eg Holden) have moved on to other projects. It's possible OP would prefer conserving their AIS funds for other reasons, eg waiting on better strategic vision or to have a sudden influx of spending right before the end of history.
SFF
I know less about SFF. My impression is that their problems are a combination of a) structural difficulties preventing them from hiring great grantmakers, and b) funder uncertainty.
Other EA/Longtermist funders
My impression is that other institutional funders in longtermism either don't really have the technical capacity or don't have the gumption to fund projects that OP isn't funding, especially in technical AI safety (where the tradeoffs are arguably more subtle and technical than in eg climate change or preventing nuclear proliferation). So they do a combination of saving money, taking cues from OP, and funding "obviously safe" projects.
Exceptions include new groups like Lightspeed (which I think is more likely than not to be a one-off thing), and Manifund (which has a regranters model).
Earning-to-givers
I don't have a good sense of how much latent money there is in the hands of earning-to-givers who are at least in theory willing to give a bunch to x-safety projects if there's a sufficiently large need for funding. My current guess is that it's fairly substantial. I think there are roughly three reasonable routes for earning-to-givers who are interested in donating:
- pooling the money in a (semi-)centralized source
- choosing for themselves where to give to
- saving the money for better projects later.
If they go with (1), LTFF is probably one of the most obvious choices. But LTFF does have a number of dysfunctions, so I wouldn't be surprised if either Manifund or some newer group ends up being the Schelling donation source instead.
Non-EA institutional funders
I think as AI Safety becomes mainstream, getting funding from government and non-EA philanthropic foundations becomes an increasingly viable option for AI Safety organizations. Note that direct work AI Safety organizations have a comparative advantage in seeking such funds. In comparison, it's much harder for both individuals and grantmakers like LTFF to seek institutional funding[3].
I know FAR has attempted some of this already.
Everybody else
As worries about AI risk become increasingly mainstream, we might see people at all levels of wealth become more excited to donate to promising AI safety organizations and individuals. It's harder to predict what either non-Moskovitz billionaires or members of the general public will want to give to in the coming years, but plausibly the plurality of future funding for AI Safety will come from individuals who aren't culturally EA or longtermist or whatever.
- ^
Which will also be harder after OP's matching expires.
- ^
If the rest of the funding landscape doesn't change, the tier which I previously called our 5M tier (as in 5M/6 months or 10M/year) can probably absorb on the order of 6-9M over 6 months, or 12-18M over 12 months. This is in large part because the lack of other funders means more projects are applying to us.
- ^
Regranting is pretty odd outside of EA; I think it'd be a lot easier for e.g. FAR or ARC Evals to ask random foundations or the US government for money directly for their programs than for LTFF to ask for money to regrant according to our own best judgment. My understanding is that foundations and the US government also often have long forms and application processes which will be a burden for individuals to fill; makes more sense for institutions to pay that cost.
JoshuaBlake @ 2023-09-25T06:45 (+1)
There's some really useful information here. Getting it out in a more visible way would be useful.
Linch @ 2023-09-26T04:33 (+2)
Thanks! I've crossposted the comment to LessWrong. I don't think it's polished enough to repost as a frontpage post (and I'm unlikely to spend the effort to polish it). Let me know if there are other audiences that would find this comment useful.
Vaidehi Agarwalla @ 2023-09-22T05:51 (+18)
"Less than half of this round of MATS scholars were funded for independent research."
-> It's not clear to me what exactly the bar for independent research should be. It seems like it's not a great fit for a lot of people, and I expect it to be incredibly hard to do it well as a relatively junior person. So it doesn't have to be a bad thing that some MATS scholars didn't get funding.
Also, I don't necessarily think that orgs being unable to hire is in and of itself a sign of a funding bottleneck. I think you'd first need to make the case that these organisations are crossing a certain impact threshold.
(I do believe AIS lacks diversity of funders and agree with your overall point).
Thomas Kwa @ 2023-09-22T06:20 (+1)
Fair point about the independent research funding bar. I think the impact of CAIS and FAR are hard to deny, simply because they both have several impressive papers.
Ben_West @ 2023-09-26T00:09 (+11)
Thanks for writing this! It seems like a valuable point to consider, and one that I have been thinking about myself recently.
My guess is that most of the people who are capable of founding an organization are also capable of being middle or senior managers within existing organizations, and my intuition is that they would probably be more impactful there. I'm curious if you have the opposite intuition?
mariushobbhahn @ 2023-09-26T07:37 (+2)
I touched on this a little bit in the post. I think it really depends on a couple of assumptions.
1. How much management would they actually get to do in that org? At the current pace of hiring, it's unlikely that someone could build a team as quickly as you can with a new org.
2. How different is their agenda from existing ones? What if they have an agenda that is different from any agenda that is currently done in an org? Seems hard/impossible to use the management skills in an existing org then.
3. How fast do we think the landscape has to grow? If we think a handful of orgs with 100-500 members in total is sufficient to address the problem, this is probably the better path. If we think this is not enough, starting and scaling new orgs seems better.
But like I said in the post, for many (probably most) people starting a new org is not the best move. But for some it is and I don't think we're supporting this enough as a community.
Ben_West @ 2023-09-26T18:51 (+4)
At the current pace of hiring, it's unlikely that someone could build a team as quickly as you can with a new org.
Can you say more about why this is?
The standard assumption is that the proportional rate of growth is independent of absolute size, i.e. a large company is as likely to grow 10% as a small company is. As a result, large companies are much more likely to grow in absolute terms than small companies are.[1]
I could imagine various reasons why AI safety might deviate from the norm here, but am not sure which of them you are arguing for. (Sorry if this was in the post and I'm not able to find it.)
- ^
My understanding is that there is dispute about whether these quantities are actually independent, but I'm not aware of anything suggesting that small companies will generally grow in absolute terms faster than large companies (and understand that there is substantial empirical evidence which suggests the opposite).
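The proportional-growth assumption above (roughly Gibrat's law) can be made concrete with a quick sketch. The org sizes and the 10% growth rate below are made-up illustrative numbers, not data about any real organization:

```python
# Under proportional growth, the growth *rate* is independent of size,
# so absolute headcount growth scales linearly with current size.
# Sizes and rate are illustrative assumptions only.

growth_rate = 0.10  # assume every org grows 10% per year, regardless of size

def hires_in_one_year(size: int) -> float:
    """Absolute number of people added in a year under proportional growth."""
    return size * growth_rate

for size in [10, 100, 1000]:
    print(f"org of {size:>4} people adds {hires_in_one_year(size):>5.0f} hires/year")
```

The point of the sketch: if the rate really is size-independent, a 1000-person org adds as many people in a year as a 10-person org adds in a century, which is why one would expect existing large orgs to absorb talent faster than new small ones.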
ShayBenMoshe @ 2023-09-26T07:06 (+1)
Many people live in an area (or country) where there isn't even a single AI safety organization, and can't or don't want to move. In that sense - no, they can't join an existing organization (at any level).
(I think founding an organization has other advantages over joining an existing one, but this is my top disagreement.)
Prometheus @ 2023-09-25T02:57 (+6)
(crossposted from lesswrong)
I created a simple Google Doc for anyone interested in joining/creating a new org to put down their names, contact info, what research they're interested in pursuing, and what skills they currently have. Over time, I think a network can be fostered, where relevant people start forming their own research groups, and then begin building their own orgs / getting funding. https://docs.google.com/document/d/1MdECuhLLq5_lffC45uO17bhI3gqe3OzCqO_59BMMbKE/edit?usp=sharing
Denis @ 2023-09-28T11:33 (+2)
Great article. Identifying the problem is half the solution. You have provided one provocative answer, which challenges us to either agree with you or propose a better solution!
Your description mirrors my experience in looking to move into this area (AI Safety / Governance) and talking to others wanting to move into this area. There is so much goodwill from organisations and from individuals, but my feeling is that they are just overwhelmed - by the extent of the work needed, by the number of applications for each role, by the logistics and the funding challenges. Even if the money is "available", it requires quite a lot of investigation and paperwork to actually get it, which takes away a valuable resource.
This week in the BlueDot AI Safety/Governance course, the topic was "Career Advice". People spoke of applying for roles and discovering that there were more than 100 (even more than 500) applicants for individual roles in some cases. Which then means organisations with limited resources spend a lot of these resources on the hiring process.
And yet, you can't just take an organisation and double the work-force in a month and expect it to maintain the same quality and culture that has made it so valuable in the first place. But at the same time, one of the lessons I've been learning is that organisations who want to make an impact often need a lot of time to build credibility. You can do great work, but if decision-makers have never heard of you, it may not be very impactful.
I know that there are organisations (e.g. Rethink Priorities) who are actively looking for potential founders. The problem is that being a founder requires a quite specific skill-set, commitment and energy-level.
I think that an interesting, alternative way to address this would mirror what tends to happen in the corporate world if rapid expansion is needed. Let's say you have a company of 100 and you realise you want to become 200 by the end of the year. Here's what you might do:
- Identify the very most critical work that you're currently doing, and make sure the right people continue to work on that. (first and foremost, don't make things worse!)
- With that caveat, think about who from your organisation would have the skillset to recruit, manage, coach, train, mentor new people. Maybe pick a team of 20, including a range of levels, but, if anything, tending towards more senior.
- Treat the growth like a project, with stages - planning, sourcing funding, strategy, recruitment, ... This "project" will be the full-time work of these people for the next few years.
- Create a clear long-term vision for how the new organisation will look in a few years, and recruit towards that. (Don't just recruit 100 new-graduates and expect to have a functional organisation).
- Maybe the first round of recruiting might be 10-20 relatively senior people who will become part of the leadership team and who will take over some of the work of recruiting. In this first phase, each new person will have an experienced mentor who has worked for the organisation for some time, and after 3-6 months, these new people will be in a position to coach and mentor new hires themselves.
- Do not fully "merge" the new and old organisations until you are very confident that it will work well. At the same time, ensure that the new organisation has all the benefits of the existing one, including access to people for networking, advice, contacts, name-recognition.
Obviously this is a vastly oversimplified scheme. But the point is: if you can make this work, then instead of a new organisation which may struggle for recognition, for resources, for purpose..., you can vastly increase the potential of an already existing, successful organisation which is currently resource-limited. As you say, the talent is there, ready to work. The problems are there, ready to be solved.
Walt @ 2023-09-25T10:16 (+1)
(Cross-posted from LW)
and Kaarel’s work on DLK
@Kaarel is the research lead at Cadenza Labs (previously called NotodAI), our research group which started during the first part of SERI MATS 3.0 (There will be more information about Cadenza Labs hopefully soon!)
Our team members broadly agree with the post!
Currently, we are looking for further funding to continue to work on our research agenda. Interested funders (or potential collaborators) can reach out to us at info@cadenzalabs.org.
Roman Leventov @ 2023-09-24T10:03 (+1)
(Cross-posted from LW)
@Nathan Helm-Burger's comment made me think it's worthwhile to reiterate here the point that I periodically make:
Direct "technical AI safety" work is not the only way for technical people (who think that governance & politics, outreach, advocacy, and field-building work doesn't fit them well) to contribute to the larger "project" of "ensuring that the AI transition of the civilisation goes well".
Now, as powerful LLMs are available, is the golden age to build innovative systems and tools to improve[1]:
- Politics: see https://cip.org/, Audrey Tang's projects
- Social systems: innovative LLM/AI-first social networks that solve the social dilemma? (I don't have good existing examples of such projects, though)
- Psychotherapy, coaching: see Inflection
- Economics: see Verses, One Project, the Gaia Consortium
- Epistemic infrastructure: see Subconscious Network, Ought, the Cyborgism agenda, Quantum Leap (AI safety edtech)
- Authenticity infrastructure: see Optic, proof-of-personhood projects
- Cybersec/infosec: see various AI startups for cybersecurity, trustoverip.org
- More?
I believe that if such projects are approached with integrity, thoughtful planning, and AI safety considerations at heart rather than with short-term thinking (specifically, not considering how the project will play out if or when AGI is developed and unleashed on the economy and the society) and profit-extraction motives, they could shape the trajectory of the AI transition in a positive way, and the impact may be comparable to some direct technical AI safety/alignment work.
In the context of this post, it's important that the verticals and projects mentioned above could either be conventionally VC-funded because they could promise direct financial returns to the investors, or could receive philanthropic or government funding that wouldn't otherwise go to technical AI safety projects. Also, there are a number of projects in these areas that are already well-funded and hiring.
Joining such projects might also be a good fit for software engineers and other IT and management professionals who don't feel they are smart enough or have the right intellectual predispositions to do good technical research anyway, even if there were enough well-funded "technical AI safety research orgs". There should be some people who do science and some people who do engineering.
- ^
I didn't do serious due diligence and impact analysis on any of the projects mentioned. The mentioned projects are just meant to illustrate the respective verticals, and are not endorsements.