Three Reflections from 101 EA Global Conversations

By Akash @ 2022-04-25T22:02 (+128)

I recently attended EAGxOxford, EAGxBoston, and EAG London. In total, I had about 101 one-on-one conversations (±10, depending on how you count casual/informal 1-1s). The vast majority of these were with people interested in longtermist community-building or AI safety.

Here are three of my biggest takeaways:

1) There are many people doing interesting work who aren’t well-connected

I recently moved to Berkeley, one of the major EA hubs for AI safety research and longtermist movement-building. I live with several people who are skilling up in alignment research, and I work out of an office with people who regularly talk about the probability of doom, timelines, takeoff speeds, MIRI’s agenda, Paul and Eliezer dialogues, ELK, agent foundations, interpretability, and, well, you get it.

At the EAGs, I was excited to meet many people (>30) interested in AI safety and longtermist community-building. Several (>10) of them were already dedicating a large portion of their time to AI safety or longtermist community-building (e.g., by spending a summer on AI safety research, leading a local EA chapter, or contracting for EA orgs).

One thing stood out to me, though: Many of the people I spoke to, including those who were already investing >100 hours into EA work, weren’t aware of the people/models/work in Berkeley and other EA hubs. 

Here’s a hypothetical example:

The point here is not that Alice should immediately drop what she’s doing. But I found it interesting how many people didn’t even realize what options were available to them. Alice, for example, could apply for a grant to skill up in AI safety research in an EA hub. But she often doesn’t realize this, and even if she does, she doesn’t seriously consider it when she’s thinking about her summer plans.

I don’t think people should blindly defer to the people/models in EA hubs. But I do think that exposure to these people/models will generally help people make more informed decisions. Two quick examples:

One of the easiest ways to get this exposure, I claim, is to talk directly to people doing this kind of work. After 1:1s with people who were doing (or seriously considering) longtermist work, I often asked, “Who would be good for this person to talk to?” and then I immediately threw them into some group chats.

More broadly, I’ve updated in the direction of the following claim: There are people doing (or capable of doing) meaningful longtermist work outside of major EA hubs. I’m excited about interventions that try to find these individuals and connect them to people who can support their work, challenge their thinking, and introduce them to new opportunities.

2) Considering wide action spaces is rare and valuable

It’s extremely common for people to think only about the opportunities directly in front of them, rather than considering the entire action space of possibilities.

A classic example is when I met Bob, a community-builder at Peter Singer University.

I think “considering wide action spaces” and “taking weird ideas seriously” are two of the traits I most commonly see in highly impactful people. To be clear, I think considerations of personal fit are important, and we don’t want everyone trying everything. But I claim that people generally default to dismissing ideas prematurely and failing to seriously consider what it would look like to do something that deviates from the natural, intuitive, default pathways.

If you are a student at PSU, I encourage you to think seriously about internships, research projects, skilling-up quests, and other opportunities that exist outside of PSU. Maybe the best thing for you to do is to stay, but you won’t know unless you consider the wide action space.

3) People should write down their ideas

At least 10 times during the EAGs, someone was describing something they had thought about in some detail (examples: a project proposal, a grant idea, comparisons between career options they had been considering).

And I asked, “Wow, have you written any of this up?”

And the person (usually) responded, “Oh… uh. No—well, not yet! I might write it up later/I’m planning to write it up/Maybe after the conference I’ll write it up/I’m nervous to write it up/I don’t have enough to actually write up…”

Some benefits of writing that I’ve noticed:

If you’re reading this, I encourage you to take 30-60 minutes to start writing something. Here are some examples of things that I’ve been encouraging my friends (and myself) to write up:

If you write something down by April 30, feel free to submit it to the Community Builder Writing Contest.

Miscellaneous Reflections

I’m grateful to Madhu Sriram, Luise Wöhlke, Lara Thurnherr, and Harriet Patterson for feedback on a draft of this post.


Charles He @ 2022-04-25T23:12 (+13)

I know very little about this, but at a recent conference I heard from informed people that S-risk seems to be one area within longtermism and AI risk that isn't as well funded as others right now.

As the OP says, S-risks are one of the few areas relevant to worldviews or theories of change with "very short timelines": their "tractability" might rise with the underlying likelihood of AGI emergence on short timelines.

These S-risks seem particularly important under one perspective on "short timelines" and AI risk, which I describe in a separate comment below.

Chi @ 2022-04-26T09:39 (+30)

If you or someone you know is seeking funding to reduce s-risk, please send me a message. If it's for a smaller amount, you can also apply directly to CLR Fund. This is true even if you want funding for a very different type of project than what we've funded in the past.

I work for CLR on s-risk community building and on our CLR Fund, which mostly does small-scale grantmaking, but I might also be able to make large-scale funding happen for s-risk projects, roughly in the tens of millions of dollars per project. And if you have something more ambitious than that, I'm also always keen to hear it :)

Charles He @ 2022-04-26T14:49 (+2)

This sounds great!

"I heard from informed people that S-risk seems to be one area within longtermism and AI risk that isn't as well funded as others right now"

Would you say this statement is wrong, or a bad characterization of the funding situation?

I want to be corrected so I don't spread misinformation.

Chi @ 2022-04-26T16:12 (+15)

I didn't run this by anyone else in the s-risk funding space, so please don't hold others to these numbers/opinions.
 

Tl;dr: I think this is probably directionally right, but with lots of caveats. In particular, it's still the case that s-risk has a lot of money (~low hundreds of $m) relative to ideas/opportunities, at least right now, and possibly more so than general longtermism. I think this might change soon, since I expect s-risk money to grow less than general longtermist money.

edit: I think s-risk is ideas-constrained when it comes to small grants, and funding- (and ideas-) constrained for large grants/investments.

I'd estimate s-risk to have something in the low hundreds of $m in expected value (not time-discounted) of current assets specifically dedicated to it. Your question is slightly hard to answer, since I'm guessing OpenPhil and FTXF would fund at least some s-risk projects if there were more proposals/more demand for money in s-risk. Also, a lot of funded people and projects that don't work directly on s-risk still care about s-risk; maybe that should be counted somehow. Naively not counting these people and OpenPhil/FTXF money at all, and comparing current total assets in general longtermism vs. s-risk:

In absolute terms: Yup, general longtermism definitely has much more money (~two orders of magnitude). My guess is that this ratio will grow over time, and that it will grow in expectation over time. (~70% credence for each of these claims? Again, I'm confused about how to count OpenPhil and FTXF money and how they'll decide to spend it in the future. If I stick to not counting them as s-risk money at all, then >70% credence.)

Per person working on s-risk/general longtermism: I would still say yes, although I don't have a good way to count s-risk people and general longtermist people. It could be closer to even, and probably not (much) more than an order of magnitude difference. Again, my quick and wild guess is that the difference will in expectation grow larger over time, but I'm less confident in this than in my guess about how the ratio of absolute money will develop. (55%?)

Per quality-adjusted idea/opportunity to spend money: Unsure. I'd (much) rather have more money-eating ideas/opportunities to reduce s-risk than more money to reduce s-risk, but I'm not sure whether this is more or less the case compared to general longtermism (s-risk has both fewer ideas/opportunities and less money). I also don't know how this will develop. Arguably, the ratio between money and ideas/opportunities isn't a great metric either, because you might care more about absolutes here. I think some people might argue that s-risk is less funding-constrained relative to ideas-constrained than general longtermism. This isn't exactly what you asked for, but it still seems relevant. OTOH, having less absolute money does mean that the s-risk space might struggle to fund even one really expensive project.

edit: I do think if we had significantly more money right now, we would be spending more money now-ish.

Per "how much people in the EA community care about this issue": Who knows :) I'm  obviously both biased and in a position that selects for my opinion.

Funding infrastructure: Funding in s-risk is even more centralized than in general longtermism, so if you think diversification is good, more s-risk funders are good :) There are also fewer structured opportunities for funding in s-risk, and I think the s-risk funding sources are generally harder to find. Although again, I assume one could easily apply with an s-risk-motivated proposal to general longtermist places, so it's kind of weird to compare the s-risk funding infrastructure to the general longtermist funding infrastructure.

 

I wrote this off the cuff and, in particular, might substantially revise my predictions with 15 minutes of thought.

Charles He @ 2022-04-26T16:51 (+2)

Wow, thanks for the reply!

Ok, so for me, the takeaway and socially best message (for a proponent of S-risk) is probably:

"For strong ideas/founders/leaders, there is ample funding for top new initiatives in S-risk."

Also, if you might revise this with "15 minutes of thought", that implies that you wrote this detailed, thoughtful comment in comparable time, which seems really impressive.

Chi @ 2022-04-26T17:40 (+9)

Haha, no, it took me quite a bit longer to phrase what I wrote, but I didn't have dedicated non-writing thinking time. E.g., the claim about the expected ratio of future assets seems like something I could sanity-check and get a better number for with a pen and paper and a few minutes, but I was too lazy to do that :)

(And I can't let false praise of me stand)

edit to also comment on the substantial part of your comment: Yes, that takeaway seems good to me!

edit edit: Although I'd caveat that s-risk is less mature than general longtermism (more "pre-paradigmatic", for people who like that word), so there might be less (obvious) work for founders/leaders to do right now, and that can be very frustrating. We still always want to hear about such people.

last edit?: And as in general longtermism, if somebody is interested in s-risk and has really high earning-to-give (EtG) potential, I might sometimes prefer that, especially given what I said above about founder/leader-type people. Something within an order of magnitude or two of FTXF for s-risk reduction would obviously be a huge win for the space, and I don't think it's crazy to think that people could achieve that.

Charles He @ 2022-04-25T23:16 (+2)

Comment: "Very short timelines" might be conflated with "inevitability". 

(The following isn't my idea, I've heard about it several times now. It seems good to share, even though my explanation is really basic.)

For many people with short timelines, it's less that they view AGI as coming in "15 or 50 years", and more that they view the "shape of the path" of the emergence of AGI as inevitable in some deep sense.

To explain it one way: to these people, watching civilization try to avoid dangerous AGI is sort of like watching a drunkard walking forward in a landscape with deep, dangerous holes. These holes get bigger and bigger over time as the drunkard walks.

Eventually, the holes will get so big, and gain such vast, slippery slopes, that even a skilled person won't be able to avoid slipping into one.

To get more "gearsy", these people with negative views believe that AI hardware and models/patterns/training will get much better and widely distributed. Government regulation will be highly inadequate (e.g. due to "moloch") and won't even come close to being effective in preventing or regulating AGI.

If you hold this belief, things look even worse once you consider other civilizations ("grabby aliens"). If you think aggressive AGI is inevitable (a "lower entropy" state), then it must also be inevitable for any other civilization. Even if your civilization manages to escape it, some other civilization will come across it. So it seems likely that some aggressive AGI will always emerge, prevail, and grab the other civilizations.

 

This all might be relevant to S-risk: if you can't prevent AGI, you can still shape the path by which it emerges, and you might avoid extremely dark S-risk scenarios.

If you believe AGI is this inevitable, then it is logical to believe you can find it (and that focused efforts can find it ahead of everyone else). This explains why some subset of people might be "trying to find AGI" or take certain other interventions that might seem wilder to someone without these perspectives.

Note that some people with these beliefs might not place that high a probability on S-risk, or even hold confident timelines for AGI. It's more that they view S-risk as extremely bad, in a way that warrants serious attention (certainly more than it gets right now). The reason for pointing this out is that the actual probability of S-risk might be low, and understanding this lower risk might make the presentation/explanation of this view more effective and reasonable.

Benjamin_Todd @ 2022-04-29T15:06 (+9)

Just wanted to add that at 80k we notice a lot of people who could benefit from these three things, even people who are pretty interested in EA. In fact, I'd say these three things are a pretty good summary of the main value-adds and aims of 80k's one-on-one team.

calebp @ 2022-04-25T23:48 (+8)

I really liked this post, one of the best things that I have read here in a while.

+1 that taking weird ideas seriously and considering wide action spaces are underrated.

Akash @ 2022-05-05T21:24 (+2)

Thank you, Caleb! 

Miranda_Zhang @ 2022-04-25T23:41 (+6)

Really enjoyed this post and the takeaways, which I thought were insightful and ~fairly novel (at least amongst EAG(x) reflections). I'm a big proponent of 3), and definitely think it can be useful to have things written up in advance of the conference, too. People may not be inclined to read them at the conference, but at least they'll have something to refer to afterwards!

Thanks for this, Akash!

Akash @ 2022-05-05T21:22 (+3)

Miranda, your FB profile & EA profile are great examples of #3 :) 

ChanaMessinger @ 2022-04-25T22:08 (+6)

This is great! Awesome work, Akash!

Akash @ 2022-05-05T21:21 (+2)

Thank you, Chana!

Evie Cottrell @ 2022-05-05T17:43 (+3)

I really really loved section 2 of this post!! It articulates a mindset shift that I think is important and valuable, and I've not seen it written out like that before. 

Akash @ 2022-05-05T21:21 (+2)

Thanks, Evie!