Ozzie Gooen's Quick takes

By Ozzie Gooen @ 2020-09-22T19:17 (+7)

Ozzie Gooen @ 2024-08-05T16:23 (+106)

I’ve heard multiple reports of people being denied jobs around AI policy because of their history in EA. I’ve also seen a lot of animosity against EA from top organizations I think are important - like A16Z, Founders Fund (Thiel), OpenAI, etc. I’d expect that it would be uncomfortable for EAs to apply to or work at these latter places at this point.

This is very frustrating to me.

First, it makes it much more difficult for EAs to collaborate with many organizations where these perspectives could be the most useful. I want to see more collaboration and cooperation - EAs not being welcome at many orgs makes this very difficult.

Second, it creates a massive incentive for people not to work in EA or on EA topics. If you know it will hurt your career, then you’re much less likely to do work here.

And a lighter third - it’s just really not fun to have a significant stigma associated with you. This means that many of the people I respect the most, and think are doing some of the most valuable work out there, will just have a much tougher time in life.

Who’s at fault here? I think the first big issue is that resistance gets created against all interesting and powerful groups. There are similar stigmas against people across the political spectrum, for example, at least among certain crowds. A big part of “talking about morality and important issues, while having something non-obvious to say” is being hated by a bunch of people. In this vein, arguably we should be aiming for a world where EA winds up with an even larger stigma.

But a lot clearly has to do with the decisions made by what seems like a few EAs. FTX hurt the most. I think the OpenAI board situation resulted in a lot of EA-paranoia, arguably with very little upside. More recently, I think that certain EA actions in AI policy are getting a lot of flak.

There was a brief window, pre-FTX-fail, where there was a very positive EA media push. I’ve seen almost nothing since. I think that “EA marketing” has been highly neglected, and that doesn’t seem to be changing.

Also, I suspect that the current EA AI policy arm could find ways to be more diplomatic and cooperative. When this arm upsets people, all of EA gets blamed.

My guess is that there are many other changes to do here too.

CEA is the obvious group to hypothetically be in charge of the EA parts of this. In practice, it seems like CEA has been very busy with post-FTX messes and leadership changes.

So I think CEA, as it becomes stable, could do a lot of good work making EA marketing work somehow. And I hope that the AI safety governance crowd can get better at not pissing off people. And hopefully, other EAs can figure out other ways to make things better and not worse.

If the above doesn’t happen, honestly, it could be worth it for EAs themselves to try to self-fund or coordinate efforts on this. The issue isn’t just one of “hurting long-term utility”, it’s one that just directly hurts EAs - so it could make a lot of sense for them to coordinate on improvements, even just in their personal interests.

NickLaing @ 2024-08-05T19:26 (+32)

On the positive front, I know it's early days, but GWWC have really impressed me with their well-produced, friendly yet honest public-facing stuff this year - maybe we can pick up on that momentum?

Also, EA for Christians is holding a British conference this year where Rory Stewart and the Archbishop of Canterbury (the biggest shot in the Anglican church) are headlining, which is a great collaboration with high-profile and well-respected mainstream Christian / Christian-adjacent figures.

Chris Leong @ 2024-08-06T07:10 (+2)

Any examples you wish to highlight?

NickLaing @ 2024-08-06T08:20 (+7)

I think in general their public-facing presentation and marketing seems a cut above any other EA org - happy to be proven wrong by other orgs which are doing a great job too. What I love is how they present their messages with such positivity, while still packing a real punch and not watering down their message. Check out their webpage and blog to see their work.

A few concrete examples:
- This great video, "How rich are you really?"
- Nice rebranding of the "Giving What We Can pledge" to the snappier and clearer "10% Pledge"
- The diamond symbol as a simple yet strong sign of people taking the pledge, both on the forum here and on LinkedIn
- An amazing LinkedIn push, with lots of people posting the diamond and explaining why they took the pledge. Many posts have been received really positively on my wall.

That's just what I've noticed.
 

OllieBase @ 2024-08-05T18:07 (+25)

(Jumping in for our busy comms/exec team) Understanding the status of the EA brand and working to improve it is a top priority for CEA :) We hope to share more work on this in future.

Ozzie Gooen @ 2024-08-06T02:54 (+2)

Thanks, good to hear! Looking forward to seeing progress here.

yanni kyriacos @ 2024-08-06T00:50 (+12)

I wrote a downvoted post recently about how we should be warning AI Safety talent about going into labs for personal branding reasons (I think there are other reasons not to join labs, but this is worth considering). 

I think people are still underweighting how much the public are going to hate labs in 1-3 years.

Evan_Gaensbauer @ 2024-08-06T06:21 (+2)

I was telling organizers with PauseAI, like Holly Elmore, that they should be emphasizing this more several months ago.

yanni kyriacos @ 2024-08-06T23:27 (+2)

I think from an advocacy standpoint it's worth testing that message, but based on how it's being received on the EAF, it might just bounce off people.

My instinct as to why people don't find it a compelling argument:

  1. They don't have short timelines like me, and therefore chuck it out completely
  2. Are struggling to imagine a hostile public response to 15% unemployment rates
  3. Copium 
Evan_Gaensbauer @ 2024-08-07T00:48 (+3)

At least at the time, Holly Elmore seemed to consider it at least somewhat compelling. I mentioned this was an argument I provided, framed in the context of movements like PauseAI--a more politicized, and less politically averse, coalition movement that includes at least one arm of AI safety among its constituent communities/movements, distinct from EA.

>They don't have short timelines like me, and therefore chuck it out completely

Among the most involved participants in PauseAI, presumably there may be rates of short-timeline estimates comparable to the rates of such estimates among effective altruists.

>Are struggling to imagine a hostile public response to 15% unemployment rates

Those in PauseAI and similar movements don't.

>Copium 

While I sympathize with and appreciate why there would be high rates of huffing copium among effective altruists (and adjacent communities, such as rationalists), others who have been picking up the slack effective altruists have dropped in the last couple of years are reacting differently. At least in terms of safeguarding humanity from both the near-term and long-term vicissitudes of advancing AI, humanity has deserved better than EA has been able to deliver. Many have given up hope that EA will ever rebound to the point that it can muster living up to the promise of at least trying to safeguard humanity. That includes both many former effective altruists and those who still are effective altruists. I consider there to still be that kind of 'hope' on a technical level, though on a gut level I don't have faith in EA. I definitely don't blame those who have any faith left in EA, let alone those who see hope in it.

Much of the difference here is the mindset towards 'people', and how they're modeled, between those still firmly planted in EA but somehow with a fatalistic mindset, and those who still care about AI safety but have decided to move on from EA. (I might be somewhere in between, though my perspective as a single individual among general trends is barely relevant.) The last couple of years have proven that effective altruists direly underestimated the public, and the latter group of people didn't. While many here on the EA Forum may not agree that much--or even most--of what movements like PauseAI are doing is as effective as it could or should be, they at least haven't succumbed to a plague of doomerism beyond what can seemingly even be justified.


To quote former effective altruist Kerry Vaughan, in a message addressed to those who still are effective altruists: "now is not the time for moral cowardice." There are some effective altruists who heeded that sort of call when it was being made. There are others who weren't effective altruists who heeded it too, when they saw most effective altruists had lost the will to even try picking up the ball again after they dropped it a couple times. New alliances between emotionally determined effective altruists and rationalists, and thousands of other people the EA community always underestimated, might from now on be carrying the team that is the global project of AI risk reduction--from narrow/near-term AI, to AGI/ASI. 

EA can still change, though either it has to go beyond self-reflection and just change already, or get used to no longer being team captain of AI Safety. 

JWS 🔸 @ 2024-08-05T20:38 (+11)

Very sorry to hear these reports, and was nodding along as I read the post.

If I can ask, how do they know EA affiliation was the reason? Is this an informal 'everyone knows' thing through policy networks in the US? Or direct feedback from the prospective employer that EA is a PR risk?

Of course, please don't share any personal information, but I think it's important for those in the community to be as aware as possible of where and why this happens if it is happening because of EA affiliation/history of people here.

(Feel free to DM me Ozzie if that's easier)

Ozzie Gooen @ 2024-08-06T02:56 (+15)

I'm thinking of around 5 cases. In around 2-3 of them they were told directly; in the others it was strongly inferred.

bruce @ 2024-08-06T03:18 (+6)

>I think that certain EA actions in AI policy are getting a lot of flak.

>Also, I suspect that the current EA AI policy arm could find ways to be more diplomatic and cooperative.

Would you be happy to expand on these points?

Ozzie Gooen @ 2024-08-06T04:35 (+10)

>I think that certain EA actions in AI policy are getting a lot of flak.

On Twitter, a lot of VCs and techies have ranted heavily about how much they dislike EAs. 


See this segment from Marc Andreessen, where he talks about the dangers of Eliezer and EA. Marc seems incredibly paranoid about the EA crowd now.
 
(Go to 1 hour, 11 minutes in for the key part. I tried linking to the timestamp, but couldn't get it to work in this editor after a few minutes of attempts.)


I also came across this transcript from Amjad Masad, CEO of Replit, on Tucker Carlson's show recently:
https://www.happyscribe.com/public/the-tucker-carlson-show/amjad-masad-the-cults-of-silicon-valley-woke-ai-and-tech-billionaires-turning-to-trump
 



[00:24:49]

Organized, yes. And so this starts with a mailing list. In the nineties there's a transhumanist mailing list called the Extropians. And these Extropians, I might have got the name wrong, Extropia or something like that, but they believe in the singularity. So the singularity is a moment in time where AI is progressing so fast, or technology in general is progressing so fast, that you can't predict what happens. It's self-evolving and it just. All bets are off. We're entering a new world where you.

[00:25:27]

Just can't predict it, where technology can't.

[00:25:29]

Be controlled, technology can't be controlled. It's going to remake everything. And those people believe that's a good thing, because the world now sucks so much and we are imperfect and unethical and all sorts of irrational whatever. And so they really wanted the singularity to happen. And there's this young guy on this list, his name's Eliezer Yudkowsky, and he claims he can write this AI, and he would write really long essays about how to build this AI. Suspiciously, he never really publishes code, and it's all just prose about how he's going to be able to build AI anyways. He's able to fundraise. They started this thing called the Singularity Institute. A lot of people were excited about the future, kind of invested in him. Peter Thiel, most famously. And he spent a few years trying to build an AI, again, never published code, never published any real progress. And then came out of it saying that not only can't you build AI, but if you build it, it will kill everyone. So he switched from being this optimist, singularity is great, to actually, AI will for sure kill everyone. And then he was like, okay, the reason I made this mistake is because I was irrational.

[00:26:49]

And the way to get people to understand that AI is going to kill everyone is to make them rational. So he started this blog called LessWrong, and LessWrong walks you through steps to becoming more rational. Look at your biases, examine yourself, sit down, meditate on all the irrational decisions you've made and try to correct them. And then they start this thing called the Center for Advanced Rationality or something like that. CFAR. And they're giving seminars about rationality, but.

[00:27:18]

A seminar about rationality, what's that like?

[00:27:22]

I've never been to one, but my guess would be they will talk about the biases, whatever, but they also have weird things, where they have this almost struggle-session-like thing called debugging. A lot of people wrote blog posts about how that was demeaning and it caused psychosis in some people. In 2017, in that community, there was collective psychosis. A lot of people were kind of going crazy. And this is all written about on the Internet.

[00:27:48]

So that would be kind of your classic cult technique where you have to strip yourself bare, like auditing in Scientology or. It's very common, yes.

[00:27:57]

Yeah.

[00:27:59]

It's a constant in cults.

[00:28:00]

Yes.

[00:28:01]

Is that what you're describing?

[00:28:02]

Yeah, I mean, that's what I read in these accounts. They will sit down and they will, like, audit your mind and tell you where you're wrong and all of that. And it caused people huge distress. Young guys all the time talk about how going into that community caused them huge distress. And there were, like, offshoots of this community where there were suicides, there were murders, there was a lot of really dark and deep shit. And the other thing is, they kind of teach you about rationality. They recruit you to AI risk, because if you're rational, you're a group. We're all rational now. We learned the art of rationality, and we agree that AI is going to kill everyone. Therefore, everyone outside of this group is wrong, and we have to protect them. AI is going to kill everyone. But also they believe other things. Like, they believe that polyamory is rational and everyone that.

[00:28:57]

Polyamory?

[00:28:57]

Yeah, you can have sex with multiple partners, essentially, but they think that's.

[00:29:03]

I mean, I think it's certainly a natural desire, if you're a man, to sleep with more, different women, for sure. But it's rational in what sense? Like, you've never met a happy, long-term polyamorous person, and I've known a lot of them, not a single one.

[00:29:21]

So it might be self-serving, you think, to recruit more impressionable.

[00:29:27]

People into it, and their hot girlfriends?

[00:29:29]

Yes.

[00:29:30]

Right. So that's rational.

[00:29:34]

Yeah, supposedly. And so they, you know, they convince each other of all this cult-like behavior. And the crazy thing is this group ends up being super influential, because they recruit a lot of people that are interested in AI. And the AI labs and the people who are starting these companies were reading all this stuff. So Elon famously read a lot of Nick Bostrom, kind of an adjacent figure to the rationalist community. He was part of the original mailing list. I think he would call himself part of the rationalist community. But he wrote a book about AI and how AI is going to kill everyone, essentially. I think he moderated his views more recently, but originally he was one of the people that were kind of banging the alarm. And the foundation of OpenAI was based on a lot of these fears. Elon had fears of AI killing everyone. He was afraid that Google was going to do that. And so this group of people, I don't think everyone at OpenAI really believed that. But some of the original founding story was that, and they were recruiting from that community so much.

[00:30:46]

So when Sam Altman got fired recently, he was fired by someone from that community, someone who started with effective altruism, which is another offshoot of that community, really. And so the AI labs are intermarried in a lot of ways with this community. And so, it ends up, they kind of borrowed a lot of their talking points. By the way, a lot of these companies are great companies now, and I think they're cleaning up house.

[00:31:17]

But there is, I mean, I'll just use the term. It sounds like a cult to me. Yeah, I mean, it has the hallmarks of it in your description. And can we just push a little deeper on what they believe? You say they are transhumanists.

[00:31:31]

Yes.

[00:31:31]

What is that?

[00:31:32]

Well, I think they're just unsatisfied with human nature, unsatisfied with the current ways we're constructed, and that we're irrational, we're unethical. And so they long for the world where we can become more rational, more ethical, by transforming ourselves, either by merging with AI via chips or what have you, changing our bodies and fixing fundamental issues that they perceive with humans via modifications and merging with machines.

[00:32:11]

It's just so interesting because. And so shallow and silly. Like a lot of those people I have known are not that smart, actually, because the best things, I mean, reason is important, and we should, in my view, given us by God. And it's really important. And being irrational is bad. On the other hand, the best things about people, their best impulses, are not rational.

[00:32:35]

I believe so, too.

[00:32:36]

There is no rational justification for giving something you need to another person.

[00:32:41]

Yes.

[00:32:42]

For spending an inordinate amount of time helping someone, for loving someone. Those are all irrational. Now, banging someone's hot girlfriend, I guess that's rational. But that's kind of the lowest impulse that we have, actually.

[00:32:53]

We'll wait till you hear about effective altruism. So they think our natural impulses that you just talked about are indeed irrational. And there's a guy, his name is Peter Singer, a philosopher from Australia.

[00:33:05]

The infanticide guy.

[00:33:07]

Yes.

[00:33:07]

He's so ethical. He's for killing children.

[00:33:09]

Yeah. I mean, so their philosophy is utilitarian. Utilitarianism is that you can calculate ethics and you can start to apply it, and you get into really weird territory. Like, you know, there's all these problems, all these thought experiments. Like, you know, you have two people at the hospital requiring some organs of another, third person that came in for a regular checkup, or they will die. Ethically, you're supposed to kill that guy, get his organs, and put them into the other two. And so it gets. I don't think people believe that, per se. I mean, but there's so many problems with that. There's another belief that they have.

[00:33:57]

But can I say that belief or that conclusion grows out of the core belief, which is that you're God. Like, a normal person realizes, sure, it would help more people if I killed that person and gave his organs to a number of people. Like, that's just a math question. True, but I'm not allowed to do that because I didn't create life. I don't have the power. I'm not allowed to make decisions like that because I'm just a silly human being who can't see the future and is not omnipotent because I'm not God. I feel like all of these conclusions stem from the misconception that people are gods.

[00:34:33]

Yes.

[00:34:34]

Does that sound right?

[00:34:34]

No, I agree. I mean, a lot of the. I think it's, you know, they're at roots. They're just fundamentally unsatisfied with humans and maybe perhaps hate, hate humans.

[00:34:50]

Well, they're deeply disappointed.

[00:34:52]

Yes.

[00:34:53]

I think that's such a. I've never heard anyone say that as well, that they're disappointed with human nature, they're disappointed with human condition, they're disappointed with people's flaws. And I feel like that's the. I mean, on one level, of course. I mean, you know, we should be better, but that, we used to call that judgment, which we're not allowed to do, by the way. That's just super judgy. Actually, what they're saying is, you know, you suck, and it's just a short hop from there to, you should be killed, I think. I mean, that's a total lack of love. Whereas a normal person, a loving person, says, you kind of suck. I kind of suck, too. But I love you anyway, and you love me anyway, and I'm grateful for your love. Right? That's right.

[00:35:35]

That's right. Well, they'll say, you suck. Join our rationality community. Have sex with us. So.

[00:35:43]

But can I just clarify? These aren't just like, you know, support staff at these companies? Like, are there?

[00:35:50]

So, you know, you've heard about SBF and FTX, of course.

[00:35:52]

Yeah.

[00:35:52]

They had what's called a polycule.

[00:35:54]

Yeah.

[00:35:55]

Right. They were all having sex with each other.

[00:35:58]

Given. Now, I just want to be super catty and shallow, but given some of the people they were having sex with, that was not rational. No rational person would do that. Come on now.

[00:36:08]

Yeah, that's true. Yeah. Well, so, you know. Yeah. What's even more disturbing, there's another ethical component to their philosophy called longtermism, and this comes from the effective altruist branch of rationality. What they think is, in the future, if we made the right steps, there's going to be a trillion humans, a trillion minds. They might not be humans, they might be AI, but they're going to be a trillion minds who can experience utility, who can experience good things, fun things, whatever. If you're a utilitarian, you have to put a lot of weight on that, and maybe you discount it, sort of like discounted cash flows. But you still, you know, have to posit that, you know, if there are trillions, perhaps many more people in the future, you need to value that very highly. Even if you discount it a lot, it ends up being valued very highly. So a lot of these communities end up all focusing on AI safety, because they think that AI, because they're rational. They arrived, and we can talk about their arguments in a second, they arrived at the conclusion that AI is going to kill everyone.

[00:37:24]

Therefore, effective altruists and the rationalist community, all these branches, they're all kind of focused on AI safety, because that's the most important thing, because we want a trillion people in the future to be great. But when you're assigning value that high, it's sort of a form of Pascal's wager. It is sort of. You can justify anything, including terrorism, including doing really bad things, if you're really convinced that AI is going to kill everyone and the future holds so much value, more value than any living human today has. You might justify really doing anything. And so built into that, it's a.

[00:38:15]

Dangerous framework, but it's the same framework of every genocidal movement from at least the French Revolution to the present: a glorious future justifies a bloody present.

[00:38:28]

Yes.

[00:38:30]

And look, I'm not accusing them of genocidal intent, by the way. I don't know them, but those ideas lead very quickly to the camps.

[00:38:37]

I feel kind of weird just talking about people, because generally I like to talk about ideas, but if they were just, like, a silly Berkeley cult or whatever, and they didn't have any real impact on the world, I wouldn't care about them. But what's happening is that they were able to convince a lot of billionaires of these ideas. I think Elon maybe changed his mind, but at some point he was convinced of these ideas. I don't know if he gave them money. I think there was a story at some point, Wall Street Journal, that he was thinking about it. But a lot of other billionaires gave them money, and now they're organized, and they're in DC lobbying for AI regulation. They're behind the AI regulation in California, and actually profiting from it. There was a story in Pirate Wires where the main sponsor behind SB 1047, Dan Hendrycks, started a company at the same time that certifies the safety of AI. And as part of the bill, it says that you have to get certified by a third party. So there's aspects of it that are kind of, let's profit from it.

[00:39:45]

By the way, this is all allegedly, based on this article. I don't know for sure. I think Senator Scott Wiener was trying to do the right thing with the bill, but he was listening to a lot of these cult members, let's call them, and they're very well organized. And also a lot of them still have connections to the big AI labs, and some of them work there, and they would want to create a situation where there's no competition in AI, regulatory capture, per se. I'm not saying that these are the direct motivations. A lot of them are true believers. But you might infiltrate this group and direct it in a way that benefits these corporations.

[00:40:32]

Yeah, well, I'm from DC, so I've seen a lot of instances where my bank account aligns with my beliefs. Thank heaven. Just kind of happens. It winds up that way. It's funny. Climate is the perfect example. There's never one climate solution that makes the person who proposes it poorer or less powerful.

Evan_Gaensbauer @ 2024-08-06T06:20 (+23)

To be fair to the CEO of Replit here, much of that transcript is essentially true, if mildly embellished. Many of the events or outcomes associated with EA or adjacent communities during their histories that should be most concerning to anyone, other than the FTX-related events, and for reasons beyond just PR concerns, can be and have been well-substantiated.

Habryka @ 2024-08-06T06:37 (+2)

My guess is this is obvious, but the "debugging" stuff seems as far as I can tell completely made up. 

I don't know of any story in which "debugging" was used in any kind of collective way. There was some Leverage-research adjacent stuff that kind of had some attributes like this, "CT-charting", which maybe is what it refers to, but that sure would be the wrong word, and I also don't think I've ever heard of any psychoses or anything related to that. 

The only in-person thing I've ever associated with "debugging" is when at CFAR workshops people were encouraged to create a "bugs-list", which was just a random list of problems in your life, and then throughout the workshop people paired with other people where they could choose any problem of their choosing, and work with their pairing partner on fixing it. No "auditing" or anything like that. 

I haven't read the whole transcript in-detail, but this section makes me skeptical of describing much of that transcript as "essentially true".

Sarah Levin @ 2024-08-06T19:37 (+15)

I have personally heard several CFAR employees and contractors use the word "debugging" to describe all psychological practices, including psychological practices done in large groups of community members. These group sessions were fairly common.

In that section of the transcript, the only part that looks false to me is the implication that there was widespread pressure to engage in these group psychology practices, rather than it just being an option that was around. I have heard from people in CFAR who were put under strong personal and professional pressure to engage in *one-on-one* psychological practices which they did not want to do, but these cases were all within the inner ring and AFAIK not widespread. I never heard any stories of people put under pressure to engage in *group* psychological practices they did not want to do.

Will Aldred @ 2024-08-06T18:35 (+10)

For what it’s worth, I was reminded of Jessica Taylor’s account of collective debugging and psychoses as I read that part of the transcript. (Rather than trying to quote pieces of Jessica’s account, I think it’s probably best that I just link to the whole thing as well as Scott Alexander’s response.)

titotal @ 2024-08-06T18:15 (+10)

I presume this account is their source for the debugging stuff, wherein an ex-member of the rationalist-adjacent Leverage Research described their experiences. They described the organization as having a "debugging culture", described as follows:

In the larger rationalist and adjacent community, I think it’s just a catch-all term for mental or cognitive practices aimed at deliberate self-improvement.

At Leverage, it was both more specific and more broad. In a debugging session, you’d be led through a series of questions or attentional instructions with goals like working through introspective blocks, processing traumatic memories, discovering the roots of internal conflict, “back-chaining” through your impulses to the deeper motivations at play, figuring out the roots of particular powerlessness-inducing beliefs, mapping out the structure of your beliefs, or explicating irrationalities.

and:

1. 2–6hr long group debugging sessions in which we as a sub-faction (Alignment Group) would attempt to articulate a “demon” which had infiltrated our psyches from one of the rival groups, its nature and effects, and get it out of our systems using debugging tools.

The podcast statements seem to be an embellished retelling of the contents of that blog post (and maybe of the allegations made by Scott Alexander in the comments of that post). I don't think describing them as "completely made up" is accurate.

Evan_Gaensbauer @ 2024-08-06T22:48 (+17)

Leverage was an EA-aligned organization, and also part of the rationality community (or at least 'rationalist-adjacent'), a decade or more ago. For Leverage to be affiliated with the mantle of either EA or the rationality community was always contentious. Within the short order of a couple of years, there were efforts from the side of EA (largely the CEA) and the side of the rationality community (largely CFAR) to shove Leverage out of both. Both EA and CFAR thus couldn't have said or done more, then or now, to disown and disavow Leverage's practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever. They have. To be clear, so has Leverage in its own way.

At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with--to put it bluntly--the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside. In time, Leverage came to take that in stride, as the break-up between Leverage and the rest of the institutional polycule that is EA/rationality was extremely mutual.

In short, the course of events, and the practices at Leverage that led to them, as presented by Zoe Curzi and others a few years ago, from the period circa 2018 to 2022, can scarcely be attributed to either the rationality or EA communities. That's a consensus EA, Leverage, and the rationality community agree on--one of the few things they still agree on at all.

AnonymousEAForumAccount @ 2024-08-08T23:22 (+18)

From the side of EA, the CEA, and the side of the rationality community, largely CFAR, Leverage faced efforts to be shoved out of both within a short order of a couple of years. Both EA and CFAR thus couldn't have then, and couldn't now, say or do more to disown and disavow Leverage's practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever…

At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with--to put it bluntly--the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside. 

 

While I’m not claiming that “practices at Leverage” should be “attributed to either the rationality or EA communities”, or to CEA, the take above is demonstrably false. CEA definitely could have done more to “disown and disavow Leverage’s practices” and also reneged on commitments that would have helped other EAs learn about problems with Leverage. 

Circa 2018 CEA was literally supporting Leverage/Paradigm on an EA community building strategy event. In August 2018 (right in the middle of the 2017-2019 period at Leverage that Zoe Curzi described in her post), CEA supported and participated in an “EA Summit” that was incubated by Paradigm Academy (intimately associated with Leverage). “Three CEA staff members attended the conference” and the keynote was delivered by a senior CEA staff member (Kerry Vaughan). Tara MacAulay, who was CEO of CEA until stepping down less than a year before the summit to co-found Alameda Research, personally helped fund the summit.

At the time, “the fact that Paradigm incubated the Summit and Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community.” To address those concerns, Kerry committed to “address this in a separate post in the near future.” This commitment was subsequently dropped with no explanation other than “We decided not to work on this post at this time.”

This whole affair was reminiscent of CEA’s actions around the 2016 Pareto Fellowship, a CEA program where ~20 fellows lived in the Leverage house (which they weren’t told about beforehand), “training was mostly based on Leverage ideas”, and “some of the content was taught by Leverage staff and some by CEA staff who were very 'in Leverage's orbit'.” When CEA was fundraising at the end of that year, a community member mentioned that they’d heard rumors about a lack of professionalism at Pareto. CEA staff replied, on multiple occasions, that “a detailed review of the Pareto Fellowship is forthcoming.” This review was never produced. 

Several years later, details emerged about Pareto’s interview process (which nearly 500 applicants went through) that confirmed the rumors about unprofessional behavior. One participant described it as “one of the strangest, most uncomfortable experiences I've had over several years of being involved in EA…  It seemed like unscientific, crackpot psychology…  it felt extremely cultish… The experience left me feeling humiliated and manipulated.” 

I’ll also note that CEA eventually added a section to its mistakes page about Leverage, but not until 2022, and only after Zoe had published her posts and a commenter on Less Wrong explicitly asked why the mistakes page didn’t mention Leverage’s involvement in the Pareto Fellowship. The mistakes page now acknowledges other aspects of the Leverage/CEA relationship, including that Leverage had “a table at the careers fair at EA Global several times.” Notably, CEA has never publicly stated that working with Leverage was a mistake or that Leverage is problematic in any way.

The problems at Leverage were Leverage’s fault, not CEA’s. But CEA could have, and should have, done more to distance EA from Leverage.

Ozzie Gooen @ 2024-08-09T22:27 (+4)

Quick point - I think the relationship between CEA and Leverage was pretty complicated during a lot of this period.

There was typically a large segment of EAs who were suspicious of Leverage, ever since their founding. But Leverage did collaborate with EAs on some specific things early on (like the first EA Summit). It felt like an uncomfortable alliance type situation. If you go back through the forum / LessWrong, you can find artifacts of this.

I think the period of 2018 or so was unusual. This was a period where a few powerful people at CEA (Kerry, Larissa) were unusually pro-Leverage and came into power fairly quickly (Tara left, somewhat suddenly). I think there was a lot of tension around this, and when they left (I think this period lasted around a year), CEA became much less collaborative with Leverage.

One way to square this a bit is that CEA was just not very powerful for a long time (arguably, its periods of "having real ability/agency to do new things" have been very limited). There were periods where Leverage had more employees than CEA (I'm pretty sure). The fact that CEA went through so many different leaders, each with different stances and strategies, makes it more confusing to look back on.

I would really love for a decent journalist to do a long story on this history, I think it's pretty interesting.

Habryka @ 2024-08-06T21:03 (+2)

Huh, yeah, that sure refers to those as "debugging". I've never really heard Leverage people use those words, but Leverage 1.0 was a quite insular and weird place towards the end of its existence, so I must have missed it. 

I think it's kind of reasonable to use Leverage as evidence that people in the EA and Rationality community are kind of crazy and have indeed updated on the quotes being more grounded (though I also feel frustration with people equivocating between EA, Rationality and Leverage).

(Relatedly, I don't particularly love you calling Leverage "rationalist" especially in a context where I kind of get the sense you are trying to contrast it with "EA". Leverage has historically been much more connected to the EA community, and indeed had almost successfully taken over CEA leadership in ~2019, though IDK, I also don't want to be too policing with language here)

ChanaMessinger @ 2024-08-06T17:38 (+10)

I think it might describe how some people experienced internal double cruxing. I wouldn't be that surprised if some people also found the "debugging" frame in general to give too much agency to others relative to themselves; I feel like I've heard that discussed.

Habryka @ 2024-08-06T21:06 (+2)

Based on the things titotal said, seems like it very likely refers to some Leverage stuff, which I feel a bit bad about seeing equivocated with the rest of the ecosystem, but also seems kind of fair. And the Zoe Curzi post sure uses the term "debugging" for those sessions (while also clarifying that the rest of the rationality community doesn't use the term that way, but they sure seemed to)

Evan_Gaensbauer @ 2024-08-06T22:20 (+2)

I wouldn't and didn't describe that section of the transcript, as a whole, as essentially true. I said much of it is. As the CEO might've learned from Tucker Carlson, who in turn learned from FOX News, we should seek to be 'fair and balanced.'

As to the debugging part, that's an exaggeration that must have come out the other side of a game of broken telephone on the internet. It seems that on the other side of that telephone line would've been some criticisms or callouts I read years ago of some activities happening in or around CFAR. I don't recollect them in super-duper precise detail right now, nor do I have the time today to spend an hour or more digging them up on the internet.

As for the perhaps wrongheaded practices introduced into CFAR workshops for a period of time, other than the ones from Leverage Research, I believe the rest were introduced by Valentine (e.g., 'againstness,' etc.). As far as I'm aware, at least as it was applied at one time, some past iterations of Connection Theory bore at least a superficial resemblance to some aspects of 'auditing' as practiced by Scientologists.

As to perhaps even riskier practices, I mean they happened not "in" but "around" CFAR in the sense of not officially happening under the auspices of CFAR, or being formally condoned by them, though they occurred within the CFAR alumni community and the Bay Area rationality community. It's murky, though there was conduct in the lives of private individuals that CFAR informally enabled or emboldened, and could've/should've done more to prevent. For the record, I'm aware CFAR has effectively admitted those past mistakes, so I don't want to belabor any point of moral culpability beyond what has been drawn out to death on LessWrong years ago.

Anyway, activities that occurred among rationalists in the social network in CFAR's orbit, that arguably rose to the level of triggering behaviour comparable in extremity to psychosis, include 'dark arts' rationality and some of the edgier experiments of post-rationalists. That includes some memes spread and behaviours induced in some rationalists by Michael Vassar, Brent Dill, etc.

To be fair, I'm aware much of that was a result not of spooky, pseudo-rationality techniques, but some unwitting rationalists being effectively bullied into taking wildly mind-altering drugs, as guinea pigs in some uncontrolled DIY experiment. While responsibility for these latter outcomes may not be as attributable to CFAR, they can be fairly attributed to some past mistakes of the rationality community, albeit on a vague, semi-collective level.

RedStateBlueState @ 2024-08-06T05:03 (+3)

I think it's worth noting that the two examples you point to are right-wing, which the vast majority of Silicon Valley is not. Right-wing tech people likely have higher influence in DC, so that's not to say they're irrelevant, but I don't think they're representative of Silicon Valley as a whole.

Ozzie Gooen @ 2024-08-07T13:54 (+2)

I think Garry Tan is more left-wing, but I'm not sure. A lot of the e/acc community fights with EA, and my impression is that many of them are leftists.

I think that the right-wing techies are often the loudest, but there are also lefties in this camp too. 

(Honestly though, the right-wing techies and left-wing techies often share many of the same policy ideas. But they seem to disagree on Trump and a few other narrow things. Many of the recent Trump-aligned techies used to be more left-coded.)

Ozzie Gooen @ 2024-08-07T13:50 (+2)

Random Tweet from today: https://x.com/garrytan/status/1820997176136495167

Garry Tan is the head of YCombinator, which is basically the most important/influential tech incubator out there. Around 8 years back, relations were much better, and 80k and CEA actually went through YCombinator.

I'd flag that Garry specifically is kind of wacky on Twitter, compared to previous heads of YC. So I definitely am not saying it's "EA's fault" - I'm just flagging that there is a stigma here. 

I personally would be much more hesitant to apply to YC knowing this, and I'd expect YC would be less inclined to bring in AI safety folk and likely EAs. 

Rebecca @ 2024-08-07T19:57 (+5)

I find it very difficult psychologically to take someone seriously if they use the word ‘decels’.

JWS 🔸 @ 2024-08-07T14:44 (+3)

Random Tweet from today: https://x.com/garrytan/status/1820997176136495167

Want to say that I called this ~9 months ago.[1]

I will re-iterate that clashes of ideas/worldviews[2] are not settled by sitting them out and doing nothing, since they can be waged unilaterally.

  1. ^

    Especially if you look at the various other QTs about this video across that side of Twitter

  2. ^

    Or 'memetic wars', YMMV

Ozzie Gooen @ 2024-08-06T04:54 (+4)

Also, I suspect that the current EA AI policy arm could find ways to be more diplomatic and cooperative.

My impression is that the current EA AI policy arm isn't having much active dialogue with the VC community and the like. I see Twitter spats that look pretty ugly, I suspect that this relationship could be improved on with more work.

At a higher level, I suspect that there could be a fair bit of policy work that both EAs and many of these VCs and others would be more okay with than what is currently being pushed. My impression is that we should be focused on narrow subsets of risks that matter a lot to EAs, but don't matter much to others, so we can essentially trade and come out better than we are now. 

Chris Leong @ 2024-08-06T09:53 (+3)

My impression is that we should be focused on narrow subsets of risks that matter a lot to EAs, but don't matter much to others, so we can essentially trade and come out better than we are now.


That seems like the wrong play to me. We need to be focused on achieving good outcomes and not being popular.

Ozzie Gooen @ 2024-08-07T03:37 (+6)

My personal take is that there are a bunch of better trade-offs between the two that we could be making. I think that the narrow subset of risks is where most of the value is, so from that standpoint, that could be a good trade-off. 

Ozzie Gooen @ 2024-06-25T00:46 (+71)

I'm nervous that the EA Forum might be playing only a small role in x-risk and some high-level prioritization work.
- Very little biorisk content here, perhaps because of info-hazards.
- Little technical AI safety work here, in part because that's more for LessWrong / Alignment Forum.
- Little AI governance work here, for whatever reason.
- Not many innovative, big-picture longtermist prioritization projects happening at the moment, from what I understand. 
- The cause of "EA community building" seems fairly stable, with not much bold/controversial experimentation, from what I can tell.
- Fairly few updates / discussion from grantmakers. OP is really the dominant one, and doesn't publish too much, particularly about their grantmaking strategies and findings.

It's been feeling pretty quiet here recently, for my interests. I think some important threads are now happening in private slack / in-person conversations or just not happening. 

Ryan Greenblatt @ 2024-06-26T03:44 (+27)

I don't comment or post much on the EA forum because the quality of discourse on the EA Forum typically seems mediocre at best. This is especially true for x-risk.

I think this has been true for a while.

MathiasKB @ 2024-06-28T10:33 (+10)

Any ideas for what we can do to improve it?

The whole Manifund debacle has left me quite demotivated. It really sucks that people are more interested in debating contentious community drama than seemingly anything else this forum has to offer.

NickLaing @ 2024-06-28T12:21 (+6)

Thanks for the reminder, definitely got sucked in too much myself....

Will get back to commenting more on GHD posts and write another of my own soon!

Matt Brooks @ 2024-08-10T20:17 (+1)

What's the "whole manifund debacle"? People complaining about Curtis Yarvin or something?

MathiasKB🔸 @ 2024-08-10T20:19 (+2)

https://forum.effectivealtruism.org/posts/34pz6ni3muwPnenLS/why-so-many-racists-at-manifest

Ryan Greenblatt @ 2024-06-28T18:28 (+1)

I think there are signal vs. noise tradeoffs, so I'm naively tempted to retreat toward more exclusivity.

This poses costs of its own, so maybe I'd be in favor of differentiation (a more exclusive and a less exclusive version).

Low confidence in this being good overall.

Vasco Grilo🔸 @ 2024-08-11T08:44 (+4)

Hi Ryan,

Could you share a few examples of what you consider good quality EA Forum posts? Do you think the content linked on the EA Forum Digest also "typically seems mediocre at best"?

Jeff Kaufman @ 2024-06-26T18:32 (+10)

Very little biorisk content here, perhaps because of info-hazards.

When I write biorisk-related things publicly I'm usually pretty unsure of whether the Forum is a good place for them. Not because of info-hazards, since that would gate things at an earlier stage, but because they feel like they're of interest to too small a fraction of people. For example, I could plausibly have posted Quick Thoughts on Our First Sampling Run or some of my other posts from https://data.securebio.org/jefftk-notebook/ here, but that felt a bit noisy?

It also doesn't help that detailed technical content gets much less attention than meta or community content.  For example, three days ago I wrote a comment on @Conrad K.'s thoughtful Three Reasons Early Detection Interventions Are Not Obviously Cost-Effective, and while I feel like it's a solid contribution only four people have voted on it.  On the other hand, if you look over my recent post history at my comments on Manifest, far less objectively important comments have ~10x the karma. Similarly the top level post was sitting at +41 until Mike bumped it last week, which wasn't even high enough that (before I changed my personal settings to boost biosecurity-tagged posts) I saw it when it came out.  I see why this happens--there are a lot more people with the background to engage on a community topic or even a general "good news" post--but it still doesn't make me as excited to contribute on technical things here.

Raemon @ 2024-06-26T19:58 (+23)

I'm with Ozzie here. I think EA Forum would do better with more technical content even if it's hard for most people to engage with. 

Ozzie Gooen @ 2024-06-26T19:19 (+10)

I'd be excited to have discussions of those posts here!

A lot of my more technical posts also get very little attention - I also find that pretty unmotivating. It can be quite frustrating when clearly lower-quality content on controversial stuff gets a lot more attention.

But this seems like a doom loop to me.  I care much more about strong technical content, even if I don't always read it, than I do most of the community drama. I'm sure most leaders and funders feel similarly. 

Extended far enough, the EA Forum will be a place only for controversial community drama. This seems nightmarish to me. I imagine most forum members would agree. 

I imagine that there are things the Forum or community can do to bring more attention to the more technical posts. 

Jeff Kaufman @ 2024-06-27T14:34 (+4)

Here you go: Detecting Genetically Engineered Viruses With Metagenomic Sequencing

But this was already something I was going to put on the Forum ;)

Vaidehi Agarwalla @ 2024-06-25T06:05 (+9)

I wonder if the forum is even a good place for a lot of these discussions? Feels like they need some combination of safety / shared context, expertise, gatekeeping etc?

Ozzie Gooen @ 2024-06-25T17:50 (+7)

If it's not, there is a question of what the EA Forum's comparative advantage will be in the future, and what is a good place for these discussions.

Personally, I think this forum could be good for at least some of this, but I'm not sure.

Seth Ariel Green @ 2024-06-26T13:49 (+9)

Three use cases come to mind for the forum:

  • establishing a reputation in writing as a person who can follow good argumentative norms (perhaps as a kind of extended courtship of EA jobs/orgs)
  • disseminating findings that are mainly meant for other forums, e.g. research reports
  • keeping track of what the community at large is thinking about/working on, which is mostly facilitated by organizations like RP & GiveWell using the forum to share their work.

I don’t think I would use the forum for hashing out anything I was really thinking hard about; I’d probably have in-person conversations or email particular persons.

JP Addison @ 2024-06-25T21:01 (+7)

I don't know about you but I just learned about one of the biggest updates to OP's grantmaking in a year on the Forum.

That said, the data does show some agreement with your and commenters' vibe of lowering quantity.

I agree that the Forum could be a good place for a lot of these discussions. Some of them aren't happening at all to my knowledge.[1] Some of those should be, and should be discussed on the Forum. Others are happening in private and that's rational, although you may be able to guess that my biased view is that a lot more should be public, and if they were, should be posted on the Forum.

Broadly: I'm quite bullish on the EA community as a vehicle for working on the world's most pressing problems, and of open online discussion as a piece of our collective progress. And I don't know of a better open place on the internet for EAs to gather.

  1. ^

    Part of that might be because as EA gets older the temperature (in the annealing sense) rationally lowers.

Ozzie Gooen @ 2024-06-25T21:09 (+6)

Yep - I liked the discussion in that post a lot, but the actual post seemed fairly minimal, and written primarily outside of the EA Forum (it was a link post, and the actual post was 320 words total).

For those working on the forum, I'd suggest working on bringing more of these threads into the forum. Maybe reach out to some of the leaders in each group and see how to change things.

I think that AI policy in particular is most ripe for better infrastructure (there's a lot of work happening, but no common public forums, from what I know), though it probably makes sense to be separate from the EA Forum (maybe like the Alignment Forum), because a lot of them don't want to be associated too much with EA, for policy reasons. 

I know less about Bio governance, but would strongly assume that a whole lot of it isn't infohazardous. That's definitely a field that's active and growing. 

For foundational EA work / grant discussions / community strategy, I think we might just need more content in the first place, or something. 

I assume that AI alignment is well-handled by LessWrong / Alignment Forum, so it seems difficult and less important to push for it to happen here.

Nathan Young @ 2024-06-25T15:04 (+4)

So I did use to do more sort of back-of-the-envelope stuff, but it didn't get much traction and people seemed to think it was unfinished (it was), so I guess I had less enthusiasm.

NickLaing @ 2024-06-25T05:10 (+4)

Yeah even on the global health front the last 3 months or so have felt especially quiet

Vaidehi Agarwalla @ 2024-06-25T06:05 (+2)

Curious if you think there was good discussion before that and could point me to any particularly good posts or conversations?

NickLaing @ 2024-06-25T07:25 (+5)

There are still a bunch of good discussions (see mostly posts with 10+ comments) in the last 6 months or so; it's just that we can sometimes go a week or two without more than one or two ongoing serious GHD chats. Maybe I'm wrong and there hasn't actually been much (or any) meaningful change in activity this year.

https://forum.effectivealtruism.org/?tab=global-health-and-development

Tristan Williams @ 2024-06-28T16:57 (+3)

As a random datapoint, I'm only just getting into the AI Governance space, but I've found little engagement with (some) (of[1]) (the) (resources) I've shared and have just sort of updated to think this is either not the space for it or I'm just not yet knowledgeable enough about what would be valuable to others. 

 

  1. ^

    I was especially disappointed with this one, because this was a project I worked on with a team for some time, and I still think it's quite promising, but it didn't receive the proportional engagement I would have hoped for. Given I optimized some of the project for putting out this bit of research specifically, I wouldn't do the same now and would have instead focused on other parts of the project. 

Ozzie Gooen @ 2024-06-26T17:58 (+2)

It seems from the comments that there's a chance that much of this is just timing - i.e. right now is unusually quiet. It's roughly mid-year; maybe people are on vacation or something, it's hard to tell.

I think that this is partially true. I'm not interested in bringing up this point to upset people, but rather to flag that maybe there could be good ways of improving this (which I think is possible!)

Ozzie Gooen @ 2023-08-16T00:41 (+58)

Personal reflections on self-worth and EA

My sense of self-worth often comes from guessing what people I respect think of me and my work. 

In EA... this is precarious. The most obvious people to listen to are the senior/powerful EAs.

In my experience, many senior/powerful EAs I know:
1. Are very focused on specific domains.
2. Are extremely busy.
3. Have substantial privileges (exceptionally intelligent, stable health, esteemed education, affluent/ intellectual backgrounds.)
4. Display limited social empathy (ability to read and respond to the emotions of others)
5. Sometimes might actively try not to sympathize/empathize with many people, because they are judging them for grants, and don't want to be biased. (I suspect this is the case for grantmakers.) 
6. Are not that interested in acting as a coach/mentor/evaluator to people outside their key areas/organizations.
7. Don't intend or want others to care too much about what they think outside of cause-specific promotion and a few pet ideas they want to advance.

A parallel can be drawn with the world of sports. Top athletes can make poor coaches. Their innate talent and advantages often leave them detached from the experiences of others. I'm reminded of David Foster Wallace's How Tracy Austin Broke My Heart.

If you're a tennis player, tying your self-worth to what Roger Federer thinks of you is not wise. Top athletes are often egotistical, narrow-minded, and ambivalent to others. This sort of makes sense by design - to become a top athlete, you often have to obsess over your own abilities to an unnatural extent for a very long period.

Good managers are sometimes said to be better as coaches than they are as direct contributors. In EA, I think those in charge seem more like "top individual contributors and researchers" than "top managers." Many actively dislike management or claim that they're not doing management. (I believe funders typically don't see their work as "management", which might be very reasonable.)

But that said, even a good class of managers wouldn't fully solve the self-worth issue. Tying your self-worth too much to your boss can be dangerous - your boss already has much power and control over you, so adding your self-worth to the mix seems extra precarious.

I think if I were to ask any senior EA I know, "Should I tie my self-worth with your opinion of me?" they would say something like,

"Are you insane? I barely know you or your work. I can't at all afford the time to evaluate your life and work enough to form an opinion that I'd suggest you take really seriously."

They have enough problems - they don't want to additionally worry about others trying to use them as judges of personal value.

But this raises the question, Who, if anyone, should I trust to inform my self-worth?

Navigating intellectual and rationalist literature, I've grown skeptical of many other potential evaluators. Self-judgment carries inherent bias and ability to Goodhart. Many "personal coaches" and even "executive coaches" raise my epistemic alarm bells. Friends, family, and people who are "more junior" come with different substantial biases.

Some favored options are "friends of a similar professional class who could provide long-standing perspective" and "professional coaches/therapists/advisors."

I’m not satisfied with any obvious options here. I think my next obvious move forward is to acknowledge that my current situation seems subpar and continue reflecting on this topic. I've dug into the literature a bit but haven't yet found answers I find compelling.

Joseph Lemien @ 2023-08-16T13:21 (+22)

Who, if anyone, should I trust to inform my self-worth?

My initial thought is that it is pretty risky/tricky/dangerous to depend on external things for a sense of self-worth? I know that I certainly am very far away from an Epictetus-like extreme, but I try to not depend on the perspectives of other people for my self-worth. (This is aspirational, of course. A breakup or a job loss or a person I like telling me they don't like me will hurt and I'll feel bad for a while.)

A simplistic little thought experiment I've fiddled with: if I went to a new place where I didn't know anyone and just started over, then what? Nobody knows you, and your social circle starts from scratch. That doesn't mean that you don't have worth as a human being (although it might mean that you don't have any worth in the 'economic' sense of other people wanting you, which is very different).

There might also be an intrinsic/extrinsic angle to this. If you evaluate yourself based on accomplishments, outputs, achievements, and so on, that has a very different feeling than the deep contentment of being okay as you are.

In another comment Austin mentions revenue and funding, but that seems to be a measure of things VERY different from a sense of self-worth (although I recognize that there are influential parts of society in which wealth or career success is seen as the proxies for worth). In favorable market conditions I have high self worth?

I would roughly agree with your idea of "trying not to tie my emotional state to my track record." 

Vanessa @ 2023-08-17T04:50 (+12)

I can relate, as someone who also struggles with self-worth issues. However, my sense of self-worth is tied primarily to how many people seem to like me / care about me / want to befriend me, rather than to what "senior EAs" think about my work.

I think that the framing "what is the objectively correct way to determine my self-worth" is counterproductive. Every person has worth by virtue of being a person. (Even if I find it much easier to apply this maxim to others than to myself.) 

IMO you should be thinking about things like, how to do better work, but in the frame of "this is something I enjoy / consider important" rather than in the frame of "because otherwise I'm not worthy". It's also legitimate to want other people to appreciate and respect you for your work (I definitely have a strong desire for that), but IMO here also the right frame is "this is something I want" rather than "this is something that's necessary for me to be worth something".

EdoArad @ 2023-08-16T08:28 (+9)

It's funny, I think you'd definitely be in the list of people I respect and care about their opinion of me. I think it's just imposter syndrome all the way up.

Personally, one thing that seemed to work a bit for me is to find peers which I highly appreciate and respect and schedule weekly calls with them to help me prioritize and focus, and give me feedback. 

Austin @ 2023-08-16T02:59 (+6)

A few possibilities from startup land:

  • derive worth from how helpful your users find your product
  • chase numbers! usage, revenue, funding, impact, etc. Sam Altman has a line like "focus on adding another 0 to your success metric"
  • the intrinsic sense of having built something cool
Patrick Gruban @ 2023-08-16T05:26 (+9)

After transitioning from for-profit entrepreneurship to co-leading a non-profit in the effective altruism space, I struggle to identify clear metrics to optimize for. Funding is a potential metric, but it is unreliable due to fluctuations in donors' interests. The success of individual programs, such as user engagement with free products or services, may not accurately reflect their impact compared to other potential initiatives. Furthermore, creating something impressive doesn't necessarily mean it's useful. 

Lacking a solid impact evaluation model, I find myself defaulting to measuring success by hours worked, despite recognizing the diminishing returns and increased burnout risk this approach entails.

sphor @ 2023-08-17T01:44 (+5)

This is brave of you to share. It sounds like there are a few related issues going on. I have a few thoughts that may or may not be helpful:

  1. Firstly, you want to do well and improve in your work, and you want some feedback on that from people who are informed and have good judgment. The obvious candidates in the EA ecosystem are people who actually aren't well suited to give this feedback to you. This is tough. I don't have any advice to give you here. 
  2. However it also sounds like there are some therapeutic issues at play. You mention therapists as a favored option but one you're not satisfied with and I'm wondering why? Personally I suspect that making progress on any therapeutic issues that may be at play may also end up helping with the professional feedback problem. 
  3. I think you've unfairly dismissed the best option as to who you can trust: yourself. That you have biases and flaws is not an argument against trusting yourself because everyone and everything has biases and flaws! Which person or AI are you going to find that doesn't have some inherent bias or ability to Goodhart?
Sam_Coggins @ 2023-08-24T05:04 (+4)

Five reasons why I think it's unhelpful connecting our intrinsic worth to our instrumental worth (or anything aside from being conscious beings):

  1. Undermines care for others (and ourselves): chickens have limited instrumental worth and often do morally questionable things. I still reckon chickens and their suffering are worthy of care. (And same argument for human babies, disabled people and myself)
  2. Constrains effective work: continually assessing our self-worth can be exhausting (leaving less time/attention/energy for actually doing helpful work). For example, it can be difficult to calmly take on constructive feedback (on our work, or on our instrumental strengths or weaknesses) when our self-worth is on the line.
  3. Constrains our personal wellbeing and relationships: I've personally found it hard to enjoy life when continuously questioning my self-worth and feeling guilty/shameful when the answer seems negative
  4. Very hard to answer: including because the assessment may need to be continuously updated based on the new evidence from each new second of our lives
  5. Seems pointless to answer (to me): how would accurately measuring our self-worth (against a questionable benchmark) make things better? We could live in a world where all beings are ranked so that more 'worthy' beings can appropriately feel superior, and less 'worthy' beings can appropriately feel 'not enough'. This world doesn't seem great from my perspective.

Despite thinking these things, I often unintentionally get caught up muddling my self-worth with my instrumental worth (I can relate to the post and comments on here!). I've found 'mindful self-compassion' super helpful for doing less of this.

Ben_West @ 2023-08-16T17:47 (+4)

This is an interesting post and seems basically right to me, thanks for sharing.

Patrick Gruban @ 2023-08-16T05:12 (+4)

Thank you, this very much resonates with me

Ozzie Gooen @ 2023-08-16T00:41 (+4)

The most obvious moves, to me, eventually, are to either be intensely neutral (as in, trying not to tie my emotional state to my track record), or to iterate on using AI to help here (futuristic and potentially dangerous, but with other nice properties).

EdoArad @ 2023-08-16T08:07 (+2)

How would you use AI here?

Ozzie Gooen @ 2023-08-16T12:24 (+2)

A very simple example is, "Feed a log of your activity into an LLM with a good prompt, and have it respond with assessments of how well you're doing vs. your potential at the time, and where/how you can improve." You'd be free to argue points or whatever. 
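A minimal sketch of what this could look like, assuming a hypothetical log format and prompt wording (the function and entries below are illustrative, not a real tool):

```python
# Hypothetical sketch: build a self-review prompt from a daily activity log,
# which could then be sent to any chat-completion LLM API.

def build_review_prompt(log_entries):
    """Turn (time, activity) pairs into a prompt asking an LLM to assess
    performance against potential and suggest improvements."""
    log_text = "\n".join(f"{time}: {activity}" for time, activity in log_entries)
    return (
        "Below is a log of my activity for today.\n"
        "Assess how well I did versus my realistic potential at the time, "
        "and suggest where and how I could improve.\n\n"
        f"{log_text}"
    )

entries = [
    ("09:00", "Reviewed grant applications"),
    ("11:30", "Wrote draft of forum post"),
    ("14:00", "Browsed news for an hour"),
]
prompt = build_review_prompt(entries)
```

The LLM's response could then be argued with or iterated on, as described above.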

Joseph Lemien @ 2023-08-16T13:36 (+7)

Reading this comment makes me think that you are basing your self-worth on your work output. I don't have anything concrete to point to, but I suspect that this might have negative effects on happiness, and that being less outcome dependent will tend to result in a better emotional state.

EdoArad @ 2023-08-16T19:23 (+4)

That's cool. I had the thought of developing a "personal manager" for myself of some form for roughly similar purposes

Ozzie Gooen @ 2024-03-24T22:06 (+37)

(This is a draft I wrote in December 2021. I didn't finish+publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-chatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI.

Then, when they raised money in 2019, they included a clause capping investors' returns at 100x their investment.

"Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says,

"We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission on the charter:

"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

  1. Make AGI
  2. Turn AGI into huge profits
  3. Give 100x returns to investors
  4. Dominate much (most?) of the economy, have all profits go to the OpenAI Nonprofit
  5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I’m also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, militaries.

I imagine that the 100x return cap means that the excess earnings would go to the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI).
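To make the cap arithmetic concrete, here is an illustrative split (the dollar figures are made up for the example; the real cap terms vary by funding round):

```python
# Illustrative arithmetic for a 100x return cap (numbers are hypothetical).

def split_returns(investment, gross_return, cap_multiple=100):
    """Split gross returns between the investor (capped at cap_multiple x
    the original investment) and the nonprofit (everything above the cap)."""
    capped = min(gross_return, investment * cap_multiple)
    excess = max(0, gross_return - capped)
    return capped, excess

# A $10M investment that somehow returned $5B in gross value:
investor_share, nonprofit_share = split_returns(10e6, 5e9)
# investor_share = $1B (100 * $10M); nonprofit_share = $4B
```

The larger the eventual returns, the more extreme this split becomes, which is why who controls the nonprofit matters so much.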

This would be a massive power gain for a small subset of people.

If DeepMind makes AGI I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be. (The Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it’s probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors.

But I'm sort of surprised that so few other people seem even a bit concerned and curious about the proposal. My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident. 


(Aside on the details of Step 5)
I would love more information on Step 5, but I don’t blame OpenAI for not providing it.

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable Step 5 would look like (one that wouldn't upset any group too much if they were to understand it). There's been previous academic work on a "windfall clause"[5] (their 100x cap would basically count), so better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/

[2] https://news.ycombinator.com/item?id=19360709

[3] https://openai.com/charter/
[4] This was titled a “decisive strategic advantage” in the book Superintelligence by Nick Bostrom

[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/


Also, see:
https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html
"Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now."

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm

https://moores.samaltman.com/

https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/

Ozzie Gooen @ 2024-12-01T21:45 (+36)

Around EA Priorities:

Personally, I feel fairly strongly convinced to favor interventions that could help the future past 20 years from now. (A much lighter version of "Longtermism").

If I had a budget of $10B, I'd probably donate a fair bit to some existing AI safety groups. But it's tricky to know what to do with, say, $10k. And the fact that the SFF, OP, and others have funded some of the clearest wins makes it harder to know what's exciting on-the-margin.

I feel incredibly unsatisfied with the public EA dialogue around AI safety strategy right now. From what I can tell, there's some intelligent conversation happening among a handful of people at the Constellation coworking space, but little of it is publicly visible. I think many people outside of Constellation are working with simplified models, like "AI is generally dangerous, so we should slow it all down," as opposed to something like, "Really, there are three scary narrow scenarios we need to worry about."

I recently spent a week in DC and found it interesting. But my impression is that a lot of people there are focused on fairly low-level details, without a great sense of the big-picture strategy. For example, there's a lot of work into shovel-ready government legislation, but little thinking on what the TAI transition should really look like.

This sort of myopic mindset is also common in the technical space, where I meet a bunch of people focused on narrow aspects of LLMs, without much understanding of how their work exactly fits into the big picture of AI alignment. As an example, a lot of work seems like it would help with misuse risk, even when the big-picture EAs seem much more focused on accident risk.

Some (very) positive news is that we do have far more talent in this area than we did 5 years ago, and there's correspondingly more discussion. But it still feels very chaotic.

A bit more evidence - it seems like OP has provided very mixed messages around AI safety. They've provided surprisingly little funding / support for technical AI safety in the last few years (perhaps 1 full-time grantmaker?), but they have seemed to provide more support for AI safety community building / recruiting, and AI policy. But all of this still represents perhaps ~30% or so of their total budget, and I don't sense that that's about to change. Overall this comes off as measured and cautious. Meanwhile, it's been difficult to convince other large donors to get into this area. (Other than Jaan Tallinn, who might well be the strongest dedicated donor here.)

Recently it seems like the community on the EA Forum has shifted a bit to favor animal welfare. Or maybe it's just that the AI safety people have migrated to other blogs and organizations.

But again, I'm very hopeful that we can find interventions that will help in the long-term, so few of these excite me. I'd expect and hope that interventions that help the long-term future would ultimately improve animal welfare and more.

So on one hand, AI risk seems like the main intervention area for the long-term, but on the other, the field is a bit of a mess right now.

I feel quite frustrated that EA doesn't have many other strong recommendations for other potential donors interested in the long-term. For example, I'd really hope that there could be good interventions to make the US government or just US epistemics more robust, but I barely see any work in that area.

"Forecasting" is one interesting area - it currently does have some dedicated support from OP. But it honestly seems to be in a pretty mediocre state to me right now. There might be 15-30 full-time people in the space at this point, and there's surprisingly little in terms of any long-term research agendas.

Peter Favaloro @ 2024-12-04T23:02 (+20)

Hi Ozzie – Peter Favaloro here; I do grantmaking on technical AI safety at Open Philanthropy. Thanks for this post, I enjoyed it.

I want to react to this quote:
…it seems like OP has provided very mixed messages around AI safety. They've provided surprisingly little funding / support for technical AI safety in the last few years (perhaps 1 full-time grantmaker?)

I agree that over the past year or two our grantmaking in technical AI safety (TAIS) has been too bottlenecked by our grantmaking capacity, which in turn has been bottlenecked in part by our ability to hire technical grantmakers. (Though also, when we've tried to collect information on what opportunities we're missing out on, we’ve been somewhat surprised at how few excellent, shovel-ready TAIS grants we’ve found.)

Over the past few months I’ve been setting up a new TAIS grantmaking team, to supplement Ajeya’s grantmaking. We’ve hired some great junior grantmakers and expect to publish an open call for applications in the next few months. After that we’ll likely try to hire more grantmakers. So stay tuned!

Ozzie Gooen @ 2024-12-05T17:30 (+4)

That sounds exciting, thanks for the update. Good luck with team building and grantmaking!

Will Aldred @ 2024-12-02T00:21 (+18)

OP has provided very mixed messages around AI safety. They've provided surprisingly little funding / support for technical AI safety in the last few years (perhaps 1 full-time grantmaker?), but they have seemed to provide more support for AI safety community building / recruiting

Yeah, I find myself very confused by this state of affairs. Hundreds of people are being funneled through the AI safety community-building pipeline, but there’s little funding for them to work on things once they come out the other side.[1]

As well as being suboptimal from the viewpoint of preventing existential catastrophe, this also just seems kind of common-sense unethical. Like, all these people (most of whom are bright-eyed youngsters) are being told that they can contribute, if only they skill up, and then they later find out that that’s not the case.

  1. ^

    These community-building graduates can, of course, try going the non-philanthropic route—i.e., apply to AGI companies or government institutes. But there are major gaps in what those organizations are working on, in my view, and they also can’t absorb so many people.

Ozzie Gooen @ 2024-12-02T01:27 (+8)

Yea, I think this setup has been incredibly frustrating downstream. I'd hope that people from OP with knowledge could publicly reflect on this, but my quick impression is that some of the following factors happened:
1. OP has had major difficulties/limitations around hiring in the last 5+ years. Some of this is lack of attention, some is that there weren't great candidates, some is a lack of ability. This affected some cause areas more than others. For whatever reason, they seem to have had more success hiring (and retaining talent) for community building than for technical AI safety. 
2. I think there have been some uncertainties / disagreements about how important / valuable current technical AI safety organizations are to fund. For example, I imagine that if this had been a major priority for those in charge of OP, more could have been done. 
3. OP management seems to be a bit in flux now. Lost Holden recently, hiring a new head of GCR, etc. 
4. I think OP isn't very transparent and public with explaining their limitations/challenges publicly.
5. I would flag that there are spots at Anthropic and Deepmind that we don't need to fund, that are still good fits for talent.
6. I think some of the Paul Christiano - connected orgs were considered a conflict-of-interest, given that Ajeya Cotra was the main grantmaker. 
7. Given all of this, I think it would be really nice if people could at least provide warnings about this. Like, people entering the field are strongly warned that the job market is very limited. But I'm not sure who feels responsible / well placed to do this. 

Peter Wildeford @ 2024-12-03T16:26 (+12)

Thanks for the comment, I think this is very astute.

~

Recently it seems like the community on the EA Forum has shifted a bit to favor animal welfare. Or maybe it's just that the AI safety people have migrated to other blogs and organizations.

I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.

I don't think that all AI safety orgs are actually fully funded, since there are orgs that OP cannot fund for reasons other than cost-effectiveness (see Trevor's post and also OP's individual recommendations in AI), and also OP cannot and should not fund 100% of every org (it's not sustainable for orgs to have just one mega-funder; see also what Abraham mentioned here). There is also room for contrarian donation takes like Michael Dickens's.

Ozzie Gooen @ 2024-12-03T17:03 (+8)

I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.

That makes sense, but I'm feeling skeptical. There are just so many AI safety orgs now, and the technical ones generally aren't even funded by OP. 

For example: https://www.lesswrong.com/posts/9n87is5QsCozxr9fp/the-big-nonprofits-post

While a bunch of these salaries are on the high side, not all of them are.

Ozzie Gooen @ 2024-12-02T02:00 (+9)

On AI safety, I think it's fairly likely (40%?) that the level of x-risk (upon a lot of reflection) in the next 20 years is less than 20%, and that the entirety of the EA scene might be reducing it to, say, 15%.

This means that the entirety of the EA AI safety scene would help the EV of the world by ~5%.
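The arithmetic here can be spelled out explicitly (all numbers are the speculative ones above, not established estimates):

```python
# Rough sketch of the expected-value arithmetic (all numbers speculative).
p_xrisk_without_ea = 0.20  # x-risk over the next 20 years absent EA safety work
p_xrisk_with_ea = 0.15     # x-risk with the entire EA AI safety scene

# If the future's value is 1 conditional on survival and 0 otherwise,
# EA's contribution to expected value equals the reduction in risk:
ev_gain = p_xrisk_without_ea - p_xrisk_with_ea
# ev_gain is ~0.05, i.e. ~5% of the world's EV
```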

On one hand, this is a whole lot. But on the other, I'm nervous that it's not ambitious enough, for what could be one of the most [combination of well-resourced, well-meaning, and analytical/empirical] groups of our generation.

One thing I like about epistemic interventions is that the upper-bounds could be higher. 

(There are some AI interventions that are more ambitious, but many do seem to be mainly about reducing x-risk by less than an order of magnitude, not increasing the steady-state potential outcome) 

I'd also note here that an EV gain of 5% might not be particularly ambitious. It could well be the case that many different groups can do this - so it's easier than it might seem if you think goodness is additive instead of multiplicative. 

Ozzie Gooen @ 2023-08-01T15:41 (+32)

I really don't like the trend of posts saying that "EA/EAs need to | should do X or Y".

EA is about cost-benefit analysis. The phrases need and should implies binaries/absolutes and having very high confidence.

I'm sure there are thousands of interventions/measures that would be positive-EV for EA to engage with. I don't want to see thousands of posts loudly declaring "EA MUST ENACT MEASURE X" and "EAs SHOULD ALL DO THING Y," in cases where these mostly seem like un-vetted interesting ideas. 

In almost all cases I see the phrase, I think it would be much better replaced with things like;
"Doing X would be high-EV"
"X could be very good for EA"
"Y: Cost and Benefits" (With information in the post arguing the benefits are worth it)
"Benefits|Upsides of X" (If you think the upsides are particularly underrepresented)"

I think it's probably fine to use the word "need" either when it's paired with an outcome (EA needs to do more outreach to become more popular) or when the issue is fairly clearly existential (the US needs to ensure that nuclear risk is low). It's also fine to use should in the right context, but it's not a word to over-use. 

Zach Stein-Perlman @ 2023-08-01T17:03 (+24)

See also EA should taboo "EA should"

OllieBase @ 2023-08-01T17:06 (+19)

Related (and classic) post in case others aren't aware: EA should taboo "EA should".

Lizka makes a slightly different argument, but reaches a similar conclusion.

Brad West @ 2023-08-01T20:03 (+17)

Strong disagree. If the proponent of an intervention/cause area believes its advancement is so high-EV that it would be very imprudent for EA resources not to advance it, they should use strong language.

I think EAs are too eager to hedge their language and use weak language regarding promising ideas.

For example, I have no compunction saying that Profit for Good (companies with charities in a vast-majority shareholder position) needs to be advanced by EA, in that I believe not doing so results in an ocean less counterfactual funding for effective charities, and consequently a significantly worse world.

https://forum.effectivealtruism.org/posts/WMiGwDoqEyswaE6hN/making-trillions-for-effective-charities-through-the

Yonatan Cale @ 2023-08-04T11:11 (+4)

What about social norms, like "EA should encourage people to take care of their mental health even if it means they have less short-term impact"?

Ozzie Gooen @ 2023-08-04T14:14 (+2)

Good question.

First, I have a different issue with that phrase, as it's not clear what "EA" is. To me, EA doesn't seem like an agent. You can say, "....CEA should" or "...OP should".

Normally, I prefer saying "I think X should". There are some contexts, specifically small ones (talking to a few people, where it's clearly conversational), in which saying "X should do Y" clearly means "I feel like X should do Y, but I'm not sure". And there are some contexts where it means "I'm extremely confident X should do Y".

For example, there's a big difference between saying "X should do Y" to a small group of friends, when discussing uncertain claims, and writing a mass-market book titled "X should do Y". 

NickLaing @ 2023-08-01T21:14 (+1)

I haven't noticed this trend, could you list a couple of articles like this? Or even DM me if you're not comfortable listing them here.

Ozzie Gooen @ 2023-08-01T21:28 (+9)

I recently noticed it here:
https://forum.effectivealtruism.org/posts/WJGsb3yyNprAsDNBd/ea-orgs-need-to-tabletop-more

Looking back, it seems like there weren't many more very recently. Historically, there have been some.

EA needs consultancies
EA needs to understand its “failures” better
EA needs more humor
EA needs Life-Veterans and "Less Smart" people
EA needs outsiders with a greater diversity of skills
EA needs a hiring agency and Nonlinear will fund you to start one
EA needs a cause prioritization journal
Why EA needs to be more conservative

Looking above, many of those seem like "nice to haves". The word "need" seems over-the-top to me.
 

VictorW @ 2023-08-04T12:18 (+3)

There are a couple of strong "shoulds" in the EA Handbook (I went through it over the last two months as part of an EA Virtual program) and they stood out to me as the most disagreeable part of EA philosophy that was presented.

Ozzie Gooen @ 2024-06-25T17:45 (+27)

On the funding-talent balance:

When EA was starting, there was a small amount of talent, and a smaller amount of funding. As one might expect, things went slowly for the first few years.

Then once OP decided to focus on X-risks, there was ~$8B potential funding, but still fairly little talent/capacity. I think the conventional wisdom then was that we were unlikely to be bottlenecked by money anytime soon, and lots of people were encouraged to do direct work.

Then FTX Future Fund came in, and the situation got even more out-of-control. ~Twice the funding. Projects got more ambitious, but it was clear there were significant capacity (funder and organization) constraints.

Then (1) FTX crashed, and (2) lots of smart people came into the system. Project capacity grew, AI advances freaked out a lot of people, and successful community projects helped train a lot of smart young people to work on X-risks.

But funding has not kept up. OP has been slow to hire for many x-risk roles (AI safety, movement building, outreach / fundraising). Other large funders have been slow to join in.

So now there's a crunch for funding. There are a bunch of smart-seeming AI people now who I bet could have gotten funding during the FFF, likely even before then with OP, but are under the bar now.

I imagine that this situation will eventually improve, but of course, it would be incredibly nice if it could happen sooner. It seems like EA leadership eventually fixes things, but it often happens slower than is ideal, with a lot of opportunity loss in the meantime.

Opportunistic people can fill in the gaps. Looking back, I think more money and leadership in the early days would have gone far. Then, more organizational/development capacity during the FFF era. Now, more funding seems unusually valuable.

If you've been thinking about donating to the longtermist space, specifically around AI safety, I think it's likely that funding this year will be more useful than funding in the next 1-3 years. (Of course, I'd recommend using strong advisors or giving to funds, instead of just choosing directly, unless you can spend a fair bit of time analyzing things).

If you're considering entering the field as a nonprofit employee, heed some caution. I still think the space can use great talent, but note that this is an unusually competitive time to get many paid roles or to get nonprofit grants.

RAB @ 2024-06-26T07:05 (+3)

Any thoughts on where e.g. 50K could be well spent?

Ozzie Gooen @ 2024-06-26T17:55 (+4)

(For longtermism)

If you have limited time to investigate / work with, I'd probably recommend either the LTFF or choosing a regranter you like at Manifund. 

If you have a fair bit more time, and ideally the expectation of more money in the future, then I think a lot of small-to-medium (1-10 employee) organizations could use some long-term, high-touch donors. Honestly, this may come down more to fit / relationships than to identifying the absolute best org - as long as it's funded by one of the groups listed above or by OP, since money itself is somewhat fungible between orgs.

I think a lot of nonprofits have surprisingly few independent donors, or even strong people that can provide decent independent takes. I might write more about this later.

(That said, there are definitely ways to be annoying / a hindrance, as an active donor, so try to be really humble here if you are new to this)

Ozzie Gooen @ 2020-09-22T19:17 (+21)

EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals. 

If this continues, it could be worth noting that this could have significant repercussions for areas outside of EA; the ones that we may divert them from. We may be diverting a significant fraction of the future "best and brightest" in non-EA fields. 

If this seems possible, it's especially important that we do a really, really good job making sure that we are giving them good advice. 

Ozzie Gooen @ 2021-12-05T01:58 (+18)

Some musicians have multiple alter-egos that they use to communicate information from different perspectives. MF Doom released albums under several alter-egos; he even used these aliases to criticize his previous aliases.

Some musicians, like Madonna, just continued to "re-invent" themselves every few years.

Youtube personalities often feature themselves dressed as different personalities to represent different viewpoints. 

It's really difficult to keep a single understood identity, while also conveying different kinds of information.

Narrow identities are important for a lot of reasons. I think the main one is predictability, similar to a company brand. If your identity seems to dramatically change hour to hour, people wouldn't be able to predict your behavior, so fewer could interact or engage with you in ways they'd feel comfortable with.

However, narrow identities can also be suffocating. They restrict what you can say and how people will interpret that. You can simply say more things in more ways if you can change identities. So having multiple identities can be a really useful tool.

Sadly, most academics and intellectuals can only really have one public identity.

---

EA researchers currently act this way.

In EA, it's generally really important to be seen as calibrated and reasonable, so people correspondingly prioritize that in their public (and then private) identities. I've done this. But it comes with a cost.

One obvious (though unorthodox) way around this is to allow researchers to post content under aliases. It could be fine if the identity of the author is known, as long as readers can keep these aliases distinct.

I've been considering how to best do this myself. My regular EA Forum name is just "Ozzie Gooen". Possible aliases would likely be adjustments to this name.

- "Angry Ozzie Gooen" (or "Disagreeable Ozzie Gooen")

- "Tech Bro Ozzie Gooen"

- "Utility-bot 352d3"

These would be used to communicate in very different styles, with me attempting to match what I'd expect readers to expect of those styles.

(Normally this is done to represent viewpoints other than what they have, but sometimes it's to represent viewpoints they have, but wouldn't normally share)

Facebook Discussion

Ozzie Gooen @ 2024-12-09T16:57 (+15)

I can't seem to find much EA discussion about [genetic modification to chickens to lessen suffering]. I think this naively seems like a promising area to me. I imagine others have investigated and decided against further work, I'm curious why. 

emre kaplan🔸 @ 2024-12-11T07:29 (+20)

Lewis Bollard:

"I agree with Ellen that legislation / corporate standards are more promising. I've asked if the breeders would accept $ to select on welfare, & the answer was no b/c it's inversely correlated w/ productivity & they can only select on ~2 traits/generation."

Ozzie Gooen @ 2024-12-11T17:01 (+9)

Dang. That makes sense, but it seems pretty grim. The second half of that argument is, "We can't select for not-feeling-pain, because we need to spend all of our future genetic modification points on the chickens getting bigger and growing even faster."

I'm kind of surprised that this argument isn't at all about the weirdness of it. It's purely pragmatic, from their standpoint. "Sure, we might be able to stop most of the chicken suffering, but that would increase costs by ~20% or so, so it's a non-issue"

Lorenzo Buonanno🔸 @ 2024-12-12T23:03 (+4)

20% of the global cost of growing chickens is probably on the order of at least ~$20B, which is much more than the global economy is willing to spend on animal welfare.

As mentioned in the other comment, I think it's extremely unlikely that there is a way to stop "most" of the chicken suffering while increasing costs by only ~20%.

Some estimate the better chicken commitment already increases costs by 20% (although there is no consensus on that, and factory farmers estimate 37.5%), and my understanding is that it doesn't stop most of the suffering, but "just" reduces it a lot.

Ebenezer Dukakis @ 2024-12-12T07:07 (+3)

Has there been any discussion of improving chicken breeding using GWAS or similar?

Even if welfare is inversely correlated with productivity, I imagine there are at least a few gene variants which improve welfare without hurting productivity. E.g. gene variants which address health issues due to selective breeding.

Also how about legislation targeting the breeders? Can we have a law like: "Chickens cannot be bred for increased productivity unless they meet some welfare standard."

Ben Stevenson @ 2024-12-12T15:10 (+6)

England prohibits "breeding procedures which cause, or are likely to cause, suffering or injury to any of the animals concerned". Defra claim Frankenchickens meet this standard and THLUK are challenging that decision in court.

Note that prohibiting breeding that causes suffering is different to encouraging breeding that lessens suffering, and that selective breeding is different to gene splicing, etc., which I think is what is typically meant by genetic modification.

Lorenzo Buonanno🔸 @ 2024-12-10T01:20 (+12)

I think it is discussed every now and then, see e.g. comments here: New EA cause area: Breeding really dumb chickens and this comment

And note that the Better Chicken Commitment includes a policy of moving to higher welfare breeds.


Naively, I would expect that suffering is extremely evolutionarily advantageous for chickens in factory farm conditions, so chickens that feel less suffering will not grow as much meat (or require more space/resources). For example, based on my impression that broiler chickens are constantly hungry, I wouldn't be surprised if they would try to eat themselves unless they felt pain when doing so. But this is a very uninformed take based on a vague understanding of what broiler chickens are optimized for, which might not be true in practice.

 

I think this idea might be more interesting to explore in less price-sensitive contexts, where there's less evolutionary pressure and animals live in much better conditions, mostly animals used in scientific research. But of course it would help much fewer animals who usually suffer much less. 

Charlie_Guthmann @ 2024-12-10T05:31 (+3)

Adding on that Whole Foods (https://www.wholefoodsmarket.com/quality-standards/statement-on-broiler-chicken-welfare) has made some commitments to switching breeds; we discussed this briefly at a Chicago EA meeting. I didn't get much info, but they said that going and protesting/spreading the word to Whole Foods managers to switch breeds showed some success.

Thomas Kwa @ 2024-12-10T01:32 (+10)

It was mentioned at the Constellation office that maybe animal welfare people who are predisposed to this kind of weird intervention are working on AI safety instead. I think this is >10% correct but a bit cynical; the WAW people are clearly not afraid of ideas like giving rodents contraceptives and vaccines. My guess is animal welfare is poorly understood and there are various practical problems like preventing animals that don't feel pain from accidentally injuring themselves constantly. Not that this means we shouldn't be trying.

Dicentra @ 2024-12-10T01:58 (+8)

I heard someone from Kevin Esvelt's lab talking about this + pain-free lab mice once

david_reinstein @ 2024-12-10T00:29 (+8)

Quick thought. Maybe people anticipate this being blocked by governments because it “seems like playing god” etc. I know that would be hypocritical given the breeding already used to make them overweight etc. But it seems to be the way a lot of people see this.

Jordan Pieters 🔸 @ 2024-12-09T17:38 (+5)

By coincidence, I just came across this layer-hen genetics project that got funding from OP. I don't know much about the work or how promising it might be.

Ozzie Gooen @ 2024-12-01T15:13 (+13)

I think I broadly like the idea of Donation Week. 

One potential weakness is that I'm curious if it promotes the more well-known charities due to the voting system. I'd assume that these are somewhat inversely correlated with the most neglected charities.

Related, I'm curious if future versions could feature specific subprojects/teams within charities. "Rethink Priorities" is a rather large project compared to "PauseAI US", I assume it would be interesting if different parts of it were put here instead. 

(That said, in terms of the donation, I'd hope that we could donate to RP as a whole and trust RP to allocate it accordingly, instead of formally restricting the money, which can be quite a hassle in terms of accounting) 

JWS 🔸 @ 2024-12-01T17:06 (+8)

> One potential weakness is that I'm curious if it promotes the more well-known charities due to the voting system. I'd assume that these are somewhat inversely correlated with the most neglected charities.

I guess this isn't necessarily a weakness if the more well-known charities are more effective? I can see the case that: a) they might not be neglected in EA circles, but may be very neglected globally compared to their impact and that b) there is often an inverse relationship between tractability/neglectedness and importance/impact of a cause area and charity. Not saying you're wrong, but it's not necessarily a problem.

Furthermore, my anecdotal take from the voting patterns as well as the comments on the discussion thread seem to indicate that neglectedness is often high on the mind of voters - though I admit that commenters on that thread are a biased sample of all those voting in the election.

> It can be a bit underwhelming if an experiment to try to get the crowd's takes on charities winds up determining to, "just let the current few experts figure it out."

Is it underwhelming? I guess if you want the donation election to be about spurring lots of donations to small, spunky EA-startups working in weird-er cause areas, it might be, but I don't think that's what I understand the intention of the experiment to be (though I could be wrong). 

My take is that the election is an experiment with EA democratisation, where we get to see what the community values when we do a roughly 1-person-1-ballot system instead of the those-with-the-money-decide system which is how things work right now. Those takeaways seem to be:

  • The broad EA community values Animal Welfare a lot more than the current major funders
  • The broad EA community sees value in all 3 of the 'big cause areas' with high-scoring charities in Animal Welfare, AI Safety, and Global Health & Development.
Ozzie Gooen @ 2024-12-01T15:33 (+6)

I (with limited information) think the EA Animal Welfare Fund is promising, but wonder how much of that matches the intention of this experiment. It can be a bit underwhelming if an experiment to try to get the crowd's takes on charities winds up determining to, "just let the current few experts figure it out." Though I guess, that does represent a good state of the world. (The public thinks that the current experts are basically right) 

Ozzie Gooen @ 2024-06-26T20:52 (+13)

When I hear of entrepreneurs excited about prediction infrastructure making businesses, I feel like they gravitate towards new prediction markets or making new hedge funds.

I really wish it were easier to make new insurance businesses (or similar products). I think innovative insurance products could be a huge boon to global welfare. The very unfortunate downside is that there's just a ton of regulation and lots of marketing to do, even in cases where it's a clear win for consumers.

Ideally, it should be very easy and common to get insurance for all of the key insecurities of your life.

I think a lot of people have certain issues that both:

  1. They worry about a lot
  2. They over-weight the risks of these issues

In these cases, insurance could be a big win!

In a better world, almost all global risks would be held primarily by asset managers / insurance agencies. Individuals could have highly predictable lifestyles.

(Of course, some prediction markets and other markets can occasionally be used for this purpose as well!)

Owen Cotton-Barratt @ 2024-06-27T07:43 (+4)

Some of these things are fundamentally hard to insure against, because of information asymmetries / moral hazard.

e.g. insurance against donor issues would disproportionately be taken by people who had some suspicions about their donors, which would drive up prices, which would get more people without suspicions to decline taking insurance, until the market was pretty tiny with very high prices and a high claim rate. (It would also increase the incentives to commit fraud to give, which seems bad.)

Jason @ 2024-06-27T16:39 (+2)

Some of these harms seem of a sort that does not really feel compensable with money. While a romantic partner's defection might create some out-of-pocket costs, I don't think the knowledge that I'd get some money out of my wife defecting would make me feel any better about the possibility!

Also, I'd note that some of the harms are already covered by social insurance schemes to a large extent. For instance, although parents certainly face a lot of costs associated with "[h]aving children with severe disabilities / issues," a high percentage of costs in the highest-cost scenarios are already borne by the public (e.g., Medicaid, Social Security/SSI, the special education system, etc.) or by existing insurers (e.g., employer-provided health insurance). So I'd want to think more about the relative merits of novel private-sector insurance schemes versus strengthening the socialized schemes.

Ozzie Gooen @ 2024-06-28T14:45 (+4)

> While a romantic partner's defection might create some out-of-pocket costs, I don't think the knowledge that I'd get some money out of my wife defecting would make me feel any better about the possibility

Consider this, as examples of where it might be important:
1. You are financially dependent on your spouse. If they cheated on you, you would likely want to leave them, but you wouldn't want to be trapped due to finances.
2. You're nervous about the potential expenses of a divorce. 

I think that this situation is probably a poor fit for insurance at this point, just because of the moral hazard it would create, but perhaps one day it might be viable to some extent.

> So I'd want to think more about the relative merits of novel private-sector insurance schemes versus strengthening the socialized schemes.

I'm all for improvements to socialized schemes too. No reason both strategies can't be tested and used. In theory, insurance could be much easier and faster to implement. It can take ages for nationwide reform to happen.

Ozzie Gooen @ 2021-09-16T14:59 (+13)

A few junior/summer effective altruism related research fellowships are ending, and I’m getting to see some of the research pitches.

Lots of confident-looking pictures of people with fancy and impressive sounding projects.

I want to flag that many of the most senior people I know around longtermism are really confused about stuff. And I’m personally often pretty skeptical of those who don’t seem confused.

So I think a good proposal isn’t something like, “What should the EU do about X-risks?” It’s much more like, “A light summary of what a few people so far think about this, and a few considerations that they haven’t yet flagged, but note that I’m really unsure about all of this.”

Many of these problems seem way harder than we'd like for them to be, and much harder than many seem to assume at first. (Perhaps this is due to unreasonable demands for rigor, but finding an alternative here would itself be a research effort.)

I imagine a lot of researchers assume they won’t stand out unless they seem to make bold claims. I think this isn’t true for many EA key orgs, though it might be the case that it’s good for some other programs (University roles, perhaps?).

Not sure how to finish this post here. I think part of me wants to encourage junior researchers to lean on humility, but at the same time, I don’t want to shame those who don’t feel like they can do so for reasons of not-being-homeless (or simply having to leave research). I think the easier thing is to slowly spread common knowledge and encourage a culture where proper calibration is just naturally incentivized.

Facebook Thread

Ozzie Gooen @ 2021-09-16T15:00 (+2)

Relevant post by Nuño: https://forum.effectivealtruism.org/posts/7utb4Fc9aPvM6SAEo/frank-feedback-given-to-very-junior-researchers

Ozzie Gooen @ 2021-12-05T01:55 (+11)

Could/should altruistic activist investors buy lots of Twitter stock, then pressure them to do altruistic things?

---

So, Jack Dorsey just resigned from Twitter.

Some people on Hacker News are pointing out that Twitter has had recent issues with activist investors, and that this move might make those investors happy.

https://pxlnv.com/linklog/twitter-fleets-elliott-management/

From a quick look... Twitter stock really hasn't been doing very well. It's almost back at its price in 2014.

Square, Jack Dorsey's other company (he was CEO of two), has done much better. Market cap of over 2x Twitter ($100B), huge gains in the last 4 years.

I'm imagining that if I were Jack... leaving would have been really tempting. On one hand, I'd have Twitter, which isn't really improving, is facing activist investor attacks, and worst of all, is apparently responsible for global chaos (which I barely know how to stop). And on the other hand, there's this really tame payments company with little controversy.

Being CEO of Twitter seems like one of the most thankless big-tech CEO positions around.

That sucks, because it would be really valuable if some great CEO could improve Twitter, for the sake of humanity.

One small silver lining is that the valuation of Twitter is relatively small. It has a market cap of $38B. In comparison, Facebook/Meta is $945B and Netflix is $294B.

So if altruistic interests really wanted to... I imagine they could become activist investors, but like, in a good way? I would naively expect that even with just 30% of the company you could push them to do positive things. $12B to improve global epistemics in a major way.

The US could have even bought Twitter for 4% of the recent $1T infrastructure bill. (though it's probably better that more altruistic ventures do it).

If middle-class intellectuals really wanted it enough, theoretically they could crowdsource the cash.

I think intuitively, this seems like clearly a tempting deal.

I'd be curious if this would be a crazy proposition, or if this is just not happening due to coordination failures.

Admittedly, it might seem pretty weird to use charitable/foundation dollars on "buying lots of Twitter" instead of direct aid, but the path to impact is pretty clear.


Facebook Thread

Ozzie Gooen @ 2021-12-05T01:54 (+11)

One futarchy/prediction market/coordination idea I have is to find some local governments and see if we could help them out by incorporating some of the relevant techniques.

This could be neat if it could be done as a side project. Right now effective altruists/rationalists don't actually have many great examples of side projects, and historically, "the spare time of particularly enthusiastic members of a jurisdiction" has been a major factor in improving governments.

Berkeley and London seem like natural choices given the communities there. I imagine it could even be better if there were some government somewhere in the world that was just unusually amenable to both innovative techniques, and to external help with them.

Given that EAs/rationalists care so much about global coordination, getting concrete experience improving government systems could be interesting practice.

There's so much theoretical discussion of coordination and government mistakes on LessWrong, but very little discussion of practical experience implementing these ideas into action.

(This clearly falls into the Institutional Decision Making camp)

Facebook Thread

Ozzie Gooen @ 2021-12-05T01:42 (+11)

On AGI (Artificial General Intelligence):

I have a bunch of friends/colleagues who are either trying to slow AGI down (by stopping arms races) or align it before it's made (and would much prefer it be slowed down).

Then I have several friends who are actively working to *speed up* AGI development. (Normally just regular AI, but often specifically AGI)[1]

Then there are several people who are apparently trying to align AGI, but who are also effectively speeding it up, but they claim that the trade-off is probably worth it (to highly varying degrees of plausibility, in my rough opinion).

In general, people seem surprisingly chill about this mixture? My impression is that people are highly incentivized to not upset people, and this has led to this strange situation where people are clearly pushing in opposite directions on arguably the most crucial problem today, but it's all really nonchalant.

[1] To be clear, I don't think I have any EA friends in this bucket. But some are clearly EA-adjacent.

More discussion here: https://www.facebook.com/ozzie.gooen/posts/10165732991305363

Ozzie Gooen @ 2021-06-30T03:12 (+10)

There seem to be several longtermist academics who plan to spend the next few years (at least) investigating the psychology of getting the public to care about existential risks.
 

This is nice, but I feel like what we really could use are marketers, not academics. Those are the people companies use for this sort of work. It's somewhat unusual that marketing isn't much of a respected academic field, but it's definitely a highly respected organizational one.

Aaron Gertler @ 2021-06-30T05:53 (+6)

There are at least a few people in the community with marketing experience and an expressed desire to help out. The most recent example that comes to mind is this post.

If anyone reading this comment knows people who are interested in the intersection of longtermism and marketing, consider telling them about EA Funds! I can imagine the LTFF or EAIF being very interested in projects like this.

(That said, maybe one of the longtermist foundations should consider hiring a marketing consultant?)

Ozzie Gooen @ 2021-06-30T06:16 (+2)

Yep, agreed. Right now I think there are very few people doing active marketing work in longtermism (outside of a few orgs that have people for that org), but this seems very valuable to improve upon. 

Jamie_Harris @ 2021-07-03T20:38 (+4)

If you're happy to share, who are the longtermist academics you are thinking of? (Their work could be somewhat related to my work)

Ozzie Gooen @ 2021-07-04T03:32 (+2)

No prominent ones come to mind. There are some very junior folks I've recently seen discussing this, but I feel uncomfortable calling them out.

Ozzie Gooen @ 2024-10-01T17:24 (+9)

Around discussions of AI & Forecasting, there seems to be some assumption like:

1. Right now, humans are better than AIs at judgemental forecasting.
2. When humans are better than AIs at forecasting, AIs are useless.
3. At some point, AIs will be better than humans at forecasting.
4. At that point, when it comes to forecasting, humans will be useless.

This comes from a lot of discussion and some research comparing "humans" to "AIs" in forecasting tournaments.

As you might expect, I think this model is incredibly naive. To me, it's asking questions like,
"Are AIs better than humans at writing code?"
"Are AIs better than humans at trading stocks?"
"Are AIs better than humans at doing operations work?"

I think it should be very clear that there's a huge period, in each cluster, where it makes sense for humans and AIs to overlap. "Forecasting" is not one homogeneous and singular activity, and neither is programming, stock trading, or doing ops. There's no clear line for automating "forecasting" - there is instead a very long list of different skills one could automate, with a long tail of tasks that would get increasingly expensive to automate.

Autonomous driving is another similar example. There's a very long road between "helping drivers with driver-assist features" and "complete level-5 automation, to the extent that almost no humans drive for work purposes."

A much better model is a more nuanced one. Break things down into smaller chunks, and figure out where and how AIs could best augment or replace humans at each of those. Or just spend a lot of time working with human forecasting teams to augment parts of their workflows.

JamesN @ 2024-10-01T22:36 (+3)

I am not so aware of the assumption you make up front, and would agree with you that anyone making such an assumption is being naive. Not least because humans on average (and even supers under many conditions) are objectively inaccurate at forecasting - even if relatively good given we don’t have anything better yet.

I think the more interesting and important question, when it comes to AI forecasting and claims that AIs are "good" at it, is to look at the reasoning process they undertake to get there. How are they forming reference classes, how are they integrating specific information, how are they updating their posterior to form an accurate inference and likelihood of the event occurring? Right now, they can sort of do (1), but from my experience don't do well at all at integration, updating, and making a probabilistic judgment. In fairness, humans often don't either. But we do it more consistently than current AI.

For your post, this suggests to me that AI could be used to help base rate/reference class creation, and maybe loosely support integration.

Ozzie Gooen @ 2024-12-05T22:24 (+8)

I think that the phrase ["unaligned" AI] is too vague for a lot of safety research work.

I prefer keywords like:
- scheming 
- naive
- deceptive
- overconfident
- uncooperative

I'm happy that the phrase "scheming" seems to have become popular recently, that's an issue that seems fairly specific to me. I have a much easier time imagining preventing an AI from successfully (intentionally) scheming than I do preventing it from being "unaligned."

Ian Turner @ 2024-12-06T00:25 (+2)

Hmm, I would argue that an AI which, when asked, causes human extinction is not aligned, even if it did exactly what it was told.

Ozzie Gooen @ 2024-12-06T03:52 (+2)

Yea, I think I'd classify that as a different thing. I see alignment typically as a "mistake" issue, rather than as a "misuse" issue. I think others here often use the phrase similarly. 

Ozzie Gooen @ 2021-12-05T02:00 (+8)

When discussing forecasting systems, sometimes I get asked,

“If we were to have much more powerful forecasting systems, what, specifically, would we use them for?”

The obvious answer is,

“We’d first use them to help us figure out what to use them for”

Or,

“Powerful forecasting systems would be used, at first, to figure out what to use powerful forecasting systems on”

For example,

  1. We make a list of 10,000 potential government forecasting projects.
  2. For each, we will have a later evaluation for “how valuable/successful was this project?”.
  3. We then open forecasting questions for each potential project. Like, “If we were to run forecasting project #8374, how successful would it be?”
  4. We take the top results and enact them.
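The steps above could be sketched as a simple ranking procedure. This is a toy illustration, not a real system: the project names, forecast values, and the `prioritize` helper are all hypothetical.

```python
def prioritize(projects, forecasts, top_n=3):
    """Rank candidate forecasting projects by their crowd-forecasted value.

    projects:  list of candidate project names
    forecasts: dict mapping project name -> forecasted value, e.g. the
               predicted score on a later "how valuable/successful was
               this project?" evaluation (0 to 1)
    top_n:     how many projects to enact
    """
    ranked = sorted(projects, key=lambda p: forecasts[p], reverse=True)
    return ranked[:top_n]


# Hypothetical candidates and crowd forecasts.
candidates = ["project_a", "project_b", "project_c", "project_d"]
crowd_forecasts = {
    "project_a": 0.42,
    "project_b": 0.71,
    "project_c": 0.18,
    "project_d": 0.63,
}

print(prioritize(candidates, crowd_forecasts))
# → ['project_b', 'project_d', 'project_a']
```

In practice the hard parts are steps 2 and 3 (running the later evaluations and getting well-calibrated forecasts on 10,000 questions), not the final ranking.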

Stated differently,

  1.  Forecasting is part of general-purpose collective reasoning.
  2. Prioritization of forecasting requires collective reasoning.
  3. So, forecasting can be used to prioritize forecasting.

I think a lot of people find this meta and counterintuitive at first, but it seems pretty obvious to me.

All that said, I can’t be sure things will play out like this. In practice, the “best thing to use forecasting on” might be obvious enough such that we don’t need to do costly prioritization work first. For example, the community isn’t currently doing much of this meta stuff around Metaculus. I think this is a bit mistaken, but not incredibly so.

Facebook Thread

Ozzie Gooen @ 2021-12-05T01:53 (+8)

I’m sort of hoping that 15 years from now, a whole lot of common debates quickly get reduced to debates about prediction setups.

“So, I think that this plan will create a boom for the United States manufacturing sector.”

“But the prediction markets say it will actually lead to a net decrease. How do you square that?”

“Oh, well, I think that those specific questions don’t have enough predictions to be considered highly accurate.”

“Really? They have a robustness score of 2.5. Do you think there’s a mistake in the general robustness algorithm?”

—-

Perhaps 10 years later, people won’t make any grand statements that disagree with prediction setups.

(Note that this would require dramatically improved prediction setups! On that note, we could use more smart people working in this!)

Facebook Thread

Ozzie Gooen @ 2024-12-01T15:30 (+6)

I occasionally hear implications that cyber + AI + rogue human hackers will cause mass devastation, in ways that roughly match "lots of cyberattacks happening all over." I'm skeptical of this causing over $1T/year in damages (for over 5 years, pre-TAI), and definitely of it causing an existential disaster.

There are some much more narrow situations that might be more X-risk-relevant, like [a rogue AI exfiltrates itself] or [China uses cyber weapons to dominate the US and create a singleton], but I think these are so narrow they should really be identified individually and called out. If we're worried about them, I'd expect we'd want to take very different actions than to broadly reduce cyber risks. 

I'm worried that some smart+influential folks are worried about the narrow risks, but then there's various confusion, and soon we have EAs getting scared and vocal about the broader risks. 

Some more discussion in this Facebook Post.

Here's the broader comment against cyber + AI + rogue human hacker risks, or maybe even a lot of cyber + AI + nation state risks. 

Note: This was written quickly, and I'm really not a specialist/expert here. 

1. There's easily $10T of market cap of tech companies that would be dramatically reduced if AI systems could invalidate common security measures. This means a lot of incentive to prevent this.

2. AI agents could oversee phone calls and video calls, and monitor other conversations, and raise flags about potential risks. There's already work here, there could be a lot more.

3. If LLMs could detect security vulnerabilities, this might be a fairly standardized and somewhat repeatable process, and actors with more money could have a big advantage. If person A spends $10M using GPT-5 to discover 0-days, they'd generally find a subset of what person B, who spends $100M, would find. This could mean that governments and corporations would have a large advantage. They could do such investigation during the pre-release of software, and have ongoing security checks as new models are released. Or, companies would find bugs before attackers would. (There is a different question of whether the bug is cost-efficient to fix.)

4. The way to do a ton of damage with LLMs and cyber is to develop offensive capabilities in-house, then release a bunch of them at once in a planned massive attack. In comparison, I'd expect that many online attackers using LLMs wouldn't be very coordinated or patient. I think that attackers are already using LLMs somewhat, and would expect this to scale gradually, providing defenders a lot of time and experience.

5. AI code generation is arguably improving quickly. This could allow us to build much more secure software, and to add security-critical features.

6. If the state of cyber-defense is bad enough, groups like the NSA might use it to identify and stop would-be attackers. It could be tricky to have a world where it's both difficult to protect key data, but also, it's easy to remain anonymous when going after other's data. Similarly, if a lot of the online finance world is hackable, then potential hackers might not have a way to store potential hacking earnings, so could be less motivated. It just seems tough to fully imagine a world where many decentralized actors carry out attacks that completely cripple the economy.

7. Cybersecurity has a lot of very smart people and security companies. Perhaps not enough, but I'd expect these people could see threats coming and respond decently.

8. Very arguably, a lot of our infrastructure is fairly insecure, in large part because it's just not attacked that much, and when it is, it doesn't cause all too much damage. Companies historically have skimped on security because the costs weren't prohibitive. If cyberattacks get much worse, there's likely a backlog of easy wins, once companies actually get motivated to make fixes.

9. I think around our social circles, those worried about AI and cybersecurity generally talk about it far more than those not worried about it. I think this is one of a few biases that might make things seem scarier than they actually are.

10. Some companies like Apple have gotten good at rolling out security updates fairly quickly. In theory, an important security update to iPhones could reach 50% penetration in a day or so. These systems can improve further.

11. I think we have yet to see the markets show worry about cyber-risk. Valuations of tech companies are very high, cyber-risk doesn't seem like a major factor when discussing tech valuations. Companies can get cyber-insurance - I think the rates have been going up, but not exponentially.

12. Arguably, there are many trillions of dollars held by billionaires and others that they don't know what to do with. If something like this actually causes a 50%+ global wealth drop, it would be an enticing avenue for such money to go. Basically, we do have large reserves to spend, if the EV is positive enough, as a planet.

13. In worlds with much better AI, many AI companies (and others) will be a lot richer, and be motivated to keep the game going.

14. Very obviously, if there's 10T+ at stake, this would be a great opportunity for new security companies and products to enter the market.

15. Again, if there's 10T+ at stake, I'd assume that people could change practices a lot to use more secure devices. In theory all professionals could change to one of a few locked-down phones and computers.

16. The main scary actors potentially behind AI + Cyber would be nation states and rogue AIs. But nation-states have traditionally been hesitant to make these (meaning $1T+ damage) attacks outside of wartime, for similar reasons that they are hesitant to do military attacks outside wartime.

17. I believe that the US leads on cyber now. The US definitely leads on income. More cyber/hacking abilities would likely be used heavily by the US state. So, if these become much more powerful, the NSA/CIA might become far better at using cyber attacks to go after other potential international attackers. US citizens might have a hard time being private and secure, but so would would-be attackers. Cyber-crime becomes far less profitable if the attackers themselves can't preserve their own privacy and security. There are only 8 billion people in the world, so in theory it might be possible to oversee everyone with a risk of doing damage (maybe 1-10 million people)? Another way of putting this is that better cyber offense could directly lead to more surveillance by the US government. (This obviously has some other downsides, like US totalitarian control, but that is a very different risk.)

I wonder if some of the worry on AI + Cyber is akin to the "sleepwalking fallacy". Basically, if AI + Cyber becomes a massive problem, I think we should expect that there will be correspondingly massive resources spent then trying to fix it. I think that many people (but not all!) worried about this topic aren't really imagining what $1-10T of decently-effective resources spent on defense would do.

I think that AI + Cyber could be a critical threat vector for malicious and powerful AIs in the case of AI takeover. I also could easily see it doing $10-$100B/year of damage in the next few years. But I'm having trouble picturing it doing $10T/year of damage in the next few years, if controlled by humans.

Ozzie Gooen @ 2024-06-26T20:45 (+6)

Around prediction infrastructure and information, I find that a lot of smart people make some weird (to me) claims. Like:

  1. If a prediction didn't clearly change a specific major decision, it was worthless.
  2. Politicians don't pay attention to prediction applications / related sources, so these sources are useless.

There are definitely ways to steelman these, but I think on the face they represent oversimplified models of how information leads to changes.

I'll introduce a different model, which I think is much more sensible:

  1. Whenever some party advocates for belief P, they apply some pressure for that belief to those who notice this advocacy.
  2. This pressure trickles down, often into a web of resulting beliefs that are difficult to trace.
  3. People both decide what decisions to consider, and what choices to make, based on their beliefs.

For any agent, an important belief P is likely to have been influenced by the beliefs of those they pay attention to. One can model this with social networks and graphs.

Generally, introducing more correct beliefs, and providing more support to them in directions where important decisions happen, is expected to make those decisions better. This often is not straightforward, but I think we can make decent and simple graphical models of how said beliefs propagate.
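The "pressure trickles down through a web of beliefs" model above can be sketched as a toy attention graph. All names, edge weights, and the recursive scoring rule here are hypothetical illustrations I'm adding, not anything from the original:

```python
# Toy sketch: belief "pressure" propagating through an attention network.
# Edge weights (0..1) say how much each person attends to each upstream source.
attention = {
    "forecaster": {},                                # original advocate of belief P
    "journalist": {"forecaster": 0.6},
    "advisor":    {"journalist": 0.5, "forecaster": 0.2},
    "politician": {"advisor": 0.7},                  # the eventual decision-maker
}

def belief_pressure(person, source, depth=0, max_depth=5):
    """Total pressure that `source`'s advocacy exerts on `person`'s belief,
    summed over all attention paths (depth-capped in case of cycles)."""
    if person == source:
        return 1.0
    if depth >= max_depth:
        return 0.0
    return sum(weight * belief_pressure(upstream, source, depth + 1, max_depth)
               for upstream, weight in attention[person].items())

# The politician never reads the forecaster directly, but still feels
# indirect pressure via the journalist and advisor.
print(belief_pressure("politician", "forecaster"))
```

The point of the sketch is just that influence can be nonzero, and roughly quantifiable, even when no single decision traceably changed.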

Decisions aren't typically made all-at-once. Often they're very messy. Beliefs are formed over time, and people randomly decide what questions to pay attention to or what decisions to even consider. Information changes the decisions one chooses to make, not just the outcomes of these decisions.

For example - take accounting. A business leader might look at their monthly figures without any specific decisions in mind. But if they see something that surprises them, they might investigate further, and eventually change something important.  

This isn't at all to say "all information sources are equally useful" or "we can't say anything about what information is valuable".

But rather, more like,

"(Directionally-correct) information is useful on a spectrum. The more pressure it can exert on decision-relevant beliefs of people with power, the better."

Ozzie Gooen @ 2024-06-07T02:00 (+4)

Some AI questions/takes I’ve been thinking about:
1. I hear people confidently predicting that we’re likely to get catastrophic alignment failures, even if things go well up to ~GPT-7 or so. But if we get to GPT-7, I assume we could sort of ask it, “Would taking this next step have a large chance of failing?” Basically, I’m not sure if it’s possible for an incredibly smart organization to “sleepwalk into oblivion”. Likewise, I’d expect trade and arms races to get a lot nicer/safer, if we could make it a few levels deeper without catastrophe. (Note: This is one reason I like advanced forecasting tech)

2. I get the impression that lots of EAs are kind of assuming that, if alignment issues don’t kill us quickly, 1-2 AI companies/orgs will create decisive strategic advantages, in predictable ways, and basically control the world shortly afterwards. I think this is a possibility, but would flag that right now, probably 99.9% of the world’s power doesn’t want this to happen (basically, anyone who’s not at the top of OpenAI/Anthropic/the next main lab). It seems to me like these groups would have to be incredibly incompetent to just let one org predictably control the world, within 2-20 years. This both means that I find this scenario unlikely, but also, almost every single person in the world should be an ally in helping EAs make sure these scenarios don’t happen.

3. Related to #2, I still get the impression that it’s far easier to make the case, “Let’s not let one organization, commercial or government, get a complete monopoly on global power using AI”, than, “AI alignment issues are likely to kill us all.” And a lot of the solutions to the former also seem like they should help the latter.

harfe @ 2024-06-07T12:23 (+1)

But if we get to GPT-7, I assume we could sort of ask it, “Would taking this next step have a large chance of failing?”

How do you know it tells the truth or its best knowledge of the truth without solving the "eliciting latent knowledge" problem?

Ozzie Gooen @ 2024-06-08T23:38 (+2)

Depends on what assurance you need. If GPT-7 reliably provides true results in most/all settings you can find, that's good evidence. 

If GPT-7 is really Machiavellian, and is conspiring against you to make GPT-8, then it's already too late for you, but it's also a weird situation. If GPT-7 were seriously conspiring against you, I assume it wouldn't need to wait until GPT-8 to take action.

Ozzie Gooen @ 2021-12-29T05:29 (+4)

Epistemic status: I feel positive about this, but note I'm kinda biased (I know a few of the people involved, and work directly with Nuno, who was funded)

ACX Grants were just announced: ~$1.5 million, from a few donors including Vitalik.

https://astralcodexten.substack.com/p/acx-grants-results

Quick thoughts:

On specific grants:

Ozzie Gooen @ 2024-06-07T01:57 (+3)

I like the idea of AI Engineer Unions.

Some recent tech unions, like the one in Google, have been pushing more for moral reforms than for payment changes.

Likewise, a bunch of AI engineers could use collective bargaining to help ensure that safety measures get more attention, in AI labs.

There are definitely net-negative unions out there too, so it would need to be done delicately. 

In theory there could be some unions that span multiple organizations. That way one org couldn't easily "fire all of their union staff" and hope that recruiting others would be trivial. 

Really, there aren't too many AI engineers, and these people have a ton of power, so they could be in a highly advantaged position to form a union.

harfe @ 2024-06-07T12:15 (+1)

This has been discussed before: https://forum.effectivealtruism.org/posts/GNfWT8Xqh89wRaaSg/unions-for-ai-safety

Ozzie Gooen @ 2024-06-08T23:40 (+2)

Ah nice, thanks!

Ozzie Gooen @ 2023-08-19T19:03 (+2)

I made a quick Manifold Market for estimating my counterfactual impact from 2023-2030. 

On one hand, this seems kind of uncomfortable - on the other, I'd really like to feel more comfortable with precise and public estimates of this sort of thing.

Feel free to bet!

Still need to make progress on the best resolution criteria. 


Linch @ 2023-08-19T20:26 (+2)

If someone thinks LTFF is net negative, but your work is net positive, should they answer in the negative ranges?

Ozzie Gooen @ 2023-08-19T23:51 (+2)

Yes. That said, this of course complicates things. 

Linch @ 2023-08-19T20:28 (+2)

Note that while we'll have some clarity in 2030, we'd presumably have less clarity than at the end of history (and even then things could be murky, I dunno)

Ozzie Gooen @ 2023-08-19T23:54 (+2)

For sure. This would just be the mean estimate, I assume. 

Ozzie Gooen @ 2021-12-05T01:52 (+2)

The following things could both be true:

1) Humanity has a >80% chance of completely perishing in the next ~300 years.

2) The expected value of the future is incredibly, ridiculously, high!

The trick is that the expected value of a positive outcome could be just insanely great. Like, dramatically, incredibly, totally, better than basically anyone discusses or talks about.

Expanding to a great deal of the universe, dramatically improving our abilities to convert matter+energy to net well-being, researching strategies to expand out of the universe.

A 20%, or even a 0.002%, chance at a 10^20 outcome, is still really good.
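A quick back-of-the-envelope of the claim above. The 10^20 figure and the "value units" are illustrative placeholders, not estimates from the original:

```python
# Expected value of a long-shot positive future, in arbitrary "value units".
# Even the tiny probability still yields an astronomically large expectation.
outcome_value = 10**20

for p in (0.20, 0.00002):  # 20% and 0.002% chances of the good outcome
    ev = p * outcome_value
    print(f"P = {p:.5%}: EV = {ev:.2e} value units")
```

So the 0.002% long shot still carries an expected value of ~2 × 10^15 units, which is the sense in which grim survival odds and huge optimism about the future can coexist.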

One key question is the expectation of long-term negative[1] vs. long-term positive outcomes. I think most people are pretty sure that in expectation things are positive, but this is less clear.

So, remember:

Just because the picture of X-risks might look grim in terms of percentages, you can still be really optimistic about the future. In fact, many of the people most concerned with X-risks are those *most* optimistic about the future.

I wrote about this a while ago, here:

https://www.lesswrong.com/.../critique-my-model-the-ev-of...

[1] Humanity lasts, but creates vast worlds of suffering. "S-risks"


https://www.facebook.com/ozzie.gooen/posts/10165734005520363

Ozzie Gooen @ 2021-12-05T01:43 (+2)

Opinions on charging for professional time?

(Particularly in the nonprofit/EA sector)

I've been getting more requests recently to have calls/conversations to give advice, review documents, or be part of extended sessions on things. Most of these have been from EAs.

I find a lot of this work fairly draining. There can be surprisingly high fixed costs to having a meeting. It often takes some preparation, some arrangement (and occasional re-arrangement), and a fair bit of mix-up and change throughout the day.

My main work requires a lot of focus, so the context shifts make other tasks particularly costly.

Most professional coaches and similar charge at least $100-200 per hour for meetings. I used to find this high, but I think I'm understanding the cost more now. A 1-hour meeting at a planned time costs probably 2-3x as much time as a 1-hour task that can be done "whenever", for example, and even this latter work is significant.

Another big challenge is that I have no idea how to prioritize some of these requests. I'm sure I'm providing vastly different amounts of value in different cases, and I often can't tell.

The regular market solution is to charge for time. But in EA/nonprofits, it's often expected that a lot of this is done for free. My guess is that this is a big mistake. One issue is that people are "friends", but they are also exactly professional colleagues. It's a tricky line.

One minor downside of charging is that it can be annoying administratively. Sometimes it's tricky to get permission to make payments, so a $100 expense takes $400 of effort.

Note that I do expect that me helping the right people, in the right situations, can be very valuable and definitely worth my time. But I think on the margin, I really should scale back my work here, and I'm not sure exactly how to draw the line.

[All this isn't to say that you shouldn't still reach out! I think that often, the ones who are the most reluctant to ask for help/advice, represent the cases of the highest potential value. (The people who quickly/boldly ask for help are often overconfident). Please do feel free to ask, though it's appreciated if you give me an easy way out, and it's especially appreciated if you offer a donation in exchange, especially if you're working in an organization that can afford it.]

https://www.facebook.com/ozzie.gooen/posts/10165732727415363