Anthropic is not being consistently candid about their connection to EA

By burner2 @ 2025-03-30T13:30 (+237)

In a recent Wired article about Anthropic, there's a section where Anthropic's president, Daniela Amodei, and early employee Amanda Askell seem to suggest there's little connection between Anthropic and the EA movement:

Ask Daniela about it and she says, "I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term". Yet her husband, Holden Karnofsky, cofounded one of EA's most conspicuous philanthropy wings, is outspoken about AI safety, and, in January 2025, joined Anthropic. Many others also remain engaged with EA. As early employee Amanda Askell puts it, "I definitely have met people here who are effective altruists, but it's not a theme of the organization or anything". (Her ex-husband, William MacAskill, is an originator of the movement.)

This led multiple people on Twitter to call out how bizarre this is:

In my eyes, there is a large and obvious connection between Anthropic and the EA community. In addition to the ties mentioned above:

It's perfectly fine if Daniela and Dario choose not to personally identify with EA (despite having lots of associations) and I'm not suggesting that Anthropic needs to brand itself as an EA organisation. But I think it’s dishonest to suggest there aren’t strong ties between Anthropic and the EA community. When asked, they could simply say something like, "yes, many people at Anthropic are motivated by EA principles."

It appears that Anthropic has made a communications decision to distance itself from the EA community, likely because of negative associations the EA brand has in some circles. It's not clear to me that this is even in their immediate self-interest. I think it’s a bad look to be so evasive about things that can be easily verified (as evidenced by the twitter response).

This also personally makes me trust them less to act honestly in the future when the stakes are higher. Many people regard Anthropic as the most responsible frontier AI company. And it seems like something they genuinely care about—they invest a ton in AI safety, security and governance. Honest and straightforward communication seems important to maintain this trust.


Ben_West🔸 @ 2025-03-31T17:24 (+83)

I'm sympathetic to wanting to keep your identity small, particularly if you think the person asking about your identity is a journalist writing a hit piece. But if everyone takes funding, staff, etc. from the EA commons without sharing that they got value from that commons, the commons will predictably be under-supported in the future.

I hope Anthropic leadership can find a way to share what they do and don't get out of EA (e.g. in comments here).

MarcusAbramovitch @ 2025-04-02T06:49 (+45)

I understand why people shy away from or hide their identities when speaking with journalists, but I think this is a mistake, largely for reasons covered in this post. A large part of the deterioration of EA's name brand is not just FTX but the risk-averse reaction to FTX by individuals (again, for understandable reasons), which harms the movement in a way where the costs are externalized.

When PG refers to keeping your identity small, he means don't defend it or its characteristics for their own sake. There's nothing wrong with being a C/C++ programmer, as long as you can recognize it's not the best choice for rapid development or memory safety. In this case, you can own being an EA/your affiliation with EA without needing to justify everything about the community.

We had a bit of a tragedy of the commons problem because a lot of people are risk-averse and don't want to be associated with EA in case something bad happens to them but this causes the brand to lose a lot of good people you'd be happy to be associated with.

I'm a proud EA.

Angelina Li @ 2025-04-03T02:23 (+6)

FWIW, I appreciated reading this :) Thank you for sharing it!

We had a bit of a tragedy of the commons problem because a lot of people are risk-averse and don't want to be associated with EA in case something bad happens to them but this causes the brand to lose a lot of good people you'd be happy to be associated with.

I so agree! I think there is something virtuous and collaborative in those of us who have benefited from EA and its ideas / community just... being willing to stand up and say simply that. I think these ideas are worth fighting for.

I'm a proud EA.

<3

Marcus Abramovitch 🔸 @ 2025-04-02T18:32 (+4)

On this note, I'm happy that in CEA's new post, they talk about building the brand of effective altruism.

RavenclawPrefect @ 2025-04-01T16:26 (+24)

Note that much of the strongest opposition to Anthropic is also associated with EA, so it's not obvious that the EA community has been an uncomplicated good for the company, though I think it likely has been fairly helpful on net (especially if one measures EA's contribution to Anthropic's mission of making transformative AI go well for the world rather than its contribution to the company's bottom line). I do think it would be better if Anthropic comms were less evasive about the degree of their entanglement with EA.

(I work at Anthropic, though I don't claim any particular insight into the views of the cofounders. For my part I'll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in the paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation and I wouldn't personally have said them, but I think "a journalist goes through your public statements looking for the most damning or hypocritical things you've ever said out of context" is an incredibly tricky situation to come out of looking good and many of the comments here seem a bit uncharitable given that.)

Ben_West🔸 @ 2025-04-01T20:01 (+2)

My guess is that the people quoted in this article would be sad if e.g. 80k started telling people not to work at Anthropic. But maybe I'm wrong - would be good to know if so!

(And also yes, "people having unreasonably high expectations for epistemics in published work" is definitely a cost of dealing with EAs!)

RavenclawPrefect @ 2025-04-01T20:41 (+7)

Oh, definitely agreed - I think effects like "EA counterfactually causes a person to work at Anthropic" are straightforwardly good for Anthropic. Almost all of the sources of bad-for-Anthropic effects from EA I expect come from people who have never worked there. 

(Though again, I think even the all-things-considered effect of EA has been substantially positive for the company, and I agree that it would probably be virtue-ethically better for Anthropic to express more of the value they've gotten from that commons.)

Lorenzo Buonanno🔸 @ 2025-03-31T20:18 (+12)

Edit: the comment above has been edited; the below was a reply to a previous version and makes less sense now. Leaving it for posterity.


You know much more than I do, but I'm surprised by this take. My sense is that Anthropic is giving a lot back:

funding

My understanding is that all early investors in Anthropic made a ton of money, it's plausible that Moskovitz made as much money by investing in Anthropic as by founding Asana. (Of course this is all paper money for now, but I think they could sell it for billions).

As mentioned in this post, co-founders also pledged to donate 80% of their equity, which seems to imply they'll give much more funding than they got. (Of course in EV, it could still go to zero)

staff

I don't see why hiring people is more "taking" than "giving", especially if the hires get to work on things that they believe are better for the world than any other role they could work on

and doesn't contribute anything back

My sense is that (even ignoring funding mentioned above) they are giving a ton back in terms of research on alignment, interpretability, model welfare, and general AI Safety work
 

To be clear, I don't know if Anthropic is net-positive for the world, but it seems to me that its trades with EA institutions have been largely mutually beneficial. You could make an argument that Anthropic could be "giving back" even more to EA, but I'm skeptical that it would be the most cost-effective use of their resources (including time and brand value)

Ben_West🔸 @ 2025-03-31T21:47 (+17)

Great points, I don't want to imply that they contribute nothing back, I will think about how to reword my comment.

I do think 1) community goods are undersupplied relative to some optimum, 2) this is in part because people aren't aware how useful those goods are to orgs like Anthropic, and 3) that in turn is partially downstream of messaging like what OP is critiquing. 

NickLaing @ 2025-03-31T10:52 (+72)

I'm a bit confused about people suggesting this is defensible.

"I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term".

There are three statements here:

1. "I'm not the expert on effective altruism" - It's hard to see this as anything other than a lie. She's married to Holden Karnofsky and knows ALL about effective altruism. She would probably destroy me on a "Do you understand EA?" quiz... I wonder how @Holden Karnofsky feels about this?


2. "I don't identify with that terminology" - Yes, true, at least now! Maybe she's still got some residual warmth for us deep in her heart?


3. "My impression is that it's a bit of an outdated term" - Her husband set up two of the biggest EA (or heavily EA-based) institutions, which are still going strong today. On what planet is it an "outdated" term? Perhaps on the planet where your main goal is growing and defending your company?

In addition to the clear associations from the OP, from their wedding page (2017), seemingly written by Daniela: "We are both excited about effective altruism: using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis. For gifts we’re asking for donations to charities recommended by GiveWell, an organization Holden co-founded."

If you want to distance yourself from EA, do it and be honest. If you'd rather not comment, don't comment. But don't obfuscate and lie pretending you don't know about EA and downplay the movement.

I'm all for giving people the benefit of the doubt, but there doesn't seem to be reasonable doubt here. 

I don't love raising this as it's largely speculation on my part, but there might still be an undercurrent of copium within the EA community among people who backed, or still back, Anthropic as the "best" of the AI acceleration bunch (which they quite possibly are) and want to hold that close after failing with OpenAI...

David Mathers🔸 @ 2025-04-01T09:24 (+37)

Everything you say is correct I think, but I think in more normal circles, pointing out the inconsistency between someone's wedding page and their corporate PR bullshit would seem a bit weird and obsessive and mean. I don't find it so, but I think ordinary people would get a bad vibe from it. 

NickLaing @ 2025-04-01T13:28 (+4)

That's interesting; I think I might move in different circles. Most people I know would not really understand the concept of there being a PR world where you present things differently from your personal life.

Perhaps you move in more corporate or higher-flying circles where this kind of disconnect is normal and where it's fine to have a public/private communication disconnect which is considered rude to challenge? Interesting!

Ben_West🔸 @ 2025-04-03T03:14 (+10)

fwiw I think in any circle I've been a part of critiquing someone publicly based on their wedding website would be considered weird/a low blow. (Including corporate circles.) [1]

  1. ^

    I think there is a level of influence at which everything becomes fair game, e.g. Donald Trump can't really expect a public/private communication disconnect. I don't think that's true of Daniela, although I concede that her influence over the light cone might not actually be that much lower than Trump's.

NickLaing @ 2025-04-03T05:29 (+4)

Wow, again I just haven't moved in circles where this would even be considered. Only the most elite 0.1 percent of people can even have a meaningful "public/private disconnect", as you have to have quite a prominent public profile for that to even be an issue. Although we all have a "public profile" in theory, very few people are famous/powerful enough for it to count.

I don't think I believe in a public/private disconnect, but I'll think about it some more. I believe in integrity and honesty in most situations, especially when you are publicly disparaging a movement. If you have chosen to lie and smear a movement with "My impression is that it's a bit of an outdated term", then I think this makes what you say a bit more fair game than other statements where you aren't low-key attacking a group of well-meaning people.

Ben_West🔸 @ 2025-04-03T15:41 (+15)

Only the most elite 0.1 percent of people can even have a meaningful "public private disconnect" as you have to have quite a prominent public profile for that to even be an issue.

Hmm yeah, that's kinda my point? Like complaining about your annoying coworker anonymously online is fine, but making a public blog post like "my coworker Jane Doe sucks for these reasons" would be weird, people get fired for stuff like that. And referencing their wedding website would be even more extreme.

(Of course, most people's coworkers aren't trying to reshape the lightcone without public consent so idk, maybe different standards should apply here. I can tell you that a non-trivial number of people I've wanted to hire for leadership positions in EA have declined for reasons like "I don't want people critiquing my personal life on the EA Forum" though.)

NickLaing @ 2025-04-03T15:57 (+4)

That's interesting, and I'm sad to hear about people declining jobs for those reasons. On the other hand, some leadership jobs might not be the right fit if they're not up for that kind of critique. I would imagine there are a bunch of ways to avoid the "EA limelight" for many positions though, of course not public-facing ones.

Slight quibble though: I would consider "Jane Doe sucks for these reasons" an order of magnitude more objectionable than quoting a wedding website to make a point. Maybe wedding websites are sacrosanct in a way I'm missing though...

Ben_West🔸 @ 2025-04-03T16:32 (+8)

the other hand though some leadership jobs might not be the right job fit if they're not up for that kind of critique

Yeah, this used to be my take, but a few iterations of trying to hire for jobs which exclude shy awkward nerds from consideration, when the EA candidate pool consists almost entirely of shy awkward nerds, have made the cost of this approach quite salient to me.

There are trade-offs to everything 🤷‍♂️

NickLaing @ 2025-04-03T17:36 (+2)

100 percent man

David Mathers🔸 @ 2025-04-01T14:27 (+4)

No, I don't move in corporate circles. 

Buck @ 2025-04-01T14:03 (+26)

I think you shouldn't assume that people are "experts" on something just because they're married to someone who is an expert, even when (like Daniela) they're smart and successful.

Lukas_Gloor @ 2025-04-01T00:01 (+15)

I agree that these statements are not defensible. I'm sad to see it. There's maybe some hope that the person making these statements was just caught off guard and it's not a common pattern at Anthropic to obfuscate things with that sort of misdirection. (Edit: Or maybe the journalist was fishing for quotes and made it seem like they were being more evasive than they actually were.)

I don't get why they can't just admit that Anthropic's history is pretty intertwined with EA history. They could still distance themselves from "EA as the general public perceives it" or even "EA-as-it-is-now." 

For instance, they could flag that EA maybe has a bit of a problem with "purism" -- like, some vocal EAs in this comment section and elsewhere seem to think it is super obvious that Anthropic has been selling out/became too much of a typical for-profit corporation. I didn't myself think that this was necessarily the case, because I see a lot of valid tradeoffs that Anthropic leadership is having to navigate, which the armchair-quarterback EAs seem to be failing to take into account. However, the communications highlighted in the OP made me update that Anthropic leadership probably does lack the integrity needed to do complicated power-seeking stuff that has the potential to corrupt. (If someone can handle the temptations of power, they should at the very least be able to handle the comparatively easy dynamics of not willingly distorting the truth as they know it.)

Greg_Colbourn ⏸️ @ 2025-04-01T18:16 (+7)

Anthropic leadership probably does lack the integrity needed to do complicated power-seeking stuff that has the potential to corrupt.

Yes. It's sad to see, but Anthropic is going the same way as OpenAI, despite being founded by a group that split from OpenAI over safety concerns. Power (and money) corrupts. How long until another group splits from Anthropic and the process repeats? Or actually, one can hope that such a group splitting from Anthropic might actually have integrity and instead work on trying to stop the race.

David_Althaus @ 2025-04-01T14:49 (+7)

from Their wedding page 2023,

Not sure if I misunderstand something but the wedding page seems from 2017? (It reads "October 21, 2017" at the top.)

NickLaing @ 2025-04-01T14:50 (+2)

Apologies, corrected.

Mjreard @ 2025-03-30T16:48 (+71)

There's a lesson here for everyone in/around EA, which is why I sent the pictured tweet: it is very counterproductive to downplay what or who you know for strategic or especially "optics" reasons. The best optics are honesty, earnestness, and candor. If you have to explain and justify why your statements that are perceived as evasive and dishonest are in fact okay, you probably did a lot worse than you could have on these fronts.

Also, on the object level, for the love of God, no one cares about EA except EAs and some obviously bad faith critics trying to tar you with guilt-by-association. Don't accept their premise and play into their narrative by being evasive like this. *This validates the criticisms and makes you look worse in everyone's eyes than just saying you're EA or you think it's great or whatever.*

But what if I'm really not EA anymore? Honesty requires that you at least acknowledge that you *were.* Bonus points for explaining what changed. If your personal definition of EA changed over that time, that's worth pondering and disclosing as well.

Neel Nanda @ 2025-03-31T10:58 (+44)

no one cares about EA except EAs and some obviously bad faith critics trying to tar you with guilt-by-association

I agree with your broad points, but this seems false to me. I think that lots of people seem to have negative associations with EA, especially given SBF and in the AI and tech space, where e.g. it's widely (and imo falsely) believed that the OpenAI coup was for EA reasons.

Mjreard @ 2025-03-31T15:55 (+11)

I overstated this, but disagree. Overall very few people have ever heard of EA. In tech, maybe you get up to ~20% recognition, but even there, the amount of headspace people give it is very small and you should act as though this is the case. I agree it's negative directionally, but evasive comments like these are actually a big part of how we got to this point.

Neel Nanda @ 2025-03-31T21:21 (+11)

I'm specifically claiming silicon valley AI, where I think it's a fair bit higher?

MarcusAbramovitch @ 2025-03-31T22:27 (+10)

I think we feel this more than is actually the case. I think a lot of people know about it but don't have much of an opinion on it, similar to how I feel about NASCAR or something.

I recently caught up with a friend who worked at OpenAI until very recently and he thought it was good that I was part of EA and what I did since college.

David Mathers🔸 @ 2025-04-01T09:25 (+4)

"widely (and imo falsely) believed that the openai coup was for EA reasons"

False why? 

Neel Nanda @ 2025-04-01T10:05 (+13)

Because Sam was engaging in a bunch of highly inappropriate behaviour for a CEO like lying to the board which is sufficient to justify the board firing him without need for more complex explanations. And this matches private gossip I've heard, and the board's public statements

Further, Adam d'Angelo is not, to my knowledge, an EA/AI safety person, but also voted to remove Sam and was a necessary vote, which is strong evidence there were more legit reasons

Matrice Jacobine @ 2025-04-02T15:21 (+1)

The "highly inappropriate behavior" in question was nearly entirely about violating safety protocols, and by the time Murati and Sutskever defected to Altman's side the conflict was clearly considered by both sides to be a referendum on EA and AI safety, to the point of the board seeking to nominate rationalist Emmett Shear as Altman's replacement.

Neel Nanda @ 2025-04-02T20:17 (+2)

I don't think the board's side considered it a referendum. Just because the inappropriate behaviour was about safety doesn't mean that a high-integrity board member who is not safety-focused shouldn't fire them!

Matrice Jacobine @ 2025-04-03T01:29 (+1)

It doesn't matter what you think they should have done, the fact is, Murati and Sutskever defected to Altman's side after initially backing his firing, almost certainly because the consensus discourse quickly became focused on EA and AI safety and not the object-level accusations of inappropriate behavior.

Oscar Sykes @ 2025-04-01T12:18 (+1)

Ilya too!

Lorenzo Buonanno🔸 @ 2025-03-30T14:28 (+60)

I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
 

I think the confusion might stem from interpreting EA as "self-identifying with a specific social community" (which they claim they don't, at least not anymore) vs EA as "wanting to do good and caring about others" (which they claim they do, and always did)


Going point by point:

Dario, Anthropic’s CEO, was the 43rd signatory of the Giving What We Can pledge and wrote a guest post for the GiveWell blog. He also lived in a group house with Holden Karnofsky and Paul Christiano at a time when Paul and Dario were technical advisors to Open Philanthropy.

This was more than 10 years ago. EA was a very different concept / community at the time, and this is consistent with Daniela Amodei saying that she considers it an "outdated term"

 

Amanda Askell was the 67th signatory of the GWWC pledge.

This was also more than 10 years ago, and giving to charity is not unique to EA. Many early pledgers don't consider themselves EA (e.g. signatory #46 claims it got too stupid for him years ago)

 

Many early and senior employees identify as effective altruists and/or previously worked for EA organisations

Amanda Askell explicitly says "I definitely have met people here who are effective altruists" in the article you quote, so I don't think this contradicts it in any way

https://x.com/AmandaAskell/status/1905995851547148659

 

Anthropic has hired a "model welfare lead" and seems to be the company most concerned about AI sentience, an issue that's discussed little outside of EA circles.

That's false: https://en.wikipedia.org/wiki/Artificial_consciousness

 

On the Future of Life podcast, Daniela said, "I think since we [Dario and her] were very, very small, we've always had this special bond around really wanting to make the world better or wanting to help people" and "he [Dario] was actually a very early GiveWell fan I think in 2007 or 2008."
The Anthropic co-founders have apparently made a pledge to donate 80% of their Anthropic equity (mentioned in passing during a conversation between them here and discussed more here)

Their first company value states, "We strive to make decisions that maximize positive outcomes for humanity in the long run."

Wanting to make the world better, wanting to help people, and giving significantly to charity are not exclusive to the EA community.

 

It's perfectly fine if Daniela and Dario choose not to personally identify with EA (despite having lots of associations) and I'm not suggesting that Anthropic needs to brand itself as an EA organisation

I think that's exactly what they are doing in the quotes in the article: "I don't identify with that terminology" and "it's not a theme of the organization or anything"

 

But I think it’s dishonest to suggest there aren’t strong ties between Anthropic and the EA community.

I don't think they suggest that, depending on your definition of "strong". Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.

 

I think it’s a bad look to be so evasive about things that can be easily verified (as evidenced by the twitter response).

I don't think X responses are a good metric of honesty, and those seem to be mostly from people in the EA community.

 

In general, I think it's bad for the EA community that everyone who interacts with it has to worry about being held liable for life for anything the EA community might do in the future.

I don't see why it can't let people decide if they want to consider themselves part of it or not.

 

As an example, imagine if I were Catholic, founded a company to do good, raised funding from some Catholic investors, and some of the people I hired were Catholic. If 10 years later I weren't Catholic anymore, it wouldn't be dishonest for me to say "I don't identify with the term, and this is not a Catholic company, although some of our employees are Catholic". And giving to charity or wanting to do good wouldn't be gotchas that I'm secretly still Catholic and hiding the truth for PR reasons. And this is not even about being a part of a specific social community.

burner2 @ 2025-03-30T17:03 (+37)

The point I was trying to make is that, separate from whether these statements are literally false, they give a misleading impression to the reader. If I didn't know anything about Anthropic and I read the words “I definitely have met people here who are effective altruists, but it's not a theme of the organization or anything”, I might think Anthropic is like Google, where you may occasionally meet people in the cafeteria who happen to be effective altruists but EA really has nothing to do with the organisation. I would not get the impression that many of the employees are EAs who work at Anthropic, or work on AI safety, for EA reasons. Nor that the three members of the trust to which they've given veto power over the company have been heavily involved in EA.

I also think being weird and evasive about this isn't a good communication strategy (for reasons @Mjreard discusses above).

 

As a side point, I'm confused when you say:

I don't think they suggest that, depending on your definition of "strong". Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.

That was said by the author of the article who was trying to make the point that there is a link between Anthropic and EA. So I don't see this as evidence of Anthropic being forthcoming.

Lorenzo Buonanno🔸 @ 2025-03-30T22:09 (+2)

As a side point, I'm confused when you say:

I don't think they suggest that, depending on your definition of "strong". Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.

That was said by the author of the article who was trying to make the point that there is a link between Anthropic and EA. So I don't see this as evidence of Anthropic being forthcoming.

 

I think in the context of the article, their quotes (44 words in total) make more sense:

 

In that context, the quotes clarify that Anthropic is not an "EA company", and give a more accurate understanding of the relationship to the reader.

A more in-depth analysis of the historical affiliations, separations, agreements, and disagreements of Anthropic's funders, founders, and employees with various parts of EA over the past 15 years would take far more than two paragraphs.
 

 

If I didn't know anything about Anthropic and I read the words “I definitely have met people here who are effective altruists, but it's not a theme of the organization or anything”, I might think Anthropic is like Google where you may occasionally meet people in the cafeteria who happen to be effective altruists but EA really has nothing to do with the organisation.

You wouldn't think that in the context of the article, though.

 

I would not get the impression that many of the employees are EAs who work at Anthropic, or work on AI safety, for EA reasons. Nor that the three members of the trust to which they've given veto power over the company have been heavily involved in EA.

I don't know what percentage of Anthropic employees consider themselves part of the EA community. Also, I don't agree that it's clear that Evidence Action's CEO is part of the effective altruism community just because Evidence Action received money from GiveWell.

https://www.linkedin.com/in/kanika-bahl-091a936/details/experience/ She has been working in global health since before effective altruism was a thing, and many/most people funded by Open Philanthropy don't consider themselves part of the community, in the same way that charities funded by Catholic donors are not necessarily Catholic. It does seem that Open Philanthropy was their main source of funding for many years though, which makes the link stronger than I originally thought.

David Mathers🔸 @ 2025-04-03T10:20 (+4)

Just as a side point, I do not think Amanda's past relationship with EA can accurately be characterized as much like Jonathan Blow's, unless he was far more involved than just being an early GWWC pledge signatory, which I think is unlikely. It's not just that Amanda was, as the article says, once married to Will. She wrote her doctoral thesis on an EA topic, how to deal with infinities in ethics: https://askell.io/files/Askell-PhD-Thesis.pdf  Then she went to work in AI for what I think is overwhelmingly likely to be EA reasons (though I admit I don't have any direct evidence to that effect), given that it was in 2018, before the current excitement about generative AI, and relatively few philosophy PhDs, especially those who could fairly easily have gotten good philosophy jobs, made that transition. She wasn't a public figure back then, but I'd be genuinely shocked to find out she didn't have at least a mildly significant behind-the-scenes effect through conversation (not just with Will) on the early development of EA ideas.

Not that I'm accusing her of dishonesty here or anything: she didn't say that she wasn't EA or that she had never been EA, just that Anthropic wasn't an EA org. Indeed, given that I just checked and she still mentions being a GWWC member prominently on her website, and she works on AI alignment and wrote a thesis on a weird, longtermism-coded topic, I am somewhat skeptical that she is trying to personally distance from EA: https://askell.io/

Lukas_Gloor @ 2025-04-03T12:36 (+2)

I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.

I never interpreted that to be the crux/problem here. (I know I'm late replying to this.) 

People can change what they identify as. For me, what looks shady in their responses are the clumsy attempts at downplaying their past association with EA.

I don't care about it because I still identify with EA; rather, I care because it goes under "not being consistently candid". (I quite like that expression despite its unfortunate history.) I'd be equally annoyed if they downplayed some other significant thing unrelated to EA.

Sure, you might say it's fine not to be consistently candid with journalists. They may quote you out of context. Pretty common advice for talking to journalists is to keep your statements as short and general as possible, esp. when they ask you things that aren't "on message." Probably they were just trying to avoid actually-unfair bad press here? Still, it was clumsy and ineffective. It backfired. Being candid would probably have been better here, even from the perspective of preventing journalists from spinning this against them. Also, they could just decide not to talk to untrusted journalists?

More generally, I feel like we really need leaders who can build trust and talk openly about difficult tradeoffs and realities.

Davidmanheim @ 2025-04-01T04:07 (+2)

You seem to have ignored a central part of what was said by Daniela Amodei: "I'm not the expert on effective altruism," which seems hard to defend.

Greg_Colbourn ⏸️ @ 2025-03-31T15:39 (+29)

It appears that Anthropic has made a communications decision to distance itself from the EA community, likely because of negative associations the EA brand has in some circles.

This works both ways. EA should be distancing itself from Anthropic, given recent pronouncements by Dario about racing China and initiating recursive self-improvement. Not to mention their pushing of the capabilities frontier.

Davidmanheim @ 2025-04-01T04:03 (+6)

As always, and as I've said in other cases, I don't think it makes sense to ask a disparate movement to make pronouncements like this.

Greg_Colbourn ⏸️ @ 2025-04-01T10:44 (+10)

No, but the main orgs in EA can still act in this regard. E.g. Anthropic shouldn't be welcome at EAG events. They shouldn't have their jobs listed on 80k. They shouldn't be collaborated with on research projects etc that allow them to "safety wash" their brand. In fact, they should be actively opposed and protested (as PauseAI have done).

Dylan Richardson @ 2025-03-30T16:24 (+10)

Giving this an "insightful" because I appreciate the documentation of what is indeed a surprisingly close relationship with EA. But also a disagree, because it seems reasonable to be skittish around the subject ("AI Safety" broadly defined is the relevant focus; adding more would just set off an unnecessary news media firestorm).

Plus, I'm not convinced that Anthropic has actually engaged in outright deception or obfuscation. This seems like a single slightly odd sentence by Daniela, nothing else.

quinn @ 2025-04-02T21:40 (+9)

I think "outdated term" is a power move, trying to say you're a "geek" to separate yourself from the "mops" and "sociopaths". She could genuinely think, or be surrounded by people who think, that 2nd-wave or 3rd-wave EA (i.e. us here on the forum in 2025) is lame, and that the real EA was some older thing that has died.