A few questions about recent developments in EA
By Peter Berggren @ 2024-11-23T02:46 (+8)
This is a linkpost to https://www.lesswrong.com/posts/hjGKy7kuefJ97xFQo/a-few-questions-about-recent-developments-in-ea
(Sorry; I forgot to cross-post when I made this post)
I've asked these same questions repeatedly across a wide range of channels and have never gotten satisfying answers, so I'm compiling them here so that a wide range of people can discuss them on an ongoing basis.
- Why has EV made many moves in the direction of decentralizing EA rather than centralizing it? In my non-expert assessment, there are pros and cons to each option; what made EV conclude that the balance favored decentralization?
- Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a promising avenue for improving the quality of AI safety research?
- Why, as an organization aiming to ensure the health of a community that is majority male and includes many people of color, does the CEA Community Health team consist of seven white women, no men, and no people of color?
- Has anyone considered the possible perverse incentives facing the aforementioned CEA Community Health team, in that exaggerating problems in the community would help justify their own existence? If so, what makes CEA as a whole think that the team's continued existence is worth the cost?
- Why do very few EA organizations do large mainstream fundraising campaigns outside the EA community, when the vast majority of outside charities do?
- Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?
- Why do university EA groups appear, at least upon initial examination, to focus so much on recruiting, to the exclusion of training students and connecting them with interested people?
- Why is there a pattern of EA organizations renaming themselves (e.g. Effective Altruism MIT renaming to Impact@MIT)? What were seen as the pros and cons, and why did these organizations decide that the pros outweighed the cons?
- When they did rename, why did they choose relatively "boring" names that are potentially worse for SEO than names that clearly reference Effective Altruism?
- Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
- When EAs talk about the "unilateralist's curse," why don't they qualify those claims with the fact that Arkhipov and Petrov were unilateralists who likely saved the world from nuclear war?
- Why hasn't AI safety as a field made an active effort to build large hubs outside the Bay, rather than the current state of affairs in which outside groups basically just function as recruiting channels to get people to move to the Bay?
I'm sorry if this is a bit disorganized, but I wanted to have them all in one place, as many of them seem related to each other.
titotal @ 2024-11-23T13:34 (+39)
I'm worried that with a lot of these "questions" you're trying to push a belief, but phrasing it as a question in order to get out of actually providing evidence for said belief.
Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a promising avenue for improving the quality of AI safety research?
First, AI safety people here tend to think that super-AI is imminent within a decade or so, so none of this stuff would kick in in time. Second, this stuff is a form of eugenics, which has a fairly bad reputation and raises thorny ethical issues even divorced from its traditional role in murder and genocide. Third, it's all untested and based on questionable science, and I suspect it wouldn't actually work very well, if at all.
Has anyone considered the possible perverse incentives facing the aforementioned CEA Community Health team, in that exaggerating problems in the community would help justify their own existence? If so, what makes CEA as a whole think that the team's continued existence is worth the cost?
Have you considered that the rest of EA is incentivised to pretend there aren't problems in EA, for reputational reasons? If so, why shouldn't community health be expanded instead of reduced?
This question is basically just a baseless accusation rephrased into a question in order to get away with it. I can't think of a major scandal in EA that was first raised by the community health team.
Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?
Because this is a dumb and baseless parallel? There's a lot more to antisemitic conspiracy theories than "powerful people controlling things". In fact, the general accusation used by Torres is to associate TESCREAL with white supremacist eugenicists, which feels kinda like the opposite end of the scale.
Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
Because this is a terrible idea, and on multiple occasions has already led to harmful cult-like organisations. AI safety people have already spilled a lot of ink about why a maximising AI would be extremely dangerous, so why the hell would you want to do maximising yourself?
Peter Berggren @ 2024-11-23T15:44 (+6)
First off, I specifically spoke to the LessWrong moderation team before writing this, with the intention of rephrasing my questions so they didn't sound like I was trying to make a point. I'm sorry if I failed at that, but making particular points was not my intention. Second, you seem to be taking a very adversarial tone toward my post, even though an adversarial tone was never my intention.
Now, on to my thoughts on your particular points.
I have in fact considered that the rest of EA is incentivized to pretend that there aren't problems. In fact, I'd assume that most of EA has. I'm not accusing the Community Health team of causing any particular scandal, just of broadly fostering an atmosphere where comparatively minor incidents may get blown out of proportion.
There seem to be clear and relevant parallels here. Seven of the fifteen people named as TESCREALists in the First Monday paper are Jewish, and many stereotypes attributed to TESCREALists in this conspiracy theory (victimhood complex, manipulating our genomes, ignoring the suffering of Palestinians) line up with antisemitic stereotypes and go far beyond just "powerful people controlling things."
I want to do maximizing myself because I was under the impression that EA is about maximizing. In my mind, if you just wanted to do a lot of good, you'd work in just about any nonprofit. In contrast, EA is about doing the most good that you can do.
Ian Turner @ 2024-11-24T20:04 (+2)
Don’t forget that maximizing is perilous.
Ben Millwood🔸 @ 2024-11-24T23:28 (+11)
I downvoted this post because I think it's really hard for a list of 12 somewhat-related questions, and particularly for the comment threads answering them, to be useful to a broader audience than just the original author. I also feel like these questions really could do more to explain what your thinking is on them, because as it is I feel like you're asking for people to put in work you haven't put in yourself.
If I had these questions, I think the main avenues I'd consider to getting them answered would be:
- post each one in its own Quick Take (= shortform), which would help separate the comment threads without dominating the frontpage with 12 posts at once,
- pick one (or more, if they're obviously related), and expand a little more on what motivates the question and what thoughts you already have, and make that a post on its own,
- consider other venues with smaller audiences (in-person or online social meetups, etc.)
You said:
I wanted to have them all in one place, as many of them seem related to each other
3 and 4 are obviously related, as are 8 and 9. I don't see the relations between the others; if you're really making the pitch that this post is one topic, I need more explanation of what that topic is.
Charlie_Guthmann @ 2024-11-23T11:18 (+6)
I agree for the most part with Michael's answers to your questions on LW so I'll just go over some slight differences.
1- This movement should not be centralized at all IMO. EA should be a library. Also, it's pretty gross that it's centralized but there is no political system apart from a token donation election. I'm pretty sure Nick Beckstead, Will MacAskill, etc. would have been fired into the moon after FTX if there had been a democratic voting process for leaders.
https://forum.effectivealtruism.org/posts/8wWYmHsnqPvQEnapu/?commentId=6JduGBwGxbpCMXymd
https://forum.effectivealtruism.org/posts/MjTB4MvtedbLjgyja/?commentId=iKGHCrYTvyLrFit2W
3- I agree with why the team is the way it is, but they have more of an obligation to correct this than your average HR department (conditional on the team's demographics actually being an important dimension of success; it's believable but not a no-brainer). My experience working in a corporate job is that HR works for the man - don't trust them at all. CEA's Community Health team is actually trying to solve problems to help all members of the community, not just the top dogs (well, at least you would hope).
5- Agree w/ Michael that they are. However, you're picking up on a real thread of arrogance, and often a smug unwillingness to engage with non-top-5 cause areas despite the flow-through effects possibly getting more money to the causes they want. I think local EA groups should focus more on fixing the issues in their cities - not because it is as important, but because I think they would gain a lot of recognition and could leverage that to fundraise more for their causes down the line. Likewise, orgs should be more willing to compromise on their work if that means getting way more money. A few years ago my parents asked me to help them research which homeless shelters in Chicago to donate to, and I told them they should give the money to (insert EA FOTM charity). They got super triggered, and I think if I had just answered their question I would have more sway over the other donations they make.
8- I found this post, though I'll say I find the concept of an EA club not having EA in its name bizarre. I dislike the name Effective Altruism, but that is the name of the movement, so yeah, I would say they overcooked here.
MarcusAbramovitch @ 2024-11-28T18:21 (+3)
I'll take a crack at some of these.
On 3, I basically don't think this matters. I hadn't considered it, largely because it seems super irrelevant. It matters far more whether any individual people shouldn't be there, or whether some individuals should be there who aren't. AFAICT, without much digging, they all seem to be doing a fine job, and I don't see the need for a man or person of color on the team, though feel free to point out a reason. I think nearly nobody feels they have a problem to report and then, upon finding out that they would be reporting to a white woman, feels they can no longer do so. I would really hate to see EA become a place where we are constantly fretting over and questioning the demographic makeup of small EA organizations to make sure they have enough of all the traits. It's a giant waste of time, energy, and other resources.
On 4, this is a risk with basically all nonprofit organizations. Do we feel AI safety organizations are exaggerating the problem? How about SWP? Do you think they exaggerate the number of shrimp, or how likely shrimp are to be sentient? How about GiveWell? Should we be concerned about their cost-effectiveness analyses? It's always a question worth asking, but usually a concern would come with something more concrete, or a statistic - for example, the charity Will MacAskill talks about in the UK that helps a certain kind of Englishperson who is statistically ahead (though I can't remember if this is Scots or Irishmen or another group).
On 7, university groups are limited in resources. Very limited. The work is almost always done part-time while managing a full-time courseload and working on their own development, among other things, so they focus on their one comparative advantage, recruitment (since it would be difficult for others to do that), and outsource the training to other places (80k, MATS, etc.).
On 10, good point, I would like to see some movement within EA to increase the intensity.
On 11, another good point. I'd love to read more about this.
On 12, another good point, but this is somewhat how networks work, unfortunately. There are just so many incentives for hubs to emerge and then develop a bunch of gravity. It kinda started in the Bay Area, and for individual actors it nearly always makes sense to go there, which creates a feedback loop.
dirk @ 2024-11-24T16:31 (+3)
Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
There was an attempt at that in rationalism, Dragon Army, though it didn't ultimately succeed; you can find the postmortem at https://medium.com/@ThingMaker/dragon-army-retrospective-597faf182e50.
Peter Berggren @ 2024-11-25T05:51 (+1)
Yeah, I heard about that. As far as I can tell, it failed for reasons specific to that particular implementation, not because of the broader idea of running a project like this. In addition, Duncan has on multiple occasions expressed support for the idea of running a similar project that could learn from the mistakes made there. So my question is: why haven't more organizations like that been started?
Charlie_Guthmann @ 2024-11-23T19:52 (+1)
One last thing - if the reason you want to join a totalizing community is to gain structure, you don't need to join an EA cult to do this!
- Join more groups unrelated to EA. Make sure to maintain a connection to this physical world and remember how beautiful it is. Friendship, community and love are extremely motivating.
- I say this as a non-spiritual lifelong atheist: you may also consider adding some faith practice like Hinduism or Buddhism. I find a lot of Hindu texts and songs to be extremely beautiful, and although I don't believe in any of the magic stuff, the idea of reincarnation and karma and the accompanying art/rituals can motivate me to do the best I can for this world.
Feel free to dm me if you want
Peter Berggren @ 2024-11-23T20:51 (+1)
Thanks for the advice. I was saying that this type of community might be good, not just because I would benefit, but because I know a lot of other people who also would, and because, due to a lot of arbitrary-seeming concerns, it's likely highly neglected.
Charlie_Guthmann @ 2024-11-23T21:41 (+1)
Can you try to paint me a picture of how you specifically would benefit?