Some quick notes on "effective altruism"

By Jonas V @ 2021-03-24T15:30 (+207)

Introduction


"Effective Altruism" sounds self-congratulatory and arrogant to some people:


"Effective altruism" sounds like a strong identity:


Some further, less important points:


Some thoughts on potential implications:


Thanks to Stefan Torges and Tobias Pulver for prompting some of the above thoughts and helping me think about them in more detail.


RyanCarey @ 2021-03-24T17:32 (+99)

I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community"), but I realize that others probably wouldn't agree with me on this, it would be a costly change, and it may not even be feasible anymore to make the change at this point. OTOH, given that the community might grow much bigger than it currently is, it's perhaps worth making the change now? I'd love to be proven wrong, of course.

This sounds very right to me. 

Another way of putting this argument is that a "global priorities (GP)" community is both more likable and more appropriate than an "effective altruism (EA)" community. More likable because it's less self-congratulatory, arrogant, identity-oriented, and ideologically intense.

More appropriate (or descriptive) because it better focuses on large-scale change rather than individual action, and on ideas rather than individual people or their virtues. I'd also say, more controversially, that when introducing EA ideas, I would be more likely to ask "how ought one to decide what to work on?" or "what are the big problems of our time?" rather than "how much ought one to give?" or "what is the best way to solve problem X?" Moreover, I'd be more likely to bring up Parfit's catastrophic-risks thought experiment than Singer's shallow pond. A more appropriate name could help reduce bait-and-switch dynamics, and help with recruiting people more suited to the jobs that we need done.

If you have a name that's much more likable and somewhat more appropriate, then you're in a much stronger position when introducing the ideas to new people, whether they are highly susceptible to them or less so. So I imagine introducing these ideas as "GP" to a parent, an acquaintance, a donor, or an adjacent student group would be less of an uphill battle than "EA" in almost all cases.

Apart from likability and appropriateness, the other five of Neumeier's naming criteria are:

Overall, GP looks like a big upgrade. Another thing to keep in mind is that it may be more of an upgrade than it seems from discussions within the existing community, because that community consists only of those who were not repelled by the current "EA" name.

Concretely, what would this mean? Well... instead of EA Global, EA Forum, EA Handbook, EA Funds, EA Wiki, you would probably have GP Summit, GP Forum, (G)P Handbook, (G)P Funds, GP Wiki, etc. Obviously, there are some switching costs in regard to the effort of renaming and the loss of name recognition, but as an originator of two of these things, I think the names themselves are improvements - it seems much more useful to go to a summit, or read resources, about global priorities than ones focused on altruism in the abstract. Orgs like OpenPhil/LongView/80k wouldn't have to change their names at all.

Moreover, while changing the name to GP would break the names of some orgs, it wouldn't always do that. In fact, the Global Priorities Institute was initially going to be the EA Institute, but the name had to be switched to sound more academically respectable. If the community were renamed the Global Priorities Community, then GPI would get to be named after the community that it originated from and be academically respectable at the same time, which would be super-awesome. The fact that prioritisation arises more frequently in EA org names than any phrase except for "EA" itself might be telling us something important. Consider: "Rethink Priorities", "Global Priorities Project", "Legal Priorities Project", "Global Priorities Institute", "Priority Wiki", "Cause Prioritisation Wiki".

Another possible disadvantage would be if it made it harder for us to attract our core audience. But to be honest, I think that the people who are super-excited about utilitarianism and rationality are pretty likely to find us anyway, and having a slightly larger and more respectable-looking community would if anything help with that.

Finally, renaming can be an opportunity for re-centering the brand and strategy overall. How exactly we might refocus could be controversial, but it would be a valuable opportunity.

So overall, I'd be really excited about a name change!

Jonas Vollmer @ 2021-03-25T08:08 (+70)

I really liked this comment, thanks!

The current discussion in the comments seems quite centered on "effective altruism vs. global priorities". I just wanted to highlight that I spent, like, 3 minutes in total thinking about alternative naming options, and feel pretty confident that there are probably quite a few options that work better than "global priorities". In fact, when renaming CLR, we only came up with the new name after brainstorming many options. So I would really like us to generate a list of >10 great alternatives (i.e. actually viable alternatives) before starting to compare them.

MichaelA @ 2021-03-26T02:21 (+18)

This seems like a really good point.

Off the top of my head, I think how we[1] should proceed is something like:

  • Generate a long list of possible labels
  • Generate a set of goals we have / criteria for evaluating the labels
  • Generate a set of broader approaches we could take, such as having different labels that we use for different audiences, or different labels for different segments of the community, etc.
  • Then evaluate the labels and approaches (or combinations thereof) against the goals / criteria we came up with

I think the first three actions can/should be done roughly in parallel, and that the fourth should mostly wait till we've done the first three. Or we might iterate through "first three actions, then fourth action, then first three actions again ..." a few times.

And I'd say this is best done through one or more well-run surveys, as you suggest. Maybe there could first be surveys that ask EAs to generate ideas for labels, goals/criteria, and broader approaches, then ask them to rate given ideas and approaches against given goals/criteria (or maybe that should be split into a followup survey). And then there could be surveys of non-EAs that just skip to that last step (since I imagine it'd be hard for them to come up with useful ideas without context first). 

[1] I'm not sure who the relevant "we" is. 

Habryka @ 2021-03-24T21:55 (+44)

I think a name change might be good, but am not very excited about the "Global Priorities" name. I expect it would attract mostly people interested in seeking power and "having lots of influence" and I would generally expect a community with that name to be very focused on achieving political aims, which I think would be quite catastrophic for the community.

I actually considered this specific name in 2015 while I was working at CEA US as a potential alternative name for the community, but we decided against it at the time for reasons in this space (and because changing names seems hard).

Max_Daniel @ 2021-03-25T12:44 (+38)

While I'm not sure we're using terms like "political" and "power" in the same way, as far as I can tell this worry makes a lot of sense to me.

However, I think there is an opposite failure mode: mistakenly believing that because of one's noble goals and attitudes one is immune to the vices of power, and can safely ignore the art of how to navigate a world that contains conflicting interests.

A key assumption from my perspective is that political and power dynamics aren't something one can just opt out of. There is a reason why thinkers from Plato through Machiavelli to Carl Schmitt have insisted that politics is a separate domain that merits special attention (and I'm saying this as someone who is not particularly sympathetic to any of these three on the object level). [ETA: Actually I'm not sure if Plato says that, and I'm confused why I included him originally. In a sense he may suggest the opposite view, since he sometimes compares the state to the individual.]

Internally, community members with influence over more financial or social capital have power over those whose projects depend on such capital.  There certainly are different views with respect to how this capital is best allocated, and at least for practical purposes I don't think these are purely empirical disagreements and instead involve 'brute differences in interests'.

Externally, EAs have power over beneficiaries when they choose to help some but not others. And a lot of EA projects are relevant to the interests of EA-external actors that form a complex network of partly different and partly aligned interests and different amounts of power over each other. Perhaps most drastically, a lot of EA thought around AI risk is about how to best influence how essentially the whole world will be reshaped (if not an outright plan for how to essentially take over the world).

Therefore, I think we will need to deal with 'politics' anyway, and we will attract people who are motivated by seeking power anyway. Non-EA political structures and practice contain a lot of accumulated wisdom on how to navigate conflicting interests while limiting damage from negative-sum interactions, on how to keep the power of individual actors in check, and on how to shape incentives in such a way that power-seeking individuals make prosocial contributions in their pursuit of power. (E.g. my prior is that any head of government in a democracy is at least partly motivated by pursuing power.)

To be clear, I think there are significant problems with these non-EA practices. (Perhaps most notably negative x-risk externalities from international competition.) And if EA can contribute technological or other innovations that help with reducing these problems, I'm all for it.

Yet overall I feel like I more often see EAs make the mistake of naively thinking they can ignore their externally imposed entanglement in political and power dynamics, and that there is nothing to be learned from established ways of reining in and shaping these dynamics (perhaps because they view established practice and institutions largely as a morass of corruption and incompetence one better steers clear of). E.g. some significant problems I've seen at EA orgs could have been avoided by sticking more closely to standard advice, such as having a functional board that provides accountability to org leadership.

My best guess is that, on the margin, it would be good to attract more people with a more common-sense perspective on politics and power-seeking, as opposed to people who lack the ability or willingness to understand how power operates in the world and how to best navigate this. If rebranding to "Global Priorities" would have that effect (which I think I'm less confident in than you), then I'd count that as a reason for rebranding (though I doubt it would be among the top 5 most important pro or con reasons).

Stefan_Schubert @ 2021-03-24T23:17 (+26)

I agree that changing names is hard and costly (you can't do it often), something that definitely should be taken into account.

Jonas Vollmer @ 2021-03-25T07:52 (+16)

I'm noticing I don't fully understand the way in which you think "Global Priorities" would attract power-seekers, or what you mean by that. Like, I have a vague sense that you're probably right, but I don't see the direct connection yet. Would be very interested in more elaboration on this.

Habryka @ 2021-03-25T17:28 (+27)

I mean, I just imagine what kind of person would be interested, and it would mostly be the kind of person who is ambitious, though not necessarily competent, and who would seek out whatever opportunities or clubs are associated with the biggest influence over the world, sound the highest-status, have the most prestige, or sound like they would be filled with the most powerful people. I have met many of those people, and a large fraction of high-status opportunities that don't also strongly select for merit seem filled with them.

Currently both EA and Rationality are weird in a way that is not immediately interesting to people who follow that algorithm, which strikes me as quite good. In universities, when I've gone to things that sounded like "Global Priorities" seminars, I mostly met lots of people with political science degrees or MBAs, really focused on how they could acquire more power, and the whole conversation was very status-oriented.

Jonas Vollmer @ 2021-03-25T21:20 (+6)

Thanks, I find that helpful, and agree that's a dangerous dynamic, and one that could be exacerbated by such a name change.

Ozzie Gooen @ 2021-03-25T03:09 (+16)

I think this is a good point. That said, I imagine it's quite hard to really tell. 

It could be really useful to get empirical data here: online experimentation in simple cases, or maybe we could even have some university chapters try out different names and see if we can infer any substantial differences.
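For what it's worth, here is a minimal sketch of how the analysis for the simple online-experiment version might look, assuming each respondent is shown one candidate name and gives a 1-5 first-impression rating. All the ratings below are made-up placeholder data, not results from any real test:

from scipy import stats

# Hypothetical 1-5 first-impression ratings, one group per candidate name
ratings_ea = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]  # respondents shown "effective altruism"
ratings_gp = [3, 4, 4, 3, 5, 3, 4, 2, 4, 3]  # respondents shown "global priorities"

# Welch's t-test: does the mean first impression differ between the two names?
t_stat, p_value = stats.ttest_ind(ratings_ea, ratings_gp, equal_var=False)
print(f"mean EA: {sum(ratings_ea) / len(ratings_ea):.2f}, "
      f"mean GP: {sum(ratings_gp) / len(ratings_gp):.2f}, "
      f"p = {p_value:.3f}")

In practice you'd want much larger samples (and arguably an ordinal model rather than a t-test on Likert data), but even a crude version like this would beat guessing.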

RyanCarey @ 2021-03-24T23:26 (+12)

Interesting.

1) I'm convinced that a "GP" community would attract somewhat more power-seeking people. But they might be more likely to follow (good) social norms than the current consequentialist crowd. Moreover, we would be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people. And today's community is older and more BS-resistant with some legibly-trustworthy leaders. But you seem to think there would be a big and harmful net effect - can you explain?

2) Assuming that "GP" is too intrinsically political, can you think of any alternatives that have some of the advantages of "GP" without that disadvantage?

Ben Pace @ 2021-03-27T00:05 (+8)

we would be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people

I don't expect a brand change to "Global Priorities" to bring in more action-oriented people. I expect fewer people would donate money themselves, for instance; they would see it as cute but obviously not having any "global" impact, and therefore below them.

(I think it was my inner Quirrell / inner cynic that wrote some of this comment, but I stand by it as honestly describing a real effect that I anticipate.)

Habryka @ 2021-03-25T00:17 (+7)

But we would also be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people

I don't understand this. We would be trending towards seeking more power, which would further attract power-seekers. We have already substantially gone down this path. You might have different models of what attracts manipulative people. My model is that doing visibly power-seeking and high-status work is one of the most common attractors.

Moreover, today's community is older and more BS-resistant with some legibly-trustworthy leaders.

I think we have overall become substantially less BS-resistant as we have grown and have drastically increased the surface area of the community, though it depends a bit on the details. 

But you seem to think there would be a big and harmful net effect - can you explain?

Yep, I would be up for doing that, but alas won't have time for it this week. It seemed better to leave a comment voicing my concerns at all, even if I don't have time to explain them in depth; I do apologize for that.

RyanCarey @ 2021-03-25T00:23 (+15)

I don't understand this. We would be trending towards seeking more power, which would further attract power-seekers. We have already substantially gone down this path. You might have different models of what attracts manipulative people. My model is that doing visibly power-seeking and high-status work is one of the most common attractors.

I'm concerned about people seeking power in order to mistreat, mislead, or manipulate others (cult-like stuff), as seems more likely in a social community, and less likely in a group of people who share interests in actually doing things in the world. I'm in favour of people gaining influence, all things equal!

Habryka @ 2021-03-25T01:14 (+4)

Alas, I think that isn't actually what tends to attract the most competent manipulative people. Random social communities might attract incompetent or average-competence manipulative people, but those are much less of a risk than the competent ones. In general, professional communities, in particular ones aiming for relatively unconditional power, strike me as having a much higher density of manipulative people than random social communities.

I also think when I go into my models here, the term "manipulative" feels somewhat misleading, but it would take me a while longer to explain alternative phrasings. 

RyanCarey @ 2021-03-25T02:03 (+7)

TBC, this feels like a bit of a straw man of my actual view, which is that power and communality jointly contribute to risks of cultishness and manipulativeness.

Habryka @ 2021-03-25T17:27 (+8)

nods My concerns have very little to do with cultishness, so my guess is we are talking about very different concerns here. 

richard_ngo @ 2021-03-27T22:54 (+40)

I think the "global priorities" label fails to escape several of the problems that Jonas argued the EA brand has. In particular, it sounds arrogant for someone to say that they're trying to figure out global priorities. If I heard of a global priorities forum or conference, I'd expect it to have pretty strong links with the people actually responsible for implementing global decisions; if it were actually just organised by a bunch of students, then they'd seem pretty self-aggrandizing.

The "priorities" part may also suggest to others that they're not a priority. I expect "the global priorities movement has decided that X is not a priority" seems just as unpleasant to people pursuing X as "the effective altruism movement has decided that X is not effective".

Lastly, "effective altruism" to me suggests both figuring out what to do, and then doing it. Whereas "global priorities" only has connotations of the former.

RyanCarey @ 2021-03-28T00:07 (+9)

What kinds of names do you think would convey the notion of prioritised action while being less self-aggrandising?

richard_ngo @ 2021-03-30T09:48 (+18)

Well, my default opinion is that we should keep things as they are;  I don't find the arguments against "effective altruism" particularly persuasive, and name changes at this scale are pretty costly.

Insofar as people want to keep their identities small, there are already a bunch of other terms they can use - like longtermist, or environmentalist, or animal rights advocate. So it seems like the point of having a term like EA on top of that is to identify a community. And saying "I'm part of the effective altruism community" softens the term a bit.

around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists"

This seems like the most important point to think about; relatedly, I remember being surprised when I interned at FHI and learned how many people there don't identify as effective altruists. It seems indicative of some problem, which seems worth pursuing directly. As a first step, it'd be good to hear more from people who have reservations about identifying as an effective altruist. I've just made a top-level question about it, plus an anonymous version - if that describes you, I'd be interested to see your responses!

Max_Daniel @ 2021-03-24T20:58 (+33)

Great comment. To these points I would also add (or maybe just summarize some of the points you made) that "global priorities" seems to have more empirical/world-focused connotations to me, whereas "effective altruism" sounds a lot more philosophical/ideological to me.

E.g. I agree that "global priorities" suggests questions like "what are the big challenges of our time?", which I like a lot more than e.g. "how altruistic should we be?", "is there something like 'true altruism'?" or whichever other thing "effective altruism" makes people first think of.

Of course, I agree that ultimately the project of doing as much good as we can involves both empirical and philosophical questions. But relative to today, I think we'd be better equipped to execute that project well with a stronger emphasis on empirical and practical questions and less emphasis on abstract philosophy. (Though to be fair to the EA label, the status quo is due more to founder effects than to the name differentially attracting philosophers.)

MichaelA @ 2021-03-26T02:29 (+26)

The fact that prioritisation arises more frequently in EA org names than any phrase except for "EA" itself might be telling us something important. Consider: "Rethink Priorities", "Global Priorities Project", "Global Priorities Institute", "Priority Wiki", "Cause Prioritisation Wiki".

It seems worth noting that all of those orgs/wikis are focused on producing or collecting research, not on more directly acting on the world. This is of course a key part of EA, but not the whole of it. 

In line with that, I think that "global priorities", "global priorities community", or similar terms sound like they're mostly about working out what the global priorities are and less about actually acting on those answers. EA is already often perceived as too research-focused (though I'm not saying I agree with those perceptions myself), so it might be good to avoid things that would exacerbate that.

RyanCarey @ 2021-03-26T15:49 (+10)

I like this style of thinking, but I don't think it pushes in the direction that you suggest. EA entities with "priorities" in the name disproportionately work on surveys and policy, whereas those with "EA" in the name tend to be communal or meta, e.g. EA Forum, EA Global, EA Handbook, and CEA. Groups that act in the world tend to have neither, like GWWC, AMF, OpenAI.

On balance, I think "global priorities" connotes more concreteness and action-orientation than "EA", which is more virtue- and identity-oriented. If I'm wrong on this, that would partly convince me.

MichaelA @ 2021-03-26T23:36 (+2)

I guess I intended my comment above to make three claims:

  1. It is empirically true that those orgs/wikis you noted as having "priorities" in their names are focused on producing or collecting research, not on more directly acting on the world
  2. Separately, to me, "global priorities" does seem to have connotations of working out what the global priorities are and less about actually acting on those answers.
  3. Claim 1 seems to be in line with claim 2. 
    1. But I think claim 1 wasn't the basis for claim 2; I already felt those connotations before you named those orgs, though of course I had already heard of the orgs.

But I don't see these claims as super important, because:

  • We can just run a bunch of surveys and see what connotations other people perceive
  • Action-oriented vs research-oriented is just one of many relevant dimensions
  • "global priorities" is just one alternative name

I guess I see the value of my comment as quickly highlighting small reasons to doubt your initial views, and therefore additional reasons to gather more options, consider our goals/criteria/desiderata more (I like that your comment lists some general goals for names), and run a bunch of surveys.

RyanCarey @ 2021-03-27T00:35 (+2)

OK, what names would we expect to promote action-orientation if "GP" wouldn't?

Ben Pace @ 2021-03-27T01:24 (+6)

I do not know. Let me try generating names for a minute. Sorry. These will be bad.

“Marginal World Improvers”

“Civilizational Engineers”

“Black Swan Farmers”

“Ethical Optimizers”

“Heavy-Tail People”

Okay I will stop now.

RyanCarey @ 2021-03-27T02:07 (+15)

A friend's "names guy" once suggested calling the EA movement "Unfuck the world"...

Pablo @ 2021-03-27T12:44 (+14)

We can begin here.

RyanCarey @ 2021-04-03T00:00 (+8)

EA popsci would be fun! 

§1. The past was totally fucked. 

§2. Bioweapons are fucked. 

§3. AI looks pretty fucked. 

§4. Are we fucked? 

§5. Unfuck the world!

Pablo @ 2021-04-03T12:46 (+6)

I will resist the temptation to further expand that list.

Ben Pace @ 2021-03-27T02:11 (+5)

“Hello, I’m an Effective Altruist.”

“Hello, I’m a world-unfucker.”

Honestly, I think the second one might be more action-oriented. And less likely to attract status-seekers. Alright, I’m convinced, let’s do it :)

Ben Pace @ 2021-03-26T23:58 (+20)

I was just reflecting on the term 'global priorities'. I think to me it sounds like it's asking "what should the world do", in contrast to "what should I do". The former is far mode, the latter is near. I think that staying near mode while thinking about improving the world is pretty tough. I think when people fail, they end up making recommendations that could only work in principle if everyone coordinates at the same time, and also as a result shape their speech to focus on signaling to achieve these ends, and often walk off a cliff of abstraction. I think when people stay in near mode, they focus on opportunities that do not require coordination, but opportunities they can personally achieve. I think that EAs caring very much about whether they actually helped someone with their donation has been one of the healthier epistemic things for the community. Though I do not mean to argue it should be held as a sacred value.

For example, I think the question "what should the global priority be on helping developing countries" is naturally answered by talking broadly about the West helping Africa build a thriving economy, talk about political revolution to remove corruption in governments, talk about what sorts of multi-billion dollar efforts could take place like what the Gates Foundation should do. This is a valuable conversation that has been going on for decades/centuries.

I think the question "what can I personally do to help people in Africa" is more naturally answered by providing cost-effectiveness estimates for marginal thousands of dollars to charities like AMF. This is a valuable conversation that I think has had orders of magnitude less effort put into it outside the EA community. It's a standard idea in economics that you can reliably get incredibly high returns on small marginal investments, and I think it is these kind of investments that the EA community has been much more successful at finding, and has managed to exploit to great effect.

"global priorities (GP)"  community is... more appropriate  than "effective altruism (EA)" community... More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action

Anyway, I was surprised to read you say that, in direct contrast to what I was thinking, and I think how I have often thought of Effective Altruism.

MichaelA @ 2021-03-26T02:25 (+5)

Extendability. GP wins. It's more natural to use GP than EA to describe non-agents e.g. GP research vs EA research, and "policy prioritisation" is a better extension than "effective policy", because we're more about doing the important thing than just doing something well.

But it seems like GP is harder to extend to agents specifically? Currently, I can say "I'm an [EA / effective altruist / aspiring EA]". That sounds a bit arrogant, but probably less so than saying "I'm a global priority" :P

Obviously that's not the label we'd use for individuals, but I'm not sure what the alternative would be. Some ideas that seem bad:

  • Global prioritist
  • GP (obviously that acronym is already taken, and in any case it'd just expand out to things like "I'm a global priority" or "we're global priorities")
  • Member of the global priorities community (way too long)

(In any case, as Jonas notes, our focus for now should probably be on brainstorming ideas rather than pitting them against each other so far. So this comment may not be very important.)

RyanCarey @ 2021-03-26T15:30 (+25)

I kinda think that "I'm an EA/he's an EA/etc" is mega-cringey (a bad combo of arrogant + opaque acronym + tribal), and that deprecating it is a feature, rather than a bug.

Though you can just say "I'm interested in / I work on global priorities / I'm in the prioritisation community", or anything that you would say about the AI safety community, for example.

Ben Pace @ 2021-03-27T01:28 (+14)

I kinda think that "I'm an EA/he's an EA/etc" is mega-cringey (a bad combo of arrogant + opaque acronym + tribal)

It sounds like you think it’s bad that people have identified their lives with trying to help people as much as they can? Like, people like Julia Wise and Toby Ord shouldn’t have made it part of their life identity to do the most good they can do. They shouldn’t have said “I’m that sort of person” but they should have said “This is one of my interests”.

Neel Nanda @ 2021-03-27T08:13 (+6)

I also find that a bit cringy. To me, the issue is saying "I have SUCCEEDED at being effective at altruism", which feels like a high bar and somewhat arrogant to explicitly admit to.

MichaelA @ 2021-03-26T23:45 (+8)

But:

  • By a similar token, one could replace "I'm/He's an EA" with "I'm/He's interested in effective altruism", which would at least somewhat reduce the problems you note.
    • People usually don't do this, which I think is because we naturally gravitate towards shorter phrases. I guess this could be seen as a downside of the fact that the current phrase can be conveniently shortened.
      • But, of course, the ability to shorten also has an upside (saving time and space).
  • I often say/write and hear/read things like "EAs are often interested in ...", "One mistake some EAs make is...", etc. This is more common than me referring to myself as an EA, and somewhat less at risk of seeming arrogant (though it still can). I think expanding all such uses of "EAs" to "people interested in global priorities" would be a hassle (though not necessarily net negative).
  • "I'm interested in global priorities" and "I work on global priorities" also seem kind-of arrogant, bland, and/or weirdly vague to me. Maybe like a parody of vacuous business speak.
    • Not sure how common this perception would be - we should run a survey.

(Though I feel I should emphasise that I just see these as small reasons to doubt your views, which therefore pushes in favour of gathering more options, considering our goals/criteria/desiderata more, and running a bunch of surveys. My intention isn't really to definitively argue against "global priorities".)

ETA: I just saw that Will Bradshaw already said things quite similar to what I said here, but a bit more concisely...

willbradshaw @ 2021-03-26T14:44 (+20)

Yeah, I'm much more sympathetic to concerns with "effective altruist" than with "effective altruism", and it doesn't seem like GP does any better in that regard – all the solutions you could apply here ("I'm a member of the global priorities community", "I'm interested in global priorities") also apply to EA.

Maybe the fact that the short forms are so awkward for GP is part of the idea? Like, EA has this very attractive but somewhat problematic personalised form ("effective altruist"); GP's personalised forms are all unattractive, so you avoid the problematic attractor?

But it still seems that, if personalised forms are a big part of the concern (which I think they are), this is a good argument in favour of keeping looking. Which was Jonas's proposal anyway.

MichaelA @ 2021-03-26T02:27 (+14)

(Or, of course, we could cut the arrogance down by just saying "I'm an early-career aspiring global priority.")

MaxDalton @ 2021-03-25T08:45 (+84)

I asked my team about this, and Sky provided the following information. This quarter CEA did a small brand test, with Rethink’s help. We asked a sample of US college students if they had heard of “effective altruism.” Some respondents were also asked to give a brief definition of EA and a Likert scale rating of how negative/positive their first impression was of “effective altruism.”

Students who had never heard of "effective altruism" before the survey still had positive associations with it. Comments suggested that they thought it sounded good: effectiveness means doing things well; altruism means kindness and helping people. (IIRC, the average Likert scale score was 4+ out of 5). There were a small number of critiques too, but fewer than we expected. (Sorry that this is just a high-level summary - we don't have a full writeup ready yet.)

Caveats: We didn't test the name “effective altruism” against other possible names. Impressions will probably vary by audience. Maybe "EA" puts off a small-but-important subsection of the audience we tested on (e.g. unusually critical/free-thinking people).

I don't think this is dispositive - I think that testing other brands might still be a good idea. We're currently considering trying to hire someone to test and develop the EA brand, and help field media enquiries. I'm grateful for the work that Rethink and Sky Mayhew have been doing on this.

MHarris @ 2021-03-30T14:06 (+10)

I wonder if there would be a strong difference between "What do you think of a group/concept called 'effective altruism'", "Would you join a group called 'effective altruism'", "What would you think of someone who calls themselves an 'effective altruist'", "Would you call yourself an 'effective altruist'".

I wonder which of these questions is most important in selecting a name.

sky @ 2021-03-29T00:31 (+8)

Thanks for sharing that info, Max. It was an interesting first pass at some of these questions. 

Ardenlk @ 2021-03-25T09:28 (+47)

I agree there are a lot of things that are nonideal about the term, especially the connotations of arrogance and superiority.

However, I want to defend it a little:

Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as a moral commitment, a set of ideas, and a community all at once.

I feel less sure this is true of EA more than of other terms, at least with respect to the community aspect. I think the reason some terms don't seem to imply a community is that there isn't [much of] one. But insofar as we want to keep the EA community, and I think it's very valuable and that we should, changing the term won't shrink the identity associated with it along that dimension. I guess what I'm saying is: I'd guess the largeness of the identity associated with EA is not that related to the term.

MichaelA @ 2021-03-26T03:46 (+4)

I think these are good points.

It actually sounds non-'ideological' to me, if what that means is being committed to certain ideas of what we should do and how we should think – it sounds like it's saying 'hey, we want to do the effective and altruistic thing. We're not saying what that is.' It sounds more open, more like 'a question' than many -isms.

Readers of these comments may also be interested in the post Effective Altruism is a Question (not an ideology). (I assume you've already read the post and had it somewhat in mind, but also that some readers wouldn't know the post.)

David_Moss @ 2021-03-24T16:45 (+43)

Empirical research on people's responses to the term (and alternative terms) certainly seems valuable, and important to do before any potential rebrand.

Anecdotally, I find that people hate references to "priorities" or "prioritising" as much as or more than they hate "effective altruism." Referring to specific "global priorities" quite overtly implies that other things are not priorities. And terminology aside, I find that many people outright oppose "prioritisation" in the field of philanthropic or pro-social endeavours for roughly this reason: it's rude/inappropriate to imply that certain good things that people care about are more important than others. (The use of the word "global" just makes this even worse: it implies that you don't even just think that they are local or otherwise particular priorities, but rather that they are the priorities for everyone!)

Stefan_Schubert @ 2021-03-24T19:44 (+27)

To some extent, I think that what those who dislike effective altruism dislike isn't that term, but rather the set of ideas it expresses. As such, replacing it with another term that's supposed to express broadly the same set of ideas (like "priorities" or "global priorities") might make less of a difference than one might think at first glance (though it likely makes some difference).

What might make a greater difference, for better or worse, is choosing a term that expresses a quite different set of ideas. E.g. I think that people have substantially different reactions to the term "longtermism".

AGB @ 2021-03-25T00:16 (+23)

+1. A short version of my thoughts here is that I’d be interested in changing the EA name if we can find a better alternative, because it does have some downsides, but this particular alternative seems worse from a strict persuasion perspective.

Most of the pushback I feel when talking to otherwise-promising people about EA is not really as much about content as it is about framing: it’s people feeling EA is too cold, too uncaring, too Spock-like, too thoughtless about the impact it might have on those causes deemed ineffective, too naive to realise the impact living this way will have on the people who dive into it. I think you can see this in many critiques.

(Obviously, this isn’t universal; some people embrace the Spock-like-mindset and the quantification. I do, to some extent, or I wouldn’t be here. But I’ve been steadily more convinced over the years that it’s a small minority.)

You can fight this by framing your ideas in warmer terms, but it does seem like starting at ‘Global Priorities community’ makes the battle more uphill. And I find losing this group sad, because I think the actual EA community is relatively warm, but first impressions are tough to overcome.

Low confidence on all of the above, would be happy to see data.

Jonas Vollmer @ 2021-03-24T16:51 (+14)

I still think "effective altruism" sounds a bit more like we've already found the correct answer to "what should we prioritize" rather than just being interested in the question, but I agree these are some good points.

Cullen_OKeefe @ 2021-03-25T03:39 (+34)

It seems like EA could benefit from a dedicated, evidence-based messaging consultancy that served all EA orgs.

Peter_Hurford @ 2021-03-25T13:27 (+68)

Rethink Priorities is pretty close to this! We've done message testing now for many orgs across cause areas... Centre for Effective Altruism, Will MacAskill, Open Phil, the Centre for the Study of Existential Risk, Humane Society for the United States, The Humane League, Mercy for Animals, and various EA-aligned lobbyists. We have a lot of skills and resources to do this well and already have a well-built pipeline for producing this kind of work.

We'd be happy to consider doing more work for other people in EA and the EA movement as a whole!

Cullen_OKeefe @ 2021-03-25T17:06 (+8)

Amazing. I knew RP did a lot of great work in this space, but didn’t realize how systematized you’d gotten. Great stuff :-)

AnonymousEAForumAccount @ 2021-03-25T14:44 (+7)

This is great! Can you summarize your findings across these tests?

JamesOz @ 2021-03-25T11:57 (+3)

I’ve been thinking about this! I  really have no sense if anyone involved in building the EA movement/EA orgs has sat down and really meticulously thought about narratives, audiences, framing and other elements of building a strong message. Does anyone know if this is being done?


If not, this seems like a potentially really exciting piece of work. If we just look at organisations that had a strong "meme"/message, whether it's McDonald's or Fridays for Future, it can really help an org reach its desired outcome. For us this might not be exponential growth in the general public (if we're concerned about keeping strong community values) but exponential growth in certain social groups, e.g. donors or talented individuals in specific fields. The consensus on messaging says that emotional narratives work far better than facts, and I think that could be an area where EA messaging hasn't been optimal – my impression is that we're far more likely to speak about statistics than about emotional stories of those we're helping.


One piece of work could be focus groups with the various audiences, high net-worth donors as an example, to figure out what message resonates the most, and then trying to align wider EA orgs that do fundraising around this message. The same could go for recruiting people involved in technical AI safety etc. I get the impression it could be quite high-leverage: having been involved in crowdfunding, I've seen that the strength of your messaging can make a huge (10x) difference to your results. This is a field where you can be quite rigorous about building narratives based on evidence, so it seems like a no-brainer for EA-aligned folks.

Would love to hear if any of this work is already being done, as I definitely see it as a need in the EA-meta ecosystem. I could see it fitting in with CEA potentially or, like you said, an external consultancy or non-profit.

Taymon @ 2021-03-25T07:08 (+29)

Reading this thread, I sort of get the impression that the crux here is between people who want EA to be more institutional (for which purpose the current name is kind of a problem) and people who want it to be more grassroots (for which purpose the current name works pretty okay).

There are other issues with the current name, like the thing where it opens us up to accusations of hypocrisy every time we fail to outperform anyone on anything, but I'm not sure that that's really what's driving the disagreement here. Partly, this is because people have tried to come up with better names over the years (though not always with a view towards driving serious adoption of them; often just as an intellectual exercise), and I don't think any of the candidates have produced widespread reactions of "oh yeah I wish we'd thought of that in 2012", even among people who see problems with the current name. So coming up with a name that's better than "effective altruism", by the lights of what the community currently is, seems like a pretty hard problem. (Obviously this is skewed somewhat by the inertia behind the current name, but I don't think that fully explains what's going on here.) When people do suggest different names, it tends to be because they think some or all of the community is emphasizing the wrong things, and want to pivot towards right ones.

"Global priorities community" definitely sounds incompatible with a grassroots direction; if I said that I was starting a one-person global priorities project in my basement, this would sound ridiculously grandiose and like I'd been severely Dunning-Krugered, whereas with an EA project this is fine.

For what it's worth, I'd prefer a name that's clearly compatible with both the institutional and the grassroots side, because it seems clear to me that both of these are in scope for the EA mandate and it's not acceptable to trade off either of them. The current name sounds a little more grassroots than I'd like, but again, I don't have any better ideas.

At one point I pitched Impartialist Maximizing Rationalist-Empiricist-Epistemological Welfarist-Axiological Ideology, or IMREEWAI for short, but for some strange reason nobody liked that idea :-P

MHarris @ 2021-03-24T18:34 (+28)

This is a discussion that has happened a few times. I do think that 'global priorities' has already grown as a brand enough to be seriously considered for wider use, and perhaps even as the main term for the movement.

I'd still be reluctant to ditch 'effective altruism' entirely. There is an important part of the original message of the movement (cf pond analogy) that's about asking people to step up and give more (whether money or time) - questioning personal priorities/altruism. I think we've probably developed a healthier sense of how to balance that ('altruism/life balance') but it feels like 'global priorities' wouldn't cover it.

Meadowlark @ 2021-03-24T19:26 (+25)

This is an excellent point. I "joined" EA because of the pond idea. I found the idea of helping a lot of people with the limited funds I could spare really appealing, and it made me feel like I could make a real difference. I didn't get into EA because of its focus on global prioritization research.

Of course, what I happened to join EA because of is not super important, but I wonder how others feel. Like, EA as "donate more to AMF and other effective charities" is a really different message than EA as "research and philosophize about what issues are really important/neglected."

I'm not sure which EA is anymore, and changing the name to global priorities might change the movement from the Doing Good Better movement to the "Case for Strong Longtermism" movement, and those are very different. But I'm very uncertain about which one EA will/should end up as.

Jonas Vollmer @ 2021-03-25T08:16 (+7)

I want to push back against the idea that a name change would implicitly change the movement in a more longtermist direction (not sure you meant to suggest that, but I read that between the lines). I think a name change could quite plausibly also be very good for the global health and development and animal welfare causes. It could shift the focus from personal life choices to institutional change, which I think people aren't thinking about enough. 

The EA community would probably greatly increase its impact if it focused a bit less on personal donations and a bit more on spending ODA budgets more wisely, improving developing-world health policy, funding growth diagnostics research, vastly increasing government funding for clean meat research, etc.

Denise_Melchin @ 2021-03-26T20:48 (+28)

The EA community would probably greatly increase its impact if it focused a bit less on personal donations and a bit more on spending ODA budgets more wisely, improving developing-world health policy, funding growth diagnostics research, vastly increasing government funding for clean meat research, etc.

I think I disagree with this given what the community currently looks like. (This might not be the best place to get into this argument, since it's pretty far from the original points you were trying to make, but here we go.)

Two points of disagreement:

i) The EA Survey shows that current donation rates by EAs are extremely low. From this I conclude that there is way too little focus on personal donations within the EA community. That said, if we get some of the many EAs who are donating very little to work on the suggestions you mention, that is plausibly a net improvement, as the donation rates are so low anyway.

Relatedly, personal donations are one of the few things that everyone can do. In the post, you write that "The longer-term goal is for the EA community to attract highly skilled students, academics, professionals, policy-makers, etc.", but as I understand the terms you use, this is probably less than 10% of the Western population. But maybe you disagree with that?

Accordingly, I do not view this as the longer-term goal of the EA community, but only as one of them. Most other people cannot have high-flying high-impact careers, which is most people, and should focus on maximizing donations instead.

ii) I think the EA community currently does not have the expertise to reliably have a positive impact on developing-world policy. It is extremely easy to do harm in this area. Accordingly, I am also sceptical of the idea of introducing a hits-based global development fund, though I would need to understand better what you are intending there. I would be very keen for the EA community to develop expertise in this area, and some of the suggestions you make, e.g. growth diagnostics research, should help with that. But we are very far from having expertise right now and should act accordingly.

Jonas Vollmer @ 2021-03-29T10:16 (+12)

Edit: I think my below comment kind of misses the point – my main response is simply: Some people could probably do a huge amount of good by, e.g., helping increase meat alternatives R&D budgets; this seems a much bigger opportunity than increasing donations and similarly tractable, so we should focus more on that (while continuing to also increase donations).

--

Some quick thoughts:

  • I personally think the EA community could plausibly grow 1000-fold compared to its current size, i.e. to 2 million people, which would correspond to ~0.1% of the Western population. I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, not old enough to be too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community. We should decide whether we want to grow more than 1000-fold once we've grown 100-fold and have more information.
  • Low donation rates indeed feel concerning. To me, the lack of discussion of "how can we best make ODA budgets more effective" and similar questions feels even more concerning, as the latter seems a much bigger missed opportunity.
  • I think lots of people can get government jobs where you can have a significant positive impact in a relevant area at some point of your career, or otherwise contribute to making governments more effective. I tentatively agree that personal donations seem more impactful than the career impact in many cases, but I don't think it's clear that we should overall aim to maximize donations. It could be worth doing some more research into this.
  • I would feel excited about a project that tries to find out why donation rates are low (lack of money? lack of room for more funding? saving to give later and make donations more well-reasoned by giving lump sums? a false perception that money won't do much good anymore? something else?) and how we might increase them. (What's your guess for the reasons? I'd be very interested in more discussion about this, it might be worth a separate EA Forum post if that doesn't exist already.)
  • As you suggest, if the EA community doesn't have the expertise to have a positive impact on developing-world policy, perhaps it should develop more of it. I don't really know, but some of these jobs might not be very competitive or difficult, yet disproportionately impactful. Even if you just try to help discontinue funding programs that don't work, prevent budget cuts for the ones that do, and generally encourage better resource prioritization, that could be very helpful.

Max_Daniel @ 2021-03-29T12:09 (+27)

I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, not old enough to be too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community.

Thanks for stating your view on this as I would guess this will be a crux for some.

FWIW, I'm not sure if I agree with this. I certainly agree that there is a real risk from 'dilution' and other risks from both too rapid growth and too large a total community size.

However, I'm most concerned about these risks if I imagine a community that's kind of "one big blob" without much structure. But that's not the only strategy on the table. There could also be a strategy where the total community is quite large but there is structure and diversity within the community regarding what exactly 'being an EA' means for people, who interacts with whom, who commands how many resources, etc.

I feel like many other professional, academic, or political communities are both quite large overall and, at least to some extent, maintain spaces that aren't harmed by "dilution". Perhaps most notably, consider that almost any academic discipline is huge and yet there is formal and informal structure that to some extent separates the wheat from the chaff. There is the majority of people who drop out of academia after their PhDs and the tiny minority who become professors; there is the majority of papers that will never be cited or are of poor quality, and then there is the very small number of top journals; there is the majority of colleges and universities where faculty are mostly busy teaching and from which we don't expect much innovation, and the tiny fraction of research-focused top universities, etc.

I'm not saying this is clearly the way to go, or even feasible at all, for EA. But I do feel quite strongly that "we need to protect spaces for really high-quality interactions and intellectual progress" or similar - even if we buy them as assumptions - does not imply it's best to keep the total size of the community small.

Perhaps as an intuition pump, consider how the life of Ramanujan might have looked if there hadn't been a maths book, a "non-elite" university, and other education accessible to someone in his situation.

Jonas Vollmer @ 2021-03-29T16:30 (+8)

Yeah, these are great points. I agree that with enough structure, larger-scale growth seems possible. Basically, I agree with everything you said. I'd perhaps add that in such a world, "EA" would have a quite different meaning from how we use the term now. I also don't quite buy the point about Ramanujan – I think "spreading the ideas widely" is different from "making the community huge".

(Small meta nitpick: I find it confusing to call a community of 2 million people "small" – really wish we were using "very large" for 2 million and "insanely huge" for 1% of the population, or similar. Like, if someone said "Jonas wants to keep EA small", I would feel like they were misrepresenting my opinion.)

Max_Daniel @ 2021-03-29T17:28 (+6)

I think "spreading the ideas widely" is different from "making the community huge"

Yeah, I think that's an important insight I also agree with.

In an ideal world the best thing to do would be to expose everyone to some kind of "screening device" (e.g. a pitch or piece of content with a call to action at the end) which draws them into the EA community if and only if they'd make a net valuable contribution. In the actual world there is no such screening device, but I suspect we could still do more to expand the reach of "exposure to the initial ideas / basic framework of EA" while relying on self-selection and existing gatekeeping mechanisms for reducing the risk of dilution etc.

My main concern with such a strategy would actually not be that it risks dilution but that it would be more valuable once we have more of a "task Y", i.e. something a lot of people can do. (Or some other change that would allow us to better utilize more talent.)

Denise_Melchin @ 2021-03-29T12:47 (+8)

I personally think the EA community could plausibly grow 1000-fold compared to its current size, i.e. to 2 million people, which would correspond to ~0.1% of the Western population. I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, not old enough to be too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community. We should decide whether we want to grow more than 1000-fold once we've grown 100-fold and have more information.

I meant this slightly differently than you interpreted it I think. My best guess is that less than 10% of the Western population are capable of entering potentially high impact career paths and we already have plenty of people in the EA community for whom this is not possible. This can be for a variety of reasons: they are not hard-working enough, not smart enough, do not have sufficient educational credentials, are chronically ill, etc. But maybe you think that most people in the current EA community are very well qualified to enter high impact career paths and our crux is there?

While I agree that government jobs are easier to get into than other career paths lauded as high impact in the EA Community (at least this seems to be true for the UK civil service), my impression is that I am a lot more skeptical than other EAs that government careers are a credible high impact career path. I say this as someone who has a government job. I have written a bit about this here, but my thinking on the matter is currently very much a work in progress and the linked post does not include most reasons why I feel skeptical. To me it seems like a solid argument in favour has just not been made.

I would feel excited about a project that tries to find out why donation rates are low (lack of money? lack of room for more funding? saving to give later and make donations more well-reasoned by giving lump sums? a false perception that money won't do much good anymore? something else?) and how we might increase them. (What's your guess for the reasons? I'd be very interested in more discussion about this, it might be worth a separate EA Forum post if that doesn't exist already.)

I completely agree with this (and I think I have mentioned this to you before)! I'm afraid I only have wild guesses as to why donation rates are low. More generally, I'd be excited about more qualitative research into what EA community members think their bottlenecks to achieving more impact are.

Jonas Vollmer @ 2021-03-29T16:44 (+5)

Thanks for clarifying – I basically agree with all of this. I particularly agree that the "government job" idea needs a lot more careful thinking and may not turn out to be as great as one might think.

I think our main disagreement might be that, in my view, donating large amounts effectively requires an understanding of EA ideas and an altruistic dedication that only a small number of people are ever likely to develop, so I don't see the "impact through donations" route as an unusually strong argument for steering EA messaging in a particular direction or for having a very large movement. And I consider the fact that some people can have very impactful careers a pretty strong argument for emphasizing the careers angle a bit more than the donation angle (though we should keep communicating both).

(Disclaimer: Written very quickly.)

I also edited my original comment (added a paragraph at the top) to make this clearer; I think my previous comment kind of missed the point.

David_Moss @ 2021-03-29T12:20 (+6)

I personally think the EA community could plausibly grow 1000-fold compared to its current size, i.e. to 2 million people, which would correspond to ~0.1% of the Western population. I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, not old enough to be too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community.


While we're empirically investigating things, what proportion of the population could potentially be aligned with EA might also be a high-priority thing to investigate.

Denkenberger @ 2021-03-28T07:27 (+6)

Though I was surprised when I read the results of the first EA survey – I was expecting that the majority of non-student EAs would donate 10% of their pretax income – I don't think it's quite fair to say that EA donations are extremely low. The mean donation of EAs in the 2019 survey was 7.5% of pretax income; the mean donation of Americans is about 3.6% of pretax income. However, given that a significant number of EAs outside the US give less, that many EAs are students, and that I think the EA mean is per person rather than weighted by donation size (as the US average is), I would guess EAs donate about 3-5 times as much as the same demographic that is not EA. I do think that we could do better, and a lot of good could come from more donations.
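A minimal back-of-the-envelope sketch of this estimate in Python – the donation figures are just the ones quoted above, and the adjustment factors are illustrative assumptions chosen to show how one might land in the stated 3-5x range, not measured values:

```python
# Back-of-the-envelope comparison of EA vs. US mean donation rates,
# using only the rough figures cited in the comment above.

ea_mean_rate = 0.075  # 2019 EA survey: mean donation ~7.5% of pretax income
us_mean_rate = 0.036  # US average: ~3.6% of pretax income

raw_ratio = ea_mean_rate / us_mean_rate
print(f"Unadjusted ratio: {raw_ratio:.1f}x")  # ~2.1x

# Illustrative (assumed) adjustment factors for the caveats mentioned above:
# non-US EAs giving less, many EAs being students, and the EA mean being
# per person rather than donation-weighted. These are guesses, not data.
for adjustment in (1.5, 2.5):
    print(f"With a {adjustment}x adjustment: {raw_ratio * adjustment:.1f}x")
```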

MHarris @ 2021-03-26T09:12 (+9)

I'm all for focusing on the power of policy, but I'm not sure giving up any of our positions on personal donations will help get us there.

Meadowlark @ 2021-03-25T14:48 (+6)

I think I more or less agree with you. However, my point wasn't about longtermism, but rather about the difference between the project DGB was engaged in and MacAskill's later work on cause prioritization. One was saying, "Hey! Evidence can be really helpful in doing good, and we should care about how effective the charities we donate to are," and the other was a really cerebral, unintuitive piece about what we should care about, and contribute to, for expected-value reasons. These are two very different projects, and it's not obvious to me which one EA is at the moment. To use a cliché: EA has an identity crisis, maybe, and the classic EA pitch of Peter Singer, DGB, and AMF is a very distinct pitch from the global prioritization one. And whichever one EA decides on, it should acknowledge that these are different, regardless of which is more or less impactful.

james_aung @ 2021-03-25T18:20 (+27)

A small and simple change that CEA could make would be to un-bold the 'Effective' in the 'Effective Altruism' logo used on https://www.effectivealtruism.org/ and on EAG t-shirts.

I find the bolding comes across as unnecessarily smug emphasis on the 'Effective' in 'Effective Altruism'.

Onni_Aarne @ 2021-03-24T19:48 (+26)

"Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger and respected institutions focusing on large-scale policy change, but the name still feels reminiscent of the former.

It's not just that it has developed in that direction; it has developed in many directions. Could the solution, then, be to use different brands in different contexts? "Global priorities community" might work better than "Effective Altruism community" when doing research and policy advocacy, but as an organizer of a university group, I feel "Effective Altruism" works quite well when trying to help (particularly smart and ambitious) individuals do good effectively. For example, I don't think a "Global priorities fellowship" sounds like something that is supposed to be directly useful for making more altruistic life choices.

Outreach efforts focused on donations and aimed at a wider audience could use yet another brand. In practice it seems like Giving What We Can and One for the World already play this role.

Jonas Vollmer @ 2021-03-25T07:59 (+1)

I think it might actually be pretty good if EA groups called themselves Global Priorities groups, as this shifts the implicit focus from questions like "how do we best collect donations for charity?" to questions like "how can I contribute to [whichever cause you care about] in a systematic way over the course of a lifetime?", and I think the latter question is >10x more impactful to think about.

(I generally agree if there are different brands for different groups, and I think it's great that e.g. Giving What We Can has such an altruism-oriented name. I'm unconvinced that we should have multiple labels for the community itself.)

BarryGrimes @ 2021-03-25T12:45 (+8)

I agree that the community should have only one label, but the community has multiple goals and is seeking to influence very different target audiences. In each case, we need to use language that appeals to the target audience.

Perhaps the effective altruism brand should be more like the Unilever brand, with marketing segmented into multiple ‘product brands’. These could include existing brands like 80,000 Hours and Giving What We Can, whilst the academic project becomes “global priorities research” rather than “effective altruism”.

The right name for groups will depend on the target audience and what message testing reveals. I expect something like “High Impact Careers” may be more attractive to a wider audience than “effective altruism”.

MichaelA @ 2021-03-26T02:37 (+4)

I think it might actually be pretty good if EA groups called themselves Global Priorities groups, as this shifts the implicit focus from questions like "how do we best collect donations for charity?" to questions like "how can I contribute to [whichever cause you care about] in a systematic way over the course of a lifetime?"

I think it's true that introductions to EA and initial perceptions of EA often focus on increasing regular individuals' donations to charity (as well as better allocating such donations) to an extent that's disproportionate both to the significance of those topics and to how much of a focus they actually are in EA.

But I'm not confident that the label "effective altruism" makes that issue worse than the label "global priorities" would. We already aren't using "charity" in the name, and my guess is that "altruism" isn't very strongly associated with "individual charity donations" in most people's minds (I'd guess the term "altruism" is at least as strongly associated with "heroic sacrifices"). I'd guess this problem is more a result of earlier EA messaging, plus local groups often choosing to lead with a focus on individual donations.

(Of course, survey research could provide better answers on this question than our guesses would.)

Peter_Hartree @ 2021-03-26T13:17 (+25)

Thanks for writing this, Jonas.

For what it's worth:

  1. I share the concerns you mentioned.
  2. I personally find the name "effective altruism" somewhat cringe and off-putting. I've become used to it over the years but I still hear it and feel embarrassed every now and then.
  3. I find the label "effective altruist" several notches worse: that elicits a slight cringe reaction most of the time I encounter it.
  4. The names "Global priorities" and "Progress studies" don't trigger a cringe reaction for me.
  5. I have a couple of EA-inclined acquaintances who have told me they were put off by the name "effective altruism".
  6. While I don't like the name, the thought that it might be driving large and net positive selection effects does not seem crazy to me.
  7. I would be glad if someone gave this topic further thought, plausibly to the extent of conducting surveys and speaking to relevant experts.

echoward @ 2021-03-25T21:29 (+24)

While I think this post was useful to have shared and this is a topic that is worth discussing, I want to throw out a potential challenge that seems at least worth considering: perhaps the name "effective altruism" is not the true underlying issue here? 

My (subjective, anecdotal) experience is that topics like this crop up every so often. Topics "like this" refer to things like:

I wonder if some of what is underpinning these discussions is less the accuracy or branding issues of particular names and more the difficulty of coordinating a growing community?

As the number of people interested in the ideas associated with effective altruism grows, more people enter the space with different values and interpretations of those ideas. It becomes harder for everyone to get what they want from the community, and less likely that everyone involved agrees that things are moving in a positive direction.

My concern would be that even if one were to wave a magic wand and successfully rebrand the movement, at some point the same issues would arise when people again began to feel dissatisfied with something about the movement (or how others perceive it) and started casting around for a solution. Unfortunately, I think the solution is unlikely to be one of branding; it might instead require us to figure out a lot more about what the goals of this endeavor are and how to successfully coordinate large groups of people who will inevitably have competing values and viewpoints.

Jonas Vollmer @ 2021-03-26T10:21 (+13)

New post that's related to this (just discovered it now): https://forum.effectivealtruism.org/posts/o5ChDMcooDFG8cfPJ/why-i-prefer-effective-altruism-to-global-priorities

FlorianH @ 2021-03-24T18:11 (+13)

Thanks – I think the antipathy towards the name “Effective Altruism”, or worse, towards “I’m an effective altruist”, is difficult to overstate.

Also, somewhat related to what you write, I happened to think to myself just today: “I am (and most of us are) just as much an effective egoist as an effective altruist” – after all, even the holiest of us probably cannot help putting a significantly higher weight on our own welfare than on that of average strangers.

Nevertheless, the current term has some potential upsides – equally, I'm not sure they matter much at all, but I attribute a small chance to their being really important: if some people are kept away by the name's somewhat geeky, partly unfashionable connotations, maybe these are exactly the people who would mostly have been distractors anyway. I think the rather narrow EA community has an extraordinary vibe along a few really important dimensions, and that seems invaluable. (In that sense, while RyanCarey mentions we may not attract the core audience with different names, I find the problem might be more the other way round: we might simply dilute the core.)

Maybe I’m completely overestimating this, and maybe it’s not outweighing at all the downside of attracting/appealing to fewer. But in a world where the lack of fruitful communication threatens entire social systems, maybe having a particularly strong core in that regard is highly valuable.

RyanCarey @ 2021-03-25T00:34 (+7)

Agree that selection effects can be desirable and that dilution effects may matter if we choose a name that is too likable. But if we hold likability fixed, and switch to a name that is more appropriate (i.e. more descriptive), then it should select people more apt for the movement, leading to a stronger core.

Aditya Vaze @ 2021-03-25T00:05 (+2)

Strongly agree. The potential benefits of selection effects are underrated in these discussions.

kdbscott @ 2021-03-30T06:44 (+10)

at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists"

Small note that this could also be counter-evidence – these are folks who are doing a good job of 'keeping their identity small' yet are also interested in gathering under the 'effective altruism' banner. (Edit: never mind – it seems they identified with other -isms.)

Somehow the EA brand is threading the needle of being a banner and also not mind-killing people ... I think.

Would EA be much worse if we removed the 'banner' aspect of it? I don't know... it feels like we're running an experiment in whether it's possible to nurture and grow global-prioritist qualities in the world (in people who might not otherwise have done much global prioritism, without a banner/community to help them get started). It's not clear we're done with that experiment – if anything, initial results look promising from where I'm sitting. So my initial thought is that I don't quite want to remove the banner variable yet (but then again, maybe "Global Priorities" could keep that variable).

Jonas Vollmer @ 2021-03-30T13:45 (+3)

I specifically wrote:

Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists", despite self-identifying, e.g., as feminists, utilitarians, or atheists.

For further clarification, see also the comment I just left here.

kdbscott @ 2021-03-30T17:41 (+3)

Ah whoops, thanks for the clarification. I'm glad that delineation was made during the session! 

Hmm, so maybe a weaker point: perhaps banners like 'atheism' and 'feminism' have the property 'blend me with your identity or face consequences', whereas EA doesn't as much, and maybe that's better. ¯\_(ツ)_/¯ 

Anyway, thanks for the post Jonas, I agree with many points and have had similar experiences.

G Gordon Worley III @ 2021-03-24T18:02 (+10)

"Effective Altruism" sounds self-congratulatory and arrogant to some people:

Your comments in this section suggest to me that there might be something going on where EA is only appealing within some particular social context. Maybe it's appealing within WEIRD culture, and the further you get from peak WEIRD, the more objections there are. Alternatively, maybe there's something specific to northern European or even just Anglo culture that makes it work there and not as well elsewhere, translation issues aside.

Julia_Wise @ 2021-03-25T19:50 (+8)

I think I'd expect US culture to be most OK with self-congratulation, and basically everywhere else (including the UK) to be more allergic to it? But most of the people who voted on the name in the first place were British.

Jonas Vollmer @ 2021-08-16T10:52 (+9)

A friend (edit: Ruairi Donnelly) raised the following point, which rings true to me:

If you mention EA in a conversation with people who don't know about it yet, it often derails the conversation in unfruitful ways, such as discussing the person's favorite pet theory/project for changing the world, or discussing whether it's possible to be truly altruistic. It seems 'effective altruism' causes people to ask the wrong questions.

In contrast, concepts like 'consequentialism', 'utilitarianism', 'global priorities', or 'longtermism' seem to lead to more fruitful conversations, and the complexity feels more baked into the framing.

vaidehi_agarwalla @ 2021-03-25T03:27 (+3)

EA organizations that have "effective altruism" in their name or make it a key part of their messaging might want to consider de-emphasizing the EA brand, and instead emphasize the specific ideas and causes more. I personally feel interested in rebranding "EA Funds" (which I run) to some other name partly for these reasons.


This makes a lot of sense to me if there's a cap on donations due to branding – especially for the neartermist funds, and, if you create a legible LTF fund, for that as well.

How big a priority is it for EA Funds to grow the donor base to non-EA donors, and on what time scale?

Jonas Vollmer @ 2021-03-25T08:21 (+4)

Right now, reaching non-EA donors is not a big priority, and the rebrand is correspondingly pretty far down on my priority list. This may change on a horizon of 1-3 years, though. (Rebranding has some benefits other than reaching non-EA donors, such as reducing reputational risk for the community from making very weird grants.) 

MarcSerna @ 2021-10-07T06:50 (+1)

Great post.

Has this debate evolved? Did someone try to give the 10 names?

I like "efficient altruism"; it drops the smugness a bit.

"Neoutilitarianism" could also make sense. But maybe someone who understands EA better than I do can point out the differences between what EA has been and utilitarianism.

Changing the name now, after 10 years, can be really, really difficult, but the best time is as soon as possible. It is also difficult because EA is not a single organization or an exact philosophy with one person behind it.

I usually say "I admire/follow the Effective Altruism community" rather than saying I am an Effective Altruist.