General support for “General EA”

By Arthur Malone🔸 @ 2023-07-26T21:37 (+156)

TL;DR: When I say “General EA” I am referring to the cluster that includes the term “effective altruism,” the idea of “big-tent EA,” and the branding and support of those ideas. This post is a response in opposition to many calls for renaming EA or backing away from an umbrella movement. I make some strategic recommendations and take something of a deep dive using my own personal history/cause prioritization as a case study for why “General EA” works (longpost is long, so there are TL;DRs for each major section). 

I’m primarily aiming to see if I’m right that there’s a comparatively silent group that supports EA largely as it is. If you’re in that category and don’t need the full rationale and story, the call to action is to add a comment linking your favorite “EA win” (success story/accomplishment you’d like to have people associate with EA).

 

Since long before last fall's reckoning, I've been following discussions in the EA community both for and against the "effective altruism" name, debates about rebranding, and calls to splinter the various worldviews currently covered by the EA umbrella into separate groups. There are too many to link, and since this post is ultimately in opposition to them, I prefer not to elevate any specific post. 

I’m actually entirely supportive of such discussions. And I think the reevaluation post-FTX and continued questioning during the EA Strategy fortnight is great, precisely because it is EA in action: trying to use reason to figure out how to do the most good means applying that methodology to our movement, its principles and its public image. 

Unfortunately, I haven’t seen a post advocating for a holistic reaffirmation of “EA has already developed and coalesced around a great (if not optimal) movement. We should not only stay the course, but further invest in the EA status quo.” Because while status quo bias is a real thing to stay vigilant against, it is also the case that the movement, its name and branding, and the actions it takes in the world, are all the cumulative work of a lot of incredibly intelligent people doing their best to take the right course of action. Don’t fix what ain’t broke.

As I interact with EAs as a community builder (I also lead the team organizing EAGxNYC, applications closing soon!) I have heard people advocating for the strategy/branding changes that are described on the forum. However, I perceive them as a minority compared to those who generally think we should just continue “being EA.” It is often the case that those in favor of maintaining a positive status quo do not express their position as vocally as those aiming for a change, so I wrote this post to reflect my own view of why it is preferable to stick with general EA. 

I aim to be somewhat ambitious and address several longstanding criticisms about EA, and hope to get some engagement from those with different viewpoints. But I also hope that some of the (what I perceive to be) silent majority will chime in and demonstrate that we’re here and don’t want to see EA splintered, rebranded, or otherwise demoted in favor of some other label.

“Effective Altruism”

TL;DR: There’s no word in the English language that accurately covers EA’s principles or equally applies to all its community. Every possible name would be a compromise, and “effective altruism” has the benefit of demonstrable success and established momentum. As long as we stick with it as a descriptive moniker rather than asserting it as a prescriptive identifier, it can serve us as well as the names chosen by other movements.

Circa 2009-2014, I attempted to write a book with the goal of starting a movement of individuals who try to take their ethics seriously and make positive changes in the world. I have shared some of that content with EA friends, and still hope to publish it in some fashion, but a major takeaway is this: during that time I watched EA take form, participated in many early discussions, and eventually decided that it would be better to join this community. And a large part of that was simply that “effective altruism” works. It works as a name to describe the goal, it works to attract the right people, and it works as a movement in the world to effect positive change.

I don’t wholly agree with the classic post “Effective Altruism is a question (not an ideology)” because, as much as I do think EA is better thought of as a question, I don’t think that “it’s better to focus on the question” is at all unique to EA. That post contrasts EA with other groups/ideologies like feminism and libertarianism, and attributes the difference to those movements presenting an answer around which they coordinate. I agree that, for example, feminism coordinates around the answer (yes) to the question, “Should men and women be equal?” I also think that the majority of what feminism is comes from other questions that follow, like “what should we do to make the world reflect that equality?”, for which there is much debate and many courses of reasonable (and less reasonable) action. 

I think EA is also just as coordinated around an answer (yes) to a question: “Should we use evidence and reason to do the most good we can do with limited resources?” And that the majority of what EA is comes from, again, the questions that follow and the actions taken in response.

I think a great deal of confusion comes from conflating groups/ideologies that have defined and prescriptive memberships with those that don’t. In the above post, Islamism is included among the groups contrasted with EA, alongside feminism and libertarianism. Islamism (at least in certain sects, like many religions) has prescriptive actions one must take in order to be considered a member. I think it is a clear failure mode for advocacy movements, like feminism and effective altruism, to adopt this prescriptive lens. 

When discussions devolve into membership claims (“You can’t be a feminist and do X” or “I would never do anything to denigrate women, I’m a feminist!”) then something has gone wrong. I call myself a feminist and an effective altruist because I think they’re informative and true descriptions of what I care about and what I try to do, not because I believe they’re inherent parts of my identity or because I follow prescriptive guidelines that declare me a member.

Further, I think this conception of advocacy “X-isms” as descriptive rather than prescriptive is all that’s necessary to refute the claim made by critics that “by calling yourself effective altruism, you’re implying that all other altruism is ineffective.” This is among the loudest reasons I've seen against "effective altruism" as a name. I personally think it's specious, silly, and unavoidable: anything our movement calls itself, if it is at all descriptively accurate, will garner the same pushback. Fundamentally we are aiming to evaluate and improve altruistic actions; that will always ruffle the feathers of those who don't excel by our metrics, and their reaction shouldn't deter work that we think has positive impact.

Every advocacy group names itself for the thing it was created to support. We are a movement defined by trying to discern and advocate for effective altruism[1], the same way that environmentalists advocate for environmental conservation. Environmentalists describing themselves as such does not imply that everyone else is anti-conservation. 

We have to have a name, and we’ve got one that’s sufficiently accurate, evocative, and demonstrably effective at defining and recruiting an impactful community.

In support of “Big Tent EA”

TL;DR: There are already some great pieces on “Big Tent EA” and the value of worldview diversification. If I can add anything here, it’s from a case study of one (me). I hold some “typical EA positions” but not all, have moved from one known EA cause to another, and am happy to support those EAs who don’t agree with me on everything. I see EA’s ability, as an umbrella, to cover me and others like me as one of its biggest strengths.

My personal bet is that transformative AI is likely to arrive in my (expected) lifetime, and I currently think that doing my best to make this transition go well is the most important thing I can do to live in accordance with my own ethics. There’s a lot of uncertainty in that bet, but it’s the highest expected value contribution I think I can make. My belief in the timeline has remained surprisingly constant since the late ’90s (as a non-expert, I’m very suspicious of my own consistency being either early luck or some motivated reasoning). But despite my best-guess timeline staying roughly the same, my personal plan to positively impact the world has changed dramatically.

-

In university I didn’t see a path to helping with AI and didn’t trust my own timeline estimates. I was motivated by a desire to help people and felt that I had a comparative advantage with biology, so I studied physiology, neuroscience, and cognitive science. I wanted to keep my path open to studying intelligence, but my direct plan was to use the Peace Corps as a stepping stone to medical school so I could eventually join MSF (Doctors Without Borders). I always wanted to contribute to impact at scale, and thought that as a doctor with that experience I could make large-scale improvements to public health policy. Life happens, and while working as an EMT I experienced an injury that closed off that path. During my years of recovery I became convinced that there were other options with higher expected value I could pursue, but I never stopped thinking about the dire need for more medical care among the world’s poorest.

I love that both “directly save lives in the most underserved areas of the world” and “prepare for a massive moment in human history that could go terribly bad and wipe us all out or usher in a spectacular future of unimaginable prosperity,” the two drives pulling me in opposite directions, are partners in the EA movement. I was first drawn to EA during its formative years when it focused on global health and wellbeing (GHWB) and earning to give (EtG). I want these to remain pillars of EA, not for aesthetic or sentimental reasons, but because they are solid. They are proven effective, and rely on a fraction of the uncertainty that AI safety/X-risk ideas do. It is epistemically humble to hedge our bets (my own donations are split roughly equally between the two, and I am pursuing career actions that I think will simultaneously benefit all EA causes). It is, in my opinion, a virtually unassailable[2] position that it is good to provide life-saving and life-improving medical care to the most desperate; and it is equally apparent that there are better and worse ways to go about this. Working this problem is, I claim, the essence of effective altruism.

-

On the other hand, I’ll out myself and say that I do not share many of my fellow EAs’ intuitions that reducing animal suffering is of comparable moral weight to improving human flourishing. But I love that their work is part of our shared movement. I’m glad to have mostly transitioned to a plant-based diet and to work as a community builder in support of their work (for example, EAGxNYC will have several animal-welfare talks and will serve all-vegan food. Is that enough plugs to get you to apply before the deadline on July 31st?). Because while I feel incredibly strongly about the GHWB and AI paths, the fact that I don’t feel the same way about animal welfare could just be because I’m wrong. If I’m wrong, I want to be wrong gracefully and not contribute to terrible animal suffering out of epistemic arrogance. I want to be convinced of the truth, no matter how counterintuitive or surprising it may be to me at first. Those advocating for the wellbeing of animals are doing extraordinarily rigorous work with the same methodology and passion that EAs direct towards GHWB and X-risk causes, so that makes them my allies. 

Saving lives, right now. Betting on the uncertain claim that we’re living at the hinge of history and have the power and responsibility to influence it. Advocating for the welfare of non-human animals (or even non-biological entities). To me, the throughline between these three seemingly disparate cause areas is incredibly clear. EA takes ideas about how to do good seriously, even if they’re weird[3]. All moral progress was odd to the majority until it became the new established norm. Cultivating an overarching movement to support all kinds of “taking doing good seriously” appears to me like a rising tide that lifts all boats. Leaving it an open question means that individual workers and donors can still choose to allocate their time and money to the cause they understand best; this means that the benefit of participating under a shared banner does not come at any direct cost to any cause.

I’ve chosen to list these three because of how I’ve related to them; there are many more, but these three sufficiently illustrate a point. For me, I am intellectually and intuitively convinced of the EV of working on AI safety despite the uncertainty, believe effective GHWB work is beyond logical or ethical reproach, and think animal welfare work is of questionable value but am happy to support it. Some of the EAs I respect most have my positions on AI safety and animal welfare completely reversed, and EA’s ability to sensibly host both viewpoints is, to me, one of the strongest elements of the whole endeavor. I’ve said I consider those working on animal welfare to be my allies even though I’m not yet convinced, but I want to go one step further. There are even plenty of EAs who are in direct opposition to the work that I do and the causes I prioritize; if they do so from a place of trying to make the world a better place and are applying good reasoning in their arguments, then they are my allies too.

In support of maintaining the EA brand

TL;DR: PR being effortful is the cost of doing business for any movement trying to make change. We’ve encountered difficulties that are to be expected for anything of our size and ambition. If we want to continue and grow the impact that is central to EA, then we should address reasonable criticism and otherwise focus on broadcasting our impressive successes.

I think maintaining and further investing in the existing EA brand is the right call, but only part of that position is because of my belief in “big-tent EA.” A lot of the rest comes from a purely strategic place. I think questions of PR are essential, and while focusing on PR can lead to bad places, that does not mean we can or should minimize our focus on optics. We should simply manage it with our eyes open and, in my opinion, transparently.

I think it is to the movement’s benefit to say that I view the partnership between uncertain, “weird” causes and established obvious causes as both sensible and tactical. I don’t want it to be at all hidden that I support and cheer for effective GHWB actions for their direct impacts, as well as the legitimacy they grant the EA movement and thereby the indirect benefit seen by more speculative efforts like X-risk mitigation. The world is the way that it is, and we have to understand and acknowledge that reality. That includes people who want to do good but who would be very hard to convince about the danger of superhuman AI or the possible moral weight of wild insects.

So again, if it ain’t broke, don’t fix it. Don’t let the perfect be the enemy of the good. Sure, the EA brand has taken some hits, and starts off with the difficult position of seeming grandiose and judgmental. But as far as I’ve seen, those hits are attributable to the actions of a few individuals, not anything inherent to EA principles or the way our movement is organized and presented. As EA grew it was statistically inevitable that we would accumulate some significant mistakes and unfortunate associations. In some places we must publicly hold ourselves accountable, and aim to do better. In other situations where a PR hit is unjustified, we should demonstrate why and otherwise ignore bad-faith criticism. To retreat from a demonstrably impactful movement and name because of relatively minor media criticism is a terrible strategy. 

And as I mentioned in the discussion of the phrase “effective altruism,” I think its being perceived as grandiose is more attributable to our fundamental goal of improving altruistic impact than to any naming or branding issue. I’ve seen long lists of alternative options; some are fine, none seem obviously better than “effective altruism,” definitely none seem perfect. And with no perfect option, we have, in my opinion, the obvious choice of sticking with the good one we have.

Because from a branding perspective, while it is important to recognize the challenges we face, it is also essential to shout our successes from the rooftops. That’s how we attract the funding and talented workforce needed to positively impact the world. We have moved more money into effective GHWB efforts than into any of the less-certain causes that seemingly get more attention right now. We were rightfully focused on pandemic preparedness long before COVID-19, and have continued to emphasize it while the rest of the world irrationally deprioritizes it. The looming (albeit still uncertain) impact of transformative AI is just now reaching global attention; EA has been on it for longer than just about anyone. EA animal advocates have achieved significant improvements in farmed animal welfare and have been consistently at the forefront of the alternative protein solutions that could make torturous factory farms obsolete while also significantly reducing greenhouse gas emissions and improving the nutrition available to everyone. And that’s the tip of the iceberg; there are so many more.

When I envisioned writing the above paragraph, I intended to make every single word (170 of them) a different link to a relevant EA win. It would be a good look as the final paragraph in a section about sticking with EA branding. Unfortunately I don’t have the time to do it, but trust me that they’re out there.[4] If you agree with the overall case of this post, a tiny call to action in support could be to link your own “best EA win” as a comment. If I get (incredibly improbably) 170 comments with links, I commit to going back and editing them all into the post.

Conclusion

I am not a PR professional, and I do not have any definite ideas of how to take action on the positions I describe above. When I look at what CEA has to say as the closest thing our movement has to centralized brand management, I notice a lot of uncertainty and restraint (e.g., “Courting … mass attention while there is still significant uncertainty as to what EA is and what we want it to be also feels a bit premature.”). I’m happy they express their uncertainty (and think it’s very EA to question what EA is), but I’m hoping the answer might not be complicated.

Maybe I’ve been in a filter bubble or am interpreting other people’s opinions through a lens of confirmation bias; if there’s less support than I believe there is for sticking with EA as it generally exists, then I want to know that. If there is broad and quiet support, however, I think now is the important time to speak up and address some of the uncertainty reflected in CEA’s post and so many others like it. To me, I’m not that uncertain. “What EA is” hasn’t changed much for me, in large part because I view it as descriptive and not prescriptive. No one gets to choose precisely what EA is, the same way no one gets to authoritatively describe what feminism or environmentalism is, yet the terms are all necessary because they’re useful.

To me, EA is the movement that has coalesced around the answer (yes) to the question “should we use reason and evidence to do the most good we can do with limited resources?” It’s raising further questions and answering them with actions dedicated to saving lives in the neediest places, actions dedicated to preventing even small chances of human extinction, actions dedicated to improving the welfare of non-human animals and all beings we should expand our moral circles to include, and a whole host of other comparably well-reasoned and high-expected-value actions. It’s the charitable and rational debate that we use to prioritize between those actions. And it’s the real-world choices made to donate and work in disparate ways because those debates have yet to be resolved. EA is a lot of things, some of them seemingly incongruous, and it’s all the better for it. 

I titled and started off this post talking about supporting “General EA,” but that was a temporary clarification. I hope it’s clear that what I meant, and what I support, is just EA being EA, no qualifiers necessary.

  1. ^

    To be fair to the critics of using "effective altruist": yes, since we are supporters of effective altruism, the parallel term to "environmentalist" would actually be "effective altruismist." But that configuration doesn't exist in English, nor does any word exist to which attaching "-ism" and "-ist" would describe our movement's principles or membership. Words and names are supposed to facilitate clarity, and with no clearly superior option, we have to make do with a compromise somewhere.

  2. ^

    I know some might believe that supporting anything other than their own highest-priority cause comes at the opportunity cost of something more important and could be seen as "bad." I think this zero-sum mindset ignores the practical reality that not all altruistic resources are fungible. There's also the meat-eater problem, which I find slightly more compelling when raised against economic interventions than health interventions, but overall consider a poor framing for reasons beyond the scope of a footnote.

  3. ^

    Hopefully not just because they're weird. I know that's a hard tightrope to walk, but I think looking for weird, neglected places to find moral progress and high-impact actions is a good heuristic. I do think it is simply a heuristic for search and often disagree with how the ITN framework is applied, but that's a different post.

  4. ^

    Also relevant to a section on EA PR: as moving as the block of separate links might have looked, I doubt anyone would have clicked through a significant number of them. So finding them all would primarily be just an optics move without actually being usefully informative. That's not a very effective use of my time, especially if I can achieve a substantive fraction of the optics impact by just getting you to imagine it as a bunch of blue links (and then reinforcing it with this meta comment).


Holly Morgan @ 2023-07-28T20:25 (+47)

I also hope that some of the (what I perceive to be) silent majority will chime in and demonstrate that we’re here and don’t want to see EA splintered, rebranded, or otherwise demoted in favor of some other label.

🙋‍♀️

This is one of my favourite posts on this forum and I imagine the large majority of EAs I know IRL would largely agree with it (although there's definitely a selection bias there). Thank you! I feel like there have been several moments in the past year or so where I've been like, "Man, EA NYC seems really cool."

Re "best EA win," I couldn't pick a favourite but here's one I learnt a few hours ago: Eitan Fischer - who I remember from early CEA days when he founded Animal Charity Evaluators - now runs the cultivated meat company Mission Barns. The Guardian says, "[A] handful of outlets have agreed to stock its products once they are approved for sale." 🥳

Toby_Ord @ 2023-08-07T09:54 (+28)

Thanks so much for writing this Arthur.

I'm still interested in the possibilities of changing various aspects of how the EA community works, but this post does a great job of explaining important things that we're getting right already and — perhaps more importantly — in helping us remember what it is all about and why we're here.

Michael_PJ @ 2023-07-27T07:43 (+23)

I wholeheartedly agree with this post.

I think there has been a bit of over-reacting to recent events. I don't think the damage is that bad, and to some degree I think we've just been unlucky. Maybe we need to do some things differently (e.g. try to project less of an air of certainty, which many critics seem to perceive) but we should also beware the illusion of control.

wuschel @ 2023-07-29T21:44 (+10)

Strong agree. 
I think having EA as a movement encompassing all those different cause areas also makes it possible to have EA groups in smaller places that could not sustain separate AI safety, global health, and animal rights groups. 

Denis @ 2023-08-03T07:52 (+8)

Being relatively new to EA and to this forum, I strongly agree with the message here. While reading, I found a few specific points I would mention that you didn't stress:

  1. A good part of what you discuss relates to the business concept of "Brand Equity" - more or less, what is the subconscious and then conscious reaction of people when they hear a name. A good Brand is incredibly valuable. I worked for a consumer goods company and so I've heard quantitative estimates of the value of a Brand (like Walgreens or Tide or Hershey's) - to put it in perspective, if someone gave you the choice between owning all the factories used to make a specific consumer product, but not the Brand-name, or owning the Brand-name but not the factories, there are many cases where people would choose the Brand-name. I've also seen the massive effort involved in trying to create a new Brand, and to manage Brand-equity. Right now, EA is a very powerful Brand with a very positive equity. We need to realise that if we were to change it, it would mean re-starting brand-building from scratch. The equity isn't perfect, and there have been some negative PR events (some not our fault), but it's still very good. If we were to give that up and not just merge into existing charities, we'd really struggle to get anywhere near where we are today. 
  2. What you cite as almost a counter-example (that you don't feel so strongly about animals) is actually a powerful argument in favour of your conclusion. How many of us joined EA because of just one or two beliefs, and then through the EA community have widened our understanding of the ways we can make a difference? Rather than focus on the few causes where we may not fully agree, we could focus on how each of us, every day, gets a chance to interact with passionate, eloquent, rational people who want to educate us about more ways we can make the world better. What's not to love about that? And yet, again using you as an example, nobody forces any of us to fully embrace every cause or to feel passionate about every one. 
  3. EA is still a relatively new movement, growing fast. Some people still have not heard of it, or have heard of it only by name. It feels to me that getting more and more people involved can only help - the movement and the world - and that starts by making people aware that it exists and letting them see what it is. Splitting into different movements, or even renaming our movement, would be a step backwards in terms of growth just for the simple reason that each new name would be unknown again, and we'd need to start the whole effort again, and for each new group, there wouldn't be a history that interested people could read about (for example, the EA Forum) to find out more. (It's great if you're Elon Musk and you can change "Twitter" to "X" because you have enough name-recognition that people everywhere hear the news. We don't have that luxury). 

You suggest that we include one EA win. For me, one powerful EA win has been Direct Giving, the idea that people in dire poverty are far better judges of what they need than even the most well-meaning charity executives. It is through EA, or at least, through the kind of quantitative studies that EA supports, that direct giving has gained support and credibility as a very effective way to help. 

Chris Leong @ 2023-07-27T03:53 (+8)

Thanks for writing up this post.

I think it makes sense for us to increase the amount of work that happens under specific cause areas, but I think it’s also important for us to maintain the EA brand. I agree that we should be reluctant to discard a proven brand that has done a great job of attracting the kind of people who are capable of having a significant impact.

crobertson @ 2023-08-03T08:14 (+2)

I wholeheartedly agree.

To me, the "effective" supplements the "altruism". There are so many resources available via EA that could benefit those already involved in altruistic endeavours. A Big Tent community that does not require cause-neutral, hyper-optimised members would be able to provide those services more effectively. This should naturally result in a hedged movement and provide stronger appeal to those already committed to making the world better (presumably with stronger adherence to positive values than those responsible for the recent bad publicity).

A question posed by a local EA club member comes to mind: Is it better to teach an altruist to be more effective, or an effector to be more altruistic?