EA is a global community - but should it be?

By Davidmanheim @ 2022-11-18T08:23 (+200)

Without trying to wade into definitions, effective altruism is not just a philosophy and a plan of action, it’s also a community. And that means that community dynamics are incredibly important in shaping both the people involved, and the ideas. Healthy communities can make people happier, more effective, and better citizens locally and globally - but not all communities are healthy. A number of people have voiced concerns about the EA community in the recent past, and I said at the time that I thought we needed to take those concerns seriously. The failure of the community to realize what was happening with FTX isn’t itself an indictment of the community - especially given that FTX’s major investors did not know - but it’s a symptom that reinforces many of the earlier complaints.

The solutions seem unclear, but there are two very different paths that would address the failure: either reform, or rethinking the entire idea of EA as a community. So while people are thinking about changes, I’d like to suggest that we not take the default path-of-least-resistance reforms, at least not without seriously considering the alternative.

“The community” failed?

Many people have said that the EA community failed by not realizing what SBF was doing. Others have responded that no, we should not blame ourselves. (As an aside, when Eliezer Yudkowsky is telling you that you’re overdoing heroic responsibility, you’ve clearly gone too far.) But when someone begins giving to EA causes, whether individually, or via Founders Pledge, or via setting up something like SFF, there is no one vetting them for being honest or well governed.

The community was trusting - in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning. But the people who raised flags were evidently ignored, or in other cases feared being pariahs for speaking out more publicly.

But the idea that I and others trusted in “the community” is itself a problem. Like Rob Wiblin, I generally subscribe to the idea that most people can be trusted. But I wasn’t sufficiently cautious about the fact that the kind of trust that applies to “you won’t steal from my wallet, even if you’re pretty sure you can get away with it” doesn’t scale to “you can run a large business or charity with effectively no oversight.”

A community that trusts by default is only sustainable if it is small. Claiming to subscribe to EA ideas, especially in a scenario where you can be paid well to do so, isn’t much of a reason to trust anyone. And given the size of the EA community, we’ve already passed the limits of where trusting others because of shared values is viable.  

Failures of Trust

There are two ways to have high trust: naivety, and sophistication. The naive way is what EA groups have employed so far, and the sophisticated way requires infrastructure to make cheating difficult and costly.

To explain, when I started in graduate school, I entered a high-trust environment. I never thought about it, partly because I grew up in a religious community that was high trust. So in grad school, I was comfortable if I left my wallet on my desk when going to the bathroom, or even sometimes when I had an hour-long meeting elsewhere in the building. 

I think during my second year, someone had something stolen from their desk - I don’t recall what, maybe it was a wallet. We all received an email saying that if someone took it, they would be expelled, and that they really didn’t want to review the security camera footage, but they would if they needed to. It never occurred to me that there were cameras - but of course there were, if only because RAND has a secure classified facility on campus, and security officers that occasionally needed to respond to crazy people showing up at the gate. That meant they could trust, because they could verify.

Similarly, the time sheets for billing research projects, which was how everyone, including the grad students, got paid, got reviewed. I know that there were flags, because another graduate student I knew was pulled in and questioned for billing two 16-hour days one week. (They legitimately had worked insane hours those two days to get data together to hit a deadline - but someone was checking.) You can certainly have verifiably high-trust environments if it’s hard to cheat without getting caught.

But EA was, until now, a high-trust group by default. It’s a huge advantage in working with others: knowing you are value-aligned, that you can assume others care, and that you can trust them means coordination is far easier. The FTX incident has probably partly destroyed that. (And if not, it should at least cause a serious reevaluation of how we use social trust within EA.)

Restoring Trust?

I don’t think that returning to a high-trust default is an option. Instead, if we want to reestablish high trust throughout the community, we need to do so by fixing the lack of basis for the trust - and that means institutionalization and centralization. For example, we might need institutions to “credential” EA members, or at least EA organizations, perhaps to allow democratic control, or at least clarity about membership. Alternatively, we could double down on centralizing EA as a movement, putting even more power and responsibility on whoever ends up in charge - a more anti-democratic exercise.

However we manage to rebuild trust, it’s going to be expensive and painful as a transition - but if you want a large and growing high trust community, it can’t really be avoided. I don’t think that what Cremer and Kemp suggest is the right approach, nor are Cremer’s suggestions to MacAskill sufficient for a large and growing movement, but some are necessary, and if those measures are not taken, I think that the community should be announcing alternative structures sooner rather than later.

This isn’t just about trust, though. We’ve also seen allegations that EA as a community is too elitist, that it’s not a safe place for women, that it’s not diverse enough, and so on. These are all problems to address, but they are created by a single decision - to have an EA community at all. And the easy answer to many problems is to have a central authority, and build more bureaucracy. But is that a good idea?

The alternative is rethinking whether EA should exist as a community at all. And - please save the booing for the end - maybe it shouldn’t be one.

What would it mean for Effective Altruism to not be a global community?

Obviously, I’m not in charge of the global EA community. No one is, not even CEA, with a mission “dedicated to building and nurturing a global community.” Instead, individuals, and by extension, local and international communities are in charge of themselves. Clearly, nobody needs to listen to me. But nobody needs to listen to the central EA organizations either - and we don’t need to, and should not, accept the status quo. 

I want to explore the claim that trying to have a single global community is, on net, unhelpful, and what the alternative looks like. I’m sure this will upset people, and I’m not saying the approach outlined below is necessarily the right one - but I do think it’s a pathway we, yes, as a community, should at least consider.

And I have a few ideas about what a less community-centered EA might look like. To preface the ideas, however, “community” isn’t binary. And even at the most extreme, abandoning the idea of EA as a community would not mean banning hanging out with other people inspired by the idea of Effective Altruism, nor would it mean not staying in touch with current friends. It would also not mean canceling meet-ups or events. But it does have some specific implications, which I’ll try to explore.

Personal Implications 

First, it means that “being an EA” would not be an identity. 

This is probably epistemically healthy - the natural tendency to defend an in-group is far worse when attacks seem to include you, instead of attacking a philosophy you admire, or other individuals who like the same philosophy. I don’t feel attacked when someone says that some guy who reads books by Naomi Novik is a jerk[1], so why should I feel attacked when someone says a person who read and agreed with “Doing Good Better” or “The Precipice” is a jerk?

Not having EA as an identity would also mean that public relations stops being a thing that a larger community cares about - thankfully. Individual organizations would, of course, do their own PR, to the extent that it was useful. This seems like a great thing - community PR isn’t a good thing for anyone to be focused on. We should certainly be concerned about ethics and about not doing bad things, not about how things look.

Community Building Implications

Not having EA as a community obviously implies that “EA Community Building” as a cause area, especially a monolithic one, should end. And I think, in retrospect, explicitly endorsing this as a cause to champion was a mistake. Popularizing ideas is great, and bringing together people with related interests is helpful, but some really unhealthy dynamics were created, and fixing them seems harder than simply abandoning the idea and starting over.

This would mean no longer doing “recruitment” on college campuses - which was always somewhat creepy. Individual EAs on campus would presumably still tell their friends about the awesome ideas, recommend books, or even host reading groups - but these would be aimed at convincing individuals to consider the ideas, not to “join EA.” And individuals in places with other EAs would certainly be welcome to tell friends and have meet-ups. But these wouldn’t be thought of as recruitment, and they certainly wouldn’t be subsidized centrally.

Wouldn’t this be bad?

CEA’s website says “Effective altruism has been built around a friendly, motivated, interesting, and interested group of people from all over the world. Participating in the community has a number of advantages over going it alone.” Would it really be helpful to abandon this?

My answer, tentatively, is yes. Communities work well with small numbers of people, and less well as they grow. A single global community isn’t going to allow high trust without building, in effect, a church. I’m fairly convinced that Effective Altruism has grown past the point where a single community can be safe and high trust without hierarchy and lots of structure, and don’t know that there’s any way for that to be done effectively or acceptably.

Of course, individuals want and need communities - local communities, communities of shared interest, communities of faith, and so on. But putting the various parts of effective altruism into a single community, I would argue, was a mistake.

More Implications, and some Q&A

Would this mean no longer having community building grants, or supporting EA-community institutions?

First, I think that we should expect communities to be self-supporting, outside of donor dollars. Having workspaces and similar amenities is great, but it’s not an impartially altruistic act to give yourself a community. It’s much too easy to view self-interested “community building” as actually altruistic work, and a firewall would be helpful.

Given that, I strongly think that most EAs would be better off giving their 10% to effective charities focused on the actual issues, and then paying dues or voluntarily contributing other, non-EA-designated funds for community building. That seems healthier for the community, and as a side-benefit, removes the current centralized “control” of EA communities, which are dependent on CEA or other groups. 

There are plenty of people who are trying to give far more than 10% of their income. Communities are great - but paying for them is a personal expense, not altruism. And from where I stand, someone who is giving half their salary to the “altruistic cause” of having community events and recruiting more people isn’t effective altruism. I would far rather have people giving “only” 10% to charity, and using their other money to pay dues towards hosting or helping to subsidize fun events for others in their community, or to pay to work in an EA-aligned coworking space.

Similarly, college students and groups that wanted to run reading clubs about EA topics would be more than welcome to ask alumni or others to support them. There is a case to be made for individuals spending money to subsidize that - but things like community retreats should be paid for by attendees, or at most, should be subsidized with money that wasn’t promised to altruistic causes.

What about EA Global?

I think it would mean the end of “EA Global” as a generic conference. I have never attended, but I think having conferences where people can network is great - however, the way these are promoted and paid for is not. Davos is also a conference for important people to network - and lots of good things are done there, I am sure. We certainly should not be aiming for having an EA equivalent.

Instead, I would hope the generic EA global events are replaced by cause and career specific conferences, which would be more useful at the object level. I also think that having people pay to attend is good, instead of having their hotel rooms and flights paid for. If there are organizations or local groups that send people, they would be welcome to pay on behalf of the attendees, since they presumably get value from doing so. And if there are individuals who can’t otherwise afford it, or under-represented groups or locations, scholarships can be offered, paid for in part by the price paid by other attendees, or other conference sponsors. (Yes, conferences are usually sponsored, instead of paid for by donations.)

Wouldn’t this make it harder for funders to identify promising younger values aligned people early?

Yes, it would. But that actually seems good to me - we want people to demonstrate actual ability to have impact, not willingness to attend paid events at top colleges and network their way into what is already a pretty exclusive club.

Wouldn’t this tilt EA funders towards supporting more legibly high-status people at top schools?

It could, and that would be a failure in the design of the community. That seems bad to me, but it should be countered with more explicitly egalitarian efforts to find high-promise people who didn’t have parents who attended Harvard. But that isn’t a problem that paid and exclusive conferences will address. Effective Altruism doesn’t have the best track record in this regard, and remedies are needed - but preserving the status quo isn’t a way to fix the problem.

Should CEA be defunded, or blamed for community failures?

No, obviously not. This post does explicitly attack some of their goals, and I hope this is taken in the spirit it is intended - as exploration and hopefully constructive criticism. They do tons of valuable work, which shouldn’t go away. If others agree that the current dynamics should change, I am still unsure how radically CEA should change direction. But if the direction I suggest is something that community members think is worth considering, CEA is obviously the key organization which would need to change.

Is this really a good idea?

It certainly isn’t something to immediately do in 2023, but I do think it’s a useful direction for EA to move towards. And directionally, I think it’s probably correct - though working out the exact direction and how it should be done is something that should be discussed.

And even if people dislike the idea, I hope it will prompt discussion of where current efforts have gone wrong. We should certainly be planning for the slightly-less-than-immediate term, and publicly thinking about the direction of the movement. We need to take seriously the question of what EA looks like in another decade or two, and I haven’t seen much public thinking about that question. (Perhaps longtermism has distracted people from thinking on the scale of single decades. Unfortunately.)

But rarely is a new direction something one person outlines, and everyone decides to pursue. (If so, the group is much too centrally controlled - something the founders of EA have said they don’t want.) And I do think that something like this is at least one useful path forward for EA. 

If EA individuals and groups take more of this direction, I think it could be good, but details matter. At the same time, trajectory changes for large groups are slow, and should be deliberated about. So the details I’ve outlined are meant to push the envelope, and to prompt consideration of a different path than the one we are on.

  1. ^ I promise I picked this as an example before Eliezer wrote his post. Really.


Minh Nguyen @ 2022-11-18T18:17 (+78)

In the most respectful way possible, I strongly disagree with the overarching direction put forth here. A very strong predictor of engaged participation and retention in advocacy, work, education and many other things in life is the establishment of strong, close social ties within that community.

I think this direction will greatly reduce participation and engagement with EA, and I'm not even sure it will address the valid concerns you mentioned.

I say this despite the fact that I didn't have super close EA friends in the first 3-4 years, and still managed to motivate myself to work on EA stuff as well as successful policy advocacy in other areas. When it comes to getting new people to partake in self-motivated, voluntary social causes/projects, one of the first things I do is to make sure they find a friend to keep them engaged, and this likelihood is greatly increased if they simply meet more people.

I am also of the opinion that long-term engagement relying on unpaid, ad-hoc community organising is much more unreliable than paid work. I think other organisers will agree when I say: organising a community around EA for the purpose of deeply engaging EAs is time-consuming, and greatly benefits from external guidance and financial support. If you want to get people engaging deeply with EA ideas and actually taking EA roles, unpaid volunteer organisers are a significant bottleneck. You're expecting one organiser to regularly host events, perform tasks and engage multiple people at a deep level without central support, and that's a very difficult ask.

I will add also that I am from a non-EA hub, and the only people I know who work full-time with EA orgs directly cite EAGs as a catalyst for their long-term involvement.

I'm just ... skeptical of the theory of change put forth here.

Davidmanheim @ 2022-11-20T08:25 (+7)

I think that social ties are useful, yet having a sprawling global community is not. I think that you're attacking a bit of a straw man, one which claims that we should have no relationships or community whatsoever.

I also think that there is an unfair binary you're assuming, where on one side you have "unpaid, ad-hoc community organising" and on the other you have the current abundance of funding for community building. Especially in EA hubs like London, the Bay Area, and DC, the local community can certainly afford to pay for events and event managers without needing central funding, and I'd be happy for CEA to continue to do community building - albeit with the expectation that communities do their own thing and pay for events, which would be a very significant change from the current environment. Oh, and I also don't live in an EA hub, and have never attended an EAG - but I do travel occasionally, and have significant social interaction with both EAs and non-EAs working in pandemic preparedness, remotely.  The central support might be useful, but it's far from the only way to have EA continue.

RayTaylor @ 2023-09-28T18:01 (+1)

Both of you now seem to be focusing specifically on funding for community building, whereas the original post was much broader:

... maybe if those broader issues were addressed, the question of which community-building to fund would then be easier to work out?

Matthew Stork @ 2022-11-18T17:01 (+51)

What is missing to me is an explanation of exactly how your suggestions would prevent a future SBF situation. It's not really clear to me that this is true. The crux of your argument seems to come from this paragraph:

The community was trusting - in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning. But the people who raised flags were evidently ignored, or in other cases feared being pariahs for speaking out more publicly.

Would this have been any different if EA consisted of an archipelago of affiliated groups? If anything, whistleblowing is easier in a large group, since you have a network of folks you can contact to raise the alarm. Without a global EA group, who exactly do the ex-Alameda folks complain to? I guess they could talk to a journalist or something, but "trading firm CEO is kind of an amoral dick" isn't really newsworthy (I'd say that's probably the default assumption).

I also generally disagree that making EA more low trust is a good idea. It's pretty well established that low trust societies have more crime and corruption than high trust societies. In that sense, making EA more low trust seems counterproductive to prevent SBF v2.0. In a low trust society, trust is typically reserved for your immediate community. This has obvious problems though! Making trust community-based (i.e. only trusting people in my immediate EA community) seems worse than making trust idea-based (i.e. trusting anyone that espouses shared EA values). People are more likely to defend bad actors if they consider them to be part of their in-group.

To be honest, I'd recommend the exact opposite course of action: make EA even more high trust. High trust societies succeed by binding members to a common consensus on ethics and morality. EAs need to be clearer about what our expectations are with regard to ethics. It was apparently not clear to SBF that being a part of the EA community means adherence to a set of norms outside of naive utilitarian calculus. The EA community should emphatically state our norms and expectations. The corollary to that is that members who break the rules must be called out and potentially even banished from the group.

Davidmanheim @ 2022-11-19T15:56 (+11)

"What is missing to me is an explanation of exactly how your suggestions would prevent a future SBF situation."

1. The community is unhealthy in various ways. 
2. You're suggesting centralizing around high trust, without a mechanism to build that trust.

I don't think that the EA community could have stopped SBF, but it absolutely could have been independent of him, in ways that would have meant EA as a community didn't treat a random person most of us had never heard of before this as automatically being a trusted member. Calling people out is far harder when they are a member of your trusted community, and the people who had concerns didn't voice them loudly because they feared community censure. That's a big problem.

RayTaylor @ 2023-09-28T18:07 (+1)

It's also hard to call people out when a lot of you are applying to him/them for funding, and are mostly focused on trying to explain how great and deserving your project is.

One good principle here is "be picky about your funders". Smaller amounts from steady, responsible, principled and competent funders, who both do and submit to due diligence, are better than large amounts from others. 

This doesn't mean you HAVE to agree with their politics or everything they say in public - it's more about having proper governance in place, and funders being separate from boards and boards being separate from executive, so that undue influence and conflicts of interest don't arise, and decisions are made objectively, for the good of the project and the stated goals, not to please an individual funder or get kudos from EAs.

I've written more about donor due diligence in the main thread, with links.

Linch @ 2022-11-18T22:15 (+11)

It's pretty well established that low trust societies have more crime and corruption than high trust societies

FWIW, I've generally assumed that causality goes the other way, or a third factor causes both. 

Larks @ 2022-11-19T01:33 (+9)

Yes, economists chose to use the term 'trust' but I think a better term for what they are really discussing is 'trustworthiness'; I suspect they made the substitution for optics reasons.

Devin Kalish @ 2022-11-18T19:07 (+1)

In agreement with the first part of this comment at least. If there were EA causes but not an EA community, it seems like much the same thing would have happened. A bunch of causes SBF thought were good would have gotten offered money, probably would have accepted the money, and then wound up accidentally laundering his reputation for being charitable while facing the prospect that some of the money they got was ill-gotten, and some of the money they had planned on getting wasn't going to come. Maybe SBF wouldn't have made his money to begin with? I find it unlikely, ideas like earning to give and ends-justifies-means naive consequentialism and high-risk strategies for making more money are all ideas that people associate with EA, but which don't appeal to anything like a "community". This isn't to say none of these points are important aside from SBF, but well, it's just odd to see them get so much attention because of him. Similar points have been made in Democratizing Risk, and in a somewhat different way in the recent pre-collapse Clearer Thinking interview with Michael Nielson and Ajeya Cotra. Maybe it's still worth framing this in terms of SBF if now is an unusually good chance to make major movement changes, but at the same time I find it a little iffy. It seems misleading to frame this in terms of SBF if SBF didn't actually provide us with good reasons to update in this direction, and it feels a bit perverse to use such a difficult time to promote an unrelated hobbyhorse, as a more recent post harped on (I think a bit too much, but I have some sympathy for it).

Matthew Stork @ 2022-11-18T19:54 (+1)

Agree with your post and want to add one thing. Ultimately this was a failure of the EA ideas more so than the EA community. SBF used EA ideas as a justification for his actions. Very few EAs would condone his amoral stance w.r.t. business ethics, but business ethics isn't really a central part of EA ideas. Ultimately, I think the main failure was EAs failing to adequately condemn naive utilitarianism. 

I think back to the old Scott Alexander post about the rationalist community: Yes, We Have Noticed The Skulls | Slate Star Codex. I think he makes a valid point, that the rationalist community has tried to address the obvious failure modes of rationalism. This is also true of the EA community, in that there has absolutely been some criticism of galaxy brained naive utilitarianism. However, there is a certain defensiveness in Scott's post, an annoyance that people keep bringing up past failure modes even though rationalists try really hard to not fail that way again. I suspect this same defensiveness may have played a role in EA culture. Utilitarianism has always been criticized for the potential that it could be used to justify...well, SBF-style behavior. EAs can argue that we have newer and better formulations of utilitarianism / moral theory that don't run into that problem, and this is true (in theory). However, I do suspect that this topic was undervalued in the EA community, simply because we were super annoyed at critics that keep harping on the risks of naive utilitarianism even though clearly no real EA actually endorses naive utilitarianism. 

Evan R. Murphy @ 2022-11-21T09:04 (+9)

Ultimately this was a failure of the EA ideas more so than the EA community. SBF used EA ideas as a justification for his actions. Very few EAs would condone his amoral stance w.r.t. business ethics, but business ethics isn't really a central part of EA ideas. Ultimately, I think the main failure was EAs failing to adequately condemn naive utilitarianism. 

So I disagree with this because:

  1. It's unclear whether it's right to attribute SBF's choices to a failure of EA ideas. Following SBF's interview with Kelsey Piper and based on other things I've been reading, I don't think we can be sure at this point whether SBF was generally more motivated by naive utilitarianism or by seeking to expand his own power and influence. And it's unclear which of those headspaces led him to the decision to defraud FTX customers.
  2. It's plausible there actually were serious ways that the EA community failed with respect to SBF. According to a couple of accounts, at least several people in the community had reason to believe SBF was dishonest and sketchy. Some of them spoke up about it and others didn't. The accounts say that these concerns were shared with more central leaders in EA who didn't take a lot of action based on that information (e.g. they could have stopped promoting Sam as a shining example of an EA after learning of reports that he was dishonest, even if they continued to accept funding from him). [1]

    If this story is true (don't know for sure yet), then that would likely point to community failures in the sense that EA had a fairly centralized network of community/funding that was vulnerable, and it failed to distance itself from a known or suspected bad actor. This is pretty close to the OP's point about the EA community being high-trust and so far not developing sufficient mechanisms to verify that trust as it has scaled.

--

[1]: I do want to clarify that in addition to this story still being unconfirmed, I'm mostly not trying to place a ton of blame or hostility on EA leaders who may have made mistakes. Leadership is hard, the situation sounds hard, and I think EA leaders have done a lot of good things outside of this situation. What we find out may reduce how much responsibility I think the EA movement should place with those people, but overall I'm much more interested in looking at systemic problems/solutions than fixating on the blame of individuals.

Guy Raveh @ 2022-11-18T15:49 (+46)

Haven't finished reading yet, but I feel obliged to flag* (like anywhere else where they come up) that this paragraph:

We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning.

is linking to a known cult leader. This is deeply ironic.

*The reason I think this should be stated every time is that there are many new people coming in all the time, and it's important that none of them encounter these people without the corresponding warning.

Cillian Crosson @ 2022-11-18T19:28 (+18)

I think this comment would be much more helpful if it linked to the relevant posts about Leverage rather than just called Geoff a "known cult leader".

(On phone right now but may come back and add said links later unless Guy / others do)

Guy Raveh @ 2022-11-18T20:11 (+3)

Edited

Davidmanheim @ 2022-11-19T15:58 (+3)

Completely fair - I'm not endorsing him, just pointing to a source for the allegation. (And more recent allegations, about a complete lack of control and self-dealing, are far more damning.)

Richard Y Chappell @ 2022-11-18T11:14 (+37)

Upvoted despite disagreeing, since I think this is an important question to explore.  But I'm puzzled by the following claim:

from where I stand, someone who is giving half their salary to the “altruistic cause” of having community events and recruiting more people isn’t effective altruism.

Obviously the motivation for community-building is not that the community is an end in itself, but instrumental: more people "joining EA", taking the GWWC pledge and/or going into directly high-impact work, means indirectly causing more good for all the other EA causes that we ultimately care about.  Without addressing this head-on, I'm not sure which of the following you mean:

(1) An empirical disagreement: You deny that EA community-building is instrumentally effective for (indirectly) helping other, first-order EA causes.

(2) A moral/conceptual disagreement: You deny that indirectly causing good counts as altruism.

Can you clarify which of these you have in mind?

Matthew Yglesias @ 2022-11-19T14:07 (+38)

Obviously the motivation for community-building is not that the community is an end in itself, but instrumental: more people "joining EA", taking the GWWC pledge and/or going into directly high-impact work, means indirectly causing more good for all the other EA causes that we ultimately care about.

 

I took OP's point here to be that this logic looks suspiciously like the kind of rationalizations EA got its start criticizing in other areas. 

"Why do they throw these fancy gala fundraising dinners instead of being more frugal and giving more money to the cause?" seems like a classic EA critique of conventional philanthropy. But once EA becomes not just an idea but an identity, then it's understood that building the community is per se good, so suddenly sponsoring a fellowship slash vacation in the Bahamas becomes virtuous community building. To anyone outside the bubble, this looks like just recapitulating problems from elsewhere. 

Richard Y Chappell @ 2022-11-19T14:55 (+32)

Hmm, I think of the "classic EA" case for GiveWell over Charity Navigator as precisely based on an awareness that bad optics around "overhead", CEO pay, fundraising, etc., aren't necessarily bad uses of funds, and we should instead look at what the organization ultimately achieves.

Davidmanheim @ 2022-11-18T11:50 (+14)

I don't mean either (1) or (2), but I'm not sure it's a single argument. 

First, I think it's epistemically and socially healthy for people to separate giving to their community from altruism. To explain a bit more, it's good to view your community as a valid place to invest effort independent of eventual value. Without that, I think people often end up being exploitative, pushing people to do things instead of treating them respectfully, or being dismissive of others, for example, telling people they shouldn't be in EA because they aren't making the right choices. If your community isn't just about the eventual altruistic value they will create, those failure modes are less likely.

Second, it's easy to lose sight of eventual goals when focused on instrumental ones, and to get stuck in a mode where you are goodharting community size, or dollars being donated - both community size and total dollars seem like unfortunately easy attractors for this failure.

Third, relatedly, I think that people should be careful not to build models of impact that are too indirect, because they often fail at unexpected places. The simpler your path to impact is, the fewer failure points exist. Community building is many steps removed from the objective, and we should certainly be cautious about doing naïve EV calculations about increasing community size!

Chantal @ 2022-11-18T12:43 (+14)

Separate but related to community, I think your point about identity, and whether fostering EA as an identity is epistemically healthy, is also relevant to (1). 

Your analogy to church spoke very powerfully to me and to something I have always been a bit uncomfortable with. To me, EA is a philosophy/school of thought, and I struggle to understand how a person can "be" a philosophy, or how a philosophy can "recruit members". 

I also suspect that a strong self-perception that one is a "good person" can just as often provide (internal and external) cover for wrong-doing as it can be a motivator to actually do good, as any number of high-profile non-profit scandals (and anecdotal experience from I'm guessing most young women who have ever been involved in a movement for change) can tell you. 

I have nothing at all against organic communities, or professional conferences etc, but I also wonder whether there is evidence that building EA as an identity  ("join us!") as opposed to something that people can do is instrumentally effective for first-order causes. Maybe it does, but I think it warrants some interrogation. 

RayTaylor @ 2023-09-28T18:21 (+1)

> healthy for people to separate giving to their community from altruism.

Is this realistically achievable, with the community we have now? How?


(I imagine it would take a comms team with a social psychology genius and a huge budget, and still would only work partially, and would require very strong buy in from current power players, and a revision of how EA is presented and introduced? but perhaps you think another, leaner and more viable approach is possible?)

>The simpler your path to impact is, the fewer failure points exist

That's not always true. 

Some extreme counter-examples:

a. Programmes on infant stunting keep failing, partly because an overly simple approach has been adopted (intensive infant feeding, Plumpy Nuts etc, with insufficient attention to maternal nutrition, aflatoxin removal, treating parasites in pregnancy, adolescent nutrition, conditional cash transfers etc)

b. A critical path plan was used for Apollo, and worked much better than the simpler Soviet approach, despite being much more complicated. 

c. The Brexit Leave campaign SEEMED simple but was actually formed through practice on previous campaigns, and was very sophisticated "under the hood", which made it hard to oppose.

RyanCarey @ 2022-11-19T19:34 (+35)

It's worth considering Eric Neyman's questions: (1) are the proposed changes realistic, (2) would the changes actually have avoided the current crisis, and (3) would their benefits exceed their costs generally.

On (1), I think David's proposals are clearly realistic. Basically, we would be less of an "undifferentiated social club", and become more like a group of academic fields, and a professional network, with our conferences and groups specialising, in many cases, into particular careers.

On (2), I think part of our mistake was that we used an overly one-dimensional notion of trust. We would ask "is this person value-aligned?" as a shorthand for evaluating trustworthiness. The problem is that any self-identified utilitarian who hangs around EA for a while will then seem trustworthy, whereas objectively, an act utilitarian might be anything but. Relatedly, we thought that if our leadership trusts someone, we must trust them too, even if they are running an objectively shady business like an offshore crypto firm. This kind of deference is a classic problem for social movements as well.

Another angle on what happened is that FTX behaved like a splinter group. Being a movement means you can convince people of some things for not-fully-rational reasons - based on them liking your leadership and social scene. But this can also be used against you. Previously, the rationalist community has been burned by the Vassarites and the Zizians. There has been Leverage Research. And now, we could say that FTX had its own charismatic leadership, dogmas (about drugs, and risk-hunger), and social scene. If we were less like a movement, it might've been harder for them to be one.

So I do think being less communal and more professional could make things like FTX less likely.

On (3), I think this change would come with significant benefits. Fewer splinter groups. Fewer issues with sexual harassment (since it would be less of a dating scene). Fewer tensions relating to whether written materials are "representative" of people's views. Importantly, high-performing outsiders would be less likely to bounce off due to EA seeming "cultic", as did Sam Harris, per his latest pod. Also see Matt Y's comment.

I think the main downside to the change would be if it involved giving up our impact somehow. Maybe movements attract more participants than non-movements? But do they attract the right talent? Maybe members of a movement are prepared to make larger sacrifices for one another? But this doesn't seem a major bottleneck presently.

So I think the proposal does OK on the Eric-test. There is way more to be said on all this, but FWIW, my current best guess is that David's ideas about professionalising and disaggregating by professional interest should be a big part of EA's future.

GidiKadosh @ 2022-11-18T21:45 (+31)

On the one hand, I have a strong urge to say something like:  But David, community building is not only useful for "trust" and "vetting people"!

On the other hand - in the last 2.5 years as a community builder, I have been fighting desperately to make EA groups more practical and educational, instead of social and network-based.

I'm not the only one. I know many other community builders who tried to argue that our resources should focus on "tools", and less on anecdotes about why the maximization mindset is important and anecdotes about the most pressing cause areas. 
Instead, I think that the value we provide should be something along the lines of providing members with actual tools for applying the maximization mindset, and for prioritizing their career/donation/research/etc. opportunities by social impact.

Almost everyone I spoke with agreed with this notion, including multiple representatives from CEA - but nothing changed so far regarding groups' resources or incentives. 
So, if the main value of community building was meant to be for vetting, then I'd say that the community failed. I don't strongly believe this is the case right now, but I think that many perceive this to be the main value we provide. Anyhow - SBF is a great example, but I think that trust failures are not the only reason why this shift is needed.

My view is that instead of being social-focused, we need a more practical form of community building. EA distinguishes itself from the traditional NGO world, and from other communities that use “impact” as a buzzword, because we care about maximizing impact. But then, groups don't really have the resources to actually help people maximize their impact. After 2.5 years, I still don’t know where to find a good, simple article or video that describes how to create a theory of change (which is needed when submitting grant applications to EA funders!), or a clear article describing the practical aspects of “Thinking at the margin”, if I want to send those to community members. It takes an absurdly long time to find a good article about the basics of cost-effectiveness estimates, compared to how rooted this idea is in the movement. How long does it take to find an advanced handbook on conducting cost-effectiveness research in the context of social impact? I don’t know; we found none and had to write such a guide in EA Israel.

I strongly agree with the notion that "community" isn't binary:

And I have a few ideas about what a less community-centered EA might look like. To preface the ideas, however, “community” isn’t binary.

But I think that the suggestion in this post goes too far on the non-community side of the spectrum. I think that communities provide significant value (such as motivation or connections), and that this is the bottleneck for impact for many people. And I also wouldn't denounce the concept of "being an EA" too quickly, as it still means something like "I think carefully about helping others, compared to the default scope-insensitive notion of doing good".
But convincing people of this basic concept is really an easy win in my opinion. The more important challenge in my view is getting people to develop their own, independent outlooks on maximizing impact. 

Michael Noetel @ 2022-11-22T01:34 (+13)

I really like this framing, Gideon. It seems aligned with CEA's Core EA principles. I'd love EA to be better at helping people learn skills. One of our working drafts for an EA MOOC focuses more on those core principles and skills. Is something like this work-in-progress closer to what you had in mind?

Davidmanheim @ 2022-11-19T16:07 (+11)

This seems very plausibly a better direction. I think we agree there is something wrong, and the direction you're pointing may be a better one - but I'm concerned, because I don't see a way to make an existing and large community shift, and I think that we need a more concrete theory of change...

Speaking of which, "I still don’t know where to find a good, simple article or video that describes how to create a theory of change" - you should have asked! I'd recommend here and here. (I also have a couple more PDFs of relevant articles from classes in grad school, if you want.)

GidiKadosh @ 2022-11-19T16:17 (+2)

Don't you think that CEA can run a large community shift, by changing the guidance and incentives for local groups?

(Thank you so much for the material! Seems better than the material I have today, but I think we need much simpler and more communicative material for proper out-facing community building)

Davidmanheim @ 2022-11-19T17:01 (+8)

I hope they can, but I don't know that it's so easy to direct groups. My biggest concern is with college EA groups, where well-intentioned 21-year-olds with very limited life experience are running groups, often without much external monitoring.

And regarding material for theories of change, I'm skeptical that it can be taught well without somewhat deep engagement. In grad school, it took thinking, feedback, and practice to get to the point where we could coherently lay out a useful theory of change.

aogara @ 2022-11-18T17:34 (+31)

One small personal experience: I worked a non-EA job for three years. None of my close friends were interested in EA, and my job wasn’t in a highly impactful cause area. I developed some other interests during those years, reading a lot about startups and VC and finance. Despite my enthusiasm when I first read Peter Singer and Doing Good Better, I think my interest in working on EA topics could have slowly faded and been replaced with other interesting ideas.

The EA community was a big part of what kept me engaged with EA. This forum was a steady stream of information about how to do good in the world, and one that allowed me to voice my own opinions and have lots of interesting conversations. I attended two online EA Globals which mostly made me identify more as an EA. Later I went back to school, where the university EA group leader reached out and encouraged me to join a reading group. We had weekly dinners and great conversations, and only a few months later, I quit my part-time job at a for-profit startup and began working on AI Safety.

It’s hard to say what the counterfactual is, but I think the odds I’d be working on AI Safety right now would have been much lower without the identity, personal connections, and intellectual engagement from the EA community. Part of it is nerdsniping — it’s not always easy to find smart, sensible conversations about the world, but I’ve always found EA to provide plenty of them. There are real downsides — I used to think that I deferred way too much on my opinions about cause prioritization (I think I’ve improved, but maybe I’ve just lost my independent thinking). Your post is a great analysis of those dynamics and I’m not trying to argue for a bottom line, but just wanted to share one personal benefit of the community.

dyusha @ 2022-11-18T16:09 (+30)

The first time I heard about effective altruism was when someone told me "you should check out your local EA community - it's where the cool people hang out". Indeed this turned out to be the case; I met thoughtful, curious people, and got quite interested in the ideas of effective altruism.

When I moved a year later to a different city in a different country, I spent a few weeks lamenting the difficulty of meeting people, until I realized I could go have thoughtful and interesting conversations with wonderful people at the local EA community. In a way, it feels not unlike churches in towns in the United States - you can move from one state to another, and yet the next Sunday you'll find a tight-knit community of people who approximately share a large subset of your values. And while in some denominations of Christianity there's a large global hierarchical structure, as far as I am aware, there's no global fund for starting new churches (I may be totally wrong - I don't actually know much about this. I have heard of church planting, but even there, the new church "must eventually have a separate life of its own and be able to function without its parent body").

One of the major reasons people I've met valued the global EA community was due to the coordination problems of otherwise getting a lot of people to start some new initiative. That is, say you're trying to start an organization centred around giving RL agents numinous experiences. The median number of people who think this is the most worthwhile thing to work on, in each local EA community, is exactly zero, but through the connections formed by the global community (through EAG and forums) it's a lot easier to find people interested in starting rLANE, which would never have been possible on the local scale.

Fundamentally, at my level of involvement with the EA community, I'm not sure I've witnessed the failure modes that have been taken as the antecedent for this whole post. Small, local communities haven't felt elitist in the same way that nobody calls the local swing dance club elitist, despite it being very obvious whether you're an experienced swing dancer or not. The global community has been very good at solving coordination problems to tackle specific challenges, and I haven't witnessed any downsides personally (but once again, I've never been accepted to an EAG or anything, and I've never lived in any of the "big" EA cities). If decentralization is the correct direction to go down, I think the "who's interested in helping me work on <x>?" is one of the most valuable things to make sure has a strong foundation of interconnectedness between different communities in different places.

Dvir Caspi @ 2022-11-18T10:36 (+26)

As someone who has been deeply into community building for years (most of it outside EA), I am biting my lip yet upvoting this.  I deeply agree that "Being an EA" as an identity has problematic implications, to say the least.  While I have many thoughts, for now I'll just highlight what you wrote which for me is the most important: "[we should be] convincing individuals to consider the ideas, not to “join EA.”". 

Michael M-n @ 2022-11-19T17:24 (+25)

I'm not a member of the EA community, and in fact have been quite sceptical of it, but I do believe in the idea of altruism being effective, so I wanted to engage on a post like this. For context, I work on public health in developing countries, and have worked in a variety of fields in the traditional aid sector, from agriculture to women's rights to civil society - in my observation public health is the most effective, followed by agriculture. While I'm sceptical of EA as a community, I do believe in some of the tenets, and even use services like GiveWell to guide my own donations. I wanted to ask some questions, if you or community members are willing to answer - bear with me as I don't know the EA jargon that well.

One of the main reasons I'm sceptical is sort of a generalised scepticism of cliques, identities, and subcultures in general. Human beings are social animals, and we naturally seek status. So when a community/subculture forms, suddenly people seek status in it, seek to associate with its 'leaders' or popular causes, this short circuits the ostensibly rational analysis people think they're doing. Of course we don't know we're doing this, we think we're being perfectly rational, but a lot of what I hear coming out of the EA community seems to follow this dynamic - longtermism, crypto, FTX and its supporters all feel typical of social dynamics in other communities, and I just don't think there's any way for human beings to get around this. Does this make the idea of an EA 'community' self-defeating?

Similarly, the other aspect I don't see EA (as it filters to the outside) dealing with well is humility, specifically humility about one's own motivations and humanity. Knowing we're susceptible to all sorts of faults, we should acknowledge these when we make plans. When we observe thousands of people saying 'I'll do good things when I'm rich/powerful', and then getting caught up in their own world, it seems absurd to say 'aha, but I'll be different!' We have to assume that some of our motivations aren't entirely pure or rational. EA seems to think we can put that aside with enough technical terminology and get to pure reason, but I just don't see it happening. Once it becomes a human social structure, how can humility remain a part of an EA community? Human communities just don't tend to work that way - and the individuals who DO see subtleties tend to lose status in closed communities.

My own view is that things that work in the world are rare, so when you find one you need to do what you can to replicate or widen it. You also need to double-check it constantly – for my work I insist on constantly interacting at the clinic level to see if what I'm building is actually used and useful, and change it if it isn't. I want to fully acknowledge the massive pathologies of the formal aid sector, but I work to mitigate those in the course of my job. I haven't, to be honest, seen anything from the EA community that would help me with that other than an articulation of fairly obvious general principles. So what is it all for?

Davidmanheim @ 2022-11-20T07:17 (+3)

Does this make the idea of an EA 'community' self-defeating?


I think that it's possible to have a community that minimizes the distortion of these social dynamics, or at least is capable of doing significant good despite the distortions - but as I argued in the post, at scale this is far harder, and I think it might be net negative to try to build a single global community the way EA seems to have decided to do.
 

My own view is that things that work in the world are rare, so when you find one you need to do what you can to replicate or widen it. 

Agreed - and that was one of the key things early EA emphasized - and it's been accepted, in small part due to EA, to the point that it is now conventional wisdom.

I want to fully acknowledge the massive pathologies of the formal aid sector, but I work to mitigate those in the course of my job. I haven't, to be honest, seen anything from the EA community that would help me with that other than an articulation of fairly obvious general principles.

I don't think that EA as a movement is well placed to provide ways to reform traditional aid. As you point out, it has many pathologies, and I don't think there is a simple answer to fix a complex system deeply embedded in geopolitics and social reality. I do think that EA-promoted ideas, including giving directly, have the potential to displace some of the broken systems, and we should work towards figuring out where simpler systems can replace  current complex but broken ones. I also think that an EA-like focus on measuring outcomes helps push for the parts of traditional aid that do work. That is, it identifies specific programs  which are effective and evaluates and scales them. This isn't to say that traditional aid doesn't have related efforts which are also successful, but I think overall it's helpful to have external pushes from EA and people who embrace related approaches for this work.

Michael M-n @ 2022-11-20T23:49 (+2)

Thanks for this response! I don't want to go too deep on the traditional aid sector, being still in it and all, but I do think they could do with a lot more thinking about real effectiveness. Or even just to occasionally step back, and think 'what are we trying to do here?' I don't disagree with anything you've written, except to wonder if it's even possible to have non-global communities anymore, and if even small scale communities succumb to the same dynamics.

Just when an idea becomes popular it becomes a community, and the community imports the social dynamics. I suppose if I had to say what I would envision for something EA-like, it's in line with what you said about conventional wisdom – a kind of invisible force that no one really identifies with, but which nudges decisions in a better direction. To some extent I wonder if game theory and microeconomics have maybe achieved this – people seem to subconsciously think a lot more in terms of cost/benefit than they did 20 years ago. But whenever an online community becomes a 'thing', I really feel like the social dynamics overwhelm it – and my experience living in Berlin suggests to me that even the tiniest subcultures develop the same dynamics.

Speaking of geopolitics and social reality, do you think EA grapples with that well? In my experience one of the most crucial elements of effectiveness for aid projects has been good buy-in from the local government and community, which can be a messy, political and extremely tedious process, and I'm lucky enough to have an employer that takes the time. What's the EA opinion on 'do something suboptimal because otherwise one department of a ministry will hate you and your whole project is screwed'?

Dancer @ 2022-11-19T19:01 (+2)

Welcome! I'm a bit pushed for time but thought I'd offer up an answer to your question:

So what is it all for?

If I had to pick one USP of EA it's the serious attempt to prioritise between causes.

ZacharyRudolph @ 2022-11-18T18:11 (+20)

I appreciate this post, but pretty strongly disagree. The EA I've experienced seems to be at most a loose but mutually supportive coalition motivated by trying to most effectively do good in the world. It seems pretty far from being a monolith or from having unaccountable leaders setting some agenda. 

While there are certainly things I don't love, such as treating EAGs as mostly opportunities to hang out and some things like MacAskill's seemingly very expensive and opaque book press tour, your recommendations seem like they would mostly hinder efforts to address the causes the community has identified as particularly important to work on.

For instance, they'd dramatically increase the transaction costs for advocacy efforts (i.e. most college groups) aimed at introducing people to these issues and giving them an opportunity to consider working on solving them. One of the benefits of EA groups is that they allow a critical mass of people to become involved where there might not be enough interest to sustain clubs for individual causes (and, again, there are costs to people needing to organize multiple groups). In effect, this would mostly just cede ground and attention to things like consulting, finance, and tech firms.

Similarly, we shouldn't discount the (imo enormous) value of having people (often very senior people) willing to offer substantial help/advice on projects they aren't involved with simply because the other person/group is part of the same community and legibly motivated for similar reasons. I can also see ways in which a loss of community would lead to reduced cooperation between orgs and competition over resources. It seems important to note too that being part of a cause-neutral community makes people more able to change priorities when new evidence/arguments emerge (as the EA community has done several times since I've been involved).

I think proposals of this kind really ought to be grounded in saying how the arguments the community has endorsed for some particular strategy are flawed, e.g. showing how community building is not in fact impactful. We generally seem to be over-updating on a single failure (even allowing that the failure was particularly harmful).

Note: wrote this fairly quickly, so it's probably not the most organized collection of thoughts.

DMMF @ 2022-11-19T00:58 (+18)

I don't know whether an EA community is a good thing, but as a related point, I think it's worth sharing that the EA community as it currently exists, and in particular EA leadership, has done a very poor job of advancing the interests of EA causes.

At present, EA has an awful reputation, and most people view the community with contempt and its ideas as noxious.

Candidly, I'm embarrassed to share any affiliation I have with EA to colleagues and non-close peers.

This didn't have to be this way and frankly, given the virtue of EA, it takes a special type of failure to have steered the community down this path.

I think EA would be significantly better served if a number of leading EA orgs and thought leaders dramatically reevaluated their role, strategy and involvement with EA.

Dancer @ 2022-11-19T11:25 (+14)

Candidly, I'm embarrassed to share any affiliation I have with EA to colleagues and non-close peers.

I only realised this recently, but honestly I think most people are embarrassed to share almost any ethical (political, religious, philosophical, etc) affiliation with their colleagues and non-close peers. So I think that avoiding that is an unreasonable standard to expect or be aiming for.

This didn't have to be this way and frankly, given the virtue of EA, it takes a special type of failure to have steered the community down this path.

I can see why you'd see things this way as an EA adherent. But I'm sure many members of various political parties, movements, religious groups etc feel the same - given that our ideas are so obviously incredibly virtuous, it's exceptionally bad that our leaders have nevertheless managed to make us look bad to most people outside of the community (i.e. the set of people who don't think it looks so good that they've joined).

I think EA would be significantly better served if a number of leading EA orgs and thought leaders dramatically reevaluated their role, strategy and involvement with EA.

PR is hard. Extremely hard. Otherwise you wouldn't have this situation where affiliations with pretty much any group generally lose you a bit of status in the eyes of people outside those groups. I think a better indicator of success is growth, assuming you want growth. And I actually think that when 'EA' has wanted growth, we've generally been very good at it.

And that's in spite of the limitation that a significant fraction of the community chastises others in the community for ever caring about PR.

Davidmanheim @ 2022-11-19T16:10 (+4)

It seems like you're ignoring that he said EA has an actively bad reputation, and viewing this as a generic claim about not wanting to share a view others don't embrace.

Dancer @ 2022-11-19T17:39 (+8)

I reeeeeaally don't want to get into harmful stereotypes here obviously so I'll just pick a few real examples in my own case: when I've told old colleagues, family, school-friends etc that I was vegan, an environmentalist, a feminist, I think their reaction was quite a bit worse than simply "I don't agree." But maybe we move in different circles.

Of course it may be the case that EA really does have "an awful reputation and most people view the community with contempt and its ideas as noxious." But as any movement grows, it's bound to attract more and more bad (and good) press, and you're going to feel embarrassed talking about it. In times of bad press it may well feel like the bad press has taken over your entire brand, that the movement is doomed, that it took "a special type of failure" to cause this, and that leadership should "dramatically reevaluate" their involvement in EA - i.e. I'm generally expecting people to be more pessimistic than they should be right now.

And even before all this I'd been noticing in myself that I was taking "I'm embarrassed to tell people I'm an EA" as a doom-y sign, but then I realised basically all causes have this effect so that in itself is not sufficient for me to conclude that we'd done a really bad job of PR. So I wanted to point this out to anyone I suspected was thinking similarly.

PeterSlattery @ 2022-11-18T09:01 (+18)

(on phone so quick thoughts) Thank you for writing this. It is very brave! I am actually quite sympathetic to your arguments. I would like EA to evolve over time from being a movement into being a boring norm of behaviour and reasoning etc., just as being a suffragette gradually ceased to have any purpose. However, I think that doing so right now is a bit premature. I'd welcome some more debate though.

Davidmanheim @ 2022-11-18T09:19 (+5)

I agree that it's premature - I don't think we should cancel EAG in 2023, but I do think that we're likely to make minor changes and keep doing poorly in various ways if we aren't explicitly thinking about the question of what the community should be. 

And I'm certainly open to hearing if and why I'm wrong. But I worry that if we aren't thinking about what the community looks like in a decade, we'll keep stumbling forward ineffectually, with unnecessary and unexpected problems and failures from unplanned growth.

PeterSlattery @ 2022-11-18T10:28 (+6)

Thanks. A few more quick thoughts! I actually don't think that the community failed here. I am pretty sure that all communities have bad actors, and EA probably has fewer than many communities that are doing quite well. We hold ourselves to a very high standard.

With that said, I agree that we should think more about what we want the community to become and how to get it there, and would love to hear some visions. Here is a quick attempt at making some predictions for how things go in the next 10-20 years (if we are still around):

  • EA becomes an established academic discipline and more mainstream. Something like what I think happened with psychology, economics and various other new disciplines/intellectual innovations.
  • Major conceptual innovations, like using the ITN framework and expected value in philanthropic settings, go mainstream and are now widely used in relevant settings.
  • The overall EA identity weakens and EA starts to become one of many relatively uninteresting topics that people are interested in rather than a cool new identity to have. 
  • People start forming identities and groups focused on current and new cause areas rather than EA.
  • Cause areas/EA-related actions (e.g., AI safety or earning to give) become like academic/professional communities with their own conferences and subcultures. We start seeing some quite well managed and formalised community building by specific organisations as they scale up.
  • We never properly manage things at the EA community level, due to the impossible-to-match increase in the scale and complexity of all the causes and their sub-communities, and due to our decentralised nature. We continue to have informal groups etc but it never really gets more formalised or much larger in scale than now.
  • EAGs continue. These function like academic conferences and start being funded by large cause area orgs to find hires and allow people in their areas to meet and collaborate etc. 
  • The EA forum becomes much less used, having achieved its goal of facilitating and incubating many new ideas, concepts and connections that have now spun off into their own more specific areas of focus or gone mainstream. Again, it's like a psychology forum or similar at this point, and most people are more interested in something more specific so use more specific forums.

That's all pretty rushed because I need to go to sleep now (actually 30 minutes ago), but it was interesting to attempt, and I would like to hear thoughts or other perspectives.

PeterSlattery @ 2022-11-18T21:58 (+11)

Interesting to see that people disagree with me. I am interested to hear why if anyone wants to share.

Lumpyproletariat @ 2022-11-19T14:40 (+3)

If I had to guess, I'd point at having a long bulleted list of different specific predictions about the future as a risk factor for someone registering disagreement. 

Davidmanheim @ 2022-11-18T11:40 (+6)

I think that my earlier attempt to discuss this mostly matches what you're saying. 

A key thing that changed is that I no longer think we should try to "manage things at the EA community level" - and if we're not attempting that, we should reconceptualize what it means to be good community members and leaders, and what failure modes we should anticipate and address.

The other thing I want is more ambitious - ideally, in 20+ years I want the ideas of prioritizing giving part of your income, viewing the future as having at least some level of moral priority, and cause neutrality to all look like women's suffrage does: so obviously correct and uncontroversial that it's not a topic of discussion.

Nathan Young @ 2022-11-18T17:20 (+13)

Communities give hugely leveraged power to those who inform and represent them - you can get lots of people to change their minds or say "the community believes X"

While I think these powers are open to abuse, it's not to say that they aren't valuable. 

A cost-benefit analysis seems appropriate here (and there have been a lot of costs recently) rather than suggesting that there are no benefits.

Davidmanheim @ 2022-11-19T16:13 (+7)

First, I didn't say there are no benefits. 

Second, I don't have a clear enough vision to lay out the alternative.

Third, I'm skeptical of doing a cost-benefit analysis without considering options for a specific actor. As someone not controlling any of the decisions, I can't usefully compare specific alternative actions I would take. If CEA wants to evaluate 3 different potential strategies, they could do a useful CBA.

Chris Leong @ 2022-11-18T10:02 (+12)

I think it’s interesting to explore far-out ideas, and I suppose it might make sense from the perspective of someone focused on near-termism.

However, as someone more focused on AI safety, one of the cause areas that is more talent-dependent and less immediately legible, this seems like it would be a mistake.

If the community is uncertain between the causes, I suggest that it probably wouldn’t be a good idea to dismantle the community now, at least if we think we might obtain more clarity over the next few years.

Davidmanheim @ 2022-11-18T10:06 (+26)

I think that AI safety needs to be promoted as a cause, not as a community. If you have personal moral uncertainty about whether to focus on animal suffering or AI risk, it might make sense to be a vegan AI researcher. But if you have moral uncertainty about what the priority is overall, you shouldn't try to mix the two.

People in machine learning are increasingly of the opinion that there is a risk, and it would be much better to educate them than to try to bring them into a community which has goals they don't, and don't need to, care about.

PabloAMC @ 2022-11-18T14:24 (+2)

But if we were to eliminate the EA community, an AI safety community would quickly replace it, as people are often attached to what they do. And this is even more likely if you add any moral connotation. People working at a charity, for example, are drawn to build an identity around it.

Chris Leong @ 2022-11-18T13:00 (+2)

I’d suggest that we need multiple paths for drawing talent, and general EA community building has been surprisingly successful so far.

HausdorffSpace @ 2022-11-18T13:22 (+17)

Honest question: isn't one option for the AI Safety community to simply be the AI Safety community, independent of there being an EA community?

I understand the idea of the philosophy of effective altruism and longtermism being a motivation to work in AI Safety, but that could as well be a worry about modern ML systems, or just sheer intellectual interest. I don't know if the current entanglement between both communities is that healthy.

EDIT: Corrected stupid wording mistakes. I wrote in a hurry.

Davidmanheim @ 2022-11-18T13:36 (+12)

I certainly think that having an academic discipline devoted to AI safety is an option, but I think it's a bad idea for other reasons; if safety is viewed as separate from ML in general, you end up in a situation similar to cybersecurity, where everyone builds dangerous shit, and then the cyber people recoil in horror, and hopefully barely patch the most obvious problems.

That said, yes, I'm completely fine with having informal networks of people working on a goal - it exists regardless of efforts. But a centralized effort at EA community building in general is a different thing, and as I argued here, I tentatively think this is bad, at least at the margin.

HausdorffSpace @ 2022-11-18T16:33 (+1)

I agree with you insofar as separating AI safety from ML is terrible, since the objective of AI safety, in the end, is not to only study safety but to actually implement it in ML systems, and that can only be done in close communication with the general ML community (and I really enjoyed your analogy with cybersecurity).

I don't know what the actual current state of this communication is, nor who is working on improving it (although I know people are discussing it), but a thing I want to see at least is alignment papers published in NeurIPS, ICML, JMLR, and so on. My two-cent guess is that this would be easier if AI safety were more dissociated from EA or even longtermism, although I could easily envision myself being wrong.

EDIT: One important point to clarify is that "more dissociated" does not mean "fully dissociated" here. It may well be that EA donors support AI safety research, effective altruism as an idea makes people look into AI safety, and so on. My worry is AI safety being seen by a lot of people as "that weird idea coming from EA/rationalist folks". No matter how fair this view actually is, the point is that AI safety should be popular and non-controversial if safety techniques are to be adopted en masse (which is the end goal).

Chris Leong @ 2022-11-18T13:51 (+2)

I’m in favour of direct AI safety movement building too, but the point still remains that the EA community is a vital talent pipeline for cause areas that are more talent-dependent. And given the increasing prominence of these cause areas, it seems like it would be a mistake to optimise for the other cause, at least when it’s looking highly plausible that the community may shift even further in the longtermist/x-risk direction over the next few years.

pseudonym @ 2022-11-19T01:43 (+1)

The shift to longtermism/x-risk to me seems to have been an intentional one, but your comment makes it sound otherwise?

Chris Leong @ 2022-11-19T06:54 (+4)

I don't know what you mean by intentional or not.

But my guess is that the community will shift more long-termist after more people have had time to digest What We Owe the Future.

pseudonym @ 2022-11-19T07:39 (+1)

The community is shifting more long-termist because of intentional decisions that were made - there's no reason these shifts have to be locked in if there happens to be a good reason to shift away from them (not suggesting there is one!). If the shift turns out to be a mistake in the future, we should be happy to move away from it, not say "oh but the community may shift towards it in the future", especially when that shift is caused by intentional decisions by EA leadership.

Chris Leong @ 2022-11-19T07:44 (+3)

I guess this is why I asked what you meant.

Publishing What We Owe the Future was an intentional decision, but there's a sense in which people read whatever people write and make up their own minds.

"Oh but the community may shift towards it in the future" - I guess some of these shifts are pretty predictable in advance, but that's less important than the point I was making about maintaining option value especially for options that are looking increasingly high value.

Asa Cooper Stickland @ 2022-11-18T11:07 (+9)

Can you expand a bit on what you mean by these ideas applying better to near-termism?

E.g. out of 'hey, it seems like machine learning systems are getting scarily powerful, maybe we should do something to make sure they're aligned with humans' vs 'you might think it's most cost effective to help extremely poor people or animals, but actually if you account for the far future it looks like existential risks are more important, and AI is one of the most credible existential risks, so maybe you should work on that', the first one seems like a more scalable/legible message or something. Obviously I've strawmanned the second one a bit to make a point, but I'm curious what your perspective is!

Chris Leong @ 2022-11-18T12:59 (+2)

Maybe I should have said global health and development, rather than near-termism.

RayTaylor @ 2023-09-28T17:57 (+8)

Hi David, I think I follow your thinking, but I'm not hopeful that there is a viable route to "ending the community" or "ending community-building" or ending people "identifying as EAs", even if a slight majority agreed it was desirable, which seems unlikely.

On the other hand, I very much agree that a single Oxford- or US-based organisation can't "own" and control the whole of effective altruism, and that aiming not for a "perfect supertanker" but a varied "fleet" or "regatta" of EA entities would be preferable, and much more viable. Then supervision and gatekeeping and checks could be done in a single timezone, and the size of EA entities and groups could be kept at 150 or less. Also, different EA regions or countries or groups could develop different strengths.

We'd end up with a confederation, rather like Oxfam, the Red Cross, Save the Children etc. (It's not an accident that the federated movements often have a head office in the Netherlands or Switzerland, where the laws on what NGOs/charities can and can't do are more flexible than in the UK or USA, which is kinda helpful for an 'unusual' movement like EA.)

Oxfam themselves also formed INTRAC as a training entity, and one could imagine CEA doing something similar, offering training in (for example)
- lessons learned
- bringing in MEv trainers for evaluation training
- PLA trainers for participatory budgeting etc.

cookiecut @ 2022-11-19T00:47 (+8)

Philosophers have perhaps struggled with these impossible questions before,

https://brill.com/view/journals/rip/26/1/article-p25_2.xml?language=en

-- being new here, I have the feeling that for all of its love of applied utilitarian philosophy, this community is disappointing in its lack of openness to other philosophical readings.

I understand that due to the recency and impact of events there is a work of mourning happening in this forum, and proposing healthy paths for this, if possible, is important to those who have belonged and formed affinities and conversations. The trajectory change has already happened; it just may not be recognizable yet.

Dancer @ 2022-11-19T11:54 (+4)

https://brill.com/view/journals/rip/26/1/article-p25_2.xml?language=en

--being new here, I have the feeling that for all of its love of applied utilitarian philosophy, this community is disappointing in its lack of openness to other philosophical readings

I think this is a bit like saying, "[Link to avocado salad recipe.] For all its love of cookies, this Sugar Appreciation Society is disappointing in its lack of openness to other foods."

We're not only about utilitarianism, sure. A lot of us take moral uncertainty seriously. We realise that other philosophical traditions might be helpful in guiding us towards good rule/prudent utilitarian practices and providing us with inspiration in our personal lives outside of EA. Some of us have much stronger beliefs in other ethical theories than we do in utilitarianism.

But if a 'trajectory change' means that 'EA' on the whole stops resembling utilitarianism significantly more than other ethical theories, I personally wouldn't consider that EA any more.

cookiecut @ 2022-11-19T12:24 (+1)

Perhaps it is more like saying that the Sugar Appreciation Society tends to have a certain blindness to 1. the effects of sugar (which in extreme cases may cause blindness) 2. the effects of appreciation or fanaticism or sweetness as a fetish or as a means of persuasion into a cause 3. the complex origins and history of sugar and its links to empire and exploitation.

But it is hard not to like SAS. As a non-SAS member I am in awe of what has been built and discussed, and even of the openness that does exist, which is greater than in other societies. I am at fault for pointing towards its closedness when I am new, and prefer not to be as open about my love of sugar but engage in it perhaps less consciously than I would like, and must also acknowledge that sugar is everywhere.

But if SAS looks at itself in the mirror, it wants to see only a sweet image. If that image changes, as it must, into a saltier or spicier one as it engages with other perspectives, SAS may become unrecognizable, and may fear that, as one fears a ghost?

ludwigbald @ 2022-11-21T23:17 (+7)

A few thoughts, excuse the mess:

I think that local groups should continue to exist, they are what makes up the community. In my view, you can run a local group well without outside funding. In any case, your local group should not be dependent on outside funding.

I'd like to see CEA run a topic-specific conference.

I'd like to see more democratically organized EA structures, like the German https://ealokal.de

I'd like to see more grassroots community events by and for community members.

None of this would have prevented SBF from committing fraud, but we would feel fewer ripple effects in the community.

pseudonym @ 2022-11-18T09:40 (+6)

I don’t think that what Cremer and Kemp suggest is the right approach, nor are Cremer’s suggestions to MacAskill sufficient for a large and growing movement, but some are necessary, and if those measures are not taken, I think that the community should be announcing alternative structures sooner rather than later.

Which ones do you think are necessary? 

Davidmanheim @ 2022-11-18T10:08 (+18)

Ombudsmen and clear rules about and norms of protection for whistleblowing, more funding transparency, and better disclosure about conflicts of interest.

(None of these relate to having a community, by the way - they are just important things to have if you care about having well run organizations.)

cookiecut @ 2022-11-19T12:43 (+1)

You mean something like this? https://www.carnegie.org/about/governance-and-policies/

I feel like Effective Ventures, the parent organization here, lacks this and more. One does not know whether it still has a startup feel and has not yet gotten to the point of giving clarity on these things, or whether the lack of clarity is part and parcel.

nongiga @ 2022-11-22T13:16 (+4)

This hurts but it checks out.

Sam Elder @ 2022-11-18T21:59 (+3)

I think there’s a kernel of truth to this suggestion! I would put it this way: EA should be global, and it should continue to be powered by communities, but those communities should be local and small.

First, work in specific cause areas should continue to happen globally, but should not operate with an automatic assumption of trust.

“Low trust” wouldn’t mean we stop doing a lot of good; it would just mean that we need to be more transparent and rigorous, rather than just having major EA figures texting with billionaires and the rest of us just hoping they do it right.

GiveWell is a great example of an EA institution built in a “low trust” framework. And it’s great! It would be very hard for me to imagine anything nearly this bad happening in the global health and well-being side of EA precisely because of places like GiveWell.

But small, high-trust community is also great! And we should encourage that — more local meetup groups, more university groups. I agree that funding for such things, after the “startup phase,” should also be a bit more locally-sourced, with alumni funding university groups since students usually lack sources of income.

When I first hosted some EAs for a dinner in my home, someone in the group asked if I wanted funding for it. I had enough context in the EA community that I knew where this sentiment was coming from, but I still found it weird to imagine the tendrils of central EA funds reaching all the way down into my little house dinner. And given that the source was probably, at least counterfactually/fungibly, FTX, I’m glad I didn’t take it.

RayTaylor @ 2023-09-28T19:06 (+1)

I think it's wise to separate the FTX and due diligence issue from the broader thesis. Here I'm just commenting on due diligence with donors.

Who was/is responsible for checking the probity or criminality of ...

 (a) FTX and Alameda?

 (b) donors to a given charity like CEA? (I put some links on this below)

(a) First it's their own board/customers/investors, but presumably supervisory responsibility is or should also be with central bank regulators, FBI, etc. If the CEO of a company is a member of Rotary, donates to Oxfam, invests in a football team, it doesn't suddenly become the primary responsibility of all of those entities (ahead of board, FBI etc) to check out his business and ethics and views, unless (and this is important) he's going to donate big and then have influence on programmes, membership etc. 

(See the links below on how both due diligence and reputational considerations* can matter a lot to the recipient charity. If there is some room for doubt about the donor, but it doesn't reach a threshold, it may be possible to create a layer of distance or supervision, e.g. create a trust with its own board, which does the donating.)

(b) Plenty of charities accepted donations from Enron, Bernie Madoff and others. 

Traditionally, their job is to do their job, not evaluate the probity of all their donors. However, there has been a change of mood since oil industry disinvestment campaigns and the opioid crisis (with the donations from the Sackler family, here in the UK at least**). Political parties are required to do checks on donors. 

Marshall Rosenberg turned down lots of people who wanted to fund NVC and the cNVC nonprofit, because he felt that taking money put him into relationship with them, and some companies he just didn't want to be in relationship with. This worked well for him, and made sure there was no pressure to shift focus, but it did frustrate his staff team quite often.

It might be possible as a matter of routine policy to ask large donors if they are happy to have their main income checked, especially if they want to be publicly associated with a particular project, or to go more discreetly to ratings agencies and so on. A central repository of donor checks could be maintained, to minimise costs. This wouldn't be perfect, but a due diligence process, ideally open and transparent, would sometimes be a good defence if problems arise later? 

These are the (more minimal) UK Charity Commission guidelines on checking out your donors, and even this might have helped if it had been done rigorously:

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/550694/Tool_6.pdf 

Here's a plain English version where the overall advice is "be reasonable":

https://manchestercommunitycentral.org/sites/manchestercommunitycentral.co.uk/files/Ethical%20fundraising%20-%20how%20to%20conduct%20due%20diligence%20on%20potential%20donors_0.pdf 

**This is for bigger donations:
https://www.nao.org.uk/wp-content/uploads/2017/08/Due-diligence-processes-for-potential-donations.pdf  

*This is about how things went wrong for Prince Charles's charities:
https://www.charitytoday.co.uk/due-diligence-for-charities-ensuring-transparency-and-trustworthiness/