EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism

By Rockwell @ 2023-02-09T22:20 (+27)

This is a linkpost to https://docs.google.com/document/d/1BVTuCn7NKdUZAA8DWR9eCsPD7Gcja0fWPnDs_Hy8R9Q/edit?usp=sharing

In light of recent events in the EA community, several professional EA community builders have been working on a statement for the past few weeks: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism. You can see the growing list of signatories at the link.

We have chosen to be a part of the effective altruism community because we agree that the world can and should be a better place for everyone in it. We have chosen to be community builders because we recognize that lasting, impactful change comes out of collective effort. The positive change we want to see in the world requires a diverse set of actors collaborating within an inclusive community for the greater good. 

But inclusive, diverse, collaborative communities need to be protected, not just built. Bigoted ideologies, such as racism and sexism, are intrinsically harmful. They also fundamentally undermine the very collaborations needed to produce a world that is better for everyone in it. 

We unequivocally condemn racism and sexism, including “scientific” justifications for either, and believe they have no place in the effective altruism community. As community builders within the effective altruism space, we commit to practicing and promoting anti-racism and anti-sexism within our communities.

If you are the leader/organizer of an EA community building group (including national and city groups, professional groups, affinity groups, and university groups), you can add your signature and any additional commentary specific to you/your organization (that will display as a footnote on the statement) by filling out this form.

Thank you to the many community builders who contributed to the creation of this document.


Duncan Sabien @ 2023-02-10T02:07 (+141)

I am opposed to this. 

I am also not an EA leader in any sense of the word, so perhaps my being opposed to this is moot. But I figured I would lay out the basics of my position in case there are others who were not speaking up out of fear [EDIT: I now know of at least one bona fide EA leader who is not voicing their own objection, out of something that could reasonably be described as "fear"].

Here are some things that are true:

However:

Intelligent, moral, and well-meaning people will frequently disagree about to-what-extent a given situation is explained by various bigotries as opposed to other factors. Intelligent, moral, and well-meaning people will frequently disagree about which actions are wise and appropriate to take, in response to the presence of various bigotries.

By taking anti-racism and anti-sexism and other anti-bigotry positions which are already overwhelmingly popular and overwhelmingly agreed-upon within the Effective Altruism community, and attempting to convert them to Anti-Racism™, Anti-Sexism™, and Anti-Bigotry™ applause lights with no clear content underneath them, all that's happening is the creation of a motte-and-bailey, ripe for future abuse.

There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would've been glad to see and enthusiastically supported and considered concrete progress for the community. It is indeed true that EA as a whole can do better, and that there exist new norms and new commitments that would represent an improvement over the current status quo.

But by just saying "hey, [thing] is bad! We're going to create social pressure to be vocally Anti-[thing]!" you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to "oh, so you're NOT opposed to bigotry, huh?"

Similarly, if four-out-of-five signatories of The Anti-Racist Pledge think we should take action X, but four-out-of-five non-signatories think it's a bad idea for various pragmatic or logistical reasons, it's pretty easy to imagine that being rounded off to "the opposition is racist."

(I can imagine people saying "we won't do that!" and my response is "great—you won't. Are you claiming no one will? Because at the level of 1000+ person groups, this is how this always goes.")

The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing. This is also a bad outcome, though, because it saps momentum for creating and signing useful versions of such a pledge. It's saturating the space, and inoculating us against progress of this form; the next time someone tries to make a pledge that actually furthers equity and equality, the audience will be that much less likely to click, and that much less willing to believe that anything useful will result.

The road to hell is paved with good intentions. This is clearly a good intention. It does not manage to avoid being a pavestone.

Rockwell @ 2023-02-10T03:01 (+37)

Thank you for the thorough feedback. Those involved in drafting the statement considered much of what you laid out and created a more substantive, action-specific version before ultimately deciding against it. There were several reasons for this decision, among them: not wanting to commit (often under-resourced) groups to obligations they would currently be unable to fulfill, the various needs and dynamics of different EA communities, and the time-sensitive nature of getting a statement out. We do not intend for this to be the final word and there is already discussion about follow-up collaborations.  We also chose to use the footnote method in the statement document to allow groups to make their own additional individual commitments publicly now.

I do want to push back on the idea that this statement is vacuous, counterproductive, and/or harmful. We chose to create this because of our collective, global, on-the-ground experiences discussing recent events with the communities we lead. I agree it should be silly or meaningless to declare one's opposition to racism and sexism. But right now, for many following EA discourse, it unfortunately isn't obvious where much of the community stands. And this is having a tangible impact on our communities and our community members' sense of belonging and safety. This statement doesn't solve this. But by putting our shared commitment in plain language, I believe we've laid a pavestone, however small, on the path toward a version of EA where statements like this truly are not needed. 

Jason @ 2023-02-10T16:24 (+27)

I wonder if the statement would have been stronger with a nod in that direction, e.g. something vaguely like: "We recognize that signing a statement is not enough.  As signatories, we will be considering specific ideas to combat racism and sexism in the context of the resources, needs, and dynamics of the specific community we help build. The organizers will be continuing to collaborate on a more substantive, action-specific proposal in the coming months."

Duncan Sabien @ 2023-02-10T03:08 (+10)

I would like for all involved to consider this, basically, a bet, on "making and publishing this pledge" being an effective intervention on ... something.

I'm not sure whether the something is "actual racism and sexism and other bigotry within EA," or "the median EA's discomfort at their uncertainty about whether racism and sexism are a part of EA," or what.

But (in the spirit of the E in EA) I'd like that bet to be more clear, so since you were willing to leave a comment above: would you be willing to state with a little more detail which problem this was intended to solve, and how confident you (the group involved) are that it will be a good intervention?

Guy Raveh @ 2023-02-11T00:55 (+31)

Just to be clear, I think many of us in the community are not uncertain about whether racism and sexism are part of EA. Rather I'm certain that they are, in the sense that many in the community exhibited them in discussions in the last few weeks.

Therefore it's very meaningful to see a large core of community builders speak out about this explicitly, including disavowing "scientific" racism and sexism specifically. I'm also especially glad to see the head of my own country's community among them.

Aptdell @ 2023-02-11T05:47 (+9)

Just to be clear, I think many of us in the community are not uncertain about whether racism and sexism are part of EA. Rather I'm certain that they are, in the sense that many in the community exhibited them in discussions in the last few weeks.

I think if we found a comment that you considered racist/sexist and asked the author if they thought their comment was racist/sexist, the author would likely say no.

I wish the Google Doc had been more specific. It could've said things like:

  • It's important to treat people with respect regardless of their race/sex

  • It's important to reduce suffering and increase joy for everyone regardless of their race/sex

  • We should be reluctant to make statements which could be taken as "scientific" justification for ignoring either of the previous bullet points

As written, it seems like the doc has the disadvantage of being ripe for abuse, without the advantage of providing guidelines that let someone know whether the signatories dislike their comment. I think on the margin, this doc pushes us towards a world where EAs are spending less time on high-impact do-gooding, and more time reading social media to make sure we comply with the latest thinking around anti-racism/anti-sexism.

dspeyer @ 2023-02-14T21:28 (+17)

We should be reluctant to make statements which could be taken as "scientific" justification for ignoring either of the previous bullet points

 

Thank you for stating plainly what I suspect the original doc was trying to hint at.

That said, now that it's plainly stated, I disagree with it. The world is too connected for that.

Taken literally, "could be taken" is a ridiculously broad standard. I'm sure a sufficiently motivated reasoner could take "2+2=4" as justification for racism. This is not as silly a concern as it sounds, since we're mostly worried about motivated reasoners, and it's unclear how motivated a reasoner we should be reluctant to offer comfort to. But let's look at some more concrete examples:

  • In early 2020, people were reluctant to warn about COVID-19 because it could be taken as justification for anti-Chinese racism. I can't actually follow the logic that goes from "a dangerous new disease emerged in China" to "I should go beat up someone of Chinese ancestry," but it seems a few people who had been itching for an excuse did. Nevertheless, given the relative death tolls, we clearly should have had more warnings and more preparations. The next pandemic will likely also emerge in a place containing people against whom racism is possible (base rate, if nothing else), and pandemic-preparedness people need to be ready to act anyway.
  • Similarly, many people tried to bury the fact that monkeypox was sexually transmitted because it could lead to homophobia.  So instead they warned of a coming pandemic.  False warnings are extremely bad for preparedness, draining both our energy and our credibility.
  • Political and Economic Institutions are a potentially high-impact cause area in both near- and far-term (albeit, dubiously tractable).  Investigating them is pretty much going to require looking at history, and at least sometimes saying that western institutions are better than others. 
  • Going back to Bostrom's original letter, many anti-racists have taken to denying the very idea of intelligence in order to reject it.  Hard to work on super-intelligence-based x-risk (or many other things) without that concept.
Aptdell @ 2023-02-15T13:18 (+4)

I think you make good points -- these are good cases to discuss.

I also think that motivated reasoners are not the main concern.

My last bullet point was meant as a nudge towards consequentialist communication. I don't think consequentialism should be the last word in communication (e.g. lying to people because you think it will lead to good consequences is not great).

But consequences are an important factor, and I think there's a decent case to be made that e.g. Bostrom neglected consequences in his apology letter. (Essentially making statements which violated important and valuable taboos, without any benefit. See my previous comment on this.)

For something like COVID, it seems bad to downplay it, but it also seems bad to continually emphasize its location of origin in contexts where that information isn't relevant or important.

"We should be reluctant" represents a consideration against doing something, not a complete ban.

Chris Gatewood @ 2023-02-13T05:45 (+4)

I think if we found a comment that you considered racist/sexist and asked the author if they thought their comment was racist/sexist, the author would likely say no.

James Watson's denial of having made racist statements is a social fact worth noting. Most "alt-center" researchers in HBD, and the latest euphemisms intended to scientifically reappropriate racism for metapolitical and game-theoretic purposes, will, perforce, never outright say this.

To be clear, I don't think many EAs are formally working in race science, and surely skeptical and morally astute EAs can have the integrity to admit to having made racist comments, or to reasonably disagree. (And no: as an African American EA on the left, I don't think we should unsubscribe every HBD-EA, Bostrom, etc., from social life. Instead, we should model a safer environment for us all to be wrong in, categorically. Effective means getting all x-risks and compound x-risks, etc. right the first time.)

But after mulling over most of the HBD-affirming defenses of Bostrom's email/apology that I've read or engaged with on the EA Forum, those that weren't obviously red pills by bad actors (yet were also highly upvoted), I think there are other reasons many of those EAs won't say their comments were racist, even if they themselves are not actually certain they are non-racist.

My hunch is that whether those EAs see HBD as part of the hard core or the protective belt of longtermism/EA's program may be a good predictor of whether they believe, and therefore would be willing to say, that their comments were racist.[1]

For these, among other reasons, I think this instance of Hirschman's rhetoric of reaction above is mistaken. It is not disvaluable that community builders in a demographically, socially, and epistemically isolated, elitist, technocratic movement like EA refuse to let the perfect become the enemy of the good: the best provisional statement clearly stating their stance on these issues.

I was relieved to see this, as well as the fact that Guy made the pushback I wish I had had time to make 3 days ago. If there's any way I can support your efforts, please let me know!

  1. ^

    For want of an intensional definition of value-alignment.

    I take little pleasure in suggesting that HBD-relevant beliefs, strongly coupled with, e.g., Beckstead et al.'s (frankly narrow and imaginatively lacking) stance on the most likely sources of economic innovation in the future, which may therefore have greater instrumental value to longtermist utopia, may be one contributing factor to this problem within EA. And even anti-eugenics has its missteps.

Lorenzo Buonanno @ 2023-02-10T10:41 (+27)

Writing very quickly as someone who signed this from EA Italy:

I agree that this 12-line letter is not perfect and will not solve racism or sexism, and will probably not do much (otherwise, these issues would have already been solved).

There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would've been glad to see and enthusiastically supported and considered concrete progress for the community.

If you think it's important and useful, please do work on this; it might be concrete progress for the community! Or, if those versions were already made, it might be useful to share them.

Similarly, if four-out-of-five signatories of The Anti-Racist Pledge think we should take action X, but four-out-of-five non-signatories think it's a bad idea for various pragmatic or logistical reasons, it's pretty easy to imagine that being rounded off to "the opposition is racist."

I would be extremely surprised by this. What % do you give that something like this will happen? If a request to sign reached me, I assume it reached hundreds or thousands of people.

This is also a bad outcome, though, because it saps momentum for creating and signing useful versions of such a pledge. It's saturating the space, and inoculating us against progress of this form; the next time someone tries to make a pledge that actually furthers equity and equality, the audience will be that much less likely to click, and that much less willing to believe that anything useful will result.

I used to share this thinking, and worry a lot about replaceability, but on the current margin it seems to me that the alternative to "thing" is almost always not "better thing" but "no thing". So I think it would have been useful if you had made concrete proposals for how the thing could be improved for next time, rather than what I perceive as disincentivizing people from generally doing stuff.
I wouldn't want Rob Mather not to found the Against Malaria Foundation out of fear of sapping momentum for creating an even better version of a bednet distribution org. I would agree with you if you could share some reason for expecting the counterfactual to be better (e.g. "there was this other, much better letter that I was just about to post, but now I don't want to spam people about this, so I will not").

The road to hell is paved with good intentions. This is clearly a good intention. It does not manage to avoid being a pavestone.

Imho the road to anywhere is paved with good intentions, and the most likely counterfactual is standing still, not moving in a better direction, unless you know of some existing and better plans that were hindered by this 12-line letter.

 

In terms of practical actions, someone from EA Italy (not me) is publishing a Code of Conduct this Sunday instead of in the next weeks, we're sharing an anonymous form on the website and via other channels, following the advice from EA Philippines (in addition to links to two contact people and the CEA community health team), and we're going to ask city and university groups to publish these as well.

It would probably have happened anyway, but likely a few weeks later, and it's nice to have links to resources from other groups while we figure out a strategy. I personally found the advice/experience from EA Philippines to be more useful; otherwise we might have just added contact info but forgotten to add an Italian anonymous form. So I would endorse asking various groups to share what practical actions they are taking, but it doesn't seem to me that this letter sapped momentum from doing that.

Not writing as anything

S.E. Montgomery @ 2023-02-10T03:05 (+17)

I disagree-voted on this because I think it is overly accusatory and paints things in a black-and-white way.

There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would've been glad to see and enthusiastically supported and considered concrete progress for the community.

Who says we can't have both? I don't get the impression that EA NYC wants this to be the only action taken on anti-racism and anti-sexism, nor did I get the impression that this is the last action EA NYC will take on this topic.

But by just saying "hey, [thing] is bad! We're going to create social pressure to be vocally Anti-[thing]!" you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to "oh, so you're NOT opposed to bigotry, huh?"

I don't think this is the case - I, for one, am definitely not thinking that anyone who chose not to sign didn't do so because they are not opposed to bigotry. (Confusing double-negative - but basically, I can think of other reasons why people might not have wanted to sign this.) 

The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing. 

I can think of better outcomes than that - the next time there is a document or initiative with a bit more substance, here's a big list of people who will probably be on board and could be contacted. The next time a journalist looks through the forum to get some content, here's a big list of people who have publicly declared their commitment to anti-racism and anti-sexism. The next time someone else makes a post delving into this topic, here's some community builders they can talk to for their stance on this. There's nothing inherently wrong with symbolic gestures as long as they are not in place of more meaningful change, and I don't get the sense from this post that this will be the last we hear about this. 

sphor @ 2023-02-10T03:50 (+11)

There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would've been glad to see and enthusiastically supported and considered concrete progress for the community. It is indeed true that EA as a whole can do better, and that there exist new norms and new commitments that would represent an improvement over the current status quo.

Can you give some details? 

Duncan Sabien @ 2023-02-10T04:56 (+7)

I mean, I don't have this hypothetical document made in my head (or I would've posted it myself).

But an easy example is something of the shape:

[EDIT: The below was off-the-cuff and, on reflection, I endorse the specific suggestion much less. The structural thing it was trying to gesture at, though, of something clear and concrete and observable, is still the thing I would be looking for, that is a prerequisite for enduring endorsement.]

"We commit to spending at least 2% of our operational budgets on outreach to [racial group/gender group/otherwise unrepresented group] for the next 5 years."

Maybe the number is 1%, or 10%, or something else; maybe it's 1 year or 10 years or instead of years it's "until X members of our group/board/whatever are from [nondominant demographic]."

The thing that I like about the above example in contrast with the OP is that it's clear, concrete, specific, and evaluable, and not just an applause light.

agunning @ 2023-02-11T05:49 (+8)

There's a thing of, like:

In the current environment there's a lot of discourse that goes like "EA needs to be less tolerant of weird people, especially people who do things like poly and kink, in order for EA to feel more safe for women."

Given that, the omission of homophobia, transphobia, etc., especially since these people are very overrepresented here, seems ... notable?

Liv @ 2023-02-10T09:35 (+63)

Hi.
I'm going to talk about sexism in particular here, since it is the problem mentioned in the declaration that I have had a chance to experience personally in my life.

I agree with every single point Duncan made, and I felt relieved seeing it.
To add to it: the declaration doesn't make me feel safe, quite the opposite. I feel that my "safe place where we are serious about problems and take the best possible actions to fight them" got a bit invaded (it's simply my own, purely emotional reaction, but since, I guess, this post was made to make me feel safe, let me share it). I am a part of EA because I'm impressed with how effective it is. I wish sexism were treated in the same way as malaria, because I think it deserves it. I want it to be eradicated. And I believe it's possible. I don't believe this declaration helps.

To me, the words used in the declaration feel empty and, to be frank, sometimes so vague that I have trouble understanding what exactly you wanted to communicate. I certainly can't say what exact actions you are declaring you will take.


Here are the actions I think would be better:
- Sexism is a VERY broad topic. I'd like to see which particular embodiment of sexism you, as community leaders, identify as the most prevalent and harmful. I would really like your analysis to be country- or culture-specific. I'd like to see numbers, and if not, solid qualitative analysis.

- I'd like to see a comparison of the impact of each form of sexism against the other issues the community faces, including the ones which are not spoken about or haven't recently been mentioned by the mainstream media.

- If in the process you decide that fighting a particular form of sexism or other discrimination is not something we should do (e.g. because it is not neglected), please focus your resources on those in the community who suffer more.

- I'd like to see plans of specific actions you want to take, addressing specific community issues (e.g. specific forms of sexism). I'd like to see evidence on how the actions are going to help and why they are the best solution.

- I'd like to see a vivid, open, rational, honest discussion about how exactly each defined problem can be addressed, and whether it's defined properly. I'd like the problem to be approached from so many angles that we are left with its pure and strict definition, and with a bulletproof action plan.

- Also, if you decide to deal with a particular community problem (e.g. a particular form of sexism), I'd like to know what it is, how it manifests, whether it concerns me (e.g. because of my age or location), how I can avoid it, and how I can help if needed. If this particular problem concerns me personally, I'd love to be asked how I am affected and how you can help; I'd like to feel listened to.

- Then, I'd like to see your chosen actions helping the community to be better. I'd like the impact to be measured and learned from.

Maybe you are currently working on all or some of the above. If yes, I think it would be helpful to me if you mentioned your specific efforts in the post, because this context would certainly change my perception. If you are not working on it, I think this post, unsupported by similar efforts, may actually have a negative impact (please see Duncan's arguments; I agree with them).

Severin T. Seehrich @ 2023-02-10T21:49 (+47)

Edit: I no longer agree with the content of this comment. Jason convinced me that this pledge is worth more than just applause lights. In addition, I don't think anymore that this is a very appropriate place for a slippery slope-argument.

_____________
I'd like to explain why I won't sign this document, because a voice like mine seems to still be missing from the debate: Someone who is worried about this pledge while at the same time having been thoroughly involved in leftist discourse for several years pre-EA.

So here you go for my TED talk.

I'm not a Sam in a bunch of ways: I come from a working-class background. I studied continental philosophy and classical Greek at an unknown small-town uni in Germany (and was ashamed of that for at least my first two years of involvement with EA). Though I was thunderstruck by the simple elegance of utilitarian reasoning as a grad student, I never really developed a mind for numbers and never made reading academic papers my guilty pleasure. I've been with the libertarian socialists long enough before getting into EA that I'm still way better at explaining Hegel, Marx, Freud, the Frankfurt School, the battle lines between materialist and queer feminism, or how to dive a dumpster than at explaining even basic concepts of economics. In short: as far as knowing the anti-racist and anti-sexist discourse is concerned, I may well be in the 95th percentile of the EA community.

And because of all of this life experience, reading this statement sent a cold shower down my spine. Here's why.

I have been going under female pronouns for a couple of years. That's not a fortunate position to be in in a small German university city whose cultural discourse is always 10-20 years behind any Western capital city, especially of the Anglo-Saxon world. I've grown to love the feeling of comfort, familiarity, and safety that anti-discriminatory safe spaces provide, and I've actively taken part in making these spaces safe - sometimes in a more, sometimes in a less constructive tone.

But while enjoying that safety, comfort, and sense of community, I constantly lived with a nagging half-conscious fear of getting ostracized myself one day for accidentally calling the wrong piece of group consensus into question. In the meantime, I never was quite sure what the group consensus actually was, because I'm not always great at reading rooms, and because just asking all the dumb questions felt like a way too big risk for my standing in the tribe. Humility has not always been a strength of mine, and I haven't always valued epistemic integrity over having friends.

The moment when the extent of this clusterfuck of groupthink dawned on me was after we went to the movies for a friend's birthday party: Iron Sky 2 was on the menu. After leaving the cinema, my friend told me that during the film, she occasionally glanced over to me to gauge whether it was "okay" to laugh about, well, Hitler riding on a T-Rex. She glanced over to me in order to gauge what's acceptable. She, who was so radically Leninist that I didn't ever dare mention that I'm not actually all that fond of Lenin. Because she had plenty of other wonderful qualities besides being a Leninist. And had I risked getting kicked out of the tribe for a petty who's-your-favorite-philosopher debate, that would have been very sad.

On that day, I realized that both of us had lived with the same fear all along. And that all our radical radicalism was at least two thirds really, really stupid virtue signalling. Wiser versions of us would have cut the bullshit and said: "I really like you and I don't want to lose you." But we didn't, because we were too busy virtue signalling at each other that really, you can trust me and don't have to ostracize me, I'm totally one of the Good Guys(TM).

Later, I found the intersection between EAs and rationalists: A community that valued keeping your identity small. A community where the default response to a crass disagreement was not moral outrage or carefully reading the room to grasp the group consensus, but "Let's double crux that!", and then actually looking at the evidence and finding an answer or agreeing that the matter isn't clear. A community where it was considered okay and normal and obvious to say that life sometimes involves very difficult tradeoffs. A community where it was considered virtuous to talk and think as clearly and level-headedly as possible about these difficult tradeoffs.

And in this community, I found mental frameworks that helped me understand what went wrong in my socialist bubble: most memorably, Yudkowsky's Politics is the Mind-Killer and his Death Spirals sequence. I'd place a bet that the majority of the people who are concerned about this commitment know their content, and that the majority of the people who support it don't. And I think it would be good if all of us were to (re-)read them amidst this drama.

I'm a big fan of being considerate of each others' feelings and needs (though I'm not always good at that). I'm a big fan of not being a bigot (though I'm not always good at that). Overall, I'd like EA to feel way more like the warm, familiar, supportive anti-discriminatory safe spaces of my early twenties.

Unfortunately, I don't think this pledge makes much of a difference there.

At the same time, after I saw the destructive virtue signalling of my early 20s play out as it did, I do fear that this pledge and similar contributions to the current debate might make all the difference for breaking EA's discourse norms.

And by "breaking EA's discourse norms", I mean moving them way closer to the conformity pressure and groupthink I left behind.

If we start throwing around loaded and vague buzzwords like "(anti-)sexism" and "(anti-)racism" instead of tabooing our words and talking about concrete problems, how we feel about them, and what we think needs doing in order to fix them, we might end up at the point where parts of the left seem to be right now: Ostracizing people not only when that is necessary to protect other community members from harm, but also when we merely talk past each other and are too tired from infighting to explain ourselves and try and empathize with one another.

I'd be sad about that. Because then I'd have to look for a new community all over again.

Jason @ 2023-02-12T16:21 (+34)

For the people who think the statement is applause lights, I'd suggest considering the following response: If someone comes up with a reasonable concrete plan for addressing racism and sexism within EA, and it doesn't get (sufficient) funding through the usual sources, you will contribute to helping fund it in some manner. That's unavoidably vague and non-specific because we are talking about a hypothetical proposal, but it would be at least a slightly costly signal of support.

I'll commit to funding a hypothetical reasonable underfunded plan that develops in 2023 somewhere in the three-figure range. I'm not going to pretend that is a particularly significant amount in real-world effect terms, but I think it's enough for someone in the public sector like me to dispel the idea that it's just an applause-light level commitment.

(I recognize some people may be students or otherwise not in a position to make more than a symbolic commitment -- but symbolic commitments having some cost still have signalling value.)

Severin @ 2023-02-13T00:32 (+22)

This shifted my opinion towards being agnostic/mildly positive about this public statement.

I'm still concerned that some potential versions of EA getting more explicitly political might be detrimental to our discourse norms for the reasons Duncan, Chris, Liv, and I outlined in our comments. But yeah, this amount of public support may well nudge grantmakers/donors to invest more into community health. If so, I'm definitely in favor of that.

spencerg @ 2023-02-10T16:37 (+30)

I was pretty surprised by these Twitter poll results (of course, who is responding may have various selection biases involved), where I asked how people feel about organizations putting out statements along the lines of "we oppose racism and sexism and believe diversity is important" (note: the setting of my poll - I give the example of a software accounting firm or animal rights org - is quite different from the setting of the above post):

https://twitter.com/SpencrGreenberg/status/1624044864584273920

skyblue20 @ 2023-02-10T18:49 (+3)

There's one important consideration I didn't see anyone mention in the comments here or on that Twitter poll. This statement would have been viewed very positively 30 years ago (by people who cared about racism/sexism), when it may have been very rare. Since it is commonplace now, the signal is weak but maybe still positive.

However, a more important consideration is what signal the lack of such a statement gives. Especially now that it is so commonplace. If I'm trying to pick between 10 software accounting firms to apply to and only 2 are missing this statement (which is very plausible today), I would interpret the lack of even a simple/vague/low-accountability (and thereby low-cost) statement as a strong negative signal.

River @ 2023-02-12T18:31 (+7)

There are different ways to read the signal that the lack of a statement gives. Someone could read it to mean that these two firms have rampant racism/sexism. Alternatively, someone could read it to mean that these two firms have the same low rates of racism/sexism as the other eight, and choose to focus their energies on software accounting rather than identity politics. A third possible reading is that the eight firms put out statements precisely because they had more problems with racism/sexism, and therefore the two firms without the statements probably have the fewest racism/sexism problems. How you read the lack of a statement will depend a lot on your priors about the dynamics of racism/sexism in your particular place and time. But if you adopt the second or third readings, then the signal from the lack of a statement seems positive.

Chris Leong @ 2023-02-10T12:01 (+26)

I wish I lived in a world where I could support this. I am definitely worried about how recent events may have harmed minorities and women and made it harder for them to trust the movement.

However, coming out of a few years where the world essentially went crazy with canceling people, sometimes for the most absurd reasons, I’m naturally wary of anything in the social justice vein, even whilst I respect the people proposing/signing it and believe that most of them are acting in good faith and attempting to address real harms.

Before the world went crazy for a few years, I would have happily signed such a statement and encouraged people to sign it as well, since I support my particular understanding of those words. Although now I find myself agreeing with Duncan that there are real costs with signing a statement if that then allows other people to use your signature as support for an interpretation that doesn’t match your beliefs. And I think it’s pretty clear to anyone who has been following online discourse that terms can be stretched surprisingly far.

This comment is more political than I’d like it to be, however, I think it is justified given that the standard position within social justice is that political neutrality is fake and an attempt to impose values whilst pretending that you aren’t.

Maybe it’s unfair to attribute possible beliefs to a group of people who haven’t made that claim, but this has to be balanced against reasoning transparency, which feels particularly important to me when I suspect that this is many people’s true rejection. And maybe it makes sense in the current environment when people are leaning more towards sharing.

I wish we lived in a different world, but in this world, there are certain nice things that we don’t get to have. That all said, there’s definitely been times when I’ve failed to properly account for the needs or perspectives of people with other backgrounds and certainly intend to become as good at navigating these situations as I can because I really don’t want to offend or be unfair to anyone.

Jason @ 2023-02-10T20:45 (+25)

People downvoting this post, apparently due to disagreement, is burying it deep in the community stack fewer than 24 hours after its release. I don't think that is a desirable outcome.

It's unfortunate that there's no disagree-vote on posts. In its absence, I wish people would not downvote posts like this in a way that buries them soon after release. Whatever you think of the statement this post announces, it is not a lousy post.

I'm not expressing an opinion on the statement either way, but it should have a reasonable chance to be seen.

Severin @ 2023-02-10T22:58 (+8)

Thanks, I removed my downvote after reading this comment.

quinn @ 2023-02-14T13:53 (+13)

feel bad for piling on, but I want to copy over my note from slack because I think it is a succinct epistemology concern and less comprehensive than the other comments: 

idk what channel is best for this comment, which I hesitate to make, because I share the broad goals (besides one nagging detail) of the document and don't wanna be that guy and it's not my hill to die on, and etc. etc. I know some people will feel like this comment is a call to relitigate some object level thing that a lot of people don't even want to be in the overton window, and I'm sorry.

but I think it might be poisonous to precommit against science. believing true things is dual use. empirical beliefs are not assigned any moral status whatsoever. I don't care a lot about the object level here because it's not morally relevant, and it's only tactically relevant for things way outside my wheelhouse. But a culture that says "if you're investigating this mindkilled empirical topic that vanishingly few people have real expertise on, you're on thin ice, because a priori we know there's a right answer and a wrong answer socially speaking" is alarming and kinda anti-EA. Pointing to hypothetical harms that can be downstream of beliefs propagating (by belief I mean in the strictest sense of an empirical and falsifiable map of the territory) doesn't get you out of that for free.

source: co-run EA Philly with someone. my diversity credentials: used to tutor math at a community college, was highly involved BLMer 2014-2016


For the record: Duncan's comment may have swayed me more to the harms of virtue signaling, making me more negative about the statement than I was when I chimed in on slack. 

burner @ 2023-02-14T19:02 (+12)

This is really sad and frustrating to see: a community that prides itself on rigorous and independent thinking has taken to reciting the same platitudes that every left wing organization does. We're supposed to hold ourselves to higher standards than this.

Posts like this make me much less interested in being a part of EA.

pete @ 2023-02-11T18:15 (+4)

Great job, Rocky and signatories. Statements are not programs, but neither are they nothing. They take a ton of courage and hard work to write. Proud of everyone who engaged in good faith to put this forward and to strengthen EA as a community.