Some thoughts on the EA Munich // Robin Hanson incident

By Aaron Gertler 🔸 @ 2020-08-28T11:27 (+48)

I work for CEA, but these are my personal views. 

Relevant background: I previously co-founded two EA groups, at Yale University and the healthcare corporation Epic. In one case, I had to make a decision about how to handle a potential guest speaker who was also a controversial figure; this is part of why I sympathize with EA Munich’s position, though a small part.

Epistemic status: A lot of pent-up venting, which I hope adds up to something moderately reasonable. But I wouldn’t be too surprised if it doesn’t.


Many things can be true at the same time.

A planned EA Munich event with Robin Hanson was recently cancelled. This is EA Munich’s explanation. This is a Twitter thread with lots of reactions.

For context, I’ll start with a factual clarification, based on conversations with others at CEA (all of this is also detailed in the Munich group’s document):

Here are some things about the situation which seem true to me (though this doesn’t necessarily make them true):

On the decision and ensuing social media kerfuffle

  1. It is generally good for groups interested in finding good ideas to choose speakers on the basis of the quality of their best ideas, rather than their most controversial or misguided ideas.
  2. However, if most members of a small group don’t want a speaker to present to their group, this is a good reason for that speaker not to present. The smaller the group, the more true this seems. (If a speaker is disinvited from an event at a large university, thousands of supporters might be left disappointed; this isn’t the case for a tiny event run by a local EA group.)
  3. The Slate piece cited as criticism of Hanson was uncharitable; reading it would probably leave most people with a different view of Hanson than they’d get from reading a wider selection of Hanson’s work.
  4. And yet, many people are actually uncomfortable with Hanson for some of the same reasons brought up in the Slate piece; they find his remarks personally upsetting or unsettling.
    • It’s unclear how many members/organizers of the Munich group were personally upset/unsettled by Hanson and how many were mostly concerned with the PR implications of his presence, but it seems likely that both groups were represented.
  5. Those who commented on the announcement were generally quite uncharitable to EA Munich — including people I’m certain would endorse the Principle of Charity in the abstract if I were to ask them about it independent of this context. Reading Hanson’s tweets likely left them with a very different view of EA Munich than they’d get from attending a few meetups.
  6. I wasn’t involved in CEA’s discussion with EA Munich, but CEA giving them the go-ahead to make their own decision seems correct.
    • I don’t think Hanson’s supporters would actually have wanted CEA to say: “You should run the event even if it feels like the wrong decision.”
    • Maybe they would have wanted CEA to say: “You should do what seems best, but keep in mind the negative consequences of deplatforming speakers.” But EA Munich was clearly aware of the negative consequences. What could CEA tell them that they didn’t know already, aside from “we trust you to make a decision”?
  7. There are ways in which EA Munich could have adjusted their announcement to better communicate their reasoning.
  8. There are many ways in which the EA Munich announcement is much, much better than other announcements of its type produced by institutions with far more power, prestige, and PR experience.
  9. Writing an announcement that has to be approved by eight people (all volunteers), involves a sensitive topic, and has to be published quickly… is something I wouldn’t wish on anyone. Be kind.

On Robin Hanson

  1. Based on my reading of some of Hanson’s work, I believe he cares a lot about the world being a better place and people living better lives, whoever they are. He is the respected colleague of several of my favorite bloggers. I’d probably find him an interesting person to eat lunch with.
  2. Much of Hanson’s writing (as EA Munich pointed out themselves!) is interesting and valuable. And some writing that doesn’t seem interesting or valuable to me is clearly interesting or valuable to other people, which probably means that I’m underestimating the total value of his output.
  3. Some of Hanson’s writing has probably been, on net, detrimental to his own influence. Had he chosen not to publish that writing (or altered it, gotten more feedback before publishing, etc.), his best and most important ideas would have a better chance of improving the world. Instead, much of the attention he gets involves ideas which I doubt he even cares about very much (though I don’t know Hanson, and this is just a guess).
  4. But as I said, many things can be true at the same time. There is something to the argument that an ideal scholarly career will involve some degree of offense, because filtering all of one’s output takes a lot of time and energy and will produce false positives. “If you never make people angry, you’re spending too much time editing your work.”
  5. Still, many other scholars have done a better job than Hanson at presenting controversial ideas in a productive way. (Several of them work in his academic department and have written thousands of blog posts on varied topics, many of them controversial.)
  6. To the extent that I support some of Hanson’s ideas and want to see them become better-known, I am annoyed that this may be less likely to happen because of Hanson’s decisions. (Though maybe the controversies lead more people to his good ideas in a way that is net positive? I really don't know.)
  7. And of course, Hanson's approach to his own work is none of my business, and he can write whatever he wants. I just have a lot of feelings.

On the EA movement’s approach to ideas, diversity, etc.

  1. EA Munich’s decision doesn’t say much, if anything, about EA in general. They are a small group and acted independently.
  2. That said, my impression is that, over time, the EA movement has become more attentive to various kinds of diversity, and more cautious about avoiding public discussion of ideas likely to cause offense. This involves trade-offs with other values.
  3. However, these trade-offs could easily be beneficial, on net, for the movement’s goals.
    • Whether they actually are depends on many factors, including what a given person would define as “the movement’s goals.” Different people want EA to do different things! Competing access needs are real!
  4. Some of the people who have encouraged EA to be more attentive to diversity and more cautious about public discussion did so without thinking carefully about trade-offs.
  5. Some of the people who have encouraged EA not to become more cautious and attentive to diversity… also did so without thinking carefully about trade-offs.
  6. Given prevailing EA discussion norms, I would expect people who favor more attentiveness to diversity to be underrepresented in community discussions, relative to their actual numbers. My experience running anonymous surveys of people in EA (Forum users, org employees, etc.) tends to bear this out.
    • However, underrepresentation isn’t exclusive to this group. I’ve heard from people with many different views who feel uncomfortable talking about their views in one or more places.
  7. The more time someone spends talking to a variety of community members (and potential future members), the more likely they are to have an accurate view of which norms will best encourage the community’s health and flourishing. Getting a sense of where the community lies on issues often involves having a lot of private conversations, because people often say more about their views in private than they will in a public forum.
  8. Some of the people who have spent the most time doing the above came to the conclusion that EA should be more cautious and attentive to diversity.
    • I don’t know what the right trade-offs are myself, but I recognize that, compared to the aforementioned people, I have access to (a) the same knowledge about trade-offs and (b) less knowledge about actual people in the community.
    • Hence, I’m inclined to weigh someone’s views more heavily if they’ve spent a lot of time talking to community members.
    • That said (almost done), I spoke to some of the aforementioned people, who cautioned me not to defer too much to their views, and pointed out that “opinions about diversity” aren’t necessarily correlated with “time spent talking to community members,” presenting me with examples of other frequent conversation-havers who hold very different opinions.
      • This drives home for me how open these kinds of questions are — and how wrong-footed it seems when people present EA or its biggest orgs as some kind of restrictive orthodoxy.

Wei_Dai @ 2020-08-29T07:41 (+72)

Do you have any thoughts on this earlier comment of mine? In short, are you worried about EA developing a full-scale cancel culture similar to other places where SJ values currently predominate, like academia or MSM / (formerly) liberal journalism? (By that I mean a culture where many important policy-relevant issues either cannot be discussed, or the discussions must follow the prevailing "party line" in order for the speakers to not face serious negative consequences like career termination.) If you are worried, are you aware of any efforts to prevent this from happening? Or at least discussions around this among EA leaders?

I realize that EA Munich and other EA organizations face difficult trade-offs and believe that they are making the best choices possible given their values and the information they have access to, but people in places like academia must have thought the same when they started what would later turn out to be their first steps towards cancel culture. Do you think EA can avoid sharing the same eventual fate?

Cullen_OKeefe @ 2020-09-01T05:09 (+49)

[Tangent:] Based on developments since we last engaged on the topic, Wei, I am significantly more worried about this than I was at the time. (I.e., I have updated in your direction.)

ragyo_odan_kagyo_odan @ 2020-09-10T21:59 (+8)

What made you update?

Aaron Gertler @ 2020-09-02T00:05 (+22)

Of the scenarios you outline, (2) seems like a much more likely pattern than (1), but based on my knowledge of various leaders in EA and what they care about, I think it's very unlikely that "full-scale cancel culture" (I'll use "CC" from here) evolves within EA. 

Some elements of my doubt: 

  • Much of the EA population started out being involved in online rationalist culture, and those norms continue to hold strong influence within the community.
  • EA has at least some history of not taking opportunities to adopt popular opinions for the sake of growth:
    • Rather than leaning into political advocacy or media-friendly global development work, the movement has gone deeper into longtermism over the years.
    • CEA actively shrank the size of EA Global because they thought it would improve the quality of the event.
    • 80,000 Hours has mostly passed on opportunities to create career advice that would be more applicable to larger numbers of people.
    • Obviously, none of these are perfect analogies, but I think there's a noteworthy pattern here.
  • The most prominent EA leaders whose opinions I have any personal knowledge of tend to be quite anti-CC.
  • EA has a strong British influence (rather than being wholly rooted in the United States) and solid bases in other cultures; this makes us a bit less vulnerable to shifts in one nation's culture. Of course, the entire Western world is moving in a "cancel culture" direction to some degree, so this isn't complete protection, but it still seems like a protective factor.
    • I've also been impressed by recent EA work I've seen come out of Brazil, Singapore, and China, which seem much less likely to be swept by parallel movements than Germany or Britain.
  • Your comments on this issue include the most upvoted comments on my post, on Cullen's post, and on "Racial Demographics at Longtermist Organizations". It seems like the balance of opinion is very firmly anti-CC. If I began to see downvoting brigades on those types of comments, I would become much more concerned.

Compared to all of the above, a single local group's decision seems minor. 

But I'm sure there are other reasons to worry. If anyone sees this and wants to create a counter-list ("elements of concern"?), I'd be very interested to read it.

Wei_Dai @ 2020-09-02T01:11 (+62)

(I'm occupied with some things so I'll just address this point and maybe come back to others later.)

It seems like the balance of opinion is very firmly anti-CC.

That seems true, but on the other hand, the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public? Thinking about this, I note that:

  1. I have no strong official or unofficial relationships with any EA organizations and have little personal knowledge of "EA politics". If there's a danger or trend of EA going in a CC direction, I should be among the last to know.
  2. Until recently I have had very little interest in politics or even socializing. (I once wrote "And while perhaps not quite GPGPU, I speculate that due to neuroplasticity, some of my neurons that would have gone into running social interactions are now being used for other purposes instead.") Again it seems very surprising that someone like me would be the first to point out a concern about EA developing or joining CC, except:
  3. I'm probably well within the top percentile of all EAs in terms of "cancel proofness", because I have both an independent source of income and a non-zero amount of "intersectional currency" (e.g., I'm a POC and first-generation immigrant). I also have no official EA affiliations (which I deliberately maintained in part to be a more unbiased voice, but I had no idea that it would come in handy for this) and I don't like to do talks/presentations, so there's pretty much nothing about me that can be canceled.

The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (c.f. "preference falsification"). That seems to already be the situation today.

Indeed, I also have direct evidence in the form of EAs contacting me privately (after seeing my earlier comments) to say that they're worried about EA developing/joining CC, and telling me what they've seen to make them worried, and saying that they can't talk publicly about it.

RobBensinger @ 2020-09-02T16:27 (+37)

I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly

I agree with this. This seems like an opportune time for me to say in a public, easy-to-google place that I think cancel culture is a real thing, and very harmful.

Paul_Christiano @ 2020-09-26T01:59 (+32)

The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (c.f. "preference falsification"). That seems to already be the situation today.

It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.

That does seem important. I mostly don't think about this issue because it's not my wheelhouse (and lots of people talk about it already). Overall my attitude towards it is pretty similar to other hypotheses about institutional decline. I think people at EA orgs have way more reasons to think about this issue than I do, but it may be difficult for them to do so productively.

If someone convinced me to get more pessimistic about "cancel culture" then I'd definitely think about it more. I'd be interested in concrete forecasts if you have any. For example, what's the probability that making pro-speech comments would itself be a significant political liability at some point in the future? Will there be a time when a comment like this one would be a problem?

Looking beyond the health of existing institutions, it seems like most people I interact with are still quite liberal about speech, including a majority of people who I'd want to work with, socialize with, or take funding from. So hopefully the endgame boils down to freedom of association. Some people will run a strategy like "Censure those who don't censure others for not censuring others for problematic speech" and take that to its extreme, but the rest of the world will get along fine without them and it's not clear to me that the anti-speech minority has anything to do other than exclude people they dislike (e.g. it doesn't look like they will win elections).

in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.)

I don't feel that way. I think that "exclude people who talk openly about the conditions under which we exclude people" is a deeply pernicious norm and I'm happy to keep blithely violating it. If a group excludes me for doing so, then I think it's a good sign that the time had come to jump ship anyway. (Similarly if there was pressure for me to enforce a norm I disagreed with strongly.)

I'm generally supportive of pro-speech arguments and efforts and I was glad to see the Harper's letter. If this is eventually considered cause for exclusion from some communities and institutions then I think enough people will be on the pro-speech side that it will be fine for all of us.

I generally try to state my mind if I believe it's important, don't talk about toxic topics that are unimportant, and am open about the fact that there are plenty of topics I avoid. If eventually there are important topics that I feel I can't discuss in public then my intention is to discuss them.

I would only intend to join an internet discussion about "cancellation" in particularly extreme cases (whether in terms of who is being canceled, severe object-level consequences of the cancellation, or the coercive rather than plausibly-freedom-of-association nature of the cancellation).

Wei_Dai @ 2020-10-17T13:43 (+17)

To follow up on this, Paul and I had an offline conversation about this, but it kind of petered out before reaching a conclusion. I don't recall all that was said, but I think a large part of my argument was that "jumping ship" or being forced off for ideological reasons was not "fine" when it happened historically, for example, communists from Hollywood and conservatives from academia, but represented disasters (i.e., very large losses of influence and resources) for those causes. I'm not sure if this changed Paul's mind.

Paul_Christiano @ 2020-10-17T17:18 (+26)

I'm not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).

It doesn't currently seem like thinking or working on this issue should be a priority for me (even within EA other people seem to have clear comparative advantage over me). I would feel differently if this was an existential issue or had a high enough impact, and I mostly dropped the conversation when it no longer seemed like that was at issue / it seemed in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.

It does feel like your estimates for the expected harms are higher than mine, which I'm happy enough to discuss, but I'm not sure there's a big disagreement (and it would have to be quite big to change my bottom line).

I was trying to get at possible quantitative disagreements by asking things like "what's the probability that making pro-speech comments would itself be a significant political liability at some point in the future?" I think I have a probability of perhaps 2-5% on "meta-level pro-speech comments like this one eventually become a big political liability and participating in such discussions causes Paul to miss out on at least one significant opportunity to do good or have influence."

I'm always interested in useful thoughts about cost-effective things to do. I could also imagine someone making the case that "think about it more" is cost-effective for me, but I'm more skeptical of that (I expect they'd instead just actually do that thinking and tell me what they think I should do differently as a result, since the case for them thinking will likely be much better than the case for me doing it). I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn't intend for the grandparent to be pushing against that.

For me it seems like one easy and probably-worthwhile intervention is to (mostly) behave according to a set of liberal norms that I like (and I think remain very popular) and to be willing to pay costs if some people eventually reject that behavior (confident that there will be other communities that have similar liberal norms). Being happy to talk openly about "cancel culture" is part of that easy approach, and if that led to serious negative consequences then it would be a sign that the issue is much more severe than I currently believe and it's more likely I should do something. In that case I do think it's clear there is going to be a lot of damage, though again I think we differ a bit in that I'm more scared about the health of our institutions than people like me losing influence.

Wei_Dai @ 2020-10-17T19:51 (+28)

I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.

I think this is the crux of the issue. We have a pattern where I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or as likely to have that effect in other people's minds and thereby make them less likely to work on the problem, so I push back on that. But maybe you were just trying to explain why you don't want to work on it personally, and you interpret my pushback as trying to get you to work on the problem personally, which is not my intention.

I think from my perspective the ideal solution would be if in a similar future situation, you could make it clearer from the start that you do think it's an important problem that more people should work on. So instead of "and lots of people talk about it already" which seems to suggest that enough people are working on it already, something like "I think this is a serious problem that I wish more people would work on or think about, even though my own comparative advantage probably lies elsewhere."

Curious how things look from your perspective, or a third party perspective.

Aaron Gertler @ 2020-09-03T02:22 (+16)

Why did it take someone like me to make the concern public?

I don't think it did.

On this thread and others, many people expressed similar concerns, before and after you left your own comments. It's not difficult to find Facebook discussions about similar concerns in a bunch of different EA groups. The first Forum post I remember seeing about this (having been hired by CEA in late 2018, and an infrequent Forum viewer before that) was "The Importance of Truth-Oriented Discussions in EA".

While you have no official EA affiliations, others who share and express similar views do (Oliver Habryka and Ben Pace come to mind; both are paid by CEA for work they do related to the Forum). Of course, they might worry about being cancelled, but I don't know either way. 

I've also seen people freely air similar opinions in internal CEA discussions without (apparently) being worried about what their co-workers would think. If they were people who actually used the Forum in their spare time, I suspect they'd feel comfortable commenting about their views, though I can't be sure.

I also have direct evidence in the form of EAs contacting me privately to say that they're worried about EA developing/joining CC, and telling me what they've seen to make them worried, and saying that they can't talk publicly about it.

I've gotten similar messages from people with a range of views. Some were concerned about CC, others about anti-SJ views. Most of them, whatever their views, claimed that people with views opposed to theirs dominated online discussion in a way that made it hard to publicly disagree.

My conclusion: people on both sides are afraid to discuss their views because taking any side exposes you to angry people on the other side...

...and because writing for an EA audience about any topic can be intimidating. I've had people ask me whether writing about climate change as a serious risk might damage their reputations within EA. Same goes for career choice. And for criticism of EA orgs. And other topics, even if they were completely nonpolitical and people were just worried about looking foolish. Will MacAskill had "literal anxiety dreams" when he wrote a post about longtermism.

As far as I can tell, comments around this issue on the Forum fall all over the spectrum and get upvoted in rough proportion to the fraction of people who make similar comments. I'm not sure whether similar dynamics hold on Facebook/Twitter/Discord, though.

*****

I have seen incidents in the community that worried me. But I haven't seen a pattern of such incidents; they've been scattered over the past few years, and they all seem like poor decisions from individuals or orgs that didn't cause major damage to the community. But I could have missed things, or been wrong about consequences; please take this as N=1.

Aaron Gertler @ 2020-09-03T02:27 (+7)

Also: I'd be glad to post something in the EA Polls group I created on Facebook. 

Because answers are linked to Facebook accounts, some people might hide their views, but at least it's a decent barometer of what people are willing to say in public. I predict that if we ask people how concerned they are about cancel culture, a majority of respondents will express at least some concern. But I don't know what wording you'd want around such a question.

Max_Daniel @ 2020-09-02T18:25 (+1)

the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public?

My guess is that your points explain a significant share of the effect, but I'd guess the following is also significant:

Expressing worries about how some external dynamic might affect the EA community isn't often done on this Forum, perhaps because it's less naturally "on topic" than discussion of e.g. EA cause areas. I think this applies to worries about so-called cancel culture, but also to e.g.:

  • How does US immigration policy affect the ability of US-based EA orgs to hire talent?
  • How do financial crises or booms affect the total amount of EA-aligned funds? (E.g. I think a significant share of Good Ventures's capital might be in Facebook stocks?)

Both of these questions seem quite important and relevant, but I recall less discussion of them than I'd have expected at first glance based on their importance.

(I do think there was some post on how COVID affects fundraising prospects for nonprofits, which I couldn't immediately find. But I think it's somewhat telling that here the external event was from a standard EA cause area, and there generally was a lot of COVID content on the Forum.)

Kaj_Sotala @ 2020-09-02T08:22 (+14)

On the positive side, a recent attempt to bring cancel culture to EA was very resoundingly rejected, with 111 downvotes and strongly upvoted rebuttals.

Wei_Dai @ 2020-09-02T23:35 (+26)

That cancellation attempt was clearly a bridge too far. EA Forum is comparatively a bastion of free speech (relative to some EA Facebook groups I've observed and, as we've now seen, local EA events), and Scott Alexander clearly does not make a good initial target. I'm worried, however, that each "victory" by CC has a ratcheting effect on EA culture, whereas failed cancellations don't really matter in the long run, as CC can always find softer targets to attack instead, until the formerly hard targets have been isolated and weakened.

Honestly I'm not sure what the solution is in the long run. I mean, academia is full of smart people, many of whom surely dislike CC as much as most of us and would push back against it if they could, yet academia is now the top example of cancel culture. What is something that we can do that they couldn't, or didn't think of?

EricHerboso @ 2020-09-03T00:29 (+10)

I agree that that was definitely a step too far. But there are legitimate middle grounds that don't have slippery slopes.

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

I refuse to defend something as ridiculous as the idea of cancel culture writ large. But I sincerely worry about the lack of racial representativeness, equity, and inclusiveness in the EA movement, and there needs to be some sort of way that we can encourage more people to join the movement without them feeling like they are not in a safe space.

Elityre @ 2020-09-07T09:15 (+35)

I think there is a lot of detail and complexity here and I don't think that this comment is going to do it justice, but I want to signal that I'm open to dialog about these things.

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

On the face of it, this seems like a bad idea to me. I don't want "introductory" EA spaces to have different norms than advanced EA spaces, because I only want people to join the EA movement to the extent that they have a very high epistemic standards. If people wouldn't like the discourse norms in the central EA spaces, I don't want them to feel comfortable in the more peripheral EA spaces. I would prefer that they bounce off.

To say it another way, I think it is a mistake to have "advanced" and "introductory" EA spaces, at all.

I am intending to make a pretty strong claim here.

[One operationalization I generated, but want to think more about before I fully endorse it: "I would turn away billions of dollars of funding to EA causes, if that was purchased at the cost of 'EA's discourse norms are as good as those in academia.'"]

Some cruxes:

  • I think what is valuable about the EA movement is the quality of the epistemic discourse in the EA movement, and almost nothing else matters (and to the extent that other factors matter, the indifference curve heavily favors better epistemology). If I changed my mind about that, it would change my view about a lot of things, including the answer to this question.
  • I think a model by which people gradually "warm up" to "more advanced" discourse norms is false. I predict that people will mostly stay in their comfort zone, and people who like discussion at the "less advanced" level will prefer to stay at that level. If I were wrong about that, I would substantially reconsider my view.
  • Large numbers of people at the fringes of a movement tend to influence the direction of the movement, and significantly shape the flow of talent to its core. If I thought that you could have 90% of the people identifying as EAs have somewhat worse discourse norms than we have on this forum without meaningfully impacting the discourse or action of the people at the core of the movement, I think I might change my mind about this.

EricHerboso @ 2020-09-08T08:05 (+16)

Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”

In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out. While it may be a useful thing to discuss (if only to show how absurd it is), we can (I argue) push future discussion of it into a smaller space so that the general EA space doesn’t have to be peppered with such arguments. This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates, surely it is more effective for us to clean the space up so that our Jewish EA friends feel safe to come here and interact with us, at the cost of moving specific types of discussion to a smaller area.

I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed that I think we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.

I strongly believe that representation, equity, and inclusiveness is important in the EA movement. I believe it so strongly that I try to look at what people are saying in the safe spaces where they feel comfortable talking about EA norms that scare them away. I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces. I am not merely saying that they are “worried” about where EA is heading; I’m saying that right here, right now, they feel uncomfortable fully participating in generalized EA spaces.

You say that “If people wouldn't like the discourse norms in the central EA spaces…I would prefer that they bounce off.” In principle, I think we agree on this. Casual demands that we are being alienating should not faze us. But there does exist a point at which I think we might agree that those demands are sufficiently strong, like the holocaust denial example. The question, then, is not one of kind, but of degree. The question turns on whether the harm that is caused by certain forms of speech outweighs the benefits accrued by discussing those things.

  • Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn't really apply.
  • Q2: You mentioned having similar standards to academia. If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA? Or do you mean only having similar standards to what academics discuss amongst each other, setting aside completely how universities deal with undergraduate students' spaces.

I have significant cognitive dissonance here. I’m not at all certain about what I personally feel. But I do want to report that there are large numbers of people, in several disparate places, many of which I doubt interact between themselves in any significant way, who all keep saying in private that they do not feel safe here. I have seen people actively go through harm from EAs casually making the case for systemic racism not being real and I can report that it is not a minor harm.

I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.

Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you'll agree that these are questions of degree, not of kind. After seeing the level of harm that these kinds of speech acts cause, I think my position of moving that discourse away from introductory spaces is warranted. But I also strongly agree with traditional enlightenment ideals of open discussion, free speech, and that the best way to show an idea is wrong is to seriously discuss it. So I definitely don’t want to ban such speech everywhere. I just want there to be some way for us to have good epistemic standards and also benefit from EAs who don’t feel safe in the main EA Facebook groups.

To borrow a phrase from Nora Caplan-Bricker, they’re not demanding that EA spaces be happy places where they never have to read another word of dissent. Instead, they’re asking for a level of acceptance and ownership that other EAs already have. They just want to feel safe.

Dale @ 2020-09-08T19:28 (+23)

Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.

I agree with your conclusion about this instance, but for very different reasons, and I don't think it supports your wider point of view. It would be bad if EAs spent all their time discussing the holocaust, because the holocaust happened in the past, and so there is nothing we can possibly do to prevent it. As such, the discussion is likely to be a purely academic exercise that does not help improve the world.

It would be very different to discuss a currently occurring genocide. If EAs were considering investing resources in fighting the Uighur genocide, for example, it would be very valuable to hear contrary evidence. If, for example, we learnt that far fewer people were being killed than we thought, or that the CCP's explanations about terrorism were correct, this would be useful information that would help us prioritize our work. Equally, it would be valuable to hear if we had actually under-estimated the death toll, for exactly the same reasons.

Similarly, Animal Rights EAs consider our use of factory farming to be a modern holocaust, far larger than any prior. But debate about this is a perfectly acceptable EA topic - even debate around subjects like 'but do the victims (animals) have moral value?'

Or again, pro-life activists consider our use of abortion to be a modern holocaust, far larger than any prior. But debate about this is a perfectly acceptable EA topic - even debate around subjects like 'but do the victims (fetuses) have moral value?'

It might be the case that people make a dedicated 'Effective Liberation for Xinjiang' group, and intend to discuss only methods there, not the fundamental premise. But if they started posting about the Uighurs in other EA groups, criticism of their project, including its fundamental premises, would be entirely legitimate.

I think this is true even if it made some hypothetical Uighur diaspora members of the group feel 'unsafe'. People have a right to actual safety - clearly no-one should be beating each other up at EA events. But an unlimited right to 'feel safe', even when this can only be achieved by imposing strict (and contrary to EA) restrictions on others, is clearly tyrannical. If you feel literally unsafe when someone makes an argument on the internet, you have a serious problem, and it is not our responsibility (or even within our power) to accommodate this. You should feel unsafe while near cliff edges, or around strange men in dark alleys - not in a debate. Indeed, if feeling 'unsafe' is a trump card, then I will simply claim that I feel unsafe when people discuss BLM positively, due to the (from my perspective) implied threat of riots.

The analogy here I think is clear. I think it is legitimate to say we will not discuss the Uighur genocide (or animal rights, or racism) in a given group because they are off-topic. What is not at all legitimate is to say that one side, but not the other, is forbidden.

Finally, I also think your strategy is potentially a bit dishonest. We should not hide the true nature of EA, whatever that is, from newcomers in an attempt to seduce them into the movement.

Elityre @ 2020-09-15T20:22 (+3)

I think this comment says what I was getting at in my own reply, though more strongly.

EricHerboso @ 2020-09-08T20:32 (+2)

If you’re correct that the harms that come from open debate are only minor harms, then I think I’d agree with most of what you’ve said here (excepting your final paragraph). But the position of BIPGMs I’ve spoken to is that allowing some types of debate really does do serious harm, and from watching them talk about and experience it, I believe them. My initial intuition was closer to your point of view — it’s just so hard to imagine how open debate on an issue could cause such harm — but, in watching how they deal with some of these issues, I cannot deny that something like a casual denial of systemic racism caused them significant harm.

On a different point, I think I disagree with your final paragraph’s premise. To me, having different moderation rules is a matter of appropriateness, not a fundamental difference. I think that it would not be difficult to say to new EAs that “moderation in one space has different appropriateness rules than in some other space” without hiding the true nature of EA and/or being dishonest about it. This is relevant because one of the main EA Facebook groups is currently deciding how to implement moderation rules with regard to this stuff right now.

Ben_West @ 2020-09-09T00:50 (+14)

Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent both with caring a lot about the truth and with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth, they aren't obviously wrong to do so. So signaling the former would be nice.

Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don't have. E.g. if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them because most people who say that haven't thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying "I think factory farming is terrible but XYZ" instead of just "XYZ".

Elityre @ 2020-09-15T20:19 (+9)

First of all, I took this comment to be sincere and in the spirit of dialog. Thank you and salutations.

[Everything that I say in this comment is tentative, and I may change my mind.]

Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.

If that were actually happening, I would want to think more about the specific case (and talk directly to the people involved), but I'm inclined to bite the bullet of allowing that sort of conversation.

The main reason is that (I would guess, though you can say more about your state of mind) there is an implicit premise underlying the stance that we shouldn't allow that kind of talk: namely, that "the Holocaust happened, and Holocaust denial is false".

Now, my understanding is that there is an overwhelming historical consensus that the Holocaust happened. But the more I learn about the world, the more I discover that claims that I would have thought were absurd are basically correct, especially in politicized areas.

I am not so confident that the Holocaust happened, and especially that the Holocaust happened the way it is said to have happened, that I am willing to sweep out any discussion to the contrary.

If they are making strong arguments for a false conclusion, then they should be countered with arguments, not social censure.

This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates...

In the situation where EAs are making such arguments not out of honest truth-seeking, but to play edge-lord / get attention / etc., I feel a lot less sympathetic. I would be more inclined to just tell them to cut it out in that case. (Basically, I would make the argument that they are doing damage for no gain.)

But mostly, I would say that if any people in an EA group were threatening violence, racially motivated or otherwise, we should have a zero-tolerance policy. That is where I draw the line. (I agree that there is a bit of a grey area in cases where someone is politely advocating for violent action down the line, e.g. the Marxist who has never personally threatened anyone, but is advocating for a violent revolution.)

...

Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn't really apply.

I think so. I expect that any rigid rule is going to have edge cases, that are bad enough that you should treat them differently. But I don't think we're on the same page about what the relevant scalar is.

If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA?

It depends entirely on what is meant by "certain forms", but on the face of it, I would not be okay with that. I expect that a lot of ideas and behaviors would get marked as "racist", because that is a convenient and unarguable way to attack those ideas.

I would again draw the line at the threat of violence: if a student group got together to discuss how to harass some racial minority, even just as a hypothetical (they weren't actually going to do anything), Eli-University would come down on them hard.

If a student group came together to discuss the idea of a white ethno-state, and the benefits of racial and cultural homogeneity, Eli-University would consider this acceptable behavior, especially if the epistemic norms of such a group are set high. (However, if I had past experience that such reading groups tended to lead to violence, I might watch them extra carefully.)

The ethno-state reading group is racist, and is certainly going to make some people feel uncomfortable, and maybe make them feel unsafe. But I don't know enough about the world to rule out discussion of that line of thinking entirely.

...

I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces.

I would love to hear more about the details there. In what ways do people not feel safe?

(Is it things like this comment?)

I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.

Yeah. I want to know more about this. What kind of harm?

My default stance is something like, "look, we're here to make intellectual progress, and we gotta be able to discuss all kinds of things to do that. If people are 'harmed' by speech-acts, I'm sorry for you, but tough nuggets. I guess you shouldn't participate in this discourse. "

That said, if I had a better sense of what kind of harms are resulting, I might have a different view, or it might be more obvious where there are cheap tradeoffs to be made.

Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you'll agree that these are questions of degree, not of kind.

Yep. I think I do, though I think that the indifference curve is extremely lopsided, for EA in particular.

...

I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed that I think we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.

I'm tentatively suggesting that we should pay close to no attention to possibility of alienating people, and just try to do our best to actually make progress on the intellectual project.

It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.

EricHerboso @ 2020-09-26T14:54 (+7)
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.

We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:

  1. I believe diversity is a serious benefit. Not just in terms of movement building, but in terms of arriving at truth. Homogeneity breeds blind spots in our thinking. If a supposed truth is arrived at, but only one group recognizes it as truth, doesn’t that make us suspect whether we are correct? To me, good truth-seeking almost requires diversity in several different forms. Not just philosophical diversity, but diversity in how we’ve come up in the world, in how we’ve experienced things. Specifically including BIPGM seems to me to be very important in ensuring that we arrive at true conclusions.
  2. I believe the methods by which we arrive at true conclusions don’t need to be Alastor Moody levels of constant vigilance. We don’t have to rigidly enforce norms of full open debate all the time.

I think the latter disagreement we have is pretty strong, given your willingness to bite the bullet on holocaust denial. Sure, we never know anything for sure, but when you get to a certain point, I feel like it’s okay to restrict debate on a topic to specialized places. I want to say something like “we have enough evidence that racism is real that we don’t need to discuss it here; if you want to debate that, go to this other space”, and I want to say it because discussing racism as though it doesn’t exist causes a level of harm that may rise to the equivalent of physical harm in some people. I’m not saying we have to coddle anyone, but if we can reduce that harm for almost no cost, I’m willing to. To me, restricting debate in a limited way on a specific Facebook thread is almost no cost. We already restrict debate in other, similar ways: no name calling, no doxxing, no brigading. In the EAA FB group, we take as a given that animals are harmed and we should help them. We restrict debate on that there because it’s inappropriate to debate that point there. That doesn’t mean it can’t be debated elsewhere. To me, restricting the denial of racism (or the denial of genocide) is just an additional rule of this type. It doesn’t mean it can’t be discussed elsewhere. It just isn’t appropriate there.

In what ways do people not feel safe? (Is it things like this comment?) … I want to know more about this. What kind of harm?

No, it’s not things like this comment. We are in a forum where discussing this kind of thing is expected and appropriate.

I don’t feel like I should say anything that might inadvertently out some of the people that I have seen in private groups talking about these harms. Many of these EAs are not willing to speak out about this issue because they fear being berated for having these feelings. It’s not exactly what you’re asking for, but a few such people are already public about the effects from those harms. Maybe their words will help: https://sentientmedia.org/racism-in-animal-advocacy-and-effective-altruism-hinders-our-mission

“[T]aking action to eliminate racism is critical for improving the world, regardless of the ramifications for animal advocacy. But if the EA and animal advocacy communities fail to stand for (and not simply passively against) antiracism, we will also lose valuable perspectives that can only come from having different lived experiences—not just the perspectives of people of the global majority who are excluded, but the perspective of any talented person who wants to accomplish good for animals without supporting racist systems.
I know this is true because I have almost walked away from these communities myself, disquieted by the attitudes toward racism I found within them.”

Khorton @ 2020-09-07T12:42 (+15)

"I think a model by which people gradually "warm up" to "more advanced" discourse norms is false."

I don't think that's the main benefit of disallowing certain forms of speech at certain events. I'd imagine it'd be to avoid making EA events attractive and easily accessible for, say, white supremacists. I'd like to make it pretty costly for a white supremacist to be able to share their ideas at an EA event.

JoshYou @ 2020-09-08T22:39 (+11)

We've already seen white nationalists congregate in some EA-adjacent spaces. My impression is that (especially online) spaces that don't moderate away or at least discourage such views will tend to attract them; it's not the pattern of activity you'd see if white nationalists randomly bounced around places or people organically arrived at those views. I think this is quite dangerous for epistemic norms, because white nationalist/supremacist views are very incorrect and deter large swaths of potential participants, and because people with those views routinely argue in bad faith by hiding how extreme their actual opinions are while surreptitiously promoting the extreme version. It's also, in my view, a fairly clear and present danger to EA, given that there are other communities with some white nationalist presence that are quite socially close to EA.

Khorton @ 2020-09-08T23:36 (+10)

I don't know anything about Leverage but I can think of another situation where someone involved in the rationalist community was exposed as having misogynistic and white supremacist anonymous online accounts. (They only had loose ties to the rationalist community, it came up another way, but it concerned me.)

abrahamrowe @ 2020-09-09T00:13 (+2)

I just upvoted this comment as I strongly agree with it, but also, it had -1 karma with 2 votes on it when I did so. I think it would be extremely helpful for folks who disagree with this, or otherwise want to downvote it, to talk about why they disagree or downvoted it.

Dale @ 2020-09-09T01:42 (+21)

I didn't downvote it, though probably I should have. But it seems a stretch to say 'one guy who works for a weird organization that is supposedly EA' implies 'congregation'. I think that would have to imply a large number of people. I would be very disappointed if I had a congregation of less than ten people.

JoshYou also ignores important hedging in the linked comment:

Bennett denies this connection; he says he was trying to make friends with these white nationalists in order to get information on them and white nationalism. I think it's plausible that this is somewhat true.

So instead of saying

We've already seen white nationalists congregate in some EA-adjacent spaces.

It would be more fair to say

We've already seen one guy with some evidence he is a white nationalist (though he somewhat plausibly denies it) work for a weird organization that has some EA links.

Which is clearly much less worrying. There are lots of weird ideologies and a lot of weird people in California, who believe a lot of very incorrect things. I would be surprised if 'white nationalists' were really high up on the list of threats to EA, especially given how extremely left wing EA is and how low status they are. We probably have a lot more communists! Rather, I think the highlighting of 'White Nationalists' is being done for ideological reasons - i.e. to cast shade on more moderate right wing people by using a term that is practically a slur. I think the grandparent would not have made such a sloppy comment had it not been about the hated outgroup.

JoshYou @ 2020-09-09T02:34 (+7)

I also agree that it's ridiculous when left-wingers smear everyone on the right as Nazis, white nationalists, whatever. I'm not talking about conservatives, or the "IDW", or people who don't like the BLM movement or think racism is no big deal. I'd be quite happy for more right-of-center folks to join EA. I do mean literal white nationalists (like on par with the views in Jonah Bennett's leaked emails. I don't think his defense is credible at all, by the way).

I don't think it's accurate to see white nationalists in online communities as just the right tail that develops organically from a wide distribution of political views. White nationalists are more organized than that and have their own social networks (precisely because they're not just really conservative conservatives). Regular conservatives outnumber white nationalists by orders of magnitude in the general public, but I don't think that implies that white nationalists will be virtually non-existent in a space just because the majority are left of center.

Habryka @ 2020-09-09T01:12 (+20)

Describing members of Leverage as "white nationalists" strikes me as pretty extreme, to the level of dishonesty, and is not even backed up by the comment that was linked. I thought Buck's initial comment was also pretty bad, and he did indeed correct his comment, which is a correction that I appreciate, and I feel like any comment that links to it should obviously also take into account the correction.

I have interfaced a lot with people at Leverage, and while I have many issues with the organization, saying that many white nationalists congregate there, and have congregated in the past, just strikes me as really unlikely. 

Buck's comment also says at the bottom: 

Edited to add (Oct 08 2019): I wrote "which makes me think that it's likely that Leverage at least for a while had a whole lot of really racist employees." I think this was mistaken and I'm confused by why I wrote it. I endorse the claim "I think it's plausible Leverage had like five really racist employees". I feel pretty bad about this mistake and apologize to anyone harmed by it.

I also want us to separate "really racist" from "white nationalist" which are just really not the same term, and which appear to me to be conflated via the link above.

I also have other issues with the rest of the comment (namely that being constantly worried about communists or nazis hiding everywhere, and generally bringing up nazi comparisons in these discussions, tends to reliably derail things and make it harder to discuss these things well, since there are few conversational moves as mindkilling as accusing the other side of being nazis or communists. It's not that there are never nazis or communists, but if you want to have a good conversation, it's better to avoid nazi or communist comparisons until you really have no other choice, or you can really really commit to handling the topic in an open-minded way.)

JoshYou @ 2020-09-09T02:40 (+13)

My description was based on Buck's correction (I don't have any first-hand knowledge). I think a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don't believe. I don't mean to imply anything stronger than what Buck claimed about Leverage.

I invoked white nationalists not as a hypothetical representative of ideologies I don't like but quite deliberately, because they literally exist in substantial numbers in EA-adjacent online spaces and they could view EA as fertile ground if the EA community had different moderation and discursive norms. (Edited to avoid potential collateral reputational damage) I think the neo-reactionary community and their adjacency to rationalist networks are a clear example.

Habryka @ 2020-09-09T04:17 (+14)

Just to be clear, I don't think even most neoreactionaries would classify as white nationalists? Though maybe now we are arguing over the definition of white nationalism, which is definitely a vague term and could be interpreted many ways. I was thinking about it from the perspective of racism, though I can imagine a much broader definition that includes something more like "advocating for nations based on values historically associated with whiteness", which would obviously include neoreaction, but would also presumably be a much more tenable position in discourse. So for now I am going to assume you mean something much more straightforwardly based on racial superiority, which also appears to be the Wikipedia definition.

I've debated with a number of neoreactionaries, and I've never seen them bring up much stuff about racial superiority. Usually they are just arguing against democracy and in favor of centralized control, and making various arguments derived from that, though I also don't have a ton of datapoints. There is definitely a focus on the superiority of western culture in their writing and rhetoric, much of which is flawed, and I am deeply opposed to many of the things I've seen at least some neoreactionaries propose, but my sense is that I wouldn't characterize the philosophy fundamentally as white nationalist in the racist sense of the term. Though of course the few neoreactionaries that I have debated are probably selected in various ways that reduce the likelihood of having extreme opinions on these dimensions (though they are also the ones that are most likely to engage with EA, so I do think the sample should carry substantial weight).

Of course, some neoreactionaries are also going to be white nationalists, and being a neoreactionary will probably correlate with white nationalism at least a bit, but my guess is that at least the people adjacent to EA and Rationality that I've seen engage with that philosophy haven't been very focused on white nationalism, and I've frequently seen them actively argue against it.

abrahamrowe @ 2020-09-09T02:31 (+9)

Thanks for elaborating!

It seems to me that accusations of EA associations with white supremacy of various sorts come up often enough to be pretty concerning.

I also think the claims would be equally concerning if JoshYou had said "white supremacists" or "really racist people" instead of "white nationalists" in the original post, so I feel uncertain that Buck walking back his original comment actually lessens the degree to which we ought to be concerned.

I also have other issues with the rest of the comment (namely that being constantly worried about communists or nazis hiding everywhere, and generally bringing up nazi comparisons in these discussions, tends to reliably derail things and make it harder to discuss these things well, since there are few conversational moves as mindkilling as accusing the other side of being nazis or communists. It's not that there are never nazis or communists, but if you want to have a good conversation, it's better to avoid nazi or communist comparisons until you really have no other choice, or you can really really commit to handling the topic in an open-minded way.)

I didn't really see the Nazi comparisons (I guess saying "white nationalist" is sort of one, but I personally associate white nationalism as a phrase much more with individuals in the US than with Nazis, though that may be biased by my being American).

I guess broadly a trend I feel like I've seen lately is occasionally people writing about witnessing racism in the EA community, and having what seem like really genuine concerns, and then those basically not being discussed (at least on the EA Forum) or being framed as shutting down conversation.

Elityre @ 2020-09-15T19:31 (+3)

I don't follow how what you're saying is a response to what I was saying.

I think a model by which people gradually "warm up" to "more advanced" discourse norms is false.

I wasn't saying "the point of different discourse norms in different EA spaces is that it will gradually train people into more advanced discourse norms." I was saying that if I was mistaken about that "warming up effect", it would cause me to reconsider my view here.

In the comment above, I am only saying that I think it is a mistake to have different discourse norms at the core vs. the periphery of the movement.

Wei_Dai @ 2020-09-03T00:47 (+28)

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

You know, this makes me think I know just how academia was taken over by cancel culture. They must have allowed "introductory spaces" like undergrad classes to become "safe spaces", thinking they could continue serious open discussion in seminar rooms and journals; then those undergrads became graduate students and professors and demanded "safe spaces" everywhere they went. And how is anyone supposed to argue against "safety", especially once its importance has been institutionalized (i.e., departments were built in part to enforce "safe spaces", which can then easily extend their power beyond "introductory spaces")?

ETA: Greg Lukianoff and Jonathan Haidt have a book and an Atlantic article titled The Coddling of the American Mind detailing problems caused by the introduction of "safe spaces" in universities.

EricHerboso @ 2020-09-03T05:12 (+6)

I don't think this is pivotal to anyone, but just because I'm curious:

If we knew for a fact that a slippery slope wouldn't occur, and the "safe space" was limited just to the EA Facebook group, and there was no risk of this EA forum ever becoming a "safe space", would you then be okay with this demarcation of disallowing some types of discussion on the EA Facebook group, but allowing that discussion on the EA forum? Or do you strongly feel that EA should not ever disallow these types of discussion, even on the EA Facebook group?

(by "disallowing discussion", I mean Hansonian level stuff, not obviously improper things like direct threats or doxxing)

Kaj_Sotala @ 2020-09-03T06:29 (+8)
yet academia is now the top example of cancel culture

I'm a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don't think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?

I have lots of friends in academia and follow academic blogs etc., and basically don't hear any of them talking about cancel culture within that context. I did recently see a philosopher post a controversial paper and get backlash for it on Twitter, but then he seemed to basically shrug it off since people complaining on Twitter didn't really affect him. This fits my general model that most of the cancel culture influence on academia comes from people outside academia trying to affect it, with varying success.

I don't doubt that there are individual pockets within academia that are more cancely, but the rest of academia seems to me mostly unaffected by them.

Wei_Dai @ 2020-09-03T07:04 (+20)

I’m a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don’t think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?

Professors are already overwhelmingly leftists or left-leaning (almost all conservatives have been driven away or self-selected away), and now even left-leaning professors are being canceled or fearful of being canceled. See:

and this comment in the comments section of a NYT story about cancel culture among the students:

Having just graduated from the University of Minnesota last year, a very liberal college, I believe these examples don’t adequately show how far cancel culture has gone and what it truly is. The examples used of disassociating from obvious homophobes, or more classic bullying that teenage girls have always done to each other since the dawn of time is not new and not really cancel culture. The cancel culture that is truly new to my generation is the full blocking or shutting out of someone who simply has a different opinion than you. My experience in college was it morphed into a culture of fear for most. The fear of cancellation or punishment for voicing an opinion that the “group” disagreed with created a culture where most of us sat silent. My campus was not one of fruitful debate, but silent adherence to whatever the most “woke” person in the classroom decided was the correct thing to believe or think. This is not how things worked in the past, people used to be able to disagree, debate and sometimes feel offended because we are all looking to get closer to the truth on whatever topic it may be. Our problem with cancel culture is it snuffs out any debate, there is no longer room for dissent or nuance, the group can decide that your opinion isn’t worth hearing and—poof you’ve been canceled into oblivion. Whatever it’s worth I’d like to note I’m a liberal, voted for Obama and Hillary, those who participate in cancel culture aren’t liberals to me, they’ve hijacked the name.

About "I have lots of friends in academia and follow academic blogs etc., and basically don’t hear any of them talking about cancel culture within that context." there could be a number of explanations aside from cancel culture not being that bad in academia. Maybe you could ask them directly about it?

Kaj_Sotala @ 2020-09-03T07:56 (+9)

Thanks. It looks to me that much of what's being described at these links is about the atmosphere among the students at American universities, which then also starts affecting the professors there. That would explain my confusion, since a large fraction of my academic friends are European, so largely unaffected by these developments.

there could be a number of explanations aside from cancel culture not being that bad in academia.

I do hear them complain about various other things though, and I also have friends privately complaining about cancel culture in non-academic contexts, so I'd generally expect this to come up if it were an issue. But I could still ask, of course.

Khorton @ 2020-08-29T07:46 (+42)

EDIT: I realized that discussing this will not help me do more good or live a happier life so I'd rather not, but I'll leave it up for the record. You are welcome to reply to it.

Something I don't see discussed here is that there's a difference between a) not inviting a live speaker who has a history of being unpredictable and insensitive compared to b) refusing to engage with any of their ideas.

At this point, for my own mental health, I would not engage with Robin Hanson. If I knew he were going to be at an event and I'd have to interact with him, I wouldn't go. But I still might read one of his books - they've been through an editing process so I trust them to be more sensitive and more useful.

I see a lot of people saying "no one involved with EA would really object to Robin Hanson at an event" but there are actually a lot of us. And you can insult me however you want to - you can say that this makes me small-minded or irrational - but that won't make it an "effective" use of my time to hang around someone who's consistently unkind.

alexrjl @ 2020-08-29T13:29 (+28)

I appreciate you writing this and leaving it up. I feel basically the same (including the edit, so I'm pretty unlikely to reply to further comments), but felt better having seen your post, and think that you writing it was, in fact, doing good in this case (at least in making me and probably others not feel further separated from the community).

jackmalde @ 2020-08-29T13:58 (+21)

I think there's another difference between:

a) Thinking that a speaker shouldn't be allowed to speak at an event

b) Deciding not to attend an event with a confirmed speaker because you don't like their ideas

For the first half of your comment I thought you fell into camp b) but not camp a). However your last paragraph seems to imply you fall into both camps.

Personally I would not want a person to speak at an EA event if I thought they were likely to cause reputational damage to EA. In this particular case I (tentatively) don't think Hanson would have. Sure he's said some questionable things, but he was being invited to talk about tort law and I fail to see how allowing that signals condoning his questionable ideas. Therefore I would probably have let him speak and anyone who didn't want to hear him would obviously have been free to not attend.

It seems to me that people often imply that personally finding a speaker beyond the pale means that the speaker shouldn't be allowed to speak to anyone. I've always found this slightly odd.

anon_account @ 2020-09-02T20:14 (+15)

Personally, I feel the same. I can engage with Robin's ideas online. I think he produces some interesting content. Also, some dumb content. I can choose to learn from either. I can notice if he 'offends' me and then decide I'm still interested in whether what he has to say might be useful somehow. ...That doesn't mean I have to invite the guy over to my house to talk with me about his ideas, because I realize that I wouldn't enjoy being around him in person. I think this is more common than people realize among people who know Robin. If Munich wanted to read and discuss his stuff, but not invite him to 'hang out,' I get it.

MaxRa @ 2020-08-29T11:32 (+4)

Thank you for writing it and keeping this up. I think it's really valuable that people share the discomfort they feel around the way some people discuss. I wonder if Kelsey Piper's discussion of competing access needs and safe spaces captures the issue at hand.

Competing access needs is the idea that some people, in order to be able to participate in a community, need one thing, and other people need a conflicting thing (source)
  • For some people it is really valuable to have a space where one can discuss sensitive topics without caring about offense, where taking offense is discouraged because it would hinder progressing the arguments. Maybe even a space where one is encouraged to let one's mind go to places that are uncomfortable, to develop one's thinking around topics where social norms discourage you to go.
  • For others, a space like this would be distressing, depressing and demotivating. A space like this might offer a few insights, but they seem not worth the emotional costs and there seem to be many other topics to explore from an EA perspective, so why spend any time there.

I also hope that it is very easy for people to avoid spaces like this at EA conferences, e.g. to avoid a talk by Robin Hanson (though from the few talks of his that I have seen, I think his talks are much less "edgy" than the discussed blog posts). I wonder if it would be useful to tag sessions at an EA conference that would belong in the described space, or if people mostly already correctly avoid sessions they would find discomforting.

MaxRa @ 2020-08-29T11:37 (+1)

One idea in the direction of making discussion norms explicit that just came to my mind are Crocker's rules.

By declaring commitment to Crocker's rules, one authorizes other debaters to optimize their messages for information, even when this entails that emotional feelings will be disregarded. This means that you have accepted full responsibility for the operation of your own mind, so that if you're offended, it's your own fault.

I've heard that some people are unhappy with those rules. Maybe because they seem to signal what Khorton alluded to: "Oh, of course I can accommodate your small-minded irrational sensitivities if you don't want a message optimized for information". I know that they are/were used in the LessWrong Community Weekends in Berlin, where you would wear a "Crocker's rules" sticker on your nametag.

MaxRa @ 2020-08-28T13:13 (+36)

Thanks for writing about this. This incident bothered me and I really appreciate your thoughts and find them clarifying. I also tend to feel really frustrated with people finding offense in arguments (and notice this frustration right now), just to flag this here.

To the extent that I support some of Hanson’s ideas and want to see them become better-known, I am annoyed that this is less likely to happen because of Hanson’s missteps.

I found it improper that you call these "missteps", as if he made mistakes. As you said, openly discussing sensitive topics will cause offense if you don't censor yourself a lot. You mention that his colleagues do a better job at making controversial ideas more palatable, but again, as you suggested, maybe they actually spent more time editing their work. This seems like a tradeoff to me, and I'm not convinced that Hanson is making missteps and that we should encourage him to change how he runs his blog to have a more positive impact. Not saying this is true for Hanson, but for some thinkers it might be draining to worry about people taking offense at their thoughts. I'm worried about putting pressure on an important thinker to direct mental resources to things other than having smart thoughts about important topics.

Lukas_Gloor @ 2020-08-29T08:17 (+77)
This seems like a tradeoff to me

Yes, it's a tradeoff, but Hanson's being so close to one extreme of the spectrum that it starts to be implausible that anyone can be that bad at communicating carefully just by accident. I don't think he's even trying, and maybe he's trying to deliberately walk as close to the line as possible. What's the point in that? If I'm right, I wouldn't want to gratify that. I think it's lacking nuance if you blanket object to the "misstep" framing, especially since that's still a relatively weak negative judgment. We probably want to be able to commend some people on their careful communication of sensitive topics, so we also have to be willing to call it out if someone is doing an absolutely atrocious job at it.

For reference, I have listened to a bunch of politically controversial podcasts by Sam Harris, and even though I think there's a bit of room to communicate even better, there were no remarks I'd label as 'missteps.' By contrast, several of Hanson's tweets are borderline at best, and at least one now-deleted tweet I saw was utterly insane. I don't think it's fair that everyone has to be at least as good at careful communication as Harris to be able to openly talk about sensitive topics (and it seems the bar set by societal backlash is even higher now, which is of course terrible), but maybe we can expect people to at least do better than Hanson? That doesn't mean that Hanson should be disinvited from events, but I feel like it would suck if he didn't take more time to make his tweets less needlessly incendiary.

Wei_Dai @ 2020-08-29T09:05 (+35)

I don’t think he’s even trying, and maybe he’s trying to deliberately walk as close to the line as possible. What’s the point in that?

I can think of at least three reasons for someone to be "edgy" like that:

  1. To signal intelligence, because it takes knowledge and skill to be able to walk as close to a line as possible without crossing it. This could be the (perhaps subconscious) intent even if the effort ends up failing or backfiring.
  2. To try to hold one end of the overton window in place, if one was worried about the overton window shifting or narrowing.
  3. To try to desensitize people (i.e., reduce their emotional reactions) about certain topics, ideas, or opinions.

One could think of "edgy" people as performing a valuable social service (2 and 3 above) while taking a large personal risk (if they accidentally cross the line), while receiving the personal benefits of intelligence signaling as compensation. On this view, it's regrettable that more people aren't willing to be "edgy" (perhaps because we as a culture have devalued intelligence signaling relative to virtue signaling), and as a result our society is suffering the negative consequences of an increasingly narrow overton window and an increasingly sensitive populace.

An alternative view would be that there are too many "edgy" people causing damage to society by making the overton window too wide or anchoring it in the wrong place, and causing emotional damage to lots of people who they have no business trying to "desensitize", and they're doing that for the selfish benefit of signaling their intelligence to others. Therefore we should coordinate to punish such people by canceling/deplatforming/shaming them, etc.

(You can perhaps tell which view I'm sympathetic to, and which view is the one that the most influential parts of Western civilization have implicitly adopted in recent years.)

Lukas_Gloor @ 2020-08-29T09:29 (+35)

Thanks, those are good points. I agree that this is not black and white, that there are some positives to being edgy.

That said, I don't think you make a good case for the alternative view. I wouldn't say that the problem with Hanson's tweets is that they cause "emotional damage." The problem is that they contribute to the toxoplasma of rage dynamics (esp. combined with some people's impulse to defend everything about them). My intuition is that this negative effect outweighs the positive effects you describe.

Wei_Dai @ 2020-08-29T11:09 (+16)

The "alternative view" ("emotional damage") I mentioned was in part trying to summarize the view apparently taken by EA Munich and being defended in the OP: "And yet, many people are actually uncomfortable with Hanson for some of the same reasons brought up in the Slate piece; they find his remarks personally upsetting or unsettling."

The problem is that they contribute to the toxoplasma of rage dynamics (esp. combined with some people’s impulse to defend everything about them). My intuition is that this negative effect outweighs the positive effects you describe.

This would be a third view, which I hadn't seen anyone mention in connection with Robin Hanson until now. I guess it seems plausible although I personally haven't observed the "negative effect" you describe so I don't know how big the effect is.

MaxRa @ 2020-09-02T11:44 (+22)

Two other reasons to be "edgy" came to my mind:

Signalling frank discussion norms - when the host of a discussion now and then uses words and phrases that would be considered insensitive among a general audience, people in this discussion can feel permitted to talk frankly without having to worry about how the framing of their argument might offend anybody.

Relatedly, I noticed feeling relieved when a person higher in status made a "politically incorrect" joke. I felt like I could relax some part of my brain that worries about saying something that in some context could cause offense and me being punished socially (e.g. being labeled "problematic", which seems to be happening much quicker than I'd like, also in EA circles).

Only half joking: if somebody were to leak the chats I have had with my best friend over the years, there is probably something in there to deeply offend every person on Earth. So maybe another reason to be "edgy" is just that it's fun for some people to say things in a norm-violating way? I remember laughing out loud at two of Hanson's breaches of certain norms. Some part of me is worried about how this makes me look here. I think I laughed because it violated some norm in a surprising way (which would relate it to signalling intelligence), and not because I didn't find the topic serious or wasn't interested in serious discussion. I don't want to imply this was intended by Hanson, though. But I can imagine that it draws in some people, too.

MaxRa @ 2020-08-29T12:06 (+13)

Thanks for the pushback; I'm still confused, but it helped me think a bit better (I think). What do you think about the idea that the issue revolves around what Kelsey Piper called competing access needs? I explained how I think about it in this comment. I feel like I want to protect edgy think-aloud spaces like those from Hanson. I feel like I benefit a lot from them, and I feel like I (not being an EA insider) am already excluded from many valuable but potentially offending EA think-aloud spaces because people are not willing to bear the costs like Hanson does.

Lukas_Gloor @ 2020-08-29T12:53 (+12)

That all makes sense. I'm a bit puzzled why it has to be edgy on top of just talking with fewer filters. It feels to me like the intention isn't just to discuss ideas with people of a certain access need, but also some element of deliberate provocation. (But maybe you could say that's just a side product of curiosity about where the lines are – I just feel like some of the tweet wordings were deliberately optimized to be jarring.) If it wasn't for that one tweet that Hanson now apologized for, I'd have less strong opinions on whether to use the term "misstep." (And the original post used it in plural, so you have a point.)

vaniver @ 2020-08-31T18:47 (+26)
I'm a bit puzzled why it has to be edgy on top of just talking with fewer filters.

Presumably every filter is associated with an edge, right? Like, the 'trolley problem' is a classic of philosophy, and yet it is potentially traumatic for the victims of vehicular violence or accidents. If that's a group you don't want to upset or offend, you install a filter to catch yourself before you do, and when seeing other people say things you would've filtered out, you perceive them as 'edgy'. "Don't they know they shouldn't say that? Are they deliberately saying that because it's edgy?"

[A more real example is that a friend once collected a list of classic examples and thought experiments, and edited all of the food-based ones to be vegan, instead of the original food item. Presumably the people who originally generated those thought experiments didn't perceive them as being 'edgy' or 'over the line' in some way.]

but also some element of deliberate provocation.

I read a lot of old books; for example, it's interesting to contrast the 1934 and 1981 editions of How to Win Friends and Influence People. Deciding to write one of the 'old-version' sentences in 2020 would probably be seen as a deliberate provocation, and yet it seems hugely inconsistent to see Dale Carnegie as out to deliberately provoke people.

Now, I'm not saying Hanson isn't deliberately edgy; he very well might be. But there are a lot of ways in which you might offend someone, and it takes a lot of computation to proactively notice and prevent all of them, and it's very easy to think your filters are "common knowledge" or "obvious" when in fact they aren't. As a matter of bounded computation, thoughts spent on filters are thoughts not spent on other things, and so there is a real tradeoff here, where the fewer filters are required the more thoughts can be spent on other things, but this is coming through a literal increase in carelessness.

Lukas_Gloor @ 2020-09-01T19:13 (+11)
Now, I'm not saying Hanson isn't deliberately edgy; he very well might be.

If you're not saying that, then why did you make a comment? It feels like you're stating a fully general counterargument to the view that some statements are clearly worth improving, and that it matters how we say things. That seems like an unattractive view to me, and I'm saying that as someone who is really unhappy with social justice discourse.

Edit: It makes sense to give a reminder that we may sometimes jump to conclusions too quickly, and maybe you didn't want to voice unambiguous support for the view that the comment wordings were in fact not easy to improve on given the choice of topic. That would make sense – but then I have a different opinion.

vaniver @ 2020-09-01T21:26 (+21)
you didn't want to voice unambiguous support for the view that the comment wordings were in fact not easy to improve on given the choice of topic.

I'm afraid this sentence has too many negations for me to clearly point one way or the other, but let me try to restate it and say why I made a comment:

The mechanistic approach to avoiding offense is to keep track of the ways things you say could be interpreted negatively, and search for ways to get your point across while not allowing for any of the negative interpretations. This is a tax on saying anything, and it especially taxes statements on touchy subjects, and the tax on saying things backpropagates into a tax on thinking them.

When we consider people who fail at the task of avoiding giving offense, it seems like there are three categories to consider:

1. The Blunt, who are ignoring the question of how the comment will land, and are just trying to state their point clearly (according to them).

2. The Blithe, who would put effort into rewording their point if they knew how to avoid giving offense, but whose models of the audience are inadequate to the task.

3. The Edgy, who are optimizing for being 'on the line' or in the 'plausible deniability' region, where they can both offend some targets and have some defenders who view their statements as unobjectionable.

While I'm comfortable predicting those categories will exist, confidently asserting that someone falls into any particular category is hard, because it involves some amount of mind-reading (and I think the typical mind fallacy makes it easy to think people are being Edgy, because you assume they see your filters when deciding what to say). That said, my guess is that Hanson is Blunt instead of Edgy or Blithe.

Lukas_Gloor @ 2020-09-02T06:02 (+4)

Thanks, that makes sense to me now! The three categories are also what I pointed out in my original comment:

Yes, it's a tradeoff, but Hanson's being so close to one extreme of the spectrum that it starts to be implausible that anyone can be that bad at communicating carefully just by accident. I don't think he's even trying, and maybe he's trying to deliberately walk as close to the line as possible.

Okay, so you cared mostly about this point about mind reading:

While I'm comfortable predicting those categories will exist, confidently asserting that someone falls into any particular category is hard,

This is a good point, but I didn't find your initial comment so helpful because this point against mind reading didn't touch on any of the specifics of the situation. It didn't address the object-level arguments I gave:

[...] I just feel like some of the tweet wordings were deliberately optimized to be jarring.)
but Hanson's being so close to one extreme of the spectrum that it starts to be implausible that anyone can be that bad at communicating carefully just by accident.

I felt confused about why I was presented with a fully general argument for something I thought I indicated I already considered. If I read your comment as "I don't want to comment on the specific tweets, but your interpretation might be a bit hasty" – that makes perfect sense. But by itself, it felt to me like I was being strawmanned for not being aware of obvious possibilities. Similar to khorton, I had the impulse to say "What does this have to do with trolleys, shouldn't we, if anything, talk about the specific wording of the tweets?" Because to me, phrases like "gentle, silent rape" seem obviously unnecessarily jarring even as far as Twitter discussions about rape go. (And while one could try to defend this as just blunt or blithe, I think the reasoning would have to be disanalogous to your trolley or food examples, because it's not like it should be surprising to any Western person in the last two decades that rape is a particularly sensitive topic – very unlike the "changing animal food to vegan food" example you gave.)

Habryka @ 2020-09-02T06:11 (+16)

Because to me, phrases like "gentle, silent rape" seem obviously unnecessarily jarring even as far as Twitter discussions about rape go.

I am always really confused when someone brings up this point as a point of critique. The substance of Hanson's post where he used that phrase just seemed totally solid to me. 

I feel like this phrase is always invoked to make the point that Hanson doesn't understand how bad rape is, or that he somehow thinks lots of rape is "gentle" or "silent", but that has absolutely nothing to do with the post where the phrase is used. The phrase isn't even referring to rape itself! 

When people say things like this, my feeling is that they must have not actually read the original post, where the idea of "gentle, silent rape" was used as a way to generate intuitions not about how bad rape is, but about how bad something else is (cuckoldry), and about how our legal system judges different actions in a somewhat inconsistent way. Again, nowhere in that series of posts did Hanson say that rape was in any way not bad, or not traumatic, or not something that we should obviously try to prevent with a substantial fraction of our resources. And given the relatively difficult point he tried to make, which is a good one and I appreciate him making, I feel like his word choice was overall totally fine, if one assumes that others will at the very least read what the phrase refers to at all, instead of totally removing it from context and using it in a way that has basically nothing to do with how it was used by him, which I argue is a reasonable assumption to make in a healthy intellectual community.

Lukas_Gloor @ 2020-09-02T06:38 (+15)

I did read the post, and I mostly agree with you about the content (Edit: at least in the sense that I think large parts of the argument are valid; I think there are some important disanalogies that Hanson didn't mention, like "right to bodily integrity" being way clearer than "moral responsibility toward your marriage partner"). I find it weird that just because I think a point is poorly presented, people think I disagree with the point. (Edit: It's particularly the juxtaposition of "gently raped" that comes also in the main part of the text. I also would prefer more remarks that put the reader at ease, e.g., repeating several times that it's all just a thought experiment, and so on.)

There's a spectrum of how much people care about a norm to present especially sensitive topics in a considerate way. You and a lot of other people here seem to be so far on one end of the spectrum that you don't seem to notice the difference between me and Ezra Klein (in the discussion between Sam Harris and Ezra Klein, I completely agreed with Sam Harris.) Maybe that's just because there are few people in the middle of this spectrum, and you usually deal with people who bring the same types of objections. But why are there so few people in the middle of this spectrum? That's what I find weird.

Some people here talk about a slippery slope and having to defend the ground at all costs. Is that the reasoning?

I want to keep up a norm that considerateness is really good. I think that's compatible with also criticizing bad outgrowths of considerate impulses. Just like it's compatible to care about truth-seeking, but criticize bad outgrowths of it. (If a virtue goes too far, it's not a virtue anymore.)

Habryka @ 2020-09-02T06:58 (+21)

I find it weird that just because I think a point is poorly presented, people think I disagree with the point.

Sorry! I never meant to imply that you disagree with the point. 

My comment in this case is more: How would you have actually wanted Robin Hanson to phrase his point? I've thought about that issue a good amount, and like, I feel like it's just a really hard point to make. I am honestly curious what other thing you would have preferred Hanson to say instead. The thing he said seemed overall pretty clear to me, and really not like an attempt to be intentionally edgy or something, and more that the point he wanted to make kind of just had a bunch of inconvenient consequences that were difficult to explore (similarly to how utilitarianism quickly gives rise to a number of consequences that are hard to discuss and explore).

My guess is you can probably come up with something better, but that it would take you substantial time (> 10 minutes) of thinking. 

My argument here is mostly: In context, the thing that Robin said seemed fine, and I don't expect that many people who read that blogpost actually found his phrasing that problematic. The thing that I expect to have happened is that some people saw this as an opportunity to make Robin look bad, and use some of the words he said completely out of context, creating a narrative where he said something he definitely did not say, and that looked really bad. 

And while I think the bar of "only write essays that don't really inflame lots of people and cause them to be triggered" is already a high bar to meet, though maybe a potentially reasonable one, the bar of "never write anything that when taken out of context could cause people to be really triggered" is no longer a feasible bar to meet. Indeed it is a bar that is now so high that I no longer know how to make the vast majority of important intellectual points I have to make in order to solve many of the important global problems I want us to solve in my lifetime. The way I understood your comment above, and the usual critiques of that blogpost in particular, is that it was leaning into the out-of-context phrasings of his writing, without really acknowledging the context in which the phrase was used.

I think this is an important point to make, because on a number of occasions I do think Robin has actually said things that seemed much more edgy and unnecessarily inflammatory even if you had the full context of his writing, and I think the case for those being bad is much stronger than the case for that blogpost about "gentle, silent rape" and other things in its reference class being bad. I think Twitter in particular has made some of this a lot worse, since it's much harder to provide much context that helps people comprehend the full argument, and it's much more frequent for things to be taken out of context by others.

vaniver @ 2020-09-02T19:13 (+7)
I felt confused about why I was presented with a fully general argument for something I thought I indicated I already considered.

In my original comment, I was trying to resolve the puzzle of why something would have to appear edgy instead of just having fewer filters, by pointing out the ways in which having unshared filters would lead to the appearance of edginess. [On reflection, I should've been clearer about the 'unshared' aspect of it.]

Khorton @ 2020-09-01T10:50 (+2)

Comparing trolley accidents to rape is pretty ridiculous for a few reasons:

  1. Rape is much more common than being run over by trolleys.
  2. Rape is a very personal form of violence. I'm not sure anyone has ever been run over by a trolley on purpose in all of history.
  3. If you're talking to a person about trolley accidents, they're very unlikely to actually run you over, no matter how cheerful they seem, because most people don't have access to trolleys. If you're talking to a man about rape and he thinks it's not a big deal, there's some chance he'll actually rape you. In some cases, the conversation includes an implicit threat.

Larks @ 2020-09-01T14:06 (+38)
If you're talking to a man about rape and he thinks it's not a big deal, there's some chance he'll actually rape you.

I realise you did not say this applied to Robin, but just in case anyone reading was confused and mistakenly thought it was implicit, we should make clear that Robin does not think rape is 'not a big deal'. Firstly, opposition to rape is almost universal in the west, especially among the highly educated; as such our prior should be extremely strong that he does think rape is bad. In addition to this, and despite his opposition to unnecessary disclaimers, Robin has made clear his opposition to rape on many occasions. Here are some quotations that I found easily on the first page of google and by following the links in the article EA Munich linked:

I was not at all minimizing the harm of rape when I used rape as a reference to ask if other harms might be even bigger. Just as people who accuse others of being like Hitler do not usually intend to praise Hitler, people who compare other harms to rape usually intend to emphasize how big are those other harms, not how small is rape.

https://www.overcomingbias.com/2014/11/hanson-loves-moose-caca.html

You are seriously misrepresenting my views. I'm not at all an advocate for rape. 

https://twitter.com/robinhanson/status/990762713876922368?lang=en

It is bordering on slander for you to call me "pro-rape". You have no direct evidence for that claim, and I've denied it many times.  

https://twitter.com/robinhanson/status/991069965263491072

I didn't and don't minimize rape!  

https://twitter.com/robinhanson/status/1042739542242074630

and from personal communication:

of course I’m against rape, and it is easy to see or ask.

Separately, while I don't know the base rate at which a hypothetical person who supposedly doesn't take rape sufficiently seriously would rape someone at an EA event as a result (I suspect it is very low), I think we would be relatively safe here, as it would presumably be a Zoom meeting anyway due to German immigration restrictions.

Khorton @ 2020-09-01T15:20 (+19)

Yes, I'm not saying that Robin Hanson is a criminal, and it's good to point out that he's not pro-rape. Thanks for that.

I was thinking about what it would look like for the whole EA community to generally try to avoid upsetting people who have been traumatized by rape, and comparing that to the EA community trying to avoid upsetting people who have been traumatized by trolley accidents, which was a suggestion above.

My intuition about the base rate of people who have experienced sexual assault and how often sexual assault happens at EA events is probably different from yours which may explain our different approaches to this topic.

ragyo_odan_kagyo_odan @ 2020-09-01T18:26 (+3)
My intuition about how often sexual assault happens at EA events is probably different from yours

How often does sexual assault and/or rape happen at EA events, in your opinion? Are we talking 1 in 10 events, 1 in 100, 1 in 1000?

vaniver @ 2020-09-01T21:07 (+23)
Comparing trolley accidents to rape is pretty ridiculous for a few reasons:

I think you're missing my point; I'm not describing the scale, but the type. For example, suppose we were discussing racial prejudice, and I made an analogy to prejudice against the left-handed; it would be highly innumerate of me to claim that prejudice against the left-handed is as damaging as racial prejudice, but it might be accurate of me to say both are examples of prejudice against inborn characteristics, are perceived as unfair by the victims, and so on.

And so if you're not trying to compare expected trauma, and just want to come up with rules of politeness that guard against any expected trauma above a threshold, setting the threshold low enough that both "prejudice against left-handers" and "prejudice against other races" are out doesn't imply that the damage done by both is similar.


That said, I don't think I agree with the points on your list, because I used the reference class of "vehicular violence or accidents," which is very broad. I agree there's an important disanalogy in that 'forced choices' like in the trolley problem are highly atypical for vehicular accidents, most of which are caused by negligence of one sort or another, and that trolleys themselves are very rare compared to cars, trucks, and trains, and so I don't actually expect most sufferers of MVA PTSD to be triggered or offended by the trolley problem. But if they were, it seems relevant that (in the US) motor vehicle accidents are more common than rape, and lead to more cases of PTSD than rape (at least, according to 2004 research; I couldn't quickly find anything more recent).

I also think that utilitarian thought experiments in general radiate the "can't be trusted to abide by norms" property; in the 'fat man' or 'organ donor' variants of the trolley problem, for example, the naive utilitarian answer is to murder, which is also a real risk that could make the conversation include an implicit threat.

Khorton @ 2020-09-01T12:58 (+19)

If you think my arguments are incorrect, it would be useful to explain how rather than silently downvoting.

I am starting to wonder if I will be downvoted on the EA Forum any time I point out that rape is bad. That can't be why people downvote these comments, right?

MaxRa @ 2020-09-01T17:58 (+25)

I'm glad you came back to look at this discussion again because I found your comments here (and generally) really valuable. I refrained from upvoting your comment because you called the comparison "pretty ridiculous". I would feel attacked if you called my reasoning ridiculous and would be less able to constructively argue with you.

I think you are right when pointing out that some topics are much more sensitive to many more people, and EAs being more careful around those topics makes our community more welcoming to more people. That said, I understood vaniver's point was to take an example where most people reading it would not feel like it is a sensitive topic, and *even there* you might upset some people (e.g. if they stumble on a discussion comparing the death of five vs. one). So the solution should not be to punish/deplatform somebody who discussed a topic in a way that was upsetting for someone, and going forward stop people from thinking publicly when touching potentially upsetting topics, but something else.

Khorton @ 2020-09-01T19:28 (+10)

That's a very helpful overview, thank you.

Gregory_Lewis @ 2020-09-01T13:34 (+15)

I'm fairly sure the real story is much better than that, although still bad in objective terms: In culture war threads, the typical norms re karma roughly morph into 'barely restricted tribal warfare'. So people have much lower thresholds both to slavishly upvote their 'team', and to downvote the opposing one.

Habryka @ 2020-09-01T18:07 (+22)

I downvoted the above comment by Khorton (not the one asking for explanations, but the one complaining about the comparison of trolleys and rape), and think Larks explained part of the reason pretty well. I read it in substantial parts as an implicit accusation that Robin is in support of rape, and it also seemed to itself misunderstand Vaniver's comment, which wasn't at all emphasizing a dimension of trolley problems that made a comparison with rape unfitting, and doing so in a pretty accusatory way (which meerpirat clarified below).

I agree that voting quality somewhat deteriorates in more heated debates, but I think this characterization of how voting happens is too uncharitable. I try pretty hard to vote carefully, and often change my votes multiple times on a thread if I later on realize I was too quick to judge something or misunderstood someone, and really spend a lot of time reconsidering and thinking about my voting behavior with the health of the broader discourse in mind, so I am quite confident about my own voting behavior being mischaracterized by the above. 

I've also talked to many other people active on LessWrong and the EA Forum over the years, and a lot of people seem to put a lot of effort into how they vote, so I am reasonably confident many others also spend substantial time thinking about their voting in a way that really isn't well-characterized by "roughly morphing into barely restricted tribal warfare".

Linch @ 2020-09-01T17:58 (+2)

I am reasonably confident that this is the best first-order explanation.


EDIT: Habryka's comment makes me less sure that this is true.

Aaron Gertler @ 2020-08-28T17:58 (+17)

Thanks for the feedback. I think the word "missteps" is too presumptive for the reasons you outlined, and I've changed it to "decisions." I also added a caveat noting that the controversies he's provoked may lead to his ideas becoming better-known generally (though it's really hard to determine the overall effect).

Dale @ 2020-08-30T13:34 (+29)
That said, my impression is that, over time, the EA movement has become more attentive to various kinds of diversity, and more cautious about avoiding public discussion of ideas likely to cause offense. This involves trade-offs with other values.

I am skeptical of this. The EA survey shows that one of the most under-represented groups in EA is conservatives, and I have seen little sign that EAs in general, and CEA in particular, have become more cautious about public discussion that will offend conservatives.

Similarly, I don't think there is much evidence of people suppressing ideas offensive to older people, or religious people, even though these are also dramatically under-represented groups.

I think a more accurate summary would be that as EA has grown, it has become subject to Conquest's Second Law, and this has made it less tolerant of various views and people currently judged to be unacceptable by SJWs. Specifically, I would be surprised if there was much evidence of EAs/CEA being more cautious about publicly discussing 'woke' views out of fear of offending liberals or conservatives.

Aaron Gertler @ 2020-09-02T00:44 (+14)

Specifically, I would be surprised if there was much evidence of EAs/CEA being more cautious about publicly discussing 'woke' views out of fear of offending liberals or conservatives.

I hear frequently from people who express fear of discussing "woke" views on the Forum or in other EA discussion spaces. They (reasonably) point out that anti-woke views are much more popular, and that woke-adjacent comments are frequently heavily downvoted. All I have is a series of anecdotal statements from different people, but maybe that qualifies as "evidence"?

Habryka @ 2020-09-02T01:20 (+45)

My model of this is that there is a large fraction of beliefs in the normal Overton window of both liberals and conservatives that are not within the Overton window of this community. From a charitable perspective, that makes sense: lots of beliefs that are accepted as gospel in the conservative community seem obviously wrong to me, and I am obviously going to argue against them. The same is true for many beliefs in the liberal community. Since many more members of the community are liberal, we are going to see many more "woke" views argued against, for two separate reasons: 

  1. Many people assume that all spaces they inhabit are liberal spaces, the EA community is broadly liberal, and so they feel very surprised if they say something that everywhere else is accepted as obvious and suddenly get questioned here (concrete examples that I've seen in the past and that I am happy to see questioned are: "there do not exist substantial cognitive differences between genders", "socialized healthcare is universally good", "we should drastically increase taxes on billionaires", "racism is obviously one of the most important problems to be working on").
  2. There are simply many more liberal people so you are going to see many more datapoints of "woke" people feeling attacked, because the baserates for conservatives is already that low

My prediction is that if we were to actually get someone with a relatively central conservative viewpoint, their views would seem even more outlandish to people on the forum, and their perspectives would get even more attacked. Imagine talking about any of the following topics on the forum: 

  1. Gay marriage and gay rights are quite bad
  2. Humans are not the result of evolution
  3. The war on drugs is a strongly positive force, and we should increase incarceration rates

(Note, I really don't hang out much in standard conservative circles, so there is a good chance the above are actually all totally outlandish and the result of stereotypes.) 

If I imagine someone bringing up these topics, the response would be absolutely universally negative, to a much larger degree than what we see when woke topics are being discussed. 

The thing that I think is actually explaining the data is simply that the EA and Rationality communities have a number of opinions that substantially diverge from the opinions held in basically any other large intellectual community, and so if someone comes in and just assumes that everyone shares the context from one of these other communities, they will experience substantial pushback. The most common community for which this happens is the liberal community, since we have substantial overlap, but this would happen with people from basically any community (and I've seen it happen with many people from the libertarian community who sometimes mistakenly believe all of their beliefs are shared in the EA community, and then receive massive pushback as they realize that people are actually overall quite strongly in favor of more redistribution of wealth).

And to be clear, I think this is overall quite good and I am happy about most of these divergences from both liberal and conservative gospel, since they overall seem to point much closer to the actual truth than what those communities seem to generally accept as true (though I wouldn't at all claim that we are infallible or that this is a uniform trend, and I think there are probably quite a few topics where the divergences point away from the truth, just that the aggregate seems broadly in the right direction to me).

jtm @ 2020-08-28T16:31 (+27)

Just logging in to say that, as someone who co-ran a large university EA group for three years (incidentally the one that Aaron founded many years prior!), I find it plausible that, in some scenarios, the decision that EA Munich made would be the all-things-considered best one.

Habryka @ 2020-08-28T17:33 (+26)

Some of the people who have spent the most time doing the above came to the conclusion that EA should be more cautious and attentive to diversity.

Edited from earlier comment: I think I am mostly confused about what diversity has to do with this decision. It seems to me that there are many pro-diversity reasons to not deplatform Hanson. Indeed, the primary one cited, one of intellectual diversity and tolerance of weird ideas, is primarily an argument in favor of diversity. So while diversity plays some role, I think I am actually confused about why you bring it up here.

I am saying this because I wanted to argue against things in the last section, but realized that you just use really high-level language like "diversity and inclusion" which is very hard to say anything about. Of course everyone is in favor of some types of diversity, but it feels to me like the last section is trying to say something like "people who talked to a lot of people in the community tend to be more concerned about the kind of diversity that having Robin as a speaker might harm", but I don't actually know whether that's what you mean. But if you do mean it, I think that's mostly backwards, based on the evidence I have seen.

Aaron Gertler @ 2020-08-28T19:05 (+42)

I maybe should have said something like "concerns related to social justice" when I said "diversity." I wound up picking the shorter word, but at the price of ambiguity.

You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section feel that some people might feel alienated and unwelcome by the presence of Robin as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.

A simple example of the kind of thing I'm thinking of (which I'm aware is too simplistic to represent reality in full, but does draw from the experiences of people I've met): 

A German survivor of sexual abuse is interested in EA Munich's events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on "gentle silent rape" and find it viscerally unpleasant. They've seen other discussion spaces where ideas like Hanson's were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.

There are many reasons someone might want to engage more or less with EA based on the types of views discussed within the community. Different types of deplatforming probably lead to different types of diversity being more or less present, and send different kinds of signals about effective altruism into the wider world.

For example, I've heard one person (who was generally anti-deplatforming) argue against ever promoting events that blend EA and religious thought, because (as best I understood it) they saw religious views as antithetical to things they thought were important about EA. This brings up questions like:

  • Will the promotion of religion-aligned EA events increase or decrease the positive impact of EA, on net?
    • There are lots of trade-offs that make this hard to figure out.
  • Is it okay for an individual EA group to decline to host a speaker if they discover that the speaker is an evangelical Christian who wrote some grisly thought experiments about how Hell might work, if it were real? Even if the event was on an unrelated topic?
    • This seems okay to me. Again, there are trade-offs, but I leave it to the group to navigate them. I might advise them one way or another if they asked me, but whatever their decision was, I'd assume they did what made the most sense to them, based partly on private information I couldn't access.

As a movement, EA aims to have a lot of influence across a variety of fields, institutions, geographic regions, etc. This will probably work out better if we have a movement that is diverse in many ways. Entertaining many different ideas probably lets us make more intellectual progress. Being welcoming to people from many backgrounds gives us access to a wider range of ideas, while also making it easier for us to recruit more people*, spread our ideas to more places, etc. On the other hand, if we try to be welcoming by restricting discussion, we might lead the new people we reach to share their ideas less freely, slowing our intellectual progress. Getting the right balance seems difficult.

I could write much more along these themes, but I'll end here, because I already feel like I'm starting to lose coherence.

*And of course, recruiting more people overall means you get an even wider range of ideas. Even if there aren't ideas that only people from group X will have, every individual is a new mind with new thoughts.

Wei_Dai @ 2020-08-30T20:46 (+47)

I maybe should have said something like “concerns related to social justice” when I said “diversity.” I wound up picking the shorter word, but at the price of ambiguity.

I find it interesting that you thought "diversity" is a good shorthand for "social justice", whereas other EAs naturally interpreted it as "intellectual diversity" or at least thought there's significant ambiguity in that direction. Seems to say a lot about the current moment in EA...

Getting the right balance seems difficult.

Well, maybe not, if some of the apparent options aren't real options. For example if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope. (Or to quickly run back to level ground while you still have some chance, as I'm starting to suspect that EA has taken quite a few steps down the slope already.)

It may be that in the end EA can't fight (i.e., can't win against) SJ-like dynamics, and therefore EA joining cancel culture is more "effective" than it getting canceled as a whole. If EA leaders have made an informed and well-considered decision about this, then fine, tell me and I'll defer to them. (If that's the case, I'll appreciate that it would be politically impossible to publicly lay out all of their reasoning.) It scares me though that someone responsible for a large and prominent part of the EA community (i.e., the EA Forum) can talk about "getting the right balance" without even mentioning the obvious possibility of a slippery slope.

Aaron Gertler @ 2020-09-02T00:38 (+7)

I find it interesting that you thought "diversity" is a good shorthand for "social justice", whereas other EAs naturally interpreted it as "intellectual diversity" or at least thought there's significant ambiguity in that direction. Seems to say a lot about the current moment in EA...

I don't think it says much about the current moment in EA. It says a few things about me: 

  • That I generated the initial draft for this post in the middle of the night with no intention of publishing
  • That I decided to post it in a knowingly imperfect state rather than fiddling around with the language at the risk of never publishing, or publishing well after anyone stopped caring (hence the epistemic status)
  • That I spend too much time on Twitter, which has more discussion of demographic diversity than other kinds. Much of the English-speaking world also seems to be this way.

For example if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope.

Is there such a slope? It seems to me as though cultures and institutions can swing back and forth on this point; Donald Trump's electoral success is a notable example. Throughout American history, different views have been cancel-worthy; is the Overton Window really narrower now than it was in the 1950s? (I'd be happy to read any arguments for this being a uniquely bad time; I don't think it's impossible that a slippery slope does exist, or that this is the worst cancel culture has been in the modern era.)

It scares me though that someone responsible for a large and prominent part of the EA community (i.e., the EA Forum) can talk about "getting the right balance" without even mentioning the obvious possibility of a slippery slope.

If you have any concerns about specific moderation decisions or other elements of the way the Forum is managed, please let me know! I'd like to think that we've hosted a variety of threads on related topics while managing to maintain a better combination of civility and free speech than almost any other online space, but I'd be surprised if there weren't ways for us to improve.

As for not mentioning the possibility: had I written for a few more hours, there might have been 50 or 60 bullet points in this piece, and I might have bounced between perspectives a dozen more times, with the phrase "slippery slope" appearing somewhere. As I said above, I chose a relatively arbitrary time to stop, share what I had with others, and then publish.

I'm open to the possibility that a slippery slope is almost universal when institutions and communities tackle these issues, but I also think that attention tends to be drawn to anecdotes that feed the "slippery slope" narrative,  so I remain uncertain. 

(For people curious about the topic, Ezra Klein's podcast with Yascha Mounk includes an interesting argument that the Overton Window may have widened since 2000 or so.)

Wei_Dai @ 2020-09-02T04:23 (+16)

I’d be happy to read any arguments for this being a uniquely bad time

There were extensive discussions around this at https://www.greaterwrong.com/posts/PjfsbKrK5MnJDDoFr/have-epistemic-conditions-always-been-this-bad, including one about the 1950s. (Note that those discussions were from before the recent cluster of even more extreme cancellations like David Shor and the utility worker who supposedly made a white power sign.)

ETA: See also this Atlantic article that just came out today, and John McWhorter's tweet:

Whew! Because of the Atlantic article today, I am now getting another flood of missives from academics deeply afraid. Folks, I hear you but the volume outstrips my ability to write back. Please know I am reading all of them eventually, and they all make me think.

If you're not sure whether EA can avoid sharing this fate, shouldn't figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of "50 or 60 bullet points"? (Unless you know that others are already working on the problem, and it sure doesn't sound like it.)

Aaron Gertler @ 2020-09-03T01:44 (+6)

Thanks for linking to those discussions. 

Having read through them, I'm still not convinced that today's conditions are worse than those of other eras. It is very easy to find horrible stories of bad epistemics now, but is that because there are more such stories per capita, or because more information is being shared per capita than ever before? 

(I should say, before I continue, that many of these stories horrify me — for example, the Yale Halloween incident, which happened the year after I graduated. I'm fighting against my own inclination to assume that things are worse than ever.)

Take John McWhorter's article. Had a professor in the 1950s written a similar piece, what fraction of the academic population (which is, I assume, much larger today than it was then) might have sent messages to them about e.g. being forced to hide their views on one of that era's many taboo subjects? What would answers to the survey in the article have looked like?

Or take the "Postcard from Pre-Totalitarian America" you referenced.  It's a chilling anecdote... but also seems wildly exaggerated in many places. Do those young academics actually all believe that America is the most evil country, or that the hijab is liberating? Is he certain that none of his students are cynically repeating mantras the same way he did? Do other professors from a similar background also think the U.S. is worse than the USSR was? Because this is one letter from one person, it's impossible to tell.

Of course, it could be that things really were better then, but the lack of data from that period bothers me, given the natural human inclination to assume that one's own time period is worse than prior time periods in various ways. (You can see this on Left Twitter all the time when today's economic conditions are weighed against those of earlier eras.)

 

But whether this is the worst time in general isn't as relevant as:

If you're not sure whether EA can avoid sharing this fate, shouldn't figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of "50 or 60 bullet points"?

Taking this question literally, there are a huge number of fates I'm not sure EA can avoid sharing, because nothing is certain. Among these fates, "devolving into cancel culture" seems less prominent than other failure conditions that I have also not made my top priority.

This is because my top priority at work is to write and edit things on behalf of other people. I sometimes think about EA cultural/community issues, but mostly because doing so might help me improve the projects I work on, as those are my primary responsibility. This Forum post happened in my free time and isn't connected to my job, save that my job led me to read that Twitter thread in the first place and has informed some of my beliefs.

(For what it's worth, if I had to choose a top issue that might lead EA to "fail", I'd cite "low or stagnant growth," which is something I think about a lot, inside and outside of work.)

There are people whose job descriptions include "looking for threats to EA and trying to plan against them." Some of them are working on problems like the ones that concern you. For example, many aspects of 80K's anonymous interview series get into questions about diversity and groupthink (among other relevant topics).

Of course, the interviews are scattered across many subjects, and many potentially great projects in this area haven't been done. I'd be interested to see someone take on the "cancel culture" question in a more dedicated way, but I'd also like to see someone do this for movement growth, and that seems even more underworked to me.

I know some of the aforementioned people have read this discussion, and I may send it to others if I see additional movement in the "cancel culture" direction. (The EA Munich thing seems like one of a few isolated incidents, and I don't see a cancel-y trend in EA right now.)

Wei_Dai @ 2020-09-03T02:49 (+23)

I think the biggest reason I'm worried is that seemingly every non-conservative intellectual or cultural center has fallen prey to cancel culture, e.g., academia, journalism, publishing, museums/arts, tech companies, local governments in left-leaning areas, etc. There are stories about it happening in a crochet group, and I've personally seen it in action in my local parent groups. Doesn't that give you a high enough base rate that you should think "I better assume EA is in serious danger too, unless I can understand why it happened to those places, and why the same mechanisms/dynamics don't apply to EA"?

Your reasoning (from another comment) is "I've seen various incidents that seem worrying, but they don't seem to form a pattern." Well if you only get seriously worried once there's a clear pattern, that may well be too late to do anything about it! Remember that many of those intellectual/cultural centers were once filled with liberals who visibly supported free speech, free inquiry, etc., and many of them would have cared enough to try to do something about cancel culture once they saw a clear pattern of movement in that direction, but that must have been too late already.

For what it’s worth, if I had to choose a top issue that might lead EA to “fail”, I’d cite “low or stagnant growth,” which is something I think about a lot, inside and outside of work.

"Low or stagnant growth" is less worrying to me because that's something you can always experiment or change course on, if you find yourself facing that problem. In other words you can keep trying until you get it right. With cancel culture though, if you don't get it right the first time (i.e., you allow cancel culture to take over) then it seems very hard to recover.

I know some of the aforementioned people have read this discussion, and I may send it to others if I see additional movement in the “cancel culture” direction.

Thanks for this information. It does make it more understandable why you're personally not focusing on this problem. I still think it should be on or near the top of your mind too though, especially as you think about and discuss related issues like this particular cancellation of Robin Hanson.

Habryka @ 2020-08-30T22:44 (+39)

You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section feel that some people might feel alienated and unwelcome by the presence of Robin as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.

But isn't it basically impossible to build an intellectually diverse community out of people who are unwilling to be associated with people they find offensive or substantially disagree with? It seems really clear that if Speakers B and C avoid talking to you only because you associated with Speaker A, then they are following a strategy of generally not being willing to engage with parties that espouse ideas they find offensive, which makes it really hard to create any high level of diversity out of people who follow that strategy (since they will either conform or splinter).

That is why it's so important to not give in to those people's demands, because building a space where lots of interesting ideas are considered is incompatible with having lots of people who stop engaging with you whenever you believe anything they don't like. I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.

Obviously this is oversimplified, but I think the general gist of the argument carries a lot of weight.

Ben_West @ 2020-08-31T16:56 (+5)

I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.  

I'd be curious how many people you think are not willing to "tolerate real intellectual diversity". I'm not sure if you are saying  

  • "Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it's worth the cost," or  
  • "Anyone who is upset by intellectual diversity isn't someone we want to attract anyway, so losing them isn't a real cost."

(Presumably you are saying something between these two points, but I'm not sure where.)

Habryka @ 2020-08-31T17:30 (+21)

No, what I am saying is that unless you want to also enforce conformity, you cannot have a large community of people with different viewpoints who also all believe that you shouldn't associate with people they think are wrong. So the real choice is not between "having all the people who think you shouldn't associate with people who think they are wrong" and "having all the weird intellectually independent people"; it is instead between "having an intellectually uniform and conformist slice of the people who don't want to be associated with others they disagree with" and "having a quite intellectually diverse crowd of people who are tolerating dissenting opinions", with the second possibly actually being substantially larger, though generally I don't think size is the relevant constraint to look at here.

AGB @ 2020-09-01T00:23 (+22)

I think you're unintentionally dodging both Aaron's and Ben's points here, by focusing on the generic idea of intellectual diversity and ignoring the specifics of this case. It simply isn't the case that disagreeing about *anything* can get you no-platformed/cancelled/whatever. Nobody seeks 100% agreement with every speaker at an event; for one thing, that sounds like a very dull event to attend! But there are specific areas people are particularly sensitive to, this is one of them, and Aaron gave a stylised example of the kind of person we can lose here immediately after the section you quoted. That person really doesn't sound like what you're talking about.

> A German survivor of sexual abuse is interested in EA Munich's events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on "gentle silent rape" and find it viscerally unpleasant. They've seen other discussion spaces where ideas like Hanson's were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.

Like Ben, I understand you as either saying that this person is sufficiently uncommon that their loss is worth the more-valuable conversation, or that we don't care about someone who would distance themselves from EA for this reason anyway (it's not an actual 'loss'). And I'm not sure which it is or (if the first) what percentages you would give.

Habryka @ 2020-09-01T02:51 (+19)

The thing that I am saying is that in order to make space for someone who tries to enforce such norms, we would have to kick many other people out of the community, and stop many others from joining. It is totally fine for people to not attend events if they just happen to hit on a topic that they are sensitive to, but for someone to completely disengage from a community and avoid talking to anyone in that community because a speaker at some event had some opinions that they were sensitive to, opinions that weren't even the topic of the announced talk, is obviously going to exert substantial pressure on what kind of discourse is possible with them.

This doesn't seem to fit nicely into the dichotomy you and Ben are proposing here, which just has two options: 

1. They are uncommon

2. They are not valuable

I am proposing a third option which is: 

3. They are common and potentially valuable on their own, but also they impose costs on others that outweigh the benefits of their participation, and that make it hard to build an intellectually diverse community out of people like that. And it's really hard to integrate them into a discourse that might come to unintuitive conclusions if they systematically avoid engaging with any individuals that have expressed any ideas at some point in their public history that they are particularly sensitive to.

It seems to me that the right strategy to run if you are triggered by specific topics is to simply avoid engaging with those topics (if you really have no way of overcoming your triggeredness, or if doing so is expensive), but it seems very rarely the right choice to avoid anyone who ever has said anything public about the topic that is triggering you! It seems obvious how that makes it hard for you to be part of an intellectually diverse community.

AGB @ 2020-09-02T13:26 (+15)

[EDIT: As Oli's next response notes, I'm misinterpreting him here. His claim is that the movement would be overall larger in a world where we lose this group but correspondingly pick up others (like Robin, I assume), or at least that the direction of the effect on movement size is not obvious.]

***

Thanks for the response. Contrary to your claim that you are proposing a third option, I think your (3) cleanly falls under Ben's and my first option, since it's just a non-numeric write-up of what Ben said:

Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it's worth the cost

I assume you would give different percentages, like 30% and 2x or something, but the structure of your (3) appears identical.

***

At that point my disagreement with you on this specific case becomes pretty factual; the number of sexual abuse survivors is large, my expected percentage of them that don't want to engage with Robin Hanson is high, the number of people in the community with on-the-record statements or behaviour that are comparably or more unpleasant to those people is small, and so I'm generally willing to distance from the latter in order to be open to the former. That's from a purely cold-blooded 'maximise community output' perspective, never mind the human element.

Other than that, I have a number of disagreements with things you wrote, and for brevity I'm not going to go through them all; you may assume by default that everything you think is obvious I do not think is obvious. But the crux of the disagreement is here, I think:

it seems very rarely the right choice to avoid anyone who ever has said anything public about the topic that is triggering you

I disagree with the non-hyperbolic version of this, and think it significantly underestimates the extent to which someone repeatedly saying or doing public things that you find odious is a predictor of them saying or doing unpleasant things to you in person, in a fairly straightforward 'leopards don't change their spots' way.

I can't speak to the sexual abuse case directly, but if someone has a long history of making overtly racist statements I'm not likely to attend a small-group event that I know they will attend, because I put high probability that they will act in an overtly racist way towards me and I really can't be bothered to deal with that. I'm definitely not bringing my children to that event. It's not a matter of being 'triggered' per se, I just have better things to do with my evening than cutting some obnoxious racist down to size. But even then, I'm very privileged in a number of ways and so very comfortable defending my corner and arguing back if attacked; not everybody has (or should have) the ability and/or patience to do that.

There's also a large second-order effect that communities which tolerate such behaviour are much more likely to contain other individuals who hold those views and merely haven't put them in writing on the internet, which increases the probability of such an experience considerably. Avoidance of such places is the right default policy here, at an individual level at least.

Habryka @ 2020-09-02T18:41 (+11)

No. How does my (3) match up to that option? The thing I am saying is not that we will lose 95% of the people; the thing I am saying is that we are going to lose a large fraction of people either way, and that the world where you have tons of people who follow the strategy of distancing themselves from anyone who says things they don't like is a world where you both won't have a lot of people and will have tons of polarization and internal conflict.

How is your summary at all compatible with what I said, given that I explicitly said: 

with the second (the one where we select on tolerance) possibly actually being substantially larger

That by necessity means that I expect the strategy you are proposing to not result in a larger community, at least in the long run. We can have a separate conversation about the exact balance of tradeoffs here, but please recognize that I am not saying the thing you are summarizing me as saying. 

I am specifically challenging the assumption that this is a tradeoff of movement size, using some really straightforward logic of "if you have lots of people who have a propensity to distance themselves from others, they will distance themselves and things will splinter apart". You might doubt that such a general tendency exists, you might doubt that the inference here is valid and that there are ways to keep such a community of people together either way, but in either case, please don't claim that I am saying something I am pretty clearly not saying.

AGB @ 2020-09-03T15:30 (+18)

Thank you for explicitly saying that you think your proposed approach would lead to a larger movement size in the long run; I had missed that. Your actual self-quote is an extremely weak version of this, since 'this might possibly actually happen' is not the same as explicitly saying 'I think this will happen'. The latter certainly does not follow from the former 'by necessity'.

Still, I could have reasonably inferred that you think the latter based on the rest of your commentary, and should at least have asked if that is in fact what you think, so I apologise for that and will edit my previous post to reflect the same.

That all said, I believe my previous post remains an adequate summary of why I disagree with you on the object level question.

Habryka @ 2020-09-03T17:44 (+14)

Your actual self-quote is an extremely weak version of this, since 'this might possibly actually happen' is not the same as explicitly saying 'I think this will happen'. The latter certainly does not follow from the former 'by necessity'.

Yeah, sorry, I do think the "by necessity" was too strong. 

ofer @ 2020-08-29T12:41 (+19)

You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you

As an aside, if hosting Speaker A is a substantial personal risk to the people who need to decide whether to host Speaker A, I expect the decision process to be biased against hosting Speaker A (relative to an ideal EA-aligned decision process).

Aaron Gertler @ 2020-09-02T00:42 (+8)

I agree with this. 

Had EA Munich hosted Hanson and then been attacked by people using language similar to that of the critics in Hanson's Twitter thread, I might well have written a post excoriating those people for being uncharitable. I would prefer if we maintained a strong norm of not creating personal risks for people who have to handle difficult questions about speech norms (though I acknowledge that views on which questions are "difficult" will vary, since different people find different things obviously acceptable/unacceptable).

Linch @ 2020-08-30T09:57 (+20)

On a meta-level, the attitude we have towards "cancellation from a public event" is fairly weird. If EA Munich had simply chosen not to host Hanson's talk to begin with, we almost certainly wouldn't be having this discussion. Having instead chosen to host a talk and then changed their minds, they now face lots of handwringing and have prompted a larger EA internet conversation.

This feels structurally similar to what Jai calls "The Copenhagen Interpretation of Ethics," though of course it is not exactly the same.

I don't quite understand this asymmetry (though I too feel a similar draw to think/opine in great detail about the "withdraw an event" case, but not the "didn't choose to hold an event" case). But in terms of first-order outcomes, they seem quite similar*!

*They're of course not identical; for example, asking someone to give a talk and then changing your mind is professionally discourteous, can waste the speaker's preparation time, etc. But I think to first order, the lack of professional courtesy and the (say) 2 hours of wasted time are quite small compared to the emotional griping we've had.

Ben Pace @ 2020-08-30T18:12 (+17)

It sends public signals that you'll submit to blackmail and that you think people shouldn't affiliate with the speaker. The former has strong negative effects on others in EA because they'll face increased blackmail threats, and the latter has negative effects on the speaker and their reputation, which in turn makes it less likely for interesting speakers to want to speak with EA because they expect EA will submit to blackmail about them if any online mob decides to put their crosshairs on that speaker today.

Gregory_Lewis @ 2020-08-30T23:34 (+11)

Talk of 'blackmail' (here and elsethread) is substantially missing the mark. To my understanding, there were no 'threats' being acquiesced to here.

If some party external to the Munich group pressured them into cancelling the event with Hanson (and without this, they would want to hold the event), then the standard story of 'if you give in to the bullies you encourage them to bully you more' applies.

Yet unless I'm missing something, the Munich group changed their minds of their own accord, and not in response to pressure from third parties. Whether or not that was a good decision, it does not signal they're vulnerable to 'blackmail threats'. If anything, they've signalled the opposite by not reversing course after various folks castigated them on Twitter etc.

The distinction between 'changing our minds on the merits' and 'bowing to public pressure' can get murky (e.g. public outcry could genuinely prompt someone to change their mind that what they were doing was wrong after all, but people will often say this insincerely when what really happened is they were cowed by opprobrium). But again, the apparent absence of people pressuring Munich to 'cancel Hanson' makes this moot.

(I agree with Linch that the incentives look a little weird here, given that if Munich had found out about work by Hanson they deemed objectionable before they invited him, they presumably would not have invited him and none of us would be any the wiser. It's not clear "Vet more carefully so you don't have to rescind invitations to controversial speakers (with attendant internet drama) rather than not inviting them in the first place" is the lesson folks would want drawn from this episode.)

Habryka @ 2020-08-30T23:58 (+30)

Having participated in a debrief meeting for EA Munich, my assessment is indeed that one of the primary reasons the event was cancelled was fear of disruptors showing up at the event, as they have done for some of Peter Singer's events. Indeed, almost all concerns that were brought up during that meeting were concerns about external parties threatening EA Munich, or EA at large, in response to inviting Hanson. There were some minor concerns about Hanson's views qua his views alone, but basically all organizers who spoke at the debrief I was part of said that they were interested in hearing Robin's ideas and would have enjoyed participating in an event with him, and were primarily worried about how others would perceive it and react to inviting him.

As such, blackmail feels like a totally fair characterization of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).

More importantly, I am really confused why you would claim so confidently that no threats were made. The prior for actions like this being taken in response to implicit threats is really high, and talking to any person who has tried to organize events like this will show you that they have experienced implicit or explicit threats of some form or another. In this situation there was also absolutely not an "apparent absence of people pressuring Munich to 'cancel Hanson'". There was indeed an abundance of threats that were readily visible to anyone looking at the current public intellectual climate, talking to people who are trying to organize public discourse, and just seeing how many other people are being actively punished on social media and other places for organizing events like this.

While I don't think this had substantial weight in this specific decision, there was also one very explicit threat made to the organizers at EA Munich, at least if I remember correctly, of an organization removing their official affiliation with them if they were to host Hanson. The organizers assured others at the debrief that this did not play a substantial role in their final decision, but it does at least show that explicit threats were made.

Max_Daniel @ 2020-08-31T11:24 (+52)

I found it valuable to hear information from the debrief meeting, and I agree with some of what you said - e.g. that it a priori seems plausible that implicit threats played at least some role in the decision. However, I'm not sure I agree with the extent to which you characterize the relevant incentives as threats or blackmail.

I think this is relevant because talk of blackmail suggests an appeal to clear-cut principles like "blackmail is (almost) always bad". Such principles could ground criticism that's independent from the content of beliefs, values, and norms: "I don't care what this is about, structurally your actions are blackmail, and so they're bad."

I do think there is some force to such criticism in cases of so-called deplatforming including the case discussed here. However, I think that most conflict about such cases (between people opposing "deplatforming" and those favoring it) is not explained by different evaluations of blackmail, or different views on whether certain actions constitute blackmail. Instead, I think they are mostly garden-variety cases of conflicting goals and beliefs that lead to a different take on certain norms governing discourse that are mostly orthogonal to blackmail. I do have relevant goals and beliefs as well, and so do have an opinion on the matter, but don't think it's coming from a value-neutral place.

So I don't think there's one side either condoning blackmail or being unaware it's committing blackmail versus another condemning it. I think there's one side who wants a norm of having an extremely high bar for physically disrupting speech in certain situations versus another who wants a norm with a lower bar, one side who wants to treat issues independently versus one who wants to link them together, etc. - And if I wanted to decide which side I agree with in a certain instance, I wouldn't try to locate blackmail (not because I don't think blackmail is bad but because I don't think this is where the sides differ), I'd ask myself who has goals more similar to mine, and whether the beliefs linking actions to goals are correct or not: e.g., what consequences would it have to have one norm versus the other, how much do physical disruptions violate 'deontological' constraints and are there alternatives that wouldn't, would or wouldn't physically disrupting more speech in one sort of situation increase or decrease physical or verbal violence elsewhere, etc.

Below I explain why I think blackmail isn't the main issue here.

--

I think a central example of blackmail as the term is commonly used is something like

Alice knows information about Bob that Bob would prefer not to be public. Alice doesn't independently care about Bob or who has access to this information. Alice just wants generic resources such as money, which Bob happens to have. So Alice tells Bob: "Give me some money or I'll disclose this information about you."

I think some features that contribute to making this an objectionable case of blackmail are:

  • Alice doesn't get intrinsic value from the threatened action (and so it'll be net costly to Alice in isolation, if only because of opportunity cost).
  • There is no relationship between the content of the threat or the threatened action on one hand, and Alice's usual plans or goals.
  • By the standards of common-sense morality, Bob did not deserve to be punished (or at least not as severely) and Alice did not deserve gains because of the relevant information or other previous actions.

Similar remarks apply to robbing at knifepoint or kidnapping.

Do they also apply to the actions you refer to as threats to EA Munich? You may have information suggesting they do, and in that case I'd likely agree they'd be commonly described as threats. (Only "likely" because new information could also update my characterization of threats, which was quite ad hoc.)

However, my a priori guess would be that the alleged threats in the EA Munich case exhibited the above features to a much smaller extent. (In particular the alleged threat of disaffiliation, less but still substantially so threats of disrupting the event.) Instead, I'd mostly expect things like:

  • Group X thinks that public appearances by Hanson are a danger to some value V they care about (say, gender equality). So in some sense they derive intrinsic value from reducing the number of Hanson's public appearances.
  • A significant part of Group X's mission is to further value V, and they routinely take other actions for the stated reason to further V.
  • Group X thinks that according to moral norms (that are either already in place or Group X thinks should be in place) Hanson no longer deserves to speak publicly without disruptions.

To be clear, I think the difference is gradual rather than black-and-white, and that I imagine in the EA Munich case some of these "threat properties" were present to some extent, e.g.:

  • Group X doesn't usually care about the planned topic of Hanson's talk (tort law).
  • Whether or not Group X agrees, by the standards of common-sense morality and widely shared norms, it is at least controversial whether Hanson should no longer be invited to give unrelated talks, and some responses such as physically disrupting the talk would arguably violate widely shared norms. (Part of the issue is that some of these norms are contested itself, with Group X aiming to change them and others defending them.)
  • Possibly some groups Y, Z, ... are involved whose main purpose is at first glance more removed from value V, but these groups nevertheless want to further their main mission in ways consistent with V, or they think it's useful to signal they care about V either intrinsically or as a concession to perceived outside pressure.

To illustrate the difference, consider the following hypotheticals, which I think would be referred to as blackmail/threats much less readily, or not at all, by common standards. If we abstract away from the content of values and beliefs, then I expect the alleged threats to EA Munich to in some ways be more similar to those, and some to overall be quite similar to the first:

The Society for Evidence-Based Medicine has friendly relations and some affiliation with the Society of Curious Doctors. Then they learn that the Curious Doctors plan to host a talk by Dr. Sanhon on a new type of scalpel to be used in surgery. However, they know that Dr. Sanhon has in the past advocated for homeopathy. While this doesn't have any relevance to the topic of the planned talk, they have been concerned for a long time that hosting pro-homeopathy speakers at universities provides a false appearance of scientific credibility for homeopathy, which they believe is really harmful and antithetical to their mission of furthering evidence-based medicine. They didn't become aware of a similar case before, so they don't have a policy in place for how to react; after an ad-hoc discussion, they decide to inform the Curious Doctors that they plan to [disrupt the talk by Sanhon / remove their affiliation]. They believe the responses they've discussed would be good to do anyway if such talks happen, so they think of their message to the Curious Doctors more as an advance notice out of courtesy rather than as a threat.

Alice voted for Republican candidate R. Nashon because she hoped they would lower taxes. She's otherwise more sympathetic to Democratic policies, but cares most about taxation. Then she learns that Nashon has recently sponsored a tax increase bill. She writes to Nashon's office that she'll vote for the Democrats next time unless Nashon reverses his stance on taxation.

A group of transhumanists is concerned about existential risks from advanced AI. If they knew that no-one was going to build advanced AI, they'd happily focus on some of their other interests such as cryonics and life extension research. However, they think there's some chance that big tech company Hasnon Inc. will develop advanced AI and inadvertently destroy the world. Therefore, they voice their concerns about AI x-risk publicly and advocate for AI safety research. They are aware that this will be costly to Hasnon, e.g. because it could undermine consumer trust or trigger misguided regulation. The transhumanists have no intrinsic interest in harming Hasnon, in fact they mostly like Hasnon's products. Hasnon management invites them to talks with the aim of removing this PR problem and understands that the upshot of the transhumanists' position is "if you continue to develop AI, we'll continue to talk about AI x-risk".

ragyo_odan_kagyo_odan @ 2020-09-01T11:25 (+14)

talk of blackmail suggests an appeal to clear-cut principles like "blackmail is (almost) always bad"

One ought to invite a speaker who has seriously considered the possibility that blackmail might be good in certain circumstances, written blog posts about it etc.

https://www.overcomingbias.com/2019/02/checkmate-on-blackmail.html

Julia_Wise @ 2020-09-01T14:41 (+35)

there was also one very explicit threat made to the organizers at EA Munich, at least if I remember correctly, of an organization removing their official affiliation with them if they were to host Hanson.

If I were reading this and didn't know the facts, I would assume the organization you're referring to might be CEA. I want to make clear that CEA didn't threaten EA Munich in any way. I was the one who advised them when they said they were thinking of canceling the event, and I told them I could see either decision being reasonable. CEA absolutely would not have penalized them for continuing with the event if that's how they had decided.

Habryka @ 2020-09-01T17:57 (+18)

Yes! This was definitely not CEA. I don't have any more info on what organization it is (the organizers just said "an organization").

Julia_Wise @ 2020-09-03T14:59 (+14)

Sorry, didn't mean to imply that you intended this - just wanted to be sure there wasn't a misunderstanding.

DanielFilan @ 2020-09-02T02:32 (+12)

FYI, I read this, didn't know the facts, and it didn't occur to me that the organisation Habryka was referring to was CEA - I think my guess was that it was maybe some other random student group?

Linch @ 2020-09-02T20:46 (+6)

It didn't occur to me that the organization was CEA but I also didn't read it too carefully.

Gregory_Lewis @ 2020-09-02T12:34 (+17)

As such, blackmail feels like a totally fair characterization [of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).]

As your subsequent caveat implies, whether blackmail is a fair characterisation turns on exactly how substantial this part was. If in fact the decision was driven by non-blackmail considerations, the (great-)grandparent's remarks about it being bad to submit to blackmail are inapposite.

Crucially, (q.v. Daniel's comment), not all instances where someone says (or implies), "If you do X (which I say harms my interests), I'm going to do Y (and Y harms your interests)" are fairly characterised as (essentially equivalent to) blackmail. To give a much lower resolution of Daniel's treatment, if (conditional on you doing X) it would be in my interest to respond with Y independent of any harm it may do to you (and any coercive pull it would have on you doing X in the first place), informing you of my intentions is credibly not a blackmail attempt, but a better-faith "You do X then I do Y is our BATNA here, can we negotiate something better?" (In some treatments these are termed warnings versus threats, or using terms like 'spiteful', 'malicious' or 'bad faith' to make the distinction).

The 'very explicit threat' of disassociation you mention is a prime example of 'plausibly (/prima facie) not-blackmail'. There are many credible motivations to (e.g.) renounce (or denounce) a group which invites a controversial speaker you find objectionable, independent of any hope that threatening this will make them ultimately resile from running the event after all. So too 'trenchantly criticising you for holding the event', 'no longer supporting your group', 'leaving in protest (and encouraging others to do the same)', etc. Any or all of these might be wrong for other reasons - but (again, per Daniel) 'they're trying to blackmail us!' is not necessarily one of them.

(Less-than-coincidentally, the above are also acts of protest which are typically considered 'fair game', versus disrupting events, intimidating participants, campaigns to get someone fired, etc. I presume neither of us takes the various responses made to the NYT when they were planning to write an article about Scott to be (morally objectionable) attempts to blackmail them, even if many of them can be called 'threats' in natural language).

Of course, even if something could plausibly not be a blackmail attempt, it may in fact be exactly this. I may posture that my own interests would drive me to Y, but I would privately regret having to 'follow through' with this after X happens; or I may pretend my threat of Y is 'only meant as a friendly warning'. Yet although our counterparty's mind is not transparent to us, we can make reasonable guesses.

It is important to get this right, as the right strategy for dealing with threats is a very wrong one for dealing with warnings. If you think I'm trying to blackmail you when I say "If you do X, I will do Y", then all the usual stuff around 'don't give in to the bullies' applies: by rebuffing my threat, you deter me (and others) from attempting to bully you in future. But if you think I am giving a good-faith warning when I say this, it is worth looking for a compromise. Being intransigent as a matter of policy - at best - means we always end up at our mutual BATNAs even when there were better-for-you negotiated agreements we could have reached.
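To make the asymmetry concrete, here is a minimal sketch with entirely made-up payoff numbers (nothing here comes from the actual EA Munich situation): in this toy model, only the blackmailer's incentive to follow through depends on your policy, so refusing comes out ahead against threats but behind against warnings.

```python
# A toy illustration of the threat-vs-warning distinction discussed above.
# All payoffs are invented for illustration only.
#
# The counterparty says: "If you do X, I will do Y."
# - THREAT: carrying out Y is costly to them in isolation; its only value is coercion.
# - WARNING: carrying out Y serves their interests regardless of how you respond.

def counterparty_does_y(kind: str, you_refuse: bool) -> bool:
    """Will the counterparty carry out Y once you have committed to doing X?"""
    if kind == "warning":
        return True  # Y is independently worthwhile to them, so they do it anyway
    # For a threat, following through is costly to them; against a party known
    # never to give in, carrying it out buys nothing, so assume they don't bother.
    return not you_refuse

def your_payoff(kind: str, policy: str) -> int:
    """Your payoff from doing X under a 'refuse' or 'negotiate' policy (toy numbers)."""
    value_of_x = 10        # what doing X is worth to you
    cost_of_y = 8          # harm to you if Y is actually carried out
    concession_cost = 4    # what you give up by compromising instead
    if policy == "negotiate":
        return value_of_x - concession_cost
    # policy == "refuse": do X, never give in, and absorb Y if it happens
    return value_of_x - (cost_of_y if counterparty_does_y(kind, you_refuse=True) else 0)

for kind in ("threat", "warning"):
    for policy in ("refuse", "negotiate"):
        print(f"{kind:7} / {policy:9}: {your_payoff(kind, policy)}")
```

With these invented numbers, misreading a warning as a threat costs you (payoff 2 instead of 6), and misreading a threat as a warning also costs you (6 instead of 10) — which is the asymmetry the paragraph above is pointing at.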

At worst, it may induce me to make the symmetrical mistake - wrongly believing your behaviour is in bad faith: that your real reasons for doing X, and for being unwilling to entertain the idea of compromise to mitigate the harm X will do to me, are that you're actually 'out to get me'. Game theory will often recommend retaliation as a way of deterring you from doing this again. So the stage is set for escalating conflict.

Directly: Widely across the comments here you have urged that charity and good faith be extended in evaluating Hanson's behaviour which others have taken exception to - that adverse inferences (beyond perhaps "inadvertently causes offence") are not only mistaken but often indicate a violation of discourse norms vital for EA-land to maintain. I'm a big fan of extending charity and good faith in principle (although perhaps putting this into practice remains a work in progress for me). Yet you mete out much more meagre measure to others than you demand from them in turn, endorsing fervid hyperbole that paints those who expressed opposition to Munich inviting Hanson as bullies trying to blackmail them, and those sympathetic to the decision they made as selling out. Beyond this being normatively unjust, it is also prudentially unwise - presuming bad faith in those who object to your actions is a recipe for making a lot of enemies you didn't need to, especially in already-fractious intellectual terrain.

You could still be right - despite the highlighted 'very explicit threat' which is also very plausibly not blackmail, despite the other 'threats' alluded to which seem also plausibly not blackmail and 'fair game' protests for them to make, and despite what the organisers have said (publicly) themselves, the full body of evidence should lead us to infer what really happened was bullying which was acquiesced to. But I doubt it.

Habryka @ 2020-09-02T19:15 (+11)

I agree that the right strategy to deal with threats is substantially different than the right strategy to deal with warnings. I think it's a fair and important point. I am not claiming that it is obvious that absolutely clear-cut blackmail occurred, though I think overall, aggregating over all the evidence I have, it seems very likely (~85%-90%) to me that a situation game-theoretically similar enough to a classical blackmail scenario has played out. I do think your point about how important it is to correctly assess whether we are dealing with a warning or a threat is a good one, and is one of the key pieces I would want people to model when thinking about situations like this, and so your relatively clear explanation of that is appreciated (as well as the reminder for me to keep the costs of premature retaliation in mind).

Yet you mete out much more meagre measure to others than you demand from them in turn, endorsing fervid hyperbole that paints those who expressed opposition to Munich inviting Hanson as bullies trying to blackmail them, and those sympathetic to the decision they made as selling out.

This just seems like straightforward misrepresentation? What fervid hyperbole are you referring to? I am trying my best to make relatively clear and straightforward arguments in my comments here. I am not perfect and sometimes will get some details wrong, and I am sure there are many things I could do better in my phrasing, but nothing that I wrote on this post strikes me as being deserving of the phrase "fervid hyperbole". 

I also strongly disagree that I am applying some kind of one-sided charity to Hanson here. The only charity that I am demanding is to be open to engaging with people you disagree with, and to be hesitant to call for the cancellation of others without good cause. I am not even demanding that people engage with Hanson charitably. I am only asking that people do not deplatform others based on implicit threats by some other third party they don't agree with, and do not engage in substantial public attacks in response to long-chained associations removed from denotative meaning. I am quite confident I am not doing that here.

Of course, there are lots of smaller things that I think are good for public discourse that I am requesting in addition to this, but I think overall I am running a strategy that seems quite compatible to me with a generalizable maxim that, if followed, would result in good discourse, even with others who substantially disagree with me. Of course, that maxim might not be obvious to you, and I take concerns of one-sided charity seriously, but after having reread every comment of mine on this post in response to this comment, I can't find any place where such an accusation of one-sided charity fits my behavior.

That said, I prefer to keep this at the object level, at least given that the above really doesn't feel like it would start a productive conversation about conversation norms. But I hope it is clear that I disagree strongly with that characterization of me.

You could still be right - despite the highlighted 'very explicit threat' which is also very plausibly not blackmail, despite the other 'threats' alluded to, which also seem plausibly not blackmail but rather 'fair game' protests for them to make, and despite what the organisers have said (publicly) themselves, the full body of evidence should lead us to infer that what really happened was bullying which was acquiesced to. But I doubt it.

That's OK. We can read the evidence in separate ways. I've been trying really hard to understand what is happening here, have talked to the organizers directly, and am trying my best to build models of what the game-theoretically right response is. I expect if we were to dig into our disagreements here more, we would find a mixture of empirical disagreements, and some deeper disagreements about when something constitutes blackmail, or something game-theoretically equivalent. I don't know which direction would be more fruitful to go into. 

Misha_Yagudin @ 2020-08-30T21:49 (+3)

We probably wouldn't know, and hence the issue wouldn't get discussed.

It is plausible that if someone had made it widely known that they decided not to invite a speaker based on similar considerations, that could have been discussed as well, since I expect "X is deplatformed by Y" to provoke a similar response to "X is canceled by Y" among people who care about the incident.

I am not sure it is a case of The Copenhagen Interpretation of Ethics, as I doubt the people who are arguing against it would think that the decision is an improvement upon the status quo.

Linch @ 2020-09-02T22:44 (+2)

Hmm, in my parent comment I said "structurally similar, though of course it is not exactly the same", which means I'm not defending the claim that it's exactly such a case. However, upon a reread I actually think considering it a noncentral example is not too far off. I think the following (the primary characterization of the Copenhagen Interpretation of Ethics) is a fairly accurate representation:

when you observe or interact with a problem in any way, you can be blamed for it.

However, it does not fulfill the secondary constraints Jai lays out:

At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster.

In this case, by choosing to invite a speaker and then (privately) cancelling the invitation, they've indeed made the situation worse by a) wasting Hanson's time and b) mildly degrading professional norms.

But that level of badness seems on the whole pretty mediocre/mundane to first order.

Milan_Griffes @ 2020-08-29T17:05 (+18)
Some of Hanson’s writing has probably been, on net, detrimental to his own influence...

https://xkcd.com/137

strangepoop @ 2020-08-31T21:03 (+17)

I know that this is probably about clearly illustrating the emotional impetus behind one viewpoint, but I can't get on board with people going "fuck that shit" at difficult tradeoffs.

Ozzie Gooen @ 2020-09-01T08:30 (+15)

I think this is a complex issue, and a confident stance would require a fair bit of time of investigation.

I don't like the emotional hatred going on on both sides. I'd like to see a rational and thoughtful debate here, not a moralistic one. I don't want to be part of a community where people are colloquially tarred and feathered for making difficult decisions. I could imagine that many of us may wind up in similar positions one day. 

So I'd like discussion of Robin Hanson to be done thoughtfully, and also discussions of EA Munich to be done thoughtfully. 

The [Twitter threads](https://twitter.com/pranomostro1/status/1293267131270864903) seem like a mess to me. There are a few thoughtful comments, but tons of misery (especially from anonymous accounts and the like). I guess this is one thing that Twitter is just naturally quite poor at. 

There are a lot of hints that the EA Munich team is exhausted by the response:

"Note that this document tries to represent the views of 8 different people on a controversial topic, compiled within a couple of hours, and is therefore necessarily simplifying."

"Because we're kind of overwhelmed with the situation, we won't be able to respond to your comments. We understand this is frustrating (especially if you think we have done a bad job), but we're only volunteers doing this in our free time."

"I'm personally pretty happy with many people applying Hanlon's razor here to analyze our decision. 'I disagree with you/you made the wrong choice' is much better than 'you're a bad person and harbour ill will against X'"

This is like that point in the movie where someone on one side does something really stupid and causes actual violence.

I imagine EA will face much bigger challenges of similar types in the future, so we should get practice in handling them well. 

Ozzie Gooen @ 2020-09-01T08:32 (+13)

To be more clear, I think the snarky comments on Twitter on both sides are a pretty big anti-pattern and should be avoided. They sometimes get lots of likes, which is particularly bad. 

Denise_Melchin @ 2020-09-02T15:13 (+42)

I certainly agree that it would be great if the debate was thoughtful on all sides. But I am reluctant to punish emotional responses in these contexts.

When I look at this thread, I see a lack of women participating. Exceptions: Khorton, and Julia clarifying a CEA position. There were also a couple of people whose gender I could not quickly identify.

There are various explanations for this. I am not sure the gender imbalance on this thread is actually worse than on other threads. It could be noise. But I know why I said nothing: I found writing a thoughtful, non-emotional response too hard. I expect to fail because the subject is too upsetting.

This systematically biases the debate in favour of people who bear no emotional cost in participating.

DanielFilan @ 2020-09-02T21:22 (+17)

In the 'Recent Discussion' feed of the front page of the EA forum, I found this page between Owen Cotton-Barratt's AMA and this question about insights in longtermist macrostrategy. The AMA had 9 usernames that appeared male to me, no usernames that appeared female to me, and 3 usernames whose gender I couldn't discern. The macrostrategy discussion had 12 names that appeared male to me, 1 that I gathered was female based on this comment, and 3 whose gender I couldn't discern. This should obviously be taken with a grain of salt, since determining gender from usernames is a tricky business.

anon_account @ 2020-09-02T21:54 (+12)

Interesting, and thanks, Denise, for a different take. When I read Ozzie's comment, I thought he meant that the people leaping to Robin's defense should consider that they might be over-emotional, chill out a bit, and practice their rationality skills. Which, I would agree with. I don't think there's *no* concern that reasonable people could have here. I can think of several concerns, some of which have been pointed out in the comments on this post. But I think people who are freaked out by this one decision seem just as likely to be reacting with the kind of knee-jerk fear, tribalism, confirmation bias, and slippery-slope thinking that they'd be quick to criticize in others. This is human, but honestly, it's disappointing. I'm appreciating the more measured responses on this post, though there's still some catastrophizing that seems kind of tiresome. There's so much of that going around in the world; I'd like to see EAs or rationalists handle it better.

Ozzie Gooen @ 2020-09-02T15:57 (+11)

Thanks for the points, Denise; well taken.

I think the issue of "how rational vs. emotional should we aim to be in key debates" (assuming there is some kind of clean distinction) is quite tricky.

Some quick thoughts, which might be wrong:
1. I'm also curious to better understand why there isn't more discussion by women here. I could imagine a lot of possible reasons for this. It could be that people don't feel comfortable providing emotional responses, but it could also be that people notice that responses on the other side are so emotional that there may be severe punishment.
2. Around the EA community and on Twitter, I see many more emotional-seeming arguments in support of Robin Hanson than against him. Twitter is really the worst at this.
3. Courts have established procedures for ensuring that both judges and the juries are relatively unbiased, fair, and (somewhat) rational. There's probably some interesting theory here we could learn from.
4. I could imagine a bunch of scary situations where important communication gets much more emotional. If they get less emotional, it's trickier to tell. I like to think that rationally minded people could help seek out biases like the one you mention and respond accordingly, instead of having to modify a large part of the culture to account for it.

Khorton @ 2020-09-02T17:58 (+9)

"Courts have established procedures for ensuring that both judges and the juries are relatively unbiased, fair, and (somewhat) rational. There's probably some interesting theory here we could learn from."

In this analogy, I don't feel like I'm commenting as a rational member of the jury, I feel like I'm commenting as an emotional witness to the impact of tolerating sexist speech in the EA community.

Ozzie Gooen @ 2020-09-02T19:07 (+4)

Yeah, I think the court analogy doesn't mean we should all aim to be "rational", but that some of the key decision-makers and discussions should be held to a standard. Having others come in as emotional witnesses makes total sense, especially if it's clear that's what's happening. 

ragyo_odan_kagyo_odan @ 2020-08-31T22:17 (+14)
The more time someone spends talking to a variety of community members (and potential future members), the more likely they are to have an accurate view of which norms will best encourage the community’s health and flourishing.

Correctness is not a popularity contest; it feels like this is an intellectual laundering of groupthink. Also, if you promote a particular view, that *changes* who is going to be a member of the community in the future, as well as who is excluded.

For example, the EA community has decided to exclude Robin Hanson and be more inclusive towards Slate journalists and people who like the opinions of Slate; this defines a future direction for the movement, rather than causing a fixed movement to either flourish or not.

Aaron Gertler @ 2020-09-02T00:59 (+5)

Correctness is not a popularity contest.

This isn't at all what I was trying to say. Let me try to restate my point: 

"If you want to have an accurate view of people say will help them flourish in the community, you're more likely to achieve that by talking to a lot of people in the community."

Of course, what people claim will help them flourish may not actually help them flourish, but barring strong evidence to the contrary, it seems reasonable to assume some correlation. If members of a community differ on what they say will help them flourish, it seems reasonable to try setting norms that help as many community members as possible (though you might also adjust for factors like members' expected impact, as when 80,000 Hours chooses a small group of people to advise closely).

*****

Whether EA Munich decides to host a Robin Hanson talk hardly qualifies as "the EA community deciding to exclude Robin Hanson and being more inclusive towards Slate journalists," save in the sense that what eight people in one EA group do is a tiny nudge in some direction for the community overall. In general, the EA community tends to treat journalists as a dangerous element, to be managed carefully if they are interacted with at all. 

For example, the response to Scott Alexander's near-doxxing (which drew much more attention than the Hanson incident) was swift, decisive, and near-unified in favor of protecting speech and unorthodox views from those who threatened them. To me, that feels much more representative of the spirit of EA than the actions of, again, a single group (who were widely criticized afterward, and didn't get much public support).

ragyo_odan_kagyo_odan @ 2020-09-02T14:32 (+1)

If it's only a tiny nudge, why are we talking about it?

Why is it important for a teacher to give a harsh detention to the first student who challenges their authority, or for countries to defend their borders strictly rather than let it slide if someone encroaches just a few kilometres?

An expectation is being set here. Worse, an expectation has been set that threats of protest are a legitimate way to influence decision-making in our community. You have ceded authority to Slate by obeying their smear-piece on Hanson. Hanson is one of our people; you left him hanging in favour of what Slate thought.

EA people are, IMO, being naïve.

Aaron Gertler @ 2020-09-03T00:51 (+6)

If it's only a tiny nudge, why are we talking about it?

I'm talking about something I considered a tiny nudge because I thought that a lot of people, including people who are pretty influential in communities I care about, either reacted uncharitably or treated the issue as a much larger deal than it was. 

You have ceded authority to Slate by obeying their smear-piece on Hanson. Hanson is one of our people, you left him hanging in favour of what Slate thought.

To whom is "you" meant to refer? I don't work on CEA's community health team and I've never been in contact with EA Munich about any of this. 

I also personally disagreed with their decision and (as I noted in the post) thought the Slate piece was really bad. But my disagreeing with them doesn't mean I can't try to think through different elements of the situation and see it through the eyes of the people who had to deal with it.

ragyo_odan_kagyo_odan @ 2020-09-11T11:49 (+3)

I think the issue here is attempting to unilaterally disarm in a culture war. If your attitude is "let's think through different elements of the situation and see it through the eyes of the people", and their attitude is "let's use the most effective memetic superweapons we have access to in order to destroy everyone we disagree with", then you're going to lose and they are going to win.

Aaron Gertler @ 2020-09-14T06:51 (+12)

A stark conclusion of "you're going to lose" seems like it's updating too much on a small number of examples. 

For every story we hear about someone being cancelled, how many times has such an attempt been unsuccessful (no story) or even led to mutual reconciliation and understanding between the parties (no story)? How many times have niceness, community, and civilization won out over opposing forces?

(I once talked to a professor of mine at Yale who was accused by a student of sharing racist material. It was a misunderstanding. She resolved it with a single brief email to the student, who was glad to have been heard and had no further concerns. No story.)

I'm also not sure what your recommendation is here. Is it "refuse to communicate with people who espouse beliefs of type X"? Is it "create a centralized set of rules for how EA groups invite speakers"?

SamuelKnoche @ 2020-08-28T17:59 (+14)

Thanks for writing this post. I'm glad this incident is getting addressed on the EA forum. I agree with most of the points being made here.

However, I'm not sure that 'becoming more attentive to various kinds of diversity' and maintaining norms that allow for 'the public discussion of ideas likely to cause offense' have to be at odds. In mainstream political discourse it often sounds like this is the case; however, I would like to think that EA might be able to balance these two concerns without making any significant concessions.

The reason I think this might be possible is because discussions among EAs tend to be more nuanced than most mainstream discourse, and because I expect EAs to argue in good faith and to be well intentioned. I find that EA concerns often transcend politics, and so I would expect two EAs with very different political views to be able to have more productive discussions on controversial topics than two non-EAs.

Aaron Gertler @ 2020-08-28T18:23 (+4)

I find that EA concerns often transcend politics, and so I would expect two EAs with very different political views to be able to have more productive discussions on controversial topics than two non-EAs.

I think this is true, but even if EA discussion might be more productive, I still think trade-offs exist in this domain. Given that the dominant culture in many intellectual spaces holds that public discussion of certain views is likely to cause harm to people, EA groups risk appearing very unwelcoming to people in those spaces if they support discussion of such views. 

It may be worthwhile to have these discussions anyway, given all the benefits that come with more open discourse, but the signal will be sent all the same.

Denise_Melchin @ 2020-09-09T08:37 (+13)

I would really appreciate it if commentators were more careful to speak about this specific instance of uninviting a speaker rather than about uninviting speakers in general, or at least to clarify why they chose to speak about the general case.

I am not sure whether they chose to speak about the general case because they think that uninviting in this particular case would in itself be an appropriate choice but sets up a slippery slope towards uninviting more and more speakers, or because they think that uninviting in this particular case is already net negative for the movement.

Khorton @ 2020-09-10T23:20 (+2)

I've also wondered about this.

mawa @ 2020-08-28T12:51 (+3)

Thanks for writing up your thoughts on the incident and showing that much respect to both sides of the argument!

I'm a bit confused about the last parts (7. and 8.):
1. Would a rephrasing of 8. as "Some of the people who spent a lot of time having private conversations with community members think that EA should be more cautious and attentive to diversity. And some of them don't. So we can't actually draw conclusions from this." be fair?
2. By whom is EA presented as some kind of restrictive orthodoxy? So far, I have not gotten the impression that it is presented as such.

What do you think are the main trade-offs to be made in making EA more attentive to diversity? That there are more impactful causes to invest time and effort in? Or would making EA more attentive to diversity actually have potential negative effects? Is there a good write-up of these trade-offs?

Aaron Gertler @ 2020-08-28T18:12 (+9)
  1. Yes, you could rephrase it that way. I've spoken directly to the people who think we should be more cautious/attentive, but only heard secondhand from them about the people who think this is a bad idea (and have talked to lots of community members about these topics -- I've met people with views all over the spectrum who haven't had as many such conversations).
  2. I was referring mostly to the comments that popped up in the various Twitter threads surrounding the decision, one of which I linked at the top of the piece. A few quotes along these lines:

"Effective altruism has been shown to be little more than the same old successor-ideology wearing rationalism as a skin-suit."

"They believe they are in a war and the people like Hanson are the enemy."

"If EA starts worrying about PR and being inoffensive, what even is the point anymore? Make EA about EA, not about signaling."

"There always was something 'off' about so-called effective altruism."

Some of these types of comments probably come from people who never liked or cared about EA much and are just happy to have something to criticize. But I sometimes see similar remarks from people who are more invested in EA and seem to think it's become much more censorious over time. While there is some truth to that (as I mention in the piece), I think the overall picture is much more complicated than these kinds of claims make it out to be.

*****

Regarding trade-offs, that would be a much longer post. You could check the "Diversity and Inclusion" tag, which includes some Forum posts along similar themes. Kelsey Piper's writing on "competing access needs" is also relevant.