Avoiding Munich's Mistakes: Advice for CEA and Local Groups

By Larks @ 2020-10-14T17:08 (+186)

If all mankind minus one, were of one opinion, and only one person were of contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind.
We strive to base our actions on the best available evidence and reasoning about how the world works. We recognise how difficult it is to know how to do the most good, and therefore try to avoid overconfidence, to seek out informed critiques of our own views, to be open to unusual ideas, and to take alternative points of view seriously. ...
We are a community united by our commitment to these principles, not to a specific cause. Our goal is to do as much good as we can, and we evaluate ways to do that without committing ourselves at the outset to any particular cause. We are open to focusing our efforts on any group of beneficiaries, and to using any reasonable methods to help them. If good arguments or evidence show that our current plans are not the best way of helping, we will change our beliefs and actions.

Introduction

This post argues that Cancel Culture is a significant danger to the potential of the EA project, discusses the mistakes made by EA Munich and CEA in their deplatforming of Robin Hanson, and provides advice on how to avoid such issues in the future.

As ever, I encourage you to use the navigation pane to jump to the parts of the article that are most relevant to you. In particular, if you are already convinced, you might skip the 'examples' and 'quotes' sections.

Background

The Nature of Cancel Culture

In the past couple of years, much damage has been done to the norms around free speech and inquiry, in substantial part due to what is often called cancel culture. Of relevance to the EA community is the increasing number of highly public threats and attacks on scientists and public intellectuals: researchers are harassed online, disinvited from conferences, have their papers retracted, and are fired, because mass online mobs react to an accusation over a minor choice of wording on topics of race, gender, and other issues of identity, or to guilt-by-association with other people who have already been attacked by such mobs.

This is colloquially called ‘cancelling’, after the hashtags that have formed saying #CancelX or #xisoverparty, where X is some person, company or other entity, hashtags which are commonly trending on Twitter.

While such mobs cannot attack every person who speaks in public, they can attack any person who speaks in public, leading to chilling effects where nobody wants to talk about the topics that can lead to cancelling.

Cancel Culture essentially involves the following steps:

  1. A victim, often a researcher, says or does something that irks someone online.
  2. This critic then harshly criticises the person using attacks that are hard to respond to in our culture - the accusation of racism is a common one. The goal of this attack is to signal to a larger mob that they should pile on, with the hope of causing massive damage to the person’s private and professional lives.
  3. Many more people then join in the attack online, including (often) contacting their employer.
  4. People who defend the victim are attacked as also being guilty of a similar crime.
  5. Seeing this dynamic, many associates of the victim prefer to sever their relationship, rather than be subject to this abuse. This may also include their employer, for whom the loss of one employee seems a relatively small cost for maintaining PR.
  6. The online crowd may swiftly move on; however, the victim now lives under a cloud of suspicion that is hard to displace and can permanently damage their career and social life.
  7. Other researchers, observing this phenomenon, choose to remain silent on issues they think may draw the attention of such cancel mobs.

It’s certainly the case that such a pattern of behaviour existed before, but the issue seems to have become significantly worse in recent years.

Examples

There have been many examples of this form of abuse in recent months. Below I’ve included quotes illustrating a few, but I encourage the interested reader to research more themselves. If you’re already aware of these cases, especially since the first one is already so prominent in our community, feel free to skip to the section titled ‘Cancel Culture is Harmful for EA’.

One disadvantage of these examples is that they show only the tip of the iceberg. They can show us the cases where someone was forced into a humiliating apology, or fired from their job, but they cannot show us the massively greater cost of everyone who self-censored out of fear. Those who write on this subject invariably seem to receive a slew of grateful communications from academics who were too afraid to speak out themselves.

Scott Alexander

Scott Alexander is one of the most skilled commentators of our age, with a gift for insightful and always generous commentary, as well as a close ally of the EA movement. He has written hugely influential posts on a wide range of topics, including identifying Motte and Bailey arguments, Moloch, reason, the bizarre world of IRBs, why debates focus on the worst possible cases, hierarchies of intellectual contrarianism, cost disease, and he was early to the replication crisis. And of course, the Biodeterminist’s Guide to Parenting.

One offshoot of this was the Culture-War threads on his associated subreddit, designed to segregate Culture War type discussions from the other comment sections on his blog. While he didn’t directly participate very much, and largely handed off moderation to others, it not only achieved its primary goal (keeping Culture War out of most of his comment sections), but also produced some very valuable discussion:

Thanks to a great founding population, some very hard-working moderators, and a unique rule-set that emphasized trying to understand and convince rather than yell and shame, the Culture War thread became something special. People from all sorts of political positions, from the most boring centrists to the craziest extremists, had some weirdly good discussions and came up with some really deep insights into what the heck is going on in some of society’s most explosive controversies. For three years, if you wanted to read about the socialist case for vs. against open borders, the weird politics of Washington state carbon taxes, the medieval Rule of St. Benedict compared and contrasted with modern codes of conduct, the growing world of evangelical Christian feminism, Banfield’s neoconservative perspective on class, Baudrillard’s Marxist perspective on consumerism, or just how #MeToo has led to sex parties with consent enforcers dressed as unicorns, the r/SSC culture war thread was the place to be. I also benefited from its weekly roundup of interesting social science studies and arch-moderator baj2235’s semi-regular Quality Contributions Catch-Up Thread.

The users of these threads, as with the rest of his blog and the wider EA ecosystem, skewed left-wing (as is shown by the multiple extensive surveys of his readers, with thousands of users filling them out annually). Despite this ground truth, to some people it felt right-wing:

I acknowledge many people’s lived experience that the thread felt right-wing; my working theory is that most of the people I talk to about this kind of thing are Bay Area liberals for whom the thread was their first/only exposure to a space with any substantial right-wing presence at all, which must have made it feel scarily conservative. This may also be a question of who sorted by top, who sorted by new, and who sorted by controversial. In any case, you can just read the last few threads and form your own opinion.

Open discussion of controversial topics naturally leads to some controversial opinions. Naturally, these are the ones your opponents choose to highlight, so soon they run the risk of dominating your reputation - or at least, your reputation among people who aren’t ‘woke’ to the dangers of cancel culture:

It doesn’t matter if taboo material makes up 1% of your comment section; it will inevitably make up 100% of what people hear about your comment section and then of what people think is in your comment section. Finally, it will make up 100% of what people associate with you and your brand. The Chinese Robber Fallacy is a harsh master; all you need is a tiny number of cringeworthy comments, and your political enemies, power-hungry opportunists, and 4channers just in it for the lulz can convince everyone that your entire brand is about being pro-pedophile, catering to the pedophilia demographic, and providing a platform for pedophile supporters. And if you ban the pedophiles, they’ll do the same thing for the next-most-offensive opinion in your comments, and then the next-most-offensive, until you’ve censored everything except “Our benevolent leadership really is doing a great job today, aren’t they?” and the comment section becomes a mockery of its original goal.

This led to a narrative that his blog was somehow ‘alt-right’:

People settled on a narrative. The Culture War thread was made up entirely of homophobic transphobic alt-right neo-Nazis. … [I]t was always that the thread was “dominated by” or “only had” or “was an echo chamber for” homophobic transphobic alt-right neo-Nazis, which always grew into the claim that the subreddit was dominated by homophobic etc neo-Nazis, which always grew into the claim that the SSC community was dominated by homophobic etc neo-Nazis, which always grew into the claim that I personally was the most homophobic etc neo-Nazi of them all.

Despite this being clearly false:

I freely admit there were people who were against homosexuality in the thread (according to my survey, 13%), people who opposed using trans people’s preferred pronouns (according to my survey, 9%), people who identified as alt-right (7%), and a single person who identified as a neo-Nazi (who as far as I know never posted about it). … I am a pro-gay Jew who has dated trans people and votes pretty much straight Democrat. I lost distant family in the Holocaust. You can imagine how much fun this was for me.

This led to him being subjected to vicious abuse:

Some people found my real name and started posting it on Twitter. Some people made entire accounts devoted to doxxing me in Twitter discussions whenever an opportunity came up. A few people just messaged me letting me know they knew my real name and reminding me that they could do this if they wanted to.

A common strategy is to try to poison the victim’s relationships with real-life friends:

Some people started messaging my real-life friends, telling them to stop being friends with me because I supported racists and sexists and Nazis. Somebody posted a monetary reward for information that could be used to discredit me.

And to get someone fired:

One person called the clinic where I worked, pretended to be a patient, and tried to get me fired.

In this case, he did not end up fired. ‘All’ that happened was that he suffered a nervous breakdown and closed down one of the most popular parts of his site (though it was somewhat reborn under new leadership elsewhere).

The one positive element of this sorry story is that Scott, a devotee of truth to the end, wrote up the story as a cautionary tale:

Fifth, if someone speaks up against the increasing climate of fear and harassment or the decline of free speech, they get hit with an omnidirectional salvo of “You continue to speak just fine, and people are listening to you, so obviously the climate of fear can’t be too bad, people can’t be harassing you too much, and you’re probably just lying to get attention.” But if someone is too afraid to speak up, or nobody listens to them, then the issue never gets brought up, and mission accomplished for the people creating the climate of fear. The only way to escape the double-bind is for someone to speak up and admit “Hey, I personally am a giant coward who is silencing himself out of fear in this specific way right now, but only after this message”. This is not a particularly noble role, but it’s one I’m well-positioned to play here, and I think it’s worth the awkwardness to provide at least one example that doesn’t fit the double-bind pattern.

David Shor

David Shor was a political scientist working for a left-wing political consultancy, which analysed data to try to help Democrat politicians win elections in the US. On May 28th he tweeted a link to an academic paper that argued that while non-violent protests pushed voters to support the Democrats, violent protests pushed them towards the Republicans, saying

Post-MLK-assasination race riots reduced Democratic vote share in surrounding counties by 2%, which was enough to tip the 1968 election to Nixon. Non-violent protests *increase* Dem vote, mainly by encouraging warm elite discourse and media coverage.

This swiftly led to many heavily critical and aggressive tweets. To illustrate a typical exchange, I will quote one critic, Trujillo Wesler, at length:

Yo. Minimizing black grief and rage to "bad campaign tactic for the Democrats" is bullshit most days, but this week is absolutely cruel.
This take is tone deaf, removes responsibility for depressed turnout from the 68 Party and reeks of anti-blackness.

Shor earnestly replied:

The mechanism for the paper isn’t turnout, it’s violence driving news coverage that makes people vote for Republicans. The author does a great job explaining his research here: <link>

Trujillo Wesler replied:

Do you think I didn't read the paper and know what I was talking about when calling out your callousness?
I think Omar's analysis is sloppy and underwhelming, but that's not the point.
YOU need to stop using your anxiety and "intellect" as a vehicle for anti-blackness

… before then tagging the CEO of Shor’s company:

@danrwagner Come get your boy.

The next day Shor apologised, and then a few days later he signed a non-disclosure agreement with the company and was fired solely as a result of the tweet.

This story received a lot of press at the time; see for example this article for more details.

Steven Hsu

Steven Hsu is a physics professor at Michigan State University, where he has worked on a wide variety of projects, including advanced genetics work: his team developed novel techniques to predict adult height very accurately from DNA, as well as risk for a variety of illnesses. Of note for EAs, he cofounded Genomic Prediction, the first company (to my knowledge) offering consumer embryo selection - a technology whose potential has been of great interest to EAs.

As well as holding a tenured professorship, he also had an administrative role as Senior Vice President. In June the student union started to agitate for his firing:

The concerns expressed by the Graduate Employees Union ... and other individuals familiar with Hsu indicates an individual that cannot uphold our University Mission or our commitment to Diversity, Equity, and Inclusion. Given this discordance with university values, Stephen Hsu should not be privileged with the power and responsibility of recruiting and funding scholars, overseeing ethical conduct, or coordinating graduate study.
By signing this open letter we ask MSU to follow through to its commitment to be a diverse and inclusive institution and to change its institutional and administrative practices so that the passion and talent of Black scholars, Indigenous scholars, and other scholars of color (BIPOC) can be recognized and fostered within these university halls.

A second letter advocating for his dismissal came out, which among other things highlighted his work on embryo selection:

Hsu also appears to be dabbling in eugenics through his beliefs that embryos may be selected on the basis of genetic intelligence.

One might have thought that academic freedom would permit a professor who studied genetics to hold such views, but that was not the case:

Not only do these views ignore the copious social science research on social determinants of intelligence and accomplishments, therefore rendering them suspect in a scholarly sense, it is also deeply disturbing that someone whose role is to allocate funding and provide authoritative input in decisions regarding promotion and tenure cases for faculty in a diverse institution should hold such beliefs.

In a Twitter thread, they argued that his scientific work was bad because, they claimed, if true it would have undesirable political consequences:

Hsu has also entertained and hosted views arguing that racial underperformance in colleges is related to *lack of segregation* in education and flaws in multiculturality, undercutting the basis of Brown v. Board of education.

Similarly, he was accused of supporting the use of standardised tests to measure cognitive ability:

Hsu is against removing standardized tests like the GRE & SAT because he believes they measure cognitive ability & that lack of Black & Hispanic representation in higher ed reflects lower ability, despite evidence these tests negatively impact diversity.

For brevity’s sake I shan’t quote everything they accused him of, but one common thread is the suggestion that his scientific views, or at least caricatures of them, were wrong because, if true, they would contradict the (extreme) political views of the authors, and that because of this he could not be trusted to direct university resources.

A counter-protest letter was organised, with a large number of very prominent signatories, arguing that his professional conduct had been flawless and that he had been badly misrepresented:

The charges of racism and sexism against Dr. Hsu are unequivocally false and the purported evidence supporting these charges ranges from innuendo and rumor to outright lies. (See attached letters for details.) We highlight that there is zero concrete evidence that Hsu has performed his duties as VP in an unfair or biased manner. Therefore, removing Hsu from his post as VP would be to capitulate to rumor and character assassination.

Alas, this was not enough, as the president of his university soon asked him to resign from the role:

President Stanley asked me this afternoon for my resignation. I do not agree with his decision, as serious issues of Academic Freedom and Freedom of Inquiry are at stake. I fear for the reputation of Michigan State University.
However, as I serve at the pleasure of the President, I have agreed to resign. I look forward to rejoining the ranks of the faculty here.

Emmanuel Cafferty

Emmanuel Cafferty was an ordinary utility worker in Southern California, who was tricked into making an ‘ok’ sign as he drove home one day:

At the end of a long shift mapping underground utility lines, he was on his way home, his left hand casually hanging out the window of the white pickup truck issued to him by the San Diego Gas & Electric company. When he came to a halt at a traffic light, another driver flipped him off. … He flashed what looked to Cafferty like an “okay” hand gesture and started cussing him out. When the light turned green, Cafferty drove off, hoping to put an end to the disconcerting encounter.
But when Cafferty reached another red light, the man, now holding a cellphone camera, was there again. “Do it! Do it!” he shouted. Unsure what to do, Cafferty copied the gesture the other driver kept making. The man appeared to take a video, or perhaps a photo.

Unfortunately, this is now considered by some to be a white supremacist sign (though there is no evidence he was aware of this fact):

Two hours later, Cafferty got a call from his supervisor, who told him that somebody had seen Cafferty making a white-supremacist hand gesture, and had posted photographic evidence on Twitter. (Likely unbeknownst to most Americans, the alt-right has appropriated a version of the “okay” symbol for their own purposes because it looks like the initials for “white power”; this is the symbol the man accused Cafferty of making when his hand was dangling out of his truck.)

Despite the fact that he is 75% Latin American by ancestry, after a series of people called his employer to demand that he be fired, his employer duly caved:

Dozens of people were now calling the company to demand Cafferty’s dismissal … By the end of the call, Cafferty had been suspended without pay. By the end of the day, his colleagues had come by his house to pick up the company truck. By the following Monday, he was out of a job.

More details are available in many places, including here.

James Bennet

After widespread rioting, on June 3rd the NYT published an op-ed by Tom Cotton, an influential Republican Senator, arguing that the military should be used to restore order:

The pace of looting and disorder may fluctuate from night to night, but it’s past time to support local law enforcement with federal authority.

He was careful to distinguish between peaceful protesters and violent rioters:

[T]he rioting has nothing to do with George Floyd, whose bereaved relatives have condemned violence. On the contrary, nihilist criminals are simply out for loot and the thrill of destruction, with cadres of left-wing radicals like antifa infiltrating protest marches to exploit Floyd’s death for their own anarchic purposes.

And he noted that a majority of voters agreed that this was a good idea:

Not surprisingly, public opinion is on the side of law enforcement and law and order, not insurrectionists. According to a recent poll, 58 percent of registered voters, including nearly half of Democrats and 37 percent of African-Americans, would support cities’ calling in the military to “address protests and demonstrations” that are in “response to the death of George Floyd.”

Whether or not one agrees with the opinion, this seems squarely within the realm of typical op-ed pieces. However, many NYT employees objected, in private and in public, tweeting things like:

A parade of Times journalists tweeted a screen shot showing the headline of Cotton's piece, "Send In the Troops," with the accompanying words: "Running this puts Black @NYTimes staff in danger."

This language is very clever, because complaints about workplace safety enjoy special legal protections that merely political objections would not, however implausible the safety claim is.

Initially the editor in charge, James Bennet, defended the piece:

Times Opinion owes it to our readers to show them counter-arguments, particularly those made by people in a position to set policy.

Shortly afterwards, he was forced to resign, and was replaced with a new editor who made clear that such offensive content would not be tolerated.

More details here and here.

Several months later the NYT published another op-ed, this time by a senior Hong Kong politician defending the territory's controversial new security law. In most objective ways it is far more objectionable than Cotton’s piece - the law is extremely draconian, including criminalising conduct all over the world - yet to my knowledge no editor has been made to resign.

Greg Patton

Greg Patton is a professor of business communication at USC. As an expert in Mandarin, he used a Chinese example to illustrate the role of words like ‘um’ and ‘err’:

He also tries to mix in culturally diverse examples. When he talks about the importance of pausing, for instance, he notes that other languages have equivalent filler words. Because he taught in the university’s Shanghai program for years, his go-to example is taken from Mandarin: nèige (那个). It literally means “that,” but it’s also widely used in the same way as um.

Unfortunately, when spoken out loud this sounds similar to a slur in US English. As a result a group of students complained to the administrators:

[A] group of students sent an email to business-school administrators saying they were “very displeased” with the professor. They accused Patton of “negligence and disregard” and deemed the Mandarin example “grave and inappropriate.” They referenced the killings of George Floyd and Breonna Taylor. “Our mental health has been affected,” they wrote. “It is an uneasy feeling allowing him to have power over our grades. We would rather not take this course than to endure the emotional exhaustion of carrying on with an instructor that disregards cultural diversity and sensitivities and by extension creates an unwelcome environment for us Black students.” The email is signed “Black MBA Candidates c/o 2022.”

Despite counter-complaints by Chinese alumni, who felt that their language was being insulted,

As the story made its way into the Chinese news media, and onto the social network Weibo, it was met with disbelief and anger. A letter signed by more than 100 mostly Chinese alumni of the business school avers that the “spurious charge has the additional feature of casting insult toward the Chinese language.”

The administrator responsible issued an apology for Greg’s conduct:

Dean Garrett emailed the M.B.A. Class of 2022 to let them know that another professor would take over. It was, he wrote, “simply unacceptable for faculty to use words in class that can marginalize, hurt and harm the psychological safety of our students.” He went on to say that he was “deeply saddened by this disturbing episode that has caused such anguish and trauma,” but that “[w]hat happened cannot be undone.”

Greg also apologised:

Patton wrote a 1,000-word email to the Marshall Graduate Student Association in which he offered a “deep apology for the discomfort and pain that I have caused members of our community.”

Nonetheless, Greg was made to step down from teaching the course.

More details here and here.

Cancel Culture is Harmful for EA

On many subjects EAs rightfully attempt to adopt a nuanced opinion, carefully and neutrally comparing the pros and cons, and only in the conclusion adopting a tentative, highly hedged, extremely provisional stance. Alas, this is not such a subject.

The rise of cancel culture is a threat to honest intellectual inquiry - a core part of the EA project. The silencing effect - whereby seeing some poor soul being destroyed makes other people keep quiet in self-preservation - deters people from exploring new and controversial areas. Yet it has been a consistent trend in EA thought that exactly this process of intellectual groundbreaking is vital to the EA project. EA arose out of dissatisfaction with the existing state of affairs: dissatisfaction with people’s unwillingness to share with those in need, dissatisfaction with poor epistemic standards. EA was born out of powerful critiques of these things - critiques which were, and still are, highly controversial.

I think it is easy for newcomers now, joining a movement that should get a lot of credit for professionalising over time, to not realise quite how chaotic things were in the early days.

Earning To Give - an idea which, while less central than it once was, is still a key part of EA - was extremely controversial, especially when pitched in the early days as “Who is more moral: Doctors or Bankers?” If 80k had given in to the very offended people, we would have lost an important part of the movement - and if we had abandoned the progenitors of the concept as ‘too controversial’, we would have lost individuals who are now highly respected leaders of the movement.

Similarly the early GiveWell was no stranger to saying highly controversial and offensive things. They proudly criticised existing charities, even though many people argued that doing so would deter donors from their entire sector, preventing innocent lives from being saved. I think everyone reading this would agree it is good that we have stuck with them!

This drive to fearlessly explore the unknown is even more important when we move outside of global health and into Longtermism. It is only after many, many years that we have finally figured out a respectable-sounding way of pitching many of the longtermist ideas - a situation we would not be in if we had shunned the earlier, more controversial versions. It is natural that someone, discovering a new vista of intellectual possibilities for the first time, should take a strong stance. Only by doing so can they fully explore, and only by doing so can they properly show to others why this is a fertile region for their own energies. Later, more cautious thinkers can refine the early work of these pioneers and make it more legible to the mainstream. It is easy now for us to read early Xrisk writings and cringe, but Rome was not built in a day, and would not have been built at all if we had shunned the founders for their many controversies.

Similarly, the field of animal welfare, another core EA concern, is rife with controversies. To many animal rights activists, factory farming is literally the worst thing in the history of the world. Comparisons with the Holocaust jump to their minds - both because of the nature of the activities and also because of the colossal scope of the harm. Yet to the ordinary person, and here I count myself somewhat, what could be more offensive than to compare a lovely chicken sandwich to genocide? Similarly, EAs have pushed forward the frontiers of animal welfare work, investigating invertebrate welfare and wild animal suffering. Some of the ideas being suggested, like major ecosystem redesigning, are controversial and extreme to say the least! Yet I am sure the reader is glad that we have not shunned these people.

One way of thinking about the EA approach to charity is that people should do two things: give a larger amount, and give more intelligently. However, over time we have come to appreciate that the potential of the latter is far higher than that of the former. The average American already gives over 2% of their income to charity, and it’s hard for most people to double their income even if they really try, so realistically there is scope for at most an order of magnitude increase or so. In contrast, we know there are many orders of magnitude difference in effectiveness between charities, even within one cause area.
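To make the rough arithmetic behind this decomposition concrete, here is a minimal sketch; every figure in it is an illustrative assumption rather than data from any study:

```python
# Back-of-the-envelope comparison of "give more" vs "give better".
# All figures below are illustrative assumptions, not empirical claims.

income = 50_000          # hypothetical annual income ($)
typical_rate = 0.02      # ~2% of income, roughly the average giving rate mentioned above
generous_rate = 0.20     # an unusually generous rate, ten times the typical one

typical_charity = 1      # arbitrary baseline units of good per dollar
great_charity = 1_000    # a charity assumed ~1000x more effective than the baseline

baseline = income * typical_rate * typical_charity
give_more = income * generous_rate * typical_charity
give_better = income * typical_rate * great_charity

print(f"Giving more:   {give_more / baseline:.0f}x the baseline impact")    # ~10x
print(f"Giving better: {give_better / baseline:.0f}x the baseline impact")  # ~1000x
```

Even on made-up numbers like these, the headroom from giving better dwarfs the headroom from giving more, which is the point of the decomposition above.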

We can use a similar decomposition for the development of the EA movement. We can grow by attracting new members, which is definitely valuable (so long as it doesn’t introduce value drift), but growing along this axis is difficult. We have already identified the easiest recruiting groups - elite universities - and I think it is fair to say it will be quite difficult to add an additional counterfactual order of magnitude in this way. Additionally, many of the people we recruit will be coming from similar communities, so the value of acquiring them is only the incremental value that the EA movement adds over their prior activities.

New prioritisation research, in contrast, offers vast potential for improvement. Not only was it the source of the 1000x multiplier within global health charities, it is what causes us to focus on third world health over the US in the first place - a huge improvement, but not an uncontroversial one. And outside of global health the gains have been even larger, potentially even flipping the sign of wildlife conservation, and offering us the entire Longtermist agenda.

Additionally, I think that intellectually daring cause prioritisation research is probably beneficial, on net, for attracting new members. Is it possible some people will be put off? Of course - practically anything you do will annoy some people, at the same time that it attracts others. It’s no secret that EA has grown in no small part by posing intellectual challenges to highly intellectual people and drawing them in. We are nerd-sniping: offering the chance to discuss some of the most important issues in the world with some of the most intelligent people in the world, for the low, low price of 10% of your future income. It is no surprise that we draw heavily from academic philosophy departments and tech companies, and appeal to some hyper-analytical millionaires and billionaires.

If there truly are no new intellectual worlds left to conquer, only steady refinements to our existing mechanisms, then perhaps it would not be so harmful to cancel our pioneers. Ungrateful, perhaps, but if their work is done, could EA enter a chrysalis of cancellation, to emerge a fashionable and unimpeachably moderate movement, fully in sync with the moral fashions of the current year? Yet I see no reason to think that the consistent history of controversial ideas proving vital to the progression of the movement is over. The possibility of another crucial consideration being discovered, which transforms our understanding on an important topic and better guides our actions towards the good, is too important and too likely to be set aside.

This is EA’s unique contribution. Without the pioneering cause prioritisation, without the courage to ask important questions that none have asked before, we add almost nothing to the global charitable landscape. Only by offering something different and new - and in my opinion much, much better - is the EA project worthwhile.

Quotes from EA Leaders on searching for new ideas

This concern for intellectual inquiry into new causes is not a niche one; hopefully this section, consisting largely of quotes from a huge variety of EA sources about the importance of exploring new intellectual areas, will show that it is a widely recognised issue. As these quotes are somewhat lengthy, if you are already convinced feel free to skip to the next section.

For example, 80k has written about the importance of investigating a wide variety of causes:

Moreover, the more people involved in a community, the more reason there is for them to spread out over different issues. ... Perhaps for these reasons, many of our advisors guess that it would be ideal if 5-20% of the effective altruism community's resources were focused on issues that the community hasn't historically been as involved in, such as the ones listed below.

Similarly, Ben Todd recently emphasised the importance of EA as an intellectual project investigating new ways of improving the world:

If anything, I’m even more convinced that the ideas are what matter most about EA, and that there should at least be a branch of EA that’s focused on being an intellectual project.

Rob recently wrote about the concentration of EAs in 'safe' topics as being a potential problem:

They feel low-risk and legitimate. People you meet can easily tell you're doing something they think is cool. And you might feel more secure that you're likely doing something useful or at least sensible.

In the EA handbook we have Kelsey on the importance of being open to, and supportive of, weird ideas:

Next, we need to be continually monitoring for signs that the things we’re doing are actually doing harm, under lots of possible worldviews. That includes worldviews that aren’t intuitive, or that aren’t the way most people think about charity. … Basically, we need to cast a really, really wide net for possible ways we’re screwing up, so that the right answer is at least available to us.
Next, imagine someone walked into that 1840s EA group and said, ‘I think black people are exactly as valuable as white people and it should be illegal to discriminate against them at all,” or someone walked into the 1920s EA group and said, “I think gay rights are really important.” I want us to be a community that wouldn’t have kicked them out. I think the principle I want us to abide by is something like ‘if something is an argument for caring more about entities who are widely regarded as not worthy of such care, then even if the argument sounds pretty absurd, I am supportive of some people doing research into it. And if they’re doing that research with the intent of increasing everyone’s well-being and flourishing as much as possible, then they’re part of our movement’. ...
I hope we have space to hear out more speculative things, and specifically to hear out (1) arguments for caring about things we wouldn’t normally think to care about, (2) arguments that our society is fundamentally and importantly wrong, and (3) arguments that we are making important mistakes.

Indeed, EAG 2018 emphasised the importance of intellectual curiosity to find a new potential 'cause X':

The key idea of EA Global: San Francisco 2018 is ‘Stay Curious’. As more people take the ideas behind effective altruism seriously, we must continue to seek new problems to work on, and be mindful that we may still be missing ‘cause X’.

And Will spoke about it at length in 2016:

Given this, what we should be thinking about is: What are the sorts of major moral problems that in several hundred years we'll look back and think, "Wow, we were barbarians!"? What are the major issues that we haven't even conceptualized today?

It also features in CEA's Guiding Principles:

We are a community united by our commitment to these principles, not to a specific cause. Our goal is to do as much good as we can, and we evaluate ways to do that without committing ourselves at the outset to any particular cause. We are open to focusing our efforts on any group of beneficiaries, and to using any reasonable methods to help them.

The Guiding Principles even discuss the need to be open to weird ideas:

We recognise how difficult it is to know how to do the most good, and therefore try to avoid overconfidence, to seek out informed critiques of our own views, to be open to unusual ideas, and to take alternative points of view seriously.

The more classically minded reader might appreciate the wisdom of John Stuart Mill, one of the founders of Utilitarianism:

A state of things in which a large portion of the most active and inquiring intellects find it advisable to keep the general principles and grounds of their convictions within their own breasts, and attempt, in what they address to the public, to fit as much as they can of their own conclusions to premises which they have internally renounced, cannot send forth the open, fearless characters, and logical, consistent intellects who once adorned the thinking world. The sort of men who can be looked for under it, are either mere conformers to commonplace, or time-servers for truth, whose arguments on all great subjects are meant for their hearers, and are not those which have convinced themselves. Those who avoid this alternative, do so by narrowing their thoughts and interest to things which can be spoken of without venturing within the region of principles, that is, to small practical matters, which would come right of themselves, if but the minds of mankind were strengthened and enlarged, and which will never be made effectually right until then: while that which would strengthen and enlarge men’s minds, free and daring speculation on the highest subjects, is abandoned.

Indeed, suppressing an idea is harmful for those who believe it and those who do not:

“[T]he peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.”

This topic has been discussed on the EA forum, including highly upvoted posts like this one:

EA is a nascent field; we should expect over time our understanding of many things to change dramatically, in potentially unpredictable ways. This makes banning or discouraging topics, even if they seem irrelevant, harmful, because we don’t know which could come to be important.
Fortunately, there are some examples we have to make this clear. For example, Making Discussions Inclusive provides a list of things that we should not discuss (or at least we should be very wary of discussing). We will argue that there are actually very good reasons for EAs to discuss these topics. Even in cases where it would not be reasonable to dispute the statement as given, we suggest that people may often be accused of rejecting these statements when they actually believe something much more innocent.

and this one, comparing recent trends in the US to the Cultural Revolution in China:

If the United States were to experience a cultural revolution-like event, it would likely affect nearly all areas of impact that effective altruists care about, and would have profound effects on our ability to produce free open-ended research on controversial issues. Given that many of the ideas that effective altruists discuss -- such as genetic enhancement, factory farming abolition, and wild animal suffering -- are controversial, it is important to understand how our movement could be undermined in the aftermath of such an event. Furthermore, conformity pressures of the type exhibited in the Chinese cultural revolution could push important threads of research, such as AI alignment research, into undesirable directions.

This post argues against trying to fight back against cancellations, as it is expensive and risky to do so:

A friend of mine has parents who lived through the cultural revolution. At least one grandparent made a minor political misplay (his supervisor wanted him to cover up embezzling resources, he refused) and had his entire family history (including minor land ownership in an ancestor) dragged out of him. He was demoted, berated for years, had trash thrown at him etc. This seemed unfortunate, and likely limited his altruistic impact.

However, even the author of that post agreed that his original stance

As a general strategy, it seems much better for most people in the community to [...] quickly disavow any associations that could be seen as potentially problematic.

was too strong, because it is bad for team-building (quoting from a third party he agreed with):

If I expect my peers to lie or stab me in the back as soon as this seems useful to them, then I’ll be a lot less willing and able to work with them. This can lead to a bad feedback loop, where EAs distrust each other more and more as they become more willing to betray each other.
Highly knowledgeable and principled people will tend to be more attracted to groups that show honesty, courage, and integrity. There are a lot of contracts and cooperative arrangements that are possible between people who have different goals, but some level of trust. Losing that baseline level of trust can be extremely costly and cause mutually beneficial trades to be replaced by exploitative or mutually destructive dynamics.
Camaraderie gets things done. If you can create a group where people expect to have each other’s back, and expect to be defended if someone lies about them, then I think that makes the group much more attractive to belong to, and helps with important things like internal cooperation.

I also recommend Anna’s highly upvoted comment, strongly disagreeing with the post:

It seems to me that the EA community's strength, goodness, and power lie almost entirely in our ability to reason well (so as to be actually be "effective", rather than merely tribal/random). It lies in our ability to trust in the integrity of one anothers' speech and reasoning, and to talk together to figure out what's true.
Finding the real leverage points in the world is probably worth orders of magnitude in our impact. Our ability to think honestly and speak accurately and openly with each other seems to me to be a key part of how we access those "orders of magnitude of impact."

Even pro-censorship posts like this one only advocate restricting some topics from being discussed in some spaces:

We argue that being a part of an inclusive community can sometimes mean refraining from pursuing every last theory or thought experiment to its end in public places.

And even then, a highly critical comment received far more karma than the original post, as well as this excellent response.

To my knowledge no-one has argued that people should be banned just for having discussed an unrelated topic in an unrelated location! The move by EA Munich, which we will go over below, was considerably outside the Overton Window.

There is of course extensive discussion of, and brave opposition to, the problems of cancel culture outside of EA, omitted here for brevity’s sake, but one could do worse than to start with the Philadelphia Statement.

EA Munich and Robin Hanson

Robin Hanson is one of the oldest intellectual allies of the EA movement. His work has been ground-breaking on a number of topics that pertain to EA, from Signalling to the Great Filter to AGI takeoff to Prediction Markets. His blog, co-hosted for a while with Eliezer, was one of the key precursors of the EA movement. This involvement has continued over time: he has provided a steady stream of the incisive yet friendly criticism that is so vital for any intellectually sound movement, and has spoken at multiple EA Global events.

Scott Aaronson, everyone’s favourite quantum complexity theorist and author of an excellent book which I will definitely finish sometime very soon, had this to say about Robin’s intellectual virtues:

I’ve met many eccentric intellectuals in my life, but I have yet to meet anyone whose curiosity is more genuine than Robin’s, or whose doggedness in following a chain of reasoning is more untouched by considerations of what all the cool people will say about him at the other end.
So if you believe that the life of the mind benefits from a true diversity of opinions, from thinkers who defend positions that actually differ in novel and interesting ways from what everyone else is saying—then no matter how vehemently you disagree with any of his views, Robin seems like the prototype of what you want more of in academia. To anyone who claims that Robin’s apparent incomprehension of moral taboos, his puzzlement about social norms, are mere affectations masking some sinister Koch-brothers agenda, I reply: I’ve known Robin for years, and while I might be ignorant of many things, on this I know you’re mistaken. Call him wrongheaded, naïve, tone-deaf, insensitive, even an asshole, but don’t ever accuse him of insincerity or hidden agendas. Are his open, stated agendas not wild enough for you??
In my view, any assessment of Robin’s abrasive, tone-deaf, and sometimes even offensive intellectual style has to grapple with the fact that, over his career, Robin has originated not one but several hugely important ideas—and his ability to do so strikes me as clearly related to his style, not easily detachable from it.

Scott Alexander wrote this of Robin’s discernment back in the relatively early days of Effective Altruism, not that long after the name was coined:

Then Robin Hanson of Overcoming Bias got up and just started Robin Hansonning at everybody. First he gave a long list of things that people could do to improve the effectiveness of their charitable donations. Then he declared that since almost no one does any of these, people don’t really care about charity, they’re just trying to look good. Then he told the room – this beautiful room in the Faculty Club, full of sophisticated-looking charity donors who probably thought they were there to get a nice pat on the back – that they probably thought that just because they were attending an efficient charity talk they weren’t like that, but that probabilistically there was excellent evidence that they were.

Even Bryan Caplan, one of the foremost advocates of appeasement, speaks highly of Robin’s character:

Virtually everyone who knows Robin personally vouches for his sincerity and kindness.

And his intellect:

In a similar vein, since we should expect a man of Robin’s intelligence to produce a steady stream of original insight, the fact that he just unveiled yet another gem is no reason to be amazed.

Of course, in some sense this is by-the-by: even were Robin an irredeemable scoundrel, it would still be worthwhile defending him from unjust treatment. If ordinary people see that even the unpopular are defended, they can have confidence that they too are secure. In contrast, if they see that security comes only with popularity, people will be encouraged to constantly signal their in-group bona fides, and to always be watching over their shoulders in case the mob comes for them next.

The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.

Recently, EA Munich decided to deplatform Robin Hanson after inviting him to give a talk on tort reform. At the time they briefly summarised this as being due to his ‘controversial claims’; subsequently they explained themselves somewhat in a writeup, which is apparently a “pretty thorough” description of their thought process. It, along with my subsequent communications with CEA on this topic, forms the primary basis of this article.

This decision has been widely criticised, both on this website and elsewhere. I agree this decision was a very poor one, and will focus on what we can do better next time. The EA Munich team are volunteers, and I'm sure relatively junior, so I do not place too much responsibility on them, though I am extremely disappointed with the advice that came from CEA, who should know better. As such, this article lays out what I see as the main mistakes that were made, and how we can avoid making them in the future.

In his blog, Aaron suggests that there is not much more that CEA (his employer) could have done, as the decision is ultimately up to the local group. Similarly, in my communication with them, CEA repeatedly emphasised that ultimately it all comes down to the local group. Naturally, I fully agree with this - the independence of local groups is something that CEA should and must respect. But I disagree that this lets CEA off the hook. In cases where a local group comes to CEA for guidance, CEA has the obligation to provide the best possible advice, and CEA clearly failed to do so here.

Mistakes and How to Avoid Them

I realise that a generalised exhortation to resist cancel culture can be difficult to act on, especially when one is presented with plausible-seeming and highly specific considerations in the opposite direction. So in this section I will try to forensically lay out the specific mistakes that were made in this instance, and how we can avoid them in future.

Defend core EA activities

Most important is to constantly bear in mind that the purpose of local groups is not the avoidance of conflict, or minimising the number of people who are annoyed with you: it is promoting the goals and values of the Effective Altruism movement. In this case, EA Munich, and CEA's advice to them, directly undermined one of the core tenets of EA, which is the freedom and courage to investigate new potential cause areas.

As we discussed at length in the previous section, this requires that people be willing to investigate new moral issues, which are obviously going to sound weird (and potentially immoral!) to many people. To avid carnivores, the idea of investing in animal rights sounds like immorally imposing costs and restrictions on real human people, and neglecting the real problems people have that we could be solving, for the sake of … animals? But as EAs, we should push past the 'ambient sense of unease' and evaluate such new ideas logically. Even if they are poorly presented, we should be willing to steelman them and give them a fair hearing; if the idea's originator saves us this work by writing lengthy and detailed arguments in their favour, all the better.

The absolute minimum requirement is that CEA not actively undermine people who are doing this work. But really, CEA should be living up to the talk and actively supporting these people.

Robin is precisely the sort of thinker who is disproportionately likely to come up with the next Cause X. He is the intellectual father of prediction markets, a subject of immense discussion and advocacy in the EA community. He has written on the subject of human hypocrisy, and helped shed light on the very reasons that people ignore EA analysis in favour of their lower motives, and was the first to argue in the EA community for giving later instead of giving now. He wrote extensively on AI before it was a major focus of the EA community, in his debate on AI FOOM and his writing on emulations. He wrote one of the classic papers about modelling history as a series of exponential growth modes, research currently being pursued with substantial resources by Open Philanthropy’s David Roodman. Robin’s production of novel ideas has greatly exceeded that of most academics, and these ideas are typically written up in accessible blog posts. Politics isn’t about policy. Against prestige. This is the dream time. Stories are like religion. Inequality Is About Grabbing. This AI Boom Will Also Bust. If you want to know what a likely Cause X might be, a decent way to approach that question would be to start by looking at whatever Robin Hanson has been blogging about a lot.

Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person. Which, then, was the target in the cancelling of Robin, as exemplified in the Slate article? Did they correctly castigate his vice, or were they slandering his virtue?

It seems clear to me that the areas of Robin’s work referenced in the Slate article - things like his post Two Types of Envy, where he points to a perceived inconsistency in how people talk about financial and sexual inequality, and to the negative societal effects of, and mental health impacts on, large numbers of disaffected low-status males (e.g. the Incel movement) - fall firmly within the category of what we mean by ‘Cause X’ research. For those who dare to discuss this as a problem that afflicts them, society is quick to offer mockery, but almost never sympathy or solutions. Robin analyses these issues in detail, compares them to another major cause, and conducts empirical work to try to estimate their magnitude.

Now, mere indifference from EAs could be understood - many people make proposals for a Cause X, and most of them are terrible. People do not have an automatic right to a hearing, because our time is limited and our attention could be spent elsewhere. Similarly, disagreement is a perfectly reasonable response.

I can see the argument that we do not necessarily have to publicly defend everyone who is attacked unfairly, as our political and reputational capital is finite. This is a bit of a dangerous path to go down - if we do not stand up for our friends, who will stand up for us? - but it does highlight an important consideration, and I wouldn’t blame someone who took this perspective. Defending people from unfair treatment is good and virtuous, but supererogatory.

However, what I think is clear is that EA, and CEA specifically, should not treat someone worse as a result of their good faith attempts at EA prioritisation research than they would have otherwise. To violate this is a fundamental betrayal of the movement and the community. If you would have had someone speak in the absence of their innovative EA work, it is unacceptable to deplatform them in response to smears resulting from this work.

Distinguishing between truth-seeking criticism and attempted cancellations

Often, criticism is good! People are often wrong, and it is good to point this out, if possible in a sensitive fashion that is not unnecessarily nasty. But not all criticism is equal.

An example of what I think of as relatively good criticism is Alexey Guzey’s article criticising Why We Sleep. This article is good because it is logical and methodical, laying out precise arguments for why we should be sceptical of the book. It does not attempt to distort the author’s intentions; it shows that even a generous reading will find the book sadly deficient. Nor does he cherry-pick a small section; Alexey clearly explains which section of the book he focused on and why. The article does not rely on insinuation, nor does it even directly criticise the author at all. Most importantly, it aims to establish truth from falsehood.

Unfortunately, not all criticism is like this. In EA Munich’s writeup, they highlighted an article in Slate. I think it should be clear to even an uninformed reader that this piece is not in any way a fair or objective account. The Slate writer is deliberately attempting to frame Robin in the worst possible light, in an article full of innuendo and viciousness. There is no careful evaluation of Robin’s arguments - indeed, only one paragraph, towards the end of the article, even pretends to be forming a counterargument. It does not attempt a charitable reading of Robin. It willfully selects a handful of his blog posts solely to make him look as bad as possible. This is not an article that is trying to make our beliefs about the world more accurate - it is trying to belittle and humiliate someone. It is a hit piece.

One technique for doing this analysis is to examine the Fnords: if we remove the filler words from the first few lines of the Slate piece, I think it is clear what the subtext is, and how fair the article is going to be:

economist creepy libertarian-leaning professor notorious odd disconcerting socio-sexual...

I would be very surprised if any article that began with such a tone was conducive to truth-seeking - save, perhaps, as a cautionary example.
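For readers who want to try this exercise on other articles, here is a minimal sketch of the idea in Python. The filler-word list is hand-picked and the example sentence is hypothetical, not a quote from the Slate piece:

```python
# Minimal 'fnord' filter: strip neutral filler words so the emotionally
# loaded vocabulary stands out. The stop-word list and example sentence
# are illustrative assumptions, not taken from the actual article.
FILLER = {
    "the", "a", "an", "and", "or", "of", "to", "in", "on", "at", "for",
    "is", "was", "are", "were", "has", "have", "his", "her", "their",
    "that", "this", "with", "as", "by", "who", "which", "it", "its",
}

def fnords(text: str) -> str:
    # Lowercase, drop basic punctuation, and keep only non-filler words.
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return " ".join(w for w in words if w not in FILLER)

print(fnords("The economist, a creepy and notorious professor, is known for his odd, disconcerting writing."))
# -> "economist creepy notorious professor known odd disconcerting writing"
```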

So my advice is to carefully distinguish between truth-seeking criticism of someone's arguments, and social shaming and ad hominem insinuation against the person. The former is potentially very valuable; the latter… not so much. This is something individual local groups should do themselves, and an area where CEA, if it sees them faltering, can step in and gently provide guidance.

Determining the relevance of criticism

Some criticism is highly relevant. One particular thing that criticism can tell us is that a potential speaker is not as much of an expert as we thought. For example, if you were considering inviting a famous academic to tell you about the science of sleep, learning that his book was highly inaccurate is valuable, because it implies that your potential speaker is actually less knowledgeable than you assumed about the topic. If you invite him, he might tell you false things about sleep, which would frustrate your purpose of learning true things about sleep.

In contrast, some criticism is not relevant. For example, criticism is less relevant if it is concerned with a different topic. In the case of the Slate article mentioned above, the 'argument' is basically that Robin is creepy because of the topics he wrote about in some blog posts. Given that EA Munich had invited him to speak about a totally different topic, the relevance is significantly reduced. If the topic of his talk is also creepy… well, maybe you shouldn't have invited him to talk about it! Furthermore, as the Slate piece barely attempts to argue that Robin was actually mistaken, let alone systematically fraudulent in the way the Why We Sleep critique argues, it gives us little reason to doubt Robin's general intellectual calibre.

Perhaps because of the Horns Effect (mirror to the Halo Effect), it is easy to allow one problem to ‘spill over’ and affect your evaluation of a person’s other attributes, even if this is not logical. I encourage you to bear in mind that, even if someone has flaws, they may not be relevant flaws. For this we can consult no less an authority than the US President (no, the previous one):

You know this idea of purity and you're never compromised and you're always politically woke and all that stuff, you should get over that quickly. The world is messy, there are ambiguities. People who do really good stuff have flaws.

Apply your standards consistently

Rules and standards are very important for organising any sort of society. However, when applied inconsistently they can be used as a weapon to attack unpopular people while letting popular people off the hook. If you apply a standard only when external actors demand it, you are letting them control you. But by being cognizant of this, you can protect yourself.

In this case, the main reasons EA Munich gave for deplatforming Robin were that they were afraid of being associated with controversial ideas, and worried about the consequences of letting Robin talk. So the standard here seems to be that controversial ideas should be avoided.

However, just the previous month they had hosted a talk on psychedelic drugs (according to Facebook). Needless to say, psychedelic drugs are a highly controversial topic! In the US they are generally classified as Schedule I drugs, deemed to have a high potential for abuse. Possessing these drugs is in general (with very limited exceptions) a felony, with the potential for very harsh penalties. The War on Drugs is a highly political topic on which people have very strong opinions. In this case, EA Munich could have noticed that a rule against controversial topics would have excluded this previous talk, which they had been happy to let take place.

Of course, there is a big difference between the talk on psychedelics and Robin's talk, which is that the subject of Robin's talk was a totally different and, I think, unobjectionable topic (reforming tort law) - suggesting that, if anything, a greater degree of concern would have been warranted for the psychedelics talk.

So I recommend you consider the reasons being given for deplatforming a speaker, and think about whether you would really want to apply those principles in general.

Focus on getting the decision right, rather than appearances

The other reason they gave focused on the potential negative consequences of letting Robin talk. This consisted of a frankly bizarre paragraph (quoted below), suggesting that allowing Robin to give a Zoom talk to ~20-30 people, on an unrelated topic, might accidentally undo feminism and civil rights (or perhaps re-institute slavery? unclear), despite neither being his intention. I am almost loath to quote it because it seems like a strawman:

Specifically, women's rights have been suppressed for most of human history, and we believe that the rise of emancipatory women's movements has been a tremendous humanitarian achievement over the last few hundred years. Statements such as Hanson's might rekindle misogynistic sentiments and destroy some of the progress made so far, even if that is not Professor Hanson's intention. In a similar vein, we see the discussion around the tweet concerning Juneteenth. We also believe that Professor Hanson perhaps underestimates the impact of these statements.

If this was a serious concern of theirs then the nicest thing I can say is that they were hopelessly miscalibrated.

It can be difficult to tell when one's reasoning is amiss. However, I think this is where CEA could have helped. A reasonable thing to do, when learning that this was a concern, would be to gently argue that EA Munich was exaggerating the threat. The fact that they would write such an argument should have been a sign to CEA that EA Munich was not being rational, and as such CEA should have encouraged them to reconsider their decision.

Instead, apparently CEA encouraged them to reconsider the language in their justification:

Their language on him destroying the progress of feminism was originally stronger, and I suggested they tone it down.

This is, I think, extremely wrongheaded. Our objective should be to make the right decision. A public summary of a decision should contain an accurate account of the reasoning behind it. If you feel the need to 'tone down' part of it, this could be a sign you regret part of that decision… in which case you should consider changing your mind. CEA should have taken the opportunity to suggest that EA Munich had misjudged the situation, and that they should consider changing their mind.

Think about the wider impact and precedent

It is natural for the organisers of a small group to just want the whole thing to go away. They just wanted to host some nice discussion groups and tell people about AMF - they didn’t ask for any of this! In such a scenario, giving in to the pressure and disinviting the speaker seems like the easy option. Maybe it’s not the right one - EA Munich did mention they were worried about Cancel Culture, so they had some understanding of the issues - but it is at least an end to it.

It seems such considerations were high in the minds of EA Munich, who spoke of wanting to take the action that would leave the fewest people annoyed with them.

Alas, this is a very poor decision criterion. While giving in is easy, in EA we try to do what is right, and EA groups should actually try to live up to these virtues rather than ignoring them because they're hard. EA groups exist to serve the principles and objectives of the EA movement, not the convenience of the organisers. By giving in, we grant a heckler's veto to ne'er-do-wells. Every instance of backing down creates a precedent that controversial speakers should be cancelled, which affects both this group and all other local groups. And it encourages people to be quicker to take offense and to condemn, a danger that has been well understood since Kipling.

It’s important not to think of giving in as being the ‘middle’ route, or a ‘compromise’ decision. I can see why people might naively think this: they see some people who support a speaker, and some who condemn him, so surely the middle ground is to simply not feature the speaker in any way? But this is not the case - tolerance itself is the middle ground, between Catholic and Protestant, or Right and Left. Giving one side - or rather, a small group of extremists on one side - a veto is far from evenhanded: it immensely privileges that group. Alternatively, we could afford everyone such respect - not merely the loudest and most aggressive - which would at least be fair. But as almost everything is offensive to somebody, the range of permitted opinions left would be very small indeed! Only by saying to partisans of all stripes, “I know you are offended by this, but we judge ideas for ourselves, on their merit” can we have discussion unfettered by a political censor.

Now, one could object that this is hyperbole. After all, a group is not obliged to invite Robin to speak in the first place. Why then can they not equally uninvite him? Yes, it will be a little inconvenient, but there’s a pandemic, so it’s not like anyone has paid for plane tickets or hotels.

Here one man's modus ponens is another's modus tollens. For the same reasons I think it is bad to deplatform a speaker as the result of a vicious cancel culture attack, I think it would be bad to not invite them in the first place for these reasons. There are many acceptable reasons not to invite someone - like timing, or relevance, or having a full schedule, or simply being unaware of their existence. But appeasing cancel culture is not one of them. We would be ill-served if, to avoid the risk of ever having to deplatform someone, groups simply became ultra-conservative about invitations and never included anyone who wasn't a CEA employee!

Here I think an analogy with US Labour Law might be illuminating. Most workers in the US have 'At-Will' contracts: this means that the worker can quit, and the employer can fire them, at basically any time for any reason, except for a narrow group of forbidden motivations. You can quit because you're not paid enough, or your colleagues are annoying, or you're just sick of the colour of the carpets. You can fire someone for being unproductive, for having a name beginning with the letter 'G', or for supporting the wrong sports team. But you can't fire them because of their race, or because they refused to break the law, or because they took maternity leave. This is because these are properties that the US Legal System considers important enough to protect, even in the general context of freedom of association.

Similarly, in general local groups should be free to do more or less what they want. We should want to let people explore new approaches, which might be better suited for promoting Effective Altruism. There is simply a narrow class of activities which should be strongly avoided, and which CEA should strongly advise against: deplatforming a speaker because of Cancel Culture is such a proscribed activity.

Conclusion

In this particular scenario, here are some things I think it would have been good for CEA to do, when asked for advice by EA Munich:

  1. Remind them that openness to unusual ideas is one of the guiding principles of Effective Altruism, and that local groups should uphold and promote this.
  2. Clarify the importance of fundamental cause research that challenges existing ideas to the movement, and that we should not punish people for engaging in it.
  3. Explain that the Slate article they linked is not a reliable source of information, and encourage them to refer to Robin's own work.
  4. Explain that deplatforming someone is a serious action, and widely seen as not equivalent to simply never having invited them in the first place.
  5. Explain that Robin is very unlikely to accidentally undo feminism during his talk, and this should not be a major part of their decision making process.
  6. Not take EA Munich’s claim that they understood the dangers of Cancel Culture at face value: actively discuss this with them to ensure they understand why it is harmful to the movement.
  7. To the extent that EA Munich made their decision for poor reasons, encourage them to reconsider.

The final decision is of course up to the local organisers. However, I think by providing this advice, CEA could have better equipped them to make the decision in an epistemically virtuous way that supported the goals of the movement.

Acknowledgements

Thanks to Nick Whitaker and several invaluable anonymous proofreaders for their extremely helpful feedback. Any mistakes remain my own. A draft of this document was shared with CEA and EA Munich prior to publication, and one section removed as a gesture of goodwill.

edited 2020-10-15: typos


Julia_Wise @ 2020-10-14T21:37 (+153)

I appreciate that Larks sent a draft of this post to CEA, and that we had the chance to give some feedback and do some fact-checking.

I agree with many of the concerns in this post. I also see some of this differently.

In particular, I agree that a climate of fear — wherever it originates — silences not only people who are directly targeted, but also others who see what happened to someone else. That silencing limits writers/speakers, limits readers/listeners who won’t hear the ideas or information they have to offer, and ultimately limits our ability to find ways to do good in the world.

These are real and serious costs. I’ve been talking with my coworkers about them over the last months and seeking input from other people who are particularly concerned about them. I’ll continue to do that.

But I think there are also real costs to pushing groups to go forward with events they don’t want to hold. I’m still thinking through how I see the tradeoffs between these costs and the costs above, but here’s one I think is relevant:

It makes it more costly to be an organizer. In one discussion amongst group organizers after the Munich situation, one organizer wrote about the Peter Singer talk their group hosted. [I’m waiting to see if I can give a fuller quote, but their summary was about how the Q&A session got conflicted enough that the group was known as “the group that invited Peter Singer” for two years and basically overpowered any other impression students had of what the EA group was about.]

“It seemed like the talk itself went pretty well, but during the Q&A section a few people basically took over the discussion and only asked questions about all the previous things he has said about disabled people (and possibly some other things). The Q&A is basically all people remembered from the event. I think it did a lot of reputation damage to our group, which took 2 years to get over (by which point many attendees of the talk graduated). Before that, people basically didn't know what EA was and after it was "the group that invited Peter Singer"."

Hosting Singer and other speakers who have said controversial things has been good for many EA groups. But I also think it’s okay for individual organizers to decide they’re not up for hosting an event that carries some risk of seriously throwing their group off the rails. Being at the center of a controversy, especially for student organizers constantly living in the same environment where the talk is held, can bear a heavy personal cost as well. (Of course, knowing that people will back down if you make it costly enough for them to follow through is exactly what incentivizes you to make it costly.)

On the specifics: I was the main staff member who advised the Munich organizers, and I’d like to add more detail about how this all unfolded. There are a lot of quotes so I’ll italicize them.

The week before Hanson’s scheduled online talk about tort law reform for the Munich group, the organizers contacted CEA to say they were considering canceling the event after learning about some of Hanson’s past writing. From my first message to the Munich organizers:

"I don’t have a clear answer about whether to cancel the event. I could it being reasonable either way. . . . If the discussion goes into areas where you think people may be offended or upset, maybe have an organizer or two stay behind after to have continued discussion after the Q&A with Hanson is done. I looked at what I think is basically the same talk: https://www.youtube.com/watch?v=rPdHXw05SvU Some parts will probably go over ok with an audience that’s used to thinking about alternate governance systems. But for example he suggests torture as a possible penalty, and as far as I saw from a cursory look through the slides, he doesn’t address objections to that. People seem to find his work very polarizing, so some people find it very refreshing because he says things almost no one else says, and other people hate it. So you may well get some indication from the Q&A about how people are feeling, and you may want to follow up with them if they seem upset."

After that, the Munich organizers discussed the situation internally, held a vote, and wrote to Hanson saying that they had decided to cancel the talk. Hanson tweeted about the cancellation, indicating he didn’t think they had adequately explained their decision.

I wrote to the Munich organizers:

"If you were going to respond, I'd send this both to Hanson and perhaps also reply on the Twitter, with points along these lines:

Using these suggestions, the Munich organizers drafted their statement explaining the situation and their decision, and a coworker and I made some minor suggestions afterwards.

Since they were volunteers writing what was probably their first public statement to be read by the wider internet on a tight timeframe, I do wish I had given them more feedback on the draft. I also wish I had focused my advice not just on the practicalities, but also on the tradeoffs discussed above. Specifically, I should have checked that organizers were tracking some of the things that Larks raises in the conclusion. I also agree that when CEA leaves the final decision to organizers, we aren’t off the hook — we aim to provide the best advice we can to organizers, and to learn from experience.

willbradshaw @ 2020-10-15T07:49 (+34)

It makes it more costly to be an organizer. In one discussion amongst group organizers after the Munich situation, one organizer wrote about the Peter Singer talk their group hosted. [I’m waiting to see if I can give a fuller quote, but their summary was about how the Q&A session got conflicted enough that the group was known as “the group that invited Peter Singer” for two years and basically overpowered any other impression students had of what the EA group was about.]

Just for context, if anyone is unaware, Peter Singer is extremely controversial in Germany, much (/even) more so than in the English-speaking world. There was a talk by him in Cologne a few years ago, and everyone was a bit surprised it didn't get shouted down by student activists.

So I can definitely see this happening, and sympathise with the desire for it not to happen again, even though I still think the Hanson decision was ill-made.

Jonas Vollmer @ 2020-10-15T08:55 (+29)

+1, in the German-speaking area, activists have tried to prevent people from gaining physical access to where Singer's talk was to be hosted, and Singer was even physically assaulted on one occasion (a couple of decades ago though). Some venues have cancelled him. There are often protests (by disability rights activists, religious people, etc.) where he speaks.

gruban @ 2020-10-15T09:51 (+43)

As one of the organisers of the EA Munich group, this was the first thing I thought of when we heard about the press coverage of Robin Hanson: what can we learn from EA's association with the controversies around Peter Singer? I was thinking of your comment and of Ben Todd's quote "Once your message is out there, it tends to stick around for years, so if you get the message wrong, you’ve harmed years of future efforts." I think there is much harm that can be done in cancelling, but it should be weighed against the potential harm of hurting the movement in a country where values and sentiments can be different from those in the English-speaking world.

For me the Robin Hanson talk would have been my first event as a co-organiser, and seeing a potential cooperation partner unearth the negative press about Robin Hanson and tell us that they would not be able to work with us if we hosted him was an indication that we shouldn't rush to hold this talk. Oliver Habryka summarised this pretty well:

Having participated in a debrief meeting for EA Munich, my assessment is indeed that one of the primary reasons the event was cancelled was due to fear of disruptors showing up at the event, similar to how they have done for some events of Peter Singer. Indeed almost all concerns that were brought up during that meeting were concerns of external parties threatening EA Munich, or EA at large, in response to inviting Hanson. There were some minor concerns about Hanson's views qua his views alone, but basically all organizers who spoke at the debrief I was part of said that they were interested in hearing Robin's ideas and would have enjoyed participating in an event with him, and were primarily worried about how others would perceive it and react to inviting him.

I just looked up what I wrote internally after the decision and still think this is a good summary:

In an ideal world we would have known about the issues beforehand, would have talked them through internally, and if we had invited him we would have known how to address them in a way that is not harmful to the EA community. However, given the short time, we saw more risks in alienating people than getting them interested in EA through the talk.

The monthly talks we host are public and posted on Meetup and Facebook, so our audience consists of people who are new to the community. We as EA local groups are the first impression many people get of the community and are the faces of the community in our region, so I would argue we should be well prepared and versed in potential controversies before hosting talks, especially with prominent people and on a video platform where all statements can be recorded and shared. As a group that had just one female speaker in the last 15 talks, I think this is especially the case if press coverage suggests that the speaker has views that may make women feel less welcome.

At the time it seemed riskier to try to assess and reduce the potential negative consequences of the talk than to cancel it. However, my error was in not assessing the risks around signaling in terms of Cancel Culture.

Julia_Wise @ 2020-10-15T13:37 (+4)

I got permission to add the full quote, though the meaning is the same. This example was actually in the US.

willbradshaw @ 2020-10-15T17:14 (+4)

Ah, then my comment was based on a misunderstanding. Apologies.

Julia_Wise @ 2020-10-15T17:45 (+5)

But still relevant for the Munich organizers, since Singer seems to get protested more per event in Germany than in other countries.

kbog @ 2020-10-22T11:19 (+99)

I don't have any arguments over cancel culture or anything general like that, but I am a bit bothered by a view that you and others seem to have. I  don't consider Robin Hanson an "intellectual ally" of the EA movement; I've never seen him publicly praise it or make public donation decisions, but he has claimed that do-gooding is controlling and dangerous, that altruism is all signaling with selfish motivations, that we should just save our money and wait for some unspecified future date to give it away, and that poor faraway people are less likely to exist according to simulation theory so we should be less inclined to help them. On top of that he made some pretty uncharitable statements about EA Munich and CEA after this affair. And some of his pursuits suggest that he doesn't care if he turns himself into a super controversial figure who brings negative attention towards EA by association. These things can be understandable on their own, you can rationalize each one, but when you put it all together it paints a picture of someone who basically doesn't care about EA at all. It just happens to be the case that he was big in the rationalist blogosphere and lots of EAs (including me) think he's smart in some ways and has some good ideas. He's just here for the ride, we don't owe him anything.

I'm definitely not trying to character-assassinate or 'cancel' him, I'm just saying that he only deserves as much community respect from us as any other decent academic does, we shouldn't give him the kind of special anti-cancelling loyalty that we would reserve for people who have really worked as allies for us.

Robert_Wiblin @ 2020-10-15T20:47 (+77)

To better understand your view, what are some cases where you think it would be right to either

  1. not invite someone to speak, or
  2. cancel a talk you've already started organising,

but only just?

That is, cases where it's just slightly over the line of being justified.

AGB @ 2020-10-14T20:53 (+77)

I want to open by saying that there are many things about this post I appreciate, and accordingly I upvoted it despite disagreeing with many particulars. Things I appreciate include, but are not limited to:

-The detailed block-by-block approach to making the case for both cancel culture's prevalence and its potential harm to the movement.

-An attempt to offer a concrete alternative pathway to CEA and local groups that face similar decisions in future.

-Many attempts throughout the post to imagine the viewpoint of someone who might disagree, and preempt the most obvious responses.

But there's still a piece I think is missing. I don't fault Larks for this directly, since the post is already very long and covers a lot of ground, but it's the area that I always find myself wanting to hear more about in these discussions, and so would like to hear more about from either Larks or others in reply to this comment. It relates to both of these quotes.

Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person.

Rules and standards are very important for organising any sort of society. However, when applied inconsistently they can be used as a weapon to attack unpopular people while letting popular people off the hook.

Given that this post is titled 'advice for CEA and local groups', reading this made me hope that this post would end with some suggested 'rules and standards' for who we do and do not invite to speak at local events/EAG/etc. Where do we draw the line on 'behaving immorally'? I strongly agree that whatever rules are being applied should be applied consistently, and think this is most likely to happen when discussed and laid down in a transparent and pre-agreed fashion.

While I have personal views on the Munich case which I have laid out elsewhere, I agree with Khorton below that little is being served by an ongoing prosecution-and-defence of Robin's character or work. Moreover, my commitment to consistency and transparency is far stronger than my preference for any one set of rules over others. I also expect clear rules about what we will and won't allow at various levels to naturally insulate against cancel culture. To the extent I agree that cancel culture is an increasing problem, the priority on getting this clear and relying less on ad hoc judgements of individuals has therefore risen, and will likely continue to rise.

So, what rules should we have? What are valid reasons to choose not to invite a speaker?

Ben Pace @ 2020-10-14T22:55 (+14)

It's a good question. I've thought about this a bit in the past.

One surprising rule is that overall I think people with a criminal record should still be welcome to contribute in many ways. If you're in prison, I think you should generally be allowed to e.g. submit papers to physics journals, you shouldn't be precluded from contributing to humanity and science. Similarly, I think giving remote talks and publishing on the EA Forum should not be totally shut off (though likely hampered in some ways) for people who have behaved badly and broken laws. (Obviously different rules apply for hiring them and inviting them to in-person events, where you need to look at the kind of criminal behavior and see if it's relevant.) 

I feel fairly differently to people who have done damage in and to members of the EA community. Someone like Gleb Tsipursky hasn't even broken any laws and should still be kicked out and not welcomed back for something like 10 years, and even then he probably won't have changed enough (most people don't).

In general EA is outcome-oriented, it's not a hobby community, there's sh*t that needs to be done because civilization is inadequate and literally everything is still at stake at this point in history. We want the best contributions and care about that to the exclusion of people being fun or something. You hire the best person for the job.

There's some tension there, and I think overall I am personally willing to put in a lot of resources in my outcome-oriented communities to make sure that people who contribute to the mission are given the spaces and help they need to positively contribute.

I can't think of a good example that isn't either about a literal person or too abstract... like, suppose Einstein has terrible allergies to most foods, just can't be in the same space as them. Can we have him at EAG? How much work am I willing to put in for him to have a good EAG? Do I have to figure out a way to feed everyone a very exclusive yet wholesome diet that means he can join? Perhaps.

Similarly, if I'm running a physics conference and Einstein is in prison for murder, will I have him in? Again, I'm pretty open to video calls, I'm pretty willing to put in the time to make sure everyone knows what sort of risk he poses, and make sure he isn't allowed to end up in a vulnerable situation with someone, because it's worth it for our mission to have him contribute.

You get the picture. Y'know, tradeoffs, where you actually value something and are willing to put in extraordinary effort to make it work.

jackmalde @ 2020-10-14T21:49 (+1)

As I said in an earlier comment, I think we need to evaluate this on a case-by-case basis and ultimately make decisions based on a (rough) calculation of expected benefit vs expected harm of letting someone speak. So for me there isn't really a standard "line on behaving immorally". For example, if someone has bad character but it is genuinely plausible they might come up with cause X, then I reckon they should (probably) be allowed to speak.

So I don't think actual 'rules' are helpful. General 'reasons' why we might or might not invite a speaker on the other hand are certainly helpful and I think Larks alludes to some in this post (for example the cause X point!).

I didn't actually interpret Larks' post as trying to contribute to the "ongoing prosecution-and-defence of Robin's character or work", but instead think it is trying to add to the cancel culture conversation more generally, using Robin's case as a useful example.

AGB @ 2020-10-15T11:14 (+8)

Thanks for your response.

I didn't actually interpret Larks' post as trying to contribute to the "ongoing prosecution-and-defence of Robin's character or work", but instead think it is trying to add to the cancel culture conversation more generally, using Robin's case as a useful example.

Sorry, this is on me. The original draft of that sentence read something like "I agree with Khorton below that little is being served by an ongoing prosecution-and-defence of Robin's character or work, so I'm not going to weigh in again on those specific points and request others replying to this comment do the same, instead focusing on the question of what rules we do/don't want in general".

I then cut the sentence down, but missed that in doing so it could now be read as implying that this was Larks' objective. That wasn't intentional, and I don't think this.

Wei_Dai @ 2020-10-16T07:12 (+61)

I urge those who are concerned about cancel culture to think more strategically. For instance, why has cancel culture taken over almost all intellectual and cultural institutions? What can EA do to fight it that those other institutions couldn't do, or didn't think of? Although I upvoted this post for trying to fight the good fight, I really doubt that what it suggests is going to be enough in the long run.

Although the post includes a section titled "The Nature of Cancel Culture", it seems silent on the social/political dynamics driving cancel culture's quick and widespread adoption. To make an analogy, it's like trying to defend a group of people against an infectious disease that has already become a pandemic among the wider society, without understanding its mechanism of infection, and hoping to make do with just common sense hygiene.

In one particularly striking example, I came across this article about a former head of the ACLU. It talks about how the ACLU has been retreating from its free speech principles, and includes this sentence:

But the ACLU has also waded into partisan political issues, at precisely the same time as it was retreating on First Amendment issues.

Does it not seem like EA is going down the same path, and for probably similar reasons? If even the ACLU couldn't resist the pull of contemporary leftist ideology and its attendant abandonment of free speech, why do you think EA could, absent some truly creative and strategic thinking?

(To be clear, I don't have much confidence that sufficiently effective strategic ideas for defending EA against cancel culture actually exist or can be found by ordinary human minds in time to make a difference. But I see even less hope if no one tries.)

Pablo_Stafforini @ 2020-10-16T15:07 (+32)

This comment expresses something I was considering saying, but more clearly than I could. I would add that thinking strategically about this cultural phenomenon involves not only trying to understand its mechanism of action, but also coming up with frameworks for deciding what tradeoffs to make in response to it. I am personally very disturbed by the potential of cancel culture to undermine or destroy EA, and my natural reaction is to believe that we should stand firm and make no concessions to it, as well as to upvote posts and comments that express this sentiment. This is not, however, a position I feel I can endorse on reflection: it seems instead that protecting our movement against this risk involves striking a difficult and delicate balance between excessive and insufficient relaxation of our epistemic standards. By giving in too much the EA movement risks relinquishing its core principles, but by giving in too little the movement risks ruining its reputation. Unfortunately, I suspect that an open discussion of this issue may itself pose a reputational risk, and in fact I'm not sure it's even a good idea to have public posts like the one this comment is responding to, however much I agree with it.

willbradshaw @ 2020-10-16T15:25 (+17)

I would add that thinking strategically about this cultural phenomenon involves not only trying to understand its mechanism of action, but also coming up with frameworks for deciding what tradeoffs to make in response to it. I am personally very disturbed by the potential of cancel culture to undermine or destroy EA, and my natural reaction is to believe that we should stand firm and make no concessions to it, as well as to upvote posts and comments that express this sentiment. This is not, however, a position I feel I can endorse on reflection[...]

This seems right to me, and I upvoted to support (something like) this statement. I think there's a great deal of danger in both directions here.

(Not just for reputational reasons. I also think that there are lots of SJ-aligned – but very sincere – EAs who are feeling pretty alienated from anti-CC EAs right now, and it would be very bad to lose them.)

It seems instead that protecting our movement against this risk involves striking a difficult and delicate balance between excessive and insufficient relaxation of our epistemic standards. By giving in too much the EA movement risks relinquishing its core principles, but by giving in too little the movement risks ruining its reputation.

The epistemic standards seem totally core to EA to me. If we relax much at all on those I think the expected future value of EA falls quite dramatically. The question to me is whether we can relax/alter our discourse norms without compromising those standards.

Unfortunately, it seems that an open discussion of this issue may itself pose a reputational risk, and in fact I'm not sure it's even a good idea to have public posts like the one this comment is responding to, however much I agree with it.

I sympathise with this, but I think if we don't have public posts like this one, the outcome is more-or-less decided in advance. If everyone who thinks something is bad remains silent for fear of reputational harm, the discourse in the movement will be completely dominated by those who disagree with them, while those who would agree with them become alienated and discouraged. This will in turn determine who engages with the movement, and how it evolves in relation to that idea in the future.

If that outcome (in this case, broad adoption of the kinds of norms that give rise to cancel culture within EA) is unacceptable, some degree of public opposition is necessary.

Pablo_Stafforini @ 2020-10-16T16:52 (+4)

I sympathise with this, but I think if we don't have public posts like this one, the outcome is more-or-less decided in advance.

Yes, I agree. What I'm uncertain about is whether it's desirable to have more of these posts at the current margin. And to be clear: by saying I'm uncertain whether it's a good idea, I don't mean to suggest it's not a good idea; I'm simply agnostic.

willbradshaw @ 2020-10-16T17:18 (+4)

Okay, sure, at the margin I agree it's tricky. Both for reputational reasons, and the broad-tent/community-cohesion concerns I mention above.

Milan_Griffes @ 2020-10-16T15:35 (+2)

Trump demonstrates that thoroughgoing shamelessness effectively wards off cancellation, at least in the short run.

kokotajlod @ 2020-10-17T11:43 (+17)

I disagree. Trump draws his power from the Red Tribe; the Blues can't cancel him because they don't have leverage over him.

We, by contrast, are mostly either Blues ourselves or embedded in Blue communities.

Can you give an example of someone or some community in a situation like ours, that adopted a strategy of thoroughgoing shamelessness, and that successfully avoided cancellation?

Milan_Griffes @ 2020-10-17T19:32 (+4)

Agree that the Blues can't cancel Trump. Note that being affiliated with Red Tribe isn't sufficient to avoid cancellation (though it probably helps) – see Petraeus, see the Republicans on these lists: 1, 2

Jordan Peterson seems basically impossible to cancel due to a combination of his shamelessness & his virtue (he isn't really Blue Tribe though). Same for Joe Rogan and Tyler Cowen.

Tsunayoshi @ 2020-10-18T14:56 (+5)

Jordan Peterson is probably indeed a good example. A more objective way to describe his demeanor than shamelessness is "not giving in". One major reason why he seems to be popular is his perceived willingness to stick to controversial claims. In turn that popularity is some form of protection against attempts to get him to resign from his position at the University of Toronto.

However, I think that there are significant differences between Peterson and EA's situation, so Peterson's example is not my endorsement of a "shamelessness" strategy.

ofer @ 2020-10-15T16:33 (+60)

Thank you for writing this important post Larks!

I would add that the harm from cancel culture's chilling effect may be a lot more severe than what people tend to imagine. The chilling effect does not only prevent people from writing things that would actually get them "canceled". Rather, it can prevent people from writing anything to which they assign even a non-negligible credence (e.g. 0.1%) of getting them canceled (at some point in the future); which is probably a much larger and more important set of things/ideas that we silently lose.

Nicole_Ross @ 2020-10-15T17:59 (+51)

+1. I also think that the chilling effect can extend to people's thoughts, i.e., limiting what people even let themselves think let alone write.

Wei_Dai @ 2020-10-15T17:14 (+15)

See also https://www.lesswrong.com/posts/2LtJ7xpxDS9Gu5NYq/open-and-welcome-thread-october-2020?commentId=YrRcRxNiJupZjfgnc

ETA: In case it's not clear, my point is that there's also an additional chilling effect from even smaller but more extreme tail risks.

rohinmshah @ 2020-10-14T20:24 (+56)

It seems like you believe that one's decision of whether or not to disinvite a speaker should depend only on one's beliefs about the speaker's character, intellectual merits, etc. and in particular not on how other people would react.

Suppose that you receive a credible threat that if you let already-invited person X speak at your event, then multiple bombs would be set off, killing hundreds of people. Can we agree that in that situation it is correct to cancel the event?

If so, then it seems like at least in extreme cases, you agree that the decision of whether or not to hold an event can depend on how other people react. I don't see why you seem to assume that in the EA Munich case, the consequences are not bad enough that EA Munich's decision is reasonable.

Some plausible (though not probable) consequences of hosting the talk:

At least the first two seem quite bad, there's room for debate on the third.

In addition, while I agree that the extremes of cancel culture are in fact very harmful for EA, it's hard to argue that disinviting a speaker is anywhere near the level of any of the examples you give. Notably, they are not calling for a mob to e.g. remove Robin Hanson from his post, they are simply cancelling one particular talk that he was going to give at their venue. This definitely does have a negative impact on norms, but it doesn't seem obvious to me that the impact is very large.

Separately, I think it is also reasonable for a random person to come to believe that Robin Hanson is not arguing in good faith.

(Note: I'm still undecided on whether or not the decision itself was good or not.)

Milan_Griffes @ 2020-10-14T20:27 (+12)

I'm reminded of this.

Milan_Griffes @ 2020-10-14T20:30 (+2)

Also of The Apology, though that's obviously an extreme case.

Ben Pace @ 2020-10-14T23:02 (+11)

Naturally, you have to understand, Rohin, that in all of the situations where you tell me what the threat is, I'm very motivated to do it anyway? It's an emotion of stubbornness and anger, and when I flesh it out in game-theoretic terms it's a strong signal of how much I'm willing to not submit to threats in general.

Returning to the emotional side, I want to say something like "f*ck you for threatening to kill people, I will never give you control over me and my community, and we will find you and we will make sure it was not worth it for you, at the cost of our own resources".

rohinmshah @ 2020-10-15T05:51 (+15)

Yeah, I'm aware that is the emotional response (I feel it too), and I agree the game theoretic reason for not giving in to threats is important. However, it's certainly not a theorem of game theory that you always do better if you don't give in to threats, and sometimes giving in will be the right decision.

we will find you and we will make sure it was not worth it for you, at the cost of our own resources

This is often not an option. (It seems pretty hard to retaliate against an online mob, though I suppose you could randomly select particular members to retaliate against.)

Another good example is bullying. A child has ~no resources to speak of, and bullies will threaten to hurt them unless they do X. Would you really advise this child not to give in to the bully?

(Assume for the sake of the hypothetical the child has already tried to get adults involved and it has done ~nothing, as I am told is in fact often the case. No, the child can't coordinate with other children to fight the bully, because children are not that good at coordinating.)

Gregory_Lewis @ 2020-10-15T11:35 (+66)

Another case where 'precommitment  to refute all threats' is an unwise strategy (and a case more relevant to the discussion, as I don't think all opponents to hosting a speaker like Hanson either see themselves or should be seen as bullies attempting coercion) is where your opponent is trying to warn you rather than trying to blackmail you. (cf. 1, 2)

Suppose Alice sincerely believes some of Bob's writing is unapologetically misogynistic. She believes it is important one does not give misogynists a platform and implicit approbation. Thus she finds hosting Bob abhorrent, and is dismayed that a group at her university is planning to do just this. She approaches this group, making clear her objections and stating her intention, if this goes ahead, to (e.g.) protest this event, stridently criticise the group in the student paper for hosting him, petition the university to withdraw affiliation, and so on.

This could be an attempt to bully (where usual game theory provides a good reason to refuse to concede anything on principle). But it also could not be: Alice may be explaining what responses she would make to protect her interests which the groups planned action would harm, and hoping to find a better negotiated agreement for her and the EA group besides "They do X and I do Y". 

It can be hard to tell the difference, but some elements in this example speak against Alice being a bully wanting to blackmail the group to get her way: First is the plausibility of her interests recommending these actions to her even if they had no deterrent effect whatsoever (i.e. she'd do the same if the event had already happened). Second, the actions she intends fall roughly within the 'fair game' of how one can retaliate against those doing something they're allowed to do which you deem to be wrong.

Alice is still not a bully even if her motivating beliefs re. Bob are both completely mistaken and unreasonable. She's also still not a bully even if Alice's implied second-order norms are wrong (e.g. maybe the public square would be better off if people didn't stridently object to hosting speakers based on their supposed views on topics they are not speaking upon, etc.) Conflict is typically easy to navigate when you can dictate to your opponent what their interests should be and what they can license themselves to do. Alas such cases are rare.

It is extremely important not to respond to Alice as if she was a bully if in fact she is not, for two reasons. First, if she is acting in good faith, pre-committing to refuse any compromise for 'do not give in to bullying' reasons means one always ends up at one's respective BATNAs even if there were mutually beneficial compromises to be struck. Maybe there is no good compromise with Alice this time, but there may be the next time one finds oneself at cross-purposes.

Second, wrongly presuming bad faith for Alice seems apt to induce her to make a symmetrical mistake presuming bad faith for you. To Alice, malice explains well why you were unwilling to even contemplate compromise, why you considered yourself obliged out of principle  to persist with actions that harm her interests, and why you call her desire to combat misogyny bullying and blackmail. If Alice also thinks about these things through the lens of game theory (although perhaps not in the most sophisticated way), she may reason she is rationally obliged to retaliate against you (even spitefully) to deter you from doing harm again. 

The stage is set for continued escalation. Presumptive bad faith is pernicious, and can easily lead to martyring oneself needlessly on the wrong hill. I also note that 'leaning into righteous anger' or 'take oneself as justified in thinking the worst of those opposed to you' are not widely recognised as promising approaches in conflict resolution, bargaining, or negotiation.

rohinmshah @ 2020-10-15T16:06 (+16)

I agree with parts of this and disagree with other parts.

First off:

First, if she is acting in good faith, pre-committing to refuse any compromise for 'do not give in to bullying' reasons means one always ends up at one's respective BATNAs even if there were mutually beneficial compromises to be struck.

Definitely agree that pre-committing seems like a bad idea (as you could probably guess from my previous comment).

Second, wrongly presuming bad faith for Alice seems apt to induce her to make a symmetrical mistake presuming bad faith for you. To Alice, malice explains well why you were unwilling to even contemplate compromise, why you considered yourself obliged out of principle  to persist with actions that harm her interests, and why you call her desire to combat misogyny bullying and blackmail.

I agree with this in the abstract, but for the specifics of this particular case, do you in fact think that online mobs / cancel culture / groups who show up to protest your event without warning should be engaged with on a good faith assumption? I struggle to imagine any of these groups accepting anything other than full concession to their demands, such that you're stuck with the BATNA regardless.

(I definitely agree that if someone emails you saying "I think this speaker is bad and you shouldn't invite him", and after some discussion they say "I'm sorry but I can't agree with you and if you go through with this event I will protest / criticize you / have the university withdraw affiliation", you should not treat this as a bad faith attack. Afaik this was not the case with EA Munich, though I don't know the details.)

----

Re: the first five paragraphs: I feel like this is disagreeing on how to use the word "bully" or "threat", rather than anything super important. I'll just make one note:

Alice is still not a bully even if her motivating beliefs re. Bob are both completely mistaken and unreasonable. She's also still not a bully even if Alice's implied second-order norms are wrong (e.g. maybe the public square would be better off if people didn't stridently object to hosting speakers based on their supposed views on topics they are not speaking upon, etc.)

I'd agree with this if you could reasonably expect to convince Alice that she's wrong on these counts, such that she then stops doing things like

(e.g.) protest this event, stridently criticise the group in the student paper for hosting him, petition the university to withdraw affiliation

But otherwise, given that she's taking actions that destroy value for Bob without generating value for Alice (except via their impact on Bob's actions), I think it is fine to think of this as a threat. (I am less attached to the bully metaphor -- I meant that as an example of a threat.)

Gregory_Lewis @ 2020-10-15T20:39 (+15)

I agree with this in the abstract, but for the specifics of this particular case, do you in fact think that online mobs / cancel culture / groups who show up to protest your event without warning should be engaged with on a good faith assumption? I struggle to imagine any of these groups accepting anything other than full concession to their demands, such that you're stuck with the BATNA regardless.
 

I think so. 

In the abstract, 'negotiating via ultimatum' (e.g. "you must cancel the talk, or I will do this") does not mean one is acting in bad faith. Alice may foresee there is no bargaining frontier, but is informing you what your BATNA looks like and gives you the opportunity to consider whether 'giving in' is nonetheless better for you (this may not be very 'nice', but it isn't 'blackmail'). A lot turns on whether her 'or else' is plausibly recommended by the lights of her interests (e.g. she would do these things if we had already held the event/she believed our pre-commitment to do so) or she is threatening spiteful actions where their primary value is her hope they alter our behaviour (e.g. she would at least privately wish she didn't have to 'follow through' if we defied her). 

The reason these are important to distinguish is that 'folk game theory' gives a pro tanto reason not to give in in the latter case, even if doing so is better than suffering the consequences (as you deter future attempts to coerce you). But not in the former one, as Alice's motivation to retaliate does not rely on the chance you may acquiesce to her threats, and so she will not 'go away' after you've credibly demonstrated to her you will never do this.

On the particular case I think some of it was plausibly bad faith (i.e. if a major driver was 'fleet in being' threat that people would antisocially disrupt the event) but a lot of it probably wasn't: "People badmouthing/thinking less of us for doing this" or (as Habryka put it) the 'very explicit threat' of an organisation removing their affiliation from EA Munich are all credibly/probably good faith warnings even if the only way to avoid them would have been complete concession. (There are lots of potential reasons I would threaten to stop associating with someone or something where the only way for me to relent is their complete surrender)

(I would be cautious about labelling things as mobs or cancel culture.)


[G]iven that she's taking actions that destroy value for Bob without generating value for Alice (except via their impact on Bob's actions), I think it is fine to think of this as a threat. (I am less attached to the bully metaphor -- I meant that as an example of a threat.)

Let me take a more in-group example readers will find sympathetic.

When the NYT suggested it would run an article using Scott's legal name, many of his supporters responded by complaining to the editor, organising petitions, cancelling their subscriptions (and encouraging others to do likewise), trying to coordinate sources/public figures to refuse access to NYT journalists, and so on. These are straightforwardly actions which 'destroy value' for the NYT, are substantially motivated to try and influence its behaviour, and were an ultimatum to boot (i.e. the only way the NYT can placate this 'online mob' is to fully concede on not using Scott's legal name).

Yet presumably this strategy was not predicated on 'only we are allowed to (or smart enough to) use game theory, so we can expect the NYT to irrationally give in to our threats when they should be ostentatiously doing exactly what we don't want them to do to demonstrate they won't be bullied'. For although these actions are 'threats', they are warnings/ good faith/ non-spiteful, as these responses are not just out of hope to coerce: these people would be minded to retaliate similarly if they only found out NYT's intention after the article had been published. 

Naturally the hope is that one can resolve conflict by a meeting of the minds: we might hope we can convince Alice to see things our way; and the NYT probably hopes the same. But if the disagreement prompting conflict remains, we should be cautious about how we use the word threat, especially in equivocating between commonsense use of the term (e.g. "I threaten to castigate Charlie publicly if she holds a conference on holocaust denial") and the subspecies where folk game theory - and our own self-righteousness - strongly urges us to refute (e.g. "Life would be easier for us at the NYT if we acquiesced to those threatening to harm our reputation and livelihoods if we report things they don't want us to. But we will never surrender the integrity of our journalism to bullies and blackmailers.")

rohinmshah @ 2020-10-16T00:20 (+13)

Yeah, I think I agree with everything you're saying. I think we were probably thinking of different aspects of the situation -- I'm imagining the sorts of crusades that were given as examples in the OP (for which a good faith assumption seems straightforwardly wrong, and a bad faith assumption seems straightforwardly correct), whereas you're imagining other situations like a university withdrawing affiliation (where it seems far more murky and hard to label as good or bad faith).

Also, I realize this wasn't clear before, but I emphatically don't think that making threats is necessarily immoral or even bad; it depends on the context (as you've been elucidating).

kokotajlod @ 2020-10-15T08:31 (+6)

I think I agree with you except for your example. I'm not sure, but it seems plausible to me that in many cases the bullied kid doing X is a bad idea. It seems like it will encourage the bullies to ask for Y and Z later.

RowanADonovan @ 2020-10-18T13:07 (+53)

[Epistemic status: I find the comments here to be one-sided, so I’m mostly filling in some of the missing counterarguments. But I feel strong cognitive dissonance over this topic.]

I’m worried about these developments because of the social filtering and dividing effect that controversy-seeking speakers have and because of the opposition to EA that they can create.

Clarification 1: Note that the Munich group was not worried that their particular talk might harm gender equality, but that this idea of Hanson's might have that effect if it became much more popular, and that they didn't want to contribute to that. My worries are in a similar vein. The most likely effect of any individual endorsement, invitation, or talk is small, but I think the expected effect is much more worrying and driven by accumulation and tail risks.

Clarification 2: I’m not concerned with truth-seeking but with controversy-seeking (edit: a nice step-by-step guide). In some cases it’s hard to tell whether someone has a lot of heterodox ideas and lacks a bit in eloquence and so often ruffles feathers, or whether the person has all those heterodox ideas but is particularly attracted to all the attention they get if they say things that are just on the edge of the Overton window.

The second type of person thereby capitalizes on the narcissism of small differences to sow discord among sufficiently similar groups of people, which divides the groups and makes them a martyr to one and anathema to the other – so a well-known figure in both.

A lot of social movements have succumbed to infighting. If we seem to endorse or insufficiently discourage controversy-seeking, we’re running a great risk of EA also succumbing to infighting, attrition, external opposition, and (avoidable) fragmentation.

It seems only mildly more difficult to me to rephrase virtually any truth-seeking insight in an empathetic way. The worst that can be said, in my opinion, against that is that it raises the bar to expression slightly and disadvantages less eloquent people. Those problems can probably be overcome. Scott Alexander for example reported that friends often ask him whether an idea they have is socially appropriate to mention, and he advises them on it. Asking friends for help seems like a strong option.

And no, this will not prevent everyone from finding fault with or taking affront at your writings, but it will maximize the chances that the only people who continue to find fault with your writings are ones who do it because they're high on the social power that they wield. This is a real problem, but one very separate from the Munich case and other mere withdrawals of endorsement.

Clarification 3: I would also like to keep two more things completely separate. The examples of cancel culture that involve personal threats and forced resignations are (from what little, filtered evidence I’ve seen) completely disproportionate. But there is a proportionate way of responding to controversy-seeking, and I think not inviting someone and maybe even uninviting someone from an event is proportionate.

In fact, if a group of people disapproves of the behavior of a member (the controversy-seeking, not the truth-seeking), it is a well-proven mechanism from cultural evolution to penalize the behavior in proportionate ways. Both tit for tat and the Pavlov strategy work because of such penalties – verbal reprimands, withdrawal of support and endorsement, maybe at some point fines. Because of the ubiquity of such proportionate penalties, it seems to me not neutral but like an endorsement to invite someone and maybe also to fail to uninvite them.
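(For illustration: a minimal sketch in Python, using the standard textbook prisoner's-dilemma payoffs, of how tit for tat and Pavlov answer a one-off defection with a limited, proportionate penalty and then let cooperation resume. The payoff numbers and strategy details are just the usual illustrative ones.)

```python
# A sketch of how tit for tat and Pavlov ("win-stay, lose-shift") answer a
# one-off defection with a limited penalty and then let cooperation resume.
# Payoff numbers are the standard illustrative prisoner's-dilemma values.

COOPERATE, DEFECT = "C", "D"

# Payoff to the row player for (my_move, their_move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(my_history, their_history):
    return COOPERATE

def tit_for_tat(my_history, their_history):
    """Cooperate first, then repeat whatever the partner did last round."""
    return COOPERATE if not their_history else their_history[-1]

def pavlov(my_history, their_history):
    """Win-stay, lose-shift: keep the last move after a good payoff (3 or 5),
    switch after a bad one (0 or 1)."""
    if not my_history:
        return COOPERATE
    if PAYOFF[(my_history[-1], their_history[-1])] >= 3:  # "win": stay
        return my_history[-1]
    return DEFECT if my_history[-1] == COOPERATE else COOPERATE  # "lose": shift

def with_slip(strategy, slip_round=1):
    """Wrap a strategy so that it defects once (a 'slip') in the given round."""
    def slipped(my_history, their_history):
        if len(my_history) == slip_round:
            return DEFECT
        return strategy(my_history, their_history)
    return slipped

def play(strategy_a, strategy_b, rounds=6):
    """Run an iterated game and return both players' move sequences."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return "".join(hist_a), "".join(hist_b)

# Tit for tat penalizes a single slip exactly once, then cooperation resumes:
print(play(tit_for_tat, with_slip(always_cooperate)))  # ('CCDCCC', 'CDCCCC')
# Two Pavlov players also return to mutual cooperation two rounds after a slip:
print(play(pavlov, with_slip(pavlov)))                 # ('CCDCCC', 'CDDCCC')
```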

Given what Hanson has written, I find it disproportionate to put him at the center of all these discussions. That seems a lot more stressful than the mere cancellation of an event (without all the ensuing discussion). So please read this as a general comment on movement/community strategy and not as a comment on his character.

Clarification 4: My opinion on the behaviors of Hanson and also Alexander is quite weakly held. I’m mildly skeptical but could easily see myself revising that opinion if I knew them better. My stronger concern is that I see really bad things (such as the examples collected by Larks above) used against good things (such as the social norms espoused by, say, Encompass or some “blue tribe” groups).

I don’t think anyone has figured out the optimal set of social norms yet. But it seems (unintentionally) unfair and divisive to (unintentionally) weaponize the bad behavior of some students or some people on Twitter or Reddit against the empathetic, careful, evolving, feedback-based norms that a lot of blue tribe people want to establish to push back against discrimination, oppression, or even just disrespect. I know a lot of the people in the second camp, and they would never character-assassinate someone, judge someone by an ambiguous hand sign, or try to get them fired over the research they’re doing.

I want to stress that I think this happens unintentionally. Larks strikes me as having very fair and honest intentions, and the tone of the article is commendable too.

---

That said, I see a number of ways in which it is risky to invite, cite, refer to, or otherwise endorse people who show controversy-seeking behavior. I’ve seen each of these happen, which is worrying, and the fact that there are several independent paths along which it is risky increases my worry further.

Failure to build a strong network:

If you invite/cite/endorse anyone randomly, the invitation/citation/endorsement is uninformative. But no one does that, so onlookers are justified in thinking that you invited them for one or several things that are special about them. Even if they divide the probability evenly over ten ways in which the person is special, of which only one is objectionable, that leaves a much greater than 10% chance that you invited them also for the objectionable reason, since several of those reasons can be motivating at once.

A greater than 10% chance that you invite/cite/endorse someone also because of their objectionable ideas is enough for many smart but not-yet-involved people to stay away from your events. (Reforming tort law is also a sufficiently obscure topic that people will be excused if they think that you invited the speaker for their name rather than the topic.)

A greater than 10% chance that you invite/cite/endorse someone also because of their objectionable ideas is also enough for powerful parties to avoid associating and cooperating with you.

Fragmentation:

No one can see clearly whether the people who show controversy-seeking behavior, or who endorse or defend it, are a vocal, low-status minority, or whether such attitudes are widespread. Even if the majority rejects controversy-seeking behavior but stays silent about it, that silence may cause that majority to disassociate from the rest.

I’m thinking of the risk of a majority of EAs disassociating from EA and fragmenting into smaller camps that have clearer antiracist, antisexist, etc. social norms. EA may for example split into separate high and low agreeableness camps that don’t interact.

External opposition:

Outside opposition to EA may ramp up to the point where it is unsafe to use EA vocabulary in public communication. This will make it a lot harder to promote cost-effectiveness analysis for altruistic ends and cause neutrality, because they’ll be associated with the right wing, while the actual right wing will not be interested in them. It may become just as difficult to discuss longtermism in public as it is to discuss genetic modification to increase well-being.

Or, less extremely, the success of some parts of EA may depend on a lot of people. Encompass uses the fitting term "people of the global majority" for people of color. If animal rights remains a fad among affluent white people because the rest are appalled by the community that comes with it, not a whole lot of animals will be helped. This, I think, should be a great concern for animal rights, though, arguably, it’s less of a concern for AI safety because of the very small set of people who’ll likely determine how that plays out.

Attracting bad actors:

Further, I see the risk that actual bad actors will pick up on the lax behavioral norms and the great potential for controversy, and so will be disproportionately attracted to the community. They’ll be the actual controversy-seeking narcissists who will sow discord wherever they can in order to be at the center of attention. This will exacerbate all the failure modes above and may lead to a complete collapse of the community, because no one else wants to waste such a great share of their time and energy on infighting.

Harm to society:

Finally, EA has become more powerful at an astounding rate. Maybe the current blue tribe norms are fine-tuned to prevent or fight back against discrimination and oppression at a tolerable cost of false positives (just like any law). If EA becomes sufficiently powerful and then promotes different norms, those may be more exploitable, and actual harm – escalating discrimination and oppression – may result.

Conversely, maybe we can also come up with superior behavioral norms. I don’t yet see them, but maybe someone will start a metacharity to research optimal social norms. Maybe a Cause X candidate?

Finally, I think mere disclaimers that your invitation/citation/recommendation is not an endorsement of particular things the person has said go ~80% of the way to solving this problem. (Though I could easily be wrong.) Every link to their content could have a small footnote to that effect, every talk invitation a quick preamble to that effect, and verbal recommendations could also come with a quick note. Admittedly, these are very hard to write. Then again, others can copy the phrasings they like and save time.

HaydnBelfield @ 2020-10-18T12:17 (+39)

I think I have a different view on the purpose of local group events than Larks. They're not primarily about exploring the outer edges of knowledge, breaking new intellectual ground, discovering cause X, etc.

They're primarily about attracting people to effective altruism. They're about recruitment, persuasion, raising awareness and interest, starting people on the funnel, deepening engagement etc etc.

So it's good not to have a speaker at your event who is going to repel the people you want to attract.

Abby Hoskin @ 2020-10-15T20:59 (+30)

As somebody currently involved in a university group, I am extremely sympathetic towards the EA Munich group, even though they might have made a mistake here. There is a huge amount of pressure to avoid controversial topics/speakers, and it seems like they did not have a lot of time to make a decision in light of new evidence. I have hosted Peter Singer for multiple events (and am glad to have done so), but it has led to multiple uncomfortable confrontations that the average student group (e.g., knitting society) just does not have to deal with.

This highlights why Larks' post is so important. When groups face decisions about when to carry out or cancel an event, having an explicit framework for this decision making would be incredibly helpful. I'm very glad to see Julia Wise/CEA engage with this post, as I think it would be helpful for both CEA and local groups to decide at the beginning of term/before inviting speakers what qualifies people to be speakers.

The main (in my opinion, reasonable) principles elucidated in this post as I read it are:

1. Openness to unusual ideas is one of the guiding principles of Effective Altruism; groups should uphold and promote this.

2. Fundamental cause research that challenges the movement's existing ideas is important; we should not punish people for engaging in it.

But it is also important to consider what *disqualifies* people from speaking.

The most critical thing to me would be a speaker's history of promoting ideas in bad faith. (E.g., promoting ideas that have been clearly falsified with scientific evidence; deliberately falsifying data in order to push a specific agenda.) I am sure there are other factors that would also make sense to consider! It would be helpful for them to be elucidated somewhere.

Linch @ 2020-10-16T11:18 (+3)

For this and also Robert Wiblin's comment, I'm interested in whether unrepentant opponents of scientific replication should be considered beyond the pale in EA circles. It's not a central problem in most people's minds, but a) it's uncontroversially bad in our circles and b) EAs have a stronger case than other groups do for considering denial of truth very bad.

This is arguably not a hypothetical example (note that I do not have an opinion on the original research).

EDIT: Removed concrete examples since they might be a distraction.

Habryka @ 2020-10-16T17:47 (+10)

I would actually be really interested in talking to someone like Baumeister at an event, or ideally someone a bit more careful. I do think I would be somewhat unhappy to see them given just a talk with Q&A, with no natural place to provide pushback and followup discussion, but if someone were to organize an event with Baumeister debating some EA with opinions on scientific methodology, I would love to attend that.

Linch @ 2020-10-16T21:56 (+4)
> I do think I would be somewhat unhappy to see them given just a talk with Q&A, with no natural place to provide pushback and followup discussion, but if someone were to organize an event with Baumeister debating some EA with opinions on scientific methodology, I would love to attend that.

I think that's roughly my position as well.

Abby Hoskin @ 2020-10-23T18:01 (+3)

Same. Especially agree that the format of the event needs to be structured so that ideas are not presented as facts, but are instead open to (lots of public) criticism. 

gruban @ 2020-10-15T10:07 (+17)

I think this post could have profited from explaining the word "deplatforming" as in the sentence "Recently, EA Munich decided to deplatform Robin Hanson" as described in "3 suggestions about jargon in EA".

As one of the organisers of EA Munich, I would find it helpful to know more clearly what is meant by this, as I could read it as saying that we tried to "shut down" a speaker. It could also just be a synonym of "disinvite". I think that especially when criticizing members of the community we should be as precise as possible.

Larks was kind enough to share this article with us before posting, and I pointed out this objection as my personal opinion in my reply to him.

casebash @ 2020-10-15T05:26 (+16)

This is a really challenging situation - I could honestly see myself leaning either way on this kind of scenario. I used to lean a lot more towards saying whatever I thought was true and ignoring the consequences, but lately I've been thinking that it's important to pick your battles.

I think the key sentence is this one - "On many subjects EAs rightfully attempt to adopt a nuanced opinion, carefully and neutrally comparing the pros and cons, and only in the conclusion adopting a tentative, highly hedged, extremely provisional stance. Alas, this is not such a subject."

What seems more important to me is not necessarily these kinds of edge cases, but that we talk openly about the threat potentially posed. Replacing the talk with a discussion about cancel culture instead seems like it could have been a brilliant Jiu Jitsu move. I'm actually much more worried about what's been going on with ACE than anything else.

HaydnBelfield @ 2020-10-18T12:33 (+15)

[minor, petty, focussing directly on the proposed subject point]

In this discussion, many people have described the subject of the talk as "tort law reform". This risks sounding technocratic or minor.

The actual subject (see video) is a libertarian proposal to replace the entirety of the criminal law system with a private, corporate system with far fewer limits on torture and far weaker constitutional protections. While neglected, this proposal is unimportant (and worse, actively harmful) and completely intractable.

The 17 people who were interested in attending didn't miss out on hearing about the next great cause X.

DannyBressler @ 2020-10-15T02:49 (+14)

By the title, I thought this was going to be a discussion of the dangers of appeasing genocidal dictators (e.g. https://www.ynetnews.com/articles/0,7340,L-3476200,00.html) ... clearly I was wrong!

Max_Daniel @ 2020-10-15T13:11 (+12)

(FWIW, I had a similar reaction. Like, it was quite clear to me what the actual topic of the post was going to be, but I was wondering whether the author was making a deliberate reference to highlight how bad they think the issue is. I was also wondering if the author was trying to sort of lead by example since comparisons to Nazi-related issues are very taboo in mainstream German discourse. Overall I figured that it's probably unintentional.)

Khorton @ 2020-10-14T18:56 (+9)

This is quite a long article so forgive me if I've missed it, but it seems like you're arguing that someone's general character - for example, whether they have a history of embezzling money or using racial slurs - shouldn't affect whether or not we invite them to speak at EA events. Whether or not we invite them should depend only on the quality of their ideas, not their reputation or past harmful actions. Is that what you're saying?

Habryka @ 2020-10-14T19:39 (+28)

I cannot find any section of this article that sounds like this hypothesis, so I am pretty confident the answer is that no, that is not what the article says.  The article responds relatively directly to this: 

> Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person.

Khorton @ 2020-10-14T20:00 (+24)

Thanks Oli. So I guess this article is arguing that EA Munich was either mistaken about Robin Hanson's character or they were prioritizing reputation over character?

I find this discussion very uncomfortable because I really don't like publicly saying "I have concerns about the impact an individual has on this community" - I prefer that individual groups like EA Munich make the decision on their own and as discreetly as possible - but it seems the only way they could defend themselves is to publicly state everything they dislike about Robin Hanson. I know they've said a couple things already but I don't love that we're encouraging a continued public prosecution and defense of Robin Hanson's character.

Milan_Griffes @ 2020-10-14T20:11 (+20)

I read this piece as proposing a stance towards a social dynamic ("how EA should orient to cancel culture"), rather than continuing litigation of anyone's character.

kokotajlod @ 2020-10-14T20:07 (+15)

Judgments about someone's character are, unfortunately, extremely tribal. Different political tribes have wildly different standards for what counts as good character and what counts as mere eccentricity. In many cases one tribe's virtue is another tribe's vice.

In light of this, I think we should view with suspicion the argument that it's OK to cancel someone because they have bad character. Yes, some people really do have bad character. But cancel culture often targets people who have excellent character (this is something we all can agree on, because cancel culture isn't unique to any one tribe; for examples of people with excellent character getting cancelled, just look at what the other tribe is doing!) so we should keep this sort of rationale-for-cancellation on a tight leash.

Here is a related argument someone might make, which I bring up as an analogy to illustrate my point:

Argument: Some ideas are true, others are false. The false ideas often lead to lots of harm, and spreading false ideas therefore often leads to lots of harm. Thus, when we consider whether to invite people to events, we shouldn't invite people insofar as we think they might spread false ideas. Duh.

My reply: I mean, yeah, it seems like we have to draw the line somewhere. But the overwhelming lesson of history is that when communities restrict membership on the basis of which ideas they deem true, that just leads to an epistemic death spiral where groupthink and conformity reign, ideology ossifies into dogma, and the community drifts farther from the truth instead of continually seeking it. Instead, communities that want to find the truth need to be tolerant of a wide range of opinions, especially opinions advanced politely and in good faith, etc. There's a lot more to say about best practices for truth-seeking community norms, and I'd be happy to go into more detail if you like, but you get the idea.

I think the legal/justice system is another example of this.

Argument: Look, we all know OJ Simpson did it. It's pretty obvious at this point. So why don't we just... go grab him and put him in jail? Or beat him up or something?

Reply: Vigilante justice often makes mistakes. Heck, even the best justice systems often make mistakes. Worse, sometimes the mistakes are systemically biased in various ways, or are not even mistakes at all, but rather intentional patterns of oppression. And the way we prevent this sort of thing is by having all sorts of rules for what counts as admissible evidence, for when the investigation into someone's wrongdoing is supposed to be over and they are supposed to go free, etc. And yeah, sometimes following these rules means that people we are pretty sure are guilty end up going free. And this is bad. But it's better than the alternative.

Linch @ 2020-10-15T20:52 (+40)

EDIT: I plausibly misunderstood kokotajlod, see his reply.

I think there's a dangerous rhetorical slip when we construe "do not invite someone to [speak at] events" as "cancel culture."

> Judgments about someone's character are, unfortunately, extremely tribal. Different political tribes have wildly different standards for what counts as good character and what counts as mere eccentricity. In many cases one tribe's virtue is another tribe's vice.
> In light of this, I think we should view with suspicion the argument that it's OK to cancel someone because they have bad character.

I think this is one of those things that sounds really good in the abstract, but in practice isn't a useful way to think about local group organizing. If I think about the people whose banning/softbanning from our meetups I was part of deciding, "in many cases one tribe's virtue is another tribe's vice" doesn't feel like a particularly compelling abstract argument relative to the more concrete felt sense of "this person negatively impacts the experience of others at the meetup much more than they plausibly derive value from it." Though it hasn't been immediately applicable in the examples I've mentioned (since we can point to concrete issues), I'd argue that in many situations "character" makes the case overdetermined, so we'd arguably have been in the right to exclude people before the concrete problems very obviously surfaced.

I'm also very much not convinced by meta-level arguments that EA (in this context, local EA meetups) has too much exclusion rather than too little. I think people by default (myself included), especially group organizers, have a very strong egalitarian instinct/distaste for being mean/cliquish, so will by default tolerate much higher levels of social infractions than are plausibly +EV. See EY on well-kept gardens.

(Aside: Being explicitly disinvited, akin to "hi, I don't wanna be friends with you", is very unpleasant. This can be worse if being part of the group has become part of your identity. In some cases there's a risk of harm to others as well. For an organizer, disinviting someone who wants to attend your meetup is in a sense admitting failure: if the meetup structure was designed well to begin with, people who should self-select out of attending would do so, or alternatively the meetup would be designed around them in such a way that everybody can plausibly contribute positively. However, if we say that organizers should have events iff they can guarantee that their meetup will not accidentally include people who will be net negative for the group, this is a high cost to organizing, maybe an implausibly high one.)

If we reframe "do not invite someone to events" as "do we want to hang out with them," I feel like the alternative, "we must not consider the quality of someone's character in our friendships, only the quality of their ideas," would be absurd.

Now I think the bar for not inviting someone to a local meetup has to be somewhat higher (or at least different) than the bar for not being friends with them. For example, having similar musical preferences is a valid preference for friends, but would be absurd as an exclusion criterion for a meetup (unless it's a music lovers' meetup for a specific genre).

But the bar shouldn't be infinitely high (and honestly I'm not convinced that it should be very high at all), and I think it's better for local groups to handle this themselves in their local context.

Now I think the modal example of CC is plausibly pernicious for other important reasons, namely that the appeal isn't to whether local group A benefits from interacting with person B, but in a (sometimes implausible) appeal to externalities, that A meeting with B somehow substantially negatively impacts the experience of C, where C can be individuals from across the world who had no desire to attend A. To the extent CC is global, the bar then becomes excluding folks from every social group, which ought to be much higher than the bar for excluding them from a single social group.

> Reply: Vigilante justice often makes mistakes. Heck, even the best justice systems often make mistakes. Worse, sometimes the mistakes are systemically biased in various ways, or are not even mistakes at all, but rather intentional patterns of oppression. And the way we prevent this sort of thing is by having all sorts of rules for what counts as admissible evidence, for when the investigation into someone's wrongdoing is supposed to be over and they are supposed to go free, etc. And yeah, sometimes following these rules means that people we are pretty sure are guilty end up going free. And this is bad. But it's better than the alternative.

Anglo-Saxon criminal justice (in theory, not in practice) has a fundamental conception of erring on the side of innocence: "It is better that ten guilty persons escape than that one innocent suffer." I think the bar ought to be much lower for EA meetups: e.g. if you prefer >10 cases of sexual harassment, physical threats, yelling at people, or just general unpleasantness to a single case of a wronged innocent due to false accusations or plausible misunderstanding, you're not going to have a good time.

kokotajlod @ 2020-10-16T05:44 (+18)

I'm not construing "do not invite someone to speak at events" as cancel culture.

This was an invite-then-caving-to-pressure-to-disinvite. And it's not just any old pressure, it's a particular sort of political tribal pressure. It's one faction in the culture war trying to have its way with us. Caving in to specifically this sort of pressure is what I think of as adopting cancel culture.

Linch @ 2020-10-16T10:56 (+11)

Got it, I must have misunderstood you! I think it's a little difficult for me to understand how much people were talking about the general principles vs the specific example in Munich, and/or how much they believe the Munich example generalizes.

I think this discussion can benefit from more rigor, though it's unclear how to advance it in practice.

kokotajlod @ 2020-10-17T11:37 (+14)

Yeah, I wasn't super clear, sorry. I think I basically agree with you that communities can and should have higher standards than society at large, and that communities can and should be allowed to set their own standards to some extent. And in particular I think that insofar as we think someone has bad character, that's a decently good reason not to invite them to things. It's just that I don't think that's the most accurate description of what happened at Munich, or what's happening with cancel culture more generally -- I think it's more like an excuse, rationalization, or cover story for what's really happening, which is that a political tribe is using bullying to get us to conform to their ideology. As a mildly costly signal of my sincerity here, I'll say this: I personally am not a huge fan of Robin Hanson and if I was having a birthday party or something and a friend of his was there and wanted to bring him along, I'd probably say no. This is so even though I respect him quite a lot as an intellectual.

I should also flag that I'm still confused about the best way to characterize what's going on. I do think there are people within each tribe explicitly strategizing about how the tribe should bully people into conformity, but I doubt that they have any significant control over the overall behavior of the tribe; instead I think it's more of an emergent/evolved phenomenon... And of course it's been going on since the dawn of human history, and it waxes and wanes. It just seems to be waxing now. Personally I think technology is to blame--echo chambers, filter bubbles, polarization, etc. I think that if these trends are real then they are extremely important to predict and understand because they are major existential risk factors and also directly impede the ability of our community to figure out what we need to do to help the world and coordinate to do it.

Stefan_Schubert @ 2020-10-18T09:40 (+32)

This study looked at nine countries and found that polarisation had decreased in five. The US was an outlier, having seen the largest increase in polarisation. That may suggest that American polarisation is due to US-specific factors, rather than universal technological trends.

Here are some studies suggesting the prevalence of technology-driven echo chambers and filter bubbles may be exaggerated.

Andreas_Häfner @ 2021-03-08T16:52 (+11)

Please note that this study does not measure "polarization" - but instead "polarization between the top two parties"! See:
> "We analyze the sensitivity of our findings to restricting attention to the top two parties in each country and focusing on periods in which this pair of parties is stable"

This does not work for countries with proportional electoral systems. I can speak to the German case, since I live there:

The two big parties are CDU/CSU (christian democrat / conservative) and SPD (social democrat). Both parties have become more similar to each other over the decades, and SPD in particular has bled voters like crazy for it. Here you can see current and historical polling data: CDU/CSU in black, SPD in red.

The most notable events here may be, for the CDU/CSU, the "Energiewende" and the response to the mass migration in the context of the Syrian civil war in 2015; and for the SPD, the "Agenda 2010" and its continual role as junior partner to the conservatives in "grand coalitions".

This has opened up space to the right of the conservatives (the AfD has taken this space), and to the left of the SPD (taken up in part by the far-left party PDS / DIE LINKE and the green party BÜNDNIS 90 / DIE GRÜNEN). The SPD is now arguably not in the top two anymore; the Greens seem to have taken that spot, possibly for good.


So, indeed, the polarization between CDU/CSU and SPD may have gone down, but this does not generalize. Germany has also become more polarized.

Stefan_Schubert @ 2021-03-09T15:17 (+5)

Figure 2 looks at the top two parties, but the legend to Figure 1 doesn't say it's restricted to the top two parties. And Figure 1 also shows decreasing polarisation in Germany. However, I haven't looked into this research in depth.

kokotajlod @ 2020-10-18T14:07 (+3)

Thanks! This is good news; will go look at those studies...

RyanCarey @ 2020-10-18T10:36 (+3)

Interesting that one of the two main hypotheses advanced in that paper is that media is influencing public opinion, yet the medium in question is not the internet but TV!

> The rise of 24-hour partisan cable news provides another potential explanation. Partisan cable networks emerged during the period we study and arguably played a much larger role in the US than elsewhere, though this may be in part a consequence rather than a cause of growing affective polarization. Older demographic groups also consume more partisan cable news and have polarized more quickly than younger demographic groups in the US (Boxell et al. 2017; Martin and Yurukoglu 2017). Interestingly, the five countries with a negative linear slope for affective polarization all devote more public funds per capita to public service broadcast media than three of the countries with a positive slope (Benson and Powers 2011, Table 1; see also Benson et al. 2017). A role for partisan cable news is also consistent with visual evidence (see Figure 1) of an acceleration of the growth in affective polarization in the US following the mid-1990s, which saw the launch of Fox News and MSNBC.

(The other hypothesis is "party sorting", wherein people move to parties that align more with their ideology and social identity.)

Perhaps campaigning for more money for PBS, or somehow countering Fox and MSNBC, could be really important for US democracy.

Also, if TV has been so influential, that suggests that even if online media isn't yet influential at the population scale, it may be influential for smaller groups of people, and that it will be extremely influential in the future.

Stefan_Schubert @ 2020-10-18T12:05 (+5)

Some argue, however, that partisan TV and radio was helped by the abolition of the FCC fairness doctrine in 1987. That amounts to saying that polarisation was driven at least partly by legal changes rather than by technological innovations.

Obviously media influences public opinion. But the question is whether specific media technologies (e.g. social media vs TV vs radio vs newspapers) cause more or less polarisation, fake news, partisanship, filter bubbles, and so on. That's a difficult empirical question, since all those things can no doubt be mediated to some degree through each of these media technologies.

Linch @ 2020-10-18T05:13 (+3)
> I think that if these trends are real then they are extremely important to predict and understand because they are major existential risk factors and also directly impede the ability of our community to figure out what we need to do to help the world and coordinate to do it.

This seems like an interesting line of reasoning, and I'd maybe be excited to see more strategic thinking around this.

Might eventually turn out to be pointless and/or futile, of course.

kokotajlod @ 2020-10-18T06:43 (+1)

I agree! I'd love to see more research into this stuff. In my relevant pre-AGI possibilities doc I call this "Deterioration of collective epistemology." I intend to write a blog post about a related thing (Persuasion Tools) soon.

Ben Pace @ 2020-10-14T22:38 (+5)

Thanks for the long writeup. FWIW I will share some of my own impressions.

Robin's one of the most generative and influential thinkers I know. He has consistently produced fascinating ideas and contributed to a lot of the core debates in EA, like giving now vs later, AI takeoff, prediction markets, the great filter, and so on. His comments regarding common discussion of inequality are all of a kind with the whole of his 'elephant in the brain' work, noticing weird potential hypocrisies in others. I don't know how to easily summarize the level of his intellectual impact on the world, so I'll stop here.

It seems like there have been a couple (2-4) of news articles taking potshots at Hanson for his word choices, off the back of an angry mob, and this is just going to be a fairly standard worry for even mildly interesting or popular figures, given that the mob is going after people daily on Twitter. (As the OP says, not everyone, but anyone.)

It seems to me understandable if some new group like EA Munich (this was one of their first events?) feels out of their depth when trying to deal with the present-day information and social media ecosystem, and that's why they messed up. But overall this level of lack of backbone mustn't be the norm, or else the majority of interesting thinkers will not be interested in interacting with EA. I am less interested in contributing to and collaborating with others in the EA community as a result of this. I mean, there are lots of things I don't like that are just small quibbles, which is your price for joining, but this kind of thing strikes at the basic core of what I think is necessary for EA to help guide civilization in a positive direction, as opposed to being some small cosmetic issue or personal discomfort.

Also, it seems to me like it would be a good idea for the folks at EA Munich to re-invite Robin to give the same talk, as a sign of goodwill. (I don't expect they will and am not making a request, I'm saying what it seems like to me.)

Aaron Gertler @ 2020-10-15T06:26 (+16)

Any discussion of the Munich cancellation as a potential indicator of "norms" should probably note that there are hundreds of talks by interesting thinkers each year at EA conferences/meetups around the world. At least, people I'd consider interesting, even if they don't come into conflict with social norms as regularly as Robin.

On a graph of "controversial x connection to EA," Robin is in the top corner (that is, I can't think of anyone who is both at least as controversial and at least as connected to EA,  other than maybe Peter Singer). So all these other talks may not say much about our "norm" for handling controversial speakers. But based on the organizers I know, I'd be surprised if most other EA groups (especially the bigger/more experienced ones) would have disinvited Robin.

In terms of your own feelings about contributing/collaborating in EA, do you think sentiments like those of the Munich group are common? It seems like their decision was widely criticized by lots of people in EA (even those who, like me, defended their right to make the decision/empathized with their plight while saying it was the wrong move), and supported by very few. If anything, I updated from this incident in the direction of "wow, EA people are even more opposed to 'cancel culture' than I expected."

pranomostro @ 2021-10-10T16:39 (+1)

(More for archival purposes than anything else)

> this was one of their first events?

This is definitely not the case: the record of events for EA Munich goes back to May 2018, and I'm pretty sure the group was founded in 2015/2016 (although at the time of the decision, only a few of the original founding members were still involved).

jackmalde @ 2020-10-14T21:00 (+4)
> For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person.

We certainly shouldn't 'ignore' or give 'carte blanche' to the bad in a person, but I don't think that necessarily means we have to cancel them.

I'm not saying that there shouldn't be occasions where we do in fact cancel someone on account of their character, but as someone who identifies as a consequentialist EA, I've never understood the reluctance to do something 'for the greater good'. The clue is in the word 'greater'.

If someone is a shitty person but having them speak will in expectation lead to greater benefit than harm, it seems to me we should let them speak. If the expected harm exceeds the expected benefit then of course let's cancel, but let's continue to do these (rough) EV calculations on a case-by-case basis - this is a strength of the EA community.
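(As a toy sketch of what such a rough, case-by-case EV calculation might look like, with entirely made-up probabilities and values, so only the shape of the comparison matters, not the numbers:)

```python
# A toy sketch of the kind of rough, case-by-case EV calculation meant above.
# All probabilities and values are made up for illustration only.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs; returns sum of p * v."""
    return sum(p * v for p, v in outcomes)

# Hypothetical outcomes of hosting a controversial but insightful speaker.
invite = [
    (0.70, +10),   # talk goes well, attendees learn something valuable
    (0.25, -5),    # minor controversy, some reputational cost
    (0.05, -40),   # major blow-up, lasting harm to the group
]

# Hypothetical outcomes of cancelling instead.
cancel = [
    (0.80, 0),     # nothing much happens
    (0.20, -8),    # chilling effect / criticism for cancelling
]

ev_invite, ev_cancel = expected_value(invite), expected_value(cancel)
print(f"EV(invite) = {ev_invite:.2f}, EV(cancel) = {ev_cancel:.2f}")
# With these made-up numbers, EV(invite) = 3.75 and EV(cancel) = -1.60,
# so the rough calculation favours going ahead; different inputs flip it.
```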

michaelchen @ 2020-12-02T05:11 (+3)

Minor comment regarding the case of Greg Patton: As someone who heard about the story in early September and was shocked at the fallout, it was heartening to read the aftermath in https://www.lamag.com/citythinkblog/usc-professor-slur/ and https://poetsandquants.com/2020/09/26/usc-marshall-finds-students-were-sincere-but-prof-did-no-wrong-in-racial-flap/ and see that the university eventually “concluded there was no ill intent on Patton’s part and that ‘the use of the Mandarin term had a legitimate pedagogical purpose.’”