On Loyalty

By Nathan Young @ 2023-02-20T10:29 (+56)

Epistemic status: I am confident about most individual points, but there are probably errors overall. I imagine that if this piece gets lots of comments, I'll have changed my mind by the time I've finished reading them.

I was having a conversation with a friend, who said that EAs weren't loyal. They said that the community's recent behaviour had made them fear that they would be attacked for small errors. Hearing this I felt sad, and wanted to understand.

Tl;dr

Intro

Testing situations are when I find out who I am. But importantly, I can change. And living right matters to me perhaps more than anything else (though I am hugely flawed). So should I change here?

I go through these situations one by one because I think this is actually hard and I don't know the answers. Feel free to nitpick.

What is Loyalty (in this article)?

I think loyalty is the sense that we do right by those who have followed the rules. It's a coordination tool: "you stuck by the rules, so you'll be treated well", "you are safe within these bounds, you don't need to fear".

I want to be loyal - for people to think "there goes Nathan, when I abide by community norms, he won't treat me badly".

The notion of safety implies a notion of risk. I am okay with that. Sometimes a little fear and ambiguity is good - I'm okay with a little fear around flirting at EAGs because that reduces the amount of it, and I'm okay with ambiguity in how to work in a biorisk lab. There isn't always a clear path to "doing the right thing", and if, in hindsight, I didn't do it, I don't want your loyalty. But I want our safety, uncertainty, and disaster circles to be well calibrated.

I am sure I didn't come up with this idea. Let me know what it's called.

Some might say "loyalty isn't good" - that we should seek to treat those in EA exactly the same as those outside. For me this is equivalent to saying our circles should be well calibrated - if someone turns out to have broken the norms we care about, then I already don't feel a need to be loyal to them.

But to me, a sense of ingroup loyalty feels inevitable. I just like you more, and differently, than those outside EA. It feels naïve to say otherwise. Much like "you aren't in traffic, you are traffic": I don't just feel feelings, to some extent I am feelings.

So let's cut to the chase.

The One With The Email

Nick Bostrom sent an awful email. He wrote an apology a while back, but he wrote another to avoid a scandal (whoops). Twitter, at least, did not think he was sorry. CEA wrote a short condemnation. There was a lot of upset on the forum. Hopefully we can agree on this.

Let's look at this through some different frames.

The loyalty frame

I think I was a bit disloyal to Bostrom. I don't know him, but I don't think he's the thing described by "a racist"[1]. Likewise the email, while awful, I am relatively confident isn't some coded call to the far right. It was a point about truth framed in an edgy way. Then the apology was clumsy and self-righteous, and failed to convey to many readers that he was sorry he sent it. To many it read more like he was sad that he had to write an apology. So at the time I liked that the CEA criticism meant that EA probably wouldn't be dragged into another scandal.

But what did I owe Bostrom? I don't think I owe public EAs some special loyalty as EAs, nor do I want it from you if you dislike my actions. But I think I owe it to public figures who create valuable work and aren't racists not to imply that they are. At the same time, it's okay to say he was a fool to write it, but only if I'm clear about my general views.

I think it's underrated how productive Bostrom is. He's created loads of useful ideas. I'm glad he did and I'm glad that he answers our emails. I probably would want Bostrom involved much more than a random EA (though I wouldn't want a culture that drives away productive and non-productive people alike). I sense this is actually the nub of the discussion, though it feels aversive to raise - how many EAs leaving is Bostrom worth?

Finally, there is an increased benefit to helping people who are being attacked. Émile Torres was writing an article which would portray that email as evidence of Bostrom's racism. I think that portrayal was incorrect, and Émile should not benefit from attempting it.

The harm frame pt. 1

Some people really disliked Bostrom's apology. But I sense that, more than that, they disliked an immediate community response to protect him. I guess these people want to feel that racist emails won't be sent by people they respect, and that they won't have to fear that scientific racism undergirds bad treatment they receive.

I have heard a number of people say things like this so I take it seriously. Bostrom's apology knocked their confidence that EA was safe. Or would be safe. I guess a few people will leave the EA community as a result. That's a real cost.

The people I talked to who hold this view liked CEA's response. It gave them confidence that this wouldn't happen again.

The harm frame pt. 2

Some people disliked the condemnation of Bostrom. I guess they felt that while he had been unwise a long time ago, he wasn't a racist, so we shouldn't imply he was.

For them the backlash against saying this was scary. They were articulating true and relevant beliefs but were being told to keep quiet because it was upsetting.

They disliked CEA's response because they thought it was literally false, implied that all EAs supported it, and was PR-ey rather than straightforward. They thought it was the wrong process (an edict from on high that didn't need to happen) and the wrong answer (Bostrom was dumb but not a racist).

The honesty frame

I think the CEA statement does pretty mediocrely on the onion test, in that I don't think the first reading gives the impression of what the institution actually believes.

Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.

— The Centre for Effective Altruism

For me, the first line is fine. I think we can get into it, but many of the things we believe actually sit under that idea. As for the rest, while I can't actually disagree with any of it, I think it implies Bostrom was racist and... I don't think he was. Moreover, I think it sends a couple of signals:

I have complex feelings about the first signal and I guess I'll pass on whether I agree or disagree. In hindsight, I don't support the second, as cheap as that is to say now.

It's worth considering what signals things send/what norms they set. I guess here the signals are "don't say stuff that upsets people", "be careful what you say if it might damage our reputation", and also "like everywhere else, racism is an issue mired in complexity, fear and status games".

I think I'd prefer a unified signal that we all agreed with rather than these quite different signals to different groups.

Personally I could have sent a signal that:

But I didn't send such signals.

Attempt at consensus

I struggle to weigh these harms to different groups, to be honest. I know that usually we go with "all racism must be dealt with swiftly and maximally", but I think my desire to do that comes more out of fear than truth. I think there was a way to have the discussion that left most people feeling that the EA Forum wasn't a place for racism, but also that standing up for Bostrom not being a racist wasn't an awful thing to do. I think we failed there. And I see that as a community failure.

To avoid a sort of general "we all could have done better", I think each party could have specifically done better. Those defending Bostrom could have talked more about their emotions - this wasn't just a discussion about truth; I think it was about them feeling safe in the community in future. Likewise, those who liked CEA's response shouldn't be pushing for a rapid response to every scandal. Cold takes are good, I think (though as I've written elsewhere, the silence on FTX is almost farcical at this point).

I think an additional complexity is that Bostrom's comms were just really bad here, and to the extent that he needs to do comms, he should stop or get better. Likewise, his org, I guess, needs to hire people from both parties here, so it's worth understanding and engaging with both. I don't think it's good to turn on people in a crisis, but often crises do reveal actual faults or strengths. Bostrom could have revealed himself as a master communicator. EA could have revealed itself as wise and balanced when managing such issues. In both cases, that's not what happened.

The underrated thing is that there really is some kind of tradeoff between Bostrom feeling he's part of EA (which might affect his productivity or how much he recommends EA ideas) and some people being a part of EA who otherwise wouldn't be. It feels right to surface that these things are currently at odds. And perhaps an honest accounting of what we think Bostrom currently believes and does fits in here too, though I don't know where.

I think it's pretty cowardly to write something like this and not attempt to give actual solutions. So, here is what I wish I'd done at the time.

The EA Governance Slack Leaks

In the wake of the FTX crisis, the New Yorker wrote an article with many excerpts from the EA governance Slack. In particular, it attempted to skewer Robert Wiblin for these comments:

[In reference to some article] This is pretty bad because our great virtue is being right, not being likeable or uncontroversial or ‘right-on’ in terms of having fashionable political opinions.

Again, some frames:

The loyalty frame

Wiblin is a senior member of this community. He and others should be able to talk without fear of being quoted out of context. If they can't, then either they will move discussion to some more private space, or they will have to be less clear about what they think. Both of those options seem bad.

I am upset that someone leaked extensive amounts of private conversation without evidence of severe wrongdoing. And unlike much of what I say about whistleblowers[3] (who I think we could serve better), I think this is an offence which should get whoever did it blacklisted.

I say a lot of things. And I stand by the content of almost all of them. But I misphrase things all the time. I would not want my comments in a big magazine. And, absent serious wrongdoing, I don't think you should either.

The truth frame

This is, I guess, a thing Wiblin actually believes. Until the FTX crisis, I think I'd have agreed with him (it's a bit more complex now - how does one weigh correctly predicting a pandemic against incorrectly predicting a fraudster?). So why not publish it?

Because people are allowed not to say things. We don't live in a panopticon; we can't all read each other's minds. And the general consensus is that that's a good norm. So it should be here too.

An attempt at consensus

As you know, I'm a big fan of prediction markets, but I draw the line at markets on relationships (among other things). This is because I once mused about publicly forecasting a friend's relationship and they were upset. Even though I wouldn't have been saying anything untrue, some beautiful things cannot stand scrutiny. This was a dumb and callous mistake from me, but I resolved not to make it again. Hence, I try to be especially cautious around surfacing things that are vulnerable and personal.

While not exactly the same, I think that good governance requires some privacy. It requires space to think about your views, to change your mind and speak freely. We should have an extremely strong prior against sharing these conversations. 

This leak, in my opinion, did not meet this bar. Had the Slack contained evidence that EA figures knew about FTX's fraud, that certainly would have; but to me this falls short by at least an order of magnitude. And seemingly a lot of documents were shared. There weren't crimes, just opinions. Currently, I think whoever shared it should be ashamed of themselves, and I would seriously consider kicking them out of community events. For clarity, I absolutely do not think that about those who gave sexual harassment stories to Time. For me the two are night and day - by all means give evidence of severe wrongdoing, but never share verbatim random conversations for political gain.

Some early reviewers think that final point is a crux - that sometimes it's acceptable to use internal private knowledge to steer EA if people aren't convinced by your arguments. I think such thinking destroys trust and turns the whole community into an adversarial environment. We are a community that argues its positions, not one that uses newspapers to win arguments it couldn't win by discussion.

Talking With Journalists

It seems to be the consensus that we shouldn't do this - that the costs outweigh the benefits.

The cost-benefit frame

If a journalist contacts you, it's unlikely that you are going to significantly shift the narrative of their story. But you might accidentally give them a quote that makes the story a lot worse, or leak a load of private Slack messages, smh.

Everyone likes to claim that they respect communities that are transparent, but I don't see that in practice. Does OpenPhil gain respect from people other than us for making its grants easy to critique? My guess is no. So really we should optimise transparency for what we want to know, rather than for external audiences.

Here, talking to journalists has a high downside and small upside. You play in an adversarial environment. They are experts, I am not.

The reputation/accuracy frame

I think you should aim for a good reputation, rather than good PR. And I think you should aim for it by actually... being good and then communicating that accurately. If I am bad, I deserve a bad reputation. 

So if we do bad things and a journalist asks about them, I feel a strong temptation to go on the record if I think the story is framed correctly (though without sharing private quotes from others). I usually try to get friends to talk me out of it. What I will say is that I don't mind worsening the image of EA when that worse image matches reality.

The loyalty frame

Here, my gut says that talking to journalists is a betrayal of the community - others don't expect me to do it, so it's disloyal. I don't want to read people speaking for me inaccurately or misframing discussions.

I guess at times the "loyalty" part of me can see journalists as our outgroup, who should at worst be placated and at best ignored. This is a soldier mindset, though before criticising it, I'll note it seems very standard in other large communities.

I guess I'd like us to shift our loyalty to "truth in an adverse information environment", so that without being naïve we can still acknowledge, or even publicly affirm, the flaws within EA.

The Notorious S.B.F.

I'm not gonna introduce this one. You know the story.

The loyalty frame?

I thought SBF was one of the best people alive. Within 12 hours I turned on him. Was I disloyal?

And do I owe loyalty to a fraudster? What about a fraudster who was encouraged to take big risks by people like me?

The main response from people I've talked to about this article concerns the speed with which I heel-turned on SBF, before I could be sure. My main response here is that there are not many legal ways to lose $8bn, and that my work as a forecaster teaches me to update fast. Within ~6 hours, I'd have been happy to bet at 90%+ odds that SBF had done immoral things, and I did take flak for the position I took. I think that position was defensible then and I think it's defensible now.

In that way, I don't feel loyal to people who did immoral things. You should not expect me to defend you if you do them. Perhaps my friend fearing being hung out to dry was right. The question is what actions they're concerned about.

But was what SBF did immoral? Austin makes the case that SBF was taking risks and allocating money towards good causes. That is behaviour we should want to stand by if it goes badly - maybe especially if it goes badly, since if it goes well it has its rewards in money and status.

I don't find this case compelling - though I notice that conclusion is awfully convenient for me. I don't know.

The coherence frame

If SBF had won his bets but then I'd found out about the fraud, would I have denounced him? I doubt it. In that world, I'd justify it by saying "he knew the risks he was taking", then happily take his cash. Maybe I ought to have, but I know myself well enough to say that, without this moment to stop and consider, I probably wouldn't have.

That's not to say I won't act differently in future, though I really hope this doesn't happen again.

This leaves me with a problem if I don't want to be a charlatan here too. And honestly, I don't know the answer. Or maybe I'm afraid of the answer. Am I a charlatan? Integrity really matters to me, and it scares me to think that maybe I just don't have much of it here.

The empathy frame

I think I am badly compromised by being able to empathise much more easily with SBF than with the millions who lost money on FTX (though I think my public statements are things I'd stand by).

Then again, I find it much easier to empathise with the people who lost money on FTX than with the people who won't live if we all die. Do these ends justify these means? But like, actually?

Perhaps this isn't really in the discussion

One thing I think is underrated is that maybe this wasn't really about the ends justifying the means. Maybe it was just incompetence and then fraud to cover it up. At that point I don't feel much loyalty.

No attempt at consensus

I think my response is suspect. Even with the knowledge of fraud, I probably would have supported SBF et al. if successful and denounced them if not. If there were another possibly fraudulent EA, I think I'd denounce them much sooner.

I don't know how to synthesise this. Let me know if you do. Maybe write a comment that others can agree with.

Some throwaway remarks on loyalty in regard to other recent issues

Non-Longtermism

Has it been disloyal to global health folks to use the positive image they have generated for pushing catastrophic risk work?

I lean quite strongly "no" here. I think most arguments against the importance of catastrophic risk are bad. I would support global health/animal welfare folks if they left, though I'd be sad.

Going forward (for me)

I am okay with the idea that we messed this up, so here are some personal thoughts:

Going forward (more broadly)

I would like to:

Thanks to everyone who looked over this. In the past, I have found that thanking people for controversial blogs is controversial in itself. Suffice it to say I'm grateful and that ~6 people looked over it.

In the Comments

I've written 5 proposals for possible future EA community norms around loyalty: agreevote if you agree, disagreevote if you don't (neither affects karma). Can you write statements that people agree with more?

Upvotes and downvotes are for how important things are to be seen. Some things that are true are boring and some things that are false are worth reading.

  1. ^

I doubt he treats people significantly differently on the basis of race. Though I wouldn't be surprised if he sometimes makes people uncomfortable while talking about it. I also wouldn't be surprised if people of colour were a bit less keen to work in his org after this.

  2. ^

    Topic and medium matter. While I think there is a forum culture of prising apart any issue, this isn't true everywhere. Twitter is not a place to randomly challenge people you don't know well unless that is a norm they have consented to. Feel free to do it to me, do not feel free to do it to random EAs. It's like striding up to someone and poking them in the chest at a party. It comes across as aggressive. And if you knowingly do something aggressive, I'm gonna think you intend to be aggressive.

  3. ^

This is a WIP article, but it's evidence that I was pro-whistleblower even before FTX: https://forum.effectivealtruism.org/posts/QCv5GNcQFeH34iN2w/what-we-are-for-community-correction-and-scale-wip (there is some chance I edited it afterwards and forgot, but I don't think so).

  4. ^

One reviewer noted they didn't sign up for SBF's fraud. Well, I didn't either. I think that's a different problem from the global health/catastrophic risk split.


Nathan Young @ 2023-02-20T10:30 (+16)

Proposed community norm. Agreevote if you agree. 

Be very careful when talking to journalists. Have a strong prior against it unless you are confident they are honest, in which case have a weak prior towards it. Consider what story they are trying to write. Will your comment make it more accurate or not? And be careful: they are experts and you aren't. But if you trust them, it's okay to give accurate information in areas where EA's reputation is wrong (even if it's too positive).

Guy Raveh @ 2023-02-20T13:37 (+4)

I agreevoted but I don't think this is a community norm. It's just a life skill.

Nathan Young @ 2023-02-20T14:54 (+5)

I think many communities have a norm against talking to journalists. Or talking to them in a specific way.

E.g. political communities will probably burn you if they find you talking to journalists, but political people cultivate relationships with journalists to whom they feed info to advance their aims.

titotal @ 2023-02-20T19:08 (+10)

If SBF had won his bets but then I'd found out about the fraud, would I have denounced him? I doubt it. In that world, I'd justify it by saying "he knew the risks he was taking", then happily take his cash.

community norm:

Fraud and other morally questionable behavior should always be denounced, regardless of whether it "paid off". 

Nathan Young @ 2023-02-20T20:47 (+7)

Would you have done so in that other world? I'm not sure I would have.

Nathan Young @ 2023-02-20T10:32 (+10)

Proposed community norm. Agreevote if you agree. 

People should not be condemned for things we think they don't actually believe, or that we don't actually think are issues. We should at least weather more than a day of criticism before folding to stuff like this.

Jason @ 2023-02-21T17:17 (+3)

Two separate statements there . . .

Nathan Young @ 2023-03-01T17:59 (+2)

Sure, upvote only if you agree to both.

Nathan Young @ 2023-02-20T10:30 (+7)

Proposed community norm. Agreevote if you agree. 

Do not share private information publicly unless it is evidence of serious wrongdoing. Doing so damages our ability to trust each other.

Jason @ 2023-02-20T13:56 (+5)

Disagreevote because "private information" is too vague and likely too broad. What you're describing with the Slack channel is what we would call "deliberative process" in my field -- and a pattern of leaking that kind of information is pretty corrosive to the ability to explore ideas freely. On the other hand, I think the norm should be significantly weaker for official actions of organizations, even if knowledge of those actions is "private." And I think whether the person told you the information in confidence is a pretty significant factor here.

Nathan Young @ 2023-02-20T14:57 (+2)

I suggest people write their own attempts at community norms.

Nathan Young @ 2023-02-20T12:22 (+2)

Proposed community norm. Agreevote if you agree. 

We do not want to be a community where upsetting issues unrelated to doing good are regularly discussed - especially not in a callous or edgy way. If they need to be discussed, we should be able to discuss them, but not frequently. If we became a community that discussed upsetting issues regularly, we wouldn't want to be here or to invite others to be here.

edited for clarity.

Ubuntu @ 2023-02-20T13:45 (+5)

Technically I agree, but I feel like some of your proposals lead people to believe there's a big problem when there isn't one. We didn't bring up The Email; Torres did. And then lots of people understandably attacked a well-respected member of the community, and then lots of people understandably defended him. We weren't discussing it to be "edgy" and we don't discuss it "regularly".

I suspect there were a number of callous comments, given the sensitive nature of the topic, so I might support a version of this that's about reminding ourselves of the harm those can do as and when appropriate, but I think the forum mods are pretty good at this.

Jason @ 2023-02-20T13:57 (+2)

The last clause is difficult to understand -- could you clarify?

Nathan Young @ 2023-02-20T14:55 (+2)

How's that?

Jason @ 2023-02-20T15:04 (+2)

Clearer! Thanks!

Ulrik Horn @ 2023-02-21T14:26 (+1)

This seems somewhat overconfident, given what seem to me like robust results from e.g. resume experiments:

I doubt he treats people significantly differently on the basis of race

I acknowledge that you did use the phrase "significantly differently", but I still would not consider discrimination in hiring processes insignificant. That said, I do not think this detracts much from your overall argument, and I might be being nit-picky here, but it felt important to me to point out. In other words: I would expect most people to exhibit somewhat significant differential treatment of people based on race.

I am no expert in this; perhaps there is a solid refutation of resume experiment results - I would be happy to learn that my current understanding is wrong, as it would indicate a less discriminatory world!

Nathan Young @ 2023-02-21T14:27 (+3)

All the more reason for CV blind hiring!

Ulrik Horn @ 2023-02-21T14:29 (+3)

Absolutely agree! I like that Rethink Priorities explicitly ask people not to include a photo in their applications (if I remember correctly).