Replaceability v. 'Contextualized Worthiness'

By Tee @ 2022-07-27T16:48 (+9)

[Reposted because it was correctly pointed out that karma didn't seem to be working in short-form mode for some reason]

My guess is that 80K is likely unaware of this, but the concept of 'replaceability',[1] or at least the concept as my clients almost exclusively seem to interpret it,[2] seems to wreak havoc as a mental model on people's self-assessments of whether they should be taking on or staying in a given role. I see lots of evidence that it can be continually corrosive even for those holding a role over a long period of time.

This feels like a big problem. In fact, I’d go as far as to say that I believe it’s a primary culprit for imposter syndrome and decision paralysis in EA.

Anxieties around replaceability are often delivered to me as a completely decontextualized hypothetical exercise, which runs: "Is it possible that there's someone in the world who would be better at this role than me? If so, my taking this role could be critically bad for the world." The weight of this is likely exacerbated in leadership positions.

Putting high credence in decontextualized replaceability arguments seems obviously flawed to me, but more importantly, it seems to have the psychological effect of egregiously warping risk calculations around career exploration, the patient accumulation and consolidation of career capital, and particularly the willingness to assume responsibility and take action.

You can basically condemn yourself (internally & socially) as a bad person for taking a role.[3]

A framing that I believe calibrates people better might be called something like “contextualized worthiness” considerations.

Here’s a handful: 

The above ‘contextualized worthiness’ considerations often have the effect of getting people to track more inputs from reality, rather than relying too heavily on an abstract thought exercise that yields an absurdly high bar for action and often bottoms out in a nasty set of implications for any misstep.

If 80K doesn't already plan to do this, a suggestion for remedial action would be an additional series of posts adding nuance to this concept for people. Many people I speak to could use this.[4]

More on my coaching trials with a dozen EA leaders here.

  1. ^
  2. ^

    It could be claimed that this bastardizes the concept, but how concepts are originally designed and how they spread memetically are very different.

  3. ^

    Worse still, interacting mental models often reinforced by EA can make people feel morally very bad for inaction.

  4. ^

    Someone made the point that knowing how messages will spread is quite hard. Were I speaking to someone from 80K, I would hope for the tenor of my message to be: "Hey, I get that mass media is hard, and you've largely been doing great, but we've potentially (re)discovered something pretty big as a result of your messaging. Would you seriously consider following up here?"