Distancing EA from rationality is foolish

By Jan_Kulveit @ 2024-06-25T21:02 (+136)

Edit: If you are landing here from the EA Forum Digest, note that this piece is not about Manifest, and I don't want it to be framed as being about Manifest.

Recently, I've noticed a growing tendency within EA to dissociate from Rationality. Good Ventures have stopped funding efforts connected with the rationality community and rationality, and there are increasing calls for EAs to distance themselves. 

This trend concerns me, and I believe an important distinction needs to be made when considering this split.

We need to differentiate between 'capital R' Rationality and 'small r' rationality. By 'capital R' Rationality, I mean the actual Rationalist community, centered around Berkeley: A package deal that includes ideas about self-correcting lenses and systematized winning, but also extensive jargon, cultural norms like polyamory, a high-decoupling culture, and familiarity with specific memes (ranging from 'Death with Dignity' to 'came in fluffer').

On the other hand, 'small r' rationality is a more general concept. It encompasses the idea of using reason and evidence to form conclusions, scout mindset, and empiricism. It also includes a quest to avoid getting stuck with beliefs resistant to evidence, techniques for reflecting on and improving mental processes, and, yes, many of the core ideas of Rationality, like understanding Bayesian reasoning.

If people want to distance themselves, it's crucial to be clear about what they're distancing from. I understand why some might want to separate from aspects of the Rationalist community – perhaps they dislike the discourse norms, worry about negative media coverage, or disagree with prevalent community views. 

However, distancing yourself from 'small r' rationality is far more radical and likely less considered. It's similar to rejecting core EA ideas like scope sensitivity or cause prioritization just because one dislikes certain manifestations of the EA community (e.g., SBF, jargon, hero worship).

Effective altruism is fundamentally based on pursuing good deeds through evidence, reason, and clear thinking - in fact, when early effective altruists were looking for a name, one of the top contenders was 'rational altruism'. Excising the aspiration to think clearly would, in my view, remove something crucial.

Historically, the EA community inherited a lot of epistemic aspects from Rationality[1] – including discourse norms, emphasis on updating on evidence, and a spectrum of thinkers who don't hold either identity closely, but can be associated with both EA and rationality.[2]

Here is the crux: if the zeitgeist pulls effective altruists away from Rationality, they should invest more in rationality, not less. Because it is critical for effective altruism to cultivate reason, someone will need to work on it. If EAs will no longer mostly be talking to people connected in some way with Rationality, someone else will need to pick up the baton.

  1. ^

Claire Zabel in 2022 expressed a similar worry:

    Right now, I think the EA community is growing much faster than the rationalist community, even though a lot of the people I think are most impactful report being really helped by some rationalist-sphere materials and projects. Also, it seems like there are a lot of projects aimed at sharing EA-related content with newer EAs, but much less in the way of support and encouragement for practicing the thinking tools I believe are useful for maximizing one’s impact (e.g. making good expected-value and back-of-the-envelope calculations, gaining facility for probabilistic reasoning and fast Bayesian updating, identifying and mitigating one’s personal tendencies towards motivated or biased reasoning). I’m worried about a glut of newer EAs adopting EA beliefs but not being able to effectively evaluate and critique them, nor to push the boundaries of EA thinking in truth-tracking directions.

  2. ^

The EA community actually inherited more than just ideas about epistemics: compare, for example, Eliezer Yudkowsky's 2007 essay on Scope Insensitivity with current introductions to effective altruism in 2024.


David Mathers @ 2024-06-27T09:34 (+44)

I think two things are being conflated here into a third position no one holds:

-Some people don't like the big R community very much.

-Some people don't think improving the world's small-r rationality/epistemics should be a leading EA cause area.

These are getting conflated into:

-People don't think it's important to try hard at being small-r rational. 

 

I agree that some people might be running together the first two claims, and that is bad, since they are independent, and it could easily be high impact to work on improving collective epistemics in the outside world even if the big R rationalist community was bad in various ways. But holding the first two claims (which I think I do moderately) doesn't imply the third. I think the rationalists are often not that rational in practice, and are too open to racism and sexism. And I also (weakly) think that we don't currently know enough about "improving epistemics" for it to be a tractable cause area. But obviously I still want us to make decisions rationally, in the small-r sense, internally. Who wouldn't! Being against small-r rationality is like being against kindness or virtue; no one thinks of themselves as taking that stand.

Jan_Kulveit @ 2024-06-28T07:17 (+24)

I don't think so. I think in practice:

1. Some people don't like the big R community very much.

AND

2a. Some people don't think improving the EA community's small-r rationality/epistemics should be one of the top ~3-5 EA priorities.
OR
2b. Some people do agree this is important, but don't clearly see the extent to which the EA community imported healthy epistemic vigilance and norms from Rationalist or Rationality-adjacent circles.

=>

- As a consequence, they are at risk of distancing from small-r rationality as collateral damage / by neglect.


Also, I think many people in the EA community don't think it's important to try hard at being small-r rational at the level of aliefs. Whatever the actual situation revealed by actual decisions, I would expect the EA community to at least pay lip service to epistemics and reason, so I don't think stated preferences are strong evidence.

"Being against small-r rationality is like being against kindness or virtue; no one thinks of themselves as taking that stand." 
Yes, I do agree almost no one thinks about themselves that way. I think it is maybe somewhat similar to "being against effective charity" - I would be surprised if people thought about themselves that way.

Linch @ 2024-06-27T12:14 (+1)

Eh, I agree with you that LW-style rationalists are far from sinless in this regard, but it's hard to not notice that many people, including on EAF, seem to have a strong revealed preference for irrationality. 

I'm not sure why; one guess I have is that people (subconsciously) correctly identify rational irrationality as the best strategy to come across as loyal to one's tribe. I find this sad, but I don't have a real answer here; the incentives are strong and point in the wrong direction. 

In my ideal culture, everybody will be polite about it, but sloppy thinking will still be heavily censured, rather than rewarded. 

 

(slightly feverish, apologies if I'm not making as much sense, ironically). 

MichaelStJules @ 2024-06-27T18:08 (+12)

What instances do you have in mind by "strong revealed preference for irrationality"?

Linch @ 2024-06-27T22:48 (+7)

On LW, I thought comments here were very poor, with a few half-exceptions. It wasn't even a controversial topic! 

On EAF, I pragmatically am not that interested in either starting new fights, or relitigating past ones. I will say that making my comment here solely about kindness, rather than kindness and epistemics, was a tactical decision. 

Vaidehi Agarwalla @ 2024-06-26T05:31 (+34)

Good Ventures have stopped funding efforts connected with the rationality community and rationality

 

Since that post doesn't specify which causes they are exiting from, could you clarify whether they said they are also not funding lower-case-r "rationality"?

rime @ 2024-06-28T20:42 (+28)

Um, I did not know about "came in fluffer" until I googled it now, inspired by your post. I'm not English, so I thought "fluffer" meant some type of costume, and that some high-status person showed up somewhere in it. My innocence didn't last long.

I'm not against sexual activities, per se, but do you really want to highlight and reinforce that as a salient example of "Rationality culture"?

RobertJMoore @ 2024-07-11T15:43 (+2)

Given the post is broadly negative about Rationality culture, choosing an obnoxious, sexual, and niche example strikes me as likely deliberate. 

Vaidehi Agarwalla @ 2024-06-26T05:38 (+28)

However, distancing yourself from 'small r' rationality is far more radical and likely less considered.

 

Could you share some examples of where people have done this or called for it? 

From what I've seen online and the in person EA community members I know, people seem pretty clear about separating themselves from the Rationalist community. 

Jan_Kulveit @ 2024-06-26T09:35 (+35)

It would be indeed very strange if people made the distinction, thought about the problem carefully, and advocated for distancing from 'small r' rationality in particular.

I would expect real cases to look like
- someone is deciding about an EAGx conference program; a talk on prediction markets sounds subtly Rationality-coded, and is not put on schedule
- someone applies to OP for funding to create rationality training website; this is not funded because making the distinction between Rationality and rationality would require too much nuance
- someone is deciding about what intro level materials to link to; some links to LessWrong are not included

The crux is really what's at the end of my text - if people do steps like above, and nothing else, they are distancing also from the 'small r' thing. 

Obviously part of the problem for the separation plan is that the Rationality and Rationality-adjacent communities actually made meaningful progress on rationality and rationality education; a funny example here in the comments ... Radical Empath Ismam advocates for the split and suggests EAs should draw from the "scientific skepticism" tradition instead of Bay Rationality. Well, if I take that suggestion seriously and start looking for what could be good intro materials relevant to the EA project (which "debunking claims about telekinesis" advocacy content probably isn't) .... I'll find the New York City Skeptics and their podcast, Rationally Speaking. Run by Julia Galef, who also later wrote Scout Mindset. Excellent. And also co-founded CFAR.

NickLaing @ 2024-06-26T07:29 (+4)

Yeah, for sure - I don't really understand how you could be an effective altruist without implementing a heavy dose of "small r" rationality. I agree with the post and think it's a really important point to make and consolidate, but I don't think people are really calling for being less rational...

Pancakes4all @ 2024-06-26T11:45 (+16)

I don't find the concept of small "r" rationalism helpful, because what you describe sounds to me like "understand most of Kahneman and Tversky's work", and I wouldn't refer to that as rationalism but as cognitive psychology. I think in general even small-r rationalism tries to repackage concepts in ways that are only new or interesting to people who haven't studied psychology, and in my opinion does so mostly in very distinct ways that tend to have non-stated underlying philosophical assumptions like objectivism and Kantian ideals. But cognitive psych doesn't (shouldn't?) have to be applied in those ways. Probably just read Joshua Greene's Moral Tribes and get on with your day? That's how I got into EA, and it does whatever you're describing as small-r rationalism better than small-r rationalism (if that makes sense?), without all the underlying non-stated assumptions that come with small-r rationality and the ties to the big R community.

Jan_Kulveit @ 2024-06-26T13:15 (+18)

Reducing rationality to "understand most of Kahneman and Tversky's work" and cognitive psychology would be extremely narrow and miss most of the topic.

To quickly get some independent perspective, I recommend reading the "Overview of the Handbook" section of The Handbook of Rationality (2021, MIT Press, open access). For an extremely crude calibration: the Handbook has 65 chapters. I'm happy to argue at least half of them cover topics relevant to the EA project. About ~3 are directly about Kahneman and Tversky's work. So, by this proxy, you would miss about 90% of what's relevant.


 

Sarah Levin @ 2024-06-28T04:02 (+9)

This is a good account of what EA gets from Rationality, and why EAs would be wise to maintain the association with rationality, and possibly also with Rationality.

What does Rationality get from EA, these days? Would Rationalists be wise to maintain the association with EA?

kave @ 2024-07-02T17:31 (+1)

Rationality has supported and been supported by EA a bunch. In that time, Rationality+EA has caused a bunch of harm (I’m not certain about net harm, but I do think a bunch of harm has happened: supporting scaling labs, supporting SBF, low integrity political manoeuvring (I hear)). I think Rationality should own its relationship to EA and its mixed legacy.

David Mathers @ 2024-07-06T09:39 (+4)

There's no reason to blame the Rationalist influence on the community for SBF that I can see. What would the connection be?

Sarah Levin @ 2024-07-06T17:52 (+7)

IIRC, while most of Alameda's early staff came from EA, the early investment came largely from Jaan Tallinn, a big Rationalist donor. This was a for-profit investment, not a donation, but I would guess that the overlapping EA/Rationalist social networks made the deal possible.

That said, once Bankman-Fried got big and successful he didn't lean on Rationalist branding or affiliations at all, and he made a point of directing his "existential risk" funding to biological/pandemic stuff but not AI stuff.

kave @ 2024-07-07T08:02 (+3)

I think Rationality provided undirected support to EA during that period (sharing goodwill and labour, running events together), and received funding from EA funders, and so is not clean of the stuff listed in my comment. I think it probably overall made those things worse by supporting EA more, even if it helped the bad things somewhat less than it helped the good things.

Pato @ 2024-06-29T00:35 (+4)

Is it just me, or is there too much filler on some posts? This could have been a quick take: "If you distance yourself from Rationality, be careful not to distance yourself from rationality."