How should EA navigate the divide between those who prioritise epistemics vs. social capital?

By Chris Leong @ 2023-01-16T06:31 (+22)

Recent events seem to have revealed a central divide within Effective Altruism.

On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, we'll eventually end up in a situation where our decisions are driven by what's popular rather than what's effective.

On the other side, you have the people who are worried that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and then we won't be able to have any significant impact.

  1. How should we navigate this divide?
  2. Do you disagree with this framing? For example, do you think that the core divide is something else?
  3. How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often and those who prioritise global poverty tend to fall into the second camp. Is this a natural consequence of these prioritisation decisions, or is it a mistake?

Update: A lot of people disliked the framing which seems to suggest that I haven't found the right framing here. Apologies, I should have spent more time figuring out what framing would have been most conducive to moving the discussion forwards. I'd like to suggest that someone else should post a similar question with framing that they think is better (although it might be a good idea to wait a few days or even a week).

In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our influence". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".

  1. ^

    I fall more into this camp, although not entirely as I do think that a wise person will try to avoid certain topics insofar as this is possible.

  2. ^

    I've replaced the word "compromise" based on feedback that this word had a negative connotation.


pseudonym @ 2023-01-16T08:27 (+39)

As you are someone who falls into the "prioritize epistemics" camp, I would have preferred for you to steelman the "camp" you don't fall in, and frame the "other side" in terms of what they prioritize (like you do in the title), rather than characterizing them as compromising epistemics. 

This is not intended as a personal attack; I would make a similar comment to someone who asked a question from "the other side" (e.g.: "On one side, you have the people who prioritize making sure EA is not racist. On the other, you have the people who are worried that if we don't compromise at all, we'll simply end up following what's acceptable instead of what's true".)

In general, I think this kind of framing risks encouraging tribal commentary that assumes the worst in each other, and is unconstructive to shared goals. Here is how I would have asked a similar question:

"It seems like there is a divide on the forum around whether Nick Bostrom/CEA's statements were appropriate. (Insert some quotes of comments that reflect this divide). What do people think are the cruxes that are driving differences in opinion? How do we navigate these differences and work out when we should prioritize one value (or sets of values) over others?"

Chris Leong @ 2023-01-16T09:47 (+2)

I don't doubt that there are better ways of characterising the situation.

However, I do think there is a divide between those who prioritise epistemics and those who prioritise optics/social capital when push comes to shove.

I did try to describe the two sides fairly, i.e. "saving us from losing our ability to navigate" vs. "saving us from losing our influence". Both of these sound fairly important/compelling and could plausibly cause the EA movement to fail to achieve its objectives. And, as someone who did a tiny bit of debating back in the day, I don't think I'd have difficulty taking either side in a debate.

So I would be surprised if my framing were quite as biased as you seem to indicate.

That said, I've gone back and attempted to reword the post in the hope that it reduces some of these concerns.

pseudonym @ 2023-01-16T10:25 (+20)

Yeah, I'm not saying there is zero divide. I'm not even saying you shouldn't characterize both sides. But if you do, it would be helpful to find ways of characterizing both sides with similarly positively-coded framing. Like, frame this post in a way where you would pass an ideological Turing test, i.e. people can't tell which "camp" you're in.

The "not racist" vs "happy to compromise on racism" was my way of trying to illustrate how your "good epistemics" vs "happy to compromise on epistemics" wasn't balanced, but I could have been more explicit in this.

Saying one side prioritizes good epistemics and the other side is happy to compromise epistemics seems to clearly favor the first side.

Saying one side prioritizes good epistemics and the other side prioritizes "good optics" or "social capital" (to a weaker extent) seems to similarly weakly favor the first side. For example, I don't think it's a charitable interpretation of the "other side" that they're primarily doing this for reasons of good optics.

I also think asking the question more generally is useful.

For example, my sense is also that your "camp" still strongly values social capital, just a different kind of social capital. In your response to CEA's PR statement, you say "It is much more important to maintain trust with your community than to worry about what outsiders think". Presumably trust within the community is also a form of social capital. By your statement alone you could say one camp prioritizes maintaining social capital within the movement, and one camp prioritizes building social capital outside of the movement. I'm not saying this is a fair characterization of the differences in groups, but there might not be a "core divide".

It might be a difference in empirical views. For example, both groups might believe that social capital and good epistemics are important, and value both for predominantly consequentialist reasons, with no "core" difference between them. But one group thinks that, empirically, the downstream effects of breaking a truth-optimizing norm are what lead to the worst outcome. Perhaps this is associated with a theory of change built around a closely knit, high-trust, high-leverage group of people: it is less important what the rest of the world thinks, because it's more important that this high-leverage group can drown out the noise and optimize for truth, so they can make the difficult decisions that are likely to be missed by simply following what's popular and accepted. The other group thinks that the downstream effects of alienating large swaths of an entire race and people who care about racial equality (and the norm that this is an acceptable tradeoff) are what lead to the worst outcome. Perhaps this is associated with a view that buy-in from outsiders is critical to scaling impact.

But people might have different values or different empirical views or different theories of change, or some combination. That's why I am more reluctant to frame this issue in such clear demarcated ways ("central divide", "on one side", "this camp"), when it's not settled that this is the cause of the differences in opinion.

Chris Leong @ 2023-01-16T10:34 (+2)

Hmm... the valence of the word "compromise" is complex. It's negative in "compromising integrity", but "being unwilling to compromise" is often used to mean that someone is being unreasonable. However, I suppose I should have predicted this wording wouldn't have been to people's liking. Hopefully my new wording of "trade-offs" is more to your liking. 

titotal @ 2023-01-16T12:04 (+36)

I really dislike the "2 camps" framing of this.

I believe that this forum should not be a forum for certain debates, such as that over holocaust denial. I do not see this as a "trade-off" of epistemics, but rather a simple principle of "there's a time and a place". 

I am glad that there are other places where holocaust denial claims are discussed in order to comprehensively debunk them. I don't think the EA forum should be one of those places, because it makes the place unpleasant for certain people and is unrelated to EA causes. 

In the rare cases where upsetting facts are relevant to an EA cause, all I ask is that they be treated with a layer of compassion and sensitivity, and an awareness of context and potential misuse by bad actors.

If you had a friend that was struggling with depression, health, and obesity, and had difficulty socialising, it would not be epistemically courageous for you to call them a "fat loser", even if the statement is technically true by certain definitions of the words. Instead you could take them aside, talk about your concerns in a sensitive manner, and offer help and support.  Your friend might still get sad about you bringing the issue up, but you won't be an asshole. 

I think EA has a good norm of politeness, but I am increasingly concerned that it still needs to work on norms of empathy, sensitivity, and kindness, and I think that's the real issue here. 

freedomandutility @ 2023-01-16T09:14 (+36)

I think we should reject a binary framing of prioritising epistemic integrity vs prioritising social capital. 

My take is:

Veering too far from the Overton Window too quickly makes it harder to have an impact, and staying right in the middle probably means you're having no impact - there is a sweet spot to hit where you are close enough to the middle that your reputation is intact and you are taken seriously, but far enough away that you are still having an impact.

In addition to this, I think EA's focus on philanthropy over politics is misguided, and much of EA's longterm impact will come from influencing politics, for which good PR is very important.

AnonymousQualy @ 2023-01-16T13:39 (+4)

I'd be very interested in seeing a more political wing of EA develop.   If folks like me who don't really think the AGI/longtermist wing is very effective can nonetheless respect it, I'm sure those who believe political action would be ineffective can tolerate it.

I'm not really in the position to start a wing like this myself (currently in grad school for law and policy) but I might be able to contribute efforts at some point in the future (that is, if I can be confident that I won't tank my professional reputation through guilt-by-association with racism).

freedomandutility @ 2023-01-16T23:48 (+3)

I think it’s unlikely (and probably not desirable) for “EA Parties” to form; instead, it’s more likely for EA ideas to gain influence in political parties across the political spectrum.

AnonymousQualy @ 2023-01-17T01:04 (+1)

I agree! When I say "wing" I mean something akin to "AI risk" or "global poverty" - i.e., an EA cause area that specific people are working on.

Anon Rationalist @ 2023-01-16T19:15 (+3)

Despite us being on seemingly opposite sides of this divide, I think we arrived at a similar conclusion. There is an equilibrium between social capital and epistemic integrity that achieves the most total good, and EA should seek that point out.

We may have different priors as to the location of that point, but it is a useful shared framing that works towards answering the question.

Wil Perkins @ 2023-01-16T16:27 (+2)

Agree strongly here. In addition, if we are truly in a hinge moment like many claim, large political decisions are likely going to be quite important.

Guy Raveh @ 2023-01-16T12:01 (+31)

I think this framing is wrong, because high standards of truth-seeking are not separable from social aspects of communication, or from ensuring the diversity of the community. The idea that they are separable is an illusion.

Somehow, the rationalist ideas that humans have biases and should strive to contain them have brought about the opposite result: individuals in the community believe that if they follow some style of thinking, and if they prioritise truth-seeking as a value, then that makes them bias-free. In reality, people have strong biases even when they know they have them. The way to make the collective more truth-seeking is to make it more diverse and add checks and balances that stop errors from propagating.

I guess you could say the divide is between people who think 'epistemics' is a thing that can be evaluated in itself, and those who think it's strongly tied to other things.

AnonymousQualy @ 2023-01-16T13:07 (+21)

I think this is a much needed corrective.

I frequently feel there's a subtext here that high decouplers are less biased (whether the bias is racial, confirmation, in-group, status-seeking, etc.).  Sometimes it's not even a subtext.

But I don't know of any research showing that high decouplers are less biased in all the normal human ways. The only trait "high decoupler" describes is a tendency to decontextualize statements. And context frequently has implications for social welfare, so it's not at all clear that high decoupling is, on average, useful to EA goals - much less a substitute for group-level checks on bias.

I say all this while considering myself a high decoupler!

emre kaplan @ 2023-01-16T10:50 (+27)

I find this framing misleading for two reasons:

  1. People who support restrictive norms on conversations about race do so for partially epistemic reasons. Just as stigmatising certain arguments or viewpoints makes it more difficult to hear those viewpoints, a lot of the race-talk also makes the movement less diverse and makes some people feel less safe in voicing their opinions. If you prioritise having an epistemically diverse community, you can't just let anyone speak their mind. You need to actively enforce norms of inclusiveness so that the highest diversity of opinions can be safely voiced.
  2. Norms on conversations about race shouldn't be supported by "popularity" reasons. They should be supported out of a respect for the needs of the relevant moral patients. This is not about "don't be reckless when talking about race because this will hurt object-level work". It's about being respectful for the needs of the people of color who get affected by these statements. 
Chris Leong @ 2023-01-16T11:02 (+2)

"People who support restrictive norms on conversations about race do so for partially epistemic reasons" - This is true and does score some points for the side wishing to restrict conversations. However, I would question how many people making this argument, when push came to shove, had epistemics as their first priority. I'm sure, there must be at least one person fitting this description, but I would be surprised if it were very many.

Point 2 is a good point. Maybe I should have divided this into three different camps instead? I'm starting to think that this might be a neater way of dividing the space.

Nathan Ashby @ 2023-01-16T10:41 (+17)

Put me down on the side of "EA should not get sidetracked into pointless culture war arguments that serve no purpose but to make everyone angry at each other."

Some people have argued for the importance of maintaining a good image in order to maximise political impact. I'm all for that. But getting into big internal bloodletting sessions over whether or not so and so is a secret racist does not create a good image. It makes you look like racists to half the population and like thought police to the other half.

Just be chill, do good stuff for the world, and don't pick fights that don't need to be picked.

Amber Dawn @ 2023-01-16T11:07 (+15)

This is a bit orthogonal to your question but, imo, part of the same conversation. 

A take I have on social capital/PR concerns in EA is that sometimes when people say they are worried about 'social capital' or 'PR', it means  'I feel uneasy about this thing, and it's something to do with being observed by non-EAs/the world at large, and I can't quite articulate it in terms of utilitarian consequences'.

And how much weight we should give to those feelings sort of depends on the situation:

(1) In some situations we should ignore it completely, arguably. (E.g., we should ignore the fact that lots of people think animal advocacy is weird/bad, probably, and just keep on doing animal advocacy). 

(2) In other situations, we should pay attention to them but only inasmuch as they tell us that we need to be cautious in how we interact with non-EAs. We might want to be circumspect with how we talk about certain things, or what we do, but deep down we recognize that we are right and outsiders are wrong to judge us. (Maybe doing stuff related to AI risk falls in this category)

(3) In yet other situations, however, I suggest that we should take these feelings of unease as a sign that something is wrong with what we are doing or what we are arguing. We are uneasy about what people will think because we recognize that people of other ideological commitments also have wisdom, and are also right about things, and we worry that this might be one of those times (but we have not yet articulated that in straightforward "this seems to be negative EV on the object level, separate from PR concerns" terms).

I think lots of people think we are currently in (2): that epistemics would dictate that we can discuss whatever we like, but some things look so bad that they'll damage the movement. I, however, am firmly in (3) - I'm uncomfortable about the letter and the discussions because I think that the average progressive person's instinct of 'this whole thing stinks and is bad'...probably has a point? 

To further illustrate this, a metaphor: imagine a person who, when they do certain things, often worries about what their parents would think. How much should they care? Well, it depends on what they're doing and why their parents disapprove of it. If the worry is 'I'm in a same-sex relationship, and I overall think this is fine, but my homophobic parents would disapprove' - probably you should ignore the concern, beyond being compassionate to the worried part of you. But if the worry is 'I'm working for a really harmful company, and I've kinda justified this to myself, but I feel ashamed when I think of what my parents would think' - that might be something you should interrogate. 

Amber Dawn @ 2023-01-16T11:11 (+8)

Maybe another way to frame this is 'have a Chesterton's fence around ideologies you dismiss' - like, you can only decide that you don't care about ideologies once you've fully understood them. I think in many cases, EAs are dismissing criticisms without fully understanding why people are making them, which leads them to see the whole situation as 'oh those other people are just low decouplers/worried about PR', rather than 'they take the critique somewhat seriously but for reasons that aren't easily articulable within standard EA ethical frameworks'. 

AnonymousQualy @ 2023-01-16T12:58 (+11)

I think it is trivially true that we sometimes face a tradeoff between utilitarian concerns arising from social capital costs and epistemic integrity (see this comment).

But I don't think the Bostrom situation boils down to this tradeoff. People like me believe Bostrom's statement and its defenders don't stand on solid epistemic ground. But the argument for bad epistemics has a lot of moving parts, including (1) recognizing that the statement and its defenses should be interpreted to include more than their most limited possible meanings, and that its omissions are significant, (2) recognizing the broader implausibility of a genetic basis for the racial IQ gap, and (3) recognizing the epistemic virtue in some situations of not speculating about empirical facts without strong evidence.

All of this is really just too much trouble to walk through for most of us.  Maybe that's a failing on our part!  But I think it's understandable.  To  convincingly argue points (1) through (3) above I would need to walk through all the subpoints made on each link.  That's one heck of a comment.

So instead I find myself leaving the epistemic issues to the side, and trying to convince people that voicing support for Bostrom's statement is bad on consequentialist social capital grounds alone.  This is understandably less convincing, but I think the case for it is still strong in this particular situation (I argue it here and here).

Jason @ 2023-01-17T01:05 (+4)

One meta-question: Is this "divide" merely based on different assumptions and beliefs about what strategy will maximize impact, or is it based at least in part on something else for one or both sides?

Yellow @ 2023-01-16T17:38 (+4)

How should we navigate this divide?

I generally think we should almost always prioritize honesty where honesty and tact genuinely trade off against each other. That said, I suspect the cases where the trade-off is genuine (as opposed to people using tact as a bad justification for a lack of honesty, or honesty as a bad justification for a lack of tact) are not that common.

Do you disagree with this framing? For example, do you think that the core divide is something else?

I think that a divide exists, but I disagree that it pertains to recent events. Is it possible that you're doing a typical mind fallacy thing where, just because you don't find something to be very objectionable, you're assuming others probably don't find it very objectionable and are only objecting for social signaling reasons? Are you underestimating the degree to which people genuinely agree with what you're framing as the socially acceptable consensus views, rather than only agreeing with said socially acceptable consensus views due to social capital?

To be clear, I think there is always a degree to which some people are just doing things for social reasons, and that applies no less to recent events than it does to everywhere else. But I don't think recent events are particularly more illustrative of these camps. 

it appears to me that those who prioritise AI Safety tend to fall into the first camp more often and those who prioritise global poverty tend to fall into the second camp.

I think this is false. If you look at every instance of an organization seemingly failing at full transparency for optics reasons, you won't find much of a trend towards global health organizations.

On the other hand, if you look at more positive instances (people who advocate concern for branding, marketing, and PR with transparent and good intentions), you still don't see any particular trend towards global health. (Some examples: [1][2][3], just random stuff pulled out by doing a keyword search for words like "media", "marketing", etc.) Alternatively, you could consider the cause area leanings of most "EA meta/outreach" type orgs in general, w.r.t. which cause area puts their energy where.

It's possible that people who prioritize global poverty are more strongly opposed to systematic injustices such as racism, in the same sense that people who prioritize animal welfare are more likely to be vegan. It does seem natural, doesn't it, that the type of person who is sufficiently motivated by that to make a career out of it might also be more strongly motivated to be against racism? But that, again, is not a case of "prioritizing social capital over epistemics", any more than an animal activist's veganism is mere virtue-signaling. It's a case of a genuine difference in worldviews.

Basically, I think you've only arrived at this conclusion that global health people are more concerned with social capital because you implicitly have the framing that being against the racist-sounding stuff specifically is a bid for social capital, while ignoring the big picture outside of that one specific area. 

Also I think that if you think people are wrong about that stuff, and you'd like them to change their mind, you have to convince them of your viewpoint, rather than deciding that they only hold their viewpoint because they're seeking social capital rather than holding it for honest reasons.

Chris Leong @ 2023-01-16T23:52 (+2)

When I was talking about what AI safety people prioritise vs global health, I was thinking more at a grassroots level than at a professional level. I probably should have been clearer, and it seems plausible my model might even reverse at the professional level.

Wil Perkins @ 2023-01-16T16:31 (+3)

I actually think the framing is fine, and have had this exact discussion at EAG and in other related venues. It seems to me that many people at the “top” of the EA hierarchy so to speak tend to prefer more legible areas like AI, pandemics, and anything that isn’t sales/marketing/politics.

However, as the movement has grown, this means leadership has a large blind spot as to what many folks on the ground of the movement want. Opening up EA Global, driving out the forced academic tone on the forum, and generally increasing diversity in the movement are crucial if EA ever wants to have large societal influence.

I take it almost as a given that we should be trying to convince the public of our ideas, rather than arrogantly make decisions for society from on high. Many seem to disagree.

Harrison Durland @ 2023-01-16T06:46 (+3)

I think that some sort of general guide on “How to think about the issue of optics when so much of your philosophy/worldview is based on ignoring optics for the sake of epistemics/transparency (including embedded is-ought fallacies about how social systems ought to work), and your actions have externalities that affect the community” might be nice, if only so people don’t have to constantly reexplain/rehash this.

But generally, this is one of those things where it becomes apparent in hindsight that it would have been better to hash out these issues before the fire.

It’s too bad that Scout Mindset not only doesn’t seem to address this issue effectively, but also seems to push people more towards the is-ought fallacy of “optics shouldn’t matter that much” or “you can’t have good epistemics without full transparency/explicitness” (in my view: https://forum.effectivealtruism.org/posts/HDAXztEbjJsyHLKP7/outline-of-galef-s-scout-mindset?commentId=7aQka7YXrhp6GjBCw)

Chris Leong @ 2023-01-16T06:54 (+3)

"But generally, this is one of those things where it becomes apparent in hindsight that it would have been better to hash out these issues before the fire" - Agreed. Times like this when people's emotions are high are likely the most difficult time to make progress here.

Harrison Durland @ 2023-01-16T07:06 (+13)

you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, we'll eventually end up in a situation where our decisions are driven by what's popular rather than what's effective.

And relatedly, I think that such concerns about longterm epistemic damage are overblown. I appreciate that allowing epistemics to constantly be trampled in the name of optics is bad, but I don’t think that’s a fair characterization of what is happening. And I suspect that in the short term optics dominate due to how they are so driven by emotions and surface-level impressions, whereas epistemics seem typically driven more by reason over longer time spans and IMX are more the baseline in EA. So, there will be time to discuss what if anything “went wrong” with CEA’s response and other actions, and people should avoid accidentally fanning the flames in the name of preserving epistemics, which I doubt will burn.

(I’ll admit what I wrote may be wrong as written given that it was somewhat hasty and still a bit emotional, but I think I probably would agree with what I’m trying to say if I gave it deeper thought)

rickyhuang.hexuan @ 2023-01-16T10:44 (+2)

I think you ask two questions with this framing: (1) a descriptive question about whether or not this divide currently exists in the EA Community, and (2) a normative question about whether or not this divide should exist. I think it is useful to separate the two questions, as some of the comments seem to use responses to (2) as a response to (1). I don't know if (1) is true. I don't think I've noticed it in the EA Community, but I'm willing to have my mind changed on this.

On (2), I think this can be easily resolved. I don't think we should (and I don't think we can) have non-epistemic* reasons for belief. However, we can have non-epistemic reasons for why we would want to act on a certain proposition. I'm not really falling into either "camp" here, and I don't think it requires us to fall into any "camp". There's a wide wealth of literature in Epistemology on this.

*I think sometimes EAs use the word "epistemic" differently than what I conventionally see in academic philosophy, but this comment is based on conventional interpretations of "epistemic" in Philosophy.

Wil Perkins @ 2023-01-16T18:13 (+1)

From my perspective (1) absolutely exists. I am on the social capital side of the debate, and have found many others share my view.

I don't understand your point number 2 - I agree that when people around here use the word "epistemic" they tend to really mean something more like "intelligence."