Deferring
By Owen Cotton-Barratt @ 2022-05-12T23:44 (+101)
[meta: my attempt to find a good big-picture framing for a topic that's clearly important but I am not convinced we're nailing]
Deferring is when you adopt someone else's view on a question over your own independent view (or instead of taking the time to form an independent view). You can defer on questions of fact or questions of what to do. You might defer because you think they know better (epistemic deferring), or because there is a formal or social expectation that you should go along with their view (deferring to authority).
Both types of deferring are important — epistemic deferring lets people borrow the fruits of others' knowledge; deferring to authority enables strong coordination. But both are double-edged. Deferring can mean you get fewer chances to test out your own views, so developing mastery is slower. Deferring to the wrong people can be straightforwardly bad. And problems arise when someone defers without everyone understanding that's what's happening, or when there are unacknowledged expectations of deferral from others. We should therefore learn when and how to defer, when not to, and how to be explicit about what we're doing.
Why deferring is useful
Epistemic deferring
Epistemic deferring is giving more weight to someone else's view than your own because you think they're in a position to know better. The opposite of epistemic deferring is holding one's own view.
Examples:
- "You've been to this town before; where's the best place to get coffee?"
- "My doctor/lawyer says this is a common situation, and the right thing to do is ..."
- "A lot of smart folks seem to think AI risk is a big deal; it sounds batshit to me, but I guess I'll look into it more"
The case for epistemic deferring is simple: for most questions, we can identify someone (or some institution or group of people) whose judgement on the question would — if they were possessed of the facts we knew — be better than our own. So to the extent that
- (A) We want to optimize for accurate judgements above all else, &
- (B) We are willing to make the investment to uncover that better judgement,
deferring will be correct.
Partial deferring
The degree to which (A) and (B) hold will vary with circumstance. It will frequently be the case that they partially hold; in this case it may be appropriate to partially defer, e.g.
- “I’m torn between whether to take job X or job Y. On my view job X seems better. When I talk to my friends and family they overwhelmingly think job Y sounds better; maybe they’re seeing something I’m not. If I thought it was a close call anyway this might be enough to tip me over, but it won’t change my mind if my preference for X was clear.”
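As a minimal sketch of how this weighing might work (the scores and the weight are hypothetical, chosen only to illustrate the "tips close calls but not clear preferences" logic):

```python
# Partial deferring as a weighted combination of my view and others' views.
# All numbers are illustrative assumptions, not a recommended calibration.

def combined_score(my_score, their_score, weight_on_them=0.3):
    """Score an option on a 0-1 scale, partially deferring to others."""
    return (1 - weight_on_them) * my_score + weight_on_them * their_score

# Close call: my slight preference for job X gets tipped over to job Y.
x_close = combined_score(0.52, 0.2)  # friends think X looks worse
y_close = combined_score(0.48, 0.8)  # friends think Y looks better
print(x_close > y_close)  # False -> partial deference tips me to Y

# Clear preference: the same deference doesn't change my choice.
x_clear = combined_score(0.9, 0.2)
y_clear = combined_score(0.1, 0.8)
print(x_clear > y_clear)  # True -> I still pick X
```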
Deferring to authority
Deferring to authority is adopting someone else's view because of a social contract to do so. Often deferring to authority happens on questions of what should be done — e.g. "I'm going to put this fire alarm up because [my boss / my client / the law] tells me to", or “I’m helping my friend cook dinner, so I’ll cut the carrots the way they want, even though I think this other way is better”.[1] The opposite of deferring to authority is acting on one's own conscience.
Deferring to authority — and the reasonable expectation of such deferring — enables groups of people to coordinate more effectively. Militaries rely on it, but so do most projects (large and small, but especially large). It's unreasonable to expect that everyone working on a large software project will have exactly the same views over the key top-level design choices, but it's better if there's some voice that can speak authoritatively, so everyone can work on that basis. If we collectively want to be able to undertake large ambitious projects, we’ll likely need to use deferring to authority as a tool.
Ways deferring goes wrong
- Deferring to the wrong people
- The "obvious" failure mode, applies to both:
- Epistemic deferring — misidentifying who is an expert
- Deferring to authority — buying into social contracts it would be better to withdraw from
- The "obvious" failure mode, applies to both:
- Deferring with insufficient bandwidth
- Even if Aditi would make a better decision than Sarah, the process of Sarah deferring to Aditi (for epistemic or authority reasons) can produce a worse decision if either:
- There's too much context for Sarah to communicate to Aditi
- The "right" decision includes too much detail for Aditi to communicate to Sarah
- This is more often a problem with questions of what to do than questions of fact (since high context on the situation is so often important for the answer), but may come up in either case
- A special case is deferring with zero bandwidth (e.g. Sarah is deferring to what she imagines Aditi would say in the situation, based on an article she read)
- Another cause of deferring with insufficient bandwidth is if someone wants to delegate responsibility but not authority for a project, and not to spend too much time on it; this is asking for deferral to them as an authority without providing much bandwidth
- Deferring can be bad for learning
- Contrast — "letting people make their own mistakes"
- The basic dynamic is that if you act from your own models, you bring them more directly into contact with the world, and can update faster
- Note that a certain amount of deferring can be good for learning, especially:
- When first trying to get up to speed with an area
- When taking advice on what to pay attention to
- In particular because this can help rescue people from traps where they think some dimension is unimportant, and so never pay enough attention to it to notice that it's actually important
- This intersects with #2; deferring is more often good for learning when it’s high-bandwidth (since the person deferring can use it as an opportunity to interrogate the model of the person being deferred to), and more often bad for learning when it’s low-bandwidth
- Contrast — "letting people make their own mistakes"
- Deferring can interfere with belief formation
- If people aren't good at keeping track of why they believe things, it can be hard to notice when one's body of knowledge has caught up and one should stop deferring on an issue (because the deferred-to belief may be treated as a primitive belief); cf. independent impressions for discussion of habits for avoiding this
- Conflation between epistemic deferring and deferring to authority can lead to people accidentally adopting as beliefs things that were only supposed to be operating assumptions
- This can happen e.g.
- When deferring to one's boss
- Easy to slip between the two since one's boss is often in a superior epistemic position re. what needs to be done
- In some cases organizational leadership might exert explicit pressure towards shared beliefs, e.g. saying “if someone doesn’t look like they hold belief X, this could destabilize the team’s ability to orient together as a team”
- Deferring to someone high-status when the true motivation for deferring is to seem like one has cool beliefs / get social acceptance
- Again there's plausible deniability, since the high-status person may well be in a superior epistemic position
- The high-status person may like it when others independently have similar views to them (since this is evidence of good judgement), which can create incentives for the junior people to adopt “as their own view” the relevant positions
Deferring without common knowledge of deferring is a risk factor for these issues (since it's less likely that anyone is going to spot and correct them).
Social deferring
Often there’s a lot of deferring within a group or community on a particular issue (i.e. both the person deferring and the person being deferred to are within the group, and the people being deferred to often have their own views substantially via deferring). This can lead to issues, for reasons like:
- If there are long chains of deferral, this means there’s often little bandwidth to the people originating the views
- If you don’t know when others are deferring vs having independent views, it may be unclear how many times a given view has been independently generated, which can make it hard to know how much weight to put on it (“the emperor’s new clothes” gives an extreme example)
- If the people with independent takes update their views in response to evidence, it may take some time until the newer views have filtered through to the people who are deferring
- If people are deferring to the authority of the social group (where there's a pressure to have the view as a condition of membership), this may be bad for belief formation
Ultimately we don’t have good alternatives to basing a lot of our beliefs on chains of deferral (there are too many disparate disciplines of expertise in the world for any one person to know who the experts worth listening to are in each of them). But I think it’s helpful to be wary of the ways in which it can cause problems, and we should feel relatively better about:
- A group or community collectively deferring to a single source (e.g. the same expert report, or a prediction market), as it’s much more legible what’s happening
- People sometimes taking the effort to dive into a topic and shorten the deferral chain (cf. “minimal trust investigations”)
- Creating spaces which explicitly state their operating assumptions as a condition of entry (“in this workshop we’ll discuss how to prepare for a nuclear war in 2025”) without putting pressure on the beliefs of the participants
When & how to defer
Epistemic deferring
There's frequently a tension between on the one hand knowing that you can identify someone who knows more than you, and on the other hand not wanting to take the time to get answers from them, or wanting to optimize for your own learning rather than just the best answer for the question at hand.
Here are the situations where I think epistemic deferring is desirable:
- Early in the learning process for any paradigm
- By “paradigm” I mean a body of knowledge with something like agreed-on metrics of progress
- This might include “learning a new subfield of chemistry” or “learning to ride a unicycle”
- I’m explicitly not including areas that feel preparadigmatic — among which notably I want to include cause prioritization — where I feel more confused about the correct advice (although it certainly seems helpful to hear existing ideas)
- Here you ideally want to defer-but-question — perhaps you assume that the thing you're being told is correct, but are intensely curious about why that could be (and remain open to questioning the assumption later)
- Taking advice on what to pay attention to is a frequent special case of this — it's very early in the learning process of "how to pay attention to X", for some X you previously weren't giving attention to
- When the importance of a good answer is reasonably high compared to the cost of gathering the information about how to defer, and either:
- It's on a topic that you're not hoping to develop mastery of
- i.e. you just want the easily-transmissible conclusions, not the underlying generators
- There are only weak feedback loops from the process back into your own models
- The importance of a good answer is high even compared to the cost of gathering thorough information about how to defer
- Sometimes thorough information about how to defer is cheap! e.g. if you want to know about a variable that has high quality public expert estimates
- If you’re making a decision about what to do, however, often gathering thorough information about how to defer means very high bandwidth context-sharing
- You intend to defer only a little
Note: even when not deferring, asking for advice is often a very helpful move. You can consider the advice and let it guide your thinking and how to proceed without deferring to any of the advice-givers.[2]
Deferring to authority
Working out when to defer to authority is often simply a case of determining whether you want to participate in the social contract.
It's often good to communicate when you're deferring, e.g. tell your boss "I'm doing X because you told me to, but heads up that Y looks better to me". Sometimes the response will just be "cool"; at other times they might realize that you need to understand why X is good in order to do a good job of X (or that they need to reconsider X). In any case it's helpful to keep track for yourself of when you're deferring to authority vs have an independent view.
A dual question of when to defer to authority is when to ask people to defer to you as an authority. I think the right answer is "when you want someone to go on following the plan even if they’re not personally convinced". If you’re asking others to defer, it’s best to be explicit about this. Vice versa, if you’re in a position of authority and not asking others to defer, it’s good to be explicit that you want them to act on their own conscience. (People take cultural cues from those in positions of authority; if they perceive ambiguity about whether they should defer, it may be ambiguous in their own minds, which seems bad for the reasons discussed above.)
Deferring to authority in the effective altruism community
I think people are often reluctant to ask others to defer to their authority within EA. We celebrate people thinking for themselves, taking a consequentialist perspective, and acting on their own conscience. Deferring to authority looks like it might undermine these values. Or perhaps we'd get people who reluctantly "deferred to authority" while trying to steer their bosses towards things that seemed better to them.
This is a mistake. Deferring to authority is the natural tool for coordinating groups of people to do big things together. If we're unwilling to use this tool, people will use social pressure towards conformity of beliefs as an alternate tool for the same ends. But this is worse at achieving coordination[3], and is more damaging to the epistemics of the people involved.
We should (I think) instead encourage people to be happy taking jobs where they adopt a stance of "how can I help with the agenda of the people steering this?", without necessarily being fully bought into that agenda. This might seem a letdown for individuals, but I think we should be willing to accept more "people work on agendas they're not fully bought into" if the alternatives are "there are a bunch of epistemic distortions to get people to buy into agendas" and "nobody can make bets which involve coordinating more than 6 people". People doing this can keep their eyes open for jobs which better fit their goals, while being able and encouraged to have their own opinions, and still taking professional pride in doing a good job at the thing they're employed to do.
This isn't to say that all jobs in EA should look like this. I think it is a great virtue of the community that we recognise the power of positions which give people significant space to act on their own conscience. But when we need more coordination, we should use the correct tools to get that.
Meta-practices
My take on the correct cultural meta-practices around deferring:
- Choices to defer — or to request deferral — should as far as possible be made deliberately rather than accidentally
- We should be conscious of whether we're deferring for epistemic or authority reasons
- We should discuss principles of when to defer and when not to defer
- Responsibility for encouraging non-deferral (when that's appropriate) should lie significantly with the people who might be deferred to
- We should be explicit about when we're deferring (in particular striving not to let the people-being-deferred-to remain ignorant of what's happening)
Closing remarks
A lot of this content, insofar as it is perceptive, is not original to me; a good part of what I'm doing here is just trying to name the synthesis position for what I perceive to be strong pro-deferral and anti-deferral arguments people make from time to time. This draft benefited from thoughts and comments from Adam Bales, Buck Shlegeris, Claire Zabel, Gregory Lewis, Jennifer Lin, Linch Zhang, Max Dalton, Max Daniel, Raymond Douglas, Rose Hadshar, Scott Garrabrant, and especially Anna Salamon and Holden Karnofsky. I might edit later to tighten or clarify language (or if there are one or two substantive points I want to change).
Should anyone defer to me on the topic of deferring?
Epistemically — I've spent a while thinking about the dynamics here, so it's not ridiculous to give my views some weight. But lots of people have spent some time on this; I'm hoping this article is more helpful as a guide to let people understand things they already see than as something that needs people to defer to.
As an authority — not yet. But I'm offering suggestions for community norms around deferring. Norms are a thing which it can make sense to ask people to defer to. If my suggestions are well received in the discussion here, perhaps we'll want to make asks for deference to them at some point down the line.
- ^
Some less central examples of deferring to authority in my sense:
- Doing something because you promised to (the “authority” deferred to is your past self)
- Adopting a belief that the startup you’re joining will succeed as part of the implicit contract of joining (not necessarily a fully adopted belief, but acted upon while at work)
- ^
cf. https://www.lesswrong.com/posts/yeADMcScw8EW9yxpH/a-sketch-of-good-communication
- ^
At least “just using ideological conformity” is worse for coordination than “using ideological conformity + deference to authority”. After we’re using deference to authority well I imagine there’s a case that having ideological conformity as well would help further; my guess is that it’s not worth the cost of damage to epistemics.
Emrik @ 2022-05-16T20:20 (+45)
This question is studied in veritistic social epistemology. I recommend playing around with the Laputa network epistemology simulation to get some practical model feedback to notice how it's similar and dissimilar to your model of how the real world community behaves. Here are some of my independent impressions on the topic:
- Distinguish between testimonial and technical evidence. The former is what you take on trust (epistemic deference, Aumann-agreement stuff), and the latter is everything else (argument, observation, math).
- Under certain conditions, there's a trade-off between the accuracy of crowdsourced estimates (e.g. surveys on AI risk) and the widespread availability of decision-relevant current best guesses (cf. simulations of the "Zollman effect").
- Personally, I think simulations plausibly underestimate the effect. Think of it like doing Monte-Carlo Tree Search over ideaspace, where we want to have a certain level of randomness to decide which branches of the tree to go down. And we arguably can't achieve that randomness if we get stuck in certain paradigms due to the Einstellung effect (sorry for jargon). Communicating paradigms can be destructive of underdeveloped paradigms.
- To increase the breadth of exploration over ideaspace, we can encourage "community bubbliness" among researchers (aka "small-world network"), where communication inside bubbles is high, and communication between them is limited. There's a trade-off between the speed of research progress (for any given paradigm) and the breadth and rigour of the progress. Your preference for how to make this trade-off could depend on your view of AI timelines.
- How much you should update on someone's testimony depends on your trust function relative to that person. Understanding trust functions is one of the most underappreciated leverage points for improving epistemic communities and "raising sanity waterlines", imo.
- If a community has a habit of updating trust functions naively (e.g. increase or decrease your trust towards someone based on whether they give you confirmatory testimonies), it can lead to premature convergence and polarisation of group beliefs. And on a personal level, it can indefinitely lock you out of areas in ideaspace/branches on the ideatree you could have benefited from exploring. [Laputa example] [example 2] (See the toy sketch after this list.)
- Committing to only updating trust functions based on direct evidence of reasoning ability and sincerity, and never on object-level beliefs, can be a usefwl start. But all evidence is entangled, and personally, I'm ok with locking myself out of some areas in ideaspace because I'm sufficiently pessimistic about there being any value there. So I will use some object-level beliefs as evidence of reasoning-ability and sincerity and therefore use them to update my trust functions.
- Deferring to academic research can have the bandwidth problem[1] you're talking about, and this is especially a problem when the research has been optimised for non-EA relevant criteria. Holden's History is a good example: he shouldn't defer to expert historians on questions related to welfare throughout history, because most academics are optimising their expertise for entirely different things.
- Deferring to experts can also be a problem when experts have been selected for their beliefs to some extent. This is most likely true of experts on existential risk.
- Deferring to community members you think know better than you is fairly harmless if no one defers to you in turn. I think a healthy epistemic community has roles for people to play for each area of expertise.
- Decision-maker: If you make really high-stakes decisions, you should use all the evidence you can, testimonial or otherwise, in order to make better decisions.
- Expert: Your role is to be safe to defer to. You realise that crowdsourced expert beliefs provide more value to the community if you try to maintain the purity of your independent impressions, so you focus on technical evidence and you're very reluctant to update on testimonial evidence even from other experts.
- Explorer: If most of your contributions come from contributing with novel ideas, perhaps consider taking risks by exploring neglected areas in ideaspace at the cost of potentially making your independent impressions less accurate on average compared to the wisdom of the crowd.
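Here's the toy sketch referred to above: a crude simulation, in Python, of agents who naively raise trust in whoever agrees with them (all dynamics and parameters are my own assumptions for illustration, not taken from Laputa). Beliefs tend to settle into self-reinforcing clumps rather than converging on anything tracked by evidence:

```python
import random

# Toy model of naive trust updating (illustrative assumptions throughout).
random.seed(0)
N, STEPS = 20, 2000
belief = [random.random() for _ in range(N)]  # each agent's credence in one claim
trust = [[0.5] * N for _ in range(N)]         # trust[i][j]: how much i trusts j

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)
    # i moves toward j's stated belief, in proportion to trust in j
    belief[i] += 0.1 * trust[i][j] * (belief[j] - belief[i])
    # naive trust update: confirmation raises trust, disagreement lowers it
    agreement = 1 - abs(belief[i] - belief[j])
    trust[i][j] = min(1.0, max(0.0, trust[i][j] + 0.2 * (agreement - 0.8)))

# Beliefs typically end up in a few distinct clumps (premature convergence),
# with near-zero trust across clumps (polarisation).
print(sorted(round(b, 2) for b in belief))
```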
Honestly, my take on the EA community is that it's surprisingly healthy. It wouldn't be terrible if EA kept doing whatever it's doing right now. I think it ranks unreasonably high in the possible ways of arranging epistemic communities. :p
- ^
I like this term for it! It's better than calling it the "Daddy-is-a-doctor problem".
Owen Cotton-Barratt @ 2022-05-16T21:29 (+8)
[Without implying I agree with everything ...]
This comment was awesome, super high density of useful stuff. I wonder if you'd consider making it a top level post?
Emrik @ 2022-05-16T21:47 (+5)
Thanks<3
Well, I've been thinking about these things precisely in order to make top-level posts, but then my priorities shifted because I ended up thinking that the EA epistemic community was doing fine without my interventions, and all that remained in my toolkit was cool ideas that weren't necessarily usefwl. I might reconsider it. :p
Keep in mind that in my own framework, I'm an Explorer, not an Expert. Not safe to defer to.
Owen Cotton-Barratt @ 2022-05-16T22:30 (+10)
On my impressions: relative to most epistemic communities I think EA is doing pretty well. Relative to a hypothetical ideal I think we've got a way to go. And I think the thing is good enough to be worth spending perfectionist attention on trying to make excellent.
Emrik @ 2022-05-16T22:49 (+17)
Some (controversial) reasons I'm surprisingly optimistic about the community:
1) It's already geographically and social-network bubbly and explores various paradigms.
2) The social status gradient is aligned with deference at the lower levels, and differentiation at the higher levels (to some extent). And as long as testimonial evidence/deference flows downwards (where it's likely to improve opinions), and the top level tries to avoid conforming, there's a status push towards exploration and confidence in independent impressions.
3) As long as deference is mostly unidirectional (downwards in social status) there are fewer loops/information cascades (less double-counting of evidence), and epistemic bubbles are harder to form and easier to pop (from above). And social status isn't that hard to attain for conscientious smart people, I think, so smart people aren't stuck at the bottom where their opinions are under-utilised? Idk.
Probably more should go here, but I forget. The community could definitely be better, and it's worth exploring how to optimise it (any clever norms we can spread about trust functions?), so I'm not sure we disagree except you happen to look like the grumpy one because I started the chain by speaking optimistically. :3
Mo Putera @ 2024-07-30T11:57 (+2)
Hi Emrik, wow, I thought this was a genuinely great comment deserving of its own top-level post. From your response to Owen above and your recent lack of top-level posting history it doesn't seem like you'll do it anytime soon, so I'm hoping to nudge you to reconsider just in case you've warmed to the idea since :) (of course feel free to say no)
Emrik @ 2024-07-30T15:52 (+2)
Thank you for appreciating! 🕊️
Alas, I'm unlikely to prioritize writing except when I lose control of my motivations and I can't help it.[1] But there's nothing stopping someone else extracting what they learn from my other comments¹ ² ³ re deference and making post(s) from it, no attribution required.
(Arguably it's often more educational to learn something from somebody who's freshly gone through the process of learning it. Knowledge-of-transition can supplement knowledge-of-target-state.)
Haphazardly selected additional points on deference:
- Succinctly, the difference between Equal-Weight deference and Bayes
- "They say . | Then I can infer that they updated from to by multiplying with a likelihood ratio of . And because C and D, I can update on that likelihood ratio in order to end up with a posterior of . | The equal weight view would have me adjust down, whereas Bayes tells me to adjust up."
- Paradox of Expert Opinion
- "Ask the experts. They're likely the most informed on the issue. Unfortunately, they're also among the groups most heavily selected for belief in the hypothesis."
- ^
It's sort of paradoxical. As a result of my investigations into social epistemology 2 years ago, I came away with the conclusion that I ought to focus ~all my learning-efforts on trying to (recursively) improve my own cognition, with ~no consideration for my ability to teach anyone anything of what I learn. My motivation to share my ideas is an impurity that I've been trying hard to extinguish. Writing is not useless, but progress toward my goal is much faster when I automatically think in the language I construct purely to communicate with myself.
Mo Putera @ 2024-08-01T04:26 (+7)
Thanks for the thoughtful & generous response and interesting links Emrik :) The natural cluster of questions that include deference has been on my mind ever since I learned about epistemic learned helplessness years ago, so I appreciate the pointers.
I confess to being a bit alarmed by your footnote. For reasoning transparency's sake, would you be willing to share how you were led to the conclusion to turn inward? I have in my own way been trying to improve clarity of thought, although my reasons include an extrinsic component (e.g. I really like helping people figure out their problems, or fail productively in trying), and even the intrinsic component (clarity makes my heart sing) often points me outward (cf. steps 3 and 8 here) and can also look like teaching others. And I've noticed that both can speed up my progress greatly despite reducing time spent just thinking, the former akin to being Alice not Bob, and the latter in a way a bit like "pruning the branching factor" or making me realize I had been overlooking fruitful branches or just modeling the whole thing wrongly. This is the overall "vibe" from which I doubt the effectiveness of your inward turn.
But that's admittedly not the real reason I'm writing this; my real reason echoes Julia's comment.
HowieL @ 2022-05-16T17:58 (+17)
Thanks for writing this post. I think it's really useful to distinguish the two types of deference and push the conversation toward the question of when to defer, as opposed to how good deference is in general.
But I think "deferring to authority" is bad branding (as you worry about below) and I'm not sure your definition totally captures what you mean. I think it's probably worth changing even though I haven't come up with great alternatives.
Branding. To my ear, deferring to authority has a very negative connotation. It suggests deferring to a preexisting authority because they have power over you, not deferring to a person/norm/institution/process because you're bought into the value of coordination. Relatedly, it doesn't seem like the most natural phrase to capture a lot of your central examples.
Substantive definition. I don't think "adopting someone else's view because of a social contract to do so" is exactly what you mean. It suggests that if someone were not to defer in one of these cases, they'd be violating a social contract (or at least a norm or expectation), whereas I think you want to include lots of instances where that's not the case (e.g. you might defer as a solution to the unilateralist's curse even if you were under no implicit contract to do so). Most of your examples also seem to be more about acting based on someone else's view or a norm/rule/process/institution and not really about adopting their view.[1] This seems important since I think you're trying to create space for people to coordinate by acting against their own view while continuing to hold that view.
I actually think the epistemics v. action distinction is a cleaner distinction so I might base your categories just on whether you're changing your views v. your actions (though I suspect you considered this and decided against).
***
Brainstorm of other names for non-epistemic deferring (none are great). Pragmatic deferring. Action deferring. Praxeological deferring (eww). Deferring for coordination.
(I actually suspect that you might just want to call this something other than deferring).
[1] Technically, you could say you're adopting the view that you should take some action but that seems confusing.
Owen Cotton-Barratt @ 2022-05-16T18:09 (+4)
Perhaps "deferring on views" vs "delegating choices" ?
HowieL @ 2022-05-16T19:33 (+2)
I think that's an improvement though "delegating" sounds a bit formal and it's usually the authority doing the delegating. Would "deferring on views" vs "deferring on decisions" get what you want?
Owen Cotton-Barratt @ 2022-05-16T21:02 (+3)
No, that doesn't work because epistemic deferring is also often about decisions, and in fact one of the key distinctions I want to make is when someone is deferring on a decision how that can be for epistemic or authority reasons, and how those look different.
I agree it's slightly awkward that authorities often delegate, but I think that that's usually delegating tasks; "delegating choices" to me has much less connotation of a high-status person delegating to a low-status person.
Although ... one of the examples of "deferring to authority" in my sense is a boss deferring to the authority of a subordinate after the subordinate has been tasked with making a decision, even though the boss disagrees and has the power to override it. With this example, "delegating choice" has very much the right connotation, and "deferring to authority" feels a bit of a stretch.
Vaidehi Agarwalla @ 2022-05-17T01:35 (+2)
Just to make sure I understand correctly: is "delegating choice" "delegating a choice (of an action to be made)"?
If so, I think this is a much better phrase at least than deferring to authority, and would even propose editing the OP to suggest this as an alternative phrase / address this so that others don't get the wrong impression - based on our conversation it seems we have more agreement than I would have guessed from reading the OP alone.
HowieL @ 2022-05-16T22:28 (+2)
Yeah that does sell me a bit more on delegating choice.
Vaidehi Agarwalla @ 2022-05-17T01:46 (+2)
A post related to the importance of delegating choice, though one that was not framed as a trade-off between buying into a thing vs doing it, is Jan Kulveit's What to do with people from a few years ago.
Owen Cotton-Barratt @ 2022-05-13T15:55 (+14)
I think this is getting downvotes and I'm curious whether this is because:
- People are disagreeing with the conclusions?
- It's poorly explained/confusing?
- Something about tone is rubbing people the wrong way?
- Something else?
MichaelPlant @ 2022-05-16T12:13 (+8)
[Writing in a personal capacity, etc.]
I found this post tone-deaf, indeed chilling, when I read it, in light of the current dynamics of the EA movement. I think it's the combination of:
(1) lots of money appearing in EA (with the recognition this might be a big problem for optics and epistemics and there are already 'bad omens')
(2) the central bits of EA seeming to obviously push an agenda (EA being ‘just longtermism' now, with CEA's CEO, Max Dalton, indicating their content will be "70-80% longtermism", and CEA's Julia Wise suggesting people shouldn't talk to high net worths themselves, but should funnel them towards LongView)
(3) this post then saying people should defer to authority.
Taken in isolation, these are somewhat concerning. Taken together, they start to look frightening - of the flavour, "join our elite society, see the truth, obey your leaders".
I am pretty sure anyone reading this will agree that this is not how we want EA either to be or to be perceived to be. However, things do seem to be moving in that direction, and I don't think this post helped - sorry, Owen, I am sure you wrote it with the best of intentions. But the road to hell, pavements, etc.
Khorton @ 2022-05-16T19:06 (+19)
I am concerned about some of the long-termism push but didn't get that vibe from this post, as an alternate perspective
Edit: wow why is Michael getting downvoted though, wtf? different people can have different impressions of the tone of a written piece of work, it's not harmful to point it out
MichaelPlant @ 2022-05-17T09:17 (+7)
Edit: wow why is Michael getting downvoted though, wtf?
Perhaps people didn't like the cult-ish comparison? But criticising someone for saying they are feeling something is cult-ish is, um, well, pretty cult-ish...
Or perhaps it's people who can't properly distinguish between "criticising because you care and want to improve something" and "criticising to be mean" and mistakenly assume I'm doing the latter (despite my strenuous attempts to make it clear I am doing the former).
Owen Cotton-Barratt @ 2022-05-17T15:59 (+9)
I sort of guess the second thing? Although I never downvoted, at least I felt a little defensive and negative about "tone-deaf, indeed chilling", and didn't upvote despite having found your comment useful!
(I've now noticed the discrepancy and upvoted it)
calebp @ 2022-05-22T17:44 (+4)
I don't think we should only downvote harmful things, we should instead look at the amount of karma and use our votes to push the score to the value we think the post should be at.
I downvoted the comment because:
- Saying things like "...obviously push an agenda..." and "I'm pretty sure anyone reading this..." has persuasion-y vibes which I don't like.
- Saying "this post says people should defer to authority" is a bit of a straw/weak man and isn't very charitable.
Owen Cotton-Barratt @ 2022-05-22T21:23 (+19)
Using votes to push towards the score we think it should be at sounds worse than just individually voting according to some thresholds of how good/helpful/whatever a post needs to be? I'm worried about zero sum (so really negative sum because of the effort) attempts to move karma around where different people are pushing in different ways, where it's hard to know how to interpret the results, compared to people straightforwardly voting without regard to others' votes.
At least, if we should be voting to push things towards our best guess I think the karma system should be reformed to something that plays nice with that -- e.g. each individual gives their preferred score, and the displayed karma is the median.
calebp @ 2022-05-23T01:55 (+4)
(I think that the pushing-towards-a-score thing wasn't a crux in downvoting; I think there are lots of reasons to downvote things that aren't harmful, as outlined in the 'how to use the forum' post / moderator guidelines)
I think that karma is supposed to be a proxy for the relative value that a post provides.
I'm not sure what you mean by zero-sum here, but I would have thought that the control system type approach is better as the steady-state values will be pushed towards the mean of what users see as the true value of the post. I think that this score + total number of votes is quite easy to interpret.
The everyone-voting-independently thing performs poorly when some posts have many more views than others (so it seems to be tracking something more like how many people saw it and liked it, rather than whether the post is high quality).
I think I misunderstand your concern, but the control system approach seems, on the surface to be much better to me, but I am keen to find the crux here, if there is one.
Owen Cotton-Barratt @ 2022-05-16T16:07 (+14)
Interesting, thanks.
So, my immediate reaction is that I can feel that kind of concern, but I think the "see the truth, obey your leaders" is exactly the kind of dynamic I'm worried about! & then I'm trying to help avoid it by helping to disambiguate between epistemic deferring and deferring to authority (because conflating them is where I think a lot of the damage comes from).
So then I'm wondering if I've made some bad branding decisions (e.g. should I have used a different term for what I called "deferring to authority"? It's meant to evoke that someone has authority in a particular domain, not some kind of general purpose authority, and not that they know a lot), or if I'm failing to frame my positions correctly? I guess at least a bit of the latter, since it sounds like you read my post as saying people should defer more? Which definitely wasn't something I intended to say (I'm confused; I'd like to see more deferring of some types and less of other types; I guess overall I'd be into a bit less deferring but not confident enough about that that I'd want to make it a headline).
Vaidehi Agarwalla @ 2022-05-16T17:41 (+16)
(fwiw I upvoted this post, because I thought it raised a lot of interesting points that are worth discussing despite disagreeing some bits).
In sum: I think your post sometimes lacks specificity which makes people think you're talking more generally than (I suspect) you are.
- Who exactly you're proposing doesn't buy into the agenda - this is left vague in your post. Are you envisioning 20% of people? 50%? What kinds of roles are these folks in? Is it only junior level non-technical roles or even mid-managers doing direct work?
Those details matter because I think I'd be fine with e.g. junior ops people at an AI org not fully buying the specific research agenda of that org, but I'm not sure about the other roles here.
- Who do you count as the EA community or movement? I think if we are thinking big-tent EA, where you have people with the needed skills for the movement but not necessarily a deep understanding of EA, I'm more sympathetic to this argument. But if we're thinking core-community EA, where many people are doing things like community building or for whom EA is a big part of their lives, I feel much more uncomfortable with people deferring to authority - perhaps I feel particularly uncomfortable with people in the meta space deferring to authority.
Owen Cotton-Barratt @ 2022-05-16T21:43 (+8)
I vibe with the sentiment "particularly uncomfortable with people in the meta space deferring to authority", but I think it's too strong. e.g. I think it's valuable for people to be able to organize big events, and delegate tasks among the event organizers.
Maybe I'm more like "I feel particularly uncomfortable with people in the meta space deferring without high bandwidth, and without explicit understanding that that's what they're doing".
Vaidehi Agarwalla @ 2022-05-17T01:41 (+5)
I think the important thing with delegation, which Howie pointed out, is that there is a social contract in the example you gave of event organising (between the volunteer / volunteer manager, or employer / contractor), where I'd expect that, in the process of choosing to sign up for this job, the person makes a decision based on their own thinking (or epistemic deference) to contribute to this event. I think this is what you mean by high bandwidth?
If so, I feel in agreement with the statement: "I feel particularly uncomfortable with people in the meta space delegating choice without high bandwidth, and without explicit understanding that that's what they're doing"
Owen Cotton-Barratt @ 2022-05-16T21:39 (+3)
I'm fine with junior ops people at an AI org being not really at all bought into the specific research agenda.
I'm fine with senior technical people not being fully bought in -- in the sense that maybe they think if it were up to them a different agenda would be slightly higher value, or that they'd go about things a slightly different way. I think we should expect that people have slightly different takes, and don't get the luxury of ironing all of those differences out, and that's pretty healthy. (Of course I like them having a go at discussing differences of opinion, but I don't think failure to resolve a difference means that the they need to adopt the party line or go find a different org.)
Vaidehi Agarwalla @ 2022-05-17T01:23 (+2)
That makes sense, and feels mostly in line with what I would imagine.
Maybe this is a small point (since there will be many more junior than senior roles in the long run) : I feel like the senior group would likely join an org for many other reasons than deference to authority (e.g. not wanting to found an org themselves, wanting to work with particular people they feel they could get a good work environment from, or because of epistemic deference). It seems like in practice those would be much stronger motivating reasons than authority, and I'm having a hard time picturing someone doing this in practice.
MichaelPlant @ 2022-05-17T10:16 (+6)
Okay, well, just to report that what you said by way of clarification was reassuring but not what I picked up originally from your post! I agree with Vaidehi below that an issue was a lack of specificity, which led to me reading it as a pretty general comment.
Reading your other comments, it seems what you're getting at is a distinction between trusting someone is right without understanding why vs just following their instructions. I agree that there's something there: to e.g. run an organisation, it's sometimes impractical or unnecessary to convince someone of your entire worldview vs just ask them to do something.
FWIW, what I see lots of in EA, worries me, and I was hoping your post would be about, is that people defer so strongly to community leaders that they refuse to even engage with object-level arguments against whatever it is that community leaders believe. To draw from a personal example, quite often when I talk about measuring wellbeing, people will listen and then say something to the effect of "what you say seems plausible, I can't think of any objections, but I'm going to defer to GiveWell anyway". Deferring may have a time and a place, but presumably we don't want deference to this extent.
Vaidehi Agarwalla @ 2022-05-16T17:20 (+5)
"Deferring to experts" might be a less loaded term. Also defining what experts are especially for a lot of EA fields that are newer and less well established could help.
Owen Cotton-Barratt @ 2022-05-16T17:49 (+2)
"Deferring to experts" carries the wrong meaning, I think? At least to me that sounds more like epistemic deferring.
An alternative to "deferring to authority" a couple of people have suggested to me is "delegating", which I sort of like (although maybe it's confusing if one of the paradigm examples is delegating to your boss).
Vaidehi Agarwalla @ 2022-05-17T02:28 (+8)
In light of the other discussions, delegating choice seems better than deferring to experts.
calebp @ 2022-05-13T13:49 (+14)
Thanks for writing this, I thought it was great.
(Apologies if this is already included, I have checked the post a few times but possible that I missed where it's mentioned.)
Edit: I think you mention this in social defering (point 2).
One dynamic that I'm particularly worried about is belief double counting due to deference. You can imagine the following scenario:
Jemima: "People who's name starts with J are generally super smart."
Mark: [is a bit confused, but defers because Jemima has more experience with having a name that starts with J] "hmm, that seems right"
[Mary joins conversation]
Mary: [hmm, seems odd, but 2 people think this and I'm just 1 person, so I should update towards their position] "hmm, I can believe that"
Bill: [hmm, seems odd, but 3 people think this and I'm just 1 person, so I should update towards their position] "hmm, I can believe that"
From Bill's perspective it looks like there are 3 pieces of evidence pointing in the direction of a hypothesis but really there was just one piece (Jemima's experience) and a bunch of parroting.
I don't think we often have these literal conversations, but sometimes I feel confused and find myself doing belief-aggregation-type things in conversations to make progress on some question. I think it's helpful to stop and be careful when making moves like "hmm, most people here seem to think X, therefore I should update in that direction", and first check how much the individuals involved are themselves deferring to each other (or to someone upstream of them), both to form better beliefs myself and to avoid polluting the epistemic environment for others.
Distinguishing between your "impressions" and "all-things-considered view" is helpful for this too.
Another way of saying this: it can be hard to distinguish "great minds think alike" from "highly correlated error sources".
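A minimal simulation of this cascade (the prior and evidence strength are made-up numbers, purely for illustration): only Jemima's observation is real evidence, but each later speaker treats every opinion voiced so far as independent evidence, so Bill ends up far more confident than is warranted.

```python
# Toy model of belief double-counting via deference (hypothetical numbers).

def update(prior, likelihood_ratio):
    """Bayesian update in odds form."""
    o = (prior / (1 - prior)) * likelihood_ratio
    return o / (1 + o)

prior = 0.1        # everyone's shared prior in the hypothesis
evidence_lr = 3.0  # strength of Jemima's one genuine observation

jemima = update(prior, evidence_lr)  # the only warranted update: ~0.25

# Each later person naively treats every opinion voiced so far as an
# independent observation of the same strength -- the "parroting" step.
opinions = [jemima]
for name in ["Mark", "Mary", "Bill"]:
    belief = update(prior, evidence_lr ** len(opinions))
    opinions.append(belief)

print(f"warranted: {jemima:.2f}, Bill: {opinions[-1]:.2f}")  # 0.25 vs 0.75
```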
Owen Cotton-Barratt @ 2022-05-13T16:14 (+3)
Yeah I briefly alluded to this but your explanation is much more readable (maybe I'm being too terse throughout?).
My take is "this dynamic is worrying, but seems overall less damaging than deferral interfering with belief formation, or than conflation between epistemic deferring and deferring to authority".
calebp @ 2022-05-13T16:46 (+1)
I think I roughly agree, although I haven't thought much about the epistemic vs authority deferring thing before.
Idk if you were too terse, it seemed fine to me. That said, I would have predicted this would be around 70 karma by now, so I may be poorly calibrated on what is appealing to other people.
John G. Halstead @ 2022-05-13T09:54 (+14)
i thought this post by Huemer was a nice discussion of deference - https://fakenous.net/?p=550
Owen Cotton-Barratt @ 2022-05-13T15:52 (+2)
Nice, thanks!
Khorton @ 2022-05-13T22:47 (+8)
I think we should be willing to accept more "people work on agendas they're not fully bought into" if the alternatives are "there are a bunch of epistemic distortions to get people to buy into agendas" and "nobody can make bets which involve coordinating more than 6 people".
YES!!
Ty @ 2022-05-14T15:10 (+7)
My view is that for anything reasonably consequential (i.e. potentially worth the time spent investigating), one should at least briefly probe before deferring, as (a) virtually everyone lies at least occasionally, and (b) popular opinions are often clearly dubious due to the inertia they carry within a group (even a group of experts) from other people deferring without investigating (this can result in evidence needing to be overwhelming to shift majority opinion and overcome this self-perpetuating cycle).
David Mears @ 2022-05-14T11:24 (+5)
I’m not sure what you mean by ‘bandwidth’, each time you use it.
Owen Cotton-Barratt @ 2022-05-16T21:45 (+8)
Communication channels which allow for lots of information and context to flow back and forth between people. e.g. if I read an article and then go to enact the plan described in the article, that's low-bandwidth. If I sit down with the author for three hours and interrogate them about the reasoning and ask what they think about my ideas for possible plan variations, that's high-bandwidth.