"Good judgement" and its components
By Owen Cotton-Barratt @ 2020-08-19T23:30 (+63)
Meta: Lots of people interested in EA (including me) think that something like "good judgement" is a key trait for the community, but there isn't a commonly understood definition. I wrote a quick version of these notes in response to a question from Ben Todd, and he suggested posting them here. These represent my personal thinking about judgement and its components.
Good judgement is about mental processes which tend to lead to good decisions. (I think good decision-making is centrally important for longtermist EA, for reasons I won't get into here.) Judgement has two major ingredients: understanding of the world, and heuristics.
Understanding of the world helps you make better predictions about how things are in the world now, what trajectories they are on (and so how they will be at future points), and how different actions might affect those trajectories. This is important for helping you explicitly think things through. There are a number of sub-skills, like model-building, having calibrated estimates, and just knowing relevant facts. Sometimes understanding is held in terms of implicit predictions (perhaps based on experience). How good someone's understanding of the world is can vary a lot by domain, but some of the sub-skills are transferable across domains.
You can improve your understanding of the world by learning foundational facts about important domains, and by practising skills like model-building and forecasting. You can also improve understanding of a domain by importing models from other people, although you then face the challenge of deciding how much to trust those models. (One way that models can be useful without requiring any trust is by giving you clues about where to look in building up your own models.)
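To make the forecasting and calibration sub-skills concrete, here is a minimal sketch, my own illustration rather than anything from the post, of scoring past probability forecasts with the Brier score (all the forecasts and numbers below are invented):

```python
# Minimal sketch (not from the post) of one way to practise calibration:
# score your probability forecasts against outcomes with the Brier score.
# All forecasts below are hypothetical examples.

def brier_score(forecasts):
    """Mean squared gap between stated probability and outcome (0 = perfect, lower is better)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Each pair is (probability you assigned, what actually happened: 1 or 0).
my_forecasts = [
    (0.9, 1),  # "90% the report ships this month" -- it did
    (0.7, 0),  # "70% the candidate accepts the offer" -- they didn't
    (0.3, 0),  # "30% the event gets postponed" -- it didn't
]

print(f"Brier score: {brier_score(my_forecasts):.3f}")
# Always guessing 0.5 scores 0.25, so aim below that; tracking this over
# time gives the kind of feedback loop that improves calibrated estimates.
```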
Heuristics are rules of thumb that you apply to decisions. They are usually held implicitly rather than in a fully explicit form. They make statements about what properties of decisions are good, without trying to provide a full causal model for why that type of decision is good. Some heuristics are fairly general (e.g. "avoid doing sketchy things"), and some apply to specific domains (e.g. "when hiring programmers, put a lot of weight on the coding tests").
You can improve your heuristics by paying attention to your experience of what worked well or poorly for you. Experience might cause you to generate new candidate heuristics (explicitly or implicitly) and hold them as hypotheses to be tested further. Heuristics can also be learned socially, transmitted from other people. (Hopefully they were grounded in experience at some point. Transmitting heuristics between people makes learning much more efficient, but if people aren't required to ground them in their own experience, or in cases they've directly heard about, heuristics can keep propagating without regard for whether they're still useful, or whether the underlying circumstances have changed enough that they no longer apply. Navigating this tension is an interesting problem in social epistemology.)
One of the reasons it's often good to spend time with people with good judgement is that you can observe their heuristics in action. Learning heuristics from writing is difficult, since there is a lot of subtlety about the boundaries of when they're applicable, or how much weight to put on them. To learn from other people (rather than from your own experience), it's often best to get a chance to interrogate decisions that were a bit surprising or didn't quite make sense to you. It can also be extremely helpful to get feedback on your own decisions, in circumstances where the person giving feedback has enough context to meaningfully bring their heuristics to bear.
Good judgement generally calls for a blend of understanding of the world and heuristics. Going just with heuristics makes it hard to project out and think about scenarios different from the ones you've historically faced. But our ability to calculate out consequences is limited, and some forms of knowledge are more efficiently incorporated into decision-making as heuristics than as understanding of the world.
One kind of judgement which is important is meta-level judgement about how much weight to put on different perspectives. Say you are deciding whether to publish an advert which you think will make a good impression on people and bring users to your product, but contains a minor inaccuracy which would require much more awkward wording to avoid. You might bring to bear the following perspectives:
A) The heuristic "don't lie"
B) The heuristic "have snappy adverts"
C) The implicit model which is your gut prediction of what will happen if you publish
D) The explicit model about what will happen that you drew up in a spreadsheet
E) The advice of your partner
F) The advice of a professional marketer you talked to
Each of these has something legitimate to contribute. The choice of how to reach a decision is a judgement, which I think is usually made by choosing how much weight to put on the different perspectives in this circumstance (including sometimes just letting one perspective dominate). These weights might in turn be informed by your understanding of the world (e.g. "marketers should know about this stuff"), and also by your own experience ("wow, my partner always seems to give good advice on these kinds of tricky situations").
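To make the weighting idea concrete, here is a toy sketch, purely illustrative rather than anything from the post, of the advert decision as an explicit weighted aggregation of perspectives A-F (every score and weight below is invented):

```python
# Hypothetical sketch of the advert decision as an explicit weighted
# aggregation of perspectives A-F. Every score and weight is invented.

perspectives = {
    "A: heuristic 'don't lie'":      (-1.0, 0.30),  # (score for publishing, weight)
    "B: heuristic 'snappy adverts'": ( 0.6, 0.10),
    "C: gut prediction":             ( 0.4, 0.15),
    "D: spreadsheet model":          ( 0.5, 0.15),
    "E: partner's advice":           (-0.2, 0.10),
    "F: marketer's advice":          ( 0.8, 0.20),
}

# Weighted sum; letting one perspective dominate corresponds to giving
# it nearly all of the weight.
verdict = sum(score * weight for score, weight in perspectives.values())
print(f"Aggregate verdict: {verdict:+.2f}  (> 0 favours publishing)")
```

On this caricature, the meta-level judgement discussed next is the choice of the weights themselves, including the option of giving one perspective nearly all of the weight.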
I think that almost always the choice of these weights is itself a heuristic (and that the weights themselves are generally implicit rather than explicit). You could develop understanding of the world which specifies how much to trust the different perspectives, but as boundedly rational actors, at some point we have to get off the understanding train and use heuristics as shortcuts (to decide when to spend longer thinking about things, when to wrap things up, when to make an explicit model, etc.).
Overall I hope that people can develop good object-level judgement in a number of important domains (strategic questions seem particularly tricky and important, but judgement about technical domains like AI, and about procedural domains like how to run organisations, also seems strongly desirable; I suspect there's a long list of domains I'd think are moderately important). I also hope we can develop (and support people to develop) good meta-level judgement. When decision-makers have good meta-level judgement, this can act as a force-multiplier on the best accessible object-level judgement in the epistemic system. It can also add a kind of robustness, making badly damaging mistakes quite a lot less likely.
MichaelA @ 2020-08-20T13:47 (+8)
Nice post, thanks!
Some other posts which parts of this reminded me of, and which some readers might find interesting:
- Some thoughts on deference and inside-view models
- Information cascades
  - I was reminded of that concept by the paragraph ending "Navigating this tension is an interesting problem in social epistemology"
- Sequence thinking vs. cluster thinking
  - I was reminded of that post by the parts about deciding how much weight to put on different perspectives
- Improving the future by influencing actors' benevolence, intelligence, and power
  - (Disclaimer: Written by me)
  - I was reminded of that post by the final paragraph
  - That post largely supports the idea that it'd be quite valuable to facilitate the development of good judgement by EAs and perhaps members of related communities. But it also provides some caveats, as good judgement could in some cases be used in pursuit of counterproductive goals.
ofer @ 2020-08-20T03:18 (+7)
Thanks for writing this!
> Heuristics are rules of thumb that you apply to decisions. They are usually held implicitly rather than in a fully explicit form. They make statements about what properties of decisions are good, without trying to provide a full causal model for why that type of decision is good.
I think we usually need to have a good understanding of why a certain heuristic is good and what the implications of following it are (maybe you agree with this; it wasn't clear to me from the post). The world is messy and complex. We don't get to see the counterfactual world where we didn't follow the heuristic at a particular time, and the impact of following the heuristic may be dominated by flow-through effects.
Owen_Cotton-Barratt @ 2020-08-20T10:31 (+6)
Thanks. I think this is kind of nuanced, but here are some statements in the vicinity I agree with:
- Heuristics and understanding of the world are not separate magisteria, and can inform each other
- Understanding can tell us the implications of following different heuristics and let us choose
- Noticing that a heuristic seems to work well can lead us to question what about the world makes it work well (or just provide evidence for worlds where that heuristic would work well over ones where it wouldn't)
- In general, time spent thinking about and exploring the interplay between these often seems valuable to me
- Having a good understanding of why a heuristic is good can increase our trust in that heuristic
- Lack of understanding of why a heuristic is good, when we've spent time looking for such understanding, is evidence against the heuristic
  - Particularly if we can't even see a plausible mechanism, it can be significant evidence
On the other hand I think I disagree with your statement taken literally:
- I think heuristics are usually employed at a micro-scale, are implicit, and are numerous: we simply don't get to have a good understanding of most of them
- Even for heuristics that are explicit and have been promoted to our conscious attention, we sometimes justifiably have more trust in the heuristic than in our understanding of the underlying mechanisms
  - e.g. I do think "avoid doing sketchy things" is often a useful heuristic; my evidence base for this includes a bunch of direct and reported observations, as well as social proof of others' views. I'm sure I don't fully understand the boundaries of how to apply it (even the specification of "sketchy" is done implicitly). I've thought about why it seems good to avoid sketchy things, and have a partial understanding of the mechanisms, but I'm sure there's a lot of detail I don't understand there. But I don't think I need to fully understand those details to get value out of the heuristic. I would also have preferred that my past self put some weight on this heuristic, even before I'd tried to think through the mechanisms (although I'm glad I've since done that thinking).
ofer @ 2020-08-20T14:03 (+1)
Thank you for the thoughtful comment!
As an aside, when I wrote "we usually need to have a good understanding ..." I was thinking about explicit heuristics. Trying to understand the implications of our implicit heuristics (which may be hard to influence) seems somewhat less promising. Some of our implicit heuristics may be evolved mechanisms (including game-theoretical mechanisms) that are very useful for us today, even if we don't have the capacity to understand why.
Owen_Cotton-Barratt @ 2020-08-20T14:14 (+2)
I don't think there's always a clear line between implicit and explicit heuristics, e.g. often I think they might start out as implicit and then be made (partially) explicit in the process of reflecting on them.
If you're going to import an explicit heuristic, I think it's usually a good idea to have a good understanding of its mechanism. But you might forgo this requirement if you have enough trust in its provenance. Also, moderately often, hearing an explicit heuristic from someone else gives you a hypothesis: you can pay some attention to how it performs in different contexts, and then work out whether you want to give it any weight in your decision-making. (I think a lot of distilled advice has something of this nature.)
MaxRa @ 2020-08-26T09:10 (+5)
Thanks for writing this up, I found it very stimulating.
> (One way that models can be useful without requiring any trust is by giving you clues about where to look in building up your own models.)
Probably an edge case, but I wonder if an adversary could purposefully divert your attention away from important considerations. Thinking about it, I actually remember doing something like this in an adversarial board game, where I used "helpful" clues to direct the attention of somebody I was plotting against.
Another thought came up regarding incentives for good judgement: people seem to automatically develop good judgement in areas where they get feedback and have "skin in the game", for example when deciding how to get a tasty meal or with whom to talk about a personal problem. So I wonder how much attention we should put on the surrounding incentives for developing good judgement. Will we get feedback, e.g. from peers who look at our reasoning, or from failed predictions because we've developed the habit of making forecasts? Are we betting on our beliefs and making our track records public? There is probably much more to say about how we could better incentivize good judgement.
Ozzie Gooen @ 2021-04-17T22:03 (+4)
I've been thinking about this topic recently. One question that comes to mind: How much of Good Judgement do you think is explained by g/IQ? My quick guess is that they are heavily correlated.
My impression is that people with "good judgement" match closely with the people hedge funds really want to hire as analysts, or who make strong executives or product managers.
Owen_Cotton-Barratt @ 2021-04-17T23:45 (+4)
Yeah, my quick guess is that (as for many complex skills) g is very helpful, but that it's very possible to be high-g without being very good at the thing I'm pointing at (partially because feedback loops are poor, so people haven't necessarily had a good training signal for improving).
G Gordon Worley III @ 2020-08-20T18:07 (+2)
To what extent are you thinking (without so far explicitly saying it) that "good judgment" is a possible EA rebranding of LessWrong-style rationality?
Owen_Cotton-Barratt @ 2020-08-20T22:42 (+11)
Gosh, I wasn't (explicitly) thinking about branding at all. This is something I've been finding useful in my personal ontology, and I actually wasn't thinking about sharing it publicly until Ben suggested it; I thought "oh, that makes sense" and tidied up the notes to post here (with some quick mental checks that they didn't seem somehow harmful). I'm mildly embarrassed that I hadn't thought about how it could interact with branding of ideas -- but in some recent reflection I realised I was probably underweighting the value of making thinking public even when imperfect, so I'm not certain that there was any meta-level error here.
I think that there are legitimate questions here for me anyway, though: how much does my conception line up with LessWrong-style rationality, and/or why am I not just using that mental bucket? Definitely this is in a similar space. I guess I tend to think of "rationality" as referring to both a goal (think well) and a culture / set of content designed to facilitate that. I'm wanting to refer to the objective, but without taking much of a stance on the informational content people should consume to get better at it. I feel like there are lots of people in the world with a lot of elements of good judgement who have never heard of EA/rationality. I want to be able to point to them and what they're doing well, rather than have something that feels like a particular (niche?) school of thinking, so I don't really want strong associations with either EA or LessWrong.
G Gordon Worley III @ 2020-08-21T16:40 (+11)
Cool. Yeah, when I saw this it jumped out at me as potentially helping with a problem: there are a bunch of folks who are either EA-aligned or identify as EA and are also anti-LW, and I'd argue they are to some extent throwing the baby out with the bathwater. So having a nice way to rebrand and talk about some of the insights from LW-style rationality that are clearly present in EA, and that we might reasonably like to share with others, without actually relying on LW-centric content is useful.