Collection of definitions of "good judgement"
By MichaelA @ 2022-03-14T14:14 (+31)
"Good judgement" seems to me a useful and widely used (within EA) term which points at an important skill. But I think there isn't a standard dictionary definition that matches what EA community members have in mind for the term, and that people often use the term without defining it. So I decided to collect and summarise in this post all the definitions of the term I'm aware of.
Please comment below if you know of or would suggest additional definitions or sources!
My summaries of people's definitions
- Linch (2020) states: "Good judgment can roughly be divided within 2 mostly distinct clusters:
- Forming sufficiently good world models given practical constraints.
- Making good decisions on the basis of such (often limited) models."
- Todd (2020) describes good judgement as "The ability to weigh complex information and reach calibrated conclusions", and says someone with good judgement is able to:
- "Focus on the right questions
- When answering those questions, synthesise many forms of weak evidence using good heuristics, and weigh the evidence appropriately
- Be resistant to common cognitive biases by having good habits of thinking
- Come to well-calibrated conclusions"
- Cotton-Barratt (2020) describes good judgement as being about mental processes which tend to lead to good decisions, and highlights three major ingredients: understanding, heuristics, and meta-level judgement. Sub-skills of understanding include model-building, having calibrated estimates, and just knowing relevant facts. Meta-level judgement is about how much weight to put on different perspectives.
- Shlegeris (2019) describes good judgement as being about "Spotting the important questions", "making quick guesses for answers to questions they care about", "think[ing] critically about evidence [and spotting] ways that it's misleading", "Having good sense about how the world works and what plans are likely to work", and "Knowing when they're out of their depth, knowing who to ask for help, knowing who to trust."
(It's possible that those summaries somewhat misrepresent these people's full views.)
Disclaimers
- This post does not attempt to address how to develop this skill or why it matters, even though those are of course key questions. (That said, some of the linked posts and the quotes from them do discuss this.)
- You might therefore want to just read the above summary and then the linked posts, rather than reading the excerpts I include below.
- I only spent an hour writing this post, and only included the definitions I already knew and remembered – I didn't conduct anything close to a thorough search.
- It's possible that it's weird/bad/copyright infringement for me to include such extensive quotes below; please let me know if you think that's the case.
The definitions in full & in context
These are in order of recency.
Various people, 2020, How can good generalist judgment be differentiated from skill at forecasting?
See the comments there. (But note that I personally think the top comment isn't useful; I think it casts much too wide a net and differs too much from how other people use the term "good judgement", which could then create misunderstandings between people.)
One comment I'd like to highlight is from Linch:
"Good judgment can roughly be divided within 2 mostly distinct clusters:
- Forming sufficiently good world models given practical constraints.
- Making good decisions on the basis of such (often limited) models.
Forecasting is only directly related to the former, and not the latter (though presumably there are some general skills that are applicable to both). In addition, within the "forming good world models" angle, good forecasting is somewhat agnostic to important factors like:
- Group epistemics. There are times where it's less important whether an individual has the right world models but that your group has access to the right plethora of models.
- It may be the case that it's practically impossible for a single individual to hold all of them, so specialization is necessary.
- Asking the right questions. Having the world's lowest Brier score on something useless is in some sense impressive, but it's not very impactful compared to being moderately accurate on more important questions.
- Correct contrarianism. As a special case of the above two points, in both science and startups, it is often (relatively) more important to be right about things that others are wrong about than it is to be right about everything other people are right about.
___
Note that "better world models" vs "good decisions based on existing models" isn't the only possible ontology to break up "good judgment."
- Owen uses understanding of the world vs heuristics.
- In the past, I've used intelligence vs wisdom."
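As a brief aside on the Brier score Linch mentions: it's simply the mean squared error between your probability forecasts and what actually happened, so lower is better. Here's a minimal illustrative sketch in Python, with made-up numbers:

```python
# Illustrative only: Brier score = mean squared error between probability
# forecasts and binary outcomes (1 = event happened, 0 = it didn't).

def brier_score(forecasts, outcomes):
    """Lower is better; 0.0 is perfect, 0.25 is what always guessing 50% gets you."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts on four yes/no questions
forecasts = [0.9, 0.7, 0.2, 0.6]
outcomes = [1, 1, 0, 0]

print(brier_score(forecasts, outcomes))  # 0.125
```

As Linch's point suggests, a low score like this only matters to the extent that the underlying questions were worth asking in the first place.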
Benjamin Todd, 2020, Notes on good judgement and how to develop it
"Judgement, which I roughly define as 'the ability to weigh complex information and reach calibrated conclusions,' is clearly a valuable skill.
[...]
Why good judgement is so valuable when aiming to have an impact
One reason is lack of feedback. We can never be fully certain which issues are most pressing, or which interventions are most effective. Even in an area like global health – where we have relatively good data on what works – there has been huge debate over the cost effectiveness of even a straightforward intervention like deworming. Deciding whether to focus on deworming requires judgement.
This lack of feedback becomes even more pressing when we come to efforts to reduce existential risks or help the long-term future, and efforts that take a more 'hits based' approach to impact. An existential risk can only happen once, so there's a limit to how much data we can ever have about what reduces them, and we must mainly rely on judgement.
Reducing existential risks and some of the other areas we focus on are also new fields of research, so we don't even have established heuristics or widely accepted knowledge that someone can simply learn and apply in place of using their judgement.
You may not need to make these judgement calls yourself – but you at least need to have good enough judgement to pick someone else with good judgement to listen to.
In contrast, in other domains it's easier to avoid relying on judgement. For instance, in the world of for-profit startups, it's possible (somewhat) to try things, gain feedback by seeing what creates revenue, and refine from there. Someone with so-so judgement can use other approaches to pursue a good strategy.
Other fields have other ways of avoiding judgement. In engineering you can use well-established quantitative rules to figure out what works. When you have lots of data, you can use statistical models. Even in more qualitative research like anthropology, there are standard 'best practice' research methods that people can use. In other areas you can follow traditions and norms that embody centuries of practical experience.
I get the impression that many in effective altruism agree that judgement is a key trait. In the 2020 EA Leaders Forum survey, respondents were asked which traits they would most like to see in new community members over the next five years, and judgement came out highest by a decent margin.
[...]
It's also notable that two of the other most desired traits – analytical intelligence and independent thinking – both relate to what we might call 'good thinking' as well. (Though note that this question was only about 'traits,' as opposed to skills/expertise or other characteristics.)
[...]
More on what good judgement is
I introduced a rough definition above, but there's a lot of disagreement about what exactly good judgement is, so it's worth saying a little more. Many common definitions seem overly broad, making judgement a central trait almost by definition. For instance, the Cambridge Dictionary defines it as:
'The ability to form valuable opinions and make good decisions'
While the US Bureau of Labor Statistics defines it as:
'Considering the relative costs and benefits of potential actions to choose the most appropriate one'
I prefer to focus on the rough, narrower definition I introduced at the start (and which was used in the survey I mentioned above), which makes judgement more clearly different from other cognitive traits:
'The ability to weigh complex information and reach calibrated conclusions'
More practically, I think of someone with good judgement as someone able to:
- Focus on the right questions
- When answering those questions, synthesise many forms of weak evidence using good heuristics, and weigh the evidence appropriately
- Be resistant to common cognitive biases by having good habits of thinking
- Come to well-calibrated conclusions
Owen Cotton-Barratt wrote out his understanding of good judgement, breaking it into 'understanding' and 'heuristics'. His notion is a bit broader than mine.
Here are some closely related concepts:
- Keith Stanovich's work on 'rationality', which seems to be something like someone's ability to avoid cognitive biases, and is ~0.7 correlated with intelligence (so, closely related but not exactly the same)
- The cluster of traits (listed later) that make someone a good 'superforecaster' in Philip Tetlock's work (Tetlock also claims that intelligence is only modestly correlated with being a superforecaster)
Here are some other concepts in the area, but that seem more different:
- Intelligence: I think of this as more like 'processing speed' – your ability to make connections, have insights, and solve well-defined problems. Intelligence is an aid in good judgement – since it lets you make more connections – but the two seem to come apart. We all know people who are incredibly bright but seem to often make dumb decisions. This could be because they're overconfident or biased, despite being smart.
- Strategic thinking: Good strategic thinking involves being able to identify top priorities, develop a good plan for working towards those priorities, and improve the plan over time. Good judgement is a great aid to strategy, but a good strategy can also make judgement less necessary (e.g. by creating a good backup plan, you can minimise the risks of your judgement being wrong).
- Expertise: Knowledge of the topic is useful all else equal, but Tetlock's work (covered more below) shows that many experts don't have particularly accurate judgement.
- Decision making: Good decision making depends on all of the above: strategy, intelligence, and judgement.
[...]
Forecasting isn't exactly the same as good judgement, but seems very closely related – it at least requires 'weighing up complex information and coming to calibrated conclusions', though it might require other abilities too. That said, I also take good judgement to include picking the right questions, which forecasting doesn't cover.
All told, I think there's enough overlap that if you improve at forecasting, you're likely going to improve your general judgement as well."
[Todd then discusses traits and practices of good forecasters and how to improve at forecasting, which is also relevant for good judgement.]
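To make the 'calibrated conclusions' idea in Todd's definition concrete: being calibrated means that, of the claims you assign (say) 90% confidence to, roughly 90% turn out to be true. Below is a minimal sketch of such a calibration check in Python, using made-up predictions:

```python
# Illustrative only: a simple calibration check over made-up predictions.
# Each entry is (stated probability, whether the event actually happened).
from collections import defaultdict

predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

by_confidence = defaultdict(list)
for prob, happened in predictions:
    by_confidence[prob].append(happened)

# Well-calibrated: events you call at 90% should happen roughly 90% of the time.
for prob, outcomes in sorted(by_confidence.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%} -> observed {hit_rate:.0%} over {len(outcomes)} predictions")
```

In practice you'd want far more than a handful of predictions per confidence bucket before reading much into the observed frequencies.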
Owen Cotton-Barratt, 2020, "Good judgement" and its components
[What follows is the post in its entirety, since it's short and entirely relevant here. There's also some good discussion in the comments which I won't copy or summarise here.]
"Meta: Lots of people interested in EA (including me) think that something like "good judgement" is a key trait for the community, but there isn't a commonly understood definition. I wrote a quick version of these notes in response to a question from Ben Todd, and he suggested posting them here. These represent my personal thinking about judgement and its components.
Good judgement is about mental processes which tend to lead to good decisions. (I think good decision-making is centrally important for longtermist EA, for reasons I won't get into here.) Judgement has two major ingredients: understanding of the world, and heuristics.
Understanding of the world helps you make better predictions about how things are in the world now, what trajectories they are on (so how they will be at future points), and how different actions might have different effects on that. This is important for helping you explicitly think things through. There are a number of sub-skills, like model-building, having calibrated estimates, and just knowing relevant facts. Sometimes understanding is held in terms of implicit predictions (perhaps based on experience). How good someone's understanding of the world is can vary a lot by domain, but some of the sub-skills are transferrable across domains.
You can improve your understanding of the world by learning foundational facts about important domains, and by practicing skills like model-building and forecasting. You can also improve understanding of a domain by importing models from other people, although you may face challenges of being uncertain how much to trust their models. (One way that models can be useful without requiring any trust is giving you clues about where to look in building up your own models.)
Heuristics are rules of thumb that you apply to decisions. They are usually held implicitly rather than in a fully explicit form. They make statements about what properties of decisions are good, without trying to provide a full causal model for why that type of decision is good. Some heuristics are fairly general (e.g. "avoid doing sketchy things"), and some apply to specific domains (e.g. "when hiring programmers, put a lot of weight on the coding tests").
You can improve your heuristics by paying attention to your experience of what worked well or poorly for you. Experience might cause you to generate new candidate heuristics (explicitly or implicitly) and hold them as hypotheses to be tested further. They can also be learned socially, transmitted from other people. (Hopefully they were grounded in experience at some point. Learning can be much more efficient if we allow the transmission of heuristics between people, but if you don't require people to have any grounding in their own experience or cases they've directly heard about, it's possible for heuristics to be propagated without regard for whether they're still useful, or if the underlying circumstances have changed enough that they shouldn't be applied. Navigating this tension is an interesting problem in social epistemology.)
One of the reasons that it's often good to spend time with people with good judgement is that you can make observations of their heuristics in action. Learning heuristics is difficult from writing, since there is a lot of subtlety about the boundaries of when they're applicable, or how much weight to put on them. To learn from other people (rather than your own experience) it's often best to get a chance to interrogate decisions that were a bit surprising or didn't quite make sense to you. It can also be extremely helpful to get feedback on your own decisions, in circumstances where the person giving feedback has high enough context that they can meaningfully bring their heuristics to bear.
Good judgement generally wants a blend of understanding the world and heuristics. Going just with heuristics makes it hard to project out and think about scenarios which are different from ones you've historically faced. But our ability to calculate out consequences is limited, and some forms of knowledge are more efficiently incorporated into decision-making as heuristics rather than understanding about the world.
One kind of judgement which is important is meta-level judgement about how much weight to put on different perspectives. Say you are deciding whether to publish an advert which you think will make a good impression on people and bring users to your product, but contains a minor inaccuracy which would require much more awkward wording to avoid. You might bring to bear the following perspectives:
A) The heuristic "don't lie"
B) The heuristic "have snappy adverts"
C) The implicit model which is your gut prediction of what will happen if you publish
D) The explicit model about what will happen that you drew up in a spreadsheet
E) The advice of your partner
F) The advice of a professional marketer you talked to
Each of these has something legitimate to contribute. The choice of how to reach a decision is a judgement, which I think is usually made by choosing how much weight to put on the different perspectives in this circumstance (including sometimes just letting one perspective dominate). These weights might in turn be informed by your understanding of the world (e.g. "marketers should know about this stuff"), and also by your own experience ("wow, my partner always seems to give good advice on these kinds of tricky situations").
I think that almost always the choice of these weights is a heuristic (and that the weights themselves are generally implicit rather than explicit). You could develop understanding of the world which specify how much to trust the different perspectives, but as boundedly rational actors, at some point we have to get off the understanding train and use heuristics as shortcuts (to decide when to spend longer thinking about things, when to wrap things up, when to make an explicit model, etc.).
Overall I hope that people can develop good object-level judgement in a number of important domains (strategic questions seem particularly tricky+important, but judgement about technical domains like AI, and procedural domains like how to run organisations also seem very strongly desirable; I suspect there's a long list of domains I'd think are moderately important). I also hope we can develop (and support people to develop) good meta-level judgement. When decision-makers have good meta-level judgement this can act as a force-multiplier on the presence of the best accessible object-level judgement in the epistemic system. It can also add a kind of robustness, making badly damaging mistakes quite a lot less likely."
Buck Shlegeris, 2019, Thoughts on doing good through non-standard EA career pathways
"When I say someone has good judgement, I mean that I think they're good at the following things:
- Spotting the important questions. When they start thinking about a topic (How good is leaflet distribution as an intervention to reduce animal suffering? How should we go about reducing animal suffering? How worried should we be about AI x-risk? Should we fund this project?), they come up with key considerations and realize what they need to learn more about in order to come to a good decision.
- Having good research intuitions. They are good at making quick guesses for answers to questions they care about. They think critically about evidence that they are being presented with, and spot ways that it's misleading.
- Having good sense about how the world works and what plans are likely to work. They make good guesses about what people will do, what organizations will do, how the world will change over time. They have good common sense about plans they're considering executing on; they rarely make choices which seem absurdly foolish in retrospect.
- Knowing when they're out of their depth, knowing who to ask for help, knowing who to trust.
These skills allow people to do things like the following:
- Figure out cause prioritization
- Figure out if they should hire someone to work on something
- Spot which topics are going to be valuable to the world for them to research
- Make plans based on their predictions for how the world will look in five years
- Spot underexplored topics
- Spot mistakes that are being made by people in their community; spot subtle holes in widely-believed arguments
I think it's likely that there exist things you can read and do which make you better at having good judgement about what's important in a field and strategically pursuing high impact opportunities within it. I suspect that other people have better ideas, but here are some guesses. (As I said, I don't think that I'm overall great at this, though I think I'm good at some subset of this skill.)
- Being generally knowledgeable seems helpful.
- Learning history of science (or other fields which have a clear notion of progress) seems good. I've heard people recommend reading contemporaneous accounts of scientific advancements, so that you learn more about what it's like to be in the middle of shifts.
- Perhaps this is way too specific, but I have been trying to come up with a general picture of how science advances by talking to scientists about how their field has progressed over the last five years and how they expect it to progress in the next five. For example, maybe the field is changing because computers are cheaper now, or because we can make more finely tuned lasers or smaller cameras, or because we can more cheaply manufacture something. I think that doing this has given me a somewhat clearer picture of how science develops, and what the limiting factors tend to be.
- I think that you can improve your skills at this by working with people who are good at it. To choose some arbitrary people, I'm very impressed by the judgement of some people at Open Phil, MIRI, and OpenAI, and I think I've become stronger from working with them.
- The Less Wrong sequences try to teach this kind of judgement; many highly-respected EAs say that the Sequences were very helpful for them, so I think it's worth trying them out. I found them very helpful. (Inconveniently, many people whose judgement I'm less impressed with are also big fans of the Sequences. And many smart EAs find them offputting or unhelpful.)"
MaxRa @ 2022-03-20T05:06 (+5)
Thanks Michael, that's a useful collection.
I think I like Linch's basic definition most, maybe because it's so close to the concepts of epistemic and instrumental rationality, which I found useful before. I'll extend his definition from the summary a little with points touched upon by the other definitions:
Good judgment can roughly be divided within 2 mostly distinct clusters:
- Forming sufficiently good world models given practical constraints.
- building world models that are useful for you and your community's world model portfolio
- efficiently seeking and using diverse forms of evidence
- learning models from people who have shown good judgement
- being able to derive calibrated forecasts from your models
- Making good decisions on the basis of such (often limited) models.
- strategically focussing on highest priority decisions
- using heuristics that are informed by and selected based on feedback
- seeking & weighing advice from people with relevant knowledge
- being reflective about cognitive biases & previous mistakes
(Note: Linch is currently my supervisor & Michael is another senior manager in my department, so take my positive feedback with a grain of salt :P)