Collection of definitions of "good judgement"

By MichaelA🔸 @ 2022-03-14T14:14 (+32)

“Good judgement” seems to me a useful term, widely used within EA, that points at an important skill. But I think there isn’t a standard dictionary definition that matches what EA community members have in mind for the term, and people often use the term without defining it. So I decided to collect and summarise in this post all the definitions of the term I’m aware of.

Please comment below if you know of or would suggest additional definitions or sources!

My summaries of people’s definitions

(It’s possible that these summaries somewhat misrepresent these people’s full views.)

Disclaimers

The definitions in full & in context

These are in order of recency.

Various people, 2020, How can good generalist judgment be differentiated from skill at forecasting?

See the comments there. (But note that I personally think the top comment isn’t useful; I think it casts much too wide a net and differs too much from how other people use the term “good judgement”, which could then create misunderstandings between people.)

One comment I’d like to highlight is from Linch:

“Good judgment can roughly be divided within 2 mostly distinct clusters:

Forecasting is only directly related to the former, and not the latter (though presumably there are some general skills that are applicable to both). In addition, within the "forming good world models" angle, good forecasting is somewhat agnostic to important factors like:

___

Note that "better world models" vs "good decisions based on existing models" isn't the only possible ontology to break up "good judgment."

- Owen uses understanding of the world vs heuristics.

- In the past, I've used intelligence vs wisdom.”

Benjamin Todd, 2020, Notes on good judgement and how to develop it

“Judgement, which I roughly define as ‘the ability to weigh complex information and reach calibrated conclusions,’ is clearly a valuable skill.

[...]

Why good judgement is so valuable when aiming to have an impact

One reason is lack of feedback. We can never be fully certain which issues are most pressing, or which interventions are most effective. Even in an area like global health – where we have relatively good data on what works – there has been huge debate over the cost effectiveness of even a straightforward intervention like deworming. Deciding whether to focus on deworming requires judgement.

This lack of feedback becomes even more pressing when we come to efforts to reduce existential risks or help the long-term future, and efforts that take a more ‘hits based’ approach to impact. An existential risk can only happen once, so there’s a limit to how much data we can ever have about what reduces them, and we must mainly rely on judgement.

Reducing existential risks and some of the other areas we focus on are also new fields of research, so we don’t even have established heuristics or widely accepted knowledge that someone can simply learn and apply in place of using their judgement.

You may not need to make these judgement calls yourself – but you at least need to have good enough judgement to pick someone else with good judgement to listen to.

In contrast, in other domains it’s easier to avoid relying on judgement. For instance, in the world of for-profit startups, it’s possible (somewhat) to try things, gain feedback by seeing what creates revenue, and refine from there. Someone with so-so judgement can use other approaches to pursue a good strategy.

Other fields have other ways of avoiding judgement. In engineering you can use well-established quantitative rules to figure out what works. When you have lots of data, you can use statistical models. Even in more qualitative research like anthropology, there are standard ‘best practice’ research methods that people can use. In other areas you can follow traditions and norms that embody centuries of practical experience.

I get the impression that many in effective altruism agree that judgement is a key trait. In the 2020 EA Leaders Forum survey, respondents were asked which traits they would most like to see in new community members over the next five years, and judgement came out highest by a decent margin. 

[...] 

It’s also notable that two of the other most desired traits – analytical intelligence and independent thinking – both relate to what we might call ‘good thinking’ as well. (Though note that this question was only about ‘traits,’ as opposed to skills/expertise or other characteristics.)

[...]

More on what good judgement is

I introduced a rough definition above, but there’s a lot of disagreement about what exactly good judgement is, so it’s worth saying a little more. Many common definitions seem overly broad, making judgement a central trait almost by definition. For instance, the Cambridge Dictionary defines it as:

‘The ability to form valuable opinions and make good decisions’

While the US Bureau of Labor Statistics defines it as:

‘Considering the relative costs and benefits of potential actions to choose the most appropriate one’

I prefer to focus on the rough narrower definition I introduced at the start (and which was used in the survey I mentioned above), which makes judgement more clearly different from other cognitive traits:

‘The ability to weigh complex information and reach calibrated conclusions’

More practically, I think of someone with good judgement as someone able to:

  1. Focus on the right questions
  2. When answering those questions, synthesise many forms of weak evidence using good heuristics, and weigh the evidence appropriately
  3. Be resistant to common cognitive biases by having good habits of thinking
  4. Come to well-calibrated conclusions

Owen Cotton-Barratt wrote out his understanding of good judgement, breaking it into ‘understanding’ and ‘heuristics.’ His notion is a bit broader than mine.

Here are some closely related concepts:

Here are some other concepts in the area, but that seem more different:

[...]

Forecasting isn’t exactly the same as good judgement, but seems very closely related – it at least requires “weighing up complex information and coming to calibrated conclusions”, though it might require other abilities too. That said, I also take good judgement to include picking the right questions, which forecasting doesn’t cover.

All told, I think there’s enough overlap that if you improve at forecasting, you’re likely going to improve your general judgement as well.”

[Todd then discusses traits and practices of good forecasters and how to improve at forecasting, which is also relevant for good judgement.]

Owen Cotton-Barratt, 2020, "Good judgement" and its components

[What follows is the post in its entirety, since it’s short and entirely relevant here. There’s also some good discussion in the comments which I won’t copy or summarise here.]

“Meta: Lots of people interested in EA (including me) think that something like "good judgement" is a key trait for the community, but there isn't a commonly understood definition. I wrote a quick version of these notes in response to a question from Ben Todd, and he suggested posting them here. These represent my personal thinking about judgement and its components.

Good judgement is about mental processes which tend to lead to good decisions. (I think good decision-making is centrally important for longtermist EA, for reasons I won't get into here.) Judgement has two major ingredients: understanding of the world, and heuristics.


Understanding of the world helps you make better predictions about how things are in the world now, what trajectories they are on (so how they will be at future points), and how different actions might have different effects on that. This is important for helping you explicitly think things through. There are a number of sub-skills, like model-building, having calibrated estimates, and just knowing relevant facts. Sometimes understanding is held in terms of implicit predictions (perhaps based on experience). How good someone's understanding of the world is can vary a lot by domain, but some of the sub-skills are transferrable across domains.

You can improve your understanding of the world by learning foundational facts about important domains, and by practicing skills like model-building and forecasting. You can also improve understanding of a domain by importing models from other people, although you may face challenges of being uncertain how much to trust their models. (One way that models can be useful without requiring any trust is giving you clues about where to look in building up your own models.)


Heuristics are rules of thumb that you apply to decisions. They are usually held implicitly rather than in a fully explicit form. They make statements about what properties of decisions are good, without trying to provide a full causal model for why that type of decision is good. Some heuristics are fairly general (e.g. "avoid doing sketchy things"), and some apply to specific domains (e.g. "when hiring programmers, put a lot of weight on the coding tests").

You can improve your heuristics by paying attention to your experience of what worked well or poorly for you. Experience might cause you to generate new candidate heuristics (explicitly or implicitly) and hold them as hypotheses to be tested further. They can also be learned socially, transmitted from other people. (Hopefully they were grounded in experience at some point. Learning can be much more efficient if we allow the transmission of heuristics between people, but if you don't require people to have any grounding in their own experience or cases they've directly heard about, it's possible for heuristics to be propagated without regard for whether they're still useful, or if the underlying circumstances have changed enough that they shouldn't be applied. Navigating this tension is an interesting problem in social epistemology.)

One of the reasons that it's often good to spend time with people with good judgement is that you can make observations of their heuristics in action. Learning heuristics is difficult from writing, since there is a lot of subtlety about the boundaries of when they're applicable, or how much weight to put on them. To learn from other people (rather than your own experience) it's often best to get a chance to interrogate decisions that were a bit surprising or didn't quite make sense to you. It can also be extremely helpful to get feedback on your own decisions, in circumstances where the person giving feedback has high enough context that they can meaningfully bring their heuristics to bear.


Good judgement generally wants a blend of understanding the world and heuristics. Going just with heuristics makes it hard to project out and think about scenarios which are different from ones you've historically faced. But our ability to calculate out consequences is limited, and some forms of knowledge are more efficiently incorporated into decision-making as heuristics rather than understanding about the world.

One kind of judgement which is important is meta-level judgement about how much weight to put on different perspectives. Say you are deciding whether to publish an advert which you think will make a good impression on people and bring users to your product, but contains a minor inaccuracy which would require much more awkward wording to avoid. You might bring to bear the following perspectives:

A) The heuristic "don't lie"

B) The heuristic "have snappy adverts"

C) The implicit model which is your gut prediction of what will happen if you publish

D) The explicit model about what will happen that you drew up in a spreadsheet

E) The advice of your partner

F) The advice of a professional marketer you talked to

Each of these has something legitimate to contribute. The choice of how to reach a decision is a judgement, which I think is usually made by choosing how much weight to put on the different perspectives in this circumstance (including sometimes just letting one perspective dominate). These weights might in turn be informed by your understanding of the world (e.g. "marketers should know about this stuff"), and also by your own experience ("wow, my partner always seems to give good advice on these kinds of tricky situations").

I think that almost always the choice of these weights is a heuristic (and that the weights themselves are generally implicit rather than explicit). You could develop understanding of the world which specifies how much to trust the different perspectives, but as boundedly rational actors, at some point we have to get off the understanding train and use heuristics as shortcuts (to decide when to spend longer thinking about things, when to wrap things up, when to make an explicit model, etc.).
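One way to make the perspective-weighting idea concrete is as a simple weighted average. The sketch below is purely illustrative and not from Cotton-Barratt's post: the perspective names are his examples from the advert scenario, but the weights and verdict scores are invented numbers, and in practice (as he notes) these weights are usually implicit rather than explicit.

```python
# Toy sketch of combining weighted "perspectives" on a decision.
# All weights and verdict scores below are hypothetical illustrations.

def weigh_perspectives(perspectives):
    """Return a weighted-average verdict in [-1, 1].

    Each entry is (name, weight placed on that perspective,
    its verdict: +1 = publish the advert, -1 = don't publish).
    """
    total_weight = sum(weight for _, weight, _ in perspectives)
    return sum(weight * verdict for _, weight, verdict in perspectives) / total_weight

perspectives = [
    ("heuristic: don't lie",           0.30, -1.0),
    ("heuristic: have snappy adverts", 0.10, +1.0),
    ("gut prediction (implicit model)", 0.15, +0.5),
    ("spreadsheet (explicit model)",   0.20, +0.8),
    ("partner's advice",               0.10, -0.5),
    ("marketer's advice",              0.15, +1.0),
]

verdict = weigh_perspectives(perspectives)
print(f"aggregate verdict: {verdict:+.2f}")  # positive leans 'publish'
```

Letting one perspective dominate, as the post mentions, corresponds to pushing its weight toward 1 and the others toward 0; the meta-level judgement is precisely the choice of these weights.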


Overall I hope that people can develop good object-level judgement in a number of important domains (strategic questions seem particularly tricky+important, but judgement about technical domains like AI, and procedural domains like how to run organisations also seem very strongly desirable; I suspect there's a long list of domains I'd think are moderately important). I also hope we can develop (and support people to develop) good meta-level judgement. When decision-makers have good meta-level judgement this can act as a force-multiplier on the presence of the best accessible object-level judgement in the epistemic system. It can also add a kind of robustness, making badly damaging mistakes quite a lot less likely.”

Buck Shlegeris, 2019, Thoughts on doing good through non-standard EA career pathways 

“When I say someone has good judgement, I mean that I think they’re good at the following things:

These skills allow people to do things like the following:

I think it’s likely that there exist things you can read and do which make you better at having good judgement about what’s important in a field and strategically pursuing high impact opportunities within it. I suspect that other people have better ideas, but here are some guesses. (As I said, I don’t think that I’m overall great at this, though I think I’m good at some subset of this skill.)”


MaxRa @ 2022-03-20T05:06 (+5)

Thanks Michael, that's a useful collection.

I think I like Linch's basic definition most, maybe because it's so close to the concepts of epistemic and instrumental rationality, which I found useful before. I'll extend his definition from the summary a little with points touched upon by the other definitions:

Good judgment can roughly be divided within 2 mostly distinct clusters:

(Note: Linch is currently my supervisor & Michael is another senior manager in my department, so take my positive feedback with a grain of salt :P)