The Dangers of a Little Knowledge

By Richard Y Chappell @ 2024-01-02T00:29 (+30)

This is a linkpost to https://rychappell.substack.com/p/the-dangers-of-a-little-knowledge

TL;DR: Sometimes paired philosophical mistakes (mostly) cancel each other out, forming a protective equilibrium. A little knowledge is a dangerous thing: you don't want people to end up in the situation of knowing enough to see through the illusory guardrails, but not enough to navigate successfully without the illusion. I suggest six such pairs, where it seems important to correct both mistakes simultaneously.

Introduction

I'm struck by how often two theoretical mistakes manage to (mostly) cancel each other out. For example, I think that common sense ethical norms tend to do a pretty good job (albeit with significant room for improvement), in practice, while resting upon significant theoretical falsehoods. These falsehoods may be part of a "local maximum": if you corrected them, without making further corrections elsewhere, you could well end up with morally worse beliefs and practices.

This observation forms the kernel of truth in the claim that utilitarianism is self-effacing. Utilitarianism is not strictly self-effacing: I still expect the global maximum may be achieved by having entirely true moral beliefs (or a close enough approximation).[1] But most people are stubbornly irrational in various ways, which may make it better for them to have false beliefs of a sort that limit the damage done by their other irrationality. These paired mistakes then constitute a protective equilibrium that stops these people from veering off into severe practical error (such as naive utilitarianism).

It's important to note that these paired mistakes are not the only protective equilibria available. The corresponding paired truths also work! But a little knowledge is a dangerous thing: you don't want people to end up in the situation of knowing enough to see through the illusory guardrails, but not enough to navigate successfully without the illusion.

In this post, I'll suggest a few examples of such "paired mistakes":

  1. Using "collectivist" reasoning as a fudge to compensate for irrational views about individual efficacy.
  2. Using near-termism as a fudge to compensate for irrational cluelessness about the long term.
  3. Ignoring small probabilities as a fudge against Pascalian gullibility.
  4. Using deontology as a fudge to compensate for irrational naive instrumentalism.
  5. Tabooing inegalitarian empirical beliefs as a fudge for irrational (and unethical) essentializing of social groups.
  6. Viewing all procreative decisions as equally good, as a fudge against unethical coercive interference.[2]

Further suggestions welcome!

1. Inefficacy and Anti-individualism

Many people have false views about individual efficacy and expected value (see my Five Fallacies of Collective Harm) that lead them to underestimate the strength of our individualistic moral reasons to contribute to collective goods (like voting for the better candidate) and to reduce our contributions to collective bads (like pollution and environmental damage, or voting for the worse candidate, for that matter).

If you make this mistake, it would be good to also make the paired mistake of believing that you have collectivistic moral reasons based on group contributions. There are no (non-negligible) such reasons, as I prove in 'Valuing Unnecessary Causal Contributions'. But the false belief that there are such reasons can help motivate you to do as you ought, when you're too confused about inefficacy to be able to get the practical verdicts right for the right reasons.

Conversely: if you correctly understand why collectivist reasons are such a silly idea, it's very important that you also appreciate why there often are sufficient individualistic moral reasons to contribute to good things even when the chance of your act making a difference is very small. (Remember that All Probabilities Matter!)
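To get a feel for the arithmetic, suppose (with purely illustrative numbers) that your vote has a one-in-ten-million chance of swinging an election whose better outcome is worth $10 billion in aggregate benefits. The expected value of voting is then roughly (1/10,000,000) × $10,000,000,000 = $1,000, which is hardly negligible, despite the tiny chance that your individual vote makes the difference.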

2. Cluelessness and Anti-longtermism

Some people falsely believe that we cannot justifiably regard anything (even preventing nuclear war!) as having long-term positive expected value. I've previously argued that such cluelessness is less than perfectly rational, though it may itself be a useful protection against some forms of "naive instrumentalist" irrationality (see #4 below).

Still, if you make this mistake, it would be good to pair it with anti-longtermism, so you avoid decision paralysis and continue to do some good things (like trying to prevent nuclear war), albeit in partial ignorance of just how good these things are.

3. Pascalian Gullibility and Probability Neglect

Another form of misguided prior involves "Pascalian gullibility": giving greater-than-infinitesimal credence to claims that unbounded value depends upon your satisfying another's whims (e.g. their demand for your wallet), yielding a high "expected value" to blind compliance.
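To see how badly this can go, suppose (again with purely illustrative numbers) you assign a one-in-a-trillion credence to a mugger's claim that handing over your wallet will produce 10^20 units of value. The naive expected value of complying is then (1/10^12) × 10^20 = 10^8 units, swamping the trivial cost of the wallet, so the gullible prior licenses handing your wallet to anyone who makes a sufficiently grandiose threat.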

If you are disposed to make this mistake, it would be good to pair it with another: namely, the disposition to simply ignore any sufficiently small probabilities, effectively rounding the Pascalian mugger's threat down to the "zero" it really ought to have been all along. But this latter disposition is itself a kind of mistake (i.e. when dealing with better-grounded probabilities), as explained in my recent post: All Probabilities Matter. So it might be especially important to correct this pair.

4. Naive Instrumentalism and Anti-consequentialism

Many people (from academic censors to those who think that utilitarianism would actually justify Sam Bankman-Fried's crimes)[3] seem drawn to naive instrumentalism: the assumption that one's moral goals are apt to be better achieved via Machiavellian means than by pursuing them with honesty and integrity, constraining one's behaviour by tried-and-tested norms and virtues. Like most (all?) historical utilitarians, I reject naive instrumentalism as hubristic and incompatible with all we know of human fallibility and biased cognition. (See here for more on what sort of decision procedure I take to be rationally superior.)

Still, if you are (abhorrently) a naive instrumentalist, you'd best pair it with non-consequentialism to at least limit the damage your irrationality might otherwise cause!

5. Social Essentialism and Tabooed Empirical Inquiry

Most people are terrible at statistical thinking. As Sarah-Jane Leslie explains in 'The Original Sin of Cognition: Fear, Prejudice, and Generalization', people are natural "essentialists", prone to generalize "striking [i.e. threatening] properties" to entire groups based on even a tiny proportion of actual threats. (She compares the generics "Muslims are terrorists" with "mosquitos carry the West Nile virus".)

If you're bad at thinking about statistical differences, and prone to draw unwarranted (and harmful) inferences about individuals on this basis, then it might be best for you to also believe that any sort of inquiry into group differences is taboo and morally suspect. You should just take it on faith that all groups are inherently equal, if anything more nuanced would corrupt you.[4]

But of course there's no reason that any empirical possibility should prove morally corrupting to a clear thinker (rare though the latter may be). As I noted previously: "Just as opposition to homophobia shouldn't be contingent on the (rhetorically useful but morally irrelevant) empirical claim that sexual orientation is innate, so our opposition to racial discrimination shouldn't be contingent on empirical assumptions about genetics, IQ, or anything else."[5] Group-level statistics just aren't that relevant to how we should treat individuals, about whom we can easily obtain much more reliable evidence by directly assessing them on their own merits.

6. Illiberalism and Procreative Neutrality

Naive instrumentalists assume that illiberal coercion is often the best way to achieve moral goals. As a result, they imagine that pro-natalist longtermism must be a threat to reproductive rights (and to procreative liberty more generally).

I think this is silly because illiberalism is so obviously suboptimal. There's just no excuse to resort to coercion when incentives work better (by allowing individuals to take distinctive features of their situation into account).

But for all the illiberal naive instrumentalists out there, perhaps it is best if they also mistakenly believe in procreative neutrality, i.e., the claim that there are no reasons of beneficence to bring more good lives into existence.

Should we lie?

Probably depends on your audience! I'm certainly not going to, because I'm committed to intellectual honesty, and I trust that my readers aren't stupid. Plus, it's dangerous for the lies to be too widespread: plenty of smart people are going to recognize the in-principle shortcomings of collectivism, near-termism, probability neglect, deontology, moralizing empirical inquiry, and procreative neutrality. We shouldn't want such people to think that this commits them in practice to free riding, decision paralysis, Pascalian gullibility, naive instrumentalism, social essentialism, or procreative illiberalism. That would be both harmful and illogical.

So I think it's worth making clear (i) that these pairs are (plausibly) mistakes, but (ii) that it could be even worse to correct only one mistake of the pair, since together they form a protective equilibrium. To avoid bad outcomes, you should try to move straight from one protective equilibrium to another, avoiding the shortcomings of just "a little knowledge".

We should typically expect the accurate protective equilibrium to be practically superior to the thoroughly false one, since accurate beliefs do tend to be useful (with rare exceptions that one would need to make a case for). But if you don't think you can manage to make it all the way to the correct pairing, maybe best to stick with the old fudge for now!

 

  1. ^

    E.g., although I'm (like everyone) probably wrong about some things, I'm confident enough about the broad contours of my moral theory. And I'm not aware of any reason to think that any alternative broad moral outlook would be more beneficial in practice than the sort of view I defend. The only real danger I see is if people only go part way towards my view, miss out on the protective equilibrium that the full view offers, and instead end up in a "local minimum" for practicality. That would be bad. And maybe it would be difficult for some to make it all the way to my view, in which case it could be bad for them to attempt it. But that's very different from saying that the view itself is bad.

  2. ^

    I added this one after initial posting, thanks to Dan G.'s helpful comment on the public Facebook thread suggesting a general schema for paired mistakes involving (i) openness to wrongful coercion and (ii) mistakenly judging all options to be on a par.

  3. ^

    I think it's interesting, and probably not a coincidence, that people with naive instrumentalist empirical beliefs are overwhelmingly not consequentialists. (A possible explanation: commitment to actually do what's expectably best creates stronger incentives to think carefully and actually get the answer right, compared to critics whose main motivation may just be to make the view in question look bad. Alternatively, the difference may partly lie in selection effects: consequentialism may look more plausible to those who share my empirical belief that it typically prohibits intuitively "vicious" actions. Though it's striking that the censors actually endorse their short-sighted censorship. I'm not really sure how to explain why their empirical beliefs differ so systematically from those of free-speech-loving consequentialists.)

  4. ^

    I should stress that the "mistake" I'm attributing here is the taboo itself, not the resulting egalitarian beliefs. Due to the taboo, I have no idea what the first-order truth of the matter is. Maybe progressive dogma is 100% correct; it's just that, for standard Millian reasons, we cannot really trust this in the absence of free and open inquiry into the matter. Still, if you would be corrupted by any result other than progressive orthodoxy, then it would also seem best to just take that on faith and not inquire any further. But the central error here, I want to suggest, is the susceptibility to corruption in the first place. That just seems really stupid.

  5. ^

    I always worry about people who think there's such a thing as inherently "racist (empirical) beliefs". Like, suppose we're unpleasantly surprised, and the empirical claims in question turn out to be true. (Philosophers have imagined stranger things.) Are you suddenly going to turn into a racist? I'd hope not! But then you shouldn't think that any mere empirical contingency of this sort entails racism. Obviously we should be morally decent, and treat individuals as individuals, no matter what turns out to be the case as far as mere group statistics are concerned. The latter simply don't matter to how we ought to treat people, and everyone ought to appreciate this.

    Of course, conventionally "racist beliefs" may be (defeasible) evidence of racism, in the sense that the belief in question isn't evidentially supported, but appeals to racists. After all, if the only reason to believe it is "wishful thinking", and it wouldn't be worth wishing for unless you were racist, then the belief is evidence of racism. But this reasoning doesn't apply to more agnostic attitudes. This is because taboos prevent us from knowing what is actually evidentially supported: we know that people would say the same thing, for well-intentioned ideological reasons, no matter what the truth of the matter was. (Naive instrumentalism strikes again.)


Stefan_Schubert @ 2024-01-02T12:01 (+8)

I'm struck by how often two theoretical mistakes manage to (mostly) cancel each other out.

If that's so, one might wonder why that happens.

In these cases, it seems that there are three questions; e.g.:

1) Is consequentialism correct?
2) Does consequentialism entail Machiavellianism?
3) Ought we to be Machiavellian?

You claim that people get the answers to the first two questions wrong, but the answer to the third question right, since the two mistakes cancel each other out. In effect, two incorrect premises lead to a correct conclusion.

 It's possible that in the cases you discuss, people tend to have the firmest intuitions about question 3) ("the conclusion"). E.g. they are more convinced that we ought not to be Machiavellian than that consequentialism is correct/incorrect or that consequentialism entails/does not entail Machiavellianism.

If that's the case, then it would be unsurprising that mistakes would cancel each other out. E.g. someone who would start to believe that consequentialism entails Machiavellianism would be inclined to reject consequentialism, since they otherwise would need to accept that we ought to be Machiavellian (which they by hypothesis don't do).

(Effectively, I'm saying that people reason holistically, reflective equilibrium-style; and not just from premises to conclusions.)

A corollary of this is that it's maybe not as common as one might think that "a little knowledge" is as dangerous as one might believe. Suppose that someone initially believes that consequentialism is wrong (Question 1), that consequentialism entails Machiavellianism (Question 2), and that we ought not to be Machiavellian (Question 3). They then change their view on Question 1, adopting consequentialism. That creates an inconsistency between their three beliefs. But if they have firmer beliefs about Question 3 (the conclusion) than about Question 2 (the other premise), they'll resolve this inconsistency by rejecting the other incorrect premise, not by endorsing the dangerous conclusion that we ought to be Machiavellian.

My argument is of course schematic, and how plausible it is will no doubt vary depending on which of the six cases you discuss we consider. I do think that "a little knowledge" is sometimes dangerous in the way you suggest. Nevertheless, I think the mechanism I discuss is worth remembering.

In general, I think a little knowledge is usually beneficial, meaning our prior that it's harmful in an individual case should be reasonably low. However, priors can of course be overturned by evidence in specific cases.

Richard Y Chappell @ 2024-01-02T14:31 (+4)

Thanks, yeah, I think I agree with all of that!

Arsalaan Alam @ 2024-01-04T14:04 (+1)

amazing read richard!

Adebayo Mubarak @ 2024-01-02T00:41 (+1)

This is a nice read. However, in your conclusion you ask the question "Should we lie?" While that may seem self-explanatory and intriguing, where is the place of diplomacy in this regard? As you've said, your type of audience matters, and others apart from your direct audience might (or will) see through the lies. So here lies the question: can diplomacy and frankness go pari passu?