Naïve vs Prudent Utilitarianism

By Richard Y Chappell🔸 @ 2022-11-11T23:53 (+97)

This is a linkpost to https://rychappell.substack.com/p/naive-vs-prudent-utilitarianism

Critics sometimes imagine that utilitarianism directs us to act disreputably whenever it appears (however fleetingly) that the act would have good consequences. Or whenever crudely calculating the most salient first-order consequences (in isolation) yields a positive number. This “naïve utilitarian” decision procedure is clearly daft, and not something that any sound utilitarian actually advocates. On the other hand, critics sometimes mistake this point for the claim that utilitarianism itself is plainly counterproductive, and necessarily advocates against its own acceptance. While that’s always a conceptual possibility, I don’t think it has any empirical credibility. Most who think otherwise are still making the mistake of conflating naïve utilitarianism with utilitarianism proper. The latter is a much more prudent view, as I’ll now explain.

Adjusting for Bias

Imagine an archer, trying to hit a target on a windy day. A naive archer might ignore the wind, aim directly at the target, and (predictably) miss as their arrow is blown off-course. A more sophisticated archer will deliberately re-calibrate, superficially seeming to aim “off-target” but in a way that makes them more likely to hit. Finally, a master archer will automatically adjust as needed, doing what (to her) seems obviously how to hit the target, though to a naïve observer it might look like she was aiming awry.

Is the best way to be a successful archer on a windy day to stop even trying to hit the target? Surely not. (It’s conceivable that an evil demon might interfere in such a way as to make this so — i.e., so that only people genuinely trying to miss would end up hitting the target — but that’s a much weirder case than what we’re talking about.) The point is just that naïve targeting is likely to miss. Making appropriate adjustments to one’s aim (overriding naive judgments of how to achieve the goal) is not at all the same thing as abandoning the goal altogether.

And so it goes in ethics. Crudely calculating the expected utility of (e.g.) murdering your rivals and harvesting their vital organs, and naively acting upon such first-pass calculations, would be predictably disastrous. This doesn’t mean that you should abandon the goal of doing good. It just means that you should pursue it in a prudent rather than naive manner.

Metacoherence prohibits naïve utilitarianism

“But doesn’t utilitarianism direct us to maximize expected value?” you may ask. Only in the same way that norms of archery direct our archer to hit the target. There’s nothing in either norm that requires (or even permits) it to be pursued naively, without obviously-called-for bias adjustments.

This is something that has been stressed by utilitarian theorists from Mill and Sidgwick through to R.M. Hare, Pettit, and Railton—to name but a few. Here’s a pithy listing from J.L. Mackie of six reasons why utilitarians oppose naïve calculation as a decision procedure:

  1. Shortage of time and energy will in general preclude such calculations.
  2. Even if time and energy are available, the relevant information commonly is not.
  3. An agent's judgment on particular issues is likely to be distorted by his own interests and special affections.
  4. Even if he were intellectually able to determine the right choice, weakness of will would be likely to impair his putting of it into effect.
  5. Even decisions that are right in themselves and actions based on them are liable to be misused as precedents, so that they will encourage and seem to legitimate wrong actions that are superficially similar to them.
  6. And, human nature being what it is, a practical working morality must not be too demanding: it is worse than useless to set standards so high that there is no real chance that actions will even approximate to them.

For all these reasons and more (e.g. the risk of reputational harm to utilitarian ethics),[1] violating people's rights is practically guaranteed to have negative expected value. You should expect that most people who believe themselves to be the rare exception are mistaken in this belief. First-pass calculations that call for rights violations are thus known to be typically erroneous. Generally-beneficial rules are “generally beneficial” for a reason. Knowing this, it would be egregiously irrational to violate rights (or other generally-beneficial rules) on the basis of unreliable rough calculations suggesting that doing so has positive “expected value”. Unreliable calculations don’t reveal the true expected value of an action. Once you take into account the known unreliability of such crude calculations, and the far greater reliability of the opposing rule, the only reasonable conclusion is that the all-things-considered “expected value” of violating the rule is in fact extremely negative.
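The structure of this argument can be made vivid with a toy calculation. This is purely illustrative: the function and all the numbers below are my own invented assumptions, not figures from the post; the point is only to show how a positive naive estimate flips negative once you discount for the known unreliability of such calculations.

```python
def adjusted_ev(naive_ev, p_calculation_correct, ev_if_wrong):
    """All-things-considered expected value of acting on an unreliable
    first-pass calculation.

    naive_ev: the payoff the crude calculation reports, if it happens
        to be right.
    p_calculation_correct: prior probability that a calculation of this
        kind is actually correct (low for rule-violating conclusions,
        since most who believe themselves the rare exception are wrong).
    ev_if_wrong: the (typically very negative) outcome when the
        calculation is mistaken and the generally-beneficial rule was
        right after all.
    """
    return (p_calculation_correct * naive_ev
            + (1 - p_calculation_correct) * ev_if_wrong)

# Assumed numbers: a first-pass calculation says a rights violation
# yields +100 units of good, but calculations of this kind are correct
# only 5% of the time, and when wrong the violation costs 1000 units.
print(adjusted_ev(naive_ev=100, p_calculation_correct=0.05,
                  ev_if_wrong=-1000))
# With these assumptions the all-things-considered EV is deeply negative,
# even though the naive calculation reported a positive number.
```

On these (assumed) numbers the adjusted expected value is -945: the naive "+100" was never the action's true expected value, because it ignored the higher-order evidence that such calculations usually go wrong.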

Indeed, as I argued way back in my PhD dissertation, this is typically so clear-cut that it generally shouldn’t even occur to prudent utilitarians to violate rights in pursuit of some nebulous “greater good”—any more than it occurs to a prudent driver that they could swerve into oncoming traffic. In this way, utilitarianism can even accommodate the thought that egregious violations should typically be unthinkable. (Of course one can imagine hypothetical exceptions—ticking time bomb scenarios, and such—but utilitarianism is no different from moderate deontology in that respect. I don’t take such wild hypotheticals to be relevant to real-life practical ethics.)

Prudent Utilitarians are Trustworthy

In light of all this, I think (prudent, rational) utilitarians will be much more trustworthy than is typically assumed. It’s easy to see how one might worry about being around naïve utilitarians—who knows what crazy things might seem positive-EV to them in any fleeting moment? But prudent utilitarians abide by the same co-operative norms as everyone else (just with heightened beneficence and related virtues), as Stefan Schubert & Lucius Caviola explain in ‘Virtues for Real-World Utilitarians’:

While it may seem that utilitarians should engage in norm-breaking instrumental harm, a closer analysis reveals that it often carries large costs. It would lead to people taking precautions to safeguard against these kinds of harms, which would be costly for society. And it could harm utilitarians’ reputation, which in turn could impair their ability to do good. In light of such considerations, many utilitarians have argued that it is better to respect common sense norms. Utilitarians should adopt ordinary virtues like honesty, trustworthiness, and kindness. There is a convergence with common sense morality… [except that] Utilitarians can massively increase their impact through cultivating some key virtues that are not sufficiently emphasized by common sense morality…

This isn’t Rule Utilitarianism

I’ve argued that prudent utilitarians will follow reliable rules as a means to performing better actions—doing more good—than they would through naively following unreliable, first-pass calculations. When higher-order evidence is taken into account, prudent actions are the ones that actually maximize expected value. It’s a straightforwardly act-utilitarian view. Like the master archer, the prudent utilitarian’s target hasn’t changed from that of their naïve counterpart. They’re just pursuing the goal more competently, taking naïve unreliability into account, and making the necessary adjustments for greater accuracy in light of known biases.

There are a range of possible alternatives to naïve utilitarianism that aren’t always clearly distinguished. Here’s how I break them down:

(1) Prudent (“multi-level”) utilitarian: endorses act-utilitarianism in theory, motivated by utilitarian goals, takes into account higher-order evidence of unreliability and bias, and so uses good rules as a means to more reliably maximize (true) expected value.

(2) Railton’s “sophisticated” utilitarian: endorses act-utilitarianism in theory, but has whatever (potentially non-utilitarian) motivations and reasoning they expect to be for the best.

(3) Self-effacing utilitarian: Ex-utilitarian, gave up the view on the grounds that doing so would be for the best.

(4) Rule utilitarian: not really consequentialist; moral goal is not to do good, but just to act in conformity with rules that would do good in some specified — possibly distant — possible world. (Subject to serious objections.)


[1] As we stress on utilitarianism.net [fn 2]: “This reputational harm is far from trivial. Each individual who is committed to (competently) acting on utilitarianism could be expected to save many lives. So to do things that risk deterring many others in society (at a population-wide level) from following utilitarian ethics is to risk immense harm.”


Lauren Maria @ 2022-11-13T21:42 (+8)

I wish you would engage more with other philosophers who speak about utilitarianism, especially since (reading the comments on this thread) you appear to be taken as having some kind of authority on the topic within the EA community even though other prominent philosophers disagree with your takes. 

Chris Bertram posted this today, for example. Here are two quotes from the post:

"Don’t get me wrong utilitarianism is a beautiful, systematic theory, a lovely tool to help navigate acting in the world in a consistent and transparent matter. When used prudently it’s a good way to keep track of one’s assumptions and the relationship between means and ends. But like all tools it has limitations. And my claim is that the tradition refuses to do systematic post-mortems on when the tool is implicated in moral and political debacles. Yes, somewhat ironically, the effective altruism community (in which there is plenty to admire) tried to address this in terms of, I  think, project failure. But that falls short in willing to learn when utilitarianism is likely to make one a danger to innocent others."

"By framing the problem as Mr. Bankman-Fried’s “integrity” and not the underlying tool, MacAskill will undoubtedly manage to learn no serious lesson at all. I am not implicating utilitarianism in the apparent ponzi scheme. But Bankman-Fried’s own description back in April of what he was up to should have set off alarm bells among those who associated with him–commentators noticed it bore a clear resemblance to a Ponzi.+ (By CrookedTimber standards I am a friend of markets.) Of course, and I say this especially to my friends who are utilitarians; I have not just discussed a problem only within utilitarianism; philosophy as a professional discipline always assumes its own clean hands, or finds ways to sanitize the existing dirt."



Finally, you should perhaps consider holding yourself to a higher standard, as a philosophy professor, than to straw-man people who are genuinely trying to engage with you philosophically, as you did here: 

"I don't think we should be dishonest.  Given the strong case for utilitarianism in theory, I think it's important to be clear that it doesn't justify criminal or other crazy reckless behaviour in practice.  Anyone sophisticated enough to be following these discussions in the first place should be capable of grasping this point."


Richard Y Chappell @ 2022-11-13T22:15 (+11)

Sorry, how is that a straw man?  I meant that comment perfectly sincerely.  Publius raised the worry that "such distinctions are too complex for a not insignificant proportion of the public", and my response simply explained that I expect this isn't true of those who would be reading my post.  I honestly have no idea what "standard" you think this violates.  I think you must be understanding the exchange very differently from how I understood it.  Can you explain your perspective further?

re: Bertram: thanks for the pointer, I hadn't seen his post.  Will need to find time to read it. From the quotes you've given, it looks like we may be discussing different topics.  I'm addressing what is actually justified by utilitarian theory.  He's talking about ways in which the tools  might be misused.  It isn't immediately obvious that we necessarily disagree. (Everything I say about "naive utilitarianism" is, in effect, to stress ways that the theory, if misunderstood, could be misused.)

Richard Y Chappell @ 2022-11-14T09:41 (+2)

I would genuinely appreciate an explanation for the downvotes.  There's evidently been some miscommunication here, and I'm not sure what it is.

Dancer @ 2022-11-14T13:33 (+13)

I didn't downvote, but I'd guess that Lauren and perhaps others understood your "Anyone sophisticated enough to be following these discussions in the first place should be capable of grasping this point." to mean something like "If you, dear interlocutor, were sophisticated enough then you'd grasp my point."

(Not confident in this though, as this interpretation reads to me like an insult rather than a straw man.)

Richard Y Chappell @ 2022-11-14T14:47 (+4)

Huh, okay, thanks. fwiw, I definitely did not intend any such subtext. (For one thing, Publius did not themselves deny the relevant distinction, but merely worried that some other ppl in the general population would struggle to follow it.  I was explicitly expressing my confidence in the sophistication of all involved in this discussion.)

freedomandutility @ 2022-11-12T18:02 (+4)

I agree with the distinction between naive and prudent utilitarianism, but I also think it all breaks down when you factor in the infinite expected value of mitigating extinction risks (assuming a potential techno utopian future, as some longtermists do) and the risk of AGI driven extinction in our lifetimes.

I’m pretty sure lots of pure prudent utilitarians would still endorse stealing money to fund AI safety research, especially as we get closer in time to the emergence of AGI.

(https://forum.effectivealtruism.org/posts/2wdanfCRFbNmWebmy/ea-should-consider-explicitly-rejecting-pure-classical-total)

BrownHairedEevee @ 2022-11-14T15:59 (+2)

I think the archer metaphor makes sense only if you accept moral realism. To an anti-realist, moral rules are just guides to ethical behavior. If naïve utilitarianism is a bad guide, and "following reliable rules" a good one, then why pretend that you're maximizing utility at all?

Richard Y Chappell @ 2022-11-14T16:53 (+12)

I don't see how metaethics makes any difference here.  Why couldn't an anti-realist similarly distinguish between (i) their moral goals, and (ii) the instrumental question of how best to achieve said goals?  (To pursue goals in a prudent, non-naive way is not to say that you are merely "pretending" to have those goals!)

E.g. you could, in principle, have two anti-realists who endorsed exactly the same decision-procedure, but did so for different reasons. (Say one cared intrinsically about the rules, while the other followed rules for purely instrumental reasons.) I think it makes sense to say that these two anti-realists have different fundamental values, even though they agree in practice.