Moral Misdirection (full post)
By Richard Y Chappell @ 2024-06-14T18:49 (+43)
This is a linkpost to https://www.goodthoughts.blog/p/moral-misdirection
I previously included a link to this as part of my trilogy on anti-philanthropic misdirection, but a commenter asked me to post the full text here for the automated audio conversion. This forum post combines my two substack posts on 'Moral Misdirection' and on 'Anti-Philanthropic Misdirection'. Apologies to anyone who has already read them.
Moral Misdirection
One can lie – or at least misdirect – by telling only truths.
Suppose Don shares news of every violent crime committed by immigrants (while ignoring those committed by native-born citizens, and never sharing evidence of immigrants positively contributing to society). He spreads the false impression that immigrants are dangerous and do more harm than good. Since this isn't true, and promulgates harmful xenophobic sentiments, I expect most academics in my social circles would judge Don very negatively, as both (i) morally bad, and (ii) intellectually dishonest.
It would not be a convincing defense for Don to say, "But everything I said is literally true!" What matters is that he led his audience to believe much more important falsehoods.[1]
I think broadly similar epistemic vices (not always deliberate) are much more common than is generally appreciated. Identifying them requires judgment calls about which truths are most important. These judgment calls are contestable. But I think they're worth making. (Others can always let us know if they think our diagnoses are wrong, which could help to refocus debate on the real crux of the disagreement.) People don't generally think enough about moral prioritization, so encouraging more importance-based criticism could provide helpful correctives against common carelessness and misfocus.
Moral misdirection thus strikes me as an important and illuminating concept.[2] In this post, I'll first take an initial stab at clarifying the idea, and then suggest a few examples. (Feel free to add more in the comments!)
Defining Moral Misdirection
Moral misdirection involves leading people morally astray, specifically by manipulating their attention. So explicitly asserting a sincerely believed falsehood doesn't qualify. But misdirection needn't be entirely deliberate, either. Misdirection could be subconscious (perhaps a result of motivated reasoning, or implicit biases), or even entirely inadvertent – merely negligent, say. In fact, deliberately implicating something known to be false won't necessarily count as "misdirection". Innocent examples include simplification, or pedagogical "lies-to-children". If a simplification helps one's audience to better understand what's important, there's nothing dishonest about that – even if it predictably results in some technically false beliefs.
Taking all that into account, here's my first stab at a conceptual analysis:
Moral misdirection, as it interests me here, is a speech act that functionally operates to distract one's audience from more important moral truths. It thus predictably reduces the importance-weighted accuracy of the audience's moral beliefs.
Explanation: Someone who is sincerely, wholeheartedly in error may have the objective effect of leading their audiences astray, but their assertions don't functionally operate towards that end, merely in virtue of happening to be false.[3] Their good-faith erroneous assertions may rather truly aim to improve the importance-weighted accuracy of their audience's beliefs, and simply fail. Mistakes happen.
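The notion of importance-weighted accuracy can be made concrete with a small numerical sketch. This toy model is entirely my own illustration (the weights and the `importance_weighted_accuracy` function are hypothetical, not anything from the post): each belief gets an importance weight and a flag for whether it is accurate, and the score is the accurate share of total importance.

```python
def importance_weighted_accuracy(beliefs):
    """Score a belief set by importance, not by raw count.

    beliefs: list of (importance, is_accurate) pairs, where
    importance is a non-negative weight and is_accurate is a bool.
    Returns the fraction of total importance held by accurate beliefs.
    """
    total = sum(weight for weight, _ in beliefs)
    if total == 0:
        return 0.0
    accurate = sum(weight for weight, ok in beliefs if ok)
    return accurate / total

# Don's audience: ten true but minor crime anecdotes (weight 1 each),
# plus one false belief of major importance (weight 50):
# "immigrants do more harm than good."
audience = [(1, True)] * 10 + [(50, False)]
print(importance_weighted_accuracy(audience))  # ~0.167, though 10 of 11 beliefs are true
```

On a raw count, Don's audience is over 90% accurate; importance-weighted, they score poorly, which is the sense in which telling only truths can still mislead.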
At the other extreme, sometimes people deliberately mislead (about important matters) while technically avoiding any explicit assertion of falsehoods. These bad-faith actors maintain a kind of "plausible deniability" – a sheen of superficial intellectual respectability – while deliberately poisoning the epistemic commons. I find this deeply vicious.
But very often, I believe, people are negligent communicators. They just aren't thinking sufficiently carefully or explicitly about what's important in the dispute at hand. They may have other (perhaps subconscious) goals that they implicitly prioritize: making "their side" look good, and the "other side" look bad. When they communicate in ways that promote these other goals at predictable cost to importance-weighted accuracy, they are engaging in moral misdirection – whether they realize it or not.
Significance: I think that moral misdirection, so understood, is a great force for ill in the world: one of the major barriers to intellectual and moral progress. It is a vice that even many otherwise "good" people routinely engage in. Its avoidance may be the most important component of intellectual integrity. It's disheartening to consider how rare this form of intellectual integrity seems to be, even amongst intellectuals (in part because attention to the question of what is truly important is so rare). By drawing explicit attention to it, I hope to make it more common.
Three Examples
(1) Anti-Woke Culture Warriors
In a large, politically polarized country, you'll find plenty of bad behavior spanning the political spectrum. So you can probably think of some instances of "wokeness run amok". (If you wanted to, you could probably find a new example every week.) But as with Don the xenophobe, if you draw attention to all and only misbehavior from one specific group, you can easily exaggerate the threat they pose: creating the (mis)impression that wokeness is a grave threat to civilized society and should be our top political priority (providing sufficient reason to vote Republican, say).
As always, if someone is willing to explicitly argue for this conclusion – that wokeness really is the #1 problem in American society today – then I'll give them points for intellectual honesty. (I'll just disagree on the substance.)[4] But I think most recognize that this claim isn't really defensible. And if one grants that Democrats are better on the more important issues (not all would grant this, of course), then it would constitute moral misdirection for one to engage in anti-woke culture warring without stressing the far graver threats from MAGA culture. Even if some anti-woke critics were 100% correct about every particular dispute they draw attention to, it matters how important these particulars are compared to competing issues of concern.
Similar observations apply to many political disputes. Politics is absolutely full of moral misdirection. Like all intellectual vices, we find it easier to recognize when the "other side" is guilty of it. But it's worth being aware of more generally. I think that academics have an especially strong obligation to communicate with intellectual honesty,[5] even if dishonesty may (lamentably) sometimes be justified for politicians.[6]
(2) Media Misdirection: "But Her Emails!"
An especially important form of moral misdirection comes from misplaced media attention. In an ideal world, the prominence of an issue in the news media would highly correlate with its objective moral importance. Real-world journalistic practices notoriously fall far short of this ideal.
Election coverage is obviously especially high stakes here, and it's a perennial complaint that the media does not sufficiently focus on "the real issues" of importance: what practical difference it would make to have one candidate elected rather than the other. The media's treatment of Hillary Clinton's email cybersecurity as the #1 issue in the 2016 election was a paradigmatic example of moral misdirection. (No one could seriously believe that this was what an undecided voter's decision rationally ought to turn on.)
A general lesson we can take from this is that scandals tend to absorb our attention in ways that are vastly disproportionate to their objective importance. (We would probably be better off, epistemically, were we to completely ignore them.) Politicians exploit this, whipping up putative scandals to make the other side look bad. Media coverage of scandals would be much more responsible if it foregrounded analysis of how, if at all, any given "scandal" should change our expectations about how the candidate would govern.
(3) Anti-Vax scaremongering
Here's another clear example of moral misdirection: highlighting the "risks" of vaccines, while ignoring or downplaying the far greater risks from remaining unvaccinated.
For a subtler (and hence more philosophically interesting) variation on the case: Consider how, at the peak of the pandemic, with limited vaccines available, western governments suspended access to some COVID vaccines (AstraZeneca in Europe, Johnson & Johnson in the US) due to uncertain risks of side-effects.
As I argued in my 2022 paper, "Pandemic Ethics and Status Quo Risk", the suspensions communicated a kind of moral misinformation:[7]
Public institutions ought not to engage in strategic deception of the public. The idea that vaccine risks outweigh (either empirically or normatively) the risks of being unvaccinated during the pandemic is an instance of public health misinformation that is troublingly prevalent in our society. When public health institutions implement alarmist vaccine suspensions or other forms of vaccine obstructionism on strategic grounds, this communicates and reinforces the false message that the vaccine risks warrant such a response. Rather than trying to manipulate the public by pandering to unwarranted fears, public institutions have an obligation to communicate accurate information and promote the policies that are warranted in light of that information.
The most important thing for anyone to know during the pandemic was that they would be better off vaccinated ASAP. Any message that undermined this most important truth thus constituted (inadvertent) moral misdirection. To avoid this charge, public communication around the risks and side-effects of vaccines should always have been accompanied by the reminder that the risks and side-effects of getting COVID while unvaccinated were far more severe. When public health agencies instead engaged in alarmist vaccine suspensions, this was both (i) harmful, and (ii) intellectually dishonest. It's no excuse that what they said about the risks and uncertainty was true. They predictably led their audience to believe much more important falsehoods.
It's reasonable for public health agencies to want to insulate tried-and-true vaccines from the reputational risks of experimental vaccines (due to irresponsible media alarmism). But I think they should find a better way to do this. (One option: make clear that they do not vouch for the safety of these vaccines the way that they do for others. Downgrade them to "experimental" status. But allow access, and further communicate that many individuals may find, in consultation with their doctors, that the vaccine remains a good bet for them given our current evidence – despite the uncertainty – because COVID most likely posed a greater risk.)
Misleading Appeals to Complexity
"X is more complex than you'd realize from proponents' public messaging," is a message that academics are very open to (we love complexity!). But it's also a message that can very easily slide into misdirection, as becomes obvious when you plug "vaccine safety" in place of "X".
To repeat my central claims:
Honest communication requires taking care not to mislead your audience. Honest public communication requires taking care not to mislead general audiences. True claims can still (very predictably) mislead.
In particular, over-emphasizing the "uncertainties" of overall good things can easily prove misleading to general audiences. (It's uncertain whether any given immigrant will turn out to be a criminal – or to be the next Steve Jobs – but it would clearly constitute moral misdirection to try to make the "risk" of criminality more salient, as nativist politicians too often do.) Public communicators should appreciate the risks they run – not just morally, but epistemically – and take appropriate care in how they communicate about high-stakes topics. Remember: if you mislead your audience into believing important falsehoods, that is both (i) morally bad, and (ii) dishonest. The higher the stakes, the worse it is to commit this moral-epistemic vice.
How to Criticize Good Things Responsibly
I think it's almost always possible to find a responsible way to express your beliefs. And it's usually worth doing so: even Good Things can be further improved, after all. (Or you might learn that your beliefs are false, and update accordingly.)
To responsibly criticize a (possibly) Good Thing, a good first step is to work out (i) what its proponents take to be the most important truth, and (ii) whether you agree on that point or not.
Either way, you should be honest and explicit about your verdict. If you think that proponents' "most important truth" is either unimportant or false, you should explicitly explain why. That would be the most fundamental and informative criticism you could offer to their view. (I would love for critics of my views to attempt this!)
If you agree that your target is correct about the most important truth in the context at hand, then in a public-facing article you should probably start off by acknowledging this. And end by reinforcing it. Generally try not to mislead your audience into thinking that the important truth is false. After first doing no epistemic harm, in the middle you can pursue your remaining disagreements.[8] With any luck, everyone will emerge from the discussion with overall more accurate (importance-weighted) beliefs.
Anti-Philanthropic Misdirection
I've so far argued that honest communication aims to increase the importance-weighted accuracy of your audience's beliefs. Discourse that predictably does the opposite on a morally important matter – even if the explicit assertions are technically true – constitutes moral misdirection. Emphasizing minor, outweighed costs of good things (e.g. vaccines) is a classic form that this can take. I'll now turn to another important case study: exaggerating the harms of trying to do good.
Whatâs Important
Hereâs something that strikes me as very important, true, and neglected:
Target-Sensitive Potential for Good (TSPG): We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against catastrophic risks). Doing so would be extremely worthwhile. In all these areas, it is worth making deliberate, informed efforts to try to do more good rather than less with our resources: Better targeting our efforts may make even more of a difference than the basic decision to help at all.
This belief, together with a practical commitment to acting upon it, is basically the defining characteristic of effective altruists. So, applying the above guidance on how to criticize good things responsibly, responsible critics of EA should first consider whether they agree that TSPG is true and important, and explain their verdict.
As I explain in a companion post (see #25) the stakes here are extremely high: whether or not people engage in acts of effective altruism is literally a matter of life or death for the potential beneficiaries of our moral efforts. A total lack of concern about these effects is not morally decent. Public-facing rhetoric that predictably creates the false impression that TSPG is false, or that acts of effective altruism are not worth doing, is more plainly and obviously harmful than any other speech I can realistically imagine philosophers engaging in.[9] It constitutes literally lethal moral misdirection.
Responsible Criticism
To draw attention to these stakes is not to claim that people "aren't allowed to criticize EA." As I put it above:
I think it's almost always possible to find a responsible way to express your beliefs. And it's usually worth doing so: even Good Things can be further improved, after all. (Or you might learn that your beliefs are false, and update accordingly.)
But it requires care. And the mud-slinging vitriol of EA's public critics is careless in the extreme, elevating lazy hostile rhetoric over lucid ethical analysis.
There's no reason that criticism of EA must take this vicious form. You could instead highlight up-front your agreement with TSPG (or whatever other important neglected truths you agree we do well to bring more attention to), before going on to calmly explain your disagreements.
The hostile, dismissive tone of many critics seems to communicate something more like "EAs are stupid and wrong about everything." (Even if this effect is not deliberate, it's entirely predictable that vitriolic articles will have this effect on first-world readers who have every incentive to find an excuse to dismiss EA's message. I've certainly seen many people on social media pick up – and repeat – exactly this sort of indiscriminate dismissal.) If TSPG is true, then EAs are right about the most important thing, and it's both harmful and intellectually dishonest to imply otherwise.
Of course, if you truly think that TSPG is false, then by all means explicitly argue for that. (Similarly, regarding my initial examples of moral misdirection: if public health authorities ever truly believed that some vaccines were more dangerous than COVID itself, they should say so and explain why. And if immigrants truly caused more harm than benefit to their host societies, that too would be important to learn.) It's vital to get at the truth about important questions, and that requires open debate. I'm 100% in favor of that.
But if you agree that TSPG is true and important, then you really should take care not to implicitly communicate its negation when pursuing less-important disagreements.
The critics might not realize that they're engaged in moral misdirection,[10] any more than Don the xenophobe does.[11] I expect the critics don't explicitly think about the moral costs of their anti-philanthropic advocacy: that less EA influence means more kids dying of malaria (or suffering lead exposure), less effective efforts to mitigate the evils of factory farming, and less forethought and precautionary measures regarding potential global catastrophic risks. But if you're going to publicly advocate for less altruism and/or less effective altruism in the world, you need to face up to the reality of what you're doing![12]
Wenar's Counterpart on "Deaths from Vaccines"
I previously discussed how academic audiences may be especially susceptible to moral misdirection based upon misleading appeals to complexity. "Things are more complex than they seem," is a message that appeals to us, and is often true!
But true claims can still (very predictably) mislead. So when writing for a general audience on a high-stakes issue, in a very prominent venue, public intellectuals have an obligation not to reduce the importance-weighted accuracy of their audience's beliefs.
Leif Wenar egregiously violated this obligation with his WIRED article, "The Deaths of Effective Altruism". And (judging by my social media feeds) a hefty chunk of the philosophy profession publicly cheered him on.
I can't imagine that an implicitly anti-vax screed about "Deaths from Vaccines" would have elicited the same sort of gushing praise from my fellow academics. But it's structurally very similar, as I'll now explain.
Wenar begins by suggesting, "When you meet [an effective altruist], ask them how many people they've killed." He highlights various potential harms from aid (many of which are not empirically well-supported, and don't plausibly apply to GiveWell's top charities in particular, while the few that clearly do apply seem rather negligible compared to the benefits), while explicitly disavowing full-blown aid skepticism: rather, he compares aid to a doctor who offers useful medicine that has some harmful side-effects.[13]
His anti-vax counterpart writes that he "absolutely does not mean that vaccines don't work… Yet what no one in public health should say is that all they're doing is improving health." Anti-vax Wenar goes on to describe "haranguing" a pro-vaccine visiting speaker for giving a conceptual talk explaining how many small health benefits (from vaccinating against non-lethal diseases) can add up to a benefit equivalent to "saving a life". Why does this warrant haranguing? Because vaccines are so much "more complex than 'jabs save lives'!"
Wenar laments that the speaker didn't see the value in this point – their eyes glazed over with the "pro-vax glaze". He interprets this as the speaker having a hero complex, and fearing "He's trying to stop me." As I explain on the EA Forum, Wenar's "hero complex" seems an entirely gratuitous projection. But it would seem very reasonable for the pro-vax speaker to worry that this haranguing lunatic was trying to stop or undermine net-beneficial interventions. I worry that, too!
People are very prone to status-quo bias, and averse to salient harms. If you go out of your way to make harms from action extra-salient, while ignoring (far greater) harms from inaction, this will very predictably lead to worse decisions. We saw this time and again throughout the pandemic, and now Wenar is encouraging a similarly biased approach to thinking about aid. Note that his "dearest test" does not involve vividly imagining your dearest ones suffering harm as a result of your inaction; only action.[14] Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world.[15]
Wenar scathingly criticized GiveWell – the most reliable and sophisticated charity evaluators around – for not sufficiently highlighting the rare downsides of their top charities on their front page.[16] This is insane: like complaining that vaccine syringes don't come with skull-and-crossbones stickers vividly representing each person who has previously died from complications. He is effectively complaining that GiveWell refrains from engaging in moral misdirection. It's extraordinary, and really brings out why this concept matters.
Honest public communication requires taking care not to mislead general audiences.
Wenar claims to be promoting "honesty", but the reality is the opposite. My understanding of honesty is that we aim to increase importance-weighted accuracy in our audiences. It's not honest to selectively share stories of immigrant crime, or rare vaccine complications, or that one time bandits killed two people while trying to steal money from an effective charity. It's distorting. There are ways to carefully contextualize these costs so that they can be discussed honestly without giving a misleading impression. But to demand, as Wenar does, that costs must always be highlighted to casual readers is not honest. It's outright deceptive.
Further reading
There's a lot more to say about the bad reasoning in Wenar's article (and related complaints from other anti-EAs). One thing that I especially hope to explore in a future post is how deeply confused many people (evidently including Wenar) are about the role of quantitative tools (like "expected value" calculations) in practical reasoning about how to do the most good. But that will have to wait for another day.
In the meantime, I recommend also checking out the following two responses:
- Richard Pettigrew: Leif Wenar's criticisms of effective altruism
- Bentham's Bulldog: On Leif Wenar's absurdly unconvincing critique of effective altruism
[1]
As I was finishing up this post, I saw that Neil Levy & Keith Raymond Harris offer a similar example of "truthful misinformation" on the Practical Ethics blog. They're particularly interested in communication that induces "false beliefs about a group", and don't make the general link to importance that I focus on in this post.
[2]
Huge thanks to Helen for many related discussions over the years that have no doubt shaped my thoughts – and for suggestions and feedback on an earlier draft of this post.
[3]
A tricky case: what if they misdirect as a result of sincerely but falsely believing that what they're drawing our attention to is really more important than what they're distracting us from? I'm not sure how best to extend the concept to this case. (Maybe it comes down to whether their false belief about importance is reasonable or not?) Either way, the main claim I want to make about this sort of case is that we would make more dialectical progress by foregrounding the background disagreement about importance.
[4]
I might be more sympathetic to a more limited claim, e.g. that excessive wokeness is one of the worst cultural tendencies on university campuses. (I don't have a firm view on the matter, but that at least sounds like a live possibility – I wouldn't be shocked if it turned out to be true.) But I don't think campus culture is the most important political issue in the world. And I certainly don't trust Republican politicians to be principled defenders of academic freedom!
[5]
It's obviously valuable for society to have truth-seeking institutions and apolitical "experts" who can be trusted to communicate accurate information about their areas of expertise. When academics behave like political hacks for short-term political gain, they are undermining one of the most valuable social institutions that we have. As I previously put it: "Those on the left who treat academic research as just another political arena for the powerful to enforce their opinions as orthodoxy are making DeSantis' case for him – why shouldn't a political arena be under political control? The only principled grounds to resist this, I'd think, is to insist that academic inquiry isn't just politics by another means."
[6]
I find this really sad, but I assume an intellectually honest politician would (like carbon taxes) be a dismal political failure. Matthew Yglesias has convinced me of the virtues of political pandering. But that's very much a role-specific virtue. Good politicians should pander so that they're able to get the democratic support needed to do good things, given the realities of the actually-existing electorate and the fact that their competition will otherwise win and do bad things. As a consequence, no intelligent person should believe what politicians say. But, as per the previous note, it's really important that people in many other professions (e.g. academics) be more trustworthy!
[7]
I also argued that killing innocent people (by blocking their access to life-saving vaccines) is not an acceptable means of placating the irrationally vaccine-hesitant. (I'm a bit surprised that more non-consequentialists weren't with me on this one!)
[8]
Helen pointed me to this NPR article on the "perils of intense meditation" as a possible exemplar. They highlight in their intro that "Meditation and mindfulness have many known health benefits," and conclude by noting that "the podcast isn't about the people for whom this works.... The purpose is to scrutinize harm that is being done to people and to question why isn't the organization itself doing more to prevent that harm." This seems perfectly reasonable, and the framing helps to reduce the risk of misleading their audience.
[9]
Compare all the progressive hand-wringing over wildly speculative potential for causing "harm" whenever politically-incorrect views are expressed in obscure academic journals. Many of the same people seem completely unconcerned about the far more obvious risks of spreading anti-philanthropic misinformation. The inconsistency is glaring.
[10]
My best guess at what is typically going on: I suspect many people find EAs annoying. So they naturally feel some motivation to undermine the movement, if the opportunity arises. And plenty of opportunities inevitably do arise. (When a movement involves large numbers of people, many of whom are unusually ambitious and non-conformist, some will inevitably mess up. Some will even be outright crooks.) But once again, even if some particular complaints are true, that's no excuse for predictably leading their audiences to believe much more important falsehoods.
[11]
One difference: Don's behavior is naturally understood as stemming from hateful xenophobic attitudes. I doubt that most critics of EA are so malicious. But I do think they're morally negligent (and very likely driven by motivated reasoning, given the obvious threat that EA ideas pose to either your wallet or your moral self-image). And the stakes, if anything, are even higher.
[12]
In the same way, I wish anyone invoking dismissive rhetoric about utilitarian "number-crunching" would understand that those numbers represent people's lives, and it is worth thinking about how we can help more rather than fewer people. It would be nice to have a catchy label for the failure to see through to the content of what's represented in these sorts of cases. "Representational myopia," perhaps? It's such a common intellectual-cum-moral failure.
[13]
Though he doesn't even mention GiveDirectly, a long-time EA favorite that's often treated as the most reliably-good "baseline" for comparison with other promising interventions.
[14]
As Bentham's Bulldog aptly notes:
Perhaps Wenar should have applied the "dearest test" before writing the article. He should have looked in the eyes of his loved ones, the potential extra people who might die as a result of people opposing giving aid to effective charities, and said "I believe in my decisions, enough that I'd still make them even if one of the people who could be hurt was you."
[15]
As Scott Alexander puts it:
I want to make it clear that I think people like this Wired writer are destroying the world. Wind farms could stop global warming - BUT WHAT IF A BIRD FLIES INTO THE WINDMILL, DID YOU EVER THINK OF THAT? Thousands of people are homeless and high housing costs have impoverished a generation - BUT WHAT IF BUILDING A HOUSE RUINS SOMEONE'S VIEW? Medical studies create new cures for deadly illnesses - BUT WHAT IF SOMEONE CONSENTS TO A STUDY AND LATER REGRETS IT? Our infrastructure is crumbling, BUT MAYBE WE SHOULD REQUIRE $50 MILLION WORTH OF ENVIRONMENTAL REVIEW FOR A BIKE LANE, IN CASE IT HURTS SOMEONE SOMEHOW.
"Malaria nets save hundreds of thousands of lives, BUT WHAT IF SOMEONE USES THEM TO CATCH FISH AND THE FISH DIE?" is a member in good standing of this class. I think the people who do this are the worst kind of person, the people who have ruined the promise of progress and health and security for everybody, and instead of feting them in every newspaper and magazine, we should make it clear that we hate them and hold every single life unsaved, every single renewable power plant unbuilt, every single person relegated to generational poverty, against their karmic balance.
They never care when a normal bad thing is going on. If they cared about fish, they might, for example, support one of the many EA charities aimed at helping fish survive the many bad things that are happening to fish all over the world. They will never do this. What they care about is that someone is trying to accomplish something, and fish can be used as an excuse to criticize them. Nothing matters in itself, everything only matters as a way to extract tribute from people who are trying to do stuff. "Nice cause you have there . . . shame if someone accused it of doing harm."
[16]
Note that GiveWell is very transparent in their full reports: that's where Wenar got many of his examples from. But to list "deaths caused" on the front page would mislead casual readers into thinking that these deaths were directly caused by the interventions. Wenar instead references very indirectly caused deaths – like when bandits killed two people while trying to steal money from an effective charity, or when a charity employs a worker who was previously doing other good work. Even deontologists should not believe in constraints against unintended indirect harm of this sort – that would immediately entail total paralysis. Morally speaking, every sane view should agree that these harms merely count by reducing the net benefit. They aren't something to be highlighted in their own right.
SummaryBot @ 2024-06-17T18:00 (+1)
Executive summary: Moral misdirection, which involves leading people morally astray by manipulating their attention, is a major barrier to intellectual and moral progress that even many otherwise good people routinely engage in, and its avoidance may be the most important component of intellectual integrity.
Key points:
- Moral misdirection involves leading people morally astray by manipulating their attention, predictably reducing the importance-weighted accuracy of their moral beliefs.
- Examples of moral misdirection include anti-woke culture warring, media misdirection (e.g. focusing on Clinton's emails), and anti-vax scaremongering.
- Responsible criticism of a good thing should acknowledge the most important truth and reinforce it, while pursuing remaining disagreements.
- Criticism of effective altruism that fails to acknowledge the potential to do a lot of good in the face of severe global problems constitutes harmful, lethal moral misdirection.
- Honest public communication requires taking care not to mislead general audiences, even if individual claims are technically true.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.