When the Impact Model Says “Lie” but Morality Says Wait
By Dr Kassim @ 2025-08-19T02:57 (+5)
What would you do?
Imagine you’ve built a rigorous spreadsheet model to save lives during a pandemic. It weighs interventions by lives saved per dollar, ITN (importance, tractability, neglectedness), and catastrophic tail risks. One day, it spits out a disquieting result: the optimal strategy is to lie. For instance, the model suggests publicly announcing a “confirmed outbreak” before tests come back, to spur an early lockdown. Or it recommends presenting a worst-case fatality projection as if it were likely, knowing it will scare policymakers into action. In short, the spreadsheet says a small misrepresentation would maximize expected value. Lives are on the line, and truthfulness seems to be the casualty of doing good.
Such scenarios aren’t far-fetched. An AI safety advocate might privately estimate AGI is probably decades away, yet consider emphasizing a 5-year timeline in public to prompt urgent regulation. A global health campaigner might think a disease will likely kill 5,000 people, but highlight a 50,000-death worst-case projection to unlock funding. These ethical edge cases force us to ask: when the impact model says “lie,” do we ever oblige? Or are there moral red lines, even in the pursuit of the greater good?
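To make the dilemma concrete, here is a minimal sketch of how such a model could end up recommending misrepresentation. All numbers, strategy names, and probabilities are invented for illustration; the point is only that a naive expected-value comparison, with honesty given no intrinsic weight, can rank “exaggerate” above “tell the truth” – and that even adding a crude trust penalty changes the picture.

```python
# Toy expected-value model (all figures hypothetical, for illustration only).
# It compares two messaging strategies for an outbreak warning and shows how
# a model that gives honesty zero intrinsic weight can recommend exaggeration.

strategies = {
    # (probability the message triggers an early lockdown, lives saved if it does)
    "honest_nuanced_warning": {"p_lockdown": 0.30, "lives_if_lockdown": 10_000},
    "exaggerated_warning":    {"p_lockdown": 0.70, "lives_if_lockdown": 10_000},
}

# Hypothetical long-run penalty: an exaggeration that is later exposed erodes
# trust in future warnings. A naive model simply omits this term.
P_EXPOSED = 0.5              # chance the exaggeration is eventually discovered
TRUST_COST_IN_LIVES = 6_000  # expected lives lost to discounted future warnings

def naive_ev(s):
    """Expected lives saved, ignoring any cost of dishonesty."""
    return s["p_lockdown"] * s["lives_if_lockdown"]

def ev_with_trust(name, s):
    """Expected lives saved, charging exposed lies a reputational penalty."""
    ev = naive_ev(s)
    if name == "exaggerated_warning":
        ev -= P_EXPOSED * TRUST_COST_IN_LIVES
    return ev

for name, s in strategies.items():
    print(f"{name}: naive EV = {naive_ev(s):,.0f}, "
          f"EV with trust cost = {ev_with_trust(name, s):,.0f} lives")

# Naive EV ranks exaggeration first (7,000 vs 3,000); once the trust penalty
# is included the gap narrows to 4,000 vs 3,000 -- and with a larger penalty
# the honest strategy wins outright.
```

With these made-up numbers the spreadsheet still says “lie,” which is exactly the dilemma: whether any such penalty term can capture what is really lost.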
Real-World Parallels: Truth-Bending for the Greater Good
This tension between integrity and utilitarian urgency isn’t just theoretical. In recent crises and campaigns, well-intentioned leaders have sometimes bent the truth in hopes of achieving better outcomes.
Public Health Messaging: Early in the COVID-19 pandemic, health authorities famously flip-flopped on masks. In February 2020, the U.S. Surgeon General and CDC advised the public not to wear masks, partly out of concern that panic buying would create shortages for hospitals (bostonreview.net). This “noble lie” (or error) aimed to preserve resources, but it backfired: when guidance reversed weeks later, public trust suffered. The episode illustrates a pragmatic misrepresentation (“masks don’t help”) made for the greater good, yet it became an “embarrassing example” of lost credibility.
Tanzania’s COVID Testing Scandal: In 2020, Tanzania’s President John Magufuli sought to downplay COVID-19. He went so far as to claim he’d secretly tested a papaya and a goat for the virus, and that both absurdly tested positive (theguardian.com). He used this stunt to cast doubt on legitimate testing and avoid lockdowns. Here the lie (or at least a drastic misrepresentation of test results) was deployed to prevent public panic and economic harm, at the cost of an informed pandemic response. Magufuli’s story shows how easily data can be twisted “for the greater good,” with dangerous consequences when the truth emerges.
Climate Change “Crisis” Framing: In climate advocacy, communicators often emphasize alarming scenarios to spur action. For example, some campaigns have used urgent slogans like “We have 12 years to save the planet,” conveying a point of no return by 2030. While rooted in real concern, such rhetoric can overshoot. Interviews with climate-skeptical Americans found that overblown crisis language (“the world will end in 5 years because of climate change”) actually bred more skepticism (pewresearch.org). When people sense exaggeration, they start doubting “why it’s being presented in such grandiose terms.” This highlights a risk: exaggerating the truth to inspire action might undermine trust and backfire.
Global Health Campaigns: Aid organizations sometimes must choose between nuanced truth and dramatic storytelling to unlock funding. Campaigners may highlight the most heart-wrenching case studies or use simplified statistics to convey urgency. For instance, an NGO might cite an outdated, higher child mortality rate because it motivates donors, even if current data shows improvement. The intention is noble – rally resources to save lives – but it lives in a grey area of honesty. Over time, if supporters discover the data was cherry-picked, the credibility of the cause can erode.
AI Risk Communication: Within the AI safety community, there’s debate over how frank or dramatic to be about timelines and scenarios. Emphasizing short AI timelines (e.g., saying advanced AI is likely in the next decade) can light a fire under policymakers. But if insiders actually view such timelines as only, say, 10% likely, is it honest to lead with them? Some worry that focusing on the scariest, nearest-term outcomes could cross into strategic exaggeration. This might achieve urgency at first, yet it could also set the field up for future accusations of crying wolf if those predictions don’t bear out.
Across these domains, we see a common pattern: well-meaning leaders wrestling with the temptation to forsake strict truth for strategic impact. Sometimes it’s done furtively (as in Tanzania); other times it’s an open secret (as with the early mask guidance). Always, it raises uncomfortable questions about ends and means.
Philosophical Frameworks: Utilitarian Urgency vs. Moral Integrity
How do different moral philosophies approach the idea of lying for the greater good? Effective altruists often aim to be impact maximizers, which might incline them toward a utilitarian calculus. But philosophical perspectives on truth-telling diverge significantly.
Act Utilitarianism (Consequentialism): From a pure utilitarian standpoint, actions are justified by outcomes. If telling an untruth would save more lives or reduce more suffering, a utilitarian model might endorse it. This is the voice of the spreadsheet saying “lie.” In our scenario, an act utilitarian could argue that a false alarm about an outbreak is not only permissible but obligatory if it leads to policies that ultimately save thousands of lives. Classical utilitarian philosophers like John Stuart Mill generally emphasize truth as a social good, but a committed act utilitarian could view honesty as instrumental – valuable only insofar as it produces good consequences. If in a specific case lying produces the best consequences, the strict act-utilitarian answer might be “yes, lie.” This reasoning underlies tropes like “for the greater good,” from Plato’s Noble Lie to modern public health messaging.
Deontology (Duty Ethics): A deontologist takes the opposite stance: certain actions are intrinsically right or wrong, regardless of outcomes. Immanuel Kant, for example, famously held that one must not lie, even to a would-be murderer at your door. From a deontological view, truth-telling is a moral duty – perhaps a side constraint that even saving lives can’t override. Lying “pollutes” the moral law and treats others as mere means to an end. Many religious ethics (like some Christian perspectives) align with this: the Ten Commandments forbid “bearing false witness,” implying honesty is non-negotiable. A Kantian EA would argue that if our movement is built on evidence and reason, we undermine our moral fabric by lying, no matter how noble the goal. They might point out that a world where everyone lies for good ends would collapse the very trust and communication that utilitarian calculus relies on. In practice, few EAs are absolutist Kantians, but deontological leanings manifest as a strong presumption against dishonesty – a sense that integrity itself is a crucial good that we shouldn’t sacrifice.
Virtue Ethics (Character and “Deep Honesty”): Virtue ethicists ask: what kind of person (or community) are we becoming if we lie? Honesty is seen as a virtue that builds character and trust. Lying, even for a good cause, could erode one’s moral character and the epistemic integrity of one’s community. On the Effective Altruism Forum, Aletheophile’s concept of “deep honesty” captures this spirit. Deep honesty means telling others what you truly believe – not just avoiding literal lies, but also not deceiving by omission or spin (forum.effectivealtruism.org). It contrasts with “shallow honesty,” where one technically tells the truth but selectively frames information to manipulate reactions. The virtue ethicist would applaud deep honesty as fostering deep trust: “Explain your real concerns, and trust others to respond wisely.” From this angle, lying is not only a rule-break; it’s a sign of vice (dishonesty, cowardice, hubris) rather than virtue. A movement that starts justifying lies may cultivate clever strategists, but not virtuous leaders. Moreover, proponents of virtue ethics argue that habitual honesty tends to lead to better outcomes anyway – relationships of trust, resilient cooperation, and unclouded thinking – whereas a culture of expedient fibs breeds cynicism and error. In the long run, who do we become if we let the ends justify misleading means? Virtue ethics urges caution, emphasizing integrity as part of “doing good better.”
Moral Uncertainty and Meta-Ethics: Real-world EAs often acknowledge that we’re uncertain about moral truth. We might lean utilitarian, yet not be 100% sure that deontology or virtue ethics are wrong. Philosophers William MacAskill and Toby Ord (among others) have advanced frameworks for moral uncertainty, suggesting we weigh actions by their “expected choiceworthiness” across multiple plausible moral theories (williammacaskill.com; rationallyspeakingpodcast.org). Under moral uncertainty, even a utilitarian-leaning person might hesitate to lie, because there’s a non-negligible chance that an absolutist stance against lying is the correct morality. If there’s, say, a 20% credence that “lying is always wrong” (a deontological rule), and an 80% credence in utilitarianism, a moral uncertainty approach might recommend not lying – because the disvalue in that 20% world (violating an inviolable duty) could outweigh the utilitarian gains (academic.oup.com). In effect, one performs a kind of moral expected-value calculation: even if lying yields higher utilitarian EV, the moral risk (that lying is categorically impermissible, or much worse in another moral framework) might make avoiding the act the more choiceworthy option overall. (A toy version of this calculation follows below.) On the other hand, if the consequences of truthfulness are dire enough, the scales might tip the other way. Under moral uncertainty, one also considers reputation: once a lie is told, one’s future credibility and truth-telling capacity are damaged. As Strawberry Calm notes, “The most important fact about a lie is not the lie itself, but that you lied” (forum.effectivealtruism.org). This inherently reduces future impact by eroding knowledge and trust. In summary, moral uncertainty frameworks (MacAskill et al.) encourage hedging against the possibility that integrity is a paramount moral value.
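The 80/20 example above can be made explicit. The sketch below is a toy expected-choiceworthiness calculation with invented utility numbers; the structure (weighting each theory’s verdict by your credence in it) follows the MacAskill/Ord approach described above, but the specific values – and especially how a deontological prohibition is put on a numeric scale at all – are assumptions for illustration.

```python
# Toy expected-choiceworthiness calculation under moral uncertainty.
# Credences and choiceworthiness scores are invented for illustration;
# in particular, how to put a deontological prohibition on the same
# scale as utilitarian value is itself contested (here we just assign
# a large negative number to rule violations).

credences = {"utilitarianism": 0.8, "deontology": 0.2}

# Choiceworthiness of each action under each theory.
choiceworthiness = {
    "lie": {
        "utilitarianism": +1_000,   # the lie saves many lives
        "deontology":     -10_000,  # violating an inviolable duty
    },
    "tell_truth": {
        "utilitarianism": 0,        # baseline: fewer lives saved
        "deontology":     0,        # no duty violated
    },
}

def expected_choiceworthiness(action):
    return sum(credences[t] * choiceworthiness[action][t] for t in credences)

for action in choiceworthiness:
    print(action, "->", expected_choiceworthiness(action))

# lie        -> 0.8 * 1000 + 0.2 * (-10000) = -1200
# tell_truth -> 0
# Despite an 80% credence in utilitarianism, the hedged calculation
# favors truth-telling, because the possible duty violation dominates.
```

Notice how sensitive the result is to the penalty assigned to duty violation – which is precisely why treating deontological constraints as just another number in the spreadsheet is philosophically contentious.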
Moral Realism vs. Anti-Realism vs. Pragmatism: Digging one layer deeper, our stance on what morality is will influence how we handle “the model says lie.” Moral realists believe there are objective moral facts – perhaps lying is truly wrong in itself, or perhaps only outcomes matter, but either way there’s a fact of the matter. If one is a confident realist that “lying is never okay,” then no model output can override that, because it would violate an objective truth. Interestingly, Lukas Gloor argues that strong moral realism sits uneasily with the very idea of trading off honesty for utility. In his post “Moral Uncertainty and Moral Realism Are in Tension,” Gloor suggests that if you’re sure objective moral facts exist, it’s hard to justify the probabilistic weighing of moral options that would lead you to sometimes lie (forum.effectivealtruism.org). Either you’re uncertain about the meta-ethics (maybe realism is false) or you end up effectively acting like an anti-realist who treats morality pragmatically. Moral anti-realists, by contrast, see moral claims as rooted in attitudes or conventions rather than independent truths. An EA anti-realist might say: “Look, ‘morality’ is ultimately about what we value – and we value helping others. If a lie helps others, then it’s moral by our lights.” This view can more straightforwardly endorse impact-driven deceit, since there’s no external truth to answer to, only the consequences and our chosen principles. Finally, philosophical pragmatism offers a striking lens: pragmatists (à la William James or Richard Rorty) literally define “truth” as what works. Beliefs are tools; true beliefs are those that prove useful in achieving our aims (forum.effectivealtruism.org). A question posed on the EA Forum, “Are many EAs philosophical pragmatists?”, noted that pragmatism rejects any capital-T Truth and instead “reorients truth toward usefulness,” aligning epistemology with practical success. This could describe some corners of EA: an implicit mindset of “if it works to achieve good, it’s ‘true enough.’” Such a stance is empowering but also perilous. It might make an EA more willing to fudge facts or narratives for impact, since truth isn’t sacred beyond its utility. However, even a pragmatist must grapple with long-term usefulness: a lie that yields short-term gains might undermine the broader goal when uncovered (a point pragmatism would recognize as well). Thus, while pragmatism blurs the line between truth and expediency, it doesn’t give a free pass to lying – it merely demands we ask, “useful to whom, and for how long?”
These frameworks offer different answers, but none gives a simple green light to “strategic lying” without reservations. The act utilitarian might be most sympathetic to lying for good ends, yet even they must consider game-theoretic repercussions (everyone lying erodes the system). Deontologists and virtue ethicists issue strong warnings that some goods – like integrity and trustworthiness – are fundamental. And those of us with moral uncertainty or a pragmatic bent find ourselves trying to balance multiple values: truth, consequences, reputation, character. The crux is that EA’s core aspiration to do the most good sits in tension with the heuristic “honesty is the best policy.” We need to examine that tension closely, informed by both philosophy and real-world evidence.
Religious Perspectives: Ancient Wisdom on Lying
Major religious traditions have wrestled with the ethics of lying for millennia, often landing on “truth as a virtue, but…” with nuanced caveats for extreme cases. These perspectives add depth to our modern EA debate, reminding us that questions of ends and means are hardly new.
Christianity – Absolutism vs. “Rahab’s Dilemma”: Christianity generally extols truthfulness – “God is truth” and Satan is the “father of lies” in biblical texts. St. Augustine went so far as to claim every lie is a sin, no exceptions. But biblical narratives themselves complicate this. A famous example is Rahab in the Book of Joshua: Rahab hid Israelite spies in Jericho and lied to the city guards about their whereabouts, thereby saving their lives. Rather than condemn her, the Bible later praises Rahab for her faith and even folds her into Jesus’s lineage. This raises the question: did Rahab do right by lying to protect innocent lives? Some Christian theologians have argued Rahab committed a “lesser sin” to avoid a greater sin (murder) – essentially choosing the lesser of two evils (proclaimanddefend.org). In this view, one might lie to prevent a grave harm, but it remains regrettable – something to repent of afterwards. Other Christian ethicists suggest a more forgiving interpretation: when you lie to thwart evil (e.g., to save a life), it may not count as a sin at all, because the people pursuing injustice “forfeit their right to the truth” (ebcnipawin.ca). They point to Rahab, or the Hebrew midwives in Exodus who lied to Pharaoh about killing babies, as cases where deception served a higher purpose and God seemingly approved. Jesus’s teachings also hint at prioritizing the spirit over the letter of the law (he healed on the Sabbath, saying “mercy” trumps rigid rules). A principled Christian might conclude: ordinarily, lying is wrong, but in extreme situations (“lifeboat scenarios”), mercy and protection of the innocent are “weightier matters” of the law. Still, Christians warn against self-serving rationalizations. The “Rahab exception” is a narrow one: lying is only condoned to prevent immediate wicked harm, not to advance one’s own agenda or avoid lesser inconveniences. The take-home for EA? Even a tradition with strong truth norms allows rare exceptions when it’s literally a matter of life and death (think hiding Jews from Nazis), but it frames them as tragic choices, not utilitarian free-for-alls. The moral gravity of lying is never forgotten.
Islam – The Principle of Darura (Necessity): Islamic ethics similarly holds truth-telling as a high virtue – the Prophet Muhammad was nicknamed “al-Amin” (the trustworthy) – and the Qur’an condemns lying. However, Islamic jurisprudence has a well-defined concept of “ḍarūra” (necessity), encapsulated by the maxim “Necessity makes the prohibited permissible.” In dire circumstances, acts normally haram (forbidden) can become halal (allowed) if required to save a life or avert serious harm (pmc.ncbi.nlm.nih.gov). Classical scholars gave examples like: it’s forbidden to eat pork or carrion, but if you’re starving in the desert, you must eat it to survive. By analogy, lying is likened to eating carrion – only to be done in extreme necessity and only as much as needed (utrujj.org). The Prophet reportedly allowed three exceptions where lying does not carry sin: during war (to deceive an enemy), to reconcile people in conflict, and between spouses to preserve harmony. All three are cases where a greater good (peace, life, marital love) is at stake and straightforward truth could cause harm or discord. Even then, Muslim scholars emphasize restraint: “Do not make the exception the norm” (utrujj.org). A hadith states, “Lying is not the trait of a believer” – honesty is part of faith – and habitual liars are strongly condemned (utrujj.org). But when faced with an oppressor or a life-and-death situation, withholding or even altering the truth to “minimize damage” is allowed and sometimes obligatory. For instance, if concealing someone’s whereabouts from an unjust killer will save them, one should lie. The overarching Islamic principle is pragmatic but cautious: only lie if there is no better option to prevent grave harm, and even then, keep it to what necessity dictates (utrujj.org). One scholar, al-Ghazali, advised weighing the harm of truth against the harm of the lie – if telling the truth would cause unjust loss or break apart a family, then a carefully tailored untruth may be preferable, but “do not extend beyond the need.” In summary, Islamic ethics would counsel an EA: remain truthful by default, but in an emergency where lying is the only way to prevent a disaster, it can be Islamically justified. Such a lie should be seen as a last resort – a reluctant exception under the banner of darura (necessity).
These religious insights mirror what we see in secular ethics and EA discussions: a strong prima facie duty of truthfulness, tempered by allowances for extreme circumstances. They add a note of humility – even when lying is permitted, it’s often with a heavy heart and a call not to let the exception swallow the rule (utrujj.org). For EAs, who prize rational truth-seeking, these traditions challenge us to clarify what counts as true necessity versus impatience or overconfidence in our models. They also underscore an idea often echoed in EA: integrity is hard-won and easily lost, so violate it only if you must, and know the gravity of that choice.
Consequences for the EA Movement: Trust, Coordination, Reputation
Zooming out to the Effective Altruism community as a whole, what are the stakes of an “impact model says lie” approach? Even if an individual lie seems beneficial, at the community level it can carry steep long-term costs. Here are key consequences EAs must weigh.
Loss of Epistemic Integrity: Effective altruism prides itself on being evidence-based and truth-seeking. If EAs start tolerating strategic dishonesty, we risk corroding the very foundation of our movement’s credibility. As the pseudonymous author Strawberry Calm bluntly put it, “Starting now, don’t say things that you know to be false.” Honesty is arguably EA’s most important virtue, and also the simplest to follow (forum.effectivealtruism.org). When that norm is breached, our internal epistemics suffer. A lie introduced into our discourse is like a small poison – it can spread misinformation, skew decisions, and create false confidence in models or interventions. Moreover, someone who lies “for the cause” may begin to filter what information they share even within the community, undermining our ability to collectively reason. Others will waste time acting on misleading data or assumptions. In EA’s collaborative ecosystem of researchers, donors, and project leads, trustworthy communication is crucial. If we can’t take each other’s words at face value, every claim becomes a puzzle of potential hidden motives. Strawberry Calm’s post warns that even “shallow honesty” (literal truth-telling used to mislead) means colleagues can “only shallowly trust” you (forum.effectivealtruism.org). The result is wasted energy in second-guessing and a breakdown of the “common knowledge” needed to coordinate effectively. In short, lies – however well-intended – disempower your allies by depriving them of accurate beliefs. And a community that loses its grip on reality cannot do effective altruism.
Coordination Breakdowns: EA projects often resemble a giant multi-agent cooperation game. We rely on coordination and mutual trust to allocate resources, share information, and work toward ambitious goals (whether it’s pandemic preparedness, AI safety, or global health). Trust is the grease in this machinery. A single public lie by an EA leader could create fissures: donors might question whether impact evaluations are being hyped; collaborators might second-guess the honesty of grant applicants or orgs. If some EAs lie and others don’t, it introduces a selection problem: outsiders and insiders won’t know whom to trust. As “Yes, Lying Has Bad Consequences” emphasizes, “If you are known to lie in situation X, then no one will trust your testimony even in situations ¬X” (forum.effectivealtruism.org). One liar taints the well; people cannot easily tell apart “honest EAs” from “ends justify the means EAs.” This uncertainty can make even honest communication less effective. For example, if an AI safety advocate is caught exaggerating timelines, policymakers might discount all AI risk warnings as alarmism – hurting everyone’s efforts. Within the community, lack of trust can kill joint projects and knowledge sharing. Effective altruism is in many ways an iterated game: we need to reliably tell each other the truth to coordinate on complex problems over decades. Lying for short-term gains is a classic defection in the iterated game of trust – it might win one round, but at the risk of losing many future rounds to retaliation or a collapse of cooperation, as the toy simulation below illustrates.
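Here is a minimal iterated-trust simulation of that “win one round, lose many” dynamic. The payoff numbers, the detection probability, and the simple “stop trusting after a detected lie” rule are all assumptions chosen for illustration, not a formal game-theoretic model.

```python
# Toy iterated trust game (all payoffs hypothetical). A communicator
# interacts with the same audience over many rounds. Lying pays more
# in a single round, but once a lie is detected the audience stops
# trusting, and all subsequent messages are discounted.

import random

random.seed(0)

ROUNDS = 50
HONEST_PAYOFF = 1.0      # impact per round when trusted and honest
LIE_PAYOFF = 3.0         # one-round gain from a persuasive lie
DISTRUSTED_PAYOFF = 0.2  # impact per round once credibility is lost
P_DETECT = 0.3           # per-round chance a lie is eventually exposed

def run(strategy):
    """strategy(round) -> 'lie' or 'truth'. Returns total impact."""
    total, trusted = 0.0, True
    for t in range(ROUNDS):
        if not trusted:
            total += DISTRUSTED_PAYOFF
            continue
        if strategy(t) == "lie":
            total += LIE_PAYOFF
            if random.random() < P_DETECT:
                trusted = False  # caught: credibility gone for good
        else:
            total += HONEST_PAYOFF
    return total

always_truth = lambda t: "truth"
lie_once_early = lambda t: "lie" if t == 0 else "truth"
always_lie = lambda t: "lie"

for name, s in [("always_truth", always_truth),
                ("lie_once_early", lie_once_early),
                ("always_lie", always_lie)]:
    avg = sum(run(s) for _ in range(10_000)) / 10_000
    print(f"{name}: average total impact ~ {avg:.1f}")

# With these (made-up) numbers, honesty wins over 50 rounds: the
# one-round bonus from lying rarely compensates for the long tail of
# distrusted, low-impact rounds once the lie is exposed.
```

The qualitative lesson survives most reasonable parameter choices: the longer the horizon and the stickier the loss of trust, the worse any one-shot lie looks.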
Reputation and Legitimacy Risks: EA is not just a private club; it’s a social movement under the eyes of the world. If we gain a reputation for playing fast and loose with the truth, the fallout could be severe. As Sarah Constantin cautioned in her 2017 essay “EA Has a Lying Problem,” even hints of an “ends justify the means” ethos could corrupt and delegitimize Effective Altruism (srconstantin.github.io). EA’s credibility with academics, policymakers, and the public is one of its greatest assets – it allows our ideas to be taken seriously. That credibility could evaporate if prominent cases of dishonesty came to light. Think of scandals in other movements or organizations: once trust is shattered, it’s hard to regain. On the EA Forum, many have noted that if EA-associated people are caught in lies (even putatively well-meaning ones), it stains everyone thought to be like them (forum.effectivealtruism.org). For instance, if an EA-aligned global health charity were found misreporting results to appear more cost-effective, not only would that charity be discredited, but funders might become skeptical of all EA charity evaluators. The movement’s brand could shift from “rigorous do-gooders” to “agenda-driven ideologues.” Additionally, EAs often need to advocate for counterintuitive or unpopular causes (“strange but true” ideas like AI risk, or anti-malaria bednets as a top charity). Our unimpeachable honesty has been key to persuading others – people may disagree with us, but they often respect that we earnestly follow the evidence. If we lose that high ground, every EA pitch meets extra resistance. The pragmatic view of truth (truth as usefulness) also backfires here: once others suspect we view truth as malleable, they won’t accept our claims or statistics at face value, even when we are truthful. In summary, trading integrity for urgency may yield a Pyrrhic victory – a short-term win at the cost of long-term trust and moral authority, both for individual EAs and for the movement at large (srconstantin.github.io).
Community Culture and Moral Drift: Beyond external reputation, there’s an internal moral cost. If EAs start making a habit of ethical “exception handling” (just this once, for a good cause), it can gradually shift norms and culture. We might slowly become more tolerant of “white lies,” then gray lies, then outright deception whenever we feel the stakes justify it. Each justified lie becomes a precedent. Newcomers learn from what influential EAs do. A culture that begins to condone lying for good outcomes might attract or promote people who are comfortable with manipulation – and alienate those who value integrity strongly. Over time, the community’s character could shift, perhaps imperceptibly. This is what philosophers call moral drift, or a slippery slope: today’s one-off crisis exception can become tomorrow’s default playbook under pressure. The virtue ethicist perspective would warn: we are at risk of no longer being the kind of community we want to be – one that thinks carefully and acts honorably. Instead, we’d be a group of clever strategists who might justify anything as long as it computes in a spreadsheet. That is a far cry from the moral example many of us want EA to set. As one Forum post quipped, if we embrace “lying for the greater good,” we edge toward the territory of storybook villains – recall that even in fiction, the tyrant Grindelwald’s slogan was “For the Greater Good” (srconstantin.github.io). We must ask: if we allow small lies now, what bigger compromises might we slide into under future pressure? Maintaining a hard line on honesty might sometimes feel limiting, but it is also protective – it sets a clear standard that can save us from ourselves when our enthusiastic, impact-driven minds might rationalize questionable means.
In sum, the strategic costs of lying are huge for EA. Even if a lie “works” as intended, it diminishes our future capacity to do good by undermining trust, both internally and externally. As the saying goes, “Trust comes on foot and leaves on horseback.” One public breach of integrity can gallop away with years of painstakingly earned social capital. Effective Altruism’s influence depends on being seen as reliable, rational, and ethically consistent. Losing that for a quick win would be trading the house for the garage.
EA Community Perspectives: Debating Truth & Consequences
Unsurprisingly, EAs themselves have engaged in spirited debate over honesty and strategic deception. Several thoughtful EA Forum posts examine whether lying or extreme spin is ever justified, often concluding that the downsides are greater than they first appear. Let’s compare how five contributions from the community reason about truth, moral trade-offs, and “noble lies.”
“Yes, Lying Has Bad Consequences” by Strawberry Calm (2022): This concise post comes down hard against any Machiavellian rationalizations. It argues that lying always carries harmful consequences – to friends, to enemies, to yourself (forum.effectivealtruism.org). Strawberry Calm emphasizes how lying inherently harms the person lied to, by undermining their ability to make informed decisions (it “disempowers” them, akin to stealing their car keys). Even if you lie to enemies, the post says, you’ll likely be found out, and then “the most important fact” becomes that you lied. Once someone (or the public) catches you in a lie, your credibility is shattered. One particularly striking point: if you gain a reputation for dishonesty, even your truthful statements lose power – “you lose the ability to assert anything whatsoever. Your utterances become complete gibberish.” The post goes on to note that you can’t easily limit the fallout: you might intend to lie only to outsiders, but “if you are known to lie in situation X, then… even in situation ¬X” people won’t trust you. Friends won’t know they’re truly friends; your statement “I only lie to others, not you” itself can’t be trusted. In conclusion, Strawberry Calm takes a near-absolutist but pragmatic stance: honesty is the “simplest virtue” to uphold and essential for any kind of genuine relationship or cooperation. The clear implication is that EA – which relies on cooperation and knowledge – simply cannot afford lies. The piece is a full-throated defense of epistemic integrity, echoing longtermist thinking: a lie might seem beneficial now, but it sows seeds of distrust that hamper all future endeavors. This aligns with our earlier points about trust and coordination. Strawberry Calm is basically saying: yes, even if the spreadsheet says lie, don’t do it – you’ll pay a bigger price down the road.
“Deep Honesty” by Aletheophile (2024): Rather than focusing on blatant lies, this post critiques a subtler issue: the practice of being technically truthful but selectively misleading, which many see as a lesser evil. Aletheophile draws a line between “shallow honesty” (not lying outright, but spinning the truth) and “deep honesty” (being candid about your real beliefs and motivations) (forum.effectivealtruism.org). The post argues that shallow honesty still erodes trust, because people sense when you’re omitting or sugarcoating. They may not catch you in a lie, but they’ll start reading between the lines and doubting your transparency. By contrast, deep honesty – though riskier in the moment – can yield robust, resilient trust. The author gives examples familiar to EAs: writing a funding application that omits your doubts, or using only the scariest framing for AI risk to persuade, as instances of shallow honesty. Deep honesty would mean openly sharing the weaknesses of your project, or explaining your actual level of concern about AI rather than whatever polls best. This post doesn’t say one should never be strategic – it acknowledges deep honesty can backfire and isn’t always appropriate. But it makes a compelling case that in many situations EAs face, opting for transparency and candor strengthens relationships and leads to unforeseen positive outcomes. For example, by being honest with a funder about uncertainties, you might gain their respect and a more supportive partnership, rather than a one-off grant on false pretenses. “Deep Honesty” resonates with virtue ethics: it’s about being the kind of person (or org) who doesn’t manage others’ perceptions with half-truths, but “trusts them to come to their own responses.” In EA terms, Aletheophile’s view suggests that our community’s epistemic norms should favor forthrightness, even at the cost of some short-term persuasiveness, because long-term cooperation and learning flourish in an environment of deep trust. This perspective would advise: if your model says “exaggerate,” consider instead sharing the full nuance – it might not be as attention-grabbing, but it keeps our collective epistemology healthy and invites others to genuinely understand rather than being prodded by fear.
“EA Has a Lying Problem” by Sarah Constantin (2017): One of the earlier and more provocative pieces, Constantin’s essay sounded an alarm about cultural tendencies in EA. She observed statements and behaviors suggesting some EAs were comfortable with “lying for the greater good” – an attitude she finds extremely dangerous (srconstantin.github.io). Constantin, herself an EA-aligned thinker, argued that this mindset “taken to an extreme… looks indistinguishable from someone who just wants power.” The post describes how even well-intentioned utilitarian logic can slide into a Grindelwald-esque rationale where anything (lying, hurting people) is justified by a future utopia. She doesn’t claim most EAs are villains, but notes troubling signs: for instance, movement leaders discouraging public criticism of EA orgs to protect the brand. She quotes instances (like an 80,000 Hours CEO’s comment) that imply “optics over honesty,” i.e., suppressing open discussion because it might hurt fundraising. Constantin’s verdict is clear: if EA starts lying, or even just failing to be transparent out of expedience, it betrays its philosophical core and will rot the movement from within. She points out the irony that if we justify false claims because “it will lead to more good,” that logic “would work just as well if EA did no good at all and only claimed to do good.” In other words, without honesty checks, a movement could entirely lose track of reality and just feed its own narrative – a nightmare scenario for a group committed to actually doing good. Constantin calls for holding EA to a higher standard of truthfulness and objectivity, precisely because of our power and ambitions. Her stance aligns with a deontological or rule-utilitarian viewpoint: certain norms (like honesty and openness) are non-negotiable if we want to stay on the right side of the hero/villain line. In EA Forum discussions since, many have cited Constantin’s piece as a cautionary tale, especially after some community controversies. The underlying message: reputation aside, we should be terrified of the internal failure mode where we talk ourselves into “the ends justify anything.” It’s a slippery slope to betraying the very values that motivated us initially. So Constantin would answer our question bluntly: if the model says lie, your model is broken – check your premises, but do not lie.
“Are Many EAs Philosophical Pragmatists?” (2021, question by rorty): This was a short forum prompt, not a manifesto, but it’s telling. The user “rorty” asked whether EAs have a pragmatist streak, noting that pragmatism treats truth as inseparable from practical consequences (forum.effectivealtruism.org). A few responders discussed how EA’s focus on “doing what works” might align with pragmatism’s “truth = usefulness” idea, while EA’s obsession with getting the facts right leans realist. One responder preferred Peircean pragmatism (truth is what we’d agree on at the end of inquiry) to the more radical Rortyan pragmatism (truth as just whatever is socially justified). The upshot for our purposes is that if many EAs were indeed pragmatists, they might be comfortable with shaping narratives for impact, since epistemology would be “applied” anyway. However, the discussion also highlighted that EA as a community is “obsessed with figuring out if we might collectively be wrong” – a sign of seeking objective truth, not just convenient belief. In comparing reasoning styles, it emerged that even those EAs who lean pragmatic care about the correspondence of beliefs to reality, because to “actually make a difference” (EA’s goal) you can’t just appear effective, you must be effective. This implicitly rebukes the idea of purely instrumental truth: if we fool others or ourselves about an intervention’s effectiveness, reality will catch up (the intervention won’t work as imagined). So while some EAs might talk in a way that sounds pragmatist (“what matters is what works”), in practice there’s a deep respect for empirical truth as a constraint – otherwise you won’t achieve the good you want. The pragmatism question also raises the issue of moral pragmatism: are we willing to trade off moral principles for results? Some critics label EAs as too pragmatic in this sense (willing to sacrifice, e.g., one person’s welfare for greater aggregate good). But many EAs in that thread and elsewhere assert limits – they don’t want to be seen as mere “the ends justify the means” people. In summary, the forum consensus wasn’t that EAs are all philosophical pragmatists, but the question itself suggests an awareness: if we slide toward pragmatism unchecked, we might lose sight of truth. It’s a gentle reminder that balancing practical impact with epistemic rigor is an ongoing challenge. On lying specifically, a pure pragmatist might say “lie if it helps,” but the EA community ethos – as reflected in other posts – pushes back, noting that usefulness in the long run requires truthfulness.
“Moral Uncertainty and Moral Realism Are in Tension” by Lukas Gloor (2022): Gloor’s piece, part of a series on meta-ethics, isn’t directly about lying, but it addresses how to make decisions when you’re unsure about moral theories. He argues that if someone is a staunch moral realist, believing in objective moral facts, then the common EA practice of hedging between moral theories (like giving some weight to deontological side constraints) becomes conceptually awkward (forum.effectivealtruism.org). Either one should be uncertain about realism itself, or else commit to figuring out the one true morality. Translated to our topic: if an EA thinks “lying is intrinsically wrong” is objectively true, they won’t be swayed by impact calculations. Conversely, if they’re confident realism is false, they might go full steam on expected-value reasoning and be willing to lie for great good (since morality is just their preferences). However, most EAs are somewhere in between – not sure whether there are absolute rules or not. Gloor suggests that acknowledging that uncertainty is important: if you aren’t fully sure that utilitarianism is the truth, you have reason to accommodate other moral intuitions (like truth-telling) in your decision framework. This mirrors the approach of MacAskill and others on moral uncertainty: it provides a rational justification for not being an extremist even if your central estimate favors lying. Essentially, moral uncertainty acts as a moderator on urgency: yes, maybe outcomes point one way, but what about the 10% chance that lying is deeply wrong in a way your utilitarian model isn’t capturing? Gloor’s broader point is that EAs often talk about moral uncertainty but implicitly assume some realism. The tension is unresolved – which hints that EA hasn’t fully codified how to treat norms like honesty under moral uncertainty. The practical takeaway from Gloor might be: given we could be wrong about the all-things-considered calculus, it’s prudent to abide by certain ethical constraints (like honesty) unless we’re extremely sure breaking them is correct. It’s a more philosophical backup to the intuitive worry many have: “It just feels wrong to lie, even if the model says so, and maybe that feeling tracks a moral truth or an important consideration I haven’t formalized.” Gloor would likely encourage EAs to avoid cavalierly violating common-sense moral principles, because if there are moral truths (realism), we risk doing something truly bad, and if there aren’t, acting as if there are often leads to better cooperation anyway. In either case, the expected moral worth of lying is diminished by our uncertainty about moral theory.
Across these community voices, a common theme emerges: strategic deception is viewed with great skepticism. The strongest advocates for truth (Strawberry Calm, Constantin, Aletheophile) highlight trust and integrity as paramount. Even the discussions of pragmatism and moral uncertainty circle back to “we need to be careful; truth has a special role in keeping our efforts on track.” Notably, none of these perspectives celebrates lying as an underused lifehack for doing good. Instead, they grapple with just how bad an idea it usually is, even when superficially tempting. The EA Forum, reflecting the community, seems to lean heavily toward the maxim: with rare, rare exceptions, lying is off-limits for EAs. The exceptions (perhaps akin to Rahab’s case or darura) would need to be extreme and clearly beneficial – and even then, many would argue for finding alternative solutions if at all possible.
Conclusion: Reflection on Integrity vs. Urgency
Effective altruists aim to use reason and evidence to do the most good. What happens when our reasoning apparatus (the spreadsheet, the impact estimate) points toward a means that undercuts our evidence-based, truth-seeking ethos? This tension between integrity and utilitarian urgency doesn’t yield easy answers. We’ve explored scenarios, real examples, philosophies, religious counsel, and community arguments. In the end, each of us in the EA community might still answer differently when theory collides with intuition.
The Uncomfortable Asks
If your impact model said a lie could save lives, what would you do? Would you tell the lie, tweak it into a technically true but misleading statement, or refuse and seek another path? What factors would weigh most – the immediate lives at stake, or principle and long-term credibility?
Where do we draw the line on “strategic communication”? There’s a spectrum, from choosing a frame that highlights the worst case (arguably okay) to knowingly spreading false information (clearly not okay). As a community, how can we define and enforce norms that encourage persuasive yet truthful advocacy?
Can we imagine edge cases where lying is the right call? Perhaps extremely rare “rescue situations,” or preventing existential catastrophes? If so, how do we guard against those exceptions becoming more common through motivated reasoning? Should there be an internal accountability mechanism when someone claims an exceptional need to lie (e.g., private peer review, later transparency)?
How do we uphold epistemic integrity under urgency? In crises (pandemics, imminent risks), the pressure to “do something” fast is immense. How can EA maintain its commitment to truth when the world seems to demand simple, dramatic messages? Are there ways to be both honest and urgently motivating without resorting to falsehoods?
What should the EA community do if/when an “impactful lie” is revealed? Human nature being what it is, it’s possible someone in EA will, at some point, exaggerate or lie with good intentions. How we respond will set a precedent. Do we need stronger community norms or statements about honesty? How do we balance compassion (not crucifying someone for a mistake) with firm disavowal of the tactic?
Ultimately, this is a test of EA’s frameworks: can we achieve radical good without “radical honesty” failures? Doing good, better – and doing good truthfully.