When the impact model says "Lie" but morality says wait.

By Dr Kassim @ 2025-08-19T02:57

What would you do?

Imagine you’ve built a rigorous spreadsheet model to save lives during a pandemic. It weighs interventions by lives saved per dollar, ITN (importance, tractability, neglectedness), and catastrophic tail risks. One day, it spits out a disquieting result: the optimal strategy is to lie. For instance, the model suggests publicly announcing a “confirmed outbreak” before tests come back, to spur an early lockdown. Or it recommends presenting a worst-case fatality projection as if it were likely, knowing it will scare policymakers into action. In short, the spreadsheet says a small misrepresentation would maximize expected value. Lives are on the line, and truthfulness seems to be the casualty of doing good.
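To make that output concrete, here is a minimal sketch of the kind of expected-value comparison such a spreadsheet performs. Every intervention name, probability, and cost below is a hypothetical illustration invented for this example, not an estimate from any real cost-effectiveness model.

```python
# Minimal sketch of a spreadsheet-style expected-value comparison.
# All names, probabilities, and costs are hypothetical illustrations.

interventions = {
    # name: (probability the message triggers early action, lives saved if it does, cost in USD)
    "announce only confirmed data": (0.30, 10_000, 1_000_000),
    "announce 'outbreak confirmed' before tests return": (0.70, 10_000, 1_000_000),
}

def expected_lives_saved_per_dollar(p_action, lives_if_action, cost):
    """Naive expected value: chance the message works, times lives saved, per dollar spent."""
    return p_action * lives_if_action / cost

for name, (p, lives, cost) in interventions.items():
    ev = expected_lives_saved_per_dollar(p, lives, cost)
    print(f"{name}: {ev:.4f} expected lives saved per dollar")
```

On these made-up figures the misleading announcement “wins,” precisely because the naive calculation omits the cost of lost trust – the cost the rest of this piece argues is decisive.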

Such scenarios aren’t far-fetched. An AI safety advocate might privately estimate AGI is probably decades away, yet consider emphasizing a 5-year timeline in public to prompt urgent regulation. A global health campaigner might think a disease will likely kill 5,000 people, but highlight a 50,000 worst-case projection to unlock funding. These ethical edge cases force us to ask: when the impact model says “lie,” do we ever oblige? Or are there moral red lines, even in the pursuit of the greater good?

Real-World Parallels: Truth-Bending for the Greater Good

This tension between integrity and utilitarian urgency isn’t just theoretical. In recent crises and campaigns, well-intentioned leaders have sometimes bent the truth hoping to achieve better outcomes.

Across these domains, we see a common pattern: well-meaning leaders wrestling with the temptation to forsake strict truth for strategic impact. Sometimes it’s done furtively (as in Tanzania); other times it’s an open secret (as with the early mask guidance). Always, it raises uncomfortable questions about ends and means.

Philosophical Frameworks: Utilitarian Urgency vs. Moral Integrity

How do different moral philosophies approach the idea of lying for the greater good? Effective altruists often aim to be impact maximizers, which might incline them toward a utilitarian calculus. But philosophical perspectives on truth-telling diverge significantly.

These frameworks offer different answers, but none give a simple green light to “strategic lying” without reservations. The act utilitarian might be most sympathetic to lying for good ends, yet even they must consider game-theoretic repercussions (everyone lying erodes the system). Deontologists and virtue ethicists issue strong warnings that some goods – like integrity and trustworthiness – are fundamental. And those of us with moral uncertainty or a pragmatic bent find ourselves trying to balance multiple values: truth, consequences, reputation, character. The crux is that EA’s core aspiration to do the most good sits in tension with the heuristic “honesty is the best policy.” We need to examine that tension closely, informed by both philosophy and real-world evidence.
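To illustrate the game-theoretic repercussion in the simplest possible terms, here is a toy model with entirely invented parameters: a single lie pays off once, but an exposed lie discounts the weight audiences give to every later claim.

```python
# Toy repeated-interaction model: a lie pays once, but an exposed lie
# discounts the weight given to every later claim. All parameters are invented.

def cumulative_impact(rounds, lie_in_round_one, base_impact=100.0,
                      lie_bonus=50.0, detection_prob=0.5, trust_penalty=0.4):
    """Sum impact over repeated advocacy rounds under a simple trust-discount rule."""
    trust = 1.0
    total = 0.0
    for r in range(rounds):
        lie = lie_in_round_one and r == 0
        payoff = base_impact + (lie_bonus if lie else 0.0)
        total += trust * payoff
        if lie:
            # Expected erosion of credibility if the lie is later exposed.
            trust *= 1 - detection_prob * trust_penalty
    return total

print("Honest for 10 rounds:  ", cumulative_impact(10, lie_in_round_one=False))
print("Lie once, then honest: ", cumulative_impact(10, lie_in_round_one=True))
```

Under these assumed numbers the one-off gain is swamped by the long-run trust discount; the point is not the specific figures but the shape of the trade-off.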

Religious Perspectives: Ancient Wisdom on Lying

Major religious traditions have wrestled with the ethics of lying for millennia, often landing on “truth as a virtue, but…” with nuanced caveats in extreme cases. These perspectives add depth to our modern EA debate, reminding us that questions of ends and means are hardly new.

The religious insights mirror what we see in secular ethics and EA discussions: a strong prima facie duty to truthfulness, tempered by allowances for extreme circumstances. They add a note of humility – even when lying is permitted, it’s often with a heavy heart and a call to not let the exception swallow the rule (utrujj.org). For EAs, who prize rational truth-seeking, these traditions challenge us to clarify what counts as true necessity versus impatience or overconfidence in our models. They also underscore an idea often echoed in EA: integrity is hard-won and easily lost, so violate it only if you must, and know the gravity of that choice.

Consequences for the EA Movement: Trust, Coordination, Reputation

Zooming out to the Effective Altruism community as a whole, what are the stakes of an “impact model says lie” approach? Even if an individual lie seems beneficial, on a community level it can carry steep long-term costs. Here are key consequences EAs must weigh.

In sum, the strategic costs of lying for EA are huge. Even if a lie “works” as intended, it diminishes future capacity to do good by undermining trust, both internally and externally. As the saying goes, “Trust comes on foot and leaves on horseback.” One public breach of integrity can gallop away with years of painstakingly earned social capital. Effective Altruism’s influence depends on being seen as reliable, rational, and ethically consistent. Losing that for a quick win would be trading the house for the garage.

EA Community Perspectives: Debating Truth & Consequences

Unsurprisingly, EAs themselves have engaged in spirited debate over honesty and strategic deception. Several thoughtful EA Forum posts examine whether lying or extreme spin is ever justified, often concluding that the downsides are greater than they first appear. Let’s compare and contrast how five contributions from the community reason about truth, moral trade-offs, and “noble lies.”

Across these community voices, a common theme emerges: strategic deception is viewed with great skepticism. The strongest advocates for truth (Strawberry Calm, Constantin, Aletheophile) highlight trust and integrity as paramount. Even discussions about pragmatism and moral uncertainty circle back to the same refrain: “We need to be careful; truth has a special role in keeping our efforts on track.” Notably, none of these perspectives outright celebrates lying as an underused lifehack for doing good. Instead, they grapple with just how bad an idea it usually is, even if superficially tempting. The EA Forum, reflecting the community, seems to lean heavily toward the maxim: “With rare, rare exceptions, lying is off-limits for EAs.” The exceptions (maybe akin to Rahab’s or darura cases) would need to be extreme and clearly beneficial – and even then, many would argue for finding alternative solutions if at all possible.

Conclusion: Reflection on Integrity vs. Urgency

Effective altruists aim to use reason and evidence to do the most good. What happens when our reasoning apparatus (the spreadsheet, the impact estimate) points toward a means that undercuts our evidence- and truth-seeking ethos? This tension between integrity and utilitarian urgency doesn’t yield easy answers. We’ve explored scenarios, real examples, philosophies, religious counsel, and community arguments. In the end, each of us in the EA community might still answer differently when theory collides with intuition.

The Uncomfortable Asks

Ultimately, this is a test of EA’s frameworks: can we achieve radical good without failures of radical honesty? Doing good better – and doing good truthfully.