Maps, Terrain, and Moral Uncertainty
By Moritz Stumpe 🔸 @ 2026-02-10T15:15 (+19)
TL;DR
- I’ve noticed that many EA concepts implicitly rely on a shared “navigation” metaphor: paths, peaks, valleys, a precipice, and so many more.
- This post plays with that metaphor to surface assumptions and give us better language to think and talk about strategy under uncertainty.
- It’s intentionally incomplete and meant as a starting point for reflection and discussion, not a set of conclusions.
Thanks to Felix Werdermann, Helene Kortschak, and Naveeth Basheer for their review - and GPT 5.2 for being kind of a co-author on this. Mistakes as well as bad and good jokes are all mine.
Why this post
This post is not trying to introduce a new framework, settle major disagreements in EA, or provide action‑guiding recommendations. It’s closer to a shared way of noticing how we already think - and a prompt to play with that picture and have some fun along the way.
Over time, I’ve noticed that a lot of how people in EA think and talk - often implicitly - seems to rely on a shared metaphor: we imagine ourselves navigating some kind of landscape. We talk about paths, peaks, valleys, exploration - and you may remember someone saying that we’re standing close to a precipice?
The purpose of this post is to make that implicit picture explicit and to treat it as a playground rather than a theory. I'll only explore a small selection of ideas and navigation concepts. This is not a rigid analysis but an invitation for shared thinking. My hope is that having a shared image gives us better language, makes it easier to think more freely and creatively, and perhaps helps surface assumptions we usually leave unspoken. I don’t expect readers to agree with the picture as a whole - and I’d be happy if it mainly serves as a starting point for reflection, extension, or disagreement.
Sam Harris and the moral landscape
The immediate inspiration for this way of thinking is Sam Harris’s book The Moral Landscape. Sam argues that moral questions are in essence questions about the well‑being of conscious creatures, and that we can think of possible world states as forming a kind of “landscape” with peaks (better outcomes) and valleys (worse ones).
I broadly agree with Sam and I know many in EA do too. But you don’t need to buy his full argument to find the image useful. What matters here is the picture itself: a space of possible worlds, some clearly better than others, and many ways of moving through that space.
Where Sam largely stays at the level of value (which world states are better), EA is forced to grapple with something more practical: how agents act under uncertainty. We don’t choose world states directly. We choose actions, policies, and strategies that lead to different futures. Actions don’t let us jump freely to any point we like; they move us through that space in constrained ways.
Terrain, maps, and a small caveat
Before going further, it helps to distinguish two things that often get blurred together:
- The terrain: how the world actually is (or will be), in moral terms.
- The map: our moral theories, empirical models, forecasts, and intuitions about that terrain.
Different people use different maps. Some maps are incomplete, distorted, or mutually incompatible. A large amount of disagreement in EA seems to come not from fundamentally different values or high-level aims, but from trusting different maps - different models of how the world works, how actions lead to outcomes, and how moral weight is distributed across possibilities.
For simplicity, I’ll assume a broadly moral realist picture in what follows: that there really are better and worse states of the world, even if we’re highly uncertain about where they lie. That assumption fits naturally with Sam’s starting point and with much of EA thinking.
That said, I don’t think you need strong moral realism for the navigation metaphor to make sense. Under pluralism, you might think of several moral dimensions rather than a single “height”; under anti-realism, you might think of the same terrain being scored differently by different agents. I’m not entirely sure how far this holds - this is beyond my philosophical pay grade - but in all cases we still face the same practical problem: how to navigate under deep uncertainty.
Peaks, valleys, and what we tend to optimise for
Now, let’s get into some metaphors. Starting with the basics:
- Peaks represent very good states of the world.
- Valleys represent very bad ones.
Even this simple picture already points at a familiar EA tension. Some people are more comfortable with strategies that primarily aim for very high peaks, while others place much more weight on avoiding deep valleys - a view often associated with suffering-focused ethics.
That said, there’s a more precise way to understand this disagreement. Most people are still maximising expected value - but they disagree about how the landscape itself is shaped. In particular, they disagree about how deep the valleys are, i.e. how much moral weight extreme suffering carries. Under suffering-focused views, the same path looks much riskier because the downside looms much larger on the map.
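To make that concrete, here’s a toy sketch (all numbers invented, not an endorsement of any particular weighting). Both agents maximise expected value over the same two paths; the only difference is how heavily each map weights negative outcomes, and that alone flips which path looks best.

```python
# Toy illustration (made-up numbers): two agents maximise expected value
# over the same two "paths", but score the worst outcomes differently.

paths = {
    "ambitious": [(0.6, +100), (0.4, -50)],   # (probability, outcome value)
    "cautious":  [(1.0, +20)],
}

def expected_value(lottery, suffering_weight=1.0):
    """Expected value, with negative outcomes multiplied by suffering_weight."""
    return sum(p * (v if v >= 0 else v * suffering_weight) for p, v in lottery)

for weight in (1.0, 5.0):  # 5.0 ~ a more suffering-focused map
    evs = {name: expected_value(lottery, weight) for name, lottery in paths.items()}
    best = max(evs, key=evs.get)
    print(f"suffering weight {weight}: {evs} -> prefer '{best}'")
```

With suffering weighted five times as heavily, the “ambitious” path drops below the “cautious” one: same actions, same probabilities, different map.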
Thinking about local peaks can also be fruitful: these are states that look good from nearby but block access to better regions or don’t help you get there. Many familiar disagreements between EA and other social movements can be understood as disputes over whether a given path leads to a local peak or provides access to even higher ground. I work in animal welfare - trust me when I say I’ve heard enough debates about incremental reforms versus deeper structural change.
Fog, walking, and satellites: acting vs. understanding
Let’s dig a bit deeper (pun intended) and think about different ways of understanding what the landscape looks like and how to navigate it.
Much of EA takes place under fog or in cloudy conditions. We don’t see very far ahead; feedback is noisy; long‑term effects are uncertain.
One response is walking: taking small, incremental steps, relying on tight feedback loops, and improving things locally and iteratively. This maps well onto global health scaling, program improvement, and many animal‑welfare interventions.
Another response is investing in better foresight and vision - sending drones or satellites into the sky, or simply finding better binoculars to look ahead. Think about research, forecasting, and big‑picture strategy.
How you prefer to understand and navigate the landscape will depend on how you read the conditions you’re in and on your key uncertainties. How much fog is there? Are we able to produce a functioning satellite? And where the heck did I put grandpa’s old binoculars?
… I think I left them on the cover of Julia Galef’s The Scout Mindset (you guessed it - another EA classic).
Exploration, exploitation, and leaving the road
The explore vs. exploit trade‑off also fits neatly into our metaphor.
- Exploitation means staying on known paths: reliable routes that have worked before.
- Exploration means leaving the road to see what else is out there.
Importantly, “roads” don’t have to mean revisiting past world states (we won’t get into time travel here - I’m not Christopher Nolan). Instead, they’re repeatable patterns and scalable intervention models.
Disagreements often hinge on how confident people are that better paths exist beyond the highway to the bednet factory - or the McDonald’s you stop at on the way to do some cage-free campaigning.
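For readers who want a more formal handle, the trade-off has a standard formalisation in the multi-armed bandit literature. Below is a minimal epsilon-greedy sketch (road names, payoffs, and the value of epsilon are all invented for illustration): most of the time you exploit the road with the best estimated payoff, and occasionally you leave it to see what else is out there.

```python
import random

# Minimal epsilon-greedy sketch of the explore/exploit trade-off.
# "Roads" are known intervention models with an estimated payoff;
# with probability epsilon we leave the road and try something at random.

estimated_payoff = {"bednets": 10.0, "cage_free": 8.0, "new_idea": 0.0}
counts = {road: 1 for road in estimated_payoff}
epsilon = 0.1  # how often we explore (an illustrative assumption, not a recommendation)

def choose_road():
    if random.random() < epsilon:
        return random.choice(list(estimated_payoff))        # explore
    return max(estimated_payoff, key=estimated_payoff.get)  # exploit

def update(road, observed_payoff):
    # Incremental average, treating the initial estimate as one prior observation.
    counts[road] += 1
    estimated_payoff[road] += (observed_payoff - estimated_payoff[road]) / counts[road]
```

The interesting part is how quickly “new_idea” can overtake the known roads once a few exploratory trips go well - which is exactly what the disagreements above are about.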
Long‑range jumps and hits‑based giving
Some parts of the landscape are relatively smooth: small steps lead to small, fairly predictable changes. Other parts are much harder to read from the ground.
Hits‑based giving fits naturally into this picture as a strategy of long‑range attempts. Instead of carefully walking uphill, you aim at what you believe to be a distant peak and try to get there in one big move - you might build a catapult or parachute in from a helicopter. But you may miss the peak because of wind, fog, an inaccurate map, or any other excuse - and end up spending lots of resources for little or no reward.
Fun fact: Moral ambition seems like another kind of hits-based approach (a “go big or go home” mindset), even though Rutger Bregman is from a famously flat country without any peaks - the Netherlands, which literally translates to “lower lands”!
Robustness across maps
If different people are using different maps - or if you’re unsure which map is right - one appealing strategy is robustness.
In navigation terms, robust actions are those that:
- move you uphill across many plausible maps,
- avoid deep valleys regardless of how the terrain is scored,
- don’t rely on precise peak locations being correct.
This connects naturally to moral uncertainty and the appeal of interventions that look pretty good under many ethical views rather than amazing under one and terrible under others. Let’s hope the GPS lobby has a strong influence on your moral parliament!
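Here’s one minimal way to sketch that idea (moves, maps, and scores are all invented): score each candidate move under several maps and keep only those that clear a floor on every map - decent everywhere, rather than amazing on one map and terrible on another.

```python
# Minimal sketch (invented numbers): score candidate moves under several
# "maps" and keep those that look at least decent on every map, rather
# than amazing on one and terrible on others.

maps = {
    "totalist":          {"A": +9, "B": +2, "C": +6},
    "suffering_focused": {"A": -8, "B": +3, "C": +4},
    "common_sense":      {"A": +1, "B": +2, "C": +5},
}

def robust_moves(maps, floor=0):
    """Moves whose score is above `floor` on every map (no deep valleys)."""
    moves = next(iter(maps.values())).keys()
    return [m for m in moves if all(scores[m] > floor for scores in maps.values())]

print(robust_moves(maps))  # -> ['B', 'C']; 'A' is great on one map, awful on another
```

Note that this filter shrinks as you add more diverse maps - a point the comments below pick up on.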
Option value, one‑way doors, and irreversibility
Some decisions are one‑way doors. You can walk through them, but you can’t walk back.
Option value, in this metaphor, is about avoiding getting trapped in a deep well (no play pump will get you out of there!) and not crossing bridges that collapse behind you unless you’re confident you don’t have to go back.
This frames lock‑in effects in an intuitive way without requiring formal models: some paths reduce the space of future paths, and that cost matters even if the immediate move looks good.
Vehicles, movement building, and speed vs. control
Now let’s think a bit more about vehicles. They let you move faster and carry more weight - we might think of them as the resources and various forms of capital we have available as a movement.
Movement building can help us to:
- build the right vehicles for the right terrain (you don’t need a ship when you’re stuck in the Sahara),
- train good drivers and co-pilots,
- establish rules for responsible vehicle use (how do you responsibly fly a billion-dollar rocket ship?),
- make sure we have enough fuel to power our vehicles,
- and so on and so forth.
Movement building can be valuable even before we agree on direction. It increases navigational capacity. But ideally, we invest in the kind of vehicles and personnel we really need!
Beyond consequentialism
Looking beyond consequentialist theories, does the landscape navigation metaphor still provide value?
For deontological constraints, I think it does! Here, the landscape is not freely navigable: some paths are simply blocked. Certain shortcuts are forbidden, even if they seem to lead uphill. The landscape now contains walls, fences, and no‑go zones.
For virtue ethics, however, I think the picture of a landscape breaks down. In that case we may want to talk instead about what it means to be a good hiker, and how we can practise our virtues along the way.
Closing thoughts (and an invitation)
This metaphor won’t settle debates, and it’s not meant to. Its value, if any, is in making some of our implicit assumptions more explicit and giving us language to play with:
- Are we in smooth terrain or somewhere harder to read?
- How thick is the fog?
- Are we walking, using vehicles, or relying on satellites?
- Are we exploring new regions or exploiting known roads?
- Which paths are off‑limits, regardless of where they lead?
Personally, I’ve found that thinking in these terms helps me notice when disagreements are really about different maps, different assumptions about the terrain, or different modes of navigation rather than about goals.
If this picture was useful, I’d be very curious what it helped you see. Where does it break? What navigation metaphors do you use when thinking about doing good under uncertainty? What other images or extensions might be worth adding? And importantly: Are there obvious jokes I’ve missed?
Feel free to treat this less as an argument and more as a shared whiteboard.
DC @ 2026-02-11T23:51 (+2)
Riffing:
The strait path; high P(doom) as very dangerous territory (climbing up a Precipice?), Scylla and Charybdis
Stopping and smelling the roses
Traction/tractability (can your vehicle get a footing?)
The road less traveled by / Neglectedness. Following well-worn roads (convergently instrumental optionality) vs hacking into the jungle, reasoning from first principles, creating a map from scratch.
Dealing with the optimizer's curse would be like... avoiding very high peaks even if you think you can climb them?
Avoiding getting stuck in deep potholes/valleys makes sense.
Viatopia fits well into this
Spaceship Earth as a navigational metaphor. Strategic Command / Mission Control vibes
How far are we going?
Mo Putera @ 2026-02-11T16:53 (+2)
Very interesting, thanks for writing it :) I had a brief chat with Opus 4.6 about your essay and it pointed out that the "robustness across maps" section is probably the most decision-relevant idea particularly under deep moral uncertainty, but also that the literature on robustness is less useful in practice than one might hope, working through cases in global health / AI safety / insect welfare / x-risk mitigation to illustrate. Opus concludes (mods, let me know if this longform quote is low-value AI slop and I'll remove it):
A "robustness across maps" strategy — favoring actions that look good under many theories — has a systematic directional bias. It favors interventions that are near-term, measurable, target existing beings, and operate through well-understood causal pathways. It disfavors interventions that are speculative, long-term, target merely possible beings, and depend on contested empirical models.
This is because the theories that assign high value to speculative, long-term interventions (total utilitarianism with low discount rates, for example) are precisely the theories that diverge most from other theories in their recommendations. An intervention can only be "robust" if theories with very different structures agree on it, and theories with very different structures are most likely to agree on cases where the action-relevant features (existing beings, measurable outcomes, clear causal pathways) are the ones all theories care about.
In other words: robustness-seeking is implicitly risk-averse in theory-space, and this risk aversion is not neutral — it systematically favors the neartermist portfolio. This isn't an argument against neartermism, but it is an argument that "robustness" isn't the neutral, above-the-fray methodology it's often presented as. It's a substantive position that deserves to be argued for on the merits rather than smuggled in as though it were mere prudence. ...
So what does the existing literature actually suggest you should do?
Honestly? The literature is in a state of productive but genuine confusion. MEC (maximizing expected choiceworthiness) is the most formally developed approach but founders on normalization. The parliamentary model handles normalization differently but introduces bargaining-mechanism sensitivity. Robustness-seeking avoids both problems but has the hidden directional bias I described. "My Favourite Theory" (just go with whatever theory you assign highest credence) avoids aggregation problems entirely but seems to throw away valuable information from your uncertainty.
If I were being maximally honest about what the current state of the art justifies, I'd say something like this: the right approach is probably a hybrid where you use robustness as a first filter (if an action looks good across all plausible theories, just do it), MEC-style reasoning for decisions where robustness doesn't settle the question (accepting that normalization introduces some arbitrariness), and a precautionary overlay for irreversible decisions (where the option-value argument from the essay is actually doing real work — preserving future decision-space is one of the few principles that survives across most frameworks).
But I want to flag that this hybrid isn't a clean resolution — it's a pragmatic kludge that inherits problems from each component. The field needs either a breakthrough in inter-theoretic value comparison or a persuasive argument that the problem is fundamentally insoluble and we should accept some particular principled approximation. Neither has arrived yet.
A suggested tweak to the landscape metaphor: think about robustness as the set of directions that are uphill on most maps simultaneously, because it makes visually obvious that this set shrinks as you include more diverse maps, and it makes the directional bias visible — robust paths tend to point toward nearby, well-surveyed terrain rather than distant, poorly-mapped peaks.