Some thoughts on moderation in doing good

By Vasco Grilo🔸 @ 2024-01-20T09:28 (+34)

This is a linkpost to https://80000hours.org/2023/05/moderation-in-doing-good/

This is a crosspost for Some thoughts on moderation in doing good by Benjamin Todd, as published on 80,000 Hours' website on 5 May 2023.

Here’s one of the deepest tensions in doing good:

How much should you do what seems right to you, even if it seems extreme or controversial, vs how much should you moderate your views and actions based on other perspectives?

If you moderate too much, you won’t be doing anything novel or ambitious, which really reduces how much impact you might have. The people who have had the biggest impact historically often spoke out against entrenched views and were met with hostility — think of the civil rights movement or Galileo.

Moreover, simply following ethical ‘common sense’ has a horrible track record. It used to be common sense to think that homosexuality was evil, slavery was the natural order, and that the environment was there for us to exploit.

And there is still so much wrong with the world. Millions of people die of easily preventable diseases, society is deeply unfair, billions of animals are tortured in factory farms, and we’re gambling our entire future by failing to mitigate threats like climate change. These huge problems deserve radical action — while conventional wisdom appears to accept doing little about them.

On a very basic level, doing more good is better than doing less. But this is a potentially endless and demanding principle, and most people don’t give it much attention or pursue it very systematically. So it wouldn’t be surprising if a concern for doing good led you to positions that seem radical or unusual to the rest of society.

This means that simply sticking with what others think, doing what’s ‘sensible’ or common sense, isn’t going to cut it. And in fact, by choosing the apparently ‘moderate’ path, you could still end up supporting things that are actively evil.

But at the same time, there are huge dangers in blazing a trail through untested moral terrain.

The dangers of extremism

Many of the most harmful people in history were convinced they were right, others were wrong — and they were putting their ideas into practice “for the greater good” but with disastrous results.

Aggressively acting on a narrow, contrarian idea of what to do has a worrying track record, which includes people who have killed tens of millions and dominated whole societies — consider, for example, the Leninists.

The truth is that you’re almost certainly wrong about what’s best in some important ways. We understand very little of what matters, and everything has cascading and unforeseen effects.

Your model of the world should produce uncertain results about what’s best, but you should also be uncertain about which models are best to use in the first place.

And this uncertainty arises not only on an empirical level but also about what matters in the first place (moral uncertainty) — and probably in ways you haven’t even considered (‘unknown unknowns’).

As you add additional considerations, you will often find that not only does how good an action seems change, but even whether it seems good or bad at all can flip (‘crucial considerations’).

For instance, technological progress can seem like a clear force for good as it raises living standards and makes us more secure. But if technological advances create new existential risks, the impact could be uncertain or even negative on the whole. And yet again, if you consider that faster technological development might get us through a particularly perilous period of history more quickly, it could seem positive again — and so on.

Indeed, even the question of how, in principle, to handle all this uncertainty is itself very uncertain. There is no widely accepted version of ‘decision theory,’ and efforts to construct one quickly run into paradoxes or deeply unintuitive implications.

It’s striking that almost any moral view taken entirely literally leads to bizarre and extreme conclusions:

How are we to wrestle with all these different perspectives?

The case for moderation

One thing that’s clear is that the course of action that seems best likely has some serious downsides you haven’t considered.

Partly this is true due to good old-fashioned self-delusion. But even for an honest and well-intentioned actor, there are good reasons to expect this mismatch to happen theoretically, and it has been seen empirically.

Whenever someone proposes an action that seems unusually impactful, further investigation is far more likely to produce reasons that the impact is less good than it first seemed.

Indeed, there are good reasons to think that aggressively maximising based on a single perspective is almost bound to go wrong in the face of huge uncertainty.

The basic idea is that if your model is missing lots of what matters, and you try to aggressively push for one outcome that makes sense to you, it’ll probably come at the expense of those other values and outcomes that are missing from your model. (This idea is closely related to Goodhart’s Law and the AI alignment problem.)

This kind of naive optimisation is especially likely to go wrong when the things that are missing from your model are harder to measure than the main thing you’re focused on, since it’s so seductive to trade a concrete gain for an ill-defined loss.

There are many more reasons to moderate your views:

All this means that some degree of moderation is crucial. The difficult question, then, is how much to moderate and in what ways.

Striking the balance

After FTX, I definitely feel like moderation is even more important than I thought before. But striking the right balance still feels very hard.

I think the question of how much to moderate may well be the biggest driver of differences in cause selection in effective altruism. People who are more into moderation are more likely to work on global health, while those who are less moderate are more likely to work on AI alignment. (I’m not saying this is a good state of affairs – I think there are ways to work on AI alignment that are compatible with moderation – but it seems likely empirically.)

And the tradeoff comes up in many other places, for instance:

The spectrum also has many dimensions. Moderation can sometimes look like humility, prudence, pluralism or cooperativeness. Here I’m just trying to point at the broad cluster of ideas, rather than precisely define a single concept.

So, under what circumstances should you bet against conventional wisdom, and how much should you moderate?

Here are some notes about how I currently think about the balancing act in my own decision making, which I think of as an attempt to create a cautious contrarianism:

  1. Use conventional wisdom as your starting point or prior.
  2. Generally stick with conventional wisdom except for a couple of carefully thought through ‘bets’ against it. You should have an explanation for what other people are missing. Spotting one important way people are wrong is already hard enough, so you need to pick your battles — and being unconventional has costs. So for example, if a startup is launching an innovative product, it should probably just apply best practices in its corporate management, rather than also trying to innovate in how to run a company.
  3. In working out what these bets should be, don’t just apply a single perspective. Consider a range of perspectives, including common sense, expert opinion and other plausible models and heuristics, weighing them based on their strength. Seek out the best reasons you might be wrong. Remind yourself that you’re very likely to be deluding yourself.
  4. It’s safest to eliminate any courses of action that seem very bad according to one important perspective. If you can’t do this, proceed cautiously and be open to changing your mind.
  5. In particular, don’t do anything that seems very wrong from a common sense perspective ‘for the greater good.’ Respect the rights of others and cultivate good character. Yes, in principle there are exceptions to this rule, but if you think you’re one of them, you’re almost certainly not.
  6. Once you’ve limited your downsides, then seek the course of action with the most upside according to your different perspectives. It’s OK to have your actions driven by one perspective, and to aim ambitiously at long shots, if other perspectives are ambivalent or neutral about it (rather than very negative). Maximise with moderation.
  7. The more leverage, scale and effect on other people you seek, the more vetting and caution you should apply. Chatting about a radical policy with a friend is totally different from pushing for a government to adopt it.

Here are some more notes about the nuances of applying these:

All this is pretty complicated to apply, and I’m not sure it would provide bright enough lines to do much to prevent dangerous behaviour in practice, so more work to develop these norms seems useful. We also need other mechanisms to prevent bad behaviour, like good governance — this post is only about one perspective on the problem.

If the main concern is to avoid dangerous behaviour, then I think point (5) about not harming others is most important.

Part of this is because the cases that seem most problematic historically seem to mostly involve dishonesty, rights violations, and domination over others (e.g. totalitarian communism and fascism).

Cautious contrarianism

There are lots of ways to support radical ideas that don’t have these features, such as non-violent protest or academic debate. It’s possible and necessary to have sandboxes, such as academia, where radical ideas can be explored and developed without immediate attempts to apply them.

Or consider the Shrimp Welfare Project. Promoting shrimp welfare sounds a bit nuts at first, but even if shrimp welfare turns out to be entirely unimportant, it’s not doing direct, serious harm to anyone — the likely worst case is that resources are wasted.

People who want to do good are on the safest ground when they can find projects like these. They’re on the shakiest ground when they try to force change on society as a whole.

Another, simpler framework would be ‘constrained maximisation.’ Try to do the most good you can, but within the constraints of respecting rights, maintaining good character, and honouring your other important personal goals.

Here are some things that I think follow from cautious contrarianism:

None of these are absolutes. Gandhi definitely didn’t ‘live an otherwise normal life’ and that was part of his influence. It’s plausible there are cases when you should violate these guidelines, but you should do so deliberately, cautiously, and with considered awareness of the downsides.

Some warning signs that could suggest someone isn’t applying cautious contrarianism:

However, it’s not a warning sign to seriously consider weird ideas with radical implications.

Almost all ideas could lead to crazy, harmful, or weird-seeming implications if pursued to their logical end or allowed to dominate your life. You need to learn the skill of holding multiple conflicting perspectives in mind and coming to some kind of synthesis of them.

Unfortunately there’s no fully principled way to make these tradeoffs, but I think we face something similar in normal life all the time with internal conflicts. Maybe part of you wants to be a parent, but part of you wants freedom. These drives would lead to very different lives, so how do you balance them?

There is no easy answer, and completely overriding either drive would be bad. Hopefully, you can come to some kind of compromise or synthesis that both sides of yourself are happy with.

Likewise, we have to do our best to balance contradictory worldviews and perspectives.

When it comes to effective altruism in particular: doing more good matters and is underappreciated, but it’s not the only thing that counts, and shouldn’t be the only focus of your life.


SummaryBot @ 2024-01-22T14:53 (+3)

Executive summary: Doing more good often requires challenging conventional wisdom, but extreme or unilateral contrarianism risks causing unintended harms. Moderation helps balance ambition with humility.

Key points:

  1. Maximizing good may require radical ideas, but we are often wrong, so moderation balances ambition and uncertainty.
  2. Extremism historically enabled harm despite good intentions; reputation and cooperation matter.
  3. Conventional wisdom is a reasonable prior; make a few careful bets against it after vetting.
  4. Eliminate plans that seem clearly harmful; then seek upside, maximizing cautiously.
  5. Focus especially on avoiding harming others or violating rights.
  6. Have some normality in life, consider multiple outcomes, take disagreement seriously.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Arturo Macias @ 2024-01-20T14:06 (+1)

If you are realistic about your plans, no matter how extreme your ideas or preferences are, your actions will inevitably be moderate. Reality imposes such narrow limits on what can be attained that you always end up with either moderate action or failure.