Keeping Absolutes in Mind

By Michelle_Hutchinson @ 2018-10-21T22:40 (+145)

As effective altruists, we often focus on the relative value of the different ways we could be helping others. We focus on particular charities being more effective than others, or particular jobs being more impactful than others. That makes a lot of sense: there are big differences in effectiveness between interventions, so satisficing could lead to us losing a lot of value. But at the end of the day what really matters isn’t relative: it’s the absolute value our actions bring about. What matters is the number of children who are actually dewormed and the actual increases in global security caused by improvements made to technology policy. Keeping that in mind can be important for keeping ourselves motivated and appreciating the work others are putting in.

It might appear that focusing on relative value is unproblematic, even if it isn’t what ultimately matters. But doing so neglects one of the most exciting things about effective altruism: the fact that each of us can actually have a remarkable amount of impact in improving the lives of others. Thinking only about the comparative impact of actions can lead us to overlook that fact. Say I apply for the job I think will have the highest impact but fail to get it. Later, I dwell on how much less impactful my current job is than the one I first went for, instead of on the impact I’m actually having. Or maybe I work on trying to decrease pandemic risk. While I succeed in reducing that risk, the reduction feels tiny compared to the massive reduction the President could effect. In both cases, I’m likely to feel demotivated about my job. Similarly, if other people were in these roles, focusing on the comparisons might prevent me from properly appreciating the work they’re doing.

Focusing on relative value might also lead us to neglect possible costs. When buying houses, people tend to switch into a mode where saving an extra £100 seems less important than it would under ordinary circumstances. Similarly, in large organisations (such as universities) where the costs of activities are high, there may be an assumption that additional overhead costs are unimportant as long as they’re small relative to core activities. Yet in absolute terms, these costs may be thousands of pounds.

In cases like those above, it might help to think more about the absolute benefit our actions produce. That might mean simply trying to make the value more salient by thinking about it. The 10% of my income that I donate is a far smaller share than some of my friends give. But thinking through the fact that over my life I’ll be able to do the equivalent of saving more than one person from dying of malaria is still absolutely incredible to me. Calculating the effects in more detail can be even more powerful – in this case, thinking through specifically how many lives-saved equivalents my career donations might amount to. Similarly, when you’re being asked to pay a fee, thinking about how many malaria nets that fee could buy makes the value lost to the fee vivid. That might be useful if you need to motivate yourself to resist paying unnecessary overheads (though in other cases doing the calculation may be unhelpfully stressful!).
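To make that kind of back-of-the-envelope calculation concrete, here is a minimal sketch. Every figure in it (income, career length, cost to save a life, cost per net) is an illustrative assumption, not a number from this post or from any charity evaluator – swap in your own estimates.

```python
# A minimal sketch of the lifetime-donation calculation described above.
# Every figure below is an illustrative assumption, not a number from the post.

annual_income = 30_000        # assumed annual income (GBP)
donation_rate = 0.10          # donating 10% of income, as in the post
career_years = 40             # assumed length of a donating career
cost_per_life_saved = 3_000   # assumed cost to avert one death from malaria (GBP)
cost_per_net = 4              # assumed cost of one insecticide-treated bednet (GBP)

lifetime_donations = annual_income * donation_rate * career_years
lives_saved_equivalent = lifetime_donations / cost_per_life_saved
nets_per_100_fee = 100 / cost_per_net

print(f"Lifetime donations: £{lifetime_donations:,.0f}")
print(f"Lives-saved equivalent: {lives_saved_equivalent:.0f}")
print(f"Bednets a £100 fee could have bought: {nets_per_100_fee:.0f}")
```

The exact numbers move around a lot with the assumptions, but the point of the exercise is that the absolute figure – however you estimate it – is what stays motivating.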

In cases where the impact is harder to cash out, like that of someone working on pandemic preparedness, it might be helpful to make the impact more concrete to yourself. That could be by thinking through specifically how the future might be better because of you, or by thinking about ways similar work has improved the world in the past.

For effective altruism to be successful, we need people working in a huge number of different roles – from earning to give to politics and from founding NGOs to joining the WHO. Most of us don’t know what the best career for us is. That means that we need to apply to a whole bunch of different places to find our fit. Then we need to maintain our motivation even if where we end up isn’t the place we thought would be most impactful going in. Hopefully by reminding ourselves of the absolute value of every life saved and every pain avoided we can build the kind of appreciative and supportive community that allows each of us to do our part, not miserably but cheerfully.


undefined @ 2018-11-06T12:28 (+26)

Strong upvote. I think this is an important point, nicely put.

A slightly different version of this, which I think is particularly insidious, is feeling bad about doing a job which is your comparative advantage. If I think Cause A is the most important, it's tempting to feel that I should work on Cause A even if I'm much better at working on Cause B, and that's my comparative advantage within the community. This also applies to how one should think about other people - I think one should praise people who work on Cause B if that's the thing that's best for their skills/motivations.

undefined @ 2018-11-09T21:02 (+22)

I really like this mindset as a way to avoid the "elitism" that a lot of people (rightly or wrongly) perceive in EA thinking.

When I encounter someone who's working on a project, my first thought isn't "what's the impact of this project?" Instead, I ask:

For almost everyone I've met within EA, both of those answers are "yes", and that seems to me like one of the most important facts about our community, however different our "relative impacts" might be.

Even outside of EA, I think that a lot of people still share our core goal of helping others as much as possible, and that this goal manifests in the way they think about their work. In my view, this makes them "allies" in a certain fundamental sense. As long as we share that goal, we can find ways to work together, in an alliance against the world's more... unhelpful forces.

Example: I have a friend who's curious about EA, but whose first love is ecology, and who works in a science education nonprofit (but wants to keep looking for better opportunities). I don't know what her actual impact is, but I do know that she really cares about helping people, and I ask her about her work with genuine interest when I see her.

I wouldn't recommend this friend for 80,000 Hours consulting, but I think that as EA grows to incorporate more causes and more people, she'll eventually find a place in the community. And even if she never takes a new job, it's good to just have a lot of people in the world who hear the phrase "effective altruism" and think "yes, those people are on my side, they're trying to help just like I am, I want them to succeed". If we want to make that happen, we should be careful to notice when someone is doing something good, even if it isn't "optimized".

Khorton @ 2020-01-17T08:51 (+4)

Revisiting this post a year later has also made me reconsider evaluating myself compared to my past self.

I think it makes sense to compare yourself to your past self, just as it makes sense at times to compare yourself to other EAs or to everyone in your city, but doing just one of these things will probably make you a bit mad. It's probably best to look at your life through different lenses from time to time - some days thinking of how much more good you're doing now than you used to, other days thinking of how much exceptional role models in your career have done, and other days thinking about the absolute number of people you've helped. Each lens can help to motivate you or improve your actions.

Stan Pinsent @ 2022-12-06T15:59 (+3)

This is also a good argument for positive lifestyle changes like eating vegan.

The sheer scale of animal suffering, plus the fact that there are definitely more impactful options than going vegan, can make it seem less appealing. But knowing that each year I have (and use) the power to spare dozens of animals a life in factory farms is empowering.

MaxDalton @ 2022-01-09T09:45 (+3)

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I commented on this back in the day, and still like it. I think it thoughtfully reflects on some common ways in which EA can be off-putting or overwhelming for people, in a way that will help them cope better.

undefined @ 2018-11-07T09:43 (+3)

Absolutely agree! :) I think this also extends to "non-EA" causes and projects that do good: sure, they're not the most effective, but they're still improving or saving lives, and that's praiseworthy.

Relatedly, I think it's hard to be motivated by subjective expected value, even if that's what most people think we should maximize. When something turns out not to be successful even though it had really high expected value (so the failure wasn't the result of bad analysis), the action should still be praised. I'm afraid that the ranking of actions by expected value diverges significantly from the ranking by expected recognition (from oneself and others), and I think that should be somewhat worrying.

Coming back to the post, I also think the drop in recognition is too large when the absolute value realized is not maximal. I'm curious to figure out what the optimal recognition function is (square root of expected value?), but I think that's a bit beside the point of this post!

undefined @ 2018-11-07T10:36 (+1)

I agree with your point about subjective expected value (although realized value is evidence for subjective expected value). I'm not sure I understand the point in your last paragraph?

undefined @ 2018-11-09T21:07 (+4)

My interpretation of Siebe's point is that we shouldn't try to scale our praise of people with their expected impact. For example, someone who saves one life probably deserves more than one-billionth of the praise we give to Norman Borlaug.

Reasons I agree with this point, if it was what Siebe was saying:

  • The marginal value of more recognition eventually drops off, to the point where it's no longer useful as an incentive (or even desired by the person being recognized).
  • It's easy to forget how often we're wrong about predicted impact, or how much we anchor on high numbers when we're looking at someone impressive. (Maybe Norman Borlaug actually saved only a hundred million lives; it's very hard to tell what would have happened without him.) Using something like "square root of expected value" lets us hedge against our uncertainty.

undefined @ 2018-11-13T09:36 (+6)

Something along those lines. Thanks for interpreting! :)

What I was getting at was mostly that praise/recognition should be a smooth function, such that things not branded as EA still get recognition even if they're only 1/10th as effective, instead of the current situation (as I perceive it) where something that's not maximally effective isn't recognized at all. I notice in myself that I find it harder to assess and recognize the impact of non-EA-branded projects.

I expect this is partly because I don't have access to, or don't understand, the reasoning, so I can't assess the expected value, but partly because I'm normally leaning on status for EA-branded projects. For example, if Will MacAskill is starting a new project I will predict it's going to be quite effective without knowing anything about the project, while I'd be skeptical about an unknown EA.

Denkenberger @ 2019-01-06T04:44 (+4)

Another way of guarding against being demoralized is comparing one’s absolute impact to that of people outside of EA. For instance, you could take your metric of impact, be it saving lives, improving human welfare, reducing animal suffering, or improving the long-term future, and compare the effectiveness of your donation to the average donation. With the median EA donation of $740, if you thought it were 100 times more effective than the average donation, it would be roughly equivalent to the typical donation of someone at the 99.9th percentile of income in the US. And if you thought it were 10,000 times more effective, you could compete with billionaires!
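To spell out the arithmetic in that comparison, here is a minimal sketch. The $740 median donation and the multipliers are the figures quoted in the comment above; treating the product as an "effectiveness-adjusted donation" is an illustrative framing, not a precise claim about income percentiles.

```python
# A minimal sketch of the comparison above: scaling a donation by how much more
# effective you think it is than the average donation. The $740 figure and the
# multipliers come from the comment; the framing is illustrative only.

median_ea_donation = 740  # USD, as quoted in the comment

for multiplier in (100, 10_000):
    adjusted = median_ea_donation * multiplier
    print(f"{multiplier:>6,}x as effective -> roughly a ${adjusted:,} typical donation")
```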