Ethical offsetting is antithetical to EA
By ClaireZabel @ 2016-01-05T17:49 (+43)
[My views are my own, not my employer's. Thanks to Michael Dickens for reviewing this post prior to publication.]
[More discussion here]
Summary
Spreading ethical offsetting is antithetical to EA values because it encourages people to focus on negating harm they personally cause rather than doing as much good as possible. Also, the most favored reference class for the offsets is rather vague and arbitrary.
There are a few positive aspects of using ethical offsets, and situations in which advocating ethical offsets may be effective.
Definition
Ethical offsetting is the practice of undoing harms caused by one's activities through donations or other acts of altruism. Examples of ethical offsetting include purchasing carbon offsets to make up for one’s carbon emissions and donating to animal charities to offset animal product consumption. More explanation and examples are available in this article.
Against offsetting
I think ethical offsetting is antithetical to EA values, and have three main objections to it.
1) In practice, people doing ethical offsetting use vague and arbitrary reference classes.
2) It's not the most effectively altruistic thing to do.
3) It spreads suboptimal and non-consequentialist memes/norms about doing good.
1) The reference class people pick for ethical offsets is arbitrary.
For example, let's say I cause some harm by buying milk that came from a cow that was treated poorly, and I want to negate the harm. I have a bunch of options.
I cannot undo the exact harm done by my purchase once it's happened, but I could (try to) seek out that specific cow and do something nice for her, negating the harm I caused in that specific cow's utility calculus. I could donate some money to a charity that helps cows, negating my harmful effect on the total utility of cow-kind. I could donate some money to a charity that helps all farmed animals, negating my harmful effect on farmed-animal-kind. Or I could donate to whatever charity I thought did the most good per dollar, negating my negative impact on the universe most cost-effectively but less directly.
People seem to settle on a sort of broad cause-area-level offsetting preference (e.g. donating to help farmed animals). While this reference class seems intuitive, it's ultimately arbitrary*.
2) Ethical offsetting isn't the most effectively altruistic thing.
You should do the things you think are most effectively altruistic, and you should donate to the charities you think are most effective. If you eat dead animals and don't believe animal charities are the most effective charities, I don't think you should donate to them.
Like everything else, ethical offsetting has opportunity costs; you could use that money to donate to the best charity, which is often different from the charity you’re using for ethical offsetting. It causes a harm relative to the world where you donate only to the most effective charity.
Even if you think the charity you donate your offsetting money to is the most effective, I don’t think it’s helpful to do ethical offsetting. Much of the suffering in the world isn’t directly caused by anyone, so an offsetting mindset increases the probability that you’ll miss big sources of suffering down the line. It causes a bias towards addressing anthropogenic harms, rather than harms from nature.
3) Ethical offsetting spreads anti-EA memes and norms
Ethical offsetting reinforces a preoccupation with not doing harmful things (instead of not allowing harmful things to happen, and taking action when they do). But EAs should (and usually do) focus on the sufferers, not themselves.
By encouraging others to offset, we set norms oriented around people’s personal behavior. We encourage an inefficient model of charity that involves donating based on one’s activities, not one’s abilities or the needs of charities that help neutralize various harms. We miss the chance to communicate about core EA ideas like cause prioritization and room for more funding by establishing a framework that has little room for them.
There are some other dangers involved in ethical offsetting, although I haven’t seen much evidence they actually occur: Offsetting may also encourage unhealthy scrupulosity about the harms we inevitably contribute to in order to function (although it could also help alleviate anxiety about them). And as Scott Alexander points out, offsetting could lead people to think it’s acceptable to do big harmful things as long as they offset them. This could contribute to careless and destructive norms about personal behavior.
Caveats
Offsetting is better than nothing. There may be situations in which ethical offsetting is the biggest plausible ask one can make. In such situations, I think bringing up the idea of ethical offsetting may be appropriate. And it may be an interesting conversation starter about sources of suffering and ways of alleviating them.
I've previously discussed my concerns about the obstacles to changing one's mind about cause prioritization, and I can imagine ethical offsetting at the cause area level being used to remind oneself about various causes of suffering in the world and the organizations working to stop them. This could make it easier to change one’s mind about what’s most effective. It seems somewhat plausible that offsetting would help make the community better at updating and better informed.
It may be really psychologically beneficial for some people, similar to the way donations for the dubiously-named fuzzies (donations for causes that are especially personally meaningful to the donor rather than maximally effective) sometimes are.
I think the argument that we should focus on doing lots of good rather than fixing harms we cause could drive destructive thoughtlessness about personal behavior, so I’m wary about making it too frequently. I’m most worried about this concern.
*The reference class Schelling point is stronger with carbon offsets, where the harmful thing is adding some carbon dioxide to the atmosphere. Carbon dioxide molecules are pretty interchangeable. If you remove as many as you added, you neutralize the harm from your emissions-causing action very directly, which is intuitively appealing.
All suffering may be equally important, but not all forms of harm are the same, or even similar. How similar the harm you offset is to the harm you cause can vary a lot. Few other types of offsetting I’ve heard of allow the opportunity to create a future so similar to the one where the harmful activity had never been done.
undefined @ 2016-01-10T08:12 (+23)
I don't think ethical offsetting is antithetical to EA. I think it's orthogonal to EA.
We face questions in our lives of whether we should do things that harm others. Two examples are taking a long plane flight (which may take us somewhere we really want to go, but also releases a lot of carbon and causes global warming) and eating meat (which might taste good but also contributes to animal suffering). EA and the principles of EA don't give us a good guide on whether we should do these things or not. Yes, the EA ethos is to do good, but there's also an understanding that none of us are perfect. A friend of a friend used to take cold showers, because the energy that would have heated her shower would be made by a polluting coal plant. I think that's taking ethical behavior in your personal life too far. But I also think that it's possible to take ethical behavior in your personal life not far enough, and counterproductively shrug it off with "Well, I'm an EA, who cares?" But nobody knows exactly how far is too far vs. not far enough, and EA doesn't help us figure that out.
Ethical offsetting is a way of helping figure this out. It can be either a metaphorical way, eg "I just realized that it would only take 0.01 cents to offset the damage from this shower, so forget about it", or a literal way "I am actually going to pay 0.01 cents to offset the costs of this shower."
As such, I think all of your objections to offsetting fall short:
The reference class doesn't particularly matter. The point is that you worried you were doing vast harm to the world by taking a hot shower, but in fact you're only doing 0.01 cents of harm to the world. You can pay that back to whoever it most soothes your conscience to pay it back to.
Nobody is a perfectly effective altruist who donates 100% of their money to charity. If you choose to donate 10% of your money to charity, that remaining 90% is yours to do whatever you want with. If what you want is to offset your actions, you have just as much right to do that as you have to spend it on booze and hookers.
Ethical offsetting isn't an "anti-EA meme" any more than "be vegetarian" or "tip the waiter" are "anti-EA memes". Both involve having some sort of moral code other than buying bednets, but EA isn't about limiting your morality to buying bednets, it's about that being a bare minimum. Once you've done that, you can consider what other moral interests you might have.
People who become vegetarian feel that, along with their charitable donations, they are morally pushed to be vegetarian. That's okay. People who want to offset meat-eating feel that, along with their charitable donations, they are morally pushed to offset not being vegetarian. That's also okay. As long as they're not taking it out of the money they've pledged to effective charity, it's not EA's business whether they want to do that or not, just as it's not EA's business whether they become vegetarian or tip the waiter or behave respectfully to their parents or refuse to take hot showers. Other forms of morality aren't in competition with EA and don't subvert EA. If anything they contribute to the general desire to build a more moral world.
undefined @ 2016-01-13T05:44 (+5)
[written when very tired]
Other forms of morality aren't in competition with EA and don't subvert EA. If anything they contribute to the general desire to build a more moral world.
They can be in competition with EA, or subvert it. I think most do, if you follow them to their conclusions. Philanthrolocalism is a straightforward example of a philanthropic practice that seems to be in direct conflict with EA. But more broadly, many ethical frameworks like moral absolutism come into conflict with EA ideas pretty fast. You can say most EAs don't only do EA things, and I'd agree with you. And you can say people shouldn't let EA ideas determine all their behaviors, and I'd also agree with you.
And additionally, for most ideologies, most people fall short much of the time. Christians sin, feminists accidentally support the patriarchy, etc. That doesn't mean sinning isn't antithetical to being a good Christian, or that supporting the patriarchy isn't antithetical to being a good feminist. You can expect people to fall short, and accept them, and not blame them, and celebrate their efforts anyway, without pretending those things were good or right.
Ethical offsetting isn't an "anti-EA meme" any more than "be vegetarian" or "tip the waiter" are "anti-EA memes". Both involve having some sort of moral code other than buying bednets, but EA isn't about limiting your morality to buying bednets, it's about that being a bare minimum.
Since when is EA about buying bednets being the bare minimum? That seems like an unusual definition of EA. Many EAs think obligation framings around giving are wrong or not useful. EA is about doing as much good as possible. EAs try to figure out how to do that, and fall short, and that's to be expected, and great that they try! But an activity one knows doesn't do the most good (directly or indirectly) should not be called EA.
From all this, you could continue to press your argument that they're merely orthogonal. I might have agreed, until I started seeing EAs trying to convince other EAs to do ethical offsetting in EA fora and group discussions. At that point, it's being billed (I think) as an EA activity and taking up EA-allocated resources with specifically non-EA principles (in particular, I think practices that drive (probably already conscientious!) individuals to focus on the harm they commit rather than seeking out great sources of suffering have been one of the most counterproductive habits of general do-goodery in recent history).
Without EA already existing, ethical offsetting may have been a step in the right direction (I think it's probably 35% likely that spreading the practice was net positive). With EA, and amongst EAs, I think it's a big step back.
That said, I agree with you that:
Ethical offsetting is a way of helping figure this out. It can be either a metaphorical way, eg "I just realized that it would only take 0.01 cents to offset the damage from this shower, so forget about it", or a literal way "I am actually going to pay 0.01 cents to offset the costs of this shower."
undefined @ 2016-01-13T14:23 (+8)
Since when is EA about buying bednets being the bare minimum? That seems like an unusual definition of EA. Many EAs think obligation framings around giving are wrong or not useful. EA is about doing as much good as possible. EAs try to figure out how to do that, and fall short, and that's to be expected, and great that they try! But an activity one knows doesn't do the most good (directly or indirectly) should not be called EA.
I think "do as much good as possible" is not the best framing, since it means (for example) that an EA who eats at a restaurant is a bad EA, since they could have eaten ramen instead and donated the difference to charity. I think it's counterproductive to define this in terms of "well, I guess they failed at EA, but everyone fails at things, so that's fine"; a philosophy that says every human being is a failure and you should feel like a failure every time you fail to be superhuman doesn't seem very friendly (see also my response to Squark above).
My interpretation of EA is "devote a substantial fraction of your resources to doing good, and try to use them as effectively as possible". This interpretation is agnostic about what you do with the rest of your resources.
Consider the decision to become vegetarian. I don't think anybody would think of this as "anti-EA". However, it's not very efficient - if the calculations I've seen around are correct, then despite being a major life choice that seriously limits your food options, it's worth no more than a $5 to $50 donation to an animal charity. This isn't "the most effective thing" by any stretch of the imagination, so are EAs still allowed to do it? My argument would be yes - it's part of their personal morality that's not necessarily subsumed by EA, and it's not hurting EA, so why not?
I feel the same way about offsetting nonvegetarianism. It may not be the most effective thing any more than vegetarianism itself is, but it's part of some people's personal morality, and it's not hurting EA. Suppose people in fact spend $5 offsetting nonvegetarianism. If that $5 wasn't going to EA charity, it's not hurting EA for the person to give it to offsets instead of, I don't know, a new bike. If you criticize people for giving $5 in offsets, but not for any other non-charitable use of their money, then that's the fallacy in this comic: https://xkcd.com/871/
Let me put this another way. Suppose that somebody who feels bad about animal suffering is currently offsetting their meat intake, using money that they would not otherwise give to charity. What would you recommend to that person?
Recommending "stop offsetting and become vegetarian" results in a very significant decrease in their quality of life for the sake of gaining them an extra $5, which they spend on ice cream. Assuming they value not-being-vegetarian more than they value ice cream, this seems strictly worse.
Recommending "stop offsetting but don't become vegetarian" results in them donating $5 less to animal charities, buying an ice cream instead, and feeling a bit guilty. They feel worse (they prefer not feeling guilty to getting an ice cream), and animals suffer more. Again, this seems strictly worse.
The only thing that doesn't seem strictly worse is "stop offsetting and donate the $5 to a charity more effective than the animal charity you're giving it to now". But why should we be more concerned about making them give the money they're already using semi-efficiently to a more effective charity, as opposed to starting with the money they're spending on clothes or games or something, and having the money they're already spending pretty efficiently be the last thing we worry about redirecting?
undefined @ 2016-01-13T18:53 (+2)
Aren't you kind of not disagreeing at all here?
The way I understand it, Scott claims that using your non-EA money for ethical offsetting is orthogonal to EA because you wouldn't have used that money for EA anyway, and Claire claims that EAs suggesting ethical offsetting to people as an EA-thing to do is antithetical to EA because it's not the most effective thing to do (with your EA money).
The two claims don't seem incompatible with each other, unless I'm missing something.
undefined @ 2016-01-13T09:06 (+3)
Your reply seems to be based on the premise that EA is some sort of a deontological duty to donate 10% of your income towards buying bednets. My interpretation of EA is very different. My perspective is that EA is about investing significant effort into optimizing the positive impact of your life on the world at large, roughly in the same sense that a startup founder invests significant effort into optimizing the future worth of their company (at least if they are a founder that stands a chance).
The deviation from imaginary "perfect altruism" is either due to having values other than improving the world or due to practical limitations of humans. In neither case do moral offsets offer much help. In the former case, the deciding factor is the importance of improving the world versus the importance of helping yourself and your close circle, which offsets completely fail to reflect. In the latter case, the deciding factor is what you can actually endure without losing productivity to an extent which is more significant than the gain. Again, moral offsets don't reflect the relevant considerations.
undefined @ 2016-01-13T14:24 (+3)
I gave the example of giving 10% to bed nets because that's an especially clear example of a division between charitable and non-charitable money - eg I have pledged to give 10% to charity, but the other 90% of my money goes to expenses and luxuries and there's no cost to EA to giving that to offsets instead. I know many other EAs work this way too.
If you believe this isn't enough, I think the best way to take it up with me is to suggest I raise it above 10%, say 20% or even 90%, rather than to deny that there's such a thing as charitable/non-charitable division at all. That way lies madness and mental breakdowns as you agonize over every purchase taking away money that you "should have" given to charity.
But if you're not working off a model where you have to agonize over everything, I'm not sure why you should agonize over offsets.
undefined @ 2016-01-15T10:07 (+2)
I don't think one should agonize over offsets. I think offsets are not a satisfactory solution to the problem of balancing resource spending on charitable vs. personal ends, since they don't reflect the correct considerations. If you admit X leads to mental breakdowns then you should admit X is ruled out by purely consequentialist reasoning, without the need to bring in extra rules such as offsetting.
undefined @ 2016-01-05T20:48 (+21)
This is a bit of a side point, but to what extent do EAs actually promote ethical offsetting? It seems to me like it normally gets raised in the following ways:
A dominance argument to show that ethical consumption isn't the most important thing to focus on. Hypothetical example: If I think AMF is the best donation opportunity, but donating to The Humane League is better than going vegetarian (because it would be very cheap to "offset" my diet), it shows that donations to AMF are very much better than going vegetarian. This shows going vegetarian makes a small contribution to my potential social impact, so I shouldn't do it unless it involves negligible sacrifice.
As an option for non-consequentialist minded people who don't just want to focus on the best activities, because they have special obligations to avoid doing certain types of harm.
It doesn't seem like EAs promote ethical offsetting as a generally good thing to do. Rather, EAs suggest identifying the highest leverage ways for you to make a difference in the world, and focusing your attention on those. (and not worrying about other ways to have more impact that involve more sacrifice)
undefined @ 2016-01-06T05:12 (+5)
I don't think many EAs spend a lot of time promoting it, but I hear EAs discuss the idea positively (and, I think, uncritically) with one another from time to time. It was more common shortly following the SSC article.
undefined @ 2016-01-06T15:37 (+4)
If I think AMF is the best donation opportunity, but donating to The Humane League is better than going vegetarian (because it would be very cheap to "offset" my diet), it shows that donations to AMF are very much better than going vegetarian. This shows going vegetarian makes a small contribution to my potential social impact, so I shouldn't do it unless it involves negligible sacrifice.
Does it actually show this? I generally hear the argument go something like this:
- You can probably convert a lot of people to vegetarianism by donating to The Humane League, which is better than becoming vegetarian yourself. Therefore donating to THL is better than being vegetarian.
- Naive estimates say THL does more good than AMF, but AMF has much more robust evidence than THL, so donating to AMF is better.
- Therefore donating to AMF is better than being vegetarian.
Parts 1 and 2 use contradictory claims. Part 1 claims that naive expected value dominates, and part 2 claims that robustness of evidence dominates.
CarlShulman @ 2016-01-06T18:00 (+9)
Michael, do you have an example? I've never seen the union of those 3 in one argument before, although I have seen each of the three claims made by different people.
E.g. it doesn't describe this post by Jeff Kaufman or this by Greg Lewis. The usual reasons I hear from such people favoring AMF over THL are greater flow-through effects or lower weight on nonhuman animals.
Separately, I hear people, e.g. Tom Ash and Peter Hurford, saying something like #2, but they are themselves vegetarian, and not making arguments for offsetting that I have seen. Indeed, they have challenged it on the basis that the estimates for ACE charities are not robust, which is consistent and contra the argument you described.
undefined @ 2016-01-06T22:24 (+3)
You're correct that Tom and I both assert something along the lines of #2 but have never argued #3.
undefined @ 2016-01-06T19:46 (+2)
I hear people separately make #1 and #2, I can't recall hearing someone say both #1 and #2 in a single breath. But if you favor AMF over THL because AMF has stronger evidence behind it, that doesn't preclude going vegetarian. "AMF is better than THL" is not a good argument against being vegetarian, and doesn't show that vegetarianism is negligible compared to AMF donations, which is the argument Ben was quoting.
CarlShulman @ 2016-01-06T21:37 (+5)
So you don't actually hear people making the argument you mentioned, and the published arguments by Kaufman and Lewis don't suffer from the inconsistency you mention? Kaufman makes an argument that counting human and cow lives equally, modest AMF donations can be a bigger deal than dairy consumption, while Lewis argues that if one takes ACE estimates seriously, then modest donations to ACE-recommended charities can be a bigger deal than general carnivory.
On the question of donations to AMF vs THL, Kaufman weights AMF over ACE charities because he cares less about nonhuman animals than humans. Some others do so because of flow-through effects. Lewis is vegetarian, but I think mainly donates to poverty and existential risk related things, and I don't know his precise reasons but they aren't germane to his essay.
"is the argument Ben was quoting."
Ben's description didn't specify someone thinking AMF was better because they didn't believe in the robustness of THL 'animals spared' estimates. You inserted that, which created the tension in your hypothetical argument. People who favored AMF over THL because of flow-through effects, or because of weighting humans more, wouldn't have that tension (I would argue the flow-through view would create other tensions, but that's a different story).
undefined @ 2016-01-08T22:12 (+4)
I think you're right actually. A lot of people who prefer AMF to THL are still vegetarian, and that's totally reasonable and self-consistent.
Jonas Vollmer @ 2019-06-08T16:00 (+15)
One thing I like about offsetting is that it creates a more cooperative and inclusive EA community. I.e., animal advocates might be put off less by meat-eating EAs if they learn they offset their consumption, or poverty reducers might be less concerned about long-termists making policy recommendations that (perhaps as a side effect) slow down AI progress (and thereby the escape from global poverty) if they also support some poverty interventions (especially when doing so is particularly cheap for them). In general, there seem to be significant gains from cooperation, and given repeated interaction, it's fairly easy to actually move towards such outcomes, including by starting to cooperate unilaterally.
Of course, this is best achieved not through offsetting, but by thinking about who we will want to cooperate with and trying to help their values as cost-effectively as possible.
Brian_Tomasik @ 2019-09-19T19:25 (+3)
Good point.
this is best achieved not through offsetting, but by thinking about who we will want to cooperate with and trying to help their values as cost-effectively as possible.
Couldn't one argue that offsetting harms that people outside EA care about counts as cooperating with mainstream people to some degree? In practice the way this often works is by improved public relations or general trustworthiness, rather than via explicit tit for tat. Anyway, whether this is worthwhile depends on how costly the offsets are (in terms of money and time) relative to the benefits.
Jonas Vollmer @ 2019-09-20T12:50 (+3)
Thanks, I agree. It still seems to me that a) mainstream people probably matter somewhat less than specific groups, b) we should think about how mainstream people would like to be helped, and that may or may not be through offsetting.
t_adamczewski @ 2020-08-21T17:03 (+11)
I just discovered this related and entertaining passage from Tim Harford's The Undercover Economist (2005).
Here I am, going to a panel discussion organized by an environmental charity, and a very earnest young member of staff is grilling me before I even get past the door of the lecture hall.
“How did you travel here today? We need to know for our carbon offset program.”
“What’s a carbon offset program?”
“We want all our meetings to be carbon-neutral. We ask everyone who attends to let us know how far they came and on what mode of transportation, and then we work out how much carbon dioxide was emitted and plant trees to offset the emissions.”
The Undercover Economist is about to blow his cover.
“I see. In that case, I came here in an anthracite-powered steamer from Australia.”
“Sorry . . . how do you spell anthracite?”
“It’s just a kind of coal—very dirty, lots of sulfur. OW!”
The Undercover Economist’s wife gives him a sharp dig in the ribs.
“Ignore him. We both cycled here.”
“Oh.”
Apart from being a good example of how irritating an Undercover Economist can be, this true story should, I hope, provoke a few questions. Why would an environmental charity organize a carbon neutral meeting? The obvious answer is “so that it can engage in debate without contributing to climate change.” And that is true, but misleading.
The Undercover Economist in me was looking at things from the point of view of efficiency. If planting trees is a good way to deal with climate change, why not forget about the meetings and plant as many as possible? (In which case, everybody should say they came by steamship.) If the awareness-raising debate is the important thing, why not forget about the trees and organize extra debates?
In other words, why be “carbon-neutral” when you can be “carbon-optimal,” especially since the meeting was not benzene-neutral, lead-neutral, particulate-neutral, ozone-neutral, sulfur-neutral, congestion-neutral, noise-neutral, or accident-neutral? Instead of working out whether to improve the environment directly (by planting trees), or indirectly (by promoting discussion), the charity was spending considerable energy keeping itself precisely “neutral”—and not even precisely neutral on all externalities, nor even a modest range of environmental toxins, but preserving its neutrality on a single, high-profile pollutant: carbon dioxide. And it was doing so in a very public way.
Brian_Tomasik @ 2019-09-19T21:32 (+11)
I think offsetting makes sense when seen as a form of moral trade with other people (or even possibly other factions within your own brain's moral parliament).
Regarding objection #1 about reference classes, the answer can be that you can choose a reference class that's acceptable to your trading partner. For example, suppose you do something that makes the global poor slightly worse off. Suppose that a large faction of society doesn't care much about non-human animals but does care about the global poor. Then donating to an animal charity wouldn't offset this harm in their eyes, but donating to a developing-world charity would.
Regarding objection #2, trade by its nature involves spending resources on things that you think are suboptimal because someone else wants you to.
An objection to this perspective can be that in most offsetting situations, the trading partner isn't paying enough attention or caring enough to actually reciprocate with you in ways that make the trade positive-sum for both sides. (For trade within your own brain, reciprocation seems more likely.)
undefined @ 2016-01-05T18:57 (+9)
I sympathise with the point you make with this post.
However, isn't it antithetical to consequentialism, rather than EA? EAs can have prohibitions against causing harms to groups of people.
How does this speak to people who use rule-based ethics that obliges them to investigate the benefit of their charitable gifts?
undefined @ 2016-01-05T19:16 (+7)
This would make sense, except that pretty much every argument for offsets that I've seen comes from consequentialists or consequentialist-aligned people.
Offsetting doesn't seem very virtuous, and deontologists generally have a poor model for positive rights/obligations.
undefined @ 2016-01-05T19:56 (+3)
I don't think most nonconsequentialist theories provide a basis to accept offsetting either though. But I'd have to see some people make a positive case for it to know where they're coming from.
undefined @ 2016-01-11T19:55 (+2)
I think they're consistent with a Kantian perspective. Also with a risk-averse consequentialist. Also with someone who likes to take responsibility for the consequences of their actions in a like-for-like manner for ethical-aesthetic reasons.
undefined @ 2016-01-07T16:44 (+5)
Often EAs propose offsetting as a counterargument to "if something harms others you must not do it". So you show that offsetting is better than strict harm avoidance, and then you give reasons why you should instead focus on the most important things.
Offsetting isn't antithetical to EA; to my mind it's a step towards EA.
undefined @ 2016-01-05T21:41 (+4)
Notice that the narrowest possible offset is avoiding an action. This perfectly undoes the harm one would have done by taking the action. Every time I stop myself from doing harm I can think of myself as buying an offset of the harm I would have done for the price it cost me to avoid it.
I think your arguments against offsetting apply to all actions. The conclusion would be to never avoid doing harm unless it's the cheapest way to help.
undefined @ 2016-01-06T05:19 (+1)
Yep. Except I think this would be most of the time, since people tend to dislike it when you harm others in big or unusual ways, and doing so is often illegal. So at the very least you frequently take hits to your reputation (and the reputation of EA, theoretically) and effectiveness when you cause big unusual harms.
undefined @ 2016-01-05T20:43 (+3)
I am not aware of EA associated people using ethical offsets beyond a small amount they don't consider part of their charity budget. Is there an "Ethical Offsetting is Great for EA" position you are arguing against?
undefined @ 2016-01-06T05:13 (+1)
It's not very common but I've heard it promoted among EAs several times in different EA circles.
undefined @ 2016-01-06T16:30 (+2)
Jeff has advocated this.
undefined @ 2016-01-07T18:05 (+8)
I'm not advocating offsetting, but I don't have a good name for what I am trying to advocate. The idea is that you should prioritize the activities that have the best tradeoff between downside-for-you and upside-for-others. There are ways that this is similar to offsetting (if you can show that the harm caused by X is less than the harm caused by not donating $Y, then you should feel fine donating $Y instead of avoiding X), but in this framework you don't arrive at donating by tallying up your harms and pricing them; instead, you set out to do as much good as you can without making yourself miserable.
undefined @ 2016-01-06T15:42 (+1)
I think that your argument is much more likely to discourage people making reasonable use of ethical offsets than anyone engaged in the problem you describe, mostly based on the proportion of such people that actually exist. As such, I think publishing such an argument, without the opposing view actually being promoted by anyone you care to mention, is irresponsible.
undefined @ 2016-01-06T20:01 (+1)
I wouldn't make this argument in a context where I don't think the vast majority of people reading it are EAs. It wouldn't make sense in a non-EA-dense context, since the argument is "offsetting isn't EA", not "offsetting is bad and no one should do it". Like I said, I think offsetting is better than nothing. The proportions are obviously very different in the EA community than outside it.
I don't want to mention people because a) they may not want their views made public b) it might embarrass them to name them in a context where I'm being critical of their views, and c) in about 2/3 of the cases I remember the conversation was in person, so I can't easily cite the argument anyway.
undefined @ 2016-01-06T22:23 (+1)
> The proportions are obviously very different in the EA community than outside it.
This is not at all obvious. All I hear about ethical offsets is at least EA adjacent.
> I don't want to mention people because a) they may not want their views made public b) it might embarrass them to name them in a context where I'm being critical of their views, and c) in about 2/3 of the cases I remember the conversation was in person, so I can't easily cite the argument anyway.
Understanding all of this, I still say that it is net negative to publicly make your argument when there is nothing you can publicly cite as promoting what you argue against. If you notice such views in private communications, it may make sense to address them in those private communications.
undefined @ 2016-01-05T19:46 (+3)
Thanks for this post.
It's like we came full circle from people donating minimal amounts of money to charity to relieve their guilt over their perpetuation of global injustice, to people working very hard and doing everything they can to fight global injustice, to people donating minimal amounts of money to relieve their guilt over their perpetuation of global injustice.
Just accept it. Some of your actions will harm others no matter what you do. The only way to make it worthwhile is to go out there and achieve lots of valuable things. Be confident and proud of what you accomplish and you can accept the harm that you will have to commit.
Justin Otto @ 2022-02-23T23:36 (+2)
I strongly agree. It's like trying to avoid a trade deficit with every country you interact with. The currency of value is better if it's not region-locked.
undefined @ 2016-01-14T22:56 (+2)
Thank you for this thought-provoking article! We want to make it the topic of our next meetup, so I’ve tried to clarify what my new position should be.
Your first two points are easily conceded—in my view, everyone who offsets should direct their donations to whatever charity they consider most effective. Your third point is most interesting.
Nino already married your and Scott’s positions, but I find it more useful to structure my thoughts in a list of pros and cons anyway.
On the pro side I see the following arguments:
- Contrary to Claire’s point, I think offsetting also questions the act-omission distinction because instead of forgoing something, one engages in proactive activism. Having done that, it will be harder to later argue that doing good is supererogatory, because it would be inconsistent with one’s past behavior.
- Offsetting can be used as a starting point to extend the circle of compassion in that a person could be brought to care enough about the harm inflicted by friends and family members to offset for them too. (But I haven’t seen this implemented.)
- Charities that advocate for nonhuman animals are probably the most commonly chosen reference class, and they are highly funding constrained, possibly more than they are talent constrained, so that an additional regular donor may be worth many additional vegans.
- Outside EA there are many nonveg*ns that are compassionate and want to reduce suffering but find that for them or in their context, veganism would be hard. Instead of resorting to the defensiveness and denigration discussed at the last meetup, they can join in with highly impactful donations.
- Offsetting can counter the cliché that veg*ns are dogmatic Siths that only deal in absolutes.
- Bridging the schism between veg*ns and nonveg*ns can help make advocacy for farmed animals a universally accepted movement, which would greatly simplify political advocacy.
On the con side I see the following arguments:
- Offsetting also bolsters the act-omission distinction because it fails to provide incentives to scale one’s proactive activism beyond the low level of harm the average person inflicts, so that the offsetter will fall far short of their potential. (Unless they also offset for friends and family members or even larger circles.)
- Offsetting may incur moral licensing when the satisfaction a person gains from “having donated” doesn’t scale in proportion with the size of the donation, so that a small donation makes further donations unlikely to the same extent that a large donation would have.
- Advantage 3 only holds for our current state of an anti-inductive system. In a decade or two there will hopefully be a point when the suffering of farmed animals has been reduced sufficiently to make offsetting much more expensive. At that point, an additional veg*n will be more valuable than an additional offsetter, given what the latter can be expected to be able to donate. In short, success in spreading offsetting diminishes its own value. Core EA ideas don’t suffer from that problem.
- Offsetting, when described in terms of offsetting, is only compatible with a subclass of consequentialist moralities, so that its impact is limited or the framing should be reconsidered.
- Offsetting may signal a readiness to defect (in such situations as the prisoner’s dilemma or the stag hunt), which might interfere with the offsetter’s chances for trade with agents that are not value aligned.
- Offsetting when described in terms of offsetting may in turn introduce (or aggravate) the schism between deontological and consequentialist veg*ns.
- When offsetting funds are taken from a person’s EA budget, it is at best meaningless because the money would’ve been donated effectively anyway, and likely harmful if the reference class is chosen to exclude the most effective giving opportunities.
- When offsetting becomes associated with EA, it may increase the perceived weirdness of EA, making it harder for people to associate with more important ideas of EA.
Some of the disadvantages only limit the scope of offsetting, others could be avoided with different rhetoric. What other pros or cons did I forget?
undefined @ 2016-01-15T00:05 (+1)
Cool, this mostly seems right.
I think the harmfulness of offsetting's focus on collectively anthropogenic sources of suffering is still being underestimated in these conversations. (I'm using "collectively anthropogenic" because there are potential sources of badness, like UFAI, that are anthropogenic but only caused by a few people, so spreading the idea of offsetting to most people would be useless for addressing the problem of UFAI. Also, offsetting the harm done by UFAI would be, uh, tricky.) I think offsetting might even reinforce a non-interventionist mindset that could prove extremely harmful for addressing problems like wild animal suffering.
One good aspect of offsetting that I think I initially underestimated is the way it can be used as a psychological tool for beginning to alieve that a cause area matters. For example, I can imagine an individual who is beginning to suspect animal suffering is important, but finds the idea of vegetarianism or veganism daunting, and shies away from it and thus doesn't want to think more about animal suffering. For them, offsetting could be a good bridge step. I don't think this conflicts with anything I said, but I don't want people to feel like it's shameful to use this tool.
I'd want to add on to:
Pro 3: If you're just offsetting, it's worth only as much as one additional vegan (if your numbers are right). I haven't seen evidence that ethical offsetting leads to big regular donors. It may, and if you just meant to bring up the possibility that seems reasonable.
Pro 4: People who eat animal products can donate to animal charities even if it's not offsetting. That's great! But you don't need offsetting to introduce that possibility. I think offsetting harmfully frames the discussion around them "making up" for their behavior, instead of possibly just making large donations that help lots of animals. Many vegetarians enthusiastically make large donations to animal charities, which is wonderful, without worrying about offsetting. I don't know what happened at your last meetup but I think it's awesome when nonvegans donate to animal charities.
Pro 6: I'm not sure how offsetting helps bridge this schism well. I can imagine some arguments about how it would help, and others about how it would hurt.
Con 5: I'm not sure how offsetting signals a willingness to defect. Could you explain that more?
Linda Linsefors @ 2021-01-21T10:12 (+1)
Edit: I posted before reading others' comments. Others have already made this and similar points.
Here is a story of how ethical offsetting can be effective.
I was trying to decide if I should fly or go by train. Flying is much faster and slightly cheaper, but the train is much more environmentally friendly. Without the option of an environmental offset, I have no idea how to compare these values, i.e. [my time and money] vs. [the direct environmental effect of flying].
What I did was to calculate what offsetting would cost, and it turned out to be around one USD, so basically nothing. I could now conclude that:
Flying + offsetting > Going by train
Because I would save time, and I could easily afford to offset more than the harm I would do by flying, and still pay less in total.
Now, since I'm an EA I could also do the next step
Flying + donating to the most effective thing > Flying + offsetting > Going by train.
But I needed at least the idea of offsetting to simplify the calculation to something I could manage myself in an afternoon. In the first step I compare things that are similar enough that the comparison is mostly straightforward. The second step is actually super complicated, but it's the sort of thing EAs have been doing for years, so for this I can fall back on others.
But I'm not sure how I would have done the direct comparison between [flying + donating] vs. [going by train]. I'm sure it's doable somehow, but with the middle step it was so much easier.
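The two-step reasoning above can be sketched with a few lines of arithmetic. Only the ~$1 offset cost comes from the comment itself; the fares and the dollar value of the time saved are hypothetical placeholders:

```python
# Step 1 of the comment's reasoning: compare [flying + offsetting] vs. [train].
# After offsetting, the environmental harms roughly cancel, so only money
# and time remain to be compared.
# NOTE: only offset_cost (~$1) is from the comment; the other numbers
# are made-up placeholders for illustration.

flight_cost = 100.0      # USD, hypothetical airfare
train_cost = 110.0       # USD, hypothetical train fare (slightly more expensive)
offset_cost = 1.0        # USD, roughly what the carbon offset cost
time_saved_value = 50.0  # USD, hypothetical value of hours saved by flying

# Net value of each option (costs negative, time saved positive)
fly_and_offset_net = -(flight_cost + offset_cost) + time_saved_value
train_net = -train_cost

# Under these assumptions, flying plus offsetting comes out ahead
assert fly_and_offset_net > train_net
```

The point of the middle step is that it reduces a hard cross-domain comparison (time and money vs. environmental harm) to a like-for-like comparison in dollars, which is easy to settle in an afternoon.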
undefined @ 2018-01-24T00:02 (+1)
While I agree that offsetting isn't the best thing to spend resources on, I don't like the framing of it being 'antithetical to EA'. Whether offsetting is a good idea or not is a good, object-level discussion to have. Whether it is aligned with or antithetical to EA brings in a lot more connotations, with little to gain:
- People who liked offsetting since earlier might think that EA isn't for them.
- People who like the EA-community and do offset might worry whether this means that they aren't 'EA enough' (without even reading the arguments).
- People who are in favor of utilitarian reasoning but don't like the EA community might ignore the arguments.
- The comment section might be used to discuss the definition of EA, instead of whether offsetting is a good idea or not.
undefined @ 2016-01-06T00:34 (+1)
Offsetting can also be viewed as deciding to cooperate in a tragedy-of-the-commons-like situation. If a large enough proportion of the population/businesses decided to offset their emissions, then presumably global warming would cease to be an issue. This would cost everyone a small amount individually, but the individual gain would be large. Perhaps the money could do more good elsewhere, but defecting simply encourages more people to defect as well and possibly causes the whole deal to collapse.
Not that I offset my carbon, just an interesting thought.
undefined @ 2016-01-06T05:30 (+2)
If everyone "defected" by donating to the most effective charity instead of offsetting, the whole deal wouldn't collapse. The world would be a better place.
So if the problem is that people are copycats so doing a thing encourages other people to do the same, it's better to donate more to an effective charity than to offset, since when people copy you doing that it will make the world even better.
Brian_Tomasik @ 2019-09-19T21:52 (+1)
A problem is that different people have different views on what's most effective. If most people are quasi-egoists, then for them, spending money on themselves or their families is "the most effective charity" they can give to. Or even within the realm of what's normally understood to be charity, people might donate to their local church or arts center. Relative to their values, this might be the best charity to give to.
undefined @ 2016-01-09T11:21 (+1)
The worry is that enough people will defect from the current social norms so that they break down, but not enough people defect to create a new norm of donating to effective charities instead.
undefined @ 2016-01-09T22:08 (+1)
Neither an "offset your harm" nor a "donate to effective charities" norm is especially well established in the general population, though. Your argument sounds like it's based on the former being widespread?
undefined @ 2016-01-10T14:36 (+1)
Global warming offsets are pretty big.
undefined @ 2016-01-11T15:23 (+4)
The idea of global warming offsets is pretty widespread, but I don't think a norm of buying them is. Specifically, I don't think either that they're very widely bought or even seen as something you're supposed to buy.
(My impression is that it's catching on as a norm among sustainably minded companies, though.)
undefined @ 2016-01-05T19:28 (+1)
"I've previously discussed my concerns about the obstacles to changing one's mind about cause prioritization, and I can imagine ethical offsetting at the cause area level being used to remind oneself about various causes of suffering in the world and the organizations working to stop them. This could make it easier to change one’s mind about what’s most effective. It seems somewhat plausible that offsetting would help make the community better at updating and better informed."
This has roughly been my reasoning for considering donating small sums to animal suffering and climate change as cause areas. (Though I haven't done so yet.) I think it helps people keep an open mind, and I am therefore happy to see them offsetting their 'wrong' behaviour.
I agree with Ryan's and Linch's comments as well.