Offsetting is more expensive than it's assumed to be

By emre kaplan🔸 @ 2024-02-28T09:16 (+21)

Sometimes people seek to offset their harmful behaviours. The counterfactual impact of donations is often used in offsetting calculations. This seems mistaken.

Assume the following situation:

A 1-dollar donation to an animal product reduction charity results in 1 animal spared from being born into factory farming.

Alice, Charles and Mike cooperate in this charity. The participation of all three is indispensable for the outcome. So they each have a counterfactual impact of 1 animal.

If each of them assumed that this project offsets one instance of their previous animal product consumption, that would be triple counting: three offsets would be claimed while only one animal is spared. For this reason, the counterfactual values of donations shouldn't be used in offsetting calculations.

A better, though still unsatisfactory, approach would be to look at Shapley values. Here is a case where the Shapley value is still unsatisfactory:

Two people cooperate on a project to spare one animal from being born. The participation of either one alone is sufficient for the project to succeed. The counterfactual value of each participant is 0 (if either withdrew, the other would still see the project through), while the Shapley value of each is 0.5.

Maybe min(Shapley, Counterfactual) would be a better benchmark for offsetting. But I’m not sure of this.
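
To make the two examples concrete, here is a minimal brute-force sketch of both games. The implementation, function names, and characteristic functions are mine, not from the post; they just reproduce the numbers above:

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Brute-force Shapley values; v maps a frozenset of players
    to that coalition's value (animals spared)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for combo in combinations(others, r):
                s = frozenset(combo)
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += w * (v(s | {i}) - v(s))
        phi[i] = total
    return phi

def counterfactual(players, v):
    """Counterfactual impact: value with everyone minus value without player i."""
    grand = frozenset(players)
    return {i: v(grand) - v(grand - {i}) for i in players}

# Game 1: Alice, Charles and Mike are all indispensable.
trio = ["Alice", "Charles", "Mike"]
v1 = lambda s: 1.0 if len(s) == 3 else 0.0
print(shapley(trio, v1))         # 1/3 each -- credit sums to 1
print(counterfactual(trio, v1))  # 1 each -- sums to 3: triple counting

# Game 2: either of two people alone suffices.
pair = ["A", "B"]
v2 = lambda s: 1.0 if len(s) >= 1 else 0.0
sh, cf = shapley(pair, v2), counterfactual(pair, v2)
print(sh)                                    # 0.5 each
print(cf)                                    # 0.0 each
print({p: min(sh[p], cf[p]) for p in pair})  # min rule: 0 each
```

In the first game the counterfactual values sum to 3 even though only 1 animal is spared, which is exactly the triple counting; in the second game the min rule assigns 0 to both participants.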

How much difference does this make?

Many effective charities do institutional work, and institutional work often involves a lot of people. In animal advocacy, welfare policies require mass support from the public. A petition easily gets more than 100,000 signatures. 7.5 million people voted for Prop 12 in California.

However, any specific supporter from the public is less critical than the donor. Many projects wouldn't start at all without donor support, whereas Prop 12 would still have passed even if one fewer person had voted for it.

Nonetheless, there are quite a lot of veto-players involved in institutional animal welfare work. Assuming there are 8 distinct individuals/coalitions with the power to kill a typical animal welfare project, the Shapley value might be an order of magnitude lower than the counterfactual value.
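
As a rough illustration of that gap, here is the arithmetic under the simplifying assumption (mine, not spelled out above) that the 8 veto-players form a pure unanimity game, i.e. each one is indispensable:

```python
# Unanimity game with 8 veto-players, each indispensable (simplifying assumption).
# Counterfactual impact of each: v(all) - v(all minus them) = 1 - 0 = 1.
# By symmetry, the Shapley value splits the credit equally: 1/8 each.
n_veto = 8
counterfactual_each = 1.0
shapley_each = 1.0 / n_veto                # 0.125
print(counterfactual_each / shapley_each)  # 8.0 -- roughly an order of magnitude
```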


Jason @ 2024-02-28T17:11 (+6)

Thanks for posting this!

I think we can run into problems when we attempt to transfer cost-effectiveness analyses that were sound enough to answer "where should I donate?" to the harder question of "how much do I need to give to offset?" As you point out, assigning ~100% of the counterfactual good to the donor is . . . at a minimum, generous.

When we are asking where to donate, that often isn't a major problem. For example, if my goal is to save lives, I can often assume that errors in assigning "moral credit" will be roughly equal across (at least) GiveWell-style charities like AMF. Because the error term is similar for all giving opportunities, we can usually ignore it: it shouldn't change the relative ranking of the giving opportunities unless they are fairly close.

But offset situations pose a different question -- we are looking to morally claim a certain quantum of good to counterbalance the not-good we are producing elsewhere. That means we need an absolute measure (or at least an estimate) of that quantum. As a result, if we want to find the minimum amount necessary to offset, we necessarily must make judgments about distributing the moral credit available.

Some people might also want a confidence interval for their offsetting action -- e.g., "I want to be 99% confident that I am giving enough to actually offset my production of not-goods." This is likely impossible with some interventions. For instance, if I think there is a greater than 1% chance that the critics are correct that corporate campaigns are net-negative in the long run, then my 99% confidence interval will always include negative values. 

Someone who wants confidence in an actual offset -- rather than an offset in expectation -- would logically seek "safer" donation opportunities: those with more certain impact and a low spread of potential outcomes. Perhaps a bundle of interventions could achieve the necessary confidence interval (such as 3 programs each with an 80% chance of success and no appreciable risk of being net harmful, or a larger number at lower success probabilities).
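
A quick sanity check of the bundle idea, on my reading (not stated explicitly above) that the programs are independent and any single success achieves the full offset:

```python
# 3 independent programs, each with an 80% chance of success;
# the offset is achieved if at least one of them succeeds.
p_fail = 0.2
p_offset = 1 - p_fail ** 3
print(p_offset)  # 0.992 -- clears the 99% confidence bar
```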

Jason @ 2024-02-28T17:36 (+3)

I am wondering whether assigning "moral credit" for offset purposes is too complex to do with an algorithm and instead requires context-specific application of judgment. A few possible examples:

Motivated reasoning is always a risk, and any moral-credit-granting analysis is more likely to be underinclusive (and thus to over-grant the available moral credit to the influences that were identified) than the reverse. In some or even many cases, it may be necessary to apply an upward adjustment to even min(counterfactual value, Shapley value) to account for these factors.

emre kaplan @ 2024-02-28T18:19 (+4)

Thanks for this comment. It felt awkward to include all veto-players in the Shapley value calculation while writing the post, and now I'm able to see why. For offsetting, we're interested in making every single individual weakly better off in expectation compared to the counterfactual where you don't exist/don't move your body etc., so that no one can complain about your existence. So instances of doing harm can only be offset by doing good. Meanwhile, Shapley doesn't distinguish between doing and allowing, so it assigns credit to everyone who could have prevented an outcome even if they haven't done any good.

Richard Y Chappell @ 2024-02-29T00:51 (+2)

Alice, Charles and Mike cooperate in this charity. The participation of all three is indispensable for the outcome. So they each have a counterfactual impact of 1 animal.

If each of them assumed that this project offsets one instance of their previous animal product consumption, that would be triple counting: three offsets would be claimed while only one animal is spared. For this reason, the counterfactual values of donations shouldn't be used in offsetting calculations.

I'm not sure about this. Suppose that C & M are both committed to offsetting their past consumption, and also that both will count the present co-operative effort, should it go ahead, as a '+1 offset' -- and so will forgo the offsetting donations they would otherwise have made. Then the counterfactual impact of Alice cooperating with them is saving 1 animal + causing two future animals not to be saved, i.e. an overall negative effect.
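
A toy version of that accounting (my labels; the numbers come from the comment):

```python
# If the project goes ahead: 1 animal is saved now, but Charles and Mike
# each count it as their offset and skip one future offsetting donation.
saved_now = 1
future_savings_forgone = 2   # one per co-donor who claims the offset
alice_counterfactual = saved_now - future_savings_forgone
print(alice_counterfactual)  # -1: an overall negative effect
```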

So I think the counterfactual approach works fine, and is compatible with your observation that offsetting may be more difficult than it would at first appear. (But it really depends upon the details -- in particular, whether it's really true that your attempted offset will cause multiple others to do less good in future.)