A summary of Tomasik’s Charity Cost-Effectiveness in an Uncertain World

By Jim Buhler @ 2025-08-11T13:13 (+23)

Written in August 2024. I’ve only made some minor style edits before posting here.

 

Charity Cost-Effectiveness in an Uncertain World is a 2015 blog post published by Brian Tomasik on the website of the Center on Long-Term Risk, in which he argues for the following (among other things I set aside):

The problem of unknown crucial considerations

Tomasik simulates a conversation between his past and “present” (as of 2013) selves. 2005-Tomasik argues that seat belts are good since they prevent short-term injuries. 2013-Tomasik changes 2005-Tomasik’s mind by bringing up the impact seat belts have on total human population and animal farming. He then changes his mind again by bringing up the impact the latter has on wild-animal suffering, then again with yet another, bigger crucial consideration, and so on.

At any point, 2005-Tomasik can decide to stop and say: “I tentatively believe the sign of seat belts is [positive or negative]; I will focus on the evidence I currently have and ignore the crucial considerations I might be missing, assuming they cancel out.”

But the author’s fictional conversation above makes it obvious why stopping at any such point seems arbitrary and unreasonable. He then concludes:

Ultimately, we have no choice but to look at the whole picture and grapple with it as best we can.

Cause robustness

Tomasik writes:

Another way to pick a cause to work on is to look for actions that have broadly positive effects across a wide range of scenarios. For example, in general:

He then further unpacks the last bullet point, making the case for “punting to the future”.

He also dedicates two sections to responding to the following two concerns:

Modeling unknown unknowns

Tomasik writes:

Consider the following narrative. Andrew is a young boy who sees people going to a blood-donation drive. He doesn't know what they're doing, but he sees them being stuck with needles. He concludes that he wouldn't like to participate in a blood drive. Let's call this his "initial evaluation" (IE) and represent it by a number to indicate whether it favors or opposes the action. In this case, Andrew assumes he would not like to participate in the blood drive, so let's say IE = -1, where the negative number means "oppose".

A few years later, Andrew learns that blood drives are intended to save lives, which is a good thing. This crucial consideration is not something he anticipated earlier, which makes it an "unknown unknown" discovery. Since it's Andrew's first unknown-unknown insight, let's call it UU1. Since this consideration favors giving blood, and it does so more strongly than Andrew's initial evaluation opposed giving blood, let's say UU1 = 3. Since IE + UU1 = -1 + 3 > 0, Andrew now gives blood at drives.

Andrew then keeps running into new crucial considerations that flip the sign of his conclusion.

What about future crucial considerations that Andrew hasn't discovered yet? Can he make any statements about them?

Tomasik proposes that (I am hugely oversimplifying here) Andrew could look at how he has updated upon encountering new crucial considerations in the past, assume that these updates are representative of the future updates he would make if he were to learn and comprehend all the crucial considerations, and extrapolate accordingly.
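One way to make this extrapolation concrete is a small Monte Carlo sketch. Note that this is my illustration, not Tomasik's actual model: I assume (as he does not necessarily) that future updates are drawn i.i.d. from the empirical distribution of past updates, and the function name and the example numbers (beyond IE = -1 and UU1 = 3 from the quoted story) are hypothetical.

```python
import random

def prob_positive_after_more_updates(ie, past_updates, n_future,
                                     n_sims=10_000, seed=0):
    """Estimate the probability that Andrew's overall evaluation ends up
    positive after n_future additional crucial considerations, assuming
    future updates are drawn i.i.d. from the empirical distribution of
    his past updates (a strong simplifying assumption)."""
    rng = random.Random(seed)
    positive = 0
    for _ in range(n_sims):
        total = ie + sum(past_updates)  # evaluation given known considerations
        # Resample past updates to simulate the not-yet-discovered ones.
        total += sum(rng.choice(past_updates) for _ in range(n_future))
        if total > 0:
            positive += 1
    return positive / n_sims

# Andrew's story: IE = -1, UU1 = 3; the later updates are made up here.
p = prob_positive_after_more_updates(ie=-1, past_updates=[3, -5, 4], n_future=5)
print(p)
```

The point of the sketch is only that, once past updates are treated as a sample, "what will I conclude after all considerations?" becomes an ordinary statistical estimation problem, with all the fragility that entails when the sample is tiny and the updates are not actually exchangeable.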

Tomasik also briefly notes some limitations this approach has.

Appendix: Important limitations of this work (in my opinion)

(08-25 Note: See also DiGiovanni 2025 and Buhler 2025 for objections to Tomasik's proposal for modeling unknown unknowns.)