A summary of Tomasik’s Charity Cost-Effectiveness in an Uncertain World
By Jim Buhler @ 2025-08-11T13:13 (+23)
Written in August 2024. I’ve only made some minor style edits before posting here.
Charity Cost-Effectiveness in an Uncertain World is a 2015 blog post published by Brian Tomasik on the website of the Center on Long-Term Risk, in which he argues for the following (among other things I set aside):
- When predicting the long-term impact of our actions, we can’t ignore unknown unknowns. We can’t conveniently assume they “cancel out” and “only use rigorous data”.
- Instead, we can try to account for these unknown unknowns by focusing on “actions that have broadly positive effects across a wide range of scenarios”. He gives examples of what he thinks are promising candidates (“meta-level activities like encouraging positive-sum institutions, philosophical inquiry, and effective altruism in general”), and responds to some anticipated objections.
- We can model unknown unknowns. He explores a few ways.
The problem of unknown crucial considerations
Tomasik simulates a conversation between his 2005 self and his “present” (as of 2013) self. 2005-Tomasik argues that seat belts are good because they prevent short-term injuries. 2013-Tomasik changes 2005-Tomasik’s mind by pointing out the effect seat belts have on total human population and, therefore, on animal farming. He then changes his mind again by bringing up the effect animal farming has on wild-animal suffering, and again with an even bigger crucial consideration, and so on.
At any point, 2005-Tomasik could decide to stop and say, “I tentatively believe the sign of seat belts is [positive or negative]; I will focus on the evidence I currently have and ignore the crucial considerations I might be missing, assuming they cancel out.”
But Tomasik’s fictional conversation makes it obvious that stopping at any such point is arbitrary and unreasonable. He concludes:
Ultimately, we have no choice but to look at the whole picture and grapple with it as best we can.
Cause robustness
Tomasik writes:
Another way to pick a cause to work on is to look for actions that have broadly positive effects across a wide range of scenarios. For example, in general:
- More cooperation, democracy, and effective institutions for positive-sum compromise are better.
- More philosophical reflectiveness and discourse are better.
- Meta-level activities to support the above (like the effective-altruism movement, advising people on career choice for making a difference, and so on) are probably good.
He then further unpacks the last bullet point, making the case for “punting to the future”.
He also dedicates two sections to responding to the following two concerns:
- What if you have weird values (such that the value of the abovementioned causes is less obvious to you than it is to someone with more typical values)?
- Isn't robustness just risk aversion?
Modeling unknown unknowns
Tomasik writes:
Consider the following narrative. Andrew is a young boy who sees people going to a blood-donation drive. He doesn't know what they're doing, but he sees them being stuck with needles. He concludes that he wouldn't like to participate in a blood drive. Let's call this his "initial evaluation" (IE) and represent it by a number to indicate whether it favors or opposes the action. In this case, Andrew assumes he would not like to participate in the blood drive, so let's say IE = -1, where the negative number means "oppose".
A few years later, Andrew learns that blood drives are intended to save lives, which is a good thing. This crucial consideration is not something he anticipated earlier, which makes it an "unknown unknown" discovery. Since it's Andrew's first unknown-unknown insight, let's call it UU1. Since this consideration favors giving blood, and it does so more strongly than Andrew's initial evaluation opposed giving blood, let's say UU1 = 3. Since IE + UU1 = -1 + 3 > 0, Andrew now gives blood at drives.
Andrew then keeps running into new crucial considerations that flip the sign of his conclusion.
What about future crucial considerations that Andrew hasn't discovered yet? Can he make any statements about them?
Tomasik proposes (I am hugely oversimplifying) that Andrew could look at how he has updated when encountering new crucial considerations in the past, assume these updates are representative of the ones he would make if he learned and comprehended all crucial considerations, and extrapolate from there.
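This extrapolation idea can be turned into a toy numerical sketch. The code below is my own illustration, not a formula from Tomasik’s post: the assumptions that future updates keep alternating in sign and shrink geometrically in magnitude are hypothetical, chosen only so the extrapolated tail converges.

```python
def evaluation_after(ie, updates):
    """Andrew's running evaluation: initial evaluation (IE)
    plus all crucial-consideration updates discovered so far."""
    return ie + sum(updates)

def extrapolate(ie, updates, decay=0.5):
    """Estimate the evaluation after *all* crucial considerations,
    assuming (hypothetically) that future updates keep alternating
    in sign and shrink geometrically by `decay`. The undiscovered
    tail is then an alternating geometric series:
        a - a*d + a*d^2 - ... = a / (1 + d)."""
    last = updates[-1]
    first_tail = -last * decay          # next update: flipped sign, smaller
    tail = first_tail / (1 + decay)     # closed form of the remaining series
    return evaluation_after(ie, updates) + tail

# Andrew's story: IE = -1, then UU1 = +3 flips the sign.
print(evaluation_after(-1, [3]))        # 2: currently favours donating
print(extrapolate(-1, [3], decay=0.5))  # 1.0: still positive after the tail
```

Under these (made-up) assumptions the sign of Andrew’s conclusion survives extrapolation, but a larger `decay` or a different pattern of past updates could flip it, which is exactly the fragility the approach is meant to probe.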
Tomasik also briefly notes some limitations this approach has.
Appendix: Important limitations of this work (in my opinion)
(08-25 Note: See also DiGiovanni 2025 and Buhler 2025 for objections to Tomasik's proposal for modeling unknown unknowns.)
- Tomasik doesn’t explain how “robust causes” (as he defines them) succeed at accounting for unknown unknowns. There is a huge difference between a) “broadly positive effects across a wide range of scenarios” and b) positive effects even in scenarios we haven’t thought of or can’t comprehend.
- He doesn’t address the problem that his evaluation of which causes are robust is itself subject to immense error bars, since he may be unaware of crucial considerations that would flip the sign of these causes. I.e., he doesn’t explain why the cluelessness problem he tries to address (without naming it) doesn’t also infect his judgment of what is robust.