Taking Uncertainty Seriously (or, Why Tools Matter)

By Bob Fischer, Hayley Clatterbuck, arvomm @ 2024-07-19T10:30 (+115)

Introduction

Most philanthropic actors, whether individuals or large charitable organizations, support a variety of cause areas and charities. How should they prioritize among altruistic opportunities in light of their beliefs and decision-theoretic commitments? The CRAFT Sequence explores the challenge of constructing giving portfolios. Over the course of this sequence—and, in particular, through Rethink Priorities’ Portfolio Builder and Moral Parliament Tools—we’ve investigated the factors that influence our views about optimal giving. For instance, we may want to adjust our allocations to reflect the diminishing returns of particular projects, to hedge against risk, to accommodate moral uncertainty, or to follow our preferred procedure for moving from our commitments to an overall portfolio.
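To make the diminishing-returns point concrete, here is a minimal sketch of budget allocation across causes. Everything in it is an illustrative assumption of ours: the cause names, the value function value(x) = scale * x**exponent, and the greedy procedure. It is a toy model in the spirit of the Portfolio Builder, not its actual implementation.

```python
# A minimal sketch of budget allocation under diminishing returns.
# All numbers and cause names are illustrative placeholders.

CAUSES = {
    # name: (scale, exponent), where value(x) = scale * x**exponent
    "global_health": (1.0, 0.5),
    "animal_welfare": (3.0, 0.3),
    "gcr_reduction": (10.0, 0.2),
}

def marginal_value(scale: float, exponent: float, spent: float) -> float:
    """Approximate value of the next dollar, given value(x) = scale * x**exponent."""
    return scale * exponent * (spent + 1.0) ** (exponent - 1.0)

def allocate(budget: int, step: int = 1_000) -> dict:
    """Greedily give each increment to the cause with the highest marginal value."""
    spent = {name: 0 for name in CAUSES}
    for _ in range(budget // step):
        best = max(CAUSES, key=lambda name: marginal_value(*CAUSES[name], spent[name]))
        spent[best] += step
    return spent

print(allocate(1_000_000))
```

With concave returns, the optimum keeps shifting marginal dollars to whichever cause currently offers the most value, so funding typically gets spread across causes rather than concentrated in one. A single point-estimate ranking hides exactly this behavior.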

In this final post, we briefly recap the CRAFT Sequence, discuss the importance of uncertainty, and argue that we should be quite uncertain about any particular combination of empirical, normative, and metanormative judgments. We think that there is a good case for developing and using frameworks and tools like the ones CRAFT offers to help us navigate our uncertainty.

Recapping CRAFT

We can be uncertain about a wide range of empirical questions, ranging from the probability that an intervention has a positive effect of some magnitude to the rate at which returns diminish.

We can be uncertain about a wide range of normative questions, ranging from the amount of credit that an actor can take to the value we ought to assign to various possible futures.

We can be uncertain about a wide range of metanormative questions, ranging from the correct decision theory to the correct means of resolving disagreements among our normative commitments.

In this sequence—and, in particular, through the Portfolio Builder and Moral Parliament Tools—we’ve tried to do two things.

First, we’ve tried to motivate some of these uncertainties:

Second, we’ve tried to give structure to our ignorance, and thereby show how these uncertainties matter:

Given all this, it matters how confident we are in any particular combination of empirical, normative, and metanormative judgments.

How confident should we be in any particular combination of empirical, normative, and metanormative judgments?

We suggest: not very confident. The argument isn’t complicated. In brief, it’s already widely acknowledged that: (1) the relevant empirical and philosophical issues are difficult; (2) we’re largely guessing when it comes to most of the key empirical claims associated with GCR and animal work; and (3) as a community, EA has some objectionable epistemic features.

Given these claims, our credence in any particular crux for portfolio construction—e.g., the cost curves for corporate campaigns for chickens, the plausibility of total hedonistic utilitarianism, the value of the future, the probability that a particular intervention will backfire, etc.—should probably be modest. It would be very surprising to be the people who had figured out some of the central problems of philosophy, tackled stunningly difficult problems in forecasting, and done it all while being members of a group that (like many others) isn’t always maximally epistemically virtuous.

The relevant empirical and philosophical issues are difficult

This point hardly needs any defense, but to drive it home, just consider whether there’s any interesting thesis in global priorities research that isn’t contested. Here are some claims that might seem completely obvious and yet there are interesting, thoughtful, difficult-to-address arguments against them:

To be clear, we are not criticizing the claims in bold! Instead, we’re pointing out that even when we focus on claims that feel blindingly obvious, there are reasons not to be certain, not to assign a credence of 1. And if these claims are dubitable, then how much more dubitable are claims like:

In each of these latter cases, the empirical and normative issues are at least as complex as they are in the former cases. So, if we can’t be fully confident about the former, then we clearly can’t be fully confident about the latter. And since a string of such dubitable assumptions is required to justify any particular class of interventions, we should have fairly low confidence that any particular class of interventions deserves all our resources.
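A bit of illustrative arithmetic makes the point vivid (the specific credences and the independence assumption are ours, not an estimate of any real intervention): if the case for an intervention rests on five independent assumptions, each held with a fairly generous credence of 0.8, the joint credence is only about 0.33.

```python
# Illustrative arithmetic only: the credences and the independence
# assumption are placeholders, not estimates for any real intervention.
from math import prod

credences = [0.8, 0.8, 0.8, 0.8, 0.8]  # one credence per assumption in the chain
print(f"Joint credence: {prod(credences):.2f}")  # 0.33
```

Dependence among the assumptions would change the exact number, but the lesson survives: a chain of individually plausible premises can leave the conclusion far from certain.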

We’re largely guessing when it comes to most of the key empirical claims associated with GCR and animal work

As before, this claim needs little defense. Anyone who has tried to BOTEC the cost-effectiveness of a GCR or animal intervention knows that there isn’t any rigorous research to cite in defense of many of the numbers. (Indeed, if you consider flowthrough effects, the same point probably applies to many GHD interventions too.) EAs are now so accustomed to fabricating numbers that they hardly flinch. Consider, for instance, Arepo’s response to concerns that the numbers you plug into his calculators are arbitrary:

The inputs… are, ultimately, pulled out of one’s butt. This means we should never take any single output too seriously. But decomposing big-butt numbers into smaller-butt numbers is essentially the second commandment of forecasting.

In other words: “Of course we’re just guessing!” Granted, he thinks we’re guessing in a way that follows the best methodological advice available, but it’s still guesswork.
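To illustrate what decomposing a big guess into smaller guesses can look like in practice, here is a minimal Monte Carlo sketch. Every input distribution below (p_success, animals_affected, welfare_gain, cost) is a made-up placeholder, and the structure is our own toy decomposition rather than anything from Arepo’s calculators.

```python
# A toy decomposed BOTEC with uncertainty propagated by Monte Carlo.
# All input distributions are made-up placeholders.
import random

N = 100_000
results = []
for _ in range(N):
    p_success = random.betavariate(2, 8)             # chance the campaign works
    animals_affected = random.lognormvariate(13, 1)  # animals helped, if it works
    welfare_gain = random.lognormvariate(-2, 1)      # welfare units per animal
    cost = random.lognormvariate(14, 0.5)            # total dollars spent
    results.append(p_success * animals_affected * welfare_gain / cost)

results.sort()
print("median:", results[N // 2])
print("90% interval:", (results[int(0.05 * N)], results[int(0.95 * N)]))
```

The payoff of decomposing is visible in the output: instead of one opaque number, you get a distribution whose width honestly reports how much of the answer is guesswork.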

As a community, EA has some objectionable epistemic features

Yet again, this claim needs little defense. We might worry that many people in EA are engaged in motivated reasoning or have social incentives not to share evidence, perhaps in defense of the idiosyncratic views of key funders. We might worry that EAs have some “creedal beliefs” that serve to signal insider status; insofar as that status is desirable, these beliefs may not be driven by a truth-tracking process. EA may also be an echo chamber, where insiders are trusted much more than outsiders when it comes to arguments and evidence. We might worry that EAs are at risk of being seduced by clarity, where assigning numbers to things can make us feel as though we understand more about a situation or phenomenon than we really do. And, of course, some argue that EA is homogeneous, inegalitarian, closed, and low in social/emotional intelligence.

We’re fellow travelers; we aren’t trying to demonize the EA community. Moreover, it’s hardly the case that other epistemic communities are vastly superior. Still, the point is that insofar as the EA community plays an important role in explaining why we have certain beliefs, these kinds of features should lower our confidence in those beliefs—not necessarily by a lot, but by some.

There are better and worse ways of dealing with uncertainty

The extent of our uncertainty is a reason for people to think explicitly and rigorously about the assumptions behind their models. It’s also a reason to build models more like the Portfolio and Moral Parliament Tools and less like traditional BOTECs. This is because:

We hope that our tools are useful first steps toward better models and more critical eyes on the assumptions that go into them. We also hope that they prompt us to elicit and structure the uncertainty we face.

In the future, we could improve and supplement these tools in several ways. For instance:

Acknowledgments

The CRAFT Sequence is a project of the Worldview Investigation Team at Rethink Priorities. This post was written by Hayley Clatterbuck, Bob Fischer, and Arvo Muñoz Morán. Thanks to David Moss and Derek Shiller for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.


nathan98000 @ 2024-08-12T19:27 (+4)

I enjoyed this post and this series overall. However, I would have liked more elaboration on the section about EA's objectionable epistemic features. Only one of the links in this section refers to EA specifically; the others warn about risks from group deliberation more generally.

And the one link that did specifically address the EA community wasn't persuasive. It made many unsupported assertions. And I think it's overconfident about the credibility of the literature on collective intelligence, which IMO has significant problems.

Bob Fischer @ 2024-08-14T10:29 (+5)

Thanks for your comment, Nathan. We were making programmatic remarks, and there's obviously a lot more that would need to be said to defend those claims in any detail. Moreover, we don't mean to endorse every claim in any of the articles we linked. However, we do think that the worries we mentioned are reasonable ones to have; lots of EAs can probably think of their own examples of people engaging in motivated reasoning or being wary about what evidence they share for social reasons. So, we hope that's enough to motivate the general thought that we should take uncertainty seriously in our modeling and deliberations.