An epistemology for effective altruism?

By Benjamin_Todd @ 2014-09-21T21:46 (+22)

At 80,000 Hours, we want to be transparent about our research process. So, I had a go at listing the key principles that guide our research.

I thought it might be interesting to the forum as a take on an epistemology for effective altruism, i.e. what principles should EAs use to make judgements about which causes to support, which careers to take, which charities to donate to, and so on?

I'm interested to hear your ideas on (i) which principles you disagree with, and (ii) which principles we've missed.

See the original page here.


What evidence do we consider?

Use of scientific literature

We place relatively high weight on what scientific literature says about a question, when applicable. If there is relevant scientific literature, we start our inquiry by doing a literature search.

Expert common sense

When we first encounter a question, our initial aim is normally to work out: (i) who are the relevant experts? (ii) what would they say about this question? We call what they would say ‘expert common sense’, and we think it often forms a good starting position (more). We try not to deviate from expert common sense unless we have an account of why it’s wrong.

Quantification

Which careers make the most difference can be unintuitive, since it’s difficult to grasp the scale and scope of different problems, which often differ by orders of magnitude. This makes it important to attempt to quantify and model key factors when possible. The process of quantification is also often valuable for learning more about an issue and making your reasoning transparent to others. However, we recognise that for most questions we care about, quantified models contain huge (often unknown) uncertainties and therefore should not be followed blindly. We always weigh the results of quantified models, bearing in mind how robust they are, against qualitative analysis and common sense.
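As a loose illustration of the kind of quantified model described above (a sketch with invented numbers, not an actual 80,000 Hours estimate), one way to keep the uncertainty visible is to sample each key factor from a wide range rather than using point estimates:

```python
import random

# Hypothetical back-of-the-envelope model of a career option's impact.
# Each factor is sampled from a wide range to reflect uncertainty;
# all numbers are made up purely for illustration.
def sample_impact():
    donations_per_year = random.uniform(10_000, 100_000)  # dollars donated per year
    cost_per_outcome = random.lognormvariate(8, 1)         # dollars per unit of good done
    years_in_career = random.uniform(5, 30)
    return donations_per_year * years_in_career / cost_per_outcome

samples = sorted(sample_impact() for _ in range(100_000))
print("median:", samples[len(samples) // 2])
print("10th percentile:", samples[len(samples) // 10])
print("90th percentile:", samples[9 * len(samples) // 10])
# The wide spread between percentiles is the point: the model informs,
# but shouldn't be followed blindly.
```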

The experience of the people we coach

We’ve coached hundreds of people on career decisions and have a wider network of people we gather information from who are aligned with our mission. We place weight on their thoughts about the pros and cons of different areas.

How do we combine evidence?

We strive to be Bayesian

We attempt to explicitly state our prior view on an issue, and then update towards or away from it based on the strength of the evidence for or against. See an example here. This is called ‘Bayesian reasoning’, and, although not always adopted, it seems to be regarded as best practice for decision making under high uncertainty among those who write about good decision-making processes.1
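A minimal sketch of this kind of update (with toy numbers of my own, not the example linked above): start from a prior probability, convert it to odds, multiply by a likelihood ratio for each piece of evidence, and convert back:

```python
# Toy Bayesian update; all numbers are illustrative assumptions.
prior = 0.30  # prior probability that a claim is true

def update(p, likelihood_ratio):
    """Multiply the odds by how much more likely the evidence is if the claim is true."""
    odds = p / (1 - p)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = prior
p = update(p, 3.0)   # evidence that's 3x more likely if the claim is true
p = update(p, 0.5)   # evidence that's 2x more likely if the claim is false
print(round(p, 2))   # posterior ≈ 0.39
```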

We use ‘cluster thinking’

As opposed to relying on one or two strong considerations, we seek to evaluate the question from many angles, weighting each perspective according to its robustness and the importance of the consequences. We think this process provides more robust answers in the context of decision making under high uncertainty than alternatives (such as making a simple quantified model and going with the answer). This style of thinking has been supported by various groups and has several names, including ‘cluster thinking’, ‘model combination and adjustment’, ‘many weak arguments’, and ‘fox style’ thinking.

We seek to make this process transparent by listing the main perspectives we’ve considered on a question. We also make regular use of structured qualitative evaluations, such as our framework.
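As a rough sketch of what this kind of model combination can look like in practice (the perspectives, estimates, and weights below are invented for illustration, not taken from our research), one can average the answers from several perspectives, giving more weight to those judged more robust:

```python
# Hypothetical cluster-thinking combination: several perspectives on the same
# question, each with an estimate and a robustness weight (all numbers invented).
perspectives = [
    ("explicit cost-effectiveness model", 100.0, 0.2),  # high estimate, low robustness
    ("expert common sense",                20.0, 0.5),
    ("track record of similar projects",   30.0, 0.3),
]

combined = sum(est * w for _, est, w in perspectives) / sum(w for _, _, w in perspectives)
print(combined)  # 39.0: pulled well below the single speculative model's answer
```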

We seek robustly good paths

Our aim is to make good decisions. Since the future is unpredictable and full of unknown unknowns, and we’re uncertain about many things, we seek actions that will turn out to be good under many future scenarios.

Avoiding bias

We’re very aware of the potential for bias in our work, which often relies on difficult judgement calls, and have surveyed the literature on biases in career decisions. To avoid bias, we aim to make our research highly transparent, so that bias is easier to spot. We also aim to state our initial position, so that readers can see the direction in which we’re most likely to be biased, and write about why we might be wrong.

Seeking feedback

We see all of our work as in progress, and aim to improve it by continually seeking feedback through several channels:

In the future, we intend to carry out internal and external research evaluations.

We aim to make our substantial pieces of research easy to critique by:


undefined @ 2014-09-22T10:24 (+5)

When we first encounter a question, our initial aim is normally to work out: (i) who are the relevant experts? (ii) what would they say about this question?

I think this is a valuable heuristic, but it gets stronger if you also consider the degree of expertise, and let that determine how much weight to put on it. The more the question is of the same type they routinely answer, and the better the feedback mechanisms sharpening their judgement, the stronger we should expect their expertise to be.

For some questions we have very good experts. If I've been hurt by someone else's action, I would trust the judgement of a lawyer about whether I have a good case for winning damages. If I want to buy a light for my bike to see by at night, I'll listen to the opinions of people who cycle at night rather than attempt a first-principles calculation of how much light I need it to produce to see a certain distance.

Some new questions, though, don't fall clearly into any existing expertise, and the best you can do is find someone who knows about something similar. I'd still prefer this over the opinion of someone chosen randomly, but it should get much less weight, and may not be worth seeking out. In particular, it becomes much easier for you to become more of an expert in the question than the sort-of-expert you found.

undefined @ 2014-09-23T14:46 (+3)

I think this is especially true for AI safety. Sometimes people will cite prominent computer scientists' lack of concern for AI safety as evidence that it is an unfounded concern. However, computer scientists typically seem to answer questions about AI progress rather than AI safety, and these questions seem categorically different, so I'm hesitant to give serious weight to their opinions on this topic. Not to mention the biases we can expect from AI researchers here, e.g. their incentives to be optimistic about their own field.

undefined @ 2014-09-22T12:24 (+4)

Excellent post. One minor question: what if one or two considerations actually do outweigh all others?

I take it that hedgehogs (as opposed to foxes) are biased in the sense that they are prone to focus on a single argument or a single piece of evidence even when other arguments or pieces of evidence should be considered. That seems to me to be a very common mistake. But in cases where a single argument or piece of evidence is so overwhelming that the others become unimportant, it seems one actually should rely only on that single argument or piece of evidence.

undefined @ 2014-09-22T15:07 (+2)

I think the right way is to weight each argument by its robustness and the significance of its conclusions, i.e.: if you have a strong argument that an action, A, would be very bad, then you shouldn't do A. If you have a speculative argument that A would be very bad, then you probably shouldn't do A. If you have a speculative argument that A would be a little bit bad, then that doesn't mean much.
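One way to make that weighting concrete (my own reading, with invented numbers): treat each argument's weight as roughly the chance it's right times the size of the stakes it points to:

```python
# Illustrative only: weight = probability the argument is right x magnitude of harm.
arguments = [
    ("strong argument A is very bad",        0.9, 100),
    ("speculative argument A is very bad",   0.2, 100),
    ("speculative argument A is a bit bad",  0.2,   2),
]
for name, p_right, harm in arguments:
    print(name, "->", p_right * harm)  # 90.0, 20.0, 0.4: only the last is negligible
```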

undefined @ 2014-09-25T21:21 (+2)

It seems like quite a few people have downvoted this post. I'd be curious to know why to avoid posting something similar next time.

undefined @ 2014-09-27T19:00 (+1)

I don't perceive a need to be frugal with upvotes. I was also surprised this article didn't get more upvotes, because I believe it covers a very important issue. That said, having read it, I didn't come away with much new information. Maybe others feel the same: as users of this forum, we're already familiar with 80,000 Hours' methodology, so it can feel like a rehash.

I've upvoted the article so it will get more visibility, because more important than what's written in it is attracting the critical feedback that 80,000 Hours is seeking.

undefined @ 2014-10-08T15:29 (+1)

Yeah, I'd say the main factor in lack of upvotes was the lack of new insight or substantive points to (dis)agree with.

undefined @ 2014-09-27T09:02 (+1)

When I hover over the 3 upvotes in the corner by the title, it says "100% positive" - which suggests people haven't downvoted it, it's just that not many people have upvoted it? But maybe I'm reading that wrong.

I thought it was a good and useful post, and I don't see any reason why people would downvote it - but I'd also be interested to hear why, if there were people who did.

undefined @ 2014-09-27T19:12 (+1)

I believe selecting between cause areas is something this epistemology may be insufficient for, and it may need tweaking to work better. That's not because I think the methodology is flawed in principle. These methods work by relying on the work of others who know what they're doing, which makes sense.

However, there seem to be few experts to ask for advice on selecting cause areas. That's a peculiar problem I hadn't encountered in any form before effective altruism posed it. I imagine there's not as much expert common sense, scientific literature, or experience to learn from here. I imagine the United Nations, and the governments of wealthy nations, have departments dedicated to answering these questions. Additionally, I thought of the Copenhagen Consensus. The CEA is in touch with the Copenhagen Consensus, correct?