A guided cause prioritisation flowchart

By JackM @ 2022-01-03T20:32 (+44)

Overview

I have previously written about the importance of making it as easy as possible for EAs to make a fully-informed decision on their preferred cause area, given the potentially astronomical differences in value between cause areas. Whilst one piece of feedback was sceptical of the claim that these vast expected value differences exist, feedback generally agreed that making cause prioritisation easier, for example by highlighting the considerations that have the biggest effect on choice of preferred cause area, could be high impact.

In light of this I have decided to progress this idea by putting together a first-draft cause prioritisation flowchart designed to guide people through the process of cause prioritisation. The flowchart would ideally be accompanied by guidance to assist in making informed decisions throughout. I haven’t finalised this guidance, although I present a sample for one particular decision in the flowchart. At this point I am attempting a proof of concept rather than delivering a final product, and so would welcome feedback on both the idea and this preliminary attempt.

Introducing the flowchart

In the following section you can see my draft flowchart. The flowchart asks individuals ethical and empirical questions that I view as most important in determining which cause area they should focus on. Only cause areas that are accepted as important by a non-negligible proportion of the EA community are included in the flowchart. In addition, some foundational assumptions common to EA are made, including a consequentialist view of ethics in which wellbeing is what has intrinsic value.

A key component of a final flowchart would be accompanying guidance to help individuals make informed decisions as they progress through it. I have not put together all of this guidance at this stage; however, as part of the proof of concept I have attempted to illustrate what it might look like for the “Can we influence the future?” question. My vision for a final flowchart is that you could click on each box to be taken to easily-digestible reading enabling an informed choice on how to proceed.
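To make the “clickable box plus guidance” idea concrete, here is a minimal sketch in Python of how such a flowchart could be represented. The node names, questions, and guidance strings below are hypothetical illustrations, not taken from the actual draft chart.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One box in the flowchart, pairing a question with its guidance."""
    question: str                 # text shown in the flowchart box
    guidance: str = ""            # easily-digestible reading shown on click
    yes: Optional["Node"] = None  # next box if the user answers "yes"
    no: Optional["Node"] = None   # next box if the user answers "no"

    def is_leaf(self) -> bool:
        # leaves represent recommended cause areas
        return self.yes is None and self.no is None

def walk(node: Node, answers: dict) -> Node:
    """Follow the user's yes/no answers until a cause-area leaf is reached."""
    while not node.is_leaf():
        node = node.yes if answers[node.question] else node.no
    return node

# Tiny illustrative tree (structure invented for this sketch).
tree = Node(
    "Can we influence the future?",
    guidance="See the sample guidance on persistent states.",
    yes=Node("Longtermist cause areas"),
    no=Node("Near-term cause areas"),
)

result = walk(tree, {"Can we influence the future?": True})
```

A web version could render each `Node`'s `guidance` in a pop-up when the box is clicked, which is one way to lower the cost of actually engaging with the guidance.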

In my view, some strengths of this flowchart compared to the main previous attempt include:

I would like to note that I don’t see my flowchart as clearly better than the previous attempt and I certainly don’t see it as final or even close to final. I think it is likely that substantial improvements can be made on my attempt.

The (draft) flowchart

Sample Guidance

Can we influence the future?

Is it possible to improve the future (beyond 100 years) in expectation?

Key reading:

Whilst it might at first seem unrealistic that we can positively influence the far future (say more than 100 years from now) in expectation, many EAs believe that there are a variety of ways in which we can do so.

One class of interventions aiming to influence the far future in a positive way involves trying to ensure we stay in, or end up in, “persistent states” of the world that are better than others. A persistent state is a state of the world which is likely to last for a very long time (even millions of years) if entered. If we can do things to increase the probability that we end up in a better persistent state rather than a worse one, we will have influenced the far future for the better in expectation, on account of how long the world is likely to stay in that better state.

There are a number of real world examples of attempting to steer into better rather than worse persistent states:

Outside of the class of interventions that involve steering between persistent states, one can look to speed up progress to improve every time period in the future:

There are also a number of “meta” options to improve the far future:

There are therefore a number of potential ways to impact the far future that have been put forward by EAs. If you think any of the above have serious potential to impact the far future in a positive way, you should answer “Yes” at this point.

Next steps

At this point I would welcome feedback on:

I would also be interested to hear if anyone else would be interested in collaborating on such a flowchart given that there is more work to be done. I should say however that I may abandon this project if feedback is lukewarm/negative and it doesn’t look like pursuing it would be high impact.


alexrjl @ 2022-01-03T21:47 (+16)

" future people are as morally important as those alive now" seems like a very high bar for longtermism. If e.g. you think future people are 0.1% as important, but there's no time discount for when (as long as they don't exist yet), this doesn't prevent you from concluding the future is hugely important. Similarly for some exponential discounts (though they need to be extremely small).

JackM @ 2022-01-04T15:14 (+6)

Absolutely agree with that. 

My idea of a guided flowchart is that nuances like this would be explained in the accompanying guidance, but not necessarily alluded to in the flowchart itself, which is supposed to stay fairly high-level and simple. It may be however that that box can be reworded to something like “Are future people (even in millions of years) of non-negligible moral worth?”

Ideally someone would read the guidance for each box to ensure they are progressing through the flowchart correctly.

alexrjl @ 2022-01-04T22:06 (+6)

I think if you present a simplified/summarised thing along with more detailed guidance you should assume that almost nobody will read the guidance.

JackM @ 2022-01-04T22:16 (+3)

Almost nobody? I'd imagine at least some people are interested in making an informed decision on cause area and would be interested in learning.

You might be right though. I'm not getting a huge amount of positive reception on this post (to put it lightly) so it may be that such a guided flowchart is a doomed enterprise.

EDIT: you could literally make it that you click on a box and guidance pops up so it could theoretically be very easy to engage with it.

TianyiQ @ 2022-01-04T02:39 (+11)

Interesting idea, thanks for doing this! I agree it's good to have more approachable cause prioritization models, but there're also associated risks to be careful about:

Also, I think the decision-tree-style framework used here has some inherent drawbacks:

  1. It's unclear what "yes" and "no" means.
    • e.g. What does it mean to agree that "humans have special status"? This can be referring to many different positions (see below for examples) which probably lead to vastly different conclusions.
      • a. humans have two times higher moral weight than non-humans
      • b. all animals are morally weighted by their neuron count (or some non-linear function of neuron count)
      • c. human utility always trumps non-human utility
    • for another example, see alexrjl's comment.
  2. Yes-or-no answers usually don't serve as necessary and sufficient conditions.
    • e.g. I think "most influential time in future" is neither necessary nor sufficient for prioritizing "investing for the future".
    • e.g. I don't think the combined condition "suffering-focused OR adding people is neutral OR future pessimism" serves as anything close to a necessary condition to prioritizing "improving quality of future".

A more powerful framework than decision trees might be favored, though I'm not sure what a better alternative would be. One might want to look at ML models for candidates, but one thing to note is that there's likely a tradeoff between expressiveness and interpretability.
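As one illustration of a framework more expressive than binary branching (a sketch of my own, not something proposed in the comment): replace yes/no answers with graded credences and score each cause area as a weighted sum. All consideration names, weights, and numbers below are invented for illustration.

```python
# Graded credences instead of binary yes/no answers (illustrative numbers).
credences = {
    "can_influence_far_future": 0.8,
    "nonhumans_matter_equally": 0.3,
}

# How much each (hypothetical) consideration bears on each cause area.
weights = {
    "existential risk": {"can_influence_far_future": 1.0, "nonhumans_matter_equally": 0.1},
    "animal welfare":   {"can_influence_far_future": 0.0, "nonhumans_matter_equally": 1.0},
}

def score(cause: str) -> float:
    """Weighted sum of the user's credences for one cause area."""
    return sum(weights[cause][k] * credences[k] for k in credences)

# Rank cause areas by score rather than routing through a single path.
ranked = sorted(weights, key=score, reverse=True)
```

Unlike a decision tree, a small change in one credence shifts the scores continuously instead of flipping the entire downstream path, though this gives up some of the tree's easy interpretability.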

And lastly:

In addition, some foundational assumptions common to EA are made, including a consequentialist view of ethics in which wellbeing is what has intrinsic value.

I think there have been some discussions going on about EA decoupling from consequentialism, which I consider worthwhile. Might be good to include non-consequentialist considerations too.

JackM @ 2022-01-04T15:30 (+5)

Thanks for this, you raise a number of useful points. 

A widely used model that is not frequently updated could do a lot of damage by spreading outdated views. Unlike large collections of articles, a simple model in a graphic form can be spread really fast, and once it's spread out on the Internet it can't be taken back.

I guess this risk could be mitigated by ensuring the model is frequently updated and includes disclaimers. I think this risk is faced by many EA orgs, for example 80,000 Hours, but that doesn't stop them from publishing advice which they regularly update.

A model made by a few individuals or some central organisation may run the risk of deviating from the views of the majority of EAs; instead a more "democratic" way (not too sure what this means exactly) of making the model might be favored.

I like that idea and I certainly don't think my model is anywhere near final (it was just my preliminary attempt with no outside help!). There could be a process with engagement with prominent EAs to finalise a model.

Views in EA are really diverse, so one single model likely cannot capture all of them.

Also fair. However it seems that certain EA orgs such as 80,000 Hours do adopt certain views, naturally excluding other views (for which they have been criticised). Maybe it would make more sense for such a model to be owned by an org like 80,000 Hours which is open about their longtermist focus for example, rather than CEA which is supposed to represent EA as a whole.

e.g. What does it mean to agree that "humans have special status"? This can be refering to many different positions (see below for examples) which probably lead to vastly different conclusions.

As I said to alexrjl, my idea for a guided flowchart is that nuances like this would be explained in the accompanying guidance, but not necessarily alluded to in the flowchart itself, which is supposed to stay fairly high-level and simple.

Yes-or-no answers usually don't serve as necessary and sufficient conditions.

I don't think a flowchart can be 100% prescriptive and final; there are too many nuances to consider. I just want it to raise key considerations for EAs to consider. For example, I think it would be fine for an EA to end up at a certain point in the flowchart and then conclude that they should actually choose a different cause area because of some nuance the flowchart didn't consider. That's fine - but in my opinion it would still be good to have a systematic process that ensures EAs consider some really key considerations.

e.g. I think "most influential time in future" is neither necessary nor sufficient for prioritizing "investing for the future".

Feedback like this is useful and could lead to updating the flowchart itself. I have to say I'm not sure why the most influential time being in the future wouldn't imply investing for that time though - I'd be interested to hear your reasoning.

I think there have been some discussions going on about EA decoupling from consequentialism, which I consider worthwhile. Might be good to include non-consequentialist considerations too.

Fair point. As I said before, if an org like 80,000 Hours owned such a model, perhaps they wouldn't have to go beyond consequentialism. If CEA did, I suspect that they should.

 

TianyiQ @ 2022-01-06T11:37 (+1)

Thanks for the reply, your points make sense! There is certainly a problem of "degree" to each of the concerns I wrote about in the comment, so arguments both for and against should be taken into account. (To be clear, I wasn't raising my points to dismiss your approach; instead, they're things that I think need to be taken care of if we're to take such an approach.)

I have to say I'm not sure why the most influential time being in the future wouldn't imply investing for that time though - I'd be interested to hear your reasoning.

Caveat: I haven't spent much time thinking about this problem of investing vs direct work, so please don't take my views too seriously. I should have made this clear in my original comment, my bad.

My first consideration is that we need to distinguish between "this century is more important than any given century in the future" and "this century is more important than all centuries in the future combined". The latter argues strongly against investing for the future; but the former doesn't seem to, as by investing now (patient philanthropy, movement building, etc.) you can potentially benefit many centuries to come.

The second consideration is that there are many more factors than "how important this century is". The needs of the EA movement are one (and a particularly important consideration for movement building); personal fit is another, among others.

PeterSlattery @ 2022-01-05T05:57 (+4)

Quick thoughts:

Thanks for this work, I like your approach! It is visually appealing and easy to follow. It is helpful for me but a little incomplete as I'd like to change some parts. 

I think that it could be a good idea to treat this as a project, e.g., 'The EA priorities flowchart (template) project'. You could put the current template in an easily editable/accessible format (draw.io) and share it in occasional updates as you develop it.

IMHO, more people are likely to be receptive to the idea of working together to 'flow-chart how to prioritise' by building on your template than to following whatever prioritisation approach you develop/recommend in a specific flowchart.

While you can base each template on your/your team's opinions and attempts to prioritise, I think you should recommend that users/readers take it and build their own versions rather than just adopt your perspective. I'd also refer them to relevant resources.

Hope this helps!  

evelynciara @ 2022-03-14T03:33 (+3)

I'd be interested in an extended flowchart to prioritize among x-risks and s-risks, with questions like:

MichaelA @ 2022-01-09T10:45 (+3)

Thanks for making this! The idea, reasoning, and initial draft all seem promising/reasonable to me. 

Some quick thoughts:

evelynciara @ 2022-01-04T02:48 (+3)

Why is climate change the result of answering "no" to "We can become safe?" and "Small chance of success OK?"

JackM @ 2022-01-04T15:44 (+4)

Just to clarify this was just my first attempt with no outside review and it is far from final, so I'm open to the possibility that there are problems with the flowchart itself.

Also, as I have said to other commenters my idea of a guided flowchart is that nuances and explanations would be in the accompanying guidance, but not necessarily alluded to in the flowchart itself which is supposed to stay fairly high-level and simple.

On your specific question, my thinking was:

  • If we cannot become safe (achieve existential security) then we will hit an existential catastrophe eventually. In this case we can either focus on the near term or perhaps on the middle term. Focusing on the middle term (accepting we cannot reduce x-risk) could entail speeding up sustainable economic growth, so tackling climate change would be a good thing to do. Focusing on the near term would actually send you to things like global health, animal welfare etc., so now I'm thinking it's already clear that my flowchart is very incomplete even from my point of view, as you may need further questions after "We can become safe?".
  • If you're not willing to bet on small probabilities of success, I think that reducing x-risk is not for you, as there is a very small probability that our efforts will counterfactually avert an existential catastrophe. In this case it seems that tackling climate change is the next best longtermist option, as we can reliably reduce expected global warming, for example through green technology investment.

I guess my main point though is that this flowchart is far from final and there are certainly improvements that can be made! Also that accompanying guidance would be essential for such a flowchart.


 

Denis Drescher @ 2022-01-05T05:30 (+3)

Great chart! Another minor wording thing: I don’t know whether to interpret “Most influential time in future” as “[This is the] most influential time in future” or “The most influential time is still to come.” From the context, I think it’s the second, but my first reading was the first. :-)

MichaelA @ 2022-01-09T11:23 (+2)

I think "Speeding up sustainable progress" is presented here substantially too positively, or more specifically that some very important counterpoints aren't raised but should be. More discussion can be found at https://forum.effectivealtruism.org/tag/speeding-up-development . And I think (from memory) the Greaves & MacAskill paper cited either doesn't mention or argues against a focus on speeding up development?

Tristan Williams @ 2023-04-25T01:26 (+1)

Any update here? Did you refine the flow chart further or is it still the same as above?

Jack Malde @ 2023-04-25T02:13 (+2)

No update. Interest seemed to be somewhat limited.