A perspective on the danger/hypocrisy of prioritizing a single long-term risk and ignoring every other risk

By yz @ 2025-05-28T05:29 (+5)

(As I started drafting this, I realized there is a website with a whole series of related posts, so I want to mention it here as well: https://reflectivealtruism.com/category/my-papers/mistakes-in-moral-mathematics/. I have debated with myself whether I should even write this post, but figured it may be net positive to overcome my fear of posting and express this view for people to at least reflect on.)

Ever since I became familiar with the concept of Effective Altruism, I have learned that there are many different sub-groups and sub-beliefs. The movement probably started with designing effective donation strategies for charities in general, then moved to ranking cause areas (where I started to feel uneasy about the lack of a full picture, among other things, but I will probably elaborate on this in a later post), then to animal suffering, then to a focus on X-risks such as AI takeover.

Cause prioritization introduction/context/reminder

In cause prioritization, I learnt about four dimensions (source: the Intro to EA handbook, though it is interesting that some people in EA do not seem to know about this): scale, neglectedness, tractability (solvability), and personal fit.

And overall, for the first three, after some mathematical formulas, it comes down to marginal value: per person (or, I would argue, per unit of energy or some sort of time unit), or per dollar.
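(For reference, the decomposition I have in mind is the usual scale × tractability × neglectedness factorization of marginal value per dollar, as popularized by 80,000 Hours; I am assuming the handbook presents essentially the same thing:)

$$\frac{\text{good done}}{\text{extra dollar}} \;=\; \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{scale}} \;\times\; \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}} \;\times\; \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}$$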

(Judging from the last one, this also seems to be intended as a guide for individuals to find the best cause areas to donate to. However, from my observation of online sentiment, the practice of cause prioritization has somewhat evolved into a process of convincing others, so that everyone needs to follow whichever cause area wins the argument.)

We can also notice that urgency is not mentioned explicitly.

Long-term risks combined with cause prioritization

When X-risks are combined with cause prioritization, I hear this a lot: X-risks are going to affect all of us, therefore they are the most important thing to work on, and everything else is a distraction.

I find this to be rationalizing/justifying a subjective preference. To be clear, I do think AI safety is worth investing in, especially as AI agents gain more physical control while having no natural/biological constraints on their goals; the same applies to other realistic X-risks. But, in my opinion, no single one of them should ever be the only thing to invest in.

Here is a relatively simple case:

|          | 2025                  | 2028                  | 2040                 |
|----------|-----------------------|-----------------------|----------------------|
| person A |                       | 50% dying from risk 2 | 2% dying from risk 3 |
| person B | 20% dying from risk 1 |                       | 2% dying from risk 3 |
| person C | 30% dying from risk 1 | 50% dying from risk 2 | 2% dying from risk 3 |
| person D |                       |                       | 2% dying from risk 3 |

I assigned probabilities in an attempt to be more rigorous, but these are hypothetical scenarios (and the specific numbers do matter). It is not hard to see that, without considering urgency in cause prioritization, one can use simple logic to conclude that "Risk 1 may affect two people, risk 2 may affect two people, and risk 3 will affect everyone." However, by 2025, two people may already have died if we do not treat risk 1. By 2028, three people may already have died. Only person D can actually be more affected by the long-term risk 3.
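To make the urgency point a bit more concrete, here is a minimal sketch in Python using the hypothetical numbers from the table, with one extra assumption of mine (not in the original table): the per-person risks are independent, so survival probabilities multiply. It counts expected deaths that have already occurred by each year if nothing is done.

```python
# Minimal sketch of the example above, using the hypothetical numbers
# from the table. Assumption (mine): per-person risks are independent,
# so survival probabilities multiply. "risk 3" is the long-term risk.

risks = {
    "A": {2028: 0.50, 2040: 0.02},                # risk 2, then risk 3
    "B": {2025: 0.20, 2040: 0.02},                # risk 1, then risk 3
    "C": {2025: 0.30, 2028: 0.50, 2040: 0.02},    # risks 1, 2, and 3
    "D": {2040: 0.02},                            # only risk 3
}

def expected_deaths_by(year: int) -> float:
    """Expected number of people who have died by `year` if nothing is done."""
    total = 0.0
    for hazards in risks.values():
        p_alive = 1.0
        for t, p in hazards.items():
            if t <= year:
                p_alive *= (1.0 - p)
        total += 1.0 - p_alive
    return total

for year in (2025, 2028, 2040):
    print(year, round(expected_deaths_by(year), 2))
# 2025 -> 0.5   (risk 1 alone: persons B and C)
# 2028 -> 1.35  (risks 1 and 2: persons A, B, and C)
# 2040 -> 1.4   (risk 3 adds only ~0.05 expected deaths on top of that)
```

Under these made-up numbers, the urgent risks 1 and 2 account for roughly 1.35 expected deaths by 2028, while the long-term risk 3 adds only about 0.05 more by 2040; a prioritization rule with no notion of when a risk bites cannot see this.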

What are some real-life examples of risk 1 and risk 2? Mental health, physical health, violent crime, human trafficking, etc. What are some groups that may have very low exposure to these risks? Those with higher socio-economic status, younger people, and generally those who are in power. These may also be the same people who have influence on governments, enough funds to decide where donations go, etc. This worries me the most: risks 1 and 2 may never make it onto the list of neglected areas.

Purposes/points of the post

  1. I am 70% in favor of adding urgency or diversity to the framework of cause prioritization.
  2. I am 90% in favor of promoting the concept of a donation portfolio at the aggregate level, as opposed to convincing everyone to work on one single area.
  3. A call to reflect on the over-simplification involved in reducing humans from a group of "high-dimensional" individuals to a single concept/dot. This also relates to other adjacent values in this community connected to utilitarianism, but I will not go into details in this post, to stay focused.
  4. This is a more complicated point. Maybe we need to recognize that sometimes we are just using numbers to conveniently rationalize our own subjective choices,[1] especially when these calculations or numbers support something that will affect us the most. Sometimes we are limited by the environment we are exposed to and do not really have a full understanding of the risks. It might still be fine to do this, as altruism in humans is limited by nature, as long as (1) we are aware of our own limitations, reflect on this, and keep it in mind (which I believe will usually translate into some actions), and (2) we do not convince everyone to work on the same single thing and claim all other risks are distractions.

 

(Wow, this is longer than I expected.)

  1. ^

    In fact, in consulting (strategy, management, economics, all sorts), it is a known practice for the decision maker to hold a belief, or the team to have a goal, and for the analytics/economics consultants to back out formulas and assumptions that match that belief or support that goal. Some of these formulas or assumptions are even good enough to be tested in court. I would not say everyone does this, but it is likely that we are sometimes prone to it, either explicitly or implicitly.