AI can solve all EA problems, so why keep focusing on them?

By Cody Albert @ 2025-05-03T21:51 (+8)

If you believe AGI (or superintelligence) will eventually be created, you should also acknowledge its enormous potential for addressing EA problems like global health and development, pandemics, animal welfare, and cause-prioritization decision-making.


If you don't believe superintelligence is possible, you can continue pursuing other EA problems. But if you do believe superintelligence is coming, why spend time and money on issues that will likely all be solved by AI, assuming superintelligence arrives aligned with human values?

I've identified a few potential reasons why people continue to devote their time and money to non-AI-related EA causes:


It's widely believed (at least in the AI safety community) that the development of sufficiently advanced AI could lead to major catastrophes, a global totalitarian regime, or human extinction. All of these seem to me more pressing and critical than any of the above reasons for focusing on other EA issues. I post this because I'd like to see more time and money allocated to AI safety, particularly to solving the alignment problem through automated AI labor (I don't believe human labor can solve it anytime soon, but that's beyond the scope of this post).


So, do any of the reasons presented above apply to you? Or do you have different reasons for not focusing on AI risks?


tobycrisford 🔸 @ 2025-05-04T08:11 (+7)

Even if you're certain that AGI is only 5 years away and will eradicate all diseases, a lot of children are going to die of malaria in those 5 years. Donating to malaria charities could reduce that number.

simon @ 2025-05-04T14:36 (+6)

Personally, I just don't believe that the marginal dollar or hour I spend on anything to do with AGI has any expected impact on it (in particular, not on its capability to solve other problems down the line).
Meanwhile, I can spend money or time productively on many other causes (e.g., global health).

Cody Albert @ 2025-05-10T03:07 (+1)

That's fair, and I don't have a good answer for what the average effective altruist can do to help ensure AI alignment. But there are definitely concrete approaches, like career changes into AI policy, that can help address this.

simon @ 2025-05-10T13:49 (+1)

Clearly, people will have a wide range of views on how much impact, e.g., a career change into such fields can have, even when considering a specific (non-average) person. This probably answers a good portion of your question about why people focus on other areas.

John Huang @ 2025-05-03T23:05 (+6)

The reason is that AI is at best a tool that could be used for good or bad, or at worst intrinsically misaligned against any human interests.

Or, alternatively, AI just won't solve any of our problems, because AI will be a mere extension of the power of states and corporations. Whether moral problems are solved by AI is then up to the whim of corporate or state interests. AI is, just as well, being used right now to conquer: the obvious military application has been explored in science fiction for decades, and AI reduces the cost of deploying literal killer robots.

Obvious examples: look at how the profit motive is transforming OpenAI right now, or at how AI is "solving" the problem of nefarious actors' ability to create fake news and faked media.

There is no guarantee that our glorious AI overlords are going to be effective altruists, or Buddhists, or Kantians, or utilitarians, or whatever else. As far as I'm aware, AI may just as likely become a raging kill-all-humans fascist.

Yarrow @ 2025-05-04T17:01 (+5)

A lot of people within the effective altruist movement seem to basically agree with you. For example, Will MacAskill, one of the founders of the effective altruist movement, has recently said he’s only going to focus on artificial general intelligence (AGI) from now on. The effective altruist organization 80,000 Hours has said more or less the same — their main focus is going to be AGI. For many others in the EA movement, AGI is their top priority and the only thing they focus on.

So, basically, you are making an argument for which there is already a lot of agreement in EA circles.

As you pointed out, uncertainty about the timeline of AGI, and doubts about very near-term AGI in particular, are among the main reasons to focus on global poverty, animal welfare, or other cause areas not related to AGI.

There is no consensus on when AGI will happen.

A 2023 survey of AI experts found they believed there is a 50% chance of AI and AI-powered robots being able to automate all human jobs by 2116. (Edited on 2025-05-05 at 06:16 UTC: I should have mentioned the same study also asked the experts when they think AI will be able to do all tasks that a human can do. The aggregated prediction was a 50% chance by 2047. We don't know for sure why they gave such different predictions for these two similar questions.)

In 2022, a group of 31 superforecasters predicted a 50% chance of AGI by 2081.

My personal belief is that we have no idea how to create AGI and we have no idea when we’ll figure out how to create it. In addition to the expert and superforecaster predictions I just mentioned, I recently wrote a rapid fire list of reasons I think predictions of AGI within 5 years are extremely dubious.

NobodyInteresting @ 2025-05-04T15:30 (+3)

Take agriculture as an example: AGI will never be of much use there. Yes, it can replace agronomists, but the major plays in agriculture are related to human power and investment.

Can AGI be trained to pick food instead of people? Sure, but at what cost, and are we at that level of dexterity? Some crops require immense knowledge of how best to be picked, particularly artichokes, tomatoes, and peppers. Okra is literally one of the most hit-or-miss crops, because the optimal age of the shoots is 4 days: 5 days is too old, 3 days is too young.

And even if we could completely change the workforce in agriculture, AI won't change policies. Current low yields are due to people using low-grade equipment, fertilizer, and seed.

Also, the parcelization of fields is not solvable by AI, but by policy.

Tell me how AGI can help us in agriculture; I want to know your viewpoints.