The Pending Disaster Framing as it Relates to AI Risk

By Chris Leong @ 2024-02-25T15:47 (+8)

This post tries to explain one particular frame that plays a significant role in why I prioritise AI safety as a cause area. I suspect many other people who are focused on AI Safety share this frame as well, but I can't recall the last time I've heard it articulated.

Here are two possible questions we can ask:

1) How can I have the greatest impact?

2) What does the trajectory of the world look like by default?

Counter-considerations:

Notes:


CAISID @ 2024-02-25T17:09 (+1)

This is an interesting post. What are your thoughts on the relationship between AI and poverty? Does the fact that AI has a significant impact on poverty levels, and vice versa, influence your opinion in any way? I also wonder if you have the time to expand on why you think AI would solve or improve global poverty, considering it currently has an adverse effect? Not a criticism or counter-point, just looking to understand your standpoint better :)

Chris Leong @ 2024-02-25T21:38 (+2)

a) I suspect that AI capable of replacing human labour will create such abundance that it will eliminate poverty (assuming we don't then allow the human population to increase to its maximum carrying capacity).
b) The connection the other way around is less interesting. Obviously, AI requires capital, but once AI is able to self-reproduce, the amount of capital required to kickstart economic development becomes minimal.
c) "I also wonder if you have the time to expand on why you think AI would solve or improve global poverty, considering it currently has an adverse effect?" - How is it having an adverse effect?