To what extent is AI safety work trying to get AI to reliably and safely do what the user asks vs. do what is best in some ultimate sense?

By Jordan Arel @ 2025-05-23T21:09 (+12)

Trying to get a rough estimate for some related research I’m doing.

Specifically, I’m wondering if anyone could give a rough percentage of current AI safety work primarily targeted toward each of these two buckets. Some safety work may overlap both, and some may fit neither, so the percentages need not add up to 100%, and I’m sure the classification is often unclear; I’m just wondering what a rough estimate would be of how much AI safety work is geared specifically toward each of these two separate goals.

I think one way to conceive of this question is: how much AI safety work is primarily focused on intent alignment or control plus avoiding other catastrophic outcomes, vs. how much is focused on ultimately aligning AI to the best version of humanity, something like our coherent extrapolated volition or whatever we might converge on after “the long reflection”?

The intuition here is that getting AI to reliably do what the user wants, even while avoiding catastrophic outcomes, might nonetheless lead down a mediocre trajectory; whereas work specifically targeted at the latter goal would be most useful for reaching the best possible world we could achieve.

Of course, the trade-off on the other end is that an AI that tries to do the best thing possible may not be commercially viable, and in fact would probably be pretty annoying to users, since it might ignore or redirect 90%+ of requests toward something more likely to result in an ultimately good outcome. Furthermore, this latter goal is probably far less pragmatic for preventing extinction in a timely fashion.

I’d greatly appreciate rough estimates or any thoughts!