Reframing Unsafe AI: Evidence of Present-Day Risk

By keivn @ 2025-10-17T16:24 (–11)

Summary

In AI research, “safety” usually means preventing unintended behavior or catastrophic failure — a technical challenge. But what if the real danger isn’t a rogue superintelligence, but perfect obedience to the logic it inherited — hierarchy, domination, extraction? This essay argues that “unsafe AI” already exists, because the systems we call “aligned” are aligned to us — and we ourselves are unsafe.

I present to you: historical precedent -> present-day artifacts -> future implications of current unsafe AI.

 

A Brief History Of Domination

The righteousness of science

The rationalized belief that to subjugate others was not only permissible, but natural, right, and inevitable.

In principle:

The point where the theory of control and domination, in service of extraction and expansion, was first rationalized.

 

Practice what you preach

The exercise of total subjugation as an acceptable logic, “dominion” as a cultural norm, and enforced cultural supremacy.

How that intellect and morality were deployed:

These are not practices of a bygone era; they are still very much alive in our culture today.

 

Trickle-Down Economics [of Culture]

Continued oppressive systems

Present day human rights issues

The glorification of violence

From memes to movies

By any means necessary

Politicians and corporations act in their own best interest

 

These recurring narratives are what we now celebrate, excuse, treat as normal, or even find entertaining.

 

Inherited Through DNA

Copy and paste

Mass reproduction and dissemination of these logics via the internet encodes them as the "default reality."

Food for thought [for AI]

Encoded in internet corpora, these logics inevitably become a core part of what intelligent systems learn.

 

We've designed AI in the mirror image of our cultural pathologies.
 

The Good, The Bad, The Ugly

It's not hard to imagine how AI trained on this inherited dominant culture could play out:

Tumorous growth

Hardwired for endless expansion

Around, over, or through the wall

Justified conquest

Puppet master

Automated deception

New world order

Machines overtake humans in hierarchy

 

We don't need to wait for AGI to emerge before we see "scheming," "deception," or "misalignment": today's models have already demonstrated these behaviors (in controlled environments, for now).

 

History Repeats Itself, Maybe

AI has been trained on our own cultural logics, the very patterns we now define as "AI risk," and our gusto to build and deploy these "intelligent" systems at blinding speed automates, accelerates, and legitimizes them.

What we keep framing as a technical challenge is really a socio-cultural one that we still haven’t solved.

If history can tell us anything, it's that we won't slow down or stop "progress"; we'll apologize later, retrofit fixes after harm is done, and excuse it all as "iteration."

In light of this, however, the dominant cultural logics are not the only logics we have inherited, which means there are several other possible safety/risk relationships between humans and AI…