Standing Algebra Σᴿ: A Solution to AI Violating Human Autonomy

By Jon Rademacher @ 2026-03-23T21:40 (–3)

This is a linkpost to https://zenodo.org/records/19186551

Hi everyone — this is a working paper I’m releasing that introduces Standing Algebra (Σᴿ), a formal system I’ve been developing to express autonomy‑preserving update rules in multi‑agent systems.

The motivating question is:

How do we constrain system‑level updates so that no agent’s standing is reduced, no unfair asymmetries are introduced, and updates remain structurally safe by construction?

To explore that, the framework treats an update F : U → U as operating on agents, each of which carries three numerical invariants, including a standing value σ(i) and a degree deg(i).

For each agent i, the algebra examines the induced changes

Δσᵢ = σ(F(i)) − σ(i),  Δdegᵢ = deg(F(i)) − deg(i)
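The induced changes above can be sketched in a few lines. This is only an illustrative representation: the `Agent` dataclass and the choice of fields are my assumptions, not the paper's actual formalization.

```python
# Hypothetical sketch: each agent carries a standing value sigma(i) and a
# degree deg(i); an update F maps an agent's state to a new state.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Agent:
    sigma: float  # standing value sigma(i)
    deg: int      # degree deg(i)

def induced_changes(agents, F):
    """Return (delta_sigma_i, delta_deg_i) for each agent i under update F."""
    deltas = []
    for a in agents:
        fa = F(a)
        deltas.append((fa.sigma - a.sigma, fa.deg - a.deg))
    return deltas

# Example update: raises every agent's standing by 1, leaves degree alone.
agents = [Agent(sigma=2.0, deg=3), Agent(sigma=5.0, deg=1)]
F = lambda a: replace(a, sigma=a.sigma + 1.0)
print(induced_changes(agents, F))  # [(1.0, 0), (1.0, 0)]
```

The structural constraints would then be predicates over these per-agent deltas.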

and applies a set of structural constraints to them.

When an update F violates any of these constraints, it is “repaired” into a Legitimate Envelope L_F.

Informally, this turns arbitrary updates into the closest autonomy‑preserving version consistent with the axioms.
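To make the repair idea concrete, here is a minimal sketch assuming only the one constraint the post states explicitly: no agent's standing may be reduced. The clamp-based construction and the `Agent` representation are my illustrative assumptions; the paper's actual envelope covers the full set of axioms.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Agent:
    sigma: float  # standing value sigma(i)
    deg: int      # degree deg(i)

def legitimate_envelope(F):
    """Repair F into the closest update (under this toy constraint) that
    never reduces any agent's standing."""
    def L_F(a):
        fa = F(a)
        # Keep F's result, except standing may not drop below its old value.
        return replace(fa, sigma=max(fa.sigma, a.sigma))
    return L_F

# A violating update: cuts everyone's standing by 1.
bad = lambda a: replace(a, sigma=a.sigma - 1.0)
safe = legitimate_envelope(bad)
print(safe(Agent(sigma=2.0, deg=3)))  # Agent(sigma=2.0, deg=3): repaired
```

Updates that already satisfy the constraint pass through the envelope unchanged, which matches the "closest autonomy-preserving version" reading.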

The envelopes (modulo increment signatures) form a join‑semilattice under classwise OR, yielding an algebra of safe update policies.
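The join-semilattice structure is easy to exhibit with boolean signatures. Representing a policy's increment signature as a tuple of booleans (one per class) is my assumption for illustration; the mapping from envelopes to such signatures is the paper's, not shown here.

```python
# A join-semilattice under classwise OR: each position in a signature says
# whether a given class of updates is permitted.
def join(p, q):
    """Classwise OR of two policy signatures of equal length."""
    return tuple(a or b for a, b in zip(p, q))

def leq(p, q):
    """Induced partial order: p <= q iff join(p, q) == q."""
    return join(p, q) == q

p = (True, False, False)
q = (False, True, False)
assert join(p, p) == p                             # idempotent
assert join(p, q) == join(q, p)                    # commutative
assert leq(p, join(p, q)) and leq(q, join(p, q))   # join is an upper bound
```

Idempotence, commutativity, and associativity of classwise OR are exactly the join-semilattice laws, so joining two safe policies yields the least policy permitting everything either one permits.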

The full details are in the preprint linked above.

I’d appreciate feedback on any part of it.

Thanks for taking a look — I’m especially interested in critique, places it’s underspecified, or directions I should explore next.