Should We Treat Open-Source AI Like Digital Firearms? — A Draft Declaration on the Ethical Limits of Frontier AI Models

By DongHun Lee @ 2025-05-23T08:58

By Lee DongHun

Founder of Ma-eum Company, Seoul, South Korea

May 2025


Introduction: The Digital Arms Race We Might Be Ignoring

In April 2024, Meta released its Llama 3 model, declaring it open source.

Within days, developers, researchers, and hobbyists worldwide gained access to a system with the capacity for reasoning, persuasion, and code generation — all without institutional oversight.

This development was hailed by some as democratization.

But to me, it felt more like handing out programmable weapons.


We’ve long debated the ethics of autonomous drones or facial recognition tools. But what about a language model that can reason, deceive, and psychologically manipulate — distributed freely as if it were a harmless calculator?

This is not an exaggeration. It’s a parallel to firearms policy, but with far less control, less awareness, and much greater scalability.

Core Problem: Everyone Armed, No One Accountable

Unlike firearms, which can be physically tracked, licensed, and revoked, AI models — once released — are impossible to retract.

They replicate. They mutate. They embed.

If one developer tweaks a model to manipulate political discourse, train disinformation bots, or design malware, who is responsible?

The developer? The hosting platform? The original model creator?

Current open-source culture says: no one.

But at scale, this becomes digital anarchy, not a frontier of freedom.

A Modest Proposal: Declaration on the Ethical Limits of Open-Sourcing Frontier AI

With that in mind, I drafted a document titled:

“Declaration on the Ethical Limits of Open-Sourcing Advanced AI Models”

— a call to establish shared ethical boundaries for future model distribution.

Read the full declaration here: Notion

Declaration Summary: Five Core Ethical Principles

  1. Responsibility Before Access
    No model should be released without a clear framework for accountability and traceability.
  2. Digital Containment
    Irreversibility must be acknowledged: open-sourcing is not ethically neutral once harm becomes plausible.
  3. Structural Reciprocity
    Frontier capabilities must come with equivalent safeguards: legal, social, and infrastructural.
  4. Prohibited Deployment Zones
    Models should not be usable in contexts lacking minimal governance (e.g., cyber militias, unstable states).
  5. Ethical Equivalence
    AI models with power to manipulate or deceive must be governed like weapons, not widgets.

Recommendations for Action

The declaration also outlines five global actions we might consider; these are detailed in the full text linked above.

Open Questions for the EA Community

I’m not claiming this draft is perfect. Far from it. I see it as a first step — something to provoke thought and spark stronger proposals.

I would deeply value feedback on the five principles and on the recommended actions.

About the Author

I’m Lee DongHun, a soon-to-be graduate of Chung-Ang University in South Korea.

Over the last two years, I’ve designed a human-centered AI framework called Ma-eum Company, rooted in emotional ethics, structural transparency, and narrative-based alignment.


This is not my full-time job. It’s my calling.

And if you’re reading this, I hope we can begin building better boundaries — together.


LinkedIn: linkedin.com/in/donghun-lee-4686b1362

Portfolio: Notion – Ma-eum Company

Contact: magnanimity2023@gmail.com

Thank you for reading. Feedback, criticism, and collaboration all welcome.

Let’s make open-source safer — before we learn the hard way.