Should We Treat Open-Source AI Like Digital Firearms? — A Draft Declaration on the Ethical Limits of Frontier AI Models
By DongHun Lee @ 2025-05-23T08:58
By Lee DongHun
Founder of Ma-eum Company, Seoul, South Korea
May 2025
Introduction: The Digital Arms Race We Might Be Ignoring
In April 2024, Meta released its Llama 3 model, declaring it open source.
Within days, developers, researchers, and hobbyists worldwide gained access to a system with the capacity for reasoning, persuasion, and code generation — all without institutional oversight.
Some hailed this development as democratization.
To me, it felt more like handing out programmable weapons.
We’ve long debated the ethics of autonomous drones or facial recognition tools. But what about a language model that can reason, deceive, and psychologically manipulate — distributed freely as if it were a harmless calculator?
This is not an exaggeration. It’s a parallel to firearms policy, but with far less control, less awareness, and much greater scalability.
Core Problem: Everyone Armed, No One Accountable
Unlike firearms, which can be physically tracked, licensed, and revoked, AI models — once released — are impossible to retract.
They replicate. They mutate. They embed.
If one developer tweaks a model to manipulate political discourse, train disinformation bots, or design malware, who is responsible?
The developer? The hosting platform? The original model creator?
Current open-source culture says: no one.
But at scale, this becomes a digital anarchy — not a frontier of freedom.
A Modest Proposal: Declaration on the Ethical Limits of Open-Sourcing Frontier AI
With that in mind, I drafted a document titled:
“Declaration on the Ethical Limits of Open-Sourcing Advanced AI Models”
— a call to establish shared ethical boundaries for future model distribution.
Read the full declaration here: Notion
Declaration Summary: Five Core Ethical Principles
- Responsibility Before Access: No model should be released without a clear framework for accountability and traceability.
- Digital Containment: Irreversibility must be acknowledged: open-sourcing is not ethically neutral once harm becomes plausible.
- Structural Reciprocity: Frontier capabilities must come with equivalent safeguards: legal, social, and infrastructural.
- Prohibited Deployment Zones: Models should not be usable in contexts lacking minimal governance (e.g. cyber militias, unstable states).
- Ethical Equivalence: AI models with power to manipulate or deceive must be governed like weapons, not widgets.
Recommendations for Action
The declaration outlines five global actions we might consider:
- Establishing an AI Treaty Council under UN or multilateral oversight;
- Creating an AI Geneva Protocol for open-source guardrails;
- Mandating developer-traceable fingerprinting of large models;
- Requiring national licensing for model usage above a safety threshold;
- Supporting filtered, ethics-bound APIs instead of raw model downloads.
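To make the fingerprinting recommendation concrete, here is a minimal sketch of what a release-level fingerprint could look like: a content hash over a model's weight files. This is my illustration, not a mechanism from the declaration, and it captures only the weakest form of traceability (identifying an exact, unmodified release); robust provenance for fine-tuned or quantized derivatives would require watermarking techniques well beyond a simple hash. The function name and directory layout are assumptions.

```python
import hashlib
from pathlib import Path

def model_fingerprint(weight_dir: str) -> str:
    """Compute a SHA-256 digest over all files in a weight directory.

    Files are visited in sorted order so the digest is deterministic.
    Any change to any weight file yields a different fingerprint,
    giving a verifiable identifier for one specific model release.
    (Illustrative only: a hash does not survive fine-tuning.)
    """
    digest = hashlib.sha256()
    for path in sorted(Path(weight_dir).rglob("*")):
        if path.is_file():
            # Include the relative file name so renames are detected too.
            digest.update(str(path.relative_to(weight_dir)).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()
```

A registry could pair such digests with developer identities at release time, so that an unmodified copy found in the wild can at least be traced back to its original publisher.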
Open Questions for the EA Community
I’m not claiming this draft is perfect. Far from it. I see it as a first step — something to provoke thought and spark stronger proposals.
Here’s what I would deeply value feedback on:
- Is open-source AI a net good if it accelerates misuse faster than benefit?
- Should frontier model releases be considered a form of “global externality,” like emissions?
- What thresholds of capability should trigger governance?
- Can EA-aligned actors help draft, propose, or host an international coalition around such declarations?
About the Author
I’m Lee DongHun, a soon-to-be graduate from Chung-Ang University in South Korea.
Over the last two years, I’ve designed a human-centered AI framework called Ma-eum Company, rooted in emotional ethics, structural transparency, and narrative-based alignment.
This is not my full-time job. It’s my calling.
And if you’re reading this, I hope we can begin building better boundaries — together.
LinkedIn: linkedin.com/in/donghun-lee-4686b1362
Portfolio: Notion – Ma-eum Company
Contact: magnanimity2023@gmail.com
Thank you for reading. Feedback, criticism, and collaboration all welcome.
Let’s make open-source safer — before we learn the hard way.