The Bottleneck in AI Policy Isn’t Ethics—It’s Implementation
By Tristan D @ 2025-04-04T06:07 (+10)
This is a summary of Vincent Müller’s article Basic issues in AI policy.
Current and foreseeable AI systems are not moral agents
- "…there is general agreement that current and foreseeable AI systems do not have what it takes to be responsible for their actions (moral agents), or to be systems that humans should have responsibility towards (moral patients). So, the responsibility remains firmly with the humans and for the humans - as well as other animals."
- Therefore, the main ethical issues are about the human design and use of AI.
The main issues in AI ethics
- Privacy & Surveillance
- Manipulation of Behaviour
- Opacity of AI Systems
- Bias in Decision Systems
- Human-Robot Interaction
- Automation and Employment
- Autonomous Systems and Responsibility
- Machine Ethics
- Artificial Moral Agents
- Singularity
See Ethics of Artificial Intelligence and Robotics (Stanford Encyclopedia of Philosophy) for a more detailed overview.
AI ethics informs AI policy
- The goal of AI ethics is to say what is right and wrong in the design and use of AI.
- This is important because it defines what the main aims of AI policy should be.
- Policy consists of policy aims and policy means.
AI policy aims
- There is global convergence around five ethical principles for AI:
- Transparency
- Justice and fairness
- Non-maleficence
- Responsibility
- Privacy
- These are specific policy aims. AI policy also needs general policy aims, which will differ by nation.
- E.g. China and the US place greater emphasis on geostrategic aims (and are thus reluctant to limit automated weapons).
- E.g. the EU is sensitive to monopolies and places great emphasis on privacy.
- General policy aims will be subject to:
- Public opinion
- Lobbying
- Technical feasibility
- Cost
AI policy means
- The practical instruments and methods to further policy aims.
- The main bottlenecks in AI policy lie in finding appropriate policy means.
- Options
- Educational efforts (e.g. curriculum of AI degrees)
- Framework for legal liability (e.g. insurance)
- Impact assessment tools
- Legal regulation
- PR measures
- Public spending
- Self-assessment frameworks
- Self-regulation (in industry)
- Self-regulation (e.g. a “Hippocratic oath”)
- Supporting ethics by design
- Taxation
- Technical standards (in a framework of legal regulation)
- Moving forward, we can draw on political science and on ethics-driven policy in other fields (e.g. medical or engineering ethics) to overcome the bottlenecks in practical policy means.
cb @ 2025-04-04T06:27 (+2)
"…there is general agreement that current and foreseeable AI systems do not have what it takes to be responsible for their actions (moral agents), or to be systems that humans should have responsibility towards (moral patients).
Seems false, unless he's using "general agreement" and "foreseeable" in some very narrow sense?
Tristan D @ 2025-04-04T22:10 (+1)
There are a variety of views on the potential moral status of AI/robots/machines in the future.
From a quick search, it seems some argue that such systems would have moral agency if their functionality were equivalent to that of humans, or if and when they become capable of moral reasoning and decision-making. Others argue that consciousness is essential for moral agency and that the current AI paradigm is insufficient to generate consciousness.
Tristan D @ 2025-04-04T08:16 (+1)
I was also interested in following this up. For the source of this claim, he cites another article he has written, 'Is it time for robot rights? Moral status in artificial entities' (https://link.springer.com/content/pdf/10.1007/s10676-021-09596-w.pdf).
Beyond Singularity @ 2025-04-05T22:07 (+1)
Thank you for this interesting overview of Vincent Müller’s arguments! I fully agree that implementation (policy means) often becomes the bottleneck. However, if we systematically reward behavior that contradicts our declared principles, then any “ethical goals” will inevitably be vulnerable to being undermined during implementation. In my own post, I call this the “bad parent” problem: we say one thing, but demonstrate another. Do you think it’s possible to achieve robust adherence to ethical principles in AI when society itself remains fundamentally inconsistent?