[Linkpost] OpenAI leaders call for regulation of "superintelligence" to reduce existential risk.

By Lowe Lundin @ 2023-05-25T14:14 (+5)

Leaders of OpenAI, including co-founders Greg Brockman and Ilya Sutskever and CEO Sam Altman, have called for the regulation of artificial intelligence (AI) and "superintelligence" to mitigate existential risks. They propose establishing an international regulatory body, similar to the International Atomic Energy Agency, that could inspect AI systems, require audits, test compliance with safety standards, and impose restrictions. Altman warned that failing to regulate AI could lead to significant harm. Critics, however, have raised concerns about the profit motives behind such calls for regulation. The European Union's proposed AI Act has drawn opposition from OpenAI and Google, with the companies warning that they might cease operating in the EU if they cannot comply. The debate over regulating AI is ongoing, with policymakers aiming to balance innovation and safety.

Summary taken from News Minimalist

Sources: Washington Post, Financial Times