President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

By Tristan Williams @ 2023-10-30T11:15 (+143)

This is a linkpost to https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

Released today (10/30/23), this is perhaps the most sweeping action taken by the US government on AI yet.

Below, I've segmented the proposals into x-risk and non-x-risk categories, excluding those geared towards promoting AI's use[1] and focusing solely on those aimed at risk. It's worth noting that some of these are very specific and direct an action to be taken by an executive branch agency (e.g. sharing of safety test results), while others are guidance that "calls on Congress" to pass legislation codifying the desired action. 

[Update]: The official order has now been released (this post summarizes the press release), so if you want to see how these are codified in greater detail, look there[2].  

Existential Risk Related Actions:

Non-Existential Risk Actions:

General

Discrimination

Healthcare

Jobs

Privacy

  1. ^

    Out of 26 distinct proposals, 7 (27%) are geared towards increasing use or capabilities, and 2 (8%) are a mixed bag, both encouraging development and furthering safety. 

  2. ^

    I can also do a similar post for that if there's interest, but it would be significantly longer.


Zach Stein-Perlman @ 2023-10-30T23:53 (+51)

This was the press release; the actual order has now been published.

One safety-relevant part:

4.2.  Ensuring Safe and Reliable AI.  (a)  Within 90 days of the date of this order, to ensure and verify the continuous availability of safe, reliable, and effective AI in accordance with the Defense Production Act, as amended, 50 U.S.C. 4501 et seq., including for the national defense and the protection of critical infrastructure, the Secretary of Commerce shall require:

          (i)   Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding the following:

               (A)  any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats;

               (B)  the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; and

               (C)  the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST pursuant to subsection 4.1(a)(ii) of this section, and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security.  Prior to the development of guidance on red-team testing standards by NIST pursuant to subsection 4.1(a)(ii) of this section, this description shall include the results of any red-team testing that the company has conducted relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives; and

          (ii)  Companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.

     (b)  The Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence, shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements of subsection 4.2(a) of this section.  Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements for:

          (i)   any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and

          (ii)  any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.
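The interim thresholds above can be sketched as a simple check. This is only an illustration, not part of the order: the `6 × parameters × tokens` training-FLOP approximation is a common rule of thumb assumed here, and all names and example numbers are hypothetical.

```python
# Interim reporting thresholds from section 4.2(b) of the order:
MODEL_FLOP_THRESHOLD = 1e26       # total training FLOPs (general models)
BIO_MODEL_FLOP_THRESHOLD = 1e23   # models trained primarily on biological sequence data
CLUSTER_FLOPS_THRESHOLD = 1e20    # theoretical peak FLOP/s for a co-located cluster

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common ~6 * N * D rule of thumb."""
    return 6 * params * tokens

def model_must_report(params: float, tokens: float, bio_data: bool = False) -> bool:
    """Does an estimated training run exceed the interim reporting threshold?"""
    threshold = BIO_MODEL_FLOP_THRESHOLD if bio_data else MODEL_FLOP_THRESHOLD
    return training_flops(params, tokens) > threshold

def cluster_must_report(num_accelerators: int, peak_flops_each: float) -> bool:
    """Does a cluster's theoretical peak exceed the reporting threshold?"""
    return num_accelerators * peak_flops_each > CLUSTER_FLOPS_THRESHOLD

# A hypothetical 1-trillion-parameter model trained on 20 trillion tokens:
# 6 * 1e12 * 2e13 = 1.2e26 FLOPs, which crosses the 1e26 threshold.
print(model_must_report(1e12, 2e13))
```

Note that the order's threshold is on total training compute, not model size alone, so a smaller model trained on far more data could also cross it.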

SebastianSchmidt @ 2023-11-11T17:04 (+1)

Great to see how concrete and serious the US is now. This effectively means that models trained with more compute than GPT-4 have to be reported to the government. 
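A back-of-the-envelope calculation gives a feel for the 10^26 FLOP scale. Every hardware figure below is an illustrative assumption (cluster size, per-accelerator peak, utilization), not taken from the order or the comment.

```python
# How long would a large hypothetical cluster take to reach the 1e26 FLOP
# reporting threshold? All hardware figures are illustrative assumptions.
THRESHOLD_FLOPS = 1e26

gpus = 25_000            # hypothetical cluster size
peak_per_gpu = 1e15      # ~1 PFLOP/s peak per accelerator (assumed)
utilization = 0.4        # assumed effective utilization during training

effective_rate = gpus * peak_per_gpu * utilization  # achieved FLOP/s: 1e19
seconds = THRESHOLD_FLOPS / effective_rate          # 1e7 seconds
print(f"{seconds / 86_400:.0f} days")               # prints "116 days"
```

Under these assumptions, crossing the threshold takes a sustained multi-month run on a very large cluster, which is why only frontier-scale training is captured.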

Tristan Williams @ 2023-10-31T09:13 (+1)

Thanks, I'll toss this in at the top now for those who are curious.

Yonatan Cale @ 2023-10-30T15:09 (+10)

Thank you very much for splitting this up into sections in addition to posting the linkpost itself.

Tristan Williams @ 2023-10-31T09:13 (+3)

Anytime :) I didn't do much, but I'm glad to know it was helpful; I was debating whether to keep organizing things like this for future posts.

Tobias Häberli @ 2023-10-30T15:02 (+6)
  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. 

Would the information in this quote fall under any of the Freedom of Information Act (FOIA) exemptions, particularly those concerning national security or confidential commercial information/trade secrets? Or would there be other reasons why it wouldn't become public knowledge through FOIA requests?

Tony Barrett @ 2023-10-31T03:15 (+1)

Yes, I expect that the government would aim to protect the reported information (or at least key sensitive details) as Controlled Unclassified Information (CUI) or in another way that would be FOIA-exempt.

SummaryBot @ 2023-10-30T12:24 (+2)

Executive summary: President Biden issued an executive order with sweeping proposals for regulating AI systems, including requirements to share safety test results for powerful models and to develop standards ensuring trustworthy AI.

Key points:

  1. Requires developers of powerful AI systems to share safety test results and to notify the government before training models that pose national security risks.
  2. Directs establishing standards and tools for safe, secure, trustworthy AI systems.
  3. Calls for standards to screen dangerous biological materials synthesized using AI.
  4. Seeks international collaboration on developing AI standards and managing risks.
  5. Aims to protect against AI fraud, promote non-discriminatory AI, and support workers impacted by AI automation.
  6. Focuses on privacy protections and evaluating government use of personal data and AI.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.