Summary of the AI Bill of Rights and Policy Implications

By Tristan Williams @ 2023-06-20T09:28 (+16)

TLDR: In the High Level Overview, I attempt to glean the upshots of the document for the purposes of AI Governance work. Policy Proposal Implications then looks at how the document might relate to various current AI Governance policy proposals. Next is Selected Quotes and Layout, where I copied over all the headers and potentially relevant quotes to provide a way to engage more with the document without having to read the whole thing. Finally, there is Further Reading, a collection of documents mentioned in the AIBoR. I hope this helps readers better understand where the executive branch is at on AI and helps in crafting policy change that is (at least somewhat) more likely to make headway.

 

Epistemic Status: I have read through the document in its entirety and spent probably about 8 hours crafting this post, but I am also new to the AI Governance space, so please take what I have to say here as my best attempt, not as any definitive take on the document, especially when I’m trying to abstract outwards to speak on the AIBoR in relation to AI policy proposals.

High Level Overview

Released as a white paper by the White House Office of Science and Technology Policy in October of 2022, the “Blueprint for an AI Bill of Rights” (AIBoR) contains: a core argument for five principles, notes on how to apply these principles, and a “technical companion” that gives further, concrete steps that can be taken “by many kinds of organizations—from governments at all levels to companies of all sizes” (4) to uphold these values. It tries to guide AI policy across sectors and touts itself as a “national values statement” (4).[1]

Upshots from the process

Upshots from what they say

The Five Principles: AI x-risk Application

  1. Safe and Effective Systems
    1. This is by far the best principle to focus on. It has the most language amenable to adaptation to AI x-risk, and I would recommend at least reading my selected quotes from this principle below for more context.
  2. Algorithmic Discrimination Protections
    1. You could leverage this principle to potentially focus on how generative models have been discriminatory, but there isn’t much here to relate to x-risk, and you’d probably have trouble making the extension as LLMs like OpenAI’s get better at avoiding these sorts of pitfalls.
    2. Perhaps you could also argue that interpretability is needed to be able to remedy discrimination, pulling this together with the Notice and Explanation principle, but I’m not sure how successful this might be.
  3. Data Privacy
    1. The fact that LLMs don’t obtain consent for the data they are trained on could be leveraged under this principle, but you’d have to figure out how obtaining consent could be applied to LLMs in a way that isn’t nonsensical.
  4. Notice and Explanation
    1. On the notice side, you could apply this to generative models, but all this does is force disclosure of when the model is being used.
    2. On the explanation side, you could try to extend this to interpretability, where application to generative models would force creators to have a deep understanding of how each output was achieved. But this application doesn’t fit entirely with the other example cases given.
  5. Human Alternatives, Consideration, and Fallback
    1. I don’t think this principle will be helpful beyond a few random quotes.

Next steps

 

Policy Proposal Implications

If I had the time I’d go through Zach Stein-Perlman’s entire list, but in lieu of that I've created categories that attempt to capture some common policy proposals, with descriptions for anyone who may not be familiar. The list isn't exhaustive, but it hopefully captures a fairly broad range. Below each category I then assess the relevance of the AIBoR, mostly considering whether such a method is mentioned in the AIBoR, but also whether there are other sentiments that might support it. At a glance, the AIBoR doesn’t give support for many of the various AI policy proposals; the exceptions are Regulation by Government, Regulation from Within, and Auditing, which receive decided support.

  1. Hardware Controls: targeting specific changes to hardware that help impede certain worst cases or cap the capabilities of the hardware
    1. Assessment: Nothing here to support this; hardware is rarely (if at all) mentioned.
  2. Monitoring: tracking something that you might not want to (or can’t) regulate, like stocks of cutting-edge chips or the state of frontier AI development in other countries 
    1. Assessment: There is some support for ongoing monitoring, with a whole section dedicated to it under the Safe and Effective Systems principle. But this is mostly monitoring as a means to trigger specific regulation, where the monitoring is focused on things you could control or alter, so there’s not much support for this category.
  3. Regulation from Within: having some sort of internal process for risk assessment or prevention, that is created, or at least implemented, by some part of the company 
    1. Assessment: While this wasn't a prominent type of proposal in the policy proposals below (even the one mentioned is specifically an internal auditing proposal, a mix of two categories), there is a lot in the AIBoR to support this category. Many times throughout, the document speaks of things companies should do, in a way that can be ambiguous as to whether the directive should be fulfilled by governmental regulation or rather taken as an opportunity for companies to proactively build out safe practices themselves. On the one hand, sections like Clear Organizational Oversight (19) seem to indicate the latter, with the AIBoR giving directives to be fulfilled by companies from within. The Reporting subheading that appears under each principle is a bit more ambiguous: each Reporting section seems to give rough guidelines for reports that companies produce themselves (and make open to the public). But compare this with the “How These Principles Can Move Into Practice” sections at the end of each principle, which give examples of how the principle and its subheadings play out in the world. These sections continually point to Regulation by Government and only twice (21, 29) mention an example of Regulation from Within as a successful fulfillment, indicating that Regulation from Within may be a helpful step but might not be enough in the end.
  4. Regulation by Government: these are proposals where the government would handle the given regulatory procedure, whether it be setting requirements for information security to prevent AI model leaks, or setting up incident reporting similar to what the FAA does after a plane crash 
    1. Assessment: My assessment here goes hand in hand with that of Regulation from Within. Throughout the AIBoR they make suggestions, like those found in the Reporting sections, that seem like they could factor in as a step in Regulation by Government, but as elsewhere they stop short of saying so. They say the reports should be “open to the public” but fail to say who the reports should be for, whether they should be reviewed by the government or some other entity, or whether just publishing the report is enough. There are multiple instances of this ambiguity throughout the suggestions they make, but what solidifies Regulation by Government as a method supported by the AIBoR is the “How These Principles Can Move Into Practice” sections mentioned before. Nearly 90% of the examples there are examples of Regulation by Government, often implemented by one of the independent agencies of the US government (like the National Science Foundation or the National Highway Traffic Safety Administration), seemingly indicating this is one of the best avenues to follow for future AI policy proposals. My best guess is that they are trying to create principles that can be implemented and used by a wide range of actors in a wide range of situations, but that when it comes down to it, the best implementations they envision are those where government takes the role of crafting and implementing the policies.
  5. Regulation by Legal Code: this could also be a subsection under Regulation by Government, but basically involves changes to the legal code like clearly spelling out who is responsible for harms done by AI
    1. Assessment: Any talk of legal liability in the AIBoR references existing laws, and it proposes no changes to the legal code that would codify liability for harms done by AI, so this category isn’t really supported.
  6. Licensing: again a category that could probably function as a subsection under Regulation by Government, but kept separate because it’s an open possibility that third parties could do the licensing. These proposals focus on precaution: making sure those working on frontier AI models (perhaps more specifically those amassing large amounts of cutting-edge chips) go through some sort of training or process first to make sure they are prepared to handle such risks.
    1. Assessment: Licensing is mentioned nowhere in the AIBoR, so there is nearly no grounding for it.
  7. Auditing: having an individual or group of individuals who evaluate whether an organization is following its promised procedures and safety protocols, whether by assessing data or testing the model itself
    1. Assessment: This solution is mentioned directly in the sections of each of the first three principles, which support a certain sort of independent or third-party auditing to assess a variety of metrics they’ve put forth. Some instances are more amenable to AI x-risk oversight, like the independent evaluations mentioned in the Safe and Effective Systems section, and others less so, like the independent evaluations mentioned in the Algorithmic Discrimination Protections section, where the assessment is geared specifically towards making sure the system is behaving in a non-discriminatory way. There is support for this assessment happening both pre- and post-deployment, as mentioned specifically in the Ethical review and use prohibitions (38) section of the Data Privacy principle, a section that also highlights the possibility of the auditing being tied not just to technical considerations but ethical ones as well.
  8. Funding Preventative Work: mostly just funding alignment research, but also efforts towards interpretability and to improve model evaluation
    1. Assessment: There is pretty much nothing here to support this, as most of the document is aimed at the application layer of the product cycle, and the little that is aimed at pre-deployment assessment focuses more on Regulation by Government or Auditing.

 

Selected Quotes and Layout

This section presents relevant quotes from the document largely in chronological order, gathers the section headers for each of the five principles to give a quicker sense of the document’s overall arc, and pulls the relevant quotes from the summary of each principle right below it.

 

General Quotes

1. Safe and Effective Systems

“Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards…Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.” (5, 15)

2. Algorithmic Discrimination Protections

“Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on any protected status (race, sex, religion, age, disability, etc.)...Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.” (5, 23)

3. Data Privacy

“Ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected…Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first…surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties.” (6, 30)

4. Notice and Explanation

“You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you…Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.” (6, 40)

5. Human Alternatives, Consideration, and Fallback

“Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions” (7, 46)

 

Examples of automated systems (53)

Panel attendees:

Further Reading

 

And finally, thanks to Jakub Kraus for his help over multiple iterations of this document; his guidance was quite helpful and much appreciated.


 

  1. ^

     Citation format is (page number of AIBoR), so (4) refers to page 4 of the document.

  2. ^

     Namely, health, work, education, criminal justice, and finance, and data pertaining to youth

  3. ^

    An automated system is “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities” including those derived from “machine learning” or other “artificial intelligence techniques” (10)

  4. ^

     As this document was released in October 2022, they should have been aware of GPT-3 and potentially other LLMs already released at the time (e.g. Chinchilla).