Thoughts on SB-1047
By Ryan Greenblatt @ 2024-05-30T00:19 (+53)
Ryan Greenblatt @ 2024-07-23T06:14 (+6)
The limited duty exemption has been removed from the bill, which probably makes compliance notably more expensive while not improving safety. (As far as I can tell.)
This seems unfortunate.
I think you should still be able to proceed in a somewhat reasonable way by making a safety case on the basis of insufficient capability, but there are additional costs associated with not getting an exemption.
Further, you can't just claim an exemption prior to starting training if you are behind the frontier, which will substantially increase costs for some actors.
This makes me more uncertain about whether the bill is good, though I think it will probably still be net positive and basically reasonable on the object level. (Though we'll see about further amendments, enforcement, and the response from society...)
(LW x-post)
Ben Millwood @ 2024-05-30T12:58 (+2)
Do you have views on how likely the bill is to pass as-is, or whether anyone should be spending effort making this more (or less) likely? Do you have any thoughts on how we can support changes being made to the bill?
(As an aside, I think this post would be improved by saying a little more in the very first paragraph about what SB-1047 is, and key facts about its progress.)
Ryan Greenblatt @ 2024-05-30T16:52 (+5)
> Do you have views on how likely the bill is to pass as-is, or whether anyone should be spending effort making this more (or less) likely?
Given my views, it seems worthwhile to spend effort making the bill more likely to pass.
I don't have next steps for this right now, but I might later.
> Do you have any thoughts on how we can support changes being made to the bill?
The open letter from Senator Scott Wiener (https://safesecureai.org/open-letter) explains how to make him and his staff aware of proposed changes. Beyond this, I don't have a specific proposal for how to effectively advocate for particular changes.
> (As an aside, I think this post would be improved by saying a little more in the very first paragraph about what SB-1047 is, and key facts about its progress.)
Seems reasonable; I might edit to add this at some point.
SummaryBot @ 2024-05-30T12:46 (+2)
Executive summary: The post analyzes California's proposed AI regulation bill SB-1047, arguing that it is reasonable overall with some suggested minor changes, contingent on proper enforcement to avoid being overly restrictive or permissive.
Key points:
- The bill aims to regulate AI models that could cause "massive harm" by imposing requirements, while allowing exemptions for models deemed unlikely to have hazardous capabilities.
- Key suggested changes include simplifying the criteria for covered models, clarifying derivative model definitions, and potentially raising the threshold for hazardous capabilities.
- Proper enforcement is crucial, with developers able to claim limited duty exemptions if they reasonably rule out hazardous capabilities through testing protocols.
- The bill de facto bans open-sourcing models with hazardous capabilities, which the author views as a reasonable trade-off if the bar for hazardous capabilities is set appropriately.
- The author is uncertain about implementation details, such as what will count as reasonable capability evaluations and the gap between the bill's hazardous capability threshold and the capability level that poses catastrophic risk.
- Overall support is contingent on beliefs around AI risk, the bill not overly restricting AI development in Western democracies, and reasonable enforcement allowing justified exemptions.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.