Spicy takes about AI policy (Clark, 2022)
By Will Aldred @ 2022-08-09T13:49 (+44)
This is a linkpost to https://twitter.com/jackclarkSF/status/1555980412333133824
Linkposting, tagging and excerpting in accordance with 'Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?'.
My favourite excerpts:
The vast majority of AI policy people I speak to seem to not be that interested in understanding the guts of the technology they're doing policy about
...
The real danger in Western AI policy isn't that AI is doing bad stuff, it's that governments are so unfathomably behind the frontier that they have no notion of _how_ to regulate, and it's unclear if they _can_
...
Many AI policy teams in industry are constructed as basically the second line of brand defense after the public relations team. A huge % of policy work is based around reacting to perceived optics problems, rather than real problems.
...
Many of the immediate problems of AI (e.g., bias) are so widely talked about because they're at least somewhat tractable (you can make measures, you can assess, you can audit). Many of the longterm problems aren't discussed because no one has a clue what to do about them.
...
The notion of building 'general' and 'intelligent' things is broadly frowned on in most AI policy meetings. Many people have a prior that it's impossible for any machine learning-based system to be actually smart. These people also don't update in response to progress.
...
The default outcome of current AI policy trends in the West is we all get to live in Libertarian Snow Crash wonderland where a small number of companies rewire the world. Everyone can see this train coming and can't work out how to stop it.
...
People wildly underestimate how much influence individuals can have in policy. I've had a decent amount of impact by just turning up and working on the same core issues (measurement and monitoring) for multiple years. This is fun, but also scares the shit out of me.
...
Discussions about AGI tend to be pointless as no one has a precise definition of AGI, and most people have radically different definitions. In many ways, AGI feels more like a shibboleth used to understand if someone is in- or out-group wrt some issues.
...
It's very hard to bring the various members of the AI world together around one table, because some people who work on longterm/AGI-style policy tend to ignore, minimize, or just not consider the immediate problems of AI deployment/harms. V alienating.
...
IP and antitrust laws actively disincentivize companies from coordinating on socially-useful joint projects. The system we're in has counter-incentives for cooperation.
...
AI policy can make you feel completely insane because you will find yourself repeating the same basic points (academia is losing to industry, government capacity is atrophying) and everyone will agree with you and nothing will happen for years.
...
One of the most effective ways to advocate for stuff in policy is to quantify it. The reason 30% of my life is spent turning data points from arXiv into graphs is that this is the best way to alter policy - create facts, then push them into the discourse.
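The "turn data points into graphs" workflow Clark describes can be sketched in a few lines. This is a hypothetical illustration, not his actual pipeline: the records below are made up, and a real version would pull submission metadata from the arXiv API before aggregating it.

```python
# Hypothetical sketch of the "turn arXiv data points into graphs" workflow.
# The records here are invented for illustration; a real pipeline would
# fetch submission metadata from arXiv and plot the resulting counts.
from collections import Counter

# Made-up (paper_id, year) pairs standing in for scraped arXiv metadata.
records = [
    ("2101.00001", 2021), ("2101.00002", 2021),
    ("2201.00001", 2022), ("2201.00002", 2022), ("2201.00003", 2022),
]

def papers_per_year(rows):
    """Aggregate submissions per year -- the 'fact' to push into the discourse."""
    return dict(sorted(Counter(year for _, year in rows).items()))

counts = papers_per_year(records)
print(counts)  # {2021: 2, 2022: 3}
```

The aggregated counts are the kind of simple, defensible fact (papers per year, compute per model, industry vs. academic share) that can then be charted and circulated to policymakers.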
...
Most policy forums involve people giving canned statements of their positions, and everyone thanks each other for giving their positions, then you agree it was good to have a diverse set of perspectives, then the event ends. Huge waste of everyone's time.
...
To get stuff done in policy you have to be wildly specific. CERN for AI? Cute idea. Now tell me about precise funding mechanisms, agency ownership, plan for funding over long-term. If you don't do the details, you don't get stuff done.
...
Policy is a gigantic random number generator - some random event might trigger some politician to have a deep opinion about an aspect of AI, after which they don't update further. This can brick long-term projects randomly (very relaxing).
...
AI is so strategic to so many companies that it has altered the dynamics of semiconductor development. Because chips take years to develop, we should expect drastic improvements in AI efficiency in the future, which has big implications for the diffusion of capabilities.
...
Attempts to control AI (e.g. content filters) directly invite a counter-response. E.g., DALL-E vs #stablediffusion. It's not clear that the control methods individual companies use help relative to the bad ecosystem reactions to these control methods. (worth trying tho)
...
Most policymakers presume things exist which don't actually exist - like the ability to measure or evaluate a system accurately for fairness. Regulations are being written where no technology today exists that can be used to enforce that regulation.
...
For years, people built models then built safety tooling around them. People are now directly injecting safety into models via reinforcement learning from human feedback. Everyone is DIY'ing these values, so the values are subjective via the tastes of people within each org.