FT: We must slow down the race to God-like AI
By Angelina Li @ 2023-04-24T11:57 (+33)
This is a linkpost to https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2
ICYMI: An article in the Financial Times by Ian Hogarth on risks from AI (focusing largely on the technical alignment problem) that I think got a lot of things right. From the comments: see his Twitter thread on regulating AGI.
I found the ending moving:
In 2012, my younger sister Rosemary, one of the kindest and most selfless people I’ve ever known, was diagnosed with a brain tumour. She had an aggressive form of cancer for which there is no known cure and yet sought to continue working as a doctor for as long as she could. My family and I desperately hoped that a new lifesaving treatment might arrive in time. She died in 2015.
I understand why people want to believe. Evangelists of God-like AI focus on the potential of a superhuman intelligence capable of solving our biggest challenges — cancer, climate change, poverty.
Even so, the risks of continuing without proper governance are too high. It is striking that Jan Leike, the head of alignment at OpenAI, tweeted on March 17: “Before we scramble to deeply integrate LLMs everywhere in the economy, can we pause and think whether it is wise to do so? This is quite immature technology and we don’t understand how it works. If we’re not careful, we’re setting ourselves up for a lot of correlated failures.” He made this warning statement just days before OpenAI announced it had connected GPT-4 to a massive range of tools, including Slack and Zapier.
Unfortunately, I think the race will continue. It will likely take a major misuse event — a catastrophe — to wake up the public and governments. I personally plan to continue to invest in AI start-ups that focus on alignment and safety or which are developing narrowly useful AI. But I can no longer invest in those that further contribute to this dangerous race. As a small shareholder in Anthropic, which is conducting similar research to DeepMind and OpenAI, I have grappled with these questions. The company has invested substantially in alignment, with 42 per cent of its team working on that area in 2021. But ultimately it is locked in the same race. For that reason, I would support significant regulation by governments and a practical plan to transform these companies into a Cern-like organisation.
We are not powerless to slow down this race. If you work in government, hold hearings and ask AI leaders, under oath, about their timelines for developing God-like AGI. Ask for a complete record of the security issues they have discovered when testing current models. Ask for evidence that they understand how these systems work and their confidence in achieving alignment. Invite independent experts to the hearings to cross-examine these labs.
If you work at a major lab trying to build God-like AI, interrogate your leadership about all these issues. This is particularly important if you work at one of the leading labs. It would be very valuable for these companies to co-ordinate more closely or even merge their efforts. OpenAI’s company charter expresses a willingness to “merge and assist”. I believe that now is the time. The leader of a major lab who plays a statesman role and guides us publicly to a safer path will be a much more respected world figure than the one who takes us to the brink.
Until now, humans have remained a necessary part of the learning process that characterises progress in AI. At some point, someone will figure out how to cut us out of the loop, creating a God-like AI capable of infinite self-improvement. By then, it may be too late.
More discussion on the LessWrong thread here.
Lizka @ 2023-04-24T12:05 (+3)
Thanks for sharing this here! :) Quick clarification: I think Ian Hogarth (who wrote the piece) isn't actually a journalist at FT. (Btw, see also his Twitter thread on proposals for regulating AGI.)
(I also appreciated the piece and shared a link to it in the April EA Newsletter.)
Angelina Li @ 2023-04-24T12:28 (+1)
Thanks Lizka! Edited the top level post :)