What does the launch of x.ai mean for AI Safety?
By Chris Leong @ 2023-07-12T19:42 (+20)
Evan_Gaensbauer @ 2023-07-25T01:23 (+4)
In terms of Elon Musk specifically, I feel like it affirms what most of us already thought of his relationship with AI safety (AIS). Even among billionaire technologists who are conscious of AIS and who achieved fame and fortune in Silicon Valley, Musk is an ambitious and exceptionally impactful personality. This of course extends to all his business ventures, philanthropy, and political influence.
Regardless of whether it's ultimately positive or negative, I expect the impact of xAI, including on AIS, will be significant. The quality of that impact strikes me as mixed; i.e., it's not a simple question of whether it will be positive or negative.
I expect xAI, at least at this early stage, will be perceived as having some mildly positive influence on AIS. There are of course already some more pessimistic predictions. I expect that within a couple of years those pessimistic predictions may be vindicated, as negative impacts on AIS come to outweigh whatever positive impacts xAI may have. The kind of position I'm summarizing here seems well reflected in the post Scott Alexander published on Astral Codex Ten last week about why he expects xAI's alignment plan to fail. I've not learned much about xAI yet, though my own model for having an ambivalent but somewhat pessimistic expectation for the company is based on:
- The fact that every other AGI research lab has followed the same trajectory during the last several years. Nobody denies that some parts of their research agendas will be fruitful for AI safety/alignment, especially during the first couple of years. It's just that such initial, cautious optimism gets dashed: each company eventually succumbs to an AI capabilities arms race that outpaces the progress being made on alignment research.
This is in spite of the fact that the key actors behind these companies, like head researchers and initial financial backers (including Elon Musk previously at OpenAI), publicly set an intention of avoiding that foreseeable pitfall. That hasn't stopped any of them from falling into the trap. I've seen a lot of people say this of OpenAI, Anthropic, and Google DeepMind. I'm not aware of any reason to expect xAI to buck the trend.
- A perception that, across the vast array of Elon Musk's endeavors (again, e.g., his business ventures, philanthropy, and activism), his impact has been volatile. I.e., he has had a complicated, high-variance impact that has been beneficial for the world in some ways but harmful in others. This is more my personal opinion, though it dovetails with what many others have thought about Musk's impact on AIS.
On one hand, other than maybe Dustin Moskovitz, Elon Musk is arguably the philanthropist who has done the most to make the field of AI safety what it is today. In the years around the landmark publications on AI safety by Nick Bostrom and Stuart Russell roughly a decade ago, Musk was a major, early backer of both the Future of Life Institute and OpenAI. His public support and advocacy of AI safety, as shaped by the philosophies of effective altruism and longtermism, may have been just as important.
On the other hand, OpenAI may never have been founded without his support. I remember there were misgivings about OpenAI even as it was being founded, misgivings now widely thought to have been vindicated by the common perception that OpenAI is the biggest failure in the history of AI safety. I can't think of a strong reason why Musk founding xAI will be an exception to the trend of most of his past endeavors having an unpredictable trajectory and mixed track record.