[Linkpost] Scott Alexander reacts to OpenAI's latest post
By Akash @ 2023-03-11T22:24 (+105)
This is a crosspost, probably from LessWrong. Try viewing it there.
Geoffrey Miller @ 2023-03-12T20:03 (+20)
Akash - thanks for posting this. Scott Alexander, as usual, has good insights, and is well worth reading here.
I think at some point, EAs might have to bite the bullet, set aside our all-too-close ties to the AI industry, and realize that 'AGI is an X-risk' boils down to 'OpenAI, DeepMind, and other AI companies that aren't actually taking AIXR seriously are the real X-risks' -- and should be viewed and treated accordingly.
NickLaing @ 2023-03-14T11:29 (+11)
100% agree.
I like the analogy with Exxon Mobil; I think it's helpful to keep that comparison in mind.
I mentioned before that I don't think companies that work on AI should have a significant voice in the AI discourse, at least within the EA sphere - we can't control the public discourse.
The primary purpose (maybe 80%+ of their purpose) of a company is to make money, plain and simple. The job of their PR people is to garner public support through whatever means necessary, often by sounding as reasonable as possible. Their press releases, blogs, podcasts, etc. should be treated at worst as dangerous propaganda, at best as biased and compromised arguments.
Why then do we engage with their arguments so seriously? There are so many contrasting opinions on AI safety, even among neutral researchers, that are hard to understand and important to engage with - why would we throw compromised perspectives into the mix?
I lean towards using these kinds of blogs to understand the plans of AI companies and the arguments we need to counter in the public sphere, not as reasonable, well-thought-out opinions from neutral people.
Minh Nguyen @ 2023-03-22T12:38 (+3)
As a former climate activist who organised a protest outside Exxon offices after my country failed to commit to climate agreements, I can personally confirm Scott's hypothetical.
I also share many of the same concerns about the AGI race dynamics. The current frontrunners in the feared "AGI race" are all AI Safety companies, and ChatGPT has attracted billions into capabilities research from people who otherwise would've never looked into AI.
Just a week ago, Peking University professor Zhu Song-Chun spoke at a CCP conference about how China needs to go all-in to beat the US to AGI. ChatGPT created a very compelling proof of concept for pouring money into AI.
Counterfactuals and uncertainties aside, the AI Safety community has created the AGI race. I wonder whether that was a good idea.
Robin @ 2023-03-15T13:31 (+1)
Good analogy. Note that environmental statements made by oil companies cannot be trusted to hold even for a few years once expected profits increase, even when costly actions and investment patterns appear to back them up temporarily. E.g.:
https://www.ft.com/content/b5b21c66-92de-45c0-9621-152aa335d48c
'BP's chief executive Bernard Looney defended its latest reversal, stating that "The conversation three or four years ago was somewhat singular around cleaner energy, lower-carbon energy. Today, there is much more conversation about energy security, energy affordability."'