Comments on OpenAI's "Planning for AGI and beyond"
By So8res @ 2023-03-03T23:01 (+115)
NickLaing @ 2023-03-04T03:54 (+13)
Thanks so much for posting this! I find it rather scary just how blasé he seems. The basic thesis of plowing on without worrying too much, because the potential upside is so great, doesn't seem like a well-thought-out, rational argument.
There are also a number of non-specific, pseudo-inspirational comments that feel both out of touch with reality and like they could be from a dystopian novel. Like
"the stakes (boundless downside and boundless upside) will hopefully unite all of us." And "we believe that the future of humanity should be determined by humanity" lol.
Part of me wonders whether working for a company on the cutting edge of AI development should almost disqualify you from being part of the public AI safety discourse. I believe that oil companies should have close to no sway in the climate change debate, and cigarette companies should have no say in cigarette regulation. Should we instead see those pushing AI forward more like lobbyists, given how compromised they are?
Sam Iacono @ 2023-03-04T05:35 (+13)
Part of me wonders whether working for a company on the cutting edge of AI development should almost disqualify you from being part of the public AI safety discourse.
Strong agreement downvote from me. This line of thought seems so intuitively dangerous. You want to disqualify people making powerful AI from discussions on how to make powerful AI safer? I’m having trouble understanding why this should be a good idea.
NickLaing @ 2023-03-04T12:51 (+5)
Not disqualify them from private discussions - of course there need to be loads of private discussions - but from prominent public discussions. Why is that intuitively dangerous?
I'm uncertain about this and keen to hear the counter arguments.
It's intuitive to me that people who are paid to develop something potentially dangerous as fast as possible (weapons manufacturers, tobacco, AI) should not be the ones at the forefront of public discussion, nor the ones making decisions about what should be allowed and what not. They will be compromised and biased - the very value of what they do with their lives is at stake - and they are likely to skew the discourse away from the rational.
The ideal situation might be to have enough capable AI researchers working on AI safety and governance independently of the companies that they could lead the discourse and make the decisions.
I'm sure there are strong arguments against this and I'm keen to hear them.
temporalis @ 2023-03-04T19:26 (+14)
Part of the goal is to persuade them to act more safely, and it's easier to do this if they are able to explain their perspective. Also, it allows others to evaluate their arguments. We can't adopt a rule that "people accused of doing something dangerous can't defend themselves" because sometimes after evaluating the arguments they are in the right - e.g. nuclear power, GMOs.
NickLaing @ 2023-03-05T18:51 (+7)
Thanks, that's a good point. I hope, though, that they have less sway than independent people arguing in either direction. I would hope that in the case of nuclear power and GMOs it was independent advocates (academics, the public, think tanks) arguing for them who convinced us, rather than Monsanto and power plant manufacturers.
But I don't know those stories!
Dov @ 2023-04-16T19:48 (+4)
Although I'd prefer if Soares and Sam Altman saw eye to eye, I think it's inspiring that Altman seems to be soliciting criticism.
Wouldn't it be cool if other cause areas worked like that (e.g. wouldn't it be amazing if industrial animal agriculture consulted Animal Charity Evaluators before opening up factory farms?).
ScreamingBlondGuy @ 2023-03-06T17:45 (+3)
"Sam’s post: At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans ."
To me this means: "Our continuous deployment plan is subject to big changes if we think that the circumstances need it". So basically, they are ready to stop the continuous deployment and reassess their approach if things go south.