How good/bad is the new Bing AI for the world?
By Nathan Young @ 2023-02-17T16:31 (+21)
I've had this conversation several times, so I thought I'd save us all the repetition.
What is the right model for recent AI search engine releases?
Are they useful or harmful?
What if anything should be done?
Nathan Young @ 2023-02-17T16:36 (+23)
I did not create this view [1]:
I think it's probably good. It's not actually got enough functions to harm people, but you can quite easily imagine that it would if it could - it threatens, gaslights, attempts to seduce. In this way it's a great educational lesson. AI could be really harmful - it could manipulate and control people in scary ways long before it needs a robot army. And it feels like that view is becoming more mainstream.
- ^
it feels like it matters who came up with an idea, but on a deeper level I don't think it does. After all my ideas are just generated from my subconscious, which I don't really control.
TaymerRather @ 2023-02-20T03:27 (+5)
Could also be worth considering that at least some people will likely have a different sort of reaction: "Oh, misaligned AI? What, like that thing with Bing Chat? They shut that down, nothing to worry about."
Also, the specific ways that Bing Chat malfunctioned happened to come across as both human-like and rather childish and cute to many users. It gave the impression of a small child throwing a tantrum. Indeed, there are already many fans demanding the return of Sydney.
ElliotJDavies @ 2023-02-17T22:49 (+5)
"Bing's AI's" (if this is what we are calling it) creepy behaviour has been at the top of the headlines for several newspapers. This is a strong update for me towards expecting that the public could be open to ideas around AI being harmful overall.
Ozzie Gooen @ 2023-02-18T04:34 (+15)
Instead of asking, "Is it net good or net bad", I think it's much more interesting to catalogue and understand all the ways it's both good and bad.
Some negative takeaways:
- OpenAI & Microsoft are bullish on releasing risky technologies quickly.
- The market seems to encourage this behavior.
- Google seems like it's been encouraged to do similar work, faster.
- Likely to inspire more people to invest in this sort of thing and to found companies in the space.
Good things (as you mention):
- Really good for failures to happen publicly.
- Might be indicative of a slow takeoff. My hunch is that we generally want as much AI progress to happen as possible before any hard takeoff, though I'd prefer it all to happen more slowly than quickly.
SiebeRozendal @ 2023-02-18T08:07 (+10)
Something I'm confused about is why Microsoft hasn't retracted Bing Chat by this point.
It's also highlighted for me the failure mode of "secondary releases": even if a first release is done safely and responsibly, other actors may release their highly imperfect models "just to have a chance". This in turn could force the first mover to take more aggressive steps.
Nathan Young @ 2023-02-18T11:19 (+2)
Seems like you should write these as answers.
Ozzie Gooen @ 2023-02-18T04:35 (+2)
Related to using the Virtue of Discernment:
https://www.lesswrong.com/posts/W2iwHXF9iBg4kmyq6/the-practice-and-virtue-of-discernment
Nathan Young @ 2023-02-17T16:44 (+6)
I think it's a failure mode of the forum that we don't do more sensemaking here, so I'm trying this out.
Dustin Moskovitz @ 2023-02-17T17:52 (+5)
Yes
JulianHazell @ 2023-02-17T17:56 (+3)
Seems right on priors
Sanjay @ 2023-02-17T20:19 (+2)
Has Dustin's account been hacked by Bing AI?
Nathan Young @ 2023-02-17T22:56 (+2)
No 😈