Ilya Sutskever is starting Safe Superintelligence Inc.
By defun @ 2024-06-19T19:11 (+26)
This is a linkpost to https://ssi.inc/
"We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
NickLaing @ 2024-06-19T19:15 (+33)
Is this a disturbing pattern? A disgruntled engineer leaves an AI org and starts a new one which claims to be more safety-oriented than the last. Then the forces of the market, greed, and power take over, and we are left with another competitive player in the high-stakes race.
Doesn't feel ideal, but I'm not part of this scene.
huw @ 2024-06-19T19:48 (+41)
I don't understand why we should trust Ilya after he played a very significant role in legitimising Sam's return to OpenAI. If he had not endorsed this, the board's resolve would've been a lot stronger. So I find it hard to believe him when he says "we will not bend to commercial pressures", as in some sense, this is literally what he did.
Adrià Garriga Alonso @ 2024-06-19T23:45 (+10)
More than commercial, my understanding from purely public documents is that it was societal pressures.
But I agree with you two on the spirit.
huw @ 2024-06-19T19:55 (+16)
Co-founder Daniel Gross's thoughts on AI safety are at best unclear beyond this statement. Here is an article he wrote a year ago: The Climate Justice of AI Safety, and he's also appeared on the Stratechery podcast a few times and spoken about AI safety once or twice. In this space, he's most well known as an investor, including in Leopold Aschenbrenner's fund.
I think it would be good for Daniel Gross & Daniel Levy to clarify their positions on AI safety, and what exactly "commercial pressure" means (do they just care about short-term pressure and intend to profit immensely from AGI?).
(Disclosure: I received a ~$10k grant from Daniel in 2019 that was AI-related)
Pablo @ 2024-06-19T22:35 (+10)
An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being "safe" in their work on AI.
Being safe with AI is hard and potentially costly, so if you're a company working on AI capabilities, you might want to overstate the extent to which you focus on "safety."
t6aguirre @ 2024-06-20T00:03 (+6)
I wonder how they plan to get GPUs at scale while remaining "insulated from short-term commercial pressures".