Disrupting malicious uses of AI by state-affiliated threat actors

By Agustín Covarrubias 🔸 @ 2024-02-14T21:28 (+22)

This is a linkpost to https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors

OpenAI just released a public announcement detailing how they caught and disrupted several cases of ongoing misuse of their models by state-affiliated threat actors, including some known to be affiliated with North Korea, Iran, China, and Russia.

This is notable because it provides tangible evidence of several kinds of misuse risk that people in AI safety have flagged in the past (such as the use of LLMs to aid in developing spear-phishing campaigns), and it ties them to malicious state-affiliated groups.

The specific findings:

Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors were terminated.

These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks. 

CAISID @ 2024-02-18T00:54 (+1)

This is fairly common, and it's beginning to be a feature of more traditional crime such as fraud too. We're seeing a lot of it suddenly. I'm not surprised common criminals have been using OpenAI offerings, but I am actually a little surprised state actors have been - especially since they have their own capacity for this. I wonder how much of this is the actual state, and how much is state-adjacent groups getting lumped in together.