Come to Oxford/Stanford to Work on Decentralized AI Security (Funded)

By samuelemarro @ 2025-09-08T18:01 (+6)

TL;DR: 5 fully funded visiting positions (4 at Oxford, 1 at Stanford) on improving the security of decentralized AI systems. See the bottom of this post for details.

 

We’re launching the Institute for Decentralized AI, a project of the Cosmos Institute funded by the AI Safety Fund. The institute is distributed between Oxford, Stanford and Austin, and its goal is to build the protocols, standards, and tooling that make decentralized AI work in the real world. In our first year, we will mostly focus on the security of decentralized AI systems.

 

What do you mean by decentralized AI?

It’s an overloaded term. Most people who say “decentralized AI” mean “blockchain AI” (which can be part of the solution, but we mostly focus on other areas). In our definition, decentralized AI is AI where one or more of the following are distributed (ideally all of them):

- Data
- Compute
- Governance
- Benefits

Examples of decentralized AI:

 

Why should I, as an EA, care about decentralized AI security?

(Note: This isn’t meant to be a complete argument, but rather to give you a taste of the rationale behind this work)

 

The power concentration argument

Aka “building safe, decentralized AI is good”

 

Most frontier AI development is concentrated in a handful of companies and labs. This means that a few key people have massive control over how AI is developed and used. While this should, in theory, reduce the probability of dangerous or malicious applications of AI, in practice most big AI companies haven’t shied away from military applications of their technology, and most governments are deeply interested in AI as a military or geopolitical tool.

There is also a risk that beneficial new AI capabilities will not be accessible to the public. So far, the dominant strategy for AI labs has been to make the vast majority of their tools accessible to everyone. There is, however, no guarantee that this trend will continue, especially if frontier models become more useful in sectors such as warfare and finance.

There are two solutions:

The underlying rationale is that by distributing data, compute, governance and benefits across as many parties as possible, you avoid a lot of power-related negative outcomes of AI (e.g. benefitting one country at the expense of others, making political changes disliked by the majority of a population but liked by a powerful minority, or developing technologies widely considered dangerous). Note that distributing governance is not enough on its own: if a company or foreign government controls all of the compute required by an AI system, it is much harder for another entity (e.g. a democratically elected government) to prevent misuse. In short, we need full decentralization.

However, building a decentralized AI system opens up opportunities for malicious use. If a few users can resort to subterfuge (e.g. hacking or bribery) to misuse the AI for their own benefit, any large-scale decentralized AI system is bound to fail. That’s why we need to develop decentralization-friendly security systems now.

 

The collective behavior argument

Aka “decentralized AI is here”

 

Every time an AI system connects to the Internet and interacts with websites, people, or other AI systems, it becomes part of a global network. This network, like any other, is vulnerable to attacks and unintentional failure modes. What makes networks of AI systems particularly dangerous is that a) they are more complex than regular computer networks, and b) right now, we have very few security systems for them. I’ll point you to the excellent survey by my colleague here, but here are a few examples of failure modes:

What sets this apart from traditional AI security contexts is that the network is decentralized: the agents are owned by a large number of independent organizations and companies that have very little incentive to hand over oversight to a central controller. The scale of the problem also grows with the size of the network (try setting up a central oversight system for the entire Internet!). Therefore, we need to develop security mechanisms now, while the number of agents is still small, rather than waiting for large-scale security threats to emerge.
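To make “decentralization-friendly security” a bit more concrete, here is a minimal, hypothetical sketch (in Python) of one possible building block: each agent authenticates messages from its peers locally, using keys exchanged pairwise, so that no central controller is consulted at verification time. The agent names, the `PeerVerifier` class and the HMAC-based scheme are illustrative assumptions, not part of any protocol we have committed to.

```python
# A minimal, hypothetical sketch: each agent verifies incoming messages
# locally against keys it has exchanged with its peers, so no central
# controller is consulted at verification time.
import hashlib
import hmac
import json


class PeerVerifier:
    """Per-agent registry of shared secrets with known peers."""

    def __init__(self):
        self._peer_keys: dict[str, bytes] = {}

    def register_peer(self, peer_id: str, shared_secret: bytes) -> None:
        # In practice the secret (or a public key) would come from an
        # out-of-band handshake; here we simply store it.
        self._peer_keys[peer_id] = shared_secret

    def verify(self, peer_id: str, payload: dict, tag: str) -> bool:
        """Check that `payload` really came from `peer_id`."""
        key = self._peer_keys.get(peer_id)
        if key is None:
            return False  # unknown peer: reject by default
        message = json.dumps(payload, sort_keys=True).encode()
        expected = hmac.new(key, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)


# Sending agent ("alice") and receiving agent ("bob") each hold the pairwise
# secret; no third party is involved when bob checks the message.
secret = b"example-shared-secret"
payload = {"action": "fetch_page", "url": "https://example.com"}
tag = hmac.new(secret, json.dumps(payload, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()

bob = PeerVerifier()
bob.register_peer("alice", secret)
assert bob.verify("alice", payload, tag)
```

A real deployment would use public-key signatures and a proper key-exchange mechanism rather than shared secrets, but the design point stands: each node can enforce its own checks without trusting a central party.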

 

Why focus on security as opposed to safety?

It’s hard for me to overstate how utterly insecure LLM-based agents are right now, even by AI standards. Most agents are deployed with no jailbreak protections, no input validation, no access control mechanisms, and no proper logging. We’re creating “agent economies” and “large-scale agent networks” without any reasonable security mechanisms, let alone safety ones.
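As a toy illustration of what “baseline security” even means here, the following Python sketch wraps a single agent tool call with the three controls mentioned above: access control, input validation, and logging. The tool names, policy table and validation rule are made up for illustration; this is a sketch of the kind of mechanism that is usually missing, not a description of any existing system.

```python
# A deliberately simplified, hypothetical sketch of baseline controls around
# a single agent "tool" call: per-caller access control, input validation,
# and logging. Tool names and policy are invented for illustration.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.tools")

# Access control: which callers may invoke which tools (illustrative policy).
TOOL_POLICY = {
    "web_fetch": {"research_agent"},
    "send_email": {"assistant_agent"},
}

# Input validation: only accept arguments matching an explicit pattern.
ALLOWED_URL = re.compile(r"^https://[\w.-]+(/[\w./-]*)?$")


def call_tool(caller: str, tool: str, argument: str) -> str:
    # 1. Access control: reject callers not allowed to use this tool.
    if caller not in TOOL_POLICY.get(tool, set()):
        log.warning("denied: caller=%s tool=%s", caller, tool)
        raise PermissionError(f"{caller} may not call {tool}")

    # 2. Input validation: reject arguments that fail the explicit pattern.
    if tool == "web_fetch" and not ALLOWED_URL.match(argument):
        log.warning("rejected argument: caller=%s tool=%s arg=%r",
                    caller, tool, argument)
        raise ValueError("argument failed validation")

    # 3. Logging: record every accepted call before executing it.
    log.info("executing: caller=%s tool=%s arg=%r", caller, tool, argument)
    return f"(pretend result of {tool} on {argument})"


# Example: an allowed call succeeds, a disallowed caller is blocked.
print(call_tool("research_agent", "web_fetch", "https://example.com/page"))
try:
    call_tool("research_agent", "send_email", "bob@example.com")
except PermissionError as exc:
    print("blocked:", exc)
```

None of this is sophisticated; the point is that even trivial controls like these are routinely absent from deployed agents.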

 

How can I help?

We have 5 fully funded visiting researcher positions that last between 6 and 12 months:

- 4 at the University of Oxford
- 1 at Stanford

These positions cover flights, accommodation, living expenses and fees. Apply here.

 

If you know someone who might be interested, please forward this post to them (or, if you prefer, we have posts on Twitter and LinkedIn).

If you’re interested in other forms of collaboration, please email collaborations@decentralized-ai.org.

If you have any questions (about the positions, the institute, decentralized AI security, or decentralized AI as a whole), let me know.