What were the mistakes of AI Safety field-building? How can we avoid them while building the AI Welfare field?

By guneyulasturker 🔸 @ 2025-10-27T09:08 (+37)

AI Safety is a field that emerged from the EA community. EAs made the first significant efforts to build it, and I'd say they have done a pretty good job overall: the number of people working on AI Safety has been growing rapidly. However, many believe the field-building efforts were far from perfect. One example is that the talent pipeline became over-optimized for producing researchers.

We now seem to be entering a similar stage with AI Welfare. Following Longview's recent grants, the number of people working on issues related to digital sentience is starting to grow. Field-building efforts are likely to begin during this period, and we may soon see a new area, closely connected to EA, develop into an independent field, much like AI Safety did.

AI Welfare is a complex area. Empirical research is difficult and requires deep philosophical reasoning. Public communication risks being dismissed as fringe and is vulnerable to many failure modes. The public is already forming its own views, and there are many potential pitfalls ahead.

As someone who is planning to found a field-building organization focused on AI Welfare, I found this post quite helpful and wanted to gather more input.

So what are some mistakes you think were made during AI Safety field-building? How can we avoid repeating them while developing the AI Welfare field?

If you would prefer, we can also have a short chat and I will write your comments up here.