Digital Minds in 2025: A Year in Review

By Lucius Caviola, Bradford Saad, Will Millership @ 2026-01-06T23:11 (+68)

Note: This post was crossposted from The Digital Minds Newsletter, with the authors’ permission, by the EA Forum team, who encouraged the crosspost. It was briefly mentioned in an earlier announcement post. The authors may not see or respond to comments here.


Welcome to the first edition of the Digital Minds Newsletter, collating all the latest news and research on digital minds, AI consciousness, and moral status.

Our aim is to help you stay on top of the most important developments in this emerging field. In each issue, we will share a curated overview of key research papers, organizational updates, funding calls, public debates, media coverage, and events related to digital minds. We want this to be useful for people already working on digital minds as well as newcomers to the topic.

This first issue looks back at 2025 and reviews developments relevant to digital minds. We plan to release multiple editions per year.

If you find this useful, please consider subscribing, sharing it with others, and sending us suggestions or corrections to digitalminds@substack.com.

Bradford, Lucius, and Will

In this issue:

  1. Highlights
  2. Field Developments
  3. Opportunities
  4. Selected Reading, Watching, & Listening
  5. Press & Public Discourse
  6. A Deeper Dive by Area
[Image: Brain Waves, generated by Gemini]

1. Highlights

In 2025, the idea of digital minds shifted from a niche research topic to one taken seriously by a growing number of researchers, AI developers, and philanthropic funders. Questions about real or perceived AI consciousness and moral status appeared regularly in tech reporting, academic discussions, and public discourse.

Anthropic’s early steps on model welfare

Following its support for the 2024 report “Taking AI Welfare Seriously”, Anthropic expanded its model welfare efforts in 2025 and hired Kyle Fish as an AI welfare researcher. Fish discussed the topic and his work in an 80,000 Hours interview. Anthropic leadership is also taking AI welfare seriously: CEO Dario Amodei drew attention to the relevance of model interpretability to model welfare and mentioned model exit rights at the Council on Foreign Relations.

Several of the year’s most notable developments came from Anthropic: they facilitated an external model welfare assessment conducted by Eleos AI Research, included references to welfare considerations in model system cards, ran a related fellowship program, introduced a “bail button” for distressed behavior, and made internal commitments around keeping promises and discretionary compute. In addition to hiring Fish, Anthropic also hired a philosopher—Joe Carlsmith—who has worked on AI moral patiency.

The field is growing

In the non-profit space, Eleos AI Research expanded its work and organized the Conference on AI Consciousness and Welfare, while two new non-profits, PRISM and CIMC, also launched. AI for Animals rebranded to Sentient Futures, with a broader remit including digital minds, and Rethink Priorities refined their digital consciousness model.

Academic institutions undertook novel research (see below) and organized important events, including workshops run by the NYU Center for Mind, Ethics, and Policy, the London School of Economics, and the University of Hong Kong.

In the private sector, Anthropic has been leading the way (see section above), but others have also been making strides. Google researchers organized an AI consciousness conference, three years after the company fired Blake Lemoine. AE Studio expanded its research into subjective experiences in LLMs. And Conscium launched an open letter encouraging a responsible approach to AI consciousness.

Philanthropic actors have also played a key role this year. The Digital Sentience Consortium, coordinated by Longview Philanthropy, issued the first large-scale funding call specifically for research, field-building, and applied work on AI consciousness, sentience, and moral status.

Early signs of public discourse

Media coverage of AI consciousness, seemingly conscious behavior, and phenomena such as “AI psychosis” increased noticeably. Much of the debate focused on whether emotionally compelling AI behavior poses risks, often assuming consciousness is unlikely. High-profile comments, such as those by Mustafa Suleyman, and widespread user reports added to the confusion, prompting a group of researchers (including us) to create the WhenAISeemsConscious.org guide. In addition, major outlets such as the BBC, CNBC, The New York Times, and The Guardian published pieces on the possibility of AI consciousness.

Research advances

Patrick Butlin and collaborators published a theory-derived indicator method for assessing AI systems for consciousness, updating their 2023 report. Empirical work by Anthropic researcher Jack Lindsey explored the introspective capacities of LLMs, as did work by Dillon Plunkett and collaborators. David Chalmers released papers on interpretability and what we talk to when we talk to LLMs. In our own research, we conducted an expert forecasting survey on digital minds, finding that most experts assign at least a 4.5% probability to conscious AI already existing in 2025 and at least a 50% probability to conscious AI arriving by 2050.


2. Field Developments

Highlights from some of the key organizations in the field.

NYU Center for Mind, Ethics, and Policy

Eleos AI

Rethink Priorities

Longview Philanthropy

Global Priorities Institute

PRISM - The Partnership for Research into Sentient Machines

Sentience Institute

Sentient Futures

Other noteworthy organizations


3. Opportunities

If you are considering moving into this space, here are some entry points that opened or expanded in 2025. We will use future issues to track new calls, fellowships, and events as they arise.

Funding and fellowships

Events and networks

Calls for papers


4. Selected Reading, Watching, & Listening

Books

In 2025, the following book drafts were posted and the following books were published or announced:

Podcasts

This year, we’ve heard many podcast guests discuss topics related to digital minds, and we’ve also listened to podcasts dedicated entirely to the topic.

Videos

Blogs and magazines


5. Press & Public Discourse

In 2025, there was an uptick in public discussion of AI consciousness, with articles in the mainstream press and prominent figures weighing in. Below are some of the key pieces.

AI Welfare

Is AI consciousness possible?

Growing Field

Seemingly Conscious AI


6. A Deeper Dive by Area

Below is a deeper dive by area, covering a longer list of developments from 2025. This section is designed for skimming, so feel free to jump to the areas most relevant to you.

Governance, policy, and macrostrategy

Consciousness research

Doubts about digital minds

Social science research

Ethics and digital minds

AI safety and AI welfare

AI and robotics developments

AI cognition and agency

Brain-inspired technologies


Thank you for reading! If you found this article useful, please consider subscribing, sharing it with others, and sending us suggestions or corrections to digitalminds@substack.com.

Bradford, Lucius, and Will


Kairos @ 2026-01-07T16:01 (+6)

If you're interested in contributing to this space, you should check out the SPAR AI welfare projects! 

Some of them include: 

Larissa Schiavo, Jeff Sebo, and Toni Sims on: Should We Give AIs a Wallet? Toward a Framework for AI Economic Rights

Jeff Sebo, Diana Mocanu, Visa Kurki, and Toni Sims on: Preparing for AI Legal Personhood: Ethical, Legal, and Political Considerations

Arvo Munoz Moran on: Exploring Bayesian methods for modelling AI consciousness in light of state-of-the-art evidence and literature

Check them out and others here: sparai.org/projects/sp26