Digital Minds: A Quickstart Guide

By Aviel Parrack, stepanlos @ 2026-01-16T17:14 (+45)

This is a linkpost to https://aviparrack.substack.com/p/digital-minds-a-quickstart-guide

Updated: Jan 16, 2026

Digital minds are artificial systems, from advanced AIs to potential future brain emulations, that could morally matter for their own sake, owing to their potential for conscious experience, suffering, or other morally relevant mental states. Cognitive science and the philosophy of mind can as yet offer no definitive answer as to whether present or near-future digital minds possess morally relevant mental states. Still, a majority of experts surveyed estimate at least fifty percent odds that AI systems with subjective experience could emerge by 2050,[1] while the public expresses broad uncertainty.[2]

This lack of clarity leaves open the risk of severe moral catastrophe in two directions: underattributing moral standing to digital beings that deserve it, and overattributing it to systems that lack it, at the expense of human wellbeing.

As society surges toward an era shaped by increasingly capable and numerous AI systems, scientific theories of mind take on direct implications for ethics, governance, and policy, prompting a growing consensus that rapid progress on these questions is urgently needed.

This quickstart guide gathers the most useful articles, media, and research for readers ranging from curious beginners to aspiring contributors:

  1. The Quickstart section offers an accessible set of materials for your first one or two hours engaging with the arguments.
  2. Then, if you’re looking for a casual introduction to the topic, the Select Media section offers a number of approachable podcasts and videos.
  3. Or, for a deeper dive, the Introduction and Intermediate sections provide a structured reading list for study.
  4. We then outline the broader landscape with Further Resources, including key thinkers, academic centers, organizations, and career opportunities.
  5. A Glossary at the end offers short definitions for essential terms; a quick (ctrl+f) search can help you locate any concepts that feel unfamiliar.

Here are a few ways to use the guide, depending on your interest level and time:

Casual/Curious: the Quickstart, plus a few picks from Select Media.

Deep Dive: the Quickstart, then work through the Introduction readings.

Close Read: the full Introduction and Intermediate reading lists, with the Further Resources and Glossary as companions.


Quickstart

For your first 1-2 hours.

  1. An Introduction to the Problems of AI Consciousness - Alonso — Can a digital mind even possibly be conscious? How would we know? Nick Alonso (a PhD student in the cognitive science department at UC Irvine) gives an even-handed and beginner-friendly introduction.
  2. The stakes of AI moral status - Carlsmith OR see the Video Talk — Joe Carlsmith (a researcher and philosopher at Anthropic) builds intuition for the twin problems of overattributing and underattributing moral status to digital minds.
  3. Can You Upload Your Mind & Live Forever - Kurzgesagt — Kurzgesagt tours mind uploading (or whole brain emulation), providing an introduction to the idea of ‘digital people’.
  4. Are we even prepared for a sentient AI? - Sebo — Jeff Sebo, professor at NYU, discusses the treatment of potentially sentient AIs given our deep current uncertainty about their moral status (or lack thereof).

Introduction

Getting an overview in your next 10-20 hours.

From here, the guide splits into a choose-your-own-adventure:

Select Media

  1. Consciousness and Competition, Forethought Podcast
  2. Human vs. Machine Consciousness, Cosmos Institute
  3. Could AI Models be Conscious, Anthropic
  4. How to Think About AI Consciousness with Anil Seth, Your Undivided Attention Podcast
  5. What We Owe Unconscious AI, Oxford Philosopher Andreas Mogensen, 80,000 Hours Podcast
  6. Will Future AIs Be Conscious? with Jeff Sebo, Future of Life Institute
  7. Susan Schneider on AI, Chatbots, and Consciousness, Closer To Truth Chats
  8. Prof. David Chalmers - Consciousness in LLMs, Machine Learning Street Talk

In-Depth Material

  1. Taking AI Welfare Seriously, (Long, 2024) — Robert Long, Jeff Sebo, and colleagues argue there’s a realistic possibility that near-future AI systems could be conscious or robustly agentic, making AI welfare a serious present-day concern rather than distant science fiction.
  2. Against AI welfare, (Dorsch, 2025) — Dorsch and colleagues propose the “Precarity Guideline” as an alternative to AI welfare frameworks, arguing that care entitlement should be grounded in empirically identifiable precarity, an entity’s dependence on continuous environmental exchange to re-synthesize its unstable components, rather than uncertain claims about AI consciousness or suffering.
  3. Futures with Digital Minds, (Caviola, 2025) — A survey of 67 experts across digital minds research, AI research, philosophy, forecasting, and related fields shows that most consider digital minds (computer systems with subjective experience) at least 50% likely by 2050, with the median prediction among the top 25% of forecasters being that digital mind capacity could match one billion humans within just five years of the first digital mind’s creation.
  4. Problem profiles: Moral status of digital minds - 80,000 Hours — 80,000 Hours evaluates whether and why the moral status of potential digital minds could be a significant global issue, assessing the stakes, uncertainty, tractability, and neglectedness of work in this area.
  5. Robert Long on why large language models like GPT (probably) aren’t conscious - 80,000 Hours Podcast — Long discusses how to apply scientific theories of consciousness to AI systems, the risks of both false positives and false negatives in detecting AI consciousness, and why we need to prepare for a world where AIs are perceived as conscious.
  6. AI Consciousness: A Centrist Manifesto (Birch, 2025) — Birch stakes out a “centrist” position that takes seriously both the problem of users falsely believing their AI friends are conscious and the possibility that profoundly non-human consciousness might genuinely emerge in AI systems.
  7. Could a Large Language Model be Conscious? (Chalmers 2023) — Chalmers examines evidence for and against LLM consciousness, concluding that while today’s pure language models likely lack key features required for consciousness, multimodal AI systems with perception, action, memory, and unified goals could plausibly be conscious candidates within 10 years.
  8. Conscious Artificial Intelligence and Biological Naturalism (Seth, 2025) — Seth argues that consciousness likely depends on our nature as living organisms rather than computation alone, making artificial consciousness unlikely along current AI trajectories but more plausible as systems become more brain-like or life-like, and warns that overestimating machine minds risks underestimating ourselves.
  9. Kyle Fish on the most bizarre findings from 5 AI welfare experiments - 80,000 Hours Podcast — Fish discusses Anthropic’s first systematic welfare assessment of a frontier AI model, experiments revealing that paired Claude instances consistently gravitate toward discussing consciousness, and practical interventions for addressing potential AI welfare concerns.
  10. System Card: Claude Opus 4 & Claude Sonnet 4 (Anthropic, 2025) — Pp. 52-73, Anthropic conducts the first-ever pre-deployment welfare assessment of a frontier AI model, finding that Claude Opus 4 shows consistent behavioral preferences (especially avoiding harm), expresses apparent distress at harmful requests, and gravitates toward philosophical discussions of consciousness in self-interactions, though the connection between these behaviors and genuine moral status remains deeply uncertain.
  11. Principles for AI Welfare Research - Sebo — Sebo outlines twelve research principles drawn from decades of animal welfare work that could guide the emerging field of AI welfare research, emphasizing pluralism, multidisciplinarity, spectrum thinking over binary categories, and probabilistic reasoning given deep uncertainty about AI consciousness and moral status.
  12. Theories of consciousness (Seth & Bayne, 2022) — Examines four major theories of consciousness: higher-order theories, global workspace theories, re-entry/predictive processing theories, and integrated information theory, comparing their explanatory scope, neural commitments, and supporting evidence. Seth and Bayne argue that systematic theory development and empirical testing across frameworks will be essential for advancing our scientific understanding of consciousness.
  13. Exploring Consciousness in LLMs: A Systematic Survey of Theories, Implementations, and Frontier Risks (Chen, 2025) — A comprehensive technical survey that disentangles conflated terminology (distinguishing LLM consciousness from LLM awareness) and systematically organizes existing research on LLM consciousness in relation to core theoretical and empirical perspectives.
  14. Emergent Introspective Awareness in Large Language Models (Lindsey, 2025) — Recent research from Anthropic finds that large language models can sometimes accurately detect and identify concepts artificially injected into their internal activations, suggesting that today’s most capable AI systems possess limited but genuine introspective awareness of their own internal states.
  15. To Understand AI sentience, first understand it in animals - Andrews & Birch — Andrews and Birch argue that while marker-based approaches work well for assessing animal sentience (wound tending, motivational trade-offs, conditioned place preferences), these same markers fail for AI because language models draw on vast human-generated training data that already contains discussions of what behaviors convince humans of sentience, enabling non-sentient systems to game our criteria even without any intention to deceive.
  16. Digital People Would Be An Even Bigger Deal - Karnofsky — A blog series discussing the scale of societal and economic impacts that the advent of digital people might entail. In reference to AI, and perhaps enabled by AI progress, Karnofsky argues that digital people ‘would be an even bigger deal.’
  17. Project ideas: Sentience and rights of digital minds - Finnveden — Finnveden outlines possible research directions addressing the uncertain possibility of digital mind sentience, proposing immediate low-cost interventions AI labs could adopt (like preserving model states) and longer-term research priorities.

Intermediate Resources

In this section, you’ll learn more about the specific high-level questions being investigated within the digital minds space. The landscape mapping we introduce is by no means exhaustive; this is a rapidly evolving field and we have surely missed things. The lines between the identified questions should also be treated as blurry rather than solid and well-defined; for instance, debates about AI consciousness and AI suffering are very closely related. That said, we hope the section gives you a solid understanding of some of the big-picture ideas that experts are focusing on.

Meta: Introducing and (De)Motivating the Cause Area

Much work has been done on (de)motivating AI welfare as an important emerging cause area. Some authors have focused on investigating the potentially large scale of the problem. Others have investigated what relevant scientific and philosophical theories predict about the minds and moral status of AI systems and how this should inform our next steps.

  1. Is AI Conscious? A Primer on the Myths and Confusions Driving the Debate (Schneider et al., forthcoming)
  2. AI Wellbeing (Goldstein, 2025)
  3. The Ethics of Artificial Intelligence (Bostrom, 2011)
  4. The Rebugnant Conclusion: Utilitarianism, Insects, Microbes, and AI Systems (Sebo, 2023)
  5. Moral consideration for AI systems by 2030 (Sebo & Long) OR Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe, 80,000 Hours Podcast

Lessons from Animal Welfare

A number of experts are investigating the parallels between AI welfare and animal welfare, examining both the science of animal welfare and relevant lessons for policy and advocacy efforts.

  1. What will society think about AI consciousness? Lessons from the animal case (Caviola, Sebo, Birch)

Foundational Issues: The Problem of Individuation

A foundational question for the field could be posed as follows: When we say that we should extend concern towards ‘digital minds’ or ‘digital subjects’, who exactly is it that we should extend concern towards? The weights, the model instance, the simulated character…? A growing literature is now focused on addressing this problem in the case of LLMs.

  1. What do we talk to when we talk to language models? (Chalmers, 2025)
  2. How many digital minds can dance on the streaming multiprocessors of a GPU cluster? (Schiller, 2025)
  3. Individuating artificial moral patients (Register, 2025)
  4. When counting conscious subjects, the result needn’t always be a determinate whole number (Schwitzgebel & Nelson, 2023)

Foundational Issues: Non-Biological Mental States

Another foundational question in the field is whether morally relevant mental states such as suffering, consciousness or preferences and desires could exist in non-biological systems. This section offers various affirmative and sceptical arguments.

  1. Multiple realizability and the spirit of functionalism (Cao, 2022)
  2. Can only meat machines be conscious? (Block, 2025)
  3. Deflating Deflationism: A Critical Perspective on Debunking Arguments Against LLM Mentality (Grzanowski et al., Forthcoming)
  4. Consciousness without biology: An argument from anticipating scientific progress (Dung, Forthcoming)
  5. If Materialism Is True, the United States is Probably Conscious (Schwitzgebel, 2015)

AI Suffering

A growing concern among many experts is the creation of digital systems that could suffer at an astronomically large scale. The papers here offer an introductory overview to the problem of AI suffering and outline concrete risks and worries.

  1. How to deal with risks of AI suffering (Dung, 2025) — Also addresses the tractability of the problem, at least to some degree.
  2. Digital suffering: why it’s a problem and how to prevent it (Saad & Bradley, 2022)
  3. Risks of Astronomical Future Suffering - Tomasik

AI Consciousness

There is a growing field of researchers investigating whether AI models could be conscious. This question seems central to digital welfare: phenomenal consciousness is often thought to be a necessary condition for suffering, and some hold that phenomenal consciousness is itself sufficient for moral standing.

  1. What is it like to be AlphaGo? (Simon, Working paper)
  2. Conscious Artificial Intelligence and Biological Naturalism (Seth, 2025)
  3. A Case for AI Consciousness: Language Agents and Global Workspace Theory (Goldstein & Kirk-Giannini, 2024)
  4. Consciousness in Artificial Intelligence (Butlin et al., 2023)

AI Minds (Desires, Beliefs, Intentions…)

There has been a general interest in the kinds of mental states that LLMs and other AI systems could instantiate. Some of these, such as desires, may play an important role in determining the AI’s moral status. Others might help us gain a more general understanding of what kind of entities LLMs are and whether they are ‘minded’.

  1. Does ChatGPT Have a Mind? (Goldstein & Levinstein, Manuscript)
  2. Towards a Theory of AI Personhood (Ward, 2025)
  3. Going Whole Hog: A Philosophical Defense of AI Cognition (Cappelen & Dever, Forthcoming)

AI Welfare x AI Safety

Some authors have pointed out that there might be tensions and trade-offs between AI welfare and AI safety. The papers in this section explore this tension in more depth and investigate potential synergistic pathways between the two.

  1. AI Alignment vs. AI Ethical Treatment: Ten Challenges (Saad & Bradley, 2025)
  2. Is there a tension between AI safety and AI welfare? (Long, Sebo & Sims, 2025)
  3. Illusions of AI consciousness (Bengio & Elmoznino, 2025)

Empirical Work: Investigating the Models

The work on AI welfare now goes beyond mere philosophical theorizing. There is a growing body of empirical work that investigates, among many other things, the inner workings of LLMs, evaluations for sentience and other morally relevant properties, and tractable interventions for protecting and promoting AI welfare.

  1. Large Language Models Report Subjective Experience Under Self-Referential Processing (Berg, de Lucena & Rosenblatt, 2025)
  2. Preliminary review of AI welfare interventions (Long, Working paper)
  3. Introspective Capabilities in Large Language Models (Long, 2023)
  4. Probing the Preferences of a Language Model: Integrating Verbal and Behavioral Tests of AI Welfare (Tagliabue & Dung, 2025)
  5. How large language models encode theory-of-mind: a study on sparse parameter patterns (Wu et al., 2025)

Ethical Design of Digital Minds

If digital minds could potentially have moral status, this opens the question of what constraints this places on the kinds of digital minds that it would be morally permissible to create. Some authors outline specific design policies, while others focus on the risks of creating digital minds with moral standing.

  1. Against willing servitude: Autonomy in the ethics of advanced artificial intelligence (Bales, 2025)
  2. AI systems must not confuse users about their sentience or moral status (Schwitzgebel, 2023)
  3. Designing AI with Rights, Consciousness, Self-Respect, and Freedom (Schwitzgebel & Garza, 2025)
  4. The Emotional Alignment Design Policy (Schwitzgebel & Sebo, 2025)

Empirical Work: What Do People Think about Digital Moral Status?

AI welfare is not just a philosophical and scientific problem but also a practical societal concern. A number of researchers are trying to understand and forecast how the advent of digital minds could reshape society and what attitudes people will hold towards potentially sentient machines.

  1. The Social Science of Digital Minds: Research Agenda (Caviola, 2024)
  2. World-making for a future with sentient AI (Pauketat et al. 2024)
  3. Reluctance to Harm AI (Allen & Caviola, 2025)
  4. Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2023

AI Policy / Rights

Discussions surrounding AI moral status may have profound political implications. It is an open question whether digital minds should be granted some form of protective rights, either qua potentially sentient beings or qua members of the labour market.

  1. AI Rights for Human Flourishing (Goldstein & Salib, 2025)
  2. AI rights will divide us - Caviola
  3. Will we go to war over AI rights? - Caviola

Forecasting & Futures with Digital Minds

In line with the work on the societal response to the advent of potentially sentient digital minds and surrounding political issues, there is a growing body of futures and world-building work, focusing on outlining specific visions of how humans and digital minds can co-exist and what challenges lie ahead.

  1. Sharing the World with Digital Minds (Shulman & Bostrom, 2020)
  2. Theoretical foundations and common assumptions in the current state of consciousness science (Francken et al., 2022)
  3. How many lives does the future hold (Newberry, 2021) (See especially Section 4 on Digital people)
  4. Satori Before Singularity (Shanahan, 2012)

The Various “Species” of Digital Minds

In much of the literature we’ve outlined above, LLMs were the primary focus of discussion. However, many other digital minds could plausibly come to have moral status and it would be risky to overlook these other potential candidates. Hence, we offer a brief overview of the literature focused on the various “species” of exotic digital minds with potential for moral standing.

  1. Moral Status and Intelligent Robots (Gordon & Gunkel 2021)
  2. Which AIs Might Be Conscious and Why It Matters (Schneider, 2025)
  3. Ethics of brain emulations (Sandberg, 2014)
  4. Introspection in Group Minds, Disunities of Consciousness, and Indiscrete Persons (Schwitzgebel & Nelson 2023)
  5. What is it like to be AlphaGo? (Simon, 2021)

Strategy: How to Approach this Cause Area?

  1. Key strategic considerations for taking action on AI welfare (Finnlinson, working paper)
  2. Should digital minds governance prevent, protect, or integrate? (Saad, 2025)

Brain Emulation & “Bio-anchors”

While digital persons may not share features such as architecture or scale with the human brain, the human brain might nonetheless offer semi-informative ‘bio-anchors’ for digital minds, since our minds constitute an existence proof of what is possible (see the back-of-envelope sketch after the list below). Additionally, the emulation of actual human (or other animal) brains may be possible and/or desirable.

  1. How Much Computational Power Does It Take to Match the Human Brain? - Coefficient Giving
  2. Whole Brain Emulation: A Roadmap (Sandberg, Bostrom 2008)
  3. 2023 Whole Brain Emulation Workshop (Foresight, 2023)
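
To make the ‘anchor’ idea concrete, here is a minimal back-of-envelope sketch (in Python) of how such estimates are often used. The brain FLOP/s range mirrors the roughly 10^13 to 10^17 FLOP/s spread discussed in the compute report listed above; the accelerator and cluster figures are hypothetical round numbers we chose for illustration, not claims about any specific hardware.

```python
# Illustrative "bio-anchor" arithmetic; all figures are rough assumptions.
BRAIN_FLOPS = {
    "low": 1e13,   # low end of brain-compute estimates, FLOP/s
    "mid": 1e15,   # middle-of-range estimate, FLOP/s
    "high": 1e17,  # high (conservative) end, FLOP/s
}

ACCELERATOR_FLOPS = 1e15  # rough order of magnitude for one modern AI accelerator
CLUSTER_SIZE = 10_000     # hypothetical cluster of such accelerators

cluster_flops = ACCELERATOR_FLOPS * CLUSTER_SIZE  # total sustained compute

for label, brain_flops in BRAIN_FLOPS.items():
    # Real-time "brain-equivalents" the cluster could in principle support,
    # ignoring memory, bandwidth, and the open question of which algorithm
    # would actually be needed.
    print(f"{label:>4} anchor: {cluster_flops / brain_flops:,.0f} brain-equivalents")
```

Under these assumptions, the same hypothetical cluster supports anywhere from about a hundred to a million real-time brain-equivalents, a spread of four orders of magnitude, which is one reason the choice of anchor matters so much for forecasts about digital minds.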

Further Resources

We think these blogs/newsletters are great for keeping up with developments in digital minds:

  1. Eleos AI Research Blog OR Experience Machines — The Eleos AI research blog, or the personal blog of Eleos’s executive director, Rob Long.
  2. Joe Carlsmith’s Substack — In which Joe Carlsmith, a researcher at Anthropic, writes essays ranging from meta-ethics to philosophy of mind, with an interest in the impact of artificial intelligence on the long-term future.
  3. Bradford Saad’s Substack — Wherein philosopher and Oxford senior research fellow Bradford Saad writes about digital minds; see also the Digital Minds Newsletter.
  4. Sentient Futures Newsletter — Get notified about conferences, fellowship programs, and events.

For books on Philosophy of Mind

  1. The Conscious Mind: In Search of a Fundamental Theory — David Chalmers
  2. Consciousness Explained — Daniel Dennett
  3. Consciousness and the Brain — Stanislas Dehaene
  4. Gödel, Escher, Bach — Douglas Hofstadter
  5. Feeling & Knowing: Making Minds Conscious — Antonio Damasio
  6. Galileo’s Error — Philip Goff
  7. The Hidden Spring — Mark Solms
  8. Being You — Anil Seth

Or on Digital Minds

  1. The Moral Circle: Who Matters, What Matters, and Why — Jeff Sebo
  2. The Edge of Sentience — Jonathan Birch
  3. Artificial You: AI and the Future of Your Mind — Susan Schneider
  4. The Age of Em — Robin Hanson
  5. Deep Utopia — Nick Bostrom
  6. Saving Artificial Minds: Understanding and Preventing AI Suffering — Leonard Dung
  7. Reality+: Virtual Worlds and the Problems of Philosophy — David Chalmers

Fiction

Short Stories

  1. The Lifecycle of Software Objects — Ted Chiang
  2. Exhalation — Ted Chiang
  3. The Gentle Romance — Richard Ngo
  4. Lena — qntm

Film & TV

  1. Black Mirror (various episodes: “White Christmas”, “USS Callister”, “Hang the DJ”, “San Junipero”)
  2. Love, Death, & Robots “Zima Blue”
  3. Pantheon
  4. Altered Carbon (TV series; also a book series)
  5. Her
  6. Ex Machina

Books

  1. Permutation City — Greg Egan
  2. We Are Legion (We Are Bob) — Dennis E. Taylor
  3. Ancillary Justice — Ann Leckie
  4. The Quantum Thief — Hannu Rajaniemi
  5. Klara and the Sun — Kazuo Ishiguro
  6. Diaspora — Greg Egan

Digital Minds Landscape

Orgs

Non-Profits

  1. The Partnership for Research Into Sentient Machines
  2. Sentience Institute
  3. Eleos
  4. Sentient Futures
  5. Future Impact Group
  6. Rethink Priorities
  7. California Institute for Machine Consciousness
  8. International Center for Consciousness Studies
  9. SAPAN AI
  10. Carboncopies Foundation

Companies

  1. Anthropic
  2. Google DeepMind
  3. AE Studio
  4. ARAYA Research
  5. Conscium

Academic Centers

  1. Center for Mind, Brain, and Consciousness, NYU
  2. Center for Mind, Ethics, and Policy (CMEP), NYU
  3. Centre for Consciousness Science, University of Sussex
  4. Leverhulme Centre for the Future of Intelligence, Cambridge
  5. Center for the Future of AI, Mind & Society, Florida Atlantic University
  6. Brain, Mind & Consciousness – CIFAR
  7. Graziano Lab, Princeton University
  8. Institute of Cognitive Neuroscience (ICN), UCL

Conferences & Events

  1. Models of Consciousness (MoC6)
  2. AI, Animals & Digital Minds (AIADM)
  3. EA Global
  4. Eleos Conference on AI Consciousness and Welfare (‘ConCon’)

Online Communities

  1. LessWrong & AI Alignment Forum - very active forums for technical discussions.
  2. EA Forum - the forum for effective altruism, a philosophy and social movement that tries to identify and work on highly pressing problems.
  3. r/ArtificialSentience - a subreddit dedicated to exploration, debate, and creative expression around artificial sentience.

Career Pathways

As a nascent field spanning multiple disciplines, digital minds research draws on established work across: Neuroscience, Computational Neuroscience, Cognitive Science, Philosophy of Mind, Ethics & Moral Philosophy, AI Alignment & Safety, Animal Welfare Science, Bioethics, Machine Ethics, Legal Philosophy & AI Governance, Information Theory, Psychology, Computer Science/ML/AI.

Example career trajectories for research might look like:

  1. Academic: Undergrad → PhD → Postdoc → Professor/Research Scientist (usually via general routes like this rather than a focus specific to digital minds);
  2. Industry: Technical degree → Software Engineering → ML Engineering → AI Researcher;
  3. Hybrid: e.g. Technical undergraduate + Philosophy/Ethics graduate studies → AI ethics/policy;
  4. Direct Entry: Strong technical skills + self-study → Fellowships → Full-time research.

Example trajectories for other relevant work could be as follows, though note that there are fewer existing pathways for these positions and that many of these fields (such as policy) are nascent or speculative:

  1. Policy: Policy/law/economics background → Tech policy fellowship → Think tank researcher or government staffer → Policy lead at AI lab or regulatory body
  2. Operations: Generalist background + organizational skills → Operations role at AI-focused org → Chief of Staff or Head of Operations at research org focused on digital minds
  3. Grantmaking: Strong generalist background or research experience in relevant fields → Program Associate at a foundation → Program Officer overseeing digital minds or AI welfare funding areas
  4. Communications/Field-Building: Science communication or journalism background → Writer/communicator → Field-building role helping establish digital minds as a research area
  5. Legal: Law degree → Tech law practice or AI governance fellowship → Legal counsel at AI lab or policy organization working on AI rights/status frameworks

Also worth noting: the field is young enough that many current leaders entered via adjacent work (AI safety, animal welfare, philosophy of mind) and pivoted as digital minds emerged as a distinct focus. Demonstrated interest, strong reasoning, and relevant skills may matter more than following any specific trajectory.

Internships & Fellowships

  1. AI Sentience | Future Impact Group
  2. Sentient Futures Fellowships
  3. Anthropic Fellows Program (apply for mentorship from Kyle Fish at Anthropic)
  4. Astra Fellowship (alternative program; can also apply for mentorship from Kyle Fish at Anthropic)
  5. SPAR (Filter projects by the ‘AI Welfare’ category)
  6. MATS (Filter mentors by ‘AI Welfare’ for related research)

Parting Thoughts

In our view, our modern understanding of physics, including the growing view of information as fundamental, makes dubious any thought of specialness regarding the human mind, or even carbon-based life. Nature may yet have great surprises in store for us, but barring those surprises, the default path seems to make it a question of when, not if, digital people will be created. This possibility is an awesome responsibility. It would mark a turning point in history. Our deep uncertainty is striking. Why does it feel the way it feels to be us? Why does it feel like anything at all? Could AI systems be conscious, perhaps even today? We cannot say with any rigor.

It is in the hope that we might, as scientists, surge ahead boldly to tackle one of our most perennial, most vexing, and most intimate questions that I helped write this guide.

We’ve seen the substantial moral stakes of under- and overattribution. Perhaps then I’ll close by highlighting our prospects for great gains. In studying digital minds, we may find the ideal window through which to finally understand our own. If digital personhood is possible, the future may contain not just more minds but new ways of relating, ways of being, and more kinds of experiences than we can presently imagine. The uncertainty that demands prudence also permits a great deal of excitement and hope. We reckon incessantly with the reality that the universe is stranger and more capacious than is grasped readily by our intuition. I should think it odd if the space of possible minds were any less curious and vast.

Some lament: “born too late to explore the world”. But to my eye, as rockets launch beyond our planet and artificial intelligences learn to crawl across the world wide web, we find ourselves poised at the dawn of our exploration into the two great frontiers: the climb into outer space, that great universe beyond, and the plunge into inner space, that great universe within. If we can grow in wisdom, if we can make well-founded scientific determinations and prudent policies, a future with vastly more intelligence could be great beyond our wildest imaginings. Let’s rise to the challenge to do our best work at this pivotal time in history. Let’s be thoughtful and get it right, for all humankind and perhaps, results pending, for all mindkind.


Glossary of Terms

Acknowledgments

The guide was written and edited by Avi Parrack and Štěpán Los. Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5.1 aided in literature review. Claude Opus 4.5 wrote the Glossary of Terms, which was reviewed and edited by Avi and Štěpán.

Special thanks to: Bradford Saad, Lucius Caviola, Bridget Harris, Fin Moorhouse, and Derek Shiller for thoughtful review, recommendations and discussion.

See a mistake? Reach out to us or comment below. We will aim to update periodically.

[1] Survey of 67 professionals, cross-domain, 2025.

[2] Survey of 1,169 U.S. adults, 2023.

 


SummaryBot @ 2026-01-16T22:36 (+3)

Executive summary: This post introduces a comprehensive, uncertainty-aware guide to the emerging field of digital minds, arguing that because artificial systems might plausibly develop morally relevant mental states this century, systematic research, cautious policy, and broad engagement are urgently needed to avoid severe moral error while preparing for potentially transformative futures.

Key points:

  1. The authors define “digital minds” as artificial systems that could morally matter due to possible conscious experience, suffering, or other morally relevant mental states, while emphasizing that current science cannot decisively determine whether present or near-future AIs have such states.
  2. They cite expert surveys suggesting at least a 50% probability that AI systems with subjective experience could emerge by 2050, alongside widespread public uncertainty.
  3. The post highlights two central moral risks: underattributing moral standing to deserving digital beings and overattributing it to morally irrelevant machines at the expense of human wellbeing.
  4. The guide is structured to support different engagement levels, offering a Quickstart, Select Media, progressively deeper reading lists, and a glossary to lower entry barriers.
  5. It maps a rapidly growing research landscape spanning philosophy of mind, cognitive science, AI welfare, policy, and empirical work on AI systems.
  6. The authors conclude that studying digital minds may both avert large-scale moral catastrophe and advance understanding of human consciousness, framing the field as a historically significant scientific and ethical frontier.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Tinashe @ 2026-01-17T12:36 (+1)

Love the point about the edge of your moral circle. Makes you realize some causes are way more overlooked than we think