Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?
By Bob Fischer, Adam Shriver, Michael St Jules @ 2022-12-05T11:56 (+92)
Key Takeaways
- The Conscious Subsystems Hypothesis ("Conscious Subsystems," for short) says that brains have subsystems that realize phenomenally conscious states that aren't accessible to the subjects we typically associate with those brains—namely, the ones who report their experiences to us.
- Given that humans' brains are likely to support more such subsystems than animals' brains, EAs who have explored Conscious Subsystems have suggested that it provides a reason for risk-neutral expected utility maximizers to assign more weight to humans relative to animals.
- However, even if Conscious Subsystems is true, it probably doesn't imply that risk-neutral expected utility maximizers ought to allocate neartermist dollars to humans instead of animals. There are three reasons for this:
- If humans have conscious subsystems, then animals probably have them too, so taking them seriously doesn't increase the expected value of, say, humans over chickens as much as we might initially suppose.
- Risk-neutral expected utility maximizers are committed to assumptions—including the assumption that all welfare counts equally, whoever's welfare it is—that support the conclusion that the best animal-focused neartermist interventions (e.g., cage-free campaigns) are many times better than the best human-focused neartermist interventions (e.g., bednets).
- Independently, note that the higher our credences in the theories of consciousness that are most friendly to Conscious Subsystems, the higher our credences ought to be in the hypothesis that many small invertebrates are sentient. So, insofar as we're risk-neutral expected utility maximizers with relatively high credences in Conscious Subsystems-friendly theories of consciousness, it's likely that we should be putting far more resources into investigating the welfare of the world's small invertebrates.
- We assign very low credences to claims that ostensibly support Conscious Subsystems.
- The appeal of the idea that standard theories of consciousness support Conscious Subsystems may be based on not distinguishing (a) theories that are just designed to make predictions about when people will self-report having conscious experiences of a certain type (which may all be wrong, but have whatever direct empirical support they have) and (b) theories that are attempts to answer the so-called "hard problem" of consciousness (which only have indirect empirical support and are far more controversial).
- Standard versions of functionalism say that states are conscious when they have the right relationships to sensory stimulations, other mental states, and behavior. But it's highly unlikely that many groups of neurons stand in the correct relationships, even if they perform functions that, in the abstract, seem as complex and sophisticated as those performed by whole brains.
- Ultimately, we do not recommend acting on Conscious Subsystems at this time.
Introduction
This is the fifth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to assess a hypothesis that's been advanced by several members of the EA community: namely, that brains have subsystems that realize phenomenally conscious states that aren't accessible to the subjects we typically associate with those brains (i.e., the ones who report their experiences to us; see, e.g., Tomasik, 2013-2019, Shiller, 2016, Muehlhauser, 2017, Shulman, 2020, Crummett, 2022).[1]
If there are such states, then we might think that there is more than one conscious subject per brain, each supported by some neural subsystem or other.[2] Let's call this the Conscious Subsystems Hypothesis (or Conscious Subsystems, for short).

Conscious Subsystems could affect how we ought to make tradeoffs between members of different species. Suppose, for instance, that the number of these subsystems scales proportionally with neuron count. A human has something like 86 billion neurons in her brain; a chicken, 220 million. So, if there are conscious subsystems in various brains, there could be roughly 400 times as many in humans as in chickens. If we were to assume that every subsystem is in pain when the main system reports pain, then it could work out that in a case where a human and a chicken appear to have comparable pain levels, it's nevertheless true that there is roughly 400 times as much pain in the human as in the chicken. Given the aim of maximizing expected welfare and all else equal, it could follow that it's roughly 400 times more important to alleviate the human's pain than the chicken's pain. It matters, then, whether Conscious Subsystems is true.
Accordingly, this post:
- Develops one argument for Conscious Subsystems;
- Explains why Conscious Subsystems, even if true, may not be practically significant in some key decision contexts;
- Provides some reasons to assign low probabilities to the premises of the argument for Conscious Subsystems; and
- Offers some general reasons to be wary of allocating resources based on hypotheses like Conscious Subsystems.
Motivating Conscious Subsystems
We begin by considering a way of motivating Conscious Subsystems: namely, the classic "China brain" thought experiment in Ned Block's famous 1978 paper, "Troubles with Functionalism." Very roughly, functionalism is the view that mental states are the kinds of states they are due to their functions, or their roles within larger systems. Block argued that functionalism implies that systems that clearly aren't conscious are conscious. For instance:
Imagine a body externally like a human body, say yours, but internally quite different. The neurons from sensory organs are connected to a bank of lights in a hollow cavity in the head. A set of buttons connects to the motor-output neurons. Inside the cavity resides a group of little men. Each has a very simple task: to implement a "square" of a reasonably adequate machine table that describes you. On one wall is a bulletin board on which is posted a state card, i.e., a card that bears a symbol designating one of the states specified on the machine table. Here is what the little men do: Suppose the posted card has a "G" on it. This alerts the little men who implement G squares—"G-men" they call themselves. Suppose the light representing I17 goes on. One of the G-men has the following as his sole task: when the card reads "G" and the I17 light goes on, he presses output button O191 and changes the state card to "M"…. In spite of the low level of intelligence required of each little man, the system as a whole manages to simulate you because the functional organization they have been trained to realize is yours.
The "China brain" thought experiment makes essentially the same point:
Suppose we convert the government of China to functionalism, and we convince its officials that it would enormously enhance their international prestige to realize a human mind for an hour. We provide each of the billion people in China… with a specially designed two-way radio that connects them in the appropriate way to other persons and to the artificial body mentioned in the previous example.[3] We replace the little men with a radio transmitter and receiver connected to the input and output neurons. Instead of a bulletin board, we arrange to have letters displayed on a series of satellites placed so that they can be seen from anywhere in China. Surely such a system is not physically impossible. It could be functionally equivalent to you for a short time, say an hour.
If functionalism is true, Block argues, then this system wouldn't just be conscious; it would have exactly the same mental states that you have. If that's right, then functionalism implies that a conscious mind just like yours can be composed of other conscious minds. After all, it seems clear that the people of China don't cease to be conscious simply because they've taken up this odd work of replicating the functions that give rise to your mind.
Of course, Block offered these thought experiments as reasons to reject functionalism. Many consciousness researchers now endorse "anti-nesting" principles to prevent their theories from having this implication (e.g., Kammerer, 2015). At the same time, some just bite the bullet, agreeing that while it might be counterintuitive that this system is conscious, you and the "China brain" would indeed have the same mental states for as long as it operates (e.g., Schwitzgebel, 2015). Suppose that's true. Then, we're on our way to an argument for Conscious Subsystems, one version of which goes as follows:
- Some neural subsystems would be conscious if they were operating in isolation.
- If a neural subsystem would be conscious if it were operating in isolation, then it's conscious even if part of a larger conscious system.
- So, some neural subsystems are conscious.
We can read Premise 2 as a way of biting the bullet on the China brain thought experiment. So, we're now left wondering about the case for Premise 1.
But before we turn to that premise, there are two points to note. First, while this conclusion sounds radical, it might not be practically significant as stated. After all, it isn't clear that all conscious states are valenced states—that is, states that feel good or bad. So, if we're hedonists—that is, we assume that all and only positively valenced conscious states contribute positively to welfare and all and only negatively valenced conscious states contribute negatively to welfare—then it could work out that all these conscious subsystems are morally irrelevant. If the states are conscious but not valenced, then they don't realize any welfare at all.
Moreover, even if these subsystems do have valenced conscious states—and so realize some welfare—it doesn't follow that we can assess the net impact of our actions on their welfare. Suppose we can't. Then, if we're risk-neutral expected utility maximizers—that is, we want to maximize utility and we're equally concerned to avoid realizing negative utility and promote the realization of positive utility—the welfare of the subsystems "cancels out" in expectation.
But let's grant that if these subsystems are conscious, then they would have valenced states. Moreover, let's grant that the welfare of the subsystems is correlated with reports or other typical measures of welfare.[4]
This brings us to the second point: namely, that it may not matter whether we have strong reasons to believe any of the premises of the argument for Conscious Subsystems—or the assumptions we just granted—if we're risk-neutral expected utility maximizing total utilitarians. If we assign some credence to each of the relevant claims, then as long as there are enough subsystems, the argument will still be practically significant.
Recall, for instance, that a human has something like 86 billion neurons in her brain; a chicken, 220 million. So, if we thought that there's around one conscious system for every 220 million neurons, we would conclude that a human brain supports around 400 times as many conscious subjects as a chicken brain. Given that, if we assign low credences to each premise in the argument for Conscious Subsystems—e.g., 0.2—it follows that, in expectation, we ought to conclude that a human brain supports roughly 17 times as many conscious subjects as a chicken brain.[5]
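To make the arithmetic concrete, here is a minimal sketch of the expected-subjects calculation. It assumes (our gloss, not a claim from the text) that each brain has one certain "main" subject, plus its potential subsystems weighted by the joint credence in the argument's two premises:

```python
# Hedged sketch of the expected-subjects calculation described above.
# Assumptions: one "main" conscious subject per brain for sure; one
# potential conscious subsystem per 220 million neurons; independent
# credences of 0.2 in each of the two premises (joint credence 0.04).
HUMAN_NEURONS = 86e9
CHICKEN_NEURONS = 220e6
NEURONS_PER_SUBSYSTEM = 220e6

def expected_subjects(neurons, joint_credence):
    """Expected number of conscious subjects supported by a brain."""
    potential_subsystems = neurons / NEURONS_PER_SUBSYSTEM
    return 1 + joint_credence * potential_subsystems

human = expected_subjects(HUMAN_NEURONS, 0.2 * 0.2)      # ~16.6, i.e., roughly 17
chicken = expected_subjects(CHICKEN_NEURONS, 0.2 * 0.2)  # ~1.04
```

Whether the comparison comes out at roughly 16 or 17 depends on whether the chicken's own expected subsystems are counted; either way, the human-to-chicken ratio falls far below the 400x suggested by raw neuron counts alone.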
We chose this example because most people agree that chickens are conscious. But as Shulman points out, while it isn't clear whether insects are conscious, there is some suggestive evidence in favor of that hypothesis. So, we should assign some credence to the hypothesis that they're conscious. If we grant this much, though, then he thinks we should assign some credence to (something like) Premise 1, since, in his view, human brains contain many subsystems that are at least as complex as, and have "capabilities greater than," insect brains. And since there are mature insects with fewer than 10,000 neurons, our credences can be lower without threatening the practical significance of the argument—again, assuming we're expected utility maximizers.
Suppose, for instance, that we assign even lower credences to each premise—e.g., 0.05. And suppose that, conditional on those premises, we assign the same credence to the hypothesis that we can separate all the neurons of a human brain into conscious subsystems with around 10,000 neurons each. Then, we ought to conclude that a human brain supports roughly 1076 times as many conscious subjects as a 10,000-neuron insect brain in expectation.[6] (Shulman (2015) and Tomasik (2016-2017) make similar calculations for chickens, cattle, and insects or springtails, normalized by insects or springtails, though without modeling uncertainty and, for some calculations, with diminishing marginal returns to additional neurons.)
Moreover, these credences may be too low. St. Jules (2020), for example, argues that several of the most prominent theories of consciousness, such as Global Workspace Theory, Integrated Information Theory, and Recurrent Processing Theory, imply that many neural subsystems (or, more generally, many very simple systems) would be conscious if they occurred in isolation—or, at least, would have that implication if certain ostensibly arbitrary assumptions were dropped, namely, assumptions designed solely to block the implication that conscious systems can be built out of other conscious systems. To give just one example, Global Workspace Theory says, in essence, that a mental state is conscious just when its content is broadcast to an array of neural subsystems. St. Jules points out that a mental state's content can be broadcast to all of a subsystem's subsystems even if it isn't globally broadcast—which we might call "local" rather than global broadcasting. So, unless there's something special about broadcasting to all subsystems rather than some subset of them, Global Workspace Theory implies that if subsystems locally broadcast a state's content, then those subsystems are conscious.[7]
Suppose that, on this basis, we revised all our credences upward to 0.2 for each premise and 0.2 for 10,000-neuron subsystems being conscious (based on a comparable credence for 10,000-neuron insects being conscious), both for humans and chickens. Then, in expectation, we ought to attribute around 68,801 conscious subjects to each human brain—and around 177 to each chicken (~389x fewer, which is basically the ratio of the number of neurons in a human brain over the number in a chicken brain). At that point, the practical significance of the argument may be quite radical.
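Both 10,000-neuron-subsystem scenarios described above can be reproduced with the same style of calculation. This sketch treats the credences as independent, so the joint credence is simply their product; that independence assumption is ours:

```python
# Hedged sketch of the two 10,000-neuron-subsystem scenarios above.
# Assumption: the credences in the two premises and in the partition
# claim are independent, so the joint credence is their product.
HUMAN_NEURONS = 86e9
CHICKEN_NEURONS = 220e6
SUBSYSTEM_NEURONS = 1e4

def expected_subjects(neurons, joint_credence):
    # One "main" subject for sure, plus credence-weighted subsystems.
    return 1 + joint_credence * (neurons / SUBSYSTEM_NEURONS)

# Scenario 1: credence 0.05 in each of the three claims.
human_low = expected_subjects(HUMAN_NEURONS, 0.05 ** 3)     # 1076

# Scenario 2: all three credences revised upward to 0.2.
human_high = expected_subjects(HUMAN_NEURONS, 0.2 ** 3)     # 68,801
chicken_high = expected_subjects(CHICKEN_NEURONS, 0.2 ** 3)  # 177
ratio = human_high / chicken_high                            # ~389
```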
Assessing Conscious Subsystems
Given a commitment to risk-neutral expected utility maximization, there are two basic ways to assess the practical significance of Conscious Subsystems:
- Given some range of reasonable credences, we can consider whether Conscious Subsystems would alter what we would otherwise think we should do in some particular decision context.
- We can consider arguments for adjusting our credences in the claims that support Conscious Subsystems.
Ultimately, the first point is the most important one. Conscious Subsystems matters insofar as it makes a practical difference. So, we'll begin there. Then, we'll spend some time on the second.
Either Conscious Subsystems probably doesn't affect what we ought to do or it should have a minimal impact on what we ought to think we ought to do
There are many contexts in which Conscious Subsystems might be practically significant. Here, though, we're especially concerned to assess whether Conscious Subsystems should alter the way EAs think they ought to allocate resources. We think that it probably doesn't. Or, if it does, it probably favors very strange courses of action, such as allocating much more toward invertebrate welfare.
Even if Conscious Subsystems is true, neartermists should keep spending on animals
Let's begin with why Conscious Subsystems probably shouldn't alter the way EAs think they ought to allocate resources. Open Philanthropy once estimated that, "if you value chicken life-years equally to human life-years… [then] corporate campaigns do about 10,000x as much good per dollar as top [global health] charities." Two more recent estimates—which we haven't investigated and aren't necessarily endorsing—agree that corporate campaigns are much better. If we assign equal weights to human and chicken welfare in the model that Grilo, 2022 uses, corporate campaigns are roughly 5,000x better than the best global health charities. If we do the same thing in the model that Clare and Goth, 2020 employ, corporate campaigns are 30,000 to 45,000x better.[8] So, if even the most conservative of these estimates is ten times too high, Conscious Subsystems wouldn't imply that risk-neutral expected utility maximizers ought to allocate neartermist dollars to humans instead of animals, at least if we estimate the number of human-vs.-nonhuman conscious subsystems as we did earlier, at a ratio of less than 400.
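As a rough check on that claim, the sketch below discounts each quoted multiplier tenfold and compares it against a human-to-chicken subsystem ratio taken straight from neuron counts (roughly 391x, under 400); the dictionary keys are just labels for the estimates quoted above:

```python
# Sketch: even cut tenfold, the quoted cost-effectiveness multipliers
# still exceed a <400x human-to-chicken weighting from Conscious
# Subsystems. Multiplier values are the point estimates quoted above.
multipliers = {
    "Open Philanthropy": 10_000,
    "Grilo (2022)": 5_000,
    "Clare and Goth (2020), low end": 30_000,
}
subsystem_ratio = 86e9 / 220e6  # ~391x: human vs. chicken neuron counts

all_still_favor_animals = all(
    m / 10 > subsystem_ratio for m in multipliers.values()
)
# The most conservative estimate (5,000x), discounted tenfold, is 500x,
# which is still above ~391x, so the animal interventions still win.
```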
In fact, there's a sense in which Conscious Subsystems may bolster the cost-effectiveness argument for spending more on animals, if similarly sized conscious subsystems generate valenced states with similar intensities regardless of the brain to which they belong. To see why, consider that, first, if many brains house many 10,000-neuron conscious subsystems, then it's likely these subsystems are less sophisticated than the conscious subject who can report their conscious experiences. Still, if conscious subsystems really do produce valenced states, we see no reason to suppose that they produce valenced states that are, say, 8,600,000x less intense than the valenced states of the main subject (a number derived by dividing the number of neurons in a human brain by 10,000), which is what it would take for most of the total valence in a human brain to come from the main subject. We aren't aware of any empirical theory about the function of valenced states that would support such huge differences; likewise, as discussed in our previous post in this sequence, we aren't aware of any compelling philosophical reason to think that the intensity of valenced states scales with neuron counts. So, while the valenced states of conscious subsystems might be less intense, our default would be to assume much smaller differences between the intensities of the states of the subsystems and the intensities of the states of the main subject.
So, since conscious subsystems will vastly outnumber the main subject in many cases, many organisms' moral value will be based largely on their subsystems, not on the main subjects who can report their experiences (or reveal them via behavior, etc.). So, instead of having to assess complicated questions about the relative intensity of valenced experiences, we can use "subsystem counts" to approximate the relative moral importance of humans and animals. And as we suggested earlier, when we do that with an expected number of subsystems roughly proportional to neuron counts, relative subsystem counts don't support allocating resources to the best neartermist human interventions over the best neartermist animal interventions. Instead of undermining pro-animal cost-effectiveness analyses with a human-favoring philosophical theory (namely, Conscious Subsystems itself), Conscious Subsystems appears to support the conclusion that there are scalable nonhuman animal-targeting interventions that are far more cost-effective than GiveWell's recommended charities.
That being said, we grant that Conscious Subsystems could make longtermist interventions seem better, though the issue is complicated by questions about the role of digital minds in the value of the long-term future. If most of the value of the long-term future is in digital minds, then, given the possibility that digital minds might themselves have conscious subsystems—potentially even more conscious subsystems than humans (on a per-individual basis)—Conscious Subsystems could provide a reason for those already inclined toward longtermism to be even more inclined toward it.
However, it's hard to believe that this would matter to anyone who has reservations about longtermism. Suppose, for instance, that we're undercounting the number of conscious beings in the future relative to the number in the present by several orders of magnitude. If your primary reservation about longtermism is, say, complex cluelessness or very low probabilities of making any difference, that undercounting hardly seems relevant.
Conscious Subsystems probably supports spending far more on small invertebrates
Let's now turn to the possibility that Conscious Subsystems should alter the way EAs ought to allocate resources, albeit in a surprising way. It's estimated that there are 1.5 million to 2.5 million mites on the average human's body, and together they probably have about 1% as many neurons as the average human brain. However, the views about consciousness that support Conscious Subsystems will tend to assign greater probabilities to the hypothesis that those mites are conscious. The lower the "bar" for consciousness, the more likely it is that mites clear it. So, the expected number of conscious systems on the human body might not be more than an order of magnitude smaller than the expected number in the human brain. Moving on from organisms living on humans, Schultheiss et al. (2022) estimate that 20 quadrillion (20 × 10^15) ants are alive at any moment, and based on some of our past research, we tend to think we ought to assign a non-negligible credence to the hypothesis that they're sentient (Rethink Priorities, 2019; Schukraft et al., 2019).[9]
The upshot here is simple. There are relatively restrictive views of consciousness, like certain higher-order theories, and relatively permissive views of consciousness, like panpsychism. The higher our credences in restrictive views, the fewer conscious subsystems we ought to posit in expectation—and the lower the odds that many small invertebrates are sentient. The higher our credences in permissive views, the more conscious subsystems we ought to posit in expectation—and the higher the odds that many small invertebrates are sentient. So, insofar as we're risk-neutral expected utility maximizers with relatively high credences in permissive views of consciousness, it's likely that we should be putting far more resources into investigating the welfare of the world's small invertebrates. And depending on how many invertebrates we can help and how much we can help them, it could work out that we ought to prioritize them over humans.[10]
We should probably assign low credences to the claims that support Conscious Subsystems
We now turn to issues that are relevant to the credences we ought to assign to the claims that support Conscious Subsystems. The first is that they're based on a failure to distinguish neural correlate theories of consciousness from explanatory theories of consciousness. The second is that functionalism probably doesn't support attributing consciousness to neural subsystems per se, whatever it might imply about other entities.
Neural correlate theories of consciousness ≠ explanatory theories of consciousness
Again, the basic argument for Conscious Subsystems goes as follows:
- Some neural subsystems would be conscious if they were operating in isolation.
- If a neural subsystem would be conscious if it were operating in isolation, then it's conscious even if part of a larger conscious system.
- So, some neural subsystems are conscious.
In support of the first premise, we have Shulman and St. Jules appealing to the implications of various theories of consciousness and Shulman making comparisons between the subsystems in human brains. A thought experiment supports the second.
Let's step back and highlight the difference between two types of theories about consciousness. One kind of "theory of consciousness" is a "neural correlate of consciousness" (NCC). This refers to a set of conditions in brains that are, as the name suggests, reliably correlated with conscious experiences. In general, most attempts to scientifically study consciousness are attempts to find the neural correlates of consciousness, primarily relying on finding conditions that reliably correlate with self-reports of consciousness, since consciousness itself cannot be observed in others. Importantly, a neural correlate of consciousness need not provide an explanation of how subjective experiences exist in the physical world. NCCs need only identify patterns between observable features of the world.
A second type of theory of consciousness is an "explanatory theory of consciousness." An explanatory theory doesn't merely posit a correlation; it provides some story about how subjective experiences exist in the physical world. In short, an explanatory theory of consciousness is an attempt to solve the "hard problem of consciousness": that is, it attempts to explain why and how we have phenomenally conscious states. Many scientists studying consciousness are explicitly uninterested in solving the hard problem of consciousness or believe it to be unsolvable.
This distinction is important because there's a certain type of move that's made in discussions about the distribution of consciousness that involves a category error. Consider the following argument schema:
- Brains with property X are conscious.
- Ys have property X.
- So, Ys are conscious.
This argument might be fine for some values of X and Y. However, it's often confused if X is a neural correlate of consciousness and Y is something for which we don't have any independent reason to posit consciousness. This is because Premise 1 here is really shorthand for a longer claim, which is something like: brains with property X that can self-report consciousness are conscious. Essentially, Premise 1 is really a claim about a very specific kind of system—the human system—that research has revealed to have the following feature: whenever X was present, systems of that type (people) self-reported conscious experiences of a certain type; whenever X was absent, systems of that type (people) did not self-report having a conscious experience of that type. Premise 1 doesn't say anything about systems that can't self-report consciousness.
Again, this is because in an NCC, X doesn't explain what consciousness is; it isn't an account of consciousness. Instead, it serves as the basis for a research program. Because we know that brains with X often have conscious experiences, we should investigate X in more detail to learn more about consciousness in that organism. If X were supposed to explain what consciousness is—if, in other words, it's a theory that attempts to provide the list of necessary and sufficient conditions for consciousness—then there's no conceptual issue here. But the move is a category error when used with an NCC, because such theories aren't attempting to provide lists of necessary and sufficient conditions; they aren't trying to provide accounts of what consciousness is. Rather, these theories are only trying to identify promising features that can be reliably correlated with self-reports of consciousness.
Consider, for example, someone arguing as follows:
- Brains that engage in recurrent processing are conscious.
- Electrons do something that, at an abstract level, could be described as recurrent processing (e.g., "an electron influences other particles, which in turn influence the electron," etc. (Tomasik, 2020 and St. Jules, 2020)).
- So, panpsychism is true.
This argument is based on a misunderstanding of the first premise, at least as it's intended by many proponents of recurrent processing theory. These individuals are not saying that anything that exhibits the property of recurrent processing is thereby conscious. Instead, they are making the empirical claim that recurrent processing of a certain sort is reliably correlated with self-reports of conscious experiences in humans. It doesn't make sense to generalize this view to "electrons influencing one another" because electrons influencing one another is decidedly not correlated with self-reports of consciousness.
Of course, some proponents of recurrent processing theory probably do take themselves to be giving an account of what consciousness actually is. We can't assess such claims here. However, it's important to recognize that we probably ought to assign much lower credences to the explanatory interpretations of theories than to their neural correlate interpretations. Neural correlate interpretations of theories of consciousness have whatever fairly direct empirical support they have (or don't have, as the case may be). Explanatory interpretations of theories of consciousness borrow their support from their corresponding neural correlate interpretations and then go well beyond them, staking out much more controversial positions, and in any case ones that we can't clearly disconfirm empirically.
The upshot here is that Premise 1 faces one of two problems. On the one hand, it could be unmotivated, as it's a mistake to think that, because some neural subsystems have X—some neural correlate of consciousness—they would be conscious if they were operating in isolation. On the other hand, it could be that we ought to assign it a rather low credence.
Functionalism doesn't support conscious subsystems
Let's turn to the second problem for the argument for Conscious Subsystems. Roughly, functionalism about consciousness is the view that a physical state realizes a given mental state by playing the right functional role in a larger system. Crucially, standard versions of functionalism don't entail that all functional roles are conscious: only the ones with the right relationships to sensory stimulations, other mental states, and behavior.
Now consider the following argument:
- Fruit flies have roughly 200,000 neurons.
- A human brain has roughly 430,000x as many neurons as a fruit fly brain.
- So, if fruit flies are conscious and we can't rule out states being conscious merely because they're embedded in a larger system, then human brains contain something like 430,000x as many conscious subsystems as fruit flies.
This argument isn't valid as it stands, but that isn't the point. Instead, the point is that you can't make it valid by adding a standard version of functionalism. Standard functionalism, as noted above, doesn't claim that any system with a certain amount of processing power is thereby conscious. It says that states are conscious if they realize particular functional relationships between inputs, outputs, and other states. But there is no reason to believe that any of the subsets of neurons in the human brain with the same number of neurons as a fruit fly brain are arranged with the correct functional relationships.
Moreover, there are positive reasons to deny that given subsets of neurons in the human brain are arranged in the right manner to be conscious. Fruit fly brains faced evolutionary pressures and, as such, are designed to realize a set of input-output relations that increase the likelihood of fruit flies surviving and passing on their genes. Human brains also evolved to promote the likelihood of survival and reproduction. However, any subset of a human brain faced different evolutionary pressures: it would have evolved to contribute to the overall system—for example, by processing parallel tracks of sensory information—in ways that facilitate adaptive behavior by the organism as a whole. In other words, the evolutionary pressures on any subset of a human brain would push this subset to realize functions that are different from any function designed to maximize the fitness of a fruit fly.
Granted, someone could insist that the input-output relations that happen to maximize the fitness of fruit flies are also likely to be present in the human brain. However, it seems extremely unlikely that the pattern of neural activations that would lead to maximizing fruit fly fitness through a mental state would just so happen to be the same pattern of activations realized in human subsystems that make small contributions to the behavior of the organism as a whole. In fact, if we just look at the organization of the human brain, the distances that signals need to travel, and overall interconnections, it seems almost certain that there are no roughly fruit-fly-brain-sized subsystems of the human brain that realize identical functions to the fruit fly brain.
Abstracting away from fruit fly brains, itâs likely that some functions required for consciousness or valenceâor realized along the way to generate conscious valenceâare fairly high-order, top-down, highly integrative, bottlenecking, or approximately unitary, and some of these are very unlikely to be realized thousands of times in any given brain. Some candidate functions are selective attention,[11] a model of attention,[12] various executive functions, optimism and pessimism bias, and (non-reflexive) appetitive and avoidance behaviors. Some kinds of valenced experiences, like empathic pains and social pains from rejection, exclusion, or loss, depend on high-order representations of stimuli, and these representations seem likely to be accessible or relatively few in number at a time, so we expect the same to hold for the negative valence that depends on them. Physical pain and even negative valence generally may also turn out to depend on high-order representations, and thereâs some evidence they depend on brain regions similar to those on which empathic pains and social pains depend (Singer et al., 2004, Eisenberger, 2015). On the other hand, if some kinds of valenced experiences occur simultaneously in huge numbers in the human brain, but social pains donât, then, unless these many valenced experiences have tiny average value relative to social pains, they would morally dominate the individualâs social pains in aggregate, which would at least be morally counterintuitive, although possibly an inevitable conclusion of Conscious Subsystems.
Furthermore, the extra neurons in the human brain used to realize some of these functions have other, more plausible roles than realizing these functions thousands of times simultaneously, like greater acuity, greater categorization power, or integrating more inputs (Birch et al., 2020) and then broadcasting the resulting signals to more neurons (or processes, according to Shulman, 2020). But even if each type of function that’s necessary for conscious valence were realized many times in the human brain, each subsystem would need to realize an instance of each type of function and have them all fit together in the right way to generate conscious valence.
In other words, standard functionalism does not support the claim that the mere presence of large numbers of neurons in human brains is evidence that there are numerous conscious subsystems. That more neurons are devoted to the same functions in one brain than in another isn’t enough to establish that those functions, especially those generating conscious valence, are realized more often in the first brain than in the second. Those arguing for conscious subsystems need to present positive evidence that the functions realized in subsets of human brains are identical, or at least similar enough, to those realized in other organisms for those subsets to also count as conscious. We don’t have such evidence ourselves, and we aren’t aware of claims by neuroscientists that would suggest that it’s out there.
Again, the upshot here is that Premise 1 of the argument for Conscious Subsystems seems unmotivated: standard versions of functionalism donât allow us to make analogical arguments from the complexity of subsystems or their contributions to generating consciousness or valence to their being separately conscious.
Conclusion
As weâve argued, there are some key decision contexts in which Conscious Subsystems probably shouldnât affect how we ought to act. In part, this is because animal-directed interventions look so good; on top of that, the theories of consciousness that support Conscious Subsystems also support consciousness being widespread in the animal kingdom, which is likely to cause small invertebrates to dominate our resource allocation decisions. However, Conscious Subsystems also shouldnât affect our resource allocation decisions because we ought to assign it a very low probability of being true. The basic argument for it is probably based on a category error. In addition, it doesnât get the support from functionalism that we might have supposed.
Nevertheless, some will insist that the probability of Conscious Subsystems is not so low as to make it practically irrelevant. While it might not affect any decisions that EAs face now, it may still affect decisions that EAs face in the future. In what remains, we explain why we disagree. On our view, while it might seem as though expected utility maximization supports giving substantial weight to Conscious Subsystems, other considerations, such as credal fragility, suggest that we should give limited weight to Conscious Subsystems if we're careful expected utility maximizers.[13]
From the armchair, we can—as we have!—come up with arguments for and against Conscious Subsystems. However, it’s hard to see how any of these arguments could settle the question decisively. There will always be room to resist objections, to develop new replies, to marshal new counterarguments. In principle, empirical evidence could radically change our situation: imagine a new technology that allowed subsystems to report their conscious states! But we don’t have that evidence and, unfortunately, may forever lack it. Moreover, we should acknowledge that it’s probably possible to come up with inverse theories that imply that smaller brains are extremely valuable—perhaps because they realize the most intense valenced states, having no cognitive resources to mitigate them. So, we find ourselves in a situation where our credences should be low and fairly fragile. And our credences may shift between theories with radically different practical implications.
This isnât a situation where it makes sense to maximize expected utility at any given moment. Instead, we should acknowledge our uncertainty, explore related hypotheses and try to figure out whether thereâs a way to make the questions empirically tractable. If not, then we should be very cautious, and the best move might just be assuming something like neutrality across types of brains while we await possible empirical updates. Or, at least, seriously limiting the percentage of our resources thatâs allocated in accord with Conscious Subsystems. This seems like a good epistemic practice, but it also makes practical sense: actions involve opportunity costs, and being too willing to act on rapid updates can result in failing to build the infrastructure and momentum thatâs often required for change.
Acknowledgments
This research is a project of Rethink Priorities. It was written by Bob Fischer, Adam Shriver, and Michael St. Jules. It is indebted to previous work on this topic by David Mathers. Thanks to Marcus Davis, Jim Davies, Gavin Taylor, Teo Ajantaival, Jacy Reese Anthis, Magnus Vinding, Brian Tomasik, Anthony DiGiovanni, Joe Gottlieb, David Mathers, Richard Bruns, David Moss, and Derek Shiller for helpful feedback on earlier versions of this report. If youâre interested in RPâs work, you can learn more by visiting our research database. For regular updates, please consider subscribing to our newsletter.
- ^
These non-accessible conscious states go under many names: e.g., “hidden qualia” (Shiller, 2016), “paraconsciousnesses,” “underselves,” and “co-consciousnesses” (Blackmore, 2017), while “phenomenal overflow” may be a special case (Block, 2007 and Block, 2011, as well as discussion of the partial awareness response in Kouider et al., 2010 and Tsuchiya et al., 2015). We should also note that there are ways of interpreting some of these authors, such as Tomasik, 2013-2019 and Shulman, 2020, where they do not mean to be talking about inaccessible states. Instead, they may have a view where the components of consciousness are a bit like pixels with a fixed size, so that the more of those pixels you have in each experience, the “more consciousness”—or the more independently valuable components of consciousness—you’ve got. Shulman (2020), for instance, writes that “[if] each edge detection or color discrimination (or higher level processing) additively contributes some visual experience, then you have immense differences in the total contributions.” However, only the valenced components would be valuable given hedonism, which we assume in this report, and we’d expect these valenced components to occur in relatively small numbers and be tied to high-level features of a subject’s experiences. We don’t address that view in detail, though we suspect that some of the arguments below could be adapted to apply to it.
- ^
For present purposes, we're simply granting the assumption behind this inference, which is that if phenomenal states arenât integrated with or accessible to one another, then theyâre possessed by different subjects. However, that assumption is controversial and we aren't endorsing it.
- ^
China had a population of roughly one billion when Block wrote this, which is about 100x fewer people than the number of neurons in a human brain. Itâs an open question whether thatâs enough people for Blockâs purposes, but the general philosophical point remains.
- ^
Itâs complicated to assess how much of a concession this is. For instance, itâs clear that whatever probability you assign to Conscious Subsystems, you should assign a lower probability to the hypothesis that Conscious Subsystems is true, that the states are valenced, and the valenced states are correlated with higher-level reports. Moreover, itâs plausible that even if the subsystems have valenced states, those states donât affect the behavior of the whole organism; so, there arenât any adaptive pressures on those subsystems that would result in correlations with higher-level reports. As a result, the expected value implications of Conscious Subsystems might be trivial. At the same time, someone might argue that some neurons may have specific functions that make such a correlation plausible. For instance, there may be neurons that play a role in generating reportable negative valence but no role in generating reportable positive valence. All else equal, these negative valence-selective neurons may seem more likely to help realize the same negative valence-specific functions they do for reportable negative valenceâand so negative valenceâin subsystems containing them than to help realize positive valence in those subsystems. That being said, whatever variation of this hypothesis we entertain, it isnât clear whether the number of relevant conscious subsystems scales linearly with neuron countsâwhere the relevant ones are those with valenced states that are correlated with the reports of the whole system. So, the open and challenging question is about the discount to apply.
- ^
0.2 * 0.2 * 400 + (1 - 0.2 * 0.2) * 1 = 16.96. Weâre assuming that the subsystems arenât overlapping, and, for simplicity, that if there arenât 400 conscious subsystems conditional on the Conscious Subsystems premises, thereâs just the one conscious system.
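The arithmetic in this footnote can be checked with a short script; the 0.2 credences, the 400 subsystems, and the baseline of one conscious system are the footnote’s own illustrative figures, not empirical estimates:

```python
# Expected number of conscious subsystems, per the footnote's illustration.
p_cs = 0.2          # credence that Conscious Subsystems is true
p_valenced = 0.2    # credence that the subsystems' states are valenced, given the above
n_subsystems = 400  # illustrative number of non-overlapping conscious subsystems
baseline = 1        # for simplicity, just the one conscious system otherwise

expected = p_cs * p_valenced * n_subsystems + (1 - p_cs * p_valenced) * baseline
print(round(expected, 2))  # 16.96
```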
- ^
0.05*0.05*0.05*(86,000,000,000/10,000) + (1-0.05*0.05*0.05)*1 ≈ 1076 in a human, in expectation, vs. 1 in the insect.
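Likewise, this footnote’s comparison can be reproduced as follows; the 0.05 credence in each of the three conjuncts, the 86 billion human neurons, and the 10,000-neuron insect brain are all taken from the footnote itself:

```python
# Expected number of conscious subsystems in a human brain, per the footnote.
p = 0.05                        # credence in each of the three conjuncts
human_neurons = 86_000_000_000  # rough human neuron count
insect_neurons = 10_000         # subsystems assumed to be insect-brain-sized

n_subsystems = human_neurons / insect_neurons       # 8,600,000 candidate subsystems
expected_human = p**3 * n_subsystems + (1 - p**3) * 1
print(round(expected_human))  # 1076, vs. 1 for the insect
```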
- ^
Shulman (2020) suggests something similar, claiming that âtrivially tiny computer programs we can make today could make for minimal instantiations of available theories of consciousness, with quantitative differences between the minimal examples and typical examples.â Granted, the local / global distinction might be epistemically significant, as we may not have ways to confirm that local broadcasting generates consciousness, whereas we can ask people to self-report about global broadcasting. However, it may not be theoretically significant, in the sense that there may not be any principled reason why global broadcasting would be required for consciousness.
- ^
This range reflects just the default set of parameters in their Guesstimate model, after setting the node âmoral weight (DALY/cDALY)â to 1. We get a range because their Guesstimate model is noisy and different samples give different results.
- ^
For more discussion of the population numbers of different groups of animals and the total numbers of neurons across these groups, see Shulman, 2015, Tomasik, 2015-2019, Ray, 2019, Ray, 2019, Tomasik, 2009-2019, McCallum, Martini and Shwartz-Lucas, 2022. Land use, especially agricultural land use, plausibly has very large impacts on them, given that half of the worldâs habitable land is used for agriculture (Ritchie and Roser, 2019), and climate change also probably has very large impacts on them, good or bad. For a different version of this argument, see Sebo, 2022.
- ^
Furthermore, given objections to expected utility maximization with unbounded utility functions on grounds of fanaticism and decision-theoretic irrationality (McGee, 1999, Russell and Isaacs, 2020, Russell, 2021, Christiano, 2022, Pruss, 2022), including to the risk-neutral expected value maximizing total utilitarianism we’ve assumed in our section Motivating Conscious Subsystems, we should give some weight to alternative decision theories or to bounded social welfare (utility) functions, perhaps aggregating across these views through some method for handling normative uncertainty (MacAskill, Bykvist and Ord, 2020). (Though we should not commit solely—or perhaps at all—to a version of maximizing expected choice-worthiness, especially with intertheoretic comparisons, to handle normative uncertainty, since that takes for granted an assumption we’re calling into question: expected utility maximization, especially with an unbounded utility function.) Compared to risk-neutral expected utility maximizing total utilitarianism, we expect these alternatives to be less fanatical and to give similar or substantially less weight to Conscious Subsystems, and so we expect to give less weight overall to Conscious Subsystems as a result of their consideration. This would also probably mean further discounting animals the more unlikely they are to be conscious, more so than just by their probability of consciousness, and could therefore potentially block the total domination of small invertebrate welfare in the short term. Indeed, this seems to be one of the few ways a total hedonistic utilitarian could prevent small invertebrates from totally dominating in the short term.
- ^
About Global Workspace Theory, Baars (2003) writes:
The sensory "bright spot" of consciousness involves a selective attention system (the theater spotlight), under dual control of frontal executive cortex and automatic interrupt control from areas such as the brain stem, pain systems, and emotional centers like the amygdala. It is these attentional interrupt systems that allow significant stimuli to "break through" into consciousness in a selective listening task, when the name is spoken in the unconscious channel.
- ^
About Attention Schema Theory, Graziano (2020) writes:
AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.
- ^
⌠and even more so if we consider alternative decision theories, bounded social welfare functions, or limiting aggregation.
Luke Roelofs @ 2022-12-06T23:27 (+3)
I broadly agree that the hypothesis is not firmly established and would have mild, no, or hard-to-gauge practical implications. Two quick comments though:
- The assumption that conscious subsystems donât overlap seems unmotivated, and if itâs relaxed I think the sort of thinking that takes one in the direction of conscious subsystems (that something could be conscious while being a highly integrated component of something else) probably starts to make the individuation of subjects very difficult, producing indeterminate numbers of conscious subsystems.
- Cerebral hemispheres are an especially promising case of conscious subsystems, where no theoretical argument is needed for premise 1 (that the subsystem is capable of consciousness by itself), because people with one hemisphere removed can self-report consciousness. Worth noting that strictly, what can report consciousness is a system consisting of a hemisphere and the midbrain, etc., so drawing the inference to conscious subsystems requires accepting a degree of overlap: if the two hemisphere+midbrain systems are both conscious (as well as the whole brain) they overlap at the midbrain.
MichaelStJules @ 2022-12-07T03:19 (+2)
Hi Luke, thanks for your comment!
I agree with you about overlap and individuation. We decided to stick with this presentation for simplicity and brevity.
Some thoughts, speaking only for myself and not my co-authors:
- I would treat the indeterminacy and issue of what kind of overlap you allow as partly a normative question, and therefore partly a matter of normative intuition and subject to normative uncertainty. If you assign weights to different ways of counting subsystems that give precise estimates (including precisifications of imprecise approaches), you can use a method for handling normative uncertainty to guide action (e.g. one of the methods discussed in https://www.moraluncertainty.com/ ).
- While I actually expect some overlap to be allowed, I think reasonable constraints that prevent what looks like counterintuitive double counting to me will give you something that scales at most roughly proportionally with the number of neurons, if you pick the largest number of conscious subsystems of a brain you can get while following that set of constraints. This leaves no indeterminacy (other than more standard empirical or logical uncertainty), conditional on a set of precise constraints and this rule of picking the largest number. But you can have normative uncertainty about the constraints and/or the rule. As one potential constraint, if you have A1, A2 and A1+A2, you could count any two of them, but not all three together. Or, cluster them based on degree of overlap with some arbitrary sharp cutoff and pick one representative from each cluster. Or, you could pick non-overlapping subsets of neurons of the conscious subsystems to individuate them, so that each neuron can help individuate at most one conscious subsystem, but each neuron can still be part of and contribute to multiple conscious subsystems. You could also have function-specific constraints.
- Furthermore, without such constraints, you may end up with huge and predictable differences in expected welfare ranges between typically developed humans, and possibly whale and elephant interventions beating human-targeted interventions in the near term (because they have more neurons per animal), despite how few whales and elephants would be affected per $ on average. This seems very morally counterintuitive to me, but largely based on intuitions that depended on there not being such huge differences in the number of conscious subsystems in the first place.
On the two hemispheres case, we have a report on phenomenal unity coming out soon that will discuss it. In this context, Iâll just say that 1 or 2 extra conscious subsystems or even a doubling or tripling of the number (in case there would still be many otherwise) wouldnât make much difference to prioritization between species just on the basis of the number of conscious subsystems, and we wanted to focus on cases where individuals have suggested very large gaps between species.
Omnizoid @ 2024-11-13T14:56 (+2)
It may be that certain mental subsystems wouldn't be adequate by themselves to produce consciousness. But certainly some of them would. Consider a neuron in my brain and name it Fred. Absent Fred, I'd still be conscious. So then why isn't my brain-Fred conscious? The other view makes consciousness weirdly extrinsic--whether some collection of neurons is conscious depends on how they're connected to other neurons.
MichaelStJules @ 2024-11-14T03:23 (+2)
(Not speaking for my co-authors or RP.)
I think your brain-Fred is conscious, but overlaps so much with your whole brain that counting them both as separate moral patients would mean double counting.
We illustrated with systems that don't overlap much or at all. There are also of course more intermediate levels of overlap. See my comment here on some ideas for how to handle overlap:
Omnizoid @ 2024-11-14T09:34 (+2)
But then wouldn't this mean my brain has a bunch of different minds? How can the consciousness of one overlap with the consciousness of another?
MichaelStJules @ 2024-11-14T16:32 (+2)
Your brain has a bunch of overlapping subsystems that are each conscious, according to many plausible criteria for consciousness you could use. You could say they're all minds. I'm not sure I'd say they're different minds, because if two overlap enough, they should be treated like the same one.
See also the problem of the many on SEP:
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and circumstances permitting rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans.
James C Blackmon @ 2023-08-31T23:33 (+1)
Hi people. The (preoperative diagnostic) Wada test in which brain hemispheres are alternately anesthetized while the still-conscious parts of the patient's brain attempt to name and recall presented objects provides strong medical evidence of conscious subsystems. Hemispherectomies, as Luke points out, and even strokes would also seem to be supportive.
But more to the point, the fact that there is a conscious experience of losing neural communication with an entire hemisphere (for example, the realization that one can no longer produce speech or lift one arm) provides, I've argued, good reason to think that the substratum of that experience was conscious prior to the loss. There are alternative interpretations that seem more intuitive at first, but I think they require some serious metaphysical commitments. I have a 2016 paper on hemispherectomies that makes this argument, and I give a more elaborate defense in my 2021 paper on IIT for anyone who's interested.
MattBall @ 2022-12-08T13:03 (+1)
Do I have this right - Functionalism doesn't support spending more on small invertebrates?
Conscious Subsystems probably supports spending far more on small invertebrates -->
Functionalism doesnât support conscious subsystems
Bob Fischer @ 2022-12-08T13:24 (+2)
Hi Matt! I donât think that follows. At best, those premises cut off one way that functionalism could support spending more on small invertebrates (namely, via Conscious Subsystems), leaving many others open. Functionalism is such a broad view that it probably doesnât have any practical implications at all without lots of additional assumptionâwhich, of course, will vary wildly in terms of the support they offer for spending on the spineless members of the animal kingdom.
MattBall @ 2022-12-08T13:34 (+1)
Thanks Bob -- appreciate it!