Interview with Michael Tye about invertebrate consciousness
By Max_Carpendale @ 2019-08-08T10:13 (+32)
Michael Tye is a prominent philosopher focusing on philosophy of mind. His interests include the philosophy of animal minds and consciousness and in 2016 he published the book Tense Bees and Shellshocked Crabs: Are Animals Conscious? This book features one of the most in-depth discussions of invertebrate consciousness available.
This interview is about potential phenomenal consciousness (especially conscious pain) in invertebrates (especially insects). This is part of an interview series where I interview leading experts on invertebrate consciousness to try to make progress on the question. You can find my previous interview with Jon Mallatt here, my previous interview with Shelley Adamo here, and a post where I justify my engagement with this question here.
1. You write that we are entitled to prefer the proposition that many animals including bees and crabs are phenomenally conscious because they appear to be conscious, and because supposing that they go through what looks like pain, but isn’t actually pain, is ad hoc. What about the counterargument that honeybees and crabs just have many fewer neurons than we do and must economize on space, and so it seems reasonable to imagine that they do not have some necessary component of what makes these experiences conscious in us?
Humans and mammals that are in pain behave in characteristic ways. This behavior is complex, involving much, much more than simply withdrawing the body from the damaging or noxious stimulus, and it is caused by the feeling of pain. (In my 2016 book, I list various components of this behavior.) If we find a very similar pattern of behavior in other animals, we are entitled to infer that the same cause is operative unless we have good reason to think that their case is different. This was a point that Sir Isaac Newton made long ago with respect to effects in nature generally and their causes. In the case of hermit crabs, we find the relevant behavioral pattern. So, we may infer that, like us, they feel pain. To be sure, they have many fewer neurons. But why should we think that makes a difference to the presence of pain? It didn’t make any difference with respect to the complex pattern of behavior the crabs display in response to noxious stimuli. Why should it make any difference with respect to the cause of that behavior? It might, of course. There is no question of proof here. But that isn’t enough to overturn the inference. One other minor point: pain is a feeling. As such it is inherently a conscious state. Necessarily, there is something it is like to undergo pain. So, the question “What makes the experience of pain conscious?” is really not coherent. If the experience of pain is present, it is automatically conscious.
2. Do you expect that there will be clean determinate answers about exactly which entities are conscious now or at some point in the future? Or do you think there will always be a grey area with a judgement call involved?
Consciousness itself is not a grey area. There may be greyness with respect to the content of consciousness (for example, am I feeling pain or pressure, as my tooth is being filled?) but not with respect to consciousness itself. Consciousness does not have borderline cases in the way that life does. Still, confronted with a much simpler organism, we may not be able to tell from its behavior whether it has a faint glimmer of consciousness. I see no reason to suppose that in each and every case, we will be able to know with any strong degree of certainty whether consciousness is present.
3. Do you think it is likely that C. elegans (the nematode worm with around 300 neurons) is conscious?
Unlikely. There is nothing in the behavior of the nematode worm that indicates the presence of consciousness. It is a simple stimulus-response system without any flexibility in its behavior. The same is true of the leech.
4. Have you updated your position at all since your 2016 book?
I have finished the draft of a new book on vagueness and the evolution of consciousness. In it, I say something about whether there is a trigger for consciousness in mammal brains and I also say some additional things about the nature of consciousness from my perspective. This develops further claims I have made in the past and connects them to global workspace theory. The book will likely be published in 2020 or early 2021.
5. Do you have any ideas about what the next best steps could be to get to a more certain conclusion about invertebrate consciousness?
We need to look more closely at invertebrate behavior and see whether and how much it matches ours with respect to a range of experiences—bodily, perceptual and emotional.
6. Concerning digital minds, do you think that any mind that satisfied some general high-level conditions, such as behaving similarly to an animal we believe to be conscious, would also be conscious? Or do you think it would require a quite similar process or architecture to what we find in human brains?
Behavior is obviously not the same as mental states. But behavior is evidence for mental states, whether experiential states or not. If we manage to build highly complex systems whose behavior mirrors ours, or at least is close to it for a range of mental states, we are entitled to infer that they are subject to mental states too, unless, as noted above, we have good reason to think that their case is different. Merely noting that they are made of silicon is not enough. After all, what reason is there to suppose that crucially makes a difference? Of course, if one endorsed a type identity theory for conscious mental states, according to which experiences are one and the same as specific physico-chemical brain states, that would give one a reason to hold that digital beings lack consciousness. But why accept the type identity theory? Given the diversity of sentient organisms in nature, it is extremely implausible to hold that for each type of experience, there is a single type of brain state with which it is identical. The most plausible view is that experiences are multiply physically realized.
7. What do you think of the evidence that complex cognition can sometimes happen unconsciously in humans? This should arguably make us conclude that consciousness is at least not strictly required to produce many of the sorts of indicators of consciousness that we see in honeybees and crabs. Do you think this presents a challenge to claims that invertebrates might be conscious?
It is certainly true that cognition can occur without consciousness. Consider, for example, stimuli that are briefly presented to subjects, and that are then backwardly masked so as to make them unconscious. They may still be processed deeply, with the result that they have high-level content that can prime subsequent behavior. In some of these cases, with slightly different timing and intensity settings, the backwardly masked stimuli may still be visible. Where this happens, the immediate behavior of subjects is very different. Why? The obvious answer is that it is the fact that the subjects are conscious of the stimuli in these cases that makes their immediate behavior different. So, the issue again then is whether the behavior we see in honeybees and crabs is of the former sort or whether it is more like the behavior mammals undergo in response to their conscious states. The answer, I think, is that it is more like the latter. It is also worth pointing out that complex unconscious cognition in humans goes along with conscious activity too. Why think that if there is complex unconscious cognition in some cases in the invertebrate realm, it occurs there without consciousness being present in other cases?
8. We can distinguish between two aspects of pain, and they can occur independently: sensory (including qualities such as burning, stabbing, and aching) and affective (the intensity or unpleasantness). If insects can feel conscious pain, do you think it is likely that they would feel a lower degree of affective pain than humans? In other words, would it make sense to say they feel only a fraction of the affective pain that a human would feel in similar circumstances?
We know that patients who suffer intractable pain and who have undergone prefrontal leukotomies to reduce their pain level report that they still feel pain but they no longer mind it. For these patients, the affective component of pain has been removed. This is indicated in their behavior. The question for other organisms is again how much their ‘pain’ behavior is like ours. To the extent that they respond as we do, that is evidence that they feel what we do. If their behavior is more muted in various ways, that would be evidence that their pains are not as intense or unpleasant. In this regard, I might note that it makes sense to suppose that their pains are actually more intense than ours! After all, they are much less intelligent than we are, so it would not be unreasonable to suppose that Mother Nature would give them a bigger jolt of pain than we receive in response to noxious stimuli in order to get them to behave in ways most conducive to their survival.
Many thanks to an anonymous donor and the EA hotel for funding me to conduct this interview. Thanks also to Rhys Southan for providing suggestions and feedback.
Brian_Tomasik @ 2019-08-09T10:07 (+15)
Congrats on all these great interviews!
There is nothing in the behavior of the nematode worm that indicates the presence of consciousness. It is a simple stimulus-response system without any flexibility in its behavior.
There are numerous papers on learning in C. elegans. Rankin (2004):
Until 1990, no one investigated the possibility that C. elegans might show behavioral plasticity and be able to learn from experience. This has changed dramatically over the last 14 years! Now, instead of asking “what can a worm learn?” it might be better to ask “what cannot a worm learn?” [...]
C. elegans has a remarkable ability to learn about its environment and to alter its behavior as a result of its experience. In every area where people have looked for plasticity they have found it.
Max_Carpendale @ 2019-08-09T11:49 (+6)
Thank you! :)
Thanks for mentioning C. elegans behavioural flexibility. I had meant to comment about that, but forgot to. That's a great paper on the subject.
I think people sometimes unfairly minimize the cognitive abilities of some invertebrates because it gives them cleaner and more straightforward answers about which organisms are conscious, according to their preferred theory.
Peter_Hurford @ 2019-08-11T23:07 (+8)
However, there do appear to be very clear behavioral capabilities differences between C. elegans and other invertebrates (e.g., honeybees) as can be seen in our invertebrate sentience table.
eFish @ 2019-08-08T12:18 (+8)
Thank you for doing this, Max (and the supporters). These are good questions that warrant their own book =)
I find this passage making a particularly good point, so I quote it below for those who skipped that part:
In the case of hermit crabs, we find the relevant behavioral pattern. So, we may infer that, like us, they feel pain. To be sure, they have many fewer neurons. But why should we think that makes a difference to the presence of pain? It didn’t make any difference with respect to the complex pattern of behavior the crabs display in response to noxious stimuli. Why should it make any difference with respect to the cause of that behavior? It might, of course. There is no question of proof here. But that isn’t enough to overturn the inference.
We need to look more closely at invertebrate behavior and see whether and how much it matches ours with respect to a range of experiences—bodily, perceptual and emotional.
Comparisons with humans, I suppose, should come with many caveats. Still, for evolutionarily ancient(?) feelings like fear and pain, the approach seems valid from my layman's perspective in the area.
Of course, if one endorsed a type identity theory for conscious mental states, according to which experiences are one and the same as specific physico-chemical brain states, that would give one a reason to hold that digital beings lack consciousness. But why accept the type identity theory? Given the diversity of sentient organisms in nature, it is extremely implausible to hold that for each type of experience, there is a single type of brain state with which it is identical.
If (globally bound) consciousness is "implemented" on a lower level, then different physico-chemical brain states underlying the same qualia may still be relevantly identical on that lower level. I mention this because IMO there are good reasons to be sceptical about digital consciousness.
[...] it is is extremely implausible to hold that [...]
A typo
Max_Carpendale @ 2019-08-08T14:11 (+1)
You are very welcome! :)
That passage is also one of my favourite parts of his answers, thanks for highlighting it.
I'll take a look at that David Pearce post, thanks for the link.
Thanks for pointing at the typo, fixed it now.