No amount of smart behaviour will ever amount to proof of consciousness in the fundamental meaning of the term 'consciousness'. I take it that the latest AI models greatly strengthen this point rather than weaken it.
There’ll be no good cross-disciplinary public discussion of the issue of consciousness until people can agree that consciousness in the fundamental (‘experiential what-it’s-likeness’, ‘qualial’) sense has nothing intrinsically to do with intelligent behaviour. What are the chances of this agreement coming about? Close to zero—even if we somehow (impossibly) manage to eliminate all ‘slop’.
Perhaps we could use the term ‘Q-consciousness’ [for qualia-consciousness] to distinguish consciousness proper (consciousness as I understand it—feeling—experience) from anything else that people might want to use the word ‘consciousness’ to mean.
That would certainly help, because I don’t think we’re ever going to be able to stop some people going on thinking—and saying—that a certain kind of intelligent behaviour on the part of something X is sufficient to warrant the attribution of consciousness to X.
Perhaps we could distinguish Q-consciousness from I-consciousness (‘intelligence consciousness’), if only for the sake of discussion.
I say ‘if only for the sake of discussion’ because I don’t think ‘I-consciousness’ would ever be a name for a kind of consciousness. It could only be a way of picking out a capacity for a certain level of intelligent behaviour—a level which it would be impossible to specify precisely.
You're right that Roy Wood Sellars is not a panpsychist—but he was unusually aware of its force and importance as a possible position. He was still considering it in 1960 at the age of 80 [Sellars, R. W. (1960) ‘Panpsychism or Evolutionary Naturalism’, Philosophy of Science 27: 329–350]. He spoke of Charles Augustus Strong’s “magnificent attempt to carry panpsychism through” [1932: 296].
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow
That reads more as a list of rather random descriptors of a human from psychology and neurology than as a coherent plan for a conscious machine. For one, people can lose proprioception and motor control, yet still report, once communication is restored, that they were conscious throughout.
My own advice for building a conscious machine would be to start with a (literally) deeper study of the brain than just the inter-neuronal level. It's within the neuron that one encounters structures actually reminiscent of (qu)bits. Once this is realized, the intra-neuronal level forms the much more obvious starting point for bridging machine to biological consciousness, especially considering that current inter-neuronal biomimicry has created great pattern matchers, yes, but ones that still feel peculiarly unconscious. LLMs are like an externalized cerebellum: the brain part where the inter-neuronal level is most expressed, but which can be removed without affecting the core of consciousness. This additionally raises a fundamental question about whether the neuron is the fundamental building block at all, as implied in your paper.
It's important to realize that the list you mention, which I assume is the roadmap that I provided the link for, was expressed in a regular meeting of the neuroscientists at the Neurosciences Institute (NSI). The participants were all well versed in the extremely complex details of the TNGS and Darwin automata, so the list carries a lot of unspoken context not possessed by the general public. I don't know if you have viewed the video about some of the Darwin automata that I linked to in my original comment, but it gives more of a sense of the underlying complexity of the automata, none of which are claimed to have consciousness yet, but, as I mention in my original comment, "perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way."
The TNGS claims that a neuronal group of approximately 100 to 10,000 neurons is the unit of selection. It also claims that primary consciousness is a thalamocortical system process, referred to as the dynamic core, that does not require the cerebellum to function. As I also mentioned in my original comment, no other research I've encountered is anywhere near as convincing. If you can create a machine with the equivalent of biological consciousness using your ideas, then I'm all for it.
I had been carrying an assumption I hadn't fully examined — that the 'hard problem' was a problem about consciousness needing explanation, when the move you're making here is closer to an inversion: it's matter that lacks adequate characterization, and consciousness is the one thing we can't coherently doubt because we're already inside it. That reframing genuinely rearranged something for me. Thank you for the precision of insisting that the mystery dissolves not through new discovery but through refusing to beg the question about what matter can be. Here's where I'm honestly uncertain though: if consciousness in some primitive form is inherent in matter, and complex consciousness is explicable by evolution, what does the work of 'stepping down' from primitive experiential properties to the organized subjectivity we actually live in — is that transition itself a hard problem in disguise, or does evolution genuinely handle it without remainder?
Almost everything that Strawson says here is so obviously true that it is a sad reflection on the state of this field that he has to keep reiterating it. The only bit he gets wrong is panpsychism itself. In particular this sentence: “There’s nothing in physics (or science generally) that conflicts with the idea that consciousness is in some manner inherent in matter.” This is wrong, as panpsychism quite obviously contradicts the Special Theory of Relativity: in a conscious mind, the order of well-separated events is indubitable. If you see a flash and an hour later hear a bang, it is indubitable that your flash experience preceded your bang experience. But if the corresponding material brain events are space-like separated, then the sequence in which they occur is relative to the inertial frame of reference of the observer. There simply is no objective fact of the matter whether the flash brain-event occurred before or after the bang brain-event. Therefore a conscious mind cannot be embedded in pieces of matter.
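The relativity point above can be checked numerically. The sketch below is purely illustrative (the event coordinates and boost velocity are made up, and it says nothing about whether brain events are in fact spacelike separated): for two events with a spacelike separation, a Lorentz boost can reverse their time ordering.

```python
import math

def boosted_dt(dt, dx, v):
    """Time separation of two events as seen from a frame moving at
    speed v along x, in units with c = 1: dt' = gamma * (dt - v*dx)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

# Spacelike-separated events: dt = 1, dx = 2 (|dx| > |dt|, so no
# signal can connect them and their ordering is frame-relative).
dt, dx = 1.0, 2.0

rest = boosted_dt(dt, dx, 0.0)    # rest frame: event 1 before event 2
moving = boosted_dt(dt, dx, 0.8)  # boosted frame: the order reverses
```

For timelike separations (|dt| > |dx|), no physically allowed boost (|v| < 1) can flip the sign of dt', which is why the ambiguity arises only for spacelike pairs.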
The solution to that conundrum is to adopt either substance dualism or idealism.
This is a good review. But I think the problem with the consciousness as the intrinsically “inside” view is that most features of this view can be sloughed off in fairly straightforward experiments, until most of what is considered consciousness is pulled apart into dissociable threads. Sleepwalking and other automatism, drug trips, meditative practices, so-called “lucid sleeping” — these kinds of activity are so disorienting because they seem to reveal to the first-person perspective just how constructed it is.
That all fits in. For however constructed it is, from some point of view, it is still, qualitatively, phenomenologically, just what it is. That's all the 'insideness' that is in question.
The Sellars quote is an interesting choice. I don't think he would like your approach here.
http://www.consciousnessitself.org
http://www.integralworld.net/reynolds16.html The Miracle of Matter
http://cms-revelation-magazine.adidam.org/books/ancient-teachings the Ancient (non-dual) Reality Teachings
http://www.dabase.org/gnosticon.htm The Gnosticon
Not only that, but what we THINK we know of consciousness through experience is largely wrong! I am thinking of cued reporting experiments here, to be clear.
Even the idea of ‘being like something’ presupposes an ‘I’, a point of view in the narrative continuity of ideas. ‘What it is like-ness’ presupposes the idea of itself, and if presented as an explanation of consciousness it also begs the question.
Consciousness (the possibility of having experience and ideas) cannot etiologically ground itself, but only consistently formalise what it already is, and ‘what it is like-ness’ is not logically enough.
No worries — it's certainly not offered as an explanation of consciousness
I of course agree that it is a valuable ‘part’ of the explanation/rationalisation of the sense of ‘I’ and of all those ideas that people call consciousness, just not a complete explanation.
as for the sense of 'I', useful to remember that animals also have conscious experience ... https://philpapers.org/rec/STRTIN-3
I read your paper before. Yes, it does beg the question, but only to the extent of our own sense of ‘I am’, which is self-evident but also exclusive to the instance of consciousness that thinks it.
We cannot prove the sense of ‘I am’ for anyone else in particular, not even for Homo sapiens. Attribution of reflexive consciousness may be assumed (as a necessary condition of our own capacity to generate meaning and to sustain its narrative coherence) but it is always negotiated (under the secondary assumption of reciprocal communication) and may be revoked if it fails to satisfy the conditions of generating common meaning. One of the reasons for revocation of the assumption of reflexive consciousness of another entity may be their explicit rejection of the laws of sense (non-contradiction…), since without respecting sense, words and gestures can no longer be interpreted to mean anything in particular, but this is a tangent.
https://michaelkowalik.substack.com/p/theory-of-reflexive-consciousness
Your view lies in an impressive tradition, including Fichte, Husserl, Dan Zahavi—and my dad (P. F. Strawson). I don't agree with it, but I take my hat off to you.
Kind words. I try to do what makes sense to me, until it doesn’t. I think we all do.