I’m fascinated by consciousness. My books feature general AI, so I need to think about what it is. Short answer: I have no idea. Just using the thing to investigate itself is odd enough, right?
Consciousness is a construct that everything appears within. That is: sensations, sights, sounds, emotions, feelings (if they are different from emotions, I’m unsure) and thoughts. This is fairly non-controversial, and mindfulness practice certainly deepens this understanding. There is another, extra thing that also appears within consciousness: awareness of conscious activity. This is the mindfulness of mindfulness practice. We can be aware that we are experiencing sensations, thoughts, etc. This is difficult, and it is a fleeting experience that seems to be so easily distracted away. It’s why constant practice seems to make this experience more readily identified when it does happen. But, of course, we can’t force this awareness; it just appears in consciousness.
No conscious activity can be “made to happen” by a force of “will”, or by some pre-thought that “I am about to think some particular thought within consciousness”. The act of trying to pre-think is already too late; the appearance in consciousness has already happened. If you examine these things closely you’ll see (perhaps frighteningly) that there is no free will in conscious activity. It seems to arise all by itself, and all we can do is “watch”. Hmmm. Scary, right? Not really.
This awareness of conscious activity is a little different from the rest of conscious experience, however. It’s a recursive process. That it is a fleeting, non-permanent aspect of conscious experience is specifically brought to mind (Haha) when we also become aware of being aware of conscious activity. I prefer to think of the “self” as disappearing within conscious experience, and not being anything permanent, because there are (potentially) an infinite number of recursions of being aware of being aware of being aware, etc. Which one of those is the “self”? The “self” seems to be able to split and gaze upon itself, gazing upon itself, etc. Being aware of conscious activity (being mindful) is like an infinite babushka doll. We most certainly have a feeling of a “self”, but it doesn’t hang around long, and the infinite recursion makes it seem a silly thing to try to hold onto. And we can’t, anyway, since, like all conscious experience, it disappears regularly.
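The babushka-doll recursion can be caricatured in a few lines of code. This is a toy of my own devising, not a claim about how brains work: each level of “awareness” is just an observation wrapped around the level below it, and nothing in the structure gives you a natural place to stop.

```python
def aware_of(experience, depth=1):
    """Wrap an experience in 'depth' layers of awareness.
    Purely illustrative: each recursive call observes the level below it."""
    if depth == 0:
        return experience
    return f"aware of ({aware_of(experience, depth - 1)})"

print(aware_of("a sensation", depth=3))
```

Each extra `depth` just adds another “aware of (…)” layer. There is no principled final layer to point at and call the “self”, which is exactly the problem.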
For those of us (and I’m sure many others) who regularly practice mindfulness, none of the above is controversial. However, I don’t understand the mechanisms. Does anyone? I have some ideas.
Caveat: I am not formally trained in this (I’m an ex-physicist and science fiction writer) and I have no idea what I’m talking about!
If we think of consciousness mechanistically, and focus on the awareness aspect, then when we lose awareness and later regain it, some process must have happened at that moment to begin firing the neuron, or sequence of neurons, that leads to re-establishing awareness. (I prefer “awareness” over “mindfulness”, since mindfulness has so many modern meanings; not to say awareness doesn’t either. Language is a bitch, right?) Something must (Should it? Why?) “happen” within the brain to cause the first neuron to fire. (Maybe there is a simultaneous firing of a group of neurons; it doesn’t matter for this discussion, but simultaneity has its own problems, thank you very much.) Is that pre-process the elusive, permanent “self” thing? Is it something “separate”? If we think like that we run into the same recursion problem: what is the process running the pre-process? Hmmm. It seems a “patch” to mask our lack of understanding (which is already pretty much everything!). It simply defers the problem.
This mechanistic thinking makes consciousness appear to be discrete, with each experience, or moment of experience, kick-started by some “handler” process, like a complex software system where something is in control. While that may well be the case (who really knows), it seems to me a non-Occam’s-Razor solution. Where do the handler processes “live”? How do they work? Are they always running, or do they “sleep” between episodes? Hmmm. I can’t imagine each episode of conscious experience being instigated by a handler process running around trying to figure out what to do next. But, of course, that may simply be because I haven’t thought deeply enough about the problem and I have too little knowledge of neuroscience.
A more likely (over-arching, and simple-minded) explanation of the experience of consciousness is that it’s a chaotic, cascading, but not random, process that never stops once it begins. (Which is when? Probably in the womb, I would guess.) It’s either running at full throttle or it’s not; there’s no in-between state. It’s off when we are in non-dreaming sleep (this is contentious), under anaesthetic, or dead. They’re all (effectively) the same state. When it’s running, consciousness takes as input the output from all prior conscious experiences. I mean everything: simple (recent or old) sensations, horrible sensations (like accidents), thoughts, emotions, feelings, sights, sounds, and awareness. Out of that mixture the next moment of conscious experience occurs, which is then fed into the next as part of the mixture. There is obviously some weighting involved in the input data. At a superficial level the weighting seems to be based (at least) on elapsed time and intensity.
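The cascade I’m describing can be sketched as a toy feedback loop. Again, this is entirely my own invention for illustration (the decay constant, the numbers, and the idea of reducing a “moment” to a single value are all made up): each new moment is a blend of all prior moments, weighted by how recent and how intense each one was, and each output is immediately appended as input for the next step.

```python
import math

def next_moment(history, decay=0.5):
    """Toy model of the cascade: the next moment of experience is a
    weighted average of all prior moments, where each prior moment's
    weight falls off with elapsed time and rises with its intensity.
    'history' is a list of (value, intensity) pairs, oldest first."""
    now = len(history)
    total, weight_sum = 0.0, 0.0
    for t, (value, intensity) in enumerate(history):
        w = intensity * math.exp(-decay * (now - t))  # recency times intensity
        total += w * value
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# An intense early moment and a mild recent one; each output then
# rolls downhill, becoming part of the input to the next moment.
history = [(1.0, 5.0), (0.2, 1.0)]
for _ in range(3):
    history.append((next_moment(history), 1.0))
```

The point of the sketch is only that no “handler” is needed: once seeded, the loop feeds itself.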
That means that from the moment consciousness “arises” in an individual, it keeps going, taking the previous outputs as the inputs. Like it’s rolling downhill. But does this just push the turtles further down the pile without addressing the problem of where they end? How does that first conscious experience “arise”? Is it innate in all biological matter? All matter? Or just in assemblies of amino acids? Hmmm. And what makes conscious experience more “intense” for some biological entities than others? I would guess that my experience of consciousness is more “intense” than that of a bacterium, an earthworm, or even a mouse. What does that mean?
I don’t think it makes sense to think that conscious experience is a part of all matter. I think there’s more to it than that, but not much more. I don’t think an electron is conscious, even though it can respond to “sensation”, i.e. the presence of an electromagnetic field. It’s simply following the laws of physics, which can be reproduced reliably (at least for electrons in electromagnetic fields). However, place that electron in a grouping of organic molecules and then… well… it contributes to the organic compound’s conscious experience by reacting to an electromagnetic field. It may be part of some sensation experience. I think learning and retention are the key difference. The (sufficiently complex; what does this mean?) organic compounds can learn (and retain information, somehow) from the sensation experience (be it pleasurable or otherwise). I think this retention and (potential) learning is what makes something (a sufficiently complex organic compound) conscious.
But even that’s not really enough. It wouldn’t explain how consciousness developed in (sufficiently complex—I’ll stop saying that now) organic compounds. How the conscious experience became “richer” and more complex. Response to stimuli is insufficient, there also needs to be growth and change. There needs to be chaos and competition. Electrons (on their own) don’t have that.
To my thinking, consciousness is synonymous with life that can learn (as an individual) and adapt to that learning. Most living things are conscious. The intensity and the capacities (especially the capacity for awareness) are, of course, varied. However, the differentiation between “being alive” and “being conscious” is problematic. There is room for debate at the extreme end. Is a bacterium conscious? Is plant life conscious? They are certainly alive; they respond to stimuli; they evolve. But do they learn, as individuals, and adapt to that learning? I would suggest the answer is (possibly) no, they are not conscious.
Arguing about this is how many (bad) software developers create software. They argue about and cater for the 1% possibilities while ignoring the functionality of the bulk of the software. The extremes aren’t important, or aren’t important until the basic functionality is well understood and working. Let’s not argue about trees being conscious when we’re not even sure all humans are!
It seems to me (see the Caveat above) that when life arose in the universe, so did conscious experience. They are the same thing, and by responding to the laws of physics and to evolution by natural selection (at least; there may be more involved, see the Caveat again), increasingly complex lifeforms, and ever more complex experiences of consciousness, have developed.
And here we are, asking these questions of ourselves. However, as interesting as that is, I don’t think we even know the questions that need to be asked yet. Exciting, isn’t it?