I think (for what that’s worth) many people (including neuroscientists and philosophers) work with a definition of consciousness that makes both understanding and experimentation harder. They define consciousness as a human-only thing, something unique to us. In effect, they define it as the very tool they are using to understand it.
I’m not denying that there seems to be something that is perhaps unique to us, but I don’t think it’s useful to define that special thing as consciousness itself. It’s obviously a part of it, but I think the definition should be broader, with our self-awareness being just one component of consciousness.
The narrow definition can end up producing some funky, non-physical explanations. I watched an online seminar that discussed the timing of brain responses while the subjects were being scanned. The presenter suggested, and I can only assume it wasn’t a joke, that consciousness signals were being cast back in time. This conclusion came from the subjects being “aware” of a stimulus some milliseconds after their brain had already responded to it. I’m sure most neuroscientists don’t believe that consciousness is altering the laws of causality, but when they define consciousness as self-awareness, they may be at a loss to explain how a brain responds before it knows about an event.
The mistake arises because the critical faculty they are using to understand consciousness grows out of self-awareness, so it seems natural to define consciousness as that very thing. I believe this makes it difficult to explain much of the functionality of consciousness. The definition of consciousness should be the “thing” that all life forms subject to natural selection possess, with self-awareness as one component, perhaps restricted to humans.
Yes, I know, “life forms subject to natural selection” is a tautology, but it’s a useful one because it keeps the emphasis on natural selection rather than merely on being alive (which is tricky to define).
The broader definition of consciousness gives us more freedom to explain what it is, because the manifestations of consciousness are not the same across life forms. There are striking differences. For example, some animals have vision and others don’t, so that sensation is either present or absent, and the conscious awareness of a sighted animal differs from that of an unsighted one. The same goes for other sensations. Some people are deaf, but we would still call them conscious.
As far as we know, only one life form has the conscious experience of self-awareness (mindfulness). However, maybe self-awareness is not a switch. It may be more like a slider that raises the volume on a sound mixing desk. Perhaps dolphins, chimps, and others have some self-awareness; experiments seem to suggest that might be true. Then again, I don’t even know whether any other human is self-aware. I only know that I am. How do I know that? Hmm.
You don’t need to spend hours meditating to get an idea of how consciousness works, although it might help some researchers to spend a little time doing just that. Observing consciousness while it’s doing its thing shows everything “conscious” (narrow definition) in our lives appearing and disappearing, all without our direct involvement. We can’t decide what to think about next, because where did that first thought come from? How do we decide what to decide to think about next, and so on? Down the rabbit hole we go. It’s not useful speculation.
Consciousness (broad definition) seems (to me) to be like an object-oriented software module that’s chugging along, taking inputs and executing the tasks its programming expects of it. It keeps running in the background forever, until the programmer adds a flag to the module’s notification system to force it to report the results of what it has ALREADY done. So the module chugs along, does something, and then, when it registers the flag, sighs and sends the user a message: “Just had this thought”, “Just had this sensation”, and so on. It’s curious how consciousness seems reluctant to send this notification and will find any way to get out of doing it. We call it being distracted or lost in thought.
This is awareness (mindfulness). We’ve brought our attention to what is happening in consciousness and we’re notified of WHAT CONSCIOUSNESS HAS ALREADY ACTED UPON. We have no prior influence on what consciousness has done.
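To make the analogy a bit more concrete, here is a toy sketch in Python. It is purely illustrative, not a claim about how brains work; the class name, the flag, and the coin-flip “distraction” are all invented for the example. The only point it makes is the ordering: the work happens first, and the notification, if it comes at all, reports what has already been done.

```python
import random


class ConsciousnessModule:
    """Toy model: does its work regardless, reports only when flagged."""

    def __init__(self):
        self.notify_flag = False   # the flag the "programmer" adds later
        self.log = []              # everything the module has already done

    def step(self, stimulus):
        # The work happens here, whether or not anyone is watching.
        action = f"responded to {stimulus!r}"
        self.log.append(action)
        # The notification, if it comes at all, arrives after the fact,
        # and the module often wriggles out of sending it ("distraction").
        if self.notify_flag and random.random() < 0.5:
            print(f"Notification: just {action}")


module = ConsciousnessModule()
module.step("a tap on the shoulder")   # runs silently: no awareness yet
module.notify_flag = True              # mindfulness switched on
module.step("a stray thought")         # may, or may not, report what it did
```

Crucially, step finishes its work before any message is sent, which is exactly the order of events the narrow definition struggles to explain.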
However, when our awareness has received the notification, we can examine the contents. We can send a message back to consciousness that says, “I like/dislike what you did there”. Consciousness (the object-oriented software module in my example) then takes that under advisement. If it receives enough of these return messages, it may alter its base code to function differently in later, similar circumstances.
We can see this functioning in a simple example. If someone tries to startle you by tapping you on the shoulder for the tenth time in succession, you won’t be startled. The module has altered its base response and now returns an annoyed glance instead of making you jump out of your seat and swing your head around to investigate the threat.
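Continuing the toy sketch (again, everything here is invented for illustration, including the threshold of three), the shoulder-tap example looks something like a module that counts repeats of a stimulus and swaps out its base response once the count gets high enough. The return “like/dislike” messages are simplified here into a plain counter.

```python
class HabituatingModule:
    """Toy model of the module 'altering its base code' after repetition."""

    STARTLE_THRESHOLD = 3   # arbitrary: repeats tolerated before the response changes

    def __init__(self):
        self.seen = {}      # how many times each stimulus has occurred

    def respond(self, stimulus):
        count = self.seen.get(stimulus, 0) + 1
        self.seen[stimulus] = count
        if count <= self.STARTLE_THRESHOLD:
            return "startle: jump and turn to investigate the threat"
        return "annoyed glance"   # the altered base response


module = HabituatingModule()
for tap in range(1, 11):
    print(tap, module.respond("tap on the shoulder"))
# The first few taps produce the startle; by the tenth, only annoyance.
```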
I think the smart people (FYI, that’s not me) who are working to understand consciousness might be coming up empty-handed because they’re looking under rocks in the wrong field. They might have more success if they looked in a different field, simply by altering their definition of consciousness. Instead of attempting to explain experimental results by casting brain signals back in time, researchers could broaden their definition of consciousness so that self-awareness is just one of its optional properties. In mathematics, a transformation can make an intractable problem solvable. It may be the same with consciousness.