The Limbo from the Labyrinth: Consciousness and the Brain
How subjectivity emerges from an objective world
Consciousness has always been a paradoxical mystery because we have to use it to try to understand it. We can’t understand consciousness because we can’t get beyond it until we’re dead, and by then there’s nothing subjective left to know.
Consciousness presents us with a koan, a puzzle we hope has a solution. If only we ponder hard enough, we think, we might have a conceptual breakthrough — or not, in which case we may never know in full what we really are.
Cognitive scientists are on the trail of consciousness as they unravel how the brain works, and philosophers remind them that a Hard Problem remains even after all the scientific work of explaining the neural mechanisms is done.
The Hard Problem: How could anything physical have a private, subjective point of view, a feeling of what it’s like to be that physical thing? Couldn’t the brain get along fine without qualia, without these ghostly mental states that we know just by being ourselves?
The mystery of consciousness
In cognitive science, the brain is thought of as an organic computer, as a processor of information. If the brain is comparable to a computer, the mind must be like software running on the machine. But this analogy leads to some troubling questions about the nature of consciousness.
Taken as a complete or at least a literal description rather than a limited analogy, the computational model of the mind becomes absurd. For one thing, this model entails the homunculus fallacy since computers have users who view the information on a monitor. If the minds of those users were likewise computer programs running on different machines, we’d have an infinite regress of minds within minds and computers within computers.
But if we remove the users and the computer monitors, leaving just the humming desktop computer plugged into the wall, say, with the data flowing through its processors, we’d seem to lose the analogue of conscious awareness.
At first glance, the computer screen seems like the theater of conscious awareness: the screen displays the programs in a simplified fashion, with icons and folders resembling the conscious thoughts and concepts that are divorced from the messy neural signaling. We’re not aware of our neural activity as our brains are processing information, just as the monitor doesn’t show the computer program’s ones and zeroes but only simplified representations.
When we think of the dog we just perceived, we don’t remember the sensory data at the fine-grained level at which it was transduced by our eyes and ears. We consciously entertain only fleeting images with certain concepts in the background of our mind, to contextualize and plan for dealing with the memories. We think in linguistic terms that stand for their more complex inner causes, just as the icons on the computer screen stand for more elaborate codes in the software.
There’s no mystery about where consciousness is found in the full computational system that includes the user who’s viewing the screen, moving the mouse, and typing on the keyboard. It’s the user, who exists independently of the computer, who’s conscious.
So suppose you have an automaton, which is to say a robot controlled by a computer in its head. The computer’s humming along and operating the robot’s limbs, but there’s no screen or monitor attached to that computer to display icons for any user. Where could consciousness be in that system?
That’s the same as asking where consciousness is in us, assuming our brain is like a computer, since there’s no screen attached to the brain and no obvious external user who’s operating our mental programs and body. But we are conscious! That’s the mystery.
The full computer metaphor thus has the unsettling implication of dualism or some simulation scenario. Perhaps we’re conscious only because we really are users who exist independently of our bodies in the same way as when you’re using the computer in your office, you’re outside that machine.
This scenario is far-fetched and has little explanatory value. The more independent the user is from the machine, the harder it is to understand how they could interact. The computer’s user physically interacts with the screen, mouse, and keyboard, but the ghostly user of our brain would be nowhere in the physical universe. Again, there’s no organic screen attached to the brain, displaying an analogue of the Windows operating system. And in explaining the mentality of the conscious ghostly users, we’d have to resort to the homunculus fallacy and the infinite regress.
The brain does seem to process sensory information, and we are conscious. Moreover, the brain seems responsible for our mentality and our conscious states. Yet those states appear immaterial and inherently divorced from the neural intricacies. We seem to be ghosts in the machine who know that that’s absurd.
The hard problem of explaining consciousness
We know more and more from cognitive science about how the brain generates conscious experience. The brain processes signals from the senses hierarchically, forming networks of short-term memories that sustain a continuous experience of the outer world. As John McCrone says in Going Inside,
the hippocampus and the prefrontal cortex give the brain two peaks of processing, but the style of convergence is very different in each. In the prefrontal cortex, there is a broad display. The prefrontal has the luxury of space to take the focus of each moment apart and map it onto a potential array of responses. The hippocampus, by contrast, is not for exploring the moment but for recording the results, for fixing what ended up mattering the most for future use. In one direction, the meaning of the moment is fluidly expanded; in the other, it becomes almost digitally concentrated — a frozen frame of information.
How are these processes integrated to produce a subjective flow of mental states? McCrone points to working memory as the key, since memory can
work bottom-up to extract a focus from the moment or top-down to reinstate a pattern of activity. And the ability to juggle a number of memories or intentions at one time was made possible by having a high-level area like the prefrontal cortex which was remote enough from the fray to keep the gist of the idea going. The sensory and the motor hierarchy might have to drop their representation of a particular state to make room for mapping new states, but the prefrontal cortex could maintain a template — a set of pointers — which could be used to stir the intention or memory back to life.
These kinds of neurological details explain the transitions between mental states, their timing and contents, and how one state flows into the next and may be interrupted by background fluctuations only to have an underlying state reappear as we seem to focus our intention. But neurology leaves untouched what the philosopher David Chalmers calls “the Hard Problem” of explaining why these neural states are accompanied by qualia, by the feeling of what it’s like to be in them.
For Chalmers in The Conscious Mind, the problem is that there’s no necessary connection between the first- and third-person perspectives, since we can conceive of a zombie, for example, a being physically identical to us that nevertheless lacks inner experience. So there will always be an open question of why this or that physical process or arrangement of materials, as in a brain, however complex it may be, is accompanied by conscious experience.
Behaviorists like Gilbert Ryle and other materialistic philosophers reply that it begs the question to speak of consciousness as a single thing that has to be explained all at once. Instead, there may be many types of conscious experience, each being identical to a different brain process or bodily behaviour. Instead of a ghost in the machine, there may be only the machine’s components and subsystems. Talk of consciousness as something additional to the machine may amount to a category error.
But we could just as easily say that the types of pragmatic, objective questions scientists permit themselves to ask, such as how such and such a system works, are bound to be irrelevant to the mystery of subjective experience. To say that the first-person perspective is just the same as the third-person workings of the brain could likewise be a category mistake, a confusion of terms. The question is whether subjectivity and qualia are real. If they are, the objective stance of science can be expected to miss the point (as the philosopher Thomas Nagel pointed out).
We’d be prone to discrediting subjectivity especially if we were the victims of a lingering cult of scientism, in which case we’d trust that scientific, third-personal methods answer all legitimate questions of fact, and we’d fallaciously appeal to the authority of science to silence objections about any leftover data about what it’s like to have first-personal experience.
Attention, familiarity, and second-nature uses
Perhaps the philosophical problem of consciousness will be answered not by losing ourselves in the details of how the brain works, but by contemplating the brain’s overall structure and function. The brain processes sensory information by sending signals between hierarchies of neurons. There are roughly 86 billion neurons in the human brain, affording some 100 trillion neural connections.
Moreover, the brain doesn’t sense itself directly; again, there’s no sense organ in the braincase pointing at the brain, nor is there an equivalent of a computer screen in the skull that lets the brain view even simplified representations of its internal workings. Instead, the brain evolved to cope with the external environment, which is why the brain is wired into that environment via the sense organs, most of which are handily located together in the head.
Keep in mind the outwardly directed, convoluted neural hierarchy, as I relate a curious, perhaps telling feature of consciousness, which is how we often focus our attention depending on how familiar we are with parts of the environment. The more familiar we find something, the more its use becomes second nature, and the more we treat it as an extension of ourselves. The more novel or surprising something is, the more we focus our attention on it.
For example, you might take the same route as you go for walks around your house. Perhaps there’s a ravine nearby just off your route, into which you’ve never ventured despite having passed it by hundreds of times over the years. After walking on the same sidewalks and past the same driveways over and over again, you lose interest in them and take them for granted, while the oddness of the ravine seems to call out to you, tantalizing you with its novelty. Were you to step off the well-worn path, slide down the hill into the ravine, and explore the trees, stream, and wildlife for the first time, you might find yourself experiencing the world again as a child, your eyes wide with wonder.
This fluctuation of attention is even more pronounced in your house or your car, where everything is so well known that you treat these things as extensions of your body. Using them becomes second nature to you. Similarly, when you first get braces on your teeth, the metal feels strange since your tongue treats it as an intruder in your mouth. Gradually you get used to the feeling of the braces and you come to think of them as part of your teeth.
But the primary example is our use of language. We become fluent in a language when we’re so familiar with its syntax and vocabulary that we literally think in its terms. Our conscious mind somehow slips into those tools so that we take their meanings for granted and no longer notice their arbitrariness, whereas a foreign language strikes us as so many empty noises or scribbles.
Clearly, the same happens with our bodies, since as babies and children we have to get used to walking, running, sitting on a chair, holding a fork, riding a bike, and so on. At that young age, we don’t notice how alien our bodies seem because the whole world feels magical. Only when our bodies change during puberty, by which time our critical faculties have sufficiently developed, do we begin to wonder at our strangeness.
Consciousness as a brain-generated limbo
Notice that when something becomes familiar and its use is second nature, you still respond to it but you do so unconsciously. When you walk on the familiar sidewalk, your attention can be elsewhere, such as on what you want to eat for dinner, because your brain runs your mental walking program in the background, calculating distances and keeping a peripheral eye out for obstacles.
The shifting of your focus away from the sidewalk is a kicking-out of your consciousness from the mental gymnastics you’d have to perform were you to pay attention instead to the shades of gray in the concrete, to the weeds growing at the edges, and to how to put one leg in front of the other to walk from one point to the next. Suppose you encounter dog excrement on your path that snaps your focus away from your planned meal and to the sidewalk, as you manage just at the last minute to step over the poop. Perhaps for a while afterward, you find the routine of walking slightly awkward because your consciousness has gotten in the way of the automation of your movements.
In the same way, if you focus on the act of breathing, the unconscious routines suddenly come under your conscious control and you may find that you breathe in fits and starts as your attention shifts with the various other thoughts that pop into your head.
Can you imagine, then, if we did have conscious control over each neural firing in our brain, and we could sense every part of our brain and decide how to think or feel at the cellular level? Unless we knew what each neuron is for and could foresee the consequences of all the trillions of signals our brain is capable of, that total control would reduce us to a chaotic jumble of ineffective reactions. We’d be like a squirrel trying to pilot a space shuttle.
Evidently, then, consciousness acts as a limbo in the brain, as a buffer zone that separates certain spotlighted mental states from the brain’s unconscious labours. Some cognitive scientists call this limbo a “global workspace,” as though different parts of the brain competed for access to a central blackboard.
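To make the blackboard metaphor concrete, here’s a toy sketch in Python (my own illustration, not any cognitive scientist’s working model): specialist processes post candidate signals tagged with salience scores, the most salient candidate wins the workspace, and the winner is broadcast back to the rest of the system.

```python
# Toy sketch of the "global workspace" metaphor: specialist processes
# compete for a single central blackboard, and only the winning signal
# is broadcast brain-wide. (Illustration only, not a neural model.)

from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # which specialist process produced the signal
    content: str     # what the signal is about
    salience: float  # how urgently the signal demands attention

def compete_for_workspace(candidates: list[Signal]) -> Signal:
    """The most salient candidate wins access to the workspace."""
    return max(candidates, key=lambda s: s.salience)

candidates = [
    Signal("vision", "dog excrement ahead on the sidewalk", 0.9),
    Signal("memory", "what to eat for dinner", 0.6),
    Signal("proprioception", "left foot mid-stride", 0.2),
]

winner = compete_for_workspace(candidates)
# The winner is "broadcast": every other process now gets to react to it.
print(f"Conscious focus: {winner.content} (from {winner.source})")
```

Notice that nothing in such a sketch feels like anything; the competition and the broadcast are exhaustively third-personal, which is just the Hard Problem restated.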
But this understates the apparent isolation of the blackboard or spotlight, the sense that consciousness is precisely nowhere, that qualia are immaterial and unnatural. Even the temporality of a stream of consciousness is confused by the jumbled overlapping of mental states as they crowd for attention, the working memories contextualizing the later thoughts so that you seem to be having multiple thoughts at once.
Again, unless you’re thinking about what you happen to be sensing at the moment, the feeling of being in a conscious state is one of being disconnected from everything else. Even if a memory or some random firing in your brain triggers your thought about your upcoming meal, the conscious aspect of that thought is its private, subjective form.
You feel as though you, alone in the entire universe, were having that thought about your dinner plans. You feel that if you were an effective liar, not even God could read your mind. You could keep the thought a perfect secret because it exists nowhere in anything subject to objective, third-personal discovery. Yet you in the subjective sense, as the conscious bearer of your mental states, are likewise nowhere to be found.
Even if each conscious thought were correlated with a unique brain state, observing your brain would reveal only the objective content of your mental state, not what it feels like to be thinking about dinner while you’re walking home. Your brain may be sustaining your mental states, but the brain never feels like you, the bearer of your mental contents, because we’re (fortunately) blind to the brain’s inner workings.
The limbo from the labyrinth
The strangeness of consciousness lies in the fact that conscious states seem immaterial, which is to say they seem to occur precisely nowhere, in a limbo that’s somehow generated by the brain. If you’re thinking of an elephant, there’s nothing elephantine about the brain state that sustains that thought. Just as the meaning of the word “elephant” isn’t reducible to that arbitrary set of letters, the qualitative aspects of our mental states obviously aren’t the same as the objective properties of the underlying brain states.
But perhaps we can begin to solve the Hard Problem by noting how the limbo feel of consciousness is likely produced by the very labyrinthine nature and outward-directedness of the neural hierarchy. The reason our conscious selves feel like ghosts floating roughly in our heads is that there’s no chance of disentangling the neural source of any particular mental state.
There are so many neurons and neural connections in each brain, all interconnected by an elaborate, sloppily evolved hierarchy of lobes, layers, and subsystems, that each brain state is bound to feel lost in the jumble. We can’t directly observe any of that machinery, because the brain is blind to itself — as R. Scott Bakker points out in his “blind brain theory” — and is wired to process information streaming in mainly from the outer world. We have no Ariadne’s thread of the kind that saved Theseus from the labyrinth. Our mental states come and go — often randomly as far as we can tell — as they crowd for our attention.
The brain’s very complexity and primitive, jury-rigged capacity for scrutinizing itself ensure that the product of its outward-directed processing, its picture of the environment, will be relatively complete, whereas its picture of itself will be a scratchy outline by comparison. The brain presents the environment to us as an ordered, three-dimensional field of sights, sounds, and other sensations because that’s what the brain evolved to do. Sometime in the Stone Age, the human brain hijacked itself, turning its intelligence to the task of figuring out what we really are and could be. Through introspection, we could think about our thoughts and form myths and mental models about our true identity.
We seem to be estranged interlopers in nature in part because we can’t complete the complex neurological story that would objectively account for any of our mental states. Our thoughts seem to arise from nowhere because the brain is a Gordian knot that even neuroscientists can’t easily unravel with their brain-imaging machines. The brain state that rises to the level of conscious awareness is like a boy who suddenly freezes in his tracks as he realizes he’s lost in a sprawling maze.
The conscious thought seems immaterial or disconnected from anything localizable as a third-personal object because the thought is a signal sent up in a maze that can never be internally solved, so the act of signaling, of lighting up our synapses, is futile. Once a neural pattern predominates in the brain’s total activity, that pattern stands out from the flurry of other neural activities. But because that flurry is so rich and convoluted — billions of neurons, trillions of connections, all active throughout our life even when we’re unconscious — the pattern feels private: ghostly, intangible, isolated, lost in limbo.
Conscious limbo spilling into technological mastery
To that extent, qualia are by-products of neural complexity. But there’s also an evolutionary role, deriving from how we survive by using our intelligence to modify the environment for our benefit. This could be done mechanically with no consciousness, but we can appreciate how the limbo or qualia would facilitate the use of tools to master the environment.
Once the brain’s complexity has the propensity to generate lost mental states, ones that feel conscious in the sense of being “free” or without apparent foundations, we begin to contrast nature with the seemingly abstract, ghostly quality of our inner life. At first we were likely mesmerized by the wilderness since we could at best infer that natural processes are driven by spirits that are comparable to those that seem to inhabit us.
The wilderness is also dangerous and unforgiving, so we had to keep our guard up, which taxed our capacity for conscious alertness. That is, we had to be ever-watchful for prey and predators, tune into seasonal shifts, and exploit any opportunity for advantage, or else face starvation, illness, and death.
As a result, we used our intelligence to improve our chances by producing equipment that compensated for our bodily shortcomings. The use of fire, wheels, writing, and the like set us on a Promethean or Faustian path toward the total transformation of nature into the artificial wonderland we tend to prefer.
So although we were once fascinated by nature, by necessity we created an alternative environment, a refuge from the wilderness that serves us and thus automates our behaviour. We needn’t be hyperaware of taxis or sidewalks or computers but can multitask because the use of these technologies becomes routine.
That’s because the artificial environment we create is designed to cater to our demands so that the technologies present fewer obvious risks than those posed by the wilderness. Although we readily interpreted nature as being animated by conscious spirits, that animism was a defense mechanism to avoid the terror and disgust we’d have otherwise felt towards nature’s blatant inhumanity and indifference to the preferences of living things.
Consciousness, then, facilitated this survival strategy, enabling us to dominate most environments on the planet, by providing the illusion of a supernatural domain, which compelled us to contrast the outer and inner worlds and to attempt to make the former conform to the latter. Whereas animists had mainly just speculated that nature is intelligently and socially driven, we eventually used the illusion of consciousness that drove that myth to create an artificial world that practically vindicates animism by actually operating according to intelligent design.
The sense of alienation we feel with qualia, of being divorced from the rest of reality and of being ghosts that secretly possess and pilot our bodies, was easily turned into the anthropocentric conviction that we’re supreme, that we’re isolated and untouchable because we’re meant to act as overlords. Our evolutionary strategy was therefore reinforced by the arrogance that likewise spared us from existential fears and doubts and that drove us to maximize our technological advantage.
Building qualia
This is all pretty abstract, so let’s see if we can gain some further insight into qualia with a little narrative that applies the foregoing explanation. Suppose we have a signaling machine. No one programmed this machine since it’s organic and it evolved by natural selection. The machine couldn’t survive by itself, so it had to defend itself by being safely enclosed in a body. The machine is the brain that controls the body.
Inside that machine there are many tiny parts that generate and receive chemical signals. These signals aren’t random, though, since the machine is wired into the environment by its sense organs, which feed the brain information for it to chew on.
For example, the machine sees a tree, and some of the machine’s parts — we can call them “neurons” — signal to the rest in such a way as to form a neural pattern that indicates a tree is in view. That pattern isn’t randomly correlated with the existence of trees since that neural activity cascades into all sorts of related information that comprises the brain’s knowledge and experience of trees. The signal that indicates the tree is meaningful and useful to the machine.
So a series of signals unfolds: “There’s a tree there, a big one. Healthy green leaves, as revealed by the sunlight on this summer day. A tall tree that could be climbed, thanks to those low, thick branches…” And the machine might drift off into memories of having climbed trees, which makes for its knowledge that the branches could support its body’s weight. And so on.
The brain’s signals amount to its theoretical and experiential knowledge of trees. All of that background knowledge requires many neurons, which in turn can form a great many patterns of signals. These patterns are constantly lighting up in reaction to what’s happening to the brain and its body, shading off into each other, providing context or distractions for the signals that temporarily dominate the neural landscape because of their greater apparent relevance.
Those signals aren’t just academic as far as this machine is concerned, since they’re intertwined with pains and pleasures associated in this case with trees. Perhaps when this intelligent machine was young it was once lost in a forest at night and was scared by the gnarled, shadowy tree trunks and the sound of the wind howling through the branches. This machine’s knowledge of trees means something to it in a thick sense since its signals have emotional resonance.
The pains and pleasures are just powerful chemical signals that hijack the brain, fueling it with promises of what it very much wants or with fears of what it prefers to avoid. When the mechanical brain is in a state of intense pleasure, for example, the brain can hardly think of anything else since the pleasure acts like a drug or a parasite that takes over the brain’s ability to signal to itself.
Another reason all of this signaling isn’t idle is that the signals cascade into practical options of what the brain might do with the information, into potential plans for how its knowledge could be used to control its body and act effectively in the environment. Perhaps this brain might even dare to drastically alter the environment with techniques that are eventually optimized, leading to the invention of technology and to an artificial, intelligently designed, and brain-serving world.
So far, then, there’s no outstanding reason to assume this brain is conscious in the strong sense of having a ghostly first-person perspective. Nothing I’ve said in this section is meant to presuppose there’s something it’s like to be this particular machine that’s evolved to interact intelligently with the world. The challenge is to deduce from some such third-personal premises that this machine would indeed have qualia.
Applying the above, I think the turning point is the necessary convoluted complexity of the machine in question. Suppose there were a much simpler brain, a mere tree-detector. This machine has a single free neuron that lights up whenever it detects something that looks like a tree. This machine would be about as intelligent as a stud finder (a tool that senses the wooden beams behind drywall in a building). The tree-detector has no background knowledge or ability to do anything with its “awareness” of trees. The one sensor or free neuron lights up or it doesn’t, as the case may be, matching what it sees against its stored template of what a tree is supposed to look like.
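In code, such a detector would be almost nothing, as in the following hypothetical sketch (the features, template, and threshold are invented for illustration): a stored template, a crude similarity check, and a single bit of output, with no background knowledge for the signal to cascade into.

```python
# A mere tree-detector: one stored template and one "free neuron" that
# lights up when the input resembles the template. No context, no
# memories, no competing signals. (A hypothetical illustration.)

def similarity(features: list[float], template: list[float]) -> float:
    """Crude match score: 1.0 means a perfect fit to the template."""
    diffs = [abs(a - b) for a, b in zip(features, template)]
    return 1.0 - sum(diffs) / len(diffs)

TREE_TEMPLATE = [0.8, 0.9, 0.3]  # say: tallness, greenness, trunk width
THRESHOLD = 0.75

def free_neuron(features: list[float]) -> bool:
    """Lights up, or doesn't, as the case may be."""
    return similarity(features, TREE_TEMPLATE) >= THRESHOLD

print(free_neuron([0.7, 0.85, 0.35]))  # True: close enough to a tree
print(free_neuron([0.1, 0.2, 0.9]))    # False: nothing tree-like here
```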
If we turn this tree-finder into the more elaborate brain, we should begin to imagine that the latter would have qualia after all, because each of its mental states would be lost, and would feel lost, in a sea of competing waves. There would be something it’s like to be that brain’s signal of the tree because that signal would be both buoyed by and lost in the billions of other potential signals that swell and recede in the background.
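Extending that hypothetical sketch, we can at least depict the asymmetry: the same detection signal, now momentarily the strongest pattern in the system, remains a vanishing fraction of the total activity surrounding it.

```python
import random

# The tree signal embedded in a flurry of background activity. The
# numbers are arbitrary stand-ins; a real brain has tens of billions
# of neurons, far more than we'd bother simulating here.

N_BACKGROUND = 1_000_000
background = [random.random() * 0.5 for _ in range(N_BACKGROUND)]
tree_signal = 0.95  # stronger than any single rival signal

total_activity = sum(background) + tree_signal
share = tree_signal / total_activity
print(f"The dominant signal is {share:.8f} of total activity")
# The signal outcompetes every rival individually, yet it's a vanishing
# fraction of the whole: buoyed by the background and lost in it.
```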
The privacy or apparent secrecy of conscious states, the fact that there’s something it’s like to have them is largely explained, I think, by their asymmetric relation to the rest of the brain: one signal standing out in the crowd, one spot in a labyrinth that includes trillions of potential spots, none of which unravels the whole. We feel conscious when, in signaling some experience, we entertain also the background sense that we’re thrown into this particular experience, as Heidegger put it.
We have our experience in a subjective sense when we intuit that this experience arises from an untraversable labyrinth and that it will dissolve and be replaced by yet another experience. Being conscious is like finding and climbing a tree in a maze that affords a partial view of the context, and like knowing that you can hang onto that vantage point only for so long before you’ll have to look for another one.
Qualia and existentialism
The particulars of qualia, then, are generated by the brain’s complexities. As to why neural signals feel conscious, or why they’re accompanied by qualia, this is due mainly to the rootlessness caused by that very convoluted complexity. Any human mental state is effectively a point in a labyrinth of neural associations, a point that represents a state of being — and potentially feeling — internally lost.
The feeling of what it’s like to be in a conscious state is thus largely an inchoate fear that we can never retrace all our mental steps, that our thoughts and feelings seem to come from nowhere, that we don’t know fully what we are because our brain might as well be an impenetrable box, as far as our conscious self is concerned.
Moreover, the way we shift in and out of states of alertness, operating on autopilot and letting our unconscious mind take over when circumstances are so familiar that we can treat them as second nature or as extensions of our body, provides an evolutionary reason for consciousness — not just for cognitive functions but for qualia. The primary sense of alienation, of being internally groundless, ghostlike, and contrary to the outer world (as the brain presents the latter) drove us to survive by transforming nature.
We could adapt to any environment because our inner strengths were universal. One of these strengths was our intelligence, the use of which may not have required consciousness. But another strength was the accidental emergence of the feeling that our mental states are oddly rootless. That latter self-image was a constant companion, driving our contempt for materiality and our obsession with so-called spirits and eventually gods and angels we could ally with to rationalize our savagery and technological domination.
We might still be awed by our conscious nature, but we shouldn’t infer we’re at a complete loss as to how a material thing could be conscious. To suggest we’re mystified is to support the dualistic narrative that’s evidently the basis of the evolutionary explanation of how consciousness emerged.
Consciousness emerged to enable an intelligent species to survive by lording it over the other beasts of the earth, by deluding ourselves into thinking our accidental alienation from nature reflects a supernatural validation of our superiority and our right to rule the world. To be conscious, a material thing must be complex enough for the convolutions of its inner communications to outstrip the thing’s capacity to orient itself to the environment.
In that case, the signals that “rise to the surface,” being the most useful guides at the time depending on what the brain’s body is doing, stand out paradoxically as evidence that this creature is fundamentally lost even as each inner signal is supposed to be a guide. Too many cooks spoil the broth, and too many neural signals lead to as much confusion as clarity. Our thoughts direct our behaviour, but in so far as we focus our attention on the fact that we’re having some such thought, we feel bewildered because the neural basis escapes us and we can always dread losing one thought to the next, knowing that none is foundational or complete.
In short, the essence of consciousness is the central experience of existentialism. The less in touch we are with our primal fear and awe, and with our absurd lack of metaphysical foundation in life, the more hollow and machine-like our quality of consciousness will be. Ultimately, then, the problem of qualia is an ethical one, the question being what lost creatures should do.