I wanted to tell you about some experiences I’ve had lately in meditation. Of course, I’m going to interpret these experiences in computer science terms – specifically, I’ll use predictive processing to talk about ‘ego’ and ‘no self.’
I've been using Sam Harris's excellent 'Waking Up' meditation app for the past several months. I've had an off-and-on meditation habit for almost a decade now. Before this app, I had some, but not much, to show for my meditation practice. These past few months have been the longest I've maintained a consistent meditation habit, and also the most I've noticed the habit providing benefits in everyday life.
Because of this app, I feel like I've become more aware of some of the lowest-layer frames in the consciousness stack. These frames aren't function calls so much as 'stories' that run in the background and shape all my perception, exactly in the manner predictive processing predicts. Here's a rough drawing of the frames, as I understand them:
Each of these 'frames' consists of a "story" being told about sensations. These are more like "metastories," because each one describes a huge space of possible stories. The "space" metastory can generate all kinds of stories about things existing at distances and angles. The "ego" metastory can generate stories about things being 'inside' or 'outside' of me. Consistent with predictive processing, these metastories filter out signals that don't align with their grammar and fill in the gaps, presenting "missing" signals as if they were present.
We all have blind spots in our eyes: regions of the retina where the optic nerve passes through and there are no photoreceptors. We don't notice these most of the time because our brains 'fill in' the missing details for us. This isn't just happening with our vision; it happens with all of our senses. Our brains use our existing understanding of the world to fill in the blind spots in our perceptions. What we experience day to day is not directly sensed reality, but rather a hallucination our brains create for us, based both on our existing beliefs about how the world works and on the sense data we receive.
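To make the 'filling in' idea concrete, here's a minimal sketch in Python. It's purely illustrative, with made-up numbers and a function name of my own invention: wherever the data is missing, the prior supplies the percept.

```python
import numpy as np

# A toy version of "filling in": where sense data is missing (the blind
# spot), perception falls back entirely on the prior expectation;
# elsewhere, data and prior are blended by confidence in the data.

def perceive(sense_data, prior, confidence):
    """Blend bottom-up data with top-down expectation, weighting by
    how much confidence we place in the data at each location."""
    return confidence * np.nan_to_num(sense_data) + (1 - confidence) * prior

sense = np.array([0.8, 0.7, np.nan, 0.9])   # nan marks the blind spot
prior = np.array([0.8, 0.8, 0.8, 0.8])      # "the wall is uniformly lit"
conf  = np.array([1.0, 1.0, 0.0, 1.0])      # zero confidence at the gap

print(perceive(sense, prior, conf))          # -> [0.8 0.7 0.8 0.9]
```

The gap reads 0.8, not "missing": the percept at the blind spot is whatever the prior expected, which is the whole point.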
The one frame that doesn't have a story to it is the bottom frame: raw sensations arising in consciousness. You might argue that this isn't even a frame at all, but rather how the memory space looks before any frames are applied to it. The stack frames that arise – the stories we tell ourselves – are just appearances in memory, not really separate from the memory space itself.
To this effect, Sam keeps pointing out “Everything you experience is an arising in consciousness.” This idea lines up with practical experience. Right now, I can feel my fingers typing on the keyboard, music playing from my computer speakers, my butt on my chair, the cool air in the room, and a muted, positive, calm emotional field. All of these phenomena are, to me, visible only as arisings in my own consciousness.
In computing language, it’s as if consciousness is nothing but a space of working memory in which things can appear. These stack frame metastories appear, and begin to shape the rest of our perception. Most of the time I don’t even notice that the stack frames are there because I take their outputs literally. My working attention is mostly focused on the top stack frames – i.e., elements of consensus reality – and so I barely notice the frames at the very bottom, because my attention almost never goes there.
The meditation practice consists largely of drawing my attention down to these lower-level stack frames and bringing them more clearly into my conscious awareness.
For example, here's what a "core dump" might look like for me when I first sit down to meditate:
Of course, this isn't what I experience, because of how attention works. Attention causes some areas of working memory to be more visible and obvious than others. If the picture above represents the entire contents of memory, the experienced contents of working memory are more like this:
Most of my attention and focus is, at first, on the rambling output of the "consensus reality" stack frame. I'm not sitting there thinking "People exist and have properties and people think things" – remember, that's the hidden metastory. The contents of most of my working memory at that moment are a mundane running commentary on things that are happening or might happen or did happen: "There's a bird, it's warm here, my butt hurts, how will I join those datasets efficiently, lunch is soon, I should exercise later, what will happen this weekend," and so on.
When Sam (through his app) asks me to focus on various sensations and feelings, it’s like the scene changes to this:
As I keep drawing my attention back to my breath, other sensations start to become more obvious. At first, it’s still ‘consensus reality’ descriptions of the sensations, but gradually, the sensations themselves become more clear:
At first, 'there is a bird singing near me' feels like a story that describes a single sensation. As I keep breathing and my mind settles down, this 'single' sense eventually starts to diverge into a number of connected but distinct sensations:
- The sound of the bird
- Judgements about the sound (“pleasant” or “unwanted”)
- A sense that the sound is outside of me
- A sense that the sound is coming from somewhere above me, to my left, slightly behind me.
From a computational understanding of the brain, this makes sense.
When I hear a bird, the signal comes into my brain via vibrations in my eardrums. The bird's location relative to me must be something my brain computes from this incoming data. The relative amplitudes and phases of the signals detected in both ears can be used to compute where the bird is, relative to me. But I can't possibly sense that information directly, since there is no such thing as a hardware 'distance' sensor – only pressure, light, temperature and vibration sensors. Distance must be computed – which means the feeling I have that "a bird is singing 30 meters away from me" is a mash of concepts being computationally inferred from much, much more primitive incoming signals. My knowledge that birds exist, and that they sing a certain way, is involved in the moment-to-moment experience of hearing the bird sing.
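As a rough sketch of the kind of computation involved, here's a textbook interaural-time-difference estimate. This is not a claim about the brain's actual algorithm, and the head-width constant and function name are my assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
EAR_SPACING = 0.21      # m; a rough head width, assumed for illustration

def azimuth_from_ears(left, right, sample_rate):
    """Estimate a sound source's horizontal angle from the interaural
    time difference (ITD) between the two ear signals. The 'hardware'
    senses only pressure waveforms; the angle must be computed."""
    # Find the lag (in samples) at which the two signals best align.
    correlation = np.correlate(left, right, mode="full")
    lag = np.argmax(correlation) - (len(right) - 1)
    itd = lag / sample_rate  # arrival-time difference, in seconds

    # ITD ~= (EAR_SPACING / SPEED_OF_SOUND) * sin(azimuth); invert it.
    ratio = np.clip(itd * SPEED_OF_SOUND / EAR_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))  # 0 degrees = straight ahead
```

Nothing in the input is an angle; the angle exists only as the output of a computation over two pressure waveforms.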
Just as the distance of the bird is being computed, so is the bird's "outsideness" – the sense that this bird is something outside of me, different from me, other than me. The 'outsideness' sensation is probably as close as I can find to a 'binary' sensation in the brain.
One of the ways Sam Harris tries to get you to see that everything you experience arises in consciousness is to have you focus on something such as your face. He asks, "Are you behind your face? Is there a you, separate from your face, observing it? Or is the sensation of your face just appearing in consciousness?" With these questions, I think he's trying to highlight the "spatial" and "ego" metastories.
He asks you to "try to see all of these things as just arising," but that's something I haven't yet been able to do. The feeling that I exist behind my eyes, between my ears, looking out at the world, didn't easily go away. No matter how much I tried to make that feeling stop existing, it remained. The feeling that there is a "me" in a "world," separate from it and observing it, kept persisting. Those orange bubbles wouldn't go away no matter how much Sam asked me to consider that they are just more thoughts.
Fortunately, I've learned some things from this app and my meditation habit. Instead of trying to make a feeling go away, I've found it's much more effective to inspect the feeling as closely as possible. Inspecting the unwanted feeling often reveals that 'the feeling' consists of multiple signals bundled together; "I don't want this" is usually the part that feels unpleasant, while the feeling itself is often just there. This might sound ridiculous, because if you drop a hammer on your toe, clearly the pain of the hammer hurts, right?
I'm not so sure anymore. When I'm in a good mood and I suddenly get hurt, there's an instinct to laugh. It's wonderful to stub my toe and burst out laughing as an automatic reflex, instead of spewing curses and contracting all the muscles in my face and chest, which is what can happen if I'm not in a good mood.
Seeing how hard it is to stop my brain from automatically chopping the world into bits and bundling different signals together has made me hesitant to assign properties to anything I experience without first looking very closely at it, trying to identify all of the discrete components of the signal bundle.
I used to think I was really bothered by loud noises in the house. I've since noticed that what's really bothering me is my failed attempts to make those noises not be present in my awareness. The noises themselves don't really feel bad; it's the immediate, involuntary reaction to the noise, a feeling of "this is outside of me and I don't want it and it should go away but it's not going away," that feels unpleasant.
As I've gotten better at noticing these lower-level stack frames day to day, I've also gotten better at catching that narrative as it arises, reminding myself, "I don't need to add even more noise to this scenario." As a result, the noises bother me less, because I've reduced the frequency of these involuntary responses.
It feels like realizing I had been pinching myself in response to unwanted external stimuli, and associating the pain of the pinch with the external stimulus. I can't stop the external stimulus, but I don't have to pinch myself, either. As I notice my fingers tightly crushing the folded skin of my stomach, my fingers relax a bit, and the pain of the pinch goes away. If my attention focuses too much on the "unwantedness" of the sound, I no longer notice my fingers performing the pinch, and so the pinch continues.
Sam's lessons repeatedly ask, "Is there a sense that there is a you experiencing these things?", and then point out that this sense is also something that arises in consciousness. The notion that "there is no self" is another key idea that keeps coming up in the app. In terms of the stack frames, what Sam tries to do is get you to notice that the self is just a frame. In other words, he's doing something like this:
Mindfulness acts as a narrative layer that runs on top of consensus reality, focused specifically on my mind and my thoughts, pointing downwards in the stack rather than out at the external world. It's like a stack-smashing attack that you execute against yourself, and you briefly see suffering as this awful virus that your own hardware is running.
I tried this same technique of attending more closely to the feeling of "I am behind my face," and it slipped, suddenly. The feeling didn't go away, but it disintegrated: things which were bundled together came apart. It was as if every piece of sensory data that came into my awareness was being "tagged" with an additional, computed piece of information: a distance. Consider the feeling of my eyeglasses on the bridge of my nose, for example. Closer inspection reveals this feeling to be a combination of "a feeling of pressure" and "a feeling of distance." The pressure is being sensed, the computed distance is tagged on afterwards, and these two distinct sensations are clustered together when I feel them. If I don't pay close attention, I don't notice it happening.
The 'this is me, this is not me' sensation is then just another tag, one that's computed largely from the distance tags. The 'ego' tag might be as simple as sorting all the distance tags. I think there are deep connections between value systems and sorting algorithms (a value system is a sort over possible outcomes), so it makes sense that a process sorting all the sensory data by its distance tags would give rise to something the brain sees as intensely valuable.
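Here's a toy sketch of that tagging pipeline. Every name and the boundary threshold are illustrative inventions of mine, not anything from Sam's app: raw signals arrive untagged, a 'space' frame attaches a computed distance, and an 'ego' frame derives a single self/other bit from it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sensation:
    kind: str                         # e.g. "pressure", "sound"
    intensity: float
    distance: Optional[float] = None  # tag computed by the "space" frame
    is_self: Optional[bool] = None    # tag computed by the "ego" frame

def space_frame(sensations, estimate_distance):
    """The 'space' metastory: attach a computed distance (in meters)
    to each raw signal. The estimator stands in for whatever
    inference the brain performs; here it's just a passed-in function."""
    for s in sensations:
        s.distance = estimate_distance(s)
    return sensations

def ego_frame(sensations, boundary=0.05):
    """The 'ego' metastory: a single bit derived from the distance
    tag. Anything inside the boundary gets labeled 'me'."""
    for s in sensations:
        s.is_self = s.distance is not None and s.distance <= boundary
    return sensations
```

Under this toy model, the eyeglasses pressure (distance near zero) lands on the 'me' side of the boundary, while the bird at 30 meters lands on 'other'; what I experience as one bundled feeling is the raw signal plus its tags, fused together.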
Likewise, if I’m mindful about my experience, an unpleasant sound and the desire for the sound to go away are both appearing separately. The desire goes away almost the moment I note it mindfully for what it is. The sound itself persists of its own accord.
It's like a delivery driver arrives every second, carrying multiple packages, all stacked on top of each other. The delivery as a whole 'feels' like one thing, but it's really a collection of discrete things. In this case, the unwantedness of a sound is like an additional package that comes along with the sound. If I refuse to open the sound package and keep trying to send it back, the unwantedness package just gets sent over and over. It's almost as if the unwantedness package arrives because I'm not opening the "sound" package all the way. The moment I acknowledge the "unwantedness" package, and then open the sound package (instead of trying to give it back to the delivery driver), the "unwantedness" package stops arriving. It's as if the act of trying to give the sound package back is what creates the "unwantedness" package, and this package, having nowhere to go, just gets sent back to me.
If you’ve been reading the blog thus far, you know what’s coming: predictive processing. How does that play into these experiences?
"I am a thing, in the room" would be a top-down, conceptual, predictive signal, organizing and filtering the data that comes in from the bottom. There is pressure, there is sound: those are the signals coming up from the bottom. The idea that "the self has a location" seems to be a concept informing and shaping those incoming signals. This "self" concept adds a layer of metadata to incoming sensations, encoding "this feeling of pressure is caused by your eyeglasses on your nose."
While sitting there, I have tried little experiments. I told myself "I am located on the ceiling," and tried to visualize a "here I am" feeling above me, and felt, however briefly, the flickering sense that my body was below me. I was unable to sustain this concept, probably because there is a very strong Bayesian prior inside me that says "my concepts of reality are attuned to reality," and this prior makes it hard to actively delude myself in the moment. I also tried to 'invert' the sense of distance, so that things on the left would 'feel' as if they were on the right, and vice versa. I had some success there as well, but only briefly, flickeringly. I tried to imagine the door in this office being painted red, and what appeared was a single frame buffer's worth of false pixels, the concept created by my imagination unable to compete in the predictive processing hierarchy with the "reality" concept.
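A toy calculation shows why the imagined red door only flickers. In a simple precision-weighted fusion (the standard conjugate update for a Gaussian mean; all the numbers here are invented), a weak self-generated signal barely moves a posterior dominated by a strong prior:

```python
def fuse(prior_mean, prior_precision, obs_mean, obs_precision):
    """Precision-weighted blend of a Gaussian prior and an observation:
    whichever side has more precision dominates the posterior."""
    precision = prior_precision + obs_precision
    mean = (prior_mean * prior_precision + obs_mean * obs_precision) / precision
    return mean, precision

# 0.0 = "the door is white" (a lifetime of evidence),
# 1.0 = "the door is red" (a weak, self-generated imaginative signal).
posterior, _ = fuse(prior_mean=0.0, prior_precision=100.0,
                    obs_mean=1.0, obs_precision=1.0)
print(posterior)  # ~0.0099: one frame of false pixels, then reality wins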
It’s actually relatively easy for me to apply the “location” concept to emotions and thoughts, which has strange results. The thought producing this sentence snakes up an oddly well-defined path on the left side of my chest, through the area where my heart and lungs interact. When I have this ‘internal experiences have a spatial location’ concept active, strong emotions can feel intensely localized, or else more like a cloud I’m in. When I’m not mindful, feelings usually just feel “there.”
Seeing how readily I can consciously turn the "emotions and thoughts have locations in space" concept on and off, and play with it in my own experiments, makes it clear that my internal perception of space and time must be concept-mediated.
By just continually breathing, staying present, and trying to notice as much as I can about my experience of consciousness, my notion of "self" has started to fundamentally shift. In the same way that the top-down concept of "space" applies an "angle/distance" metadata bundle to incoming information, I think the 'self' concept is applying a "self/other" metadata bundle to perceived signals. The feeling of breath inside my lungs, mild anxiety about the future, the tension of my tongue curling in my mouth: these are all receiving the 'self' label. Meanwhile, the sound of the whirring computer fan is getting the 'other' label. Keyboard clicking is an interesting boundary case; it feels roughly possible to shift it to either side of the boundary. I've been experimenting with this during my sessions, and have started to experience brief moments of calm and peace.
So when a sensation arrives and I think "I don't want this," I'm reinforcing the ego concept's predictive validity. The ego concept gains Bayesian evidence that yes, the world is composed of bits, some of which are wanted and others not. The concept is then 'helpfully' computed for me, automatically. But when the sensation arrives and I notice the unwantedness, noting "this is just something I'm doing," my brain stops refreshing the "ego" concept. It attenuates a bit.
I'm guessing that what happens to people who do this meditation often enough, for long enough, is that this boundary stops arising automatically, as it becomes less and less useful over time. The boundary itself probably starts to feel more and more like just another arising in consciousness. This one is interesting because it's literally a single bit. My guess is that, instead of the meditator trying to flip the bit or hold it at some specific value, one of the effects of meditation is that eventually this bit stops being computed, and thus the ego dissolves.
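If I had to sketch that loop in code, it might look like this toy model, where the update rule and every constant are invented for illustration: each "I don't want this" reaction counts as evidence for the boundary, and mindful noting withholds that evidence and lets the concept decay.

```python
def update_ego_weight(weight, reinforced, rate=0.1):
    """Nudge the ego concept's strength toward 1 when a reaction
    reinforces it, and let it decay toward 0 when it is merely
    noticed without being acted on."""
    target = 1.0 if reinforced else 0.0
    return weight + rate * (target - weight)

weight = 0.9                 # a lifetime of "I don't want this"
for _ in range(30):          # thirty mindful noticings in a row
    weight = update_ego_weight(weight, reinforced=False)
print(round(weight, 3))      # -> 0.038: the boundary attenuates
```

Whether or not the real mechanism looks anything like this, it matches the felt sense: the boundary doesn't get fought down; it just stops being refreshed.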