DRMacIver's Notebook
Subsetting Life
From Perplexities of Consciousness by Eric Schwitzgebel, page 125:
Look around a bit. Consider your visual experience as you do. Does it seem to have a center and a periphery, differing somehow in clarity, precision of shape and color, richness of detail? Yes? It seems that way to me too. Now, how broad is that field of clarity? Thirty degrees? More? Maybe you are looking at your desk, as I am. Does it seem that a fairly wide swath of the desk (a square foot?) presents itself clearly in experience at any one moment, with the shapes, colors, textures all sharply defined? Most people, when I ask them, endorse something like that. They are, I think, mistaken.
This is part of a broader segment where he walks you through realising just how small your field of clear vision actually is. It looks larger because your eyes are constantly saccading all over the place - vision is an active process of looking rather than a passive receipt of a visual field - and you are implicitly using that activity to fill out the impression of a wider visual field.
This is an example of one of the interesting things I’ve been coming to terms with over the last year or so of personal development: the fact that people can be and usually are completely and totally wrong about their internal and embodied experiences.
(As per usual when I say “people are completely and totally wrong” I want to emphasise that “people” here very much includes me. I’ve been learning this in large part through becoming fractionally less wrong about my internal experience.)
I think one way to understand why we are like this is to think about C++.
(Yes, sorry, this is going to be another slightly strained analogy for how brains are like software development)
C++ is an utter monster of a language. It’s large, it doesn’t so much have sharp edges as it is made from razors glued together, and it’s built out of a gradual accretion of features that made sense at the time. The latest C++ standard is 1605 pages long and also costs CHF198. Nobody actually knows the whole thing, because every time someone understands C++ in its completion their brain is eaten by creatures from the outer dark. Fortunately this has only ever happened twice.
And yet people write C++. A lot.
How? Well, they don’t even try to write using the whole thing. They pick a subset of it that is sufficient for their use case and is more or less manageable, and they use that. Often they don’t even fully understand the bits they use, because those bits have weird edge cases that basically never crop up - or don’t crop up at all unless you use features of the language outside of your subset.
Typically when people write C++ together they mostly agree on which subset to use - different people will have different preferences, but they’ll largely agree because when you use something your coworkers particularly dislike (even if it’s only in “your” code) they’ll start having angry words with you.
How are these subsets chosen? Generally, conservatively. You write the bits you know. If someone tells you about something neat you might start using it. If something bad happens, you learn to avoid that feature in future (often without really being able to explain why or what happened). Sometimes standards are codified, but more often than not there is a rough and messy process of negotiation and fumbling around that leads to something approximating a reasonable dialect.
Thus, even though every C++ project potentially has the entire vastness of C++ available to it, in practice its developers spend the entire time in a tiny subset of it that they are able to cope with.
One thing this subsetting enables people to do is have completely wrong mental models of how things work that just happen to work on the subset in question. Often this is a very good thing because these models are so much simpler than the reality. Thus as well as not requiring you to understand things you don’t use, subsetting helps you understand how to work with the bits that you do use by letting you ignore complexity.
Until it doesn’t. The nice thing about these simplified but wrong mental models is that they’re simple. The bad thing about them is that they’re wrong. And then you run into an edge case and discover that the reality is so much weirder and more broken than you had assumed it to be.
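To make that concrete, here is a minimal, hypothetical C++ sketch (my example, not one from the post): the simplified mental model “a reference into a std::vector stays valid” works perfectly well inside a small subset of usage, right up until a push_back forces a reallocation and the edge case bites.

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values;
    values.reserve(2);           // ask for capacity for two elements

    values.push_back(1);
    int& first = values[0];      // mental model: "first" refers to element 0, forever

    values.push_back(2);         // still fits within the reserved capacity,
    std::cout << first << '\n';  // so the model holds: this prints 1

    values.push_back(3);         // on a typical implementation capacity is exactly 2 here,
                                 // so the vector reallocates its storage...
    // std::cout << first << '\n';  // ...and "first" now dangles: undefined behaviour
}
```

The wrong model is genuinely useful: it is simpler than the real rules about iterator and reference invalidation, and in a codebase whose subset never grows a vector while holding references into it, you can go years without noticing it is wrong.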
Anyway, to cut a long analogy short, life is like this: The actual range of human experience and capability is much more weird and complicated (and, occasionally, richer and more powerful) than we typically experience it as, because trying to keep on top of that all the time is impossible. Instead we’ve each learned a little subset of the possibilities available to us that mostly works, and we develop these simplified and wrong models of how it works (such as treating our field of clear vision as a simple static view of our visual field rather than a dynamically updated approximation to it). These work great, until some jerk does philosophy to us and forces us to confront how complicated being a human being actually is.