#57 On the key (and lock) to knowledge

Dear Mat,

This is a long letter, but I feel we’ve alighted on a fundamental, multifaceted philosophical difference, and since it’s what I’ve been researching full time for several years, I’ve got too much to say.

OK. The brain is an organ situated in a body (including a gut) and a world (including a culture). The modern brain is in the same kind of body as before, but in a very different world. And the world is where most of a brain’s knowledge is. Insert mind-blowing sound effect.

You said:

“the brain and gut will be the same in 20 years’ time, when all our theories about them will be refreshed in a new set of pop science books. I know you know this but I don’t think you’re taking this seriously.”

This is the issue. I don’t think science gets totally updated every 20 years, with everything before it thrown out. Deutsch is right to insist that we actually do get better explanations which don’t totally throw out the pre-existing paradigm. The neuroscience of today is much more compelling than that of yesterday because it is a better explanation that interweaves with other good explanations of different phenomena. (Also, the brain and gut both change based on our current knowledge, so they’re not totally independent of the theories about them. That independence is only possible in what I call the asymptote theory of truth, where we’re forever approaching some static “true knowledge” that we can never touch but can at least close the gap on. That’s the correspondence theory of truth in the philosophical literature, and I’m prepared to pronounce its time of death as the early 20th century.*) Most of the neuroscience that I’m saying is superseded was based in bad philosophy, indeed bad analogies, rather than in a lack of access to new technologies and new anatomical knowledge.

Everything you say about “the unconscious” in your letter is based on… what? You say it has knowledge of complex systems with infinite variables, it “clearly” has a role in creativity, it can make a heart beat (but can’t aid digestion?), it “tracks” hundreds of relationships — where does all this come from?

I’m claiming the neuro- and cognitive science of today (and, in classic Deutsch style, a lot of it is just about taking seriously the theories proposed a generation ago) totally upends what you think about the unconscious. The kind of knowledge it has isn’t representational in the way people assume. Consider the famous experiment in ethology by Tinbergen, where gull chicks know to peck their mum’s beak for food. How can they contain representations of mum, food, feeding, pecking, etc.? Famously, they don’t. When experimenters replaced the mum with literally anything that had a red dot on it, the birds pecked at that, and some differently coloured dots happened to work even better. The knowledge encoded in their nervous systems was of a very heuristic form, with the least content possible, to get a job done, which (in the environment in which it evolved) meant getting fed. The bird’s mind — like our unconscious mind — doesn’t know much. It doesn’t represent (except for a few key exceptions like the cortical homunculus). It just does. For that reason, if it’s in a new environment without red-dot-beaked parents, it’s fucked.
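If it helps to see how little content such a heuristic needs, here’s a toy sketch in Python (entirely my own illustration, not a model of actual gull neurology): the “knowledge” is a single trigger condition, and it succeeds or fails with the environment, not with any inner picture of mum or food.

```python
# Toy illustration: innate "knowledge" as a minimal trigger, not a representation.
# Nothing here models real gull neurology; it just shows how little content
# a working heuristic needs.

def peck_response(stimulus: dict) -> bool:
    """The chick's entire 'theory' of feeding: peck at red dots."""
    return stimulus.get("has_red_dot", False)

# In the evolved environment, the heuristic works: parents have red-dotted beaks.
parent = {"has_red_dot": True, "is_parent": True, "provides_food": True}
assert peck_response(parent)  # chick pecks, chick gets fed

# In a new environment, the same heuristic fires on a painted stick...
painted_stick = {"has_red_dot": True, "provides_food": False}
assert peck_response(painted_stick)  # chick pecks, gets nothing

# ...and ignores an actual parent that lacks the trigger feature.
odd_parent = {"has_red_dot": False, "is_parent": True, "provides_food": True}
assert not peck_response(odd_parent)  # chick starves right next to food
```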

We romanticise the unconscious perhaps even more than we romanticise pure rationality. Both are fantasies. Thinking the unconscious must know things on some deeper, primal level that we can only dimly guess at is understandable — System I always has an answer. That’s what it never fails to do: give an answer. But in domains where there aren’t red dots, it’s just off the top of our heads, literally.

We can’t watch some blobs moving around a screen without seeing a narrative with motives and characters. We can’t read a myth without interpreting it as meaning something important. We can’t watch a person in the witness stand give evidence without judging their personality. We can’t stare at our own face in a mirror in semi-darkness for more than a few minutes without our brain filling in the blank spaces with random imagery. We can’t un-see patterns once they’re suggested to us. We can’t look at clouds or stars without seeing shapes in them. We can’t see a person from a foreign land without fearing them. In all cases it’s not that our representations or maps are now mismatched with our environment and need updating. It’s that we never had maps. Most of the info we use is in the environment; we carry just enough complementary info to be able to do things in that environment. I think a lock/key metaphor is better than a map/territory metaphor: useful knowledge doesn’t look like the thing it’s about, and has to be much less detailed than it, but it does make something happen.
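To make the lock/key metaphor concrete, here’s a minimal sketch (again my own toy illustration): the key stores nothing that looks like the lock, let alone the house behind the door; it carries just enough complementary structure to make something happen.

```python
# Toy lock/key illustration (mine, not from any source): the key carries no
# picture of the lock's internals; it carries just enough complementary
# structure to cause an effect.

class Lock:
    def __init__(self, pin_heights: tuple):
        self._pins = pin_heights  # the environment's detail lives here

    def turns_with(self, key_bitting: tuple) -> bool:
        # The key "works" by complementing the pins, not by depicting them.
        return key_bitting == self._pins

front_door = Lock(pin_heights=(3, 1, 4, 1, 5))

key = (3, 1, 4, 1, 5)  # a few numbers: far less detail than the door itself
print(front_door.turns_with(key))              # True: it makes something happen
print(front_door.turns_with((2, 7, 1, 8, 2)))  # False: wrong complement, no effect
```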

To me the upshot is that we shouldn’t be trying to “uncover” the wisdom embedded in our ancient genes and predispositions. Not only has the environment changed, but they never had as much knowledge as we thought and the random answers they generated were simply not wrong enough to kill us. They had some heuristics that worked just well enough for just enough people to survive. Ditto for old myths. Ditto for all knowledge, which as we know can never be confirmed true. But some of it works better at transforming the world. Amazingly, in a world that is transforming at an increasing rate, it is this very development (the advent of explicit explanatory knowledge) that both drives that increase and is needed to navigate it. For that reason, it’s not so much that we hubristically jettison our implicit knowledge (the alleged “unconscious” or the “cultural unconscious” of myth) but that we recognise that even our most recent explicit knowledge will need further improvements all the time as the world is changed by that knowledge**.

Only System II can do it, with the help of what I might call System III: the media and cognitive technologies to which we outsource most of our thinking. Even when it seems like System I is responsible, actual experiments show that there is always less in it than we thought.

Even our emotions, how things feel to us, are largely externally hosted. Change or impoverish the context and you can get a person to not know whether they’re angry or horny or afraid. Without contextual cues, the “emotion” is simply a raised heart rate and a galvanic skin response: the content of the emotion comes from experience and interpretation added by context. Medicine hasn’t caught up. The law hasn’t caught up. Training for airport security staff hasn’t caught up. Hopefully it will: not to show off our shiny new scientistic learning or to completely repudiate what went before, but so that we can do more things. How did we get emotions wrong for thousands of years when our lives depended on it? First, we could have gotten them way more wrong; our intuitions aren’t so bad. Second, emotions themselves have changed along with culture: we have more and different emotions now! That’s why we need a theory of knowledge as a process that you do, not a thing or commodity that you can get or collect or uncover.

Here are the four strands I suggest lead in this direction.

  1. Deutsch’s theory of explanatory knowledge as knowledge transforming the universe; good explanations = hard to vary, scale appropriate, great reach.
  2. Dennett’s two black boxes: a theory of meaning as part of a causal explanation in many circumstances. And meaning is offsite (my word, not his), i.e. in the world, so two speakers/computers/etc. communicate by virtue of a shared world even more than a shared language.
  3. Dawkins’ extended phenotype: the beaver’s phenotype includes the dam as much as its tail, because the dam too affects the replicative success of certain genes in the beaver’s genotype. Thus the phenotype is a distributed, non-local object (it already was, even when we thought of it as the organism’s body, but a totally spatially separate inanimate object like the dam rams the point home). Most of the things we care about are objects of this kind (words, concepts, culture, for instance), although the causal history is harder to pin down for most things than it is in the wonderful case of extended phenotypes.
  4. What the hell, I need a fourth, and want something about consciousness: Graziano’s attention schema theory. Consciousness — the way things feel, “qualia”, the inner world, phenomenal experience, the whole shebang — is the brain’s way of staying aware of attention: the flood of info being focused on by advanced mammal senses and cognition. It evolved in a social setting, where we want to model others’ attention too. But how to model the current focus of your own or another’s close attention? With a simulated feeling of current attention that is multi-modal (visual, auditory, etc.) and lasts only as long as attention does: it decays with working memory, in a stream, moving from one situation to another (see the toy sketch after this list). And as the objects in our attention have grown to include symbols that represent just about anything, so too has our conscious experience been enriched far beyond our “unconscious”.
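Purely to fix the idea from strand 4, here’s a toy sketch of a schema-of-attention as a data structure (my illustrative gloss on the paragraph above, not Graziano’s actual model): a simplified, multi-modal summary of the current focus that fades unless attention refreshes it.

```python
# Toy sketch of an "attention schema" as a data structure: a simplified,
# multi-modal summary of current attention that decays like working memory.
# This is my illustration of the idea, not Graziano's actual model.

from dataclasses import dataclass, field

@dataclass
class AttentionSchema:
    focus: str = ""                                  # what attention is on now
    modalities: dict = field(default_factory=dict)   # e.g. {"visual": 0.9}
    strength: float = 0.0                            # fades unless refreshed

    def attend(self, target: str, modalities: dict) -> None:
        """Attention shifts; the schema re-describes the new focus."""
        self.focus, self.modalities, self.strength = target, modalities, 1.0

    def tick(self, decay: float = 0.5) -> None:
        """One step of working-memory decay: unattended content fades away."""
        self.strength *= decay
        if self.strength < 0.1:
            self.focus, self.modalities = "", {}

me = AttentionSchema()
me.attend("red dot", {"visual": 0.9})
for _ in range(4):
    me.tick()
print(me.focus or "(attention moved on; the 'feeling' has decayed)")
```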

Jamie.

*The two sides on the point I’m interested in can be sliced and diced many ways: in terms of schools of thought, specific debates, individuals, etc. One side is rationalism, positivism, empiricism, idealism, representationalism; correspondence theories of truth, semantic internalism; all the big names in philosophy, mathematicians, most scientists and most social scientists. This is the dominant side in terms of numbers, time, prestige and column inches in textbooks. But most professional philosophers and allied thinkers (including me) consider it the wrong side. On the other: pragmatism, naturalism, phenomenology, post-structuralism; coherence theories of truth, semantic externalism; Dewey, Peirce, Heidegger, Wittgenstein (late), Quine, de Beauvoir, Sellars, Derrida, Lewis, Davidson, Dennett, Millikan, Rorty; gender theorists, some cognitive scientists and computer scientists. The names on this side aren’t as well known, probably because philosophy courses tend to be philosophy appreciation, where you learn the old dead guys, like learning science via Archimedes and Galileo. Deutsch, for instance, in his history of bad philosophy, seems only to know the bad ones. Tellingly, the second group have been much more useful in understanding how primates, infants and computers think. And the analytic/continental divide doesn’t apply here, as both sides have come around to the position I’m talking about above. Even my old buddy Peterson is on this side! We have the same reading list, after all; he just takes a crazy leap when it comes to myths — a leap I’ve now pinpointed in Maps of Meaning — that doesn’t even line up with the other stuff he advocates. (And I admit I’m not sure where the following fit in, as I either haven’t read them or haven’t understood what the fuck they’re talking about: Spinoza, Hegel, Nietzsche, Popper, Parfit, Brandom.)

**We can just about manage it. A great example is Tetlock’s superforecasters. It certainly seemed like people couldn’t predict the future for shit (see Taleb). Even when people seem to do a good job, it’s indistinguishable from survivorship bias (someone has to win the heads/tails game). Taleb notes that post-fact rationalisation, the narrative fallacy, reframes it as deliberate strategy. Well, so much for forecasters. But then Tetlock held competitions where people could put their money where their mouths were, in controlled tests that eliminate the survivorship problem. Amazingly, there really are now some people who are pretty good at forecasting in certain domains previously thought unpredictable, and they share common traits in their methodology. Though not all decision theorists or statisticians, they were all educated and knew something about probability, had adopted a kind of Bayesian habit of updating priors, and used heuristics and practices to countermand their innate overconfidence and fallibility. Brilliant. Taleb used to coauthor with Tetlock, but since the publication of Superforecasting he no longer does, and occasionally flames him on Twitter. Taleb himself is nothing if not predictable.
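For a flavour of that Bayesian habit, here’s a generic worked example (mine; the numbers are invented, not Tetlock’s data): start from a base rate, and let each piece of evidence move the probability by exactly the amount its likelihood ratio warrants.

```python
# Generic Bayesian update, as a flavour of the forecasters' habit described
# above (my illustration; the scenario and numbers are made up).

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Forecast: "Candidate X wins." Start from a base rate, not a hunch.
belief = 0.30
# A new poll that's twice as likely if X is heading for a win as if not:
belief = update(belief, p_evidence_if_true=0.6, p_evidence_if_false=0.3)
print(round(belief, 3))  # ~0.462: belief moves, but not all the way to certainty
```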

