Knowledge without aboutness

Nearest I can figure, we live in a world that unfolds according to locality.

Take the strongest results in physics. The only nonlocal effect is quantum entanglement. Even then, it's debatable: it doesn't seem to allow a loophole to communicate faster than the speed of light, and its effects wash out at any scale of interest to us — decoherence ensures that the regimes of atoms, molecules, chemicals, solids, organisms, brains, people, societies, and stars are ones in which locality obtains. The arrow of time, cause & effect, the speed of light, the branching of the multiverse: all propagate through the various fields that comprise spacetime. No jumps or shortcuts. No spooky action at a distance.

So what? It's sometimes said that progress is made by taking our best ideas and following them as far as they go. If this locality-based idea of the world is our best, I think it has massive repercussions for the humanities. Mainly, they're of the negative kind — “If this is how the world is, then X can't be real,” where X is some intuitive or long-held belief about how people, thoughts, language, or culture might work.

This intellectual odyssey has led me to reconstruct the four master concepts I’ve been interested in as a student of literature — meaning, narrative, metaphor, irony — in new terms compatible with a world of local actions and effects, rather than a world of action at a distance.

No mean feat, considering most everything we cherish in the humanities is based on hidden assumptions about non-local effects. This includes very tightly grasped notions like intentionality, content, semantics, reference, teleology, purpose, morality, mathematics, logic, truth, beliefs, free will, qualia (more on these below). The most important one for most of us, and the one least frequently remarked upon, is aboutness: how a thought, word, or image can refer to something beyond itself. No spooky action at a distance means no aboutness. And that seems to mean no meaning. It’s a trip.

I first tugged on this thread trying to write undergraduate English essays. How do you argue the author meant this instead of that? You dive into the mid-twentieth century debates about intention. That leads you to linguistics and the philosophy of language. Then you realise these are based on dubious psychological ideas. You get into the philosophy of mind and what mental representations might be. Then you realise that’s dependent on human exceptionalism, so you have to figure out how any biological system can represent or be about some other part of the world: which leads into a morass of fascinating but fruitless stuff on biological function, teleology, agency, etc. Along the way you have to take repeat, laborious, wasteful detours into free will, coherence theories of truth, the philosophy of science, information theory, the foundations of mathematics, process philosophy, speculative materialism, active inference, feminist epistemology, and many other things developed by people smarter and better than me in every way.

Then you end up thinking that all of them are engaged in some wishful thinking. You realise that by the 1970s the fix was already in: results from physics, molecular biology, neuroanatomy (as it was then called), history, linguistics, and computer science had scuppered every worldview conceived in ignorance of these ideas — and even the very notion of a worldview.

In short, a desire to know what Franz Kafka or Elizabeth Bishop was on about led me to try to comprehend the fundamental nature of the universe, in order to see how agglomerations of chemicals like ourselves could ever think, and write poems like Elizabeth Bishop's about being Elizabeth Bishop.

At length, I've figured out ways to account for my four master tropes in terms totally shorn of aboutness, teleology, etc. Metaphor can be based in an ability to make analogies — simply to detect similarities between highly disparate stimuli (along the lines of Doug Hofstadter's work on analogy). Narrative is best understood as being like a constellation: a perceived set of connections between a subset of stars (events in a story), with a gloss of agency applied by our inveterate theory of mind.

Irony and linguistic meaning are based in common ground, conventions arising through signalling games, and the use of inference: a non-aboutness way of knowing the world. But what about the bigger picture, capital-M Meaning? That's a long story, incorporating the basic thrum of the universe (locality), the substrate of life (the chemistry of self-replication), the peculiar mode of awareness we have (consciousness), and the politics of free time. It's my under-construction book: How to Free Time.

This post is preliminary, a sounding out of the most distasteful idea of the book: How can knowledge happen — how can things be meaningful — without representation, content, reference, information, semantics, intentionality, aboutness, signification, or symbolism? (I would love feedback on this because it's the basis of a chapter in a book I've been working on for years.) The problem is a doozy: How can one part of the world mean another part of it? That is, how can information, data, writing, maps, code, animal calls, artworks, genes, gestures, equations, thoughts, signals, or anything else somehow be about something else? There's no magical connection between the word (or anything else in that list) and the thing it represents. The unshakable intuition that there is some kind of connection seems to me to be an artefact of how our brains, language, and culture evolved. This is a big topic that cuts across dozens of disciplines. But analytic philosophy is probably the place that has tackled it most head-on, although mainly in ways that I think are totally unconvincing. In fact, I'm convinced this is the single biggest limitation of most philosophical approaches, worldviews, philosophies of science, ideologies, etc. Almost every thinker ever has been under the delusion that we live in a world of aboutness. (And I know that every thinker ever has felt every other thinker ever has been deluded. Gotta rehearse the classics.) I've tried not to use jargon from any particular discipline. But I do want philosophers, in particular, to respond, because I still feel like I'm missing some argument from somewhere in that literature, since my conclusion (that there simply is no aboutness, representational content, semantic meaning, or intentionality) seems to be a very rare position, considering how obvious and useful it seems to me. Also, there are two related posts, one on doing science without the notion of observers and the other about consciousness and shaking the ultimate illusion: perspective. They're all interrelated and have some overlap.

Meaning is conventional

It's commonplace to say that meaning in human language is conventional. Most people who have done a BA since the 1970s have heard that there's only an arbitrary relation between the signifier (the sound or shape of a word) and the signified (what it refers to). It's not strictly true, because there are onomatopoeia and bouba/kiki phenomena. But it's nonetheless right that there isn't anything holding a word's meaning in place. A dictionary definition isn't set in stone. It's updated according to usage. Nor does it have any authority. Lexicographers merely track how a word is used in the wild (of course, there's some feedback: people look up words in dictionaries to inform their usage).

The words enormity, disinterested, nonplussed simply mean different things than what they did when I learned their definitions twenty years ago. It’s not that the people who use them in new ways need to be scolded into reading the dictionary (which has probably caught up to their usage now anyway). The meaning of a word is only how people use it. This means it’s a collective act of convention and cooperation; and not everybody cooperates so meanings are neither unanimous nor immutable.

I begin with this example, because most people don’t have much emotional connection or ideological loyalty to the meaning of some not quite everyday words. There are even less controversial examples, like words that changed meaning a long time ago, that people might not even be aware of. Awful used to mean awesome, awe-filled, or awe-inspiring: the opposite of its current meaning. You can see the trace of the word awe in it. But if you’re unaware of that etymology, there’s no reason to think the words awe and awful are related.

I press this point because we can see two things.

  1. If you don't know about some alternative meaning, it doesn't exist for you. You can't detect the older meaning of awful if you read it somewhere, even if the author intended that older meaning. The author's intention is irrelevant here, provided you can't infer it. And you can infer it only if your personal history includes knowledge of awful's etymology. (Depending on the passage it's embedded in, there might be contextual cues. But imagine a simple sentence like, “This painting is awful.” You would need to know which century it was written in to get the meaning.)
  2. If you don't know this history, there's no reason to suppose there's any intrinsic connection between the word awful and the older meaning. If there were a nuclear war and no dictionaries survived the fallout, and the only English-language users that survived were people without etymological knowledge of awful, then that older meaning would not exist anywhere in the world. There are millions of “older meanings” of words, from dead tongues and live ones, that are unknown to any current human and unrecorded in any dictionary. To me, those meanings have no existence in this universe. And if that rings true for this example, then one must admit that meanings never exist inside the word itself: they exist only as conventions of use (stimuli and response), distributed among the nervous systems of living people using the word.

Conventions are strategic

And what is a convention? I think a convention is best seen as a pattern repeated over time.

The best framing of how meaning in language (or anywhere else) works is in Brian Skyrms' compact book Signals (2010), largely unknown outside of certain philosophical circles (linguists seem to have ignored it). Skyrms models the way a word (or any other kind of signal) gets a shared meaning attributed to it.

The process is pretty much the way a price is found in a market of sellers and buyers, or the way animals in a population arrive at an evolutionarily stable strategy: a distributed, somewhat enigmatic — but wholly natural — process that can be modelled by game theory. After a few rounds of using a signal, by chance, a convention for treating it one way simply takes hold, and as soon as it becomes a quasi-convention, it is worth any other user's time to follow the established lead. If most other users in a language community say the word nonplussed to mean unimpressed or unbothered, then it's to my disadvantage if I use it to mean baffled or surprised. Here my disadvantage is minor. But in the trials of life, using a signal the same way others do can offer a real advantage. If you say, “Watch out for that cliff!” and I respond in Saussurean mode with, “Oh yeah, why? What's a ‘cliff’?” then I'll be snarkily air-quoting into oblivion. Adopters of a signal's conventional meaning tend to live on and beget children who are also adept at cottoning onto prevailing meanings.
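For the curious, here is roughly what such a model looks like in code: a minimal sketch of a two-state Lewis signaling game with urn-style reinforcement. The state space, payoffs, and learning rule are my illustrative choices, not Skyrms' own parameters.

```python
import random

# Minimal Lewis signaling game with urn-style reinforcement.
# Illustrative assumptions throughout: two states, two signals,
# two acts, and success-only reinforcement.

STATES = SIGNALS = ACTS = [0, 1]

# "Urns": accumulated reinforcement for each option in each situation.
sender = {state: {sig: 1.0 for sig in SIGNALS} for state in STATES}
receiver = {sig: {act: 1.0 for act in ACTS} for sig in SIGNALS}

def choose(urn):
    """Pick an option with probability proportional to its weight."""
    options = list(urn)
    return random.choices(options, weights=[urn[o] for o in options])[0]

for _ in range(20_000):
    state = random.choice(STATES)    # nature picks a state
    signal = choose(sender[state])   # sender emits a signal
    act = choose(receiver[signal])   # receiver responds to the signal
    if act == state:                 # success reinforces both choices
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0

print(sender)
print(receiver)
```

Run it twice and you may get opposite “dictionaries”: which signal ends up paired with which state is pure chance, but once a pairing takes hold it is self-reinforcing. That is the conventionality of meaning in miniature.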

This is the most credible model for how what we call meaning arises in any kind of culture or communication. That includes speech, gestures, emojis, etiquette, ritual, art, fashion, etc. (Probably the entire realm of normativity, if you're feeling expansive.) Interestingly, even Skyrms is still somewhat wedded to aboutness, in that he thinks signals (although purely conventional) somehow “carry” information and that this information can be about the world.

Still, Skyrms' convention-based account is a great way to explain why words and symbols have such traction, leading us to feel they carry meaning inside them and refer to other parts of the world. He could be way wrong. But it's a proof of concept for a non-aboutness way for symbols to be arbitrary yet powerful.

A signal can’t contain content and information can’t be about anything

But my further advance on Skyrms' already outré ideas puts me in a vanishing minority of people who think that (1) meaning is only ever conventional, (2) it is never about anything (lacking what philosophers call intentionality; the SEP article on intentionality is a great intro, especially §9 — I hold that all attempts at naturalising it are doomed, and that intentionality should be eliminated), and (3) signals don't contain or carry any information (they lack content, in philosophy-speak).

To find someone who swallows all the bitter medicine, we have to turn to Alex Rosenberg. (For the philosophically inclined, Rosenberg's key article on the topic is here; he also outlines his view in the popular book The Atheist's Guide to Reality (2011).)

In a nutshell, he says there cannot be any aboutness in the world because it's nowhere to be found in our best explanations of the world. (Philosophy mavens: yes, he has a whole thing on how you can use knowledge “about” the world to show that nothing can be about the world; you just need a more modern epistemology, which he gets with an update on Quine's ideas.) There is no known relation or process in physics, chemistry, biology, etc. that could account for how one part of the universe could somehow be about another. (I go even further and say that that kind of relation would probably violate conservation laws, locality, the second law of thermodynamics, and so on.)

This has massive implications for what we think about (whoops) almost everything. Arguments over what a book or film means, for example, are futile unless they’re arguments over what conventions are dominant or what the films/books can be used for.

Rosenberg is, to me, a Cassandra. His warnings should be heeded. (I've asked him who his allies are. He notes Bill Ramsey, Quine, and the Churchlands to some extent, and feels Dennett might be a closet eliminativist with respect to content (Rosenberg, personal communication). I know what he means; change Dennett's terminology a little and he is an eliminativist, except for PR reasons.)

No aboutness means no truth, proof, or logic

In addition to the three claims above, I hold two more that go beyond what even Rosenberg will sign on for.

I think that (4) the no-aboutness fact has devastating implications for epistemological notions such as truth and knowledge. And (5) I think that the prohibition on information being about anything has big consequences for fundamental theories in quantum physics, computation, information theory, thermodynamics, mathematics, complexity, communicating with aliens, etc. (To be clear, if I had any formal training in physics or computer science I would dedicate my life to properly figuring this stuff out. How I wish I could put my money where my mouth is and not just be a guy on the internet with a theory that “physicists should take seriously” — surely the lamest fate for anyone blessed with literacy and web access. Also, to allay fears that I'm a crank: I don't think the content-free version of information would transform our lives by offering faster-than-light travel, proving P = NP, or finding new particles. Merely that it might help eliminate some dead-end ways of thinking about, say, the black hole information paradox, and perhaps expose some questions as poorly posed. It does, however, make predictions, especially in the realm of communication.)

Logic and mathematics have internal consistency, sometimes even completeness. But the “unreasonable effectiveness of mathematics” can only be explained by its usefulness as a set of inferences and analogies that sometimes work really well for building new things and making further inferences.

Stephen Wolfram notes how vast the space of possible mathematicses is; most have no relevance to our universe; the history of mathematics is the pursuit of the useful bits. A logical or mathematical statement (or one in natural language, for that matter) can't have some abstract long-range connection to some bit of the world. And without a community of mathematicians, no one would ever think they do, because the statements wouldn't exist. (This is the bit analytic philosophy people will probably hate the most. Even hardcore naturalists, materialists, etc. feel wobbly talking about mathematics, which has a beguiling, almost supernatural feel to it. Philosophy of mathematics used to be my thing. For mavens: I actually think there are versions of nominalism, fictionalism, and even structuralism that, with minor tweaks, are totally compatible with eliminating intentionality, content, and representation. True, you have to jettison truth as correspondence or as propositional, but proof can be retained on purely physical, even computational terms. The seeming alignment of mathematical statements with timeless regularities in the world is best thought of counterfactually. If we lived in an irregular world, would mathematics be unreasonably effective? Nope. Neither would it ever have evolved.)

Will 2+2 always and everywhere = 4? Obviously not on a purely symbolic level, because those numerals are historically contingent. In what sense are they timeless? There could be some feature of the world that is timeless. But the symbols are the map, not the territory; and, more importantly, the map doesn't “map onto” or represent the territory: it's just a subset of the territory that we (also a subset of the territory) use to navigate it, built up from previous navigational attempts.

Importantly, I don't think I have some theory of everything that people way smarter and better informed than me have missed. All I think is that certain avenues or framings in physics, mathematics, computation, etc. would be seen as less fruitful, or as dead ends, if people took seriously the idea that information has no aboutness.

My sneaking suspicion is that a lot of seeming paradoxes would be cleared away. Not answered. I have no discoveries, no special insights. I just think certain framings and investigations could be more streamlined, and ultimately that means faster development of technology and know-how.

So how does knowledge happen?

Aboutness is ridiculous when applied to knowledge because it leads to an endless deferment. In the aboutness approach, you aim for an increasingly accurate map, representation, theory, account, description or whatever of reality. How do you know it’s more accurate? One day the big verificationist in the sky will confirm it for you…

In reality, we test our knowledge by trying to make stuff happen. If you've got a good theory or a bad theory, it won't live or die based on whether it's a better representation of reality: as though there were something in it that had an intrinsic connection to some feature of the world it refers to.

(People — especially scientists, especially physicists and other “hard” scientists — hate this. Understandable. It says that whatever laws, theorems, and proofs physics has on the books are just that: on the books. Without being used for something, they're not knowledge; they're just entries in some ledger. Imagine a computer spits out permutations of mathematical symbols and alights on Maxwell's equations, or the Schrödinger equation, or whatever. Knowledge? It would, according to most people's ideas, be “about” the world and would “contain” knowledge of the universe. I think that's mad. It comes down to the regrettable fact that most modern knowledge is descriptive: a linguistic or mathematical description of as broad a class of phenomena as possible. In this sense, it's a bit like a work of history: good, solid, non-narrative history that simply attempts to record or account for what happened. What is its value going forward? It will have predictive value in the future in direct proportion to how uniform and regular the phenomena it describes are. For a theorem describing the behaviour of hydrogen atoms, it should do pretty damn well and never give out in our lifetimes. For one describing living systems (with their combinatorial explosion of genes, proteins, organic molecules, nervous systems, etc.), it won't even capture 100% of the phenomena currently known to biologists. For something on human culture (like the discipline of history itself), it will have approximately zero predictive value. Not because it isn't “true” but because human culture is nonuniform, it keeps changing, and it will be different in different ways in the future, so a record or description — even a complete one — of past events is not so much useless as irrelevant. Even predictive value is not the currency of knowledge. The best knowledge is embodied in technologies and practices that allow us to make new things happen. Much of it doesn't even have theorems, formal statements, conjectures, formulae, canonical versions, etc. Those things are only ever aides-mémoire: part of a whole system involved in doing the knowledge. If somebody lost the documents on which the patents for CRISPR were written, it wouldn't affect the use of CRISPR as a technology used by molecular biologists or indeed by bacteria (from which we borrowed it). The reason knowledge doesn't lose power if you take away the written version of it is that the written version was not about anything. Neither was the version in people's heads. An actual process of physically doing stuff is “where” the knowledge is. That's because we live in a physical universe of stuff happening, not a storyworld that is described or narrated by authors, evil demons, simulators, or gods.)

All of that is drenched in aboutness rhetoric. To short-circuit it I use non-intentional analogies. Useful knowledge is like a lock-and-key process. You might have an idea that allows you to try something new or build something new, and thereby make something new happen. You will have done something that unlocks an unprecedented event.

A chemical catalyst is a good example and another good analogy. A catalyst makes things happen that wouldn’t otherwise, or at least wouldn’t happen as much or as rapidly. But a catalyst doesn’t work in a vacuum: it needs to be embedded in the right environment.

Ditto for knowledge. Even the most vaunted examples of scientific theory needed surrounding conditions to work. You couldn't send E = mc^2 to an alien planet or a remote tribe and expect it to do anything without the surrounding infrastructure. (Is this a relational universe like Carlo Rovelli or Karen Barad advocate? Yeah, maybe. Is it some kind of process philosophy? I guess so.)

If we really get into it, into how knowledge grows, we end up in a fairly evolution-centred frame. Knowledge ends up being cashed out as whatever processes keep the knower maintained, get it replicated, support its survival into the future. The history of life is, according to some molecular biologists and biochemists, an electron looking for a place to rest; or it's the cycling through of the global electron marketplace; or it's the stable things gradually replacing the unstable; or, most tautologically of all, whatever is good at survival survives. What we call knowledge is a way of doing things that means you don't dissipate. (If we want a slogan for the overall ontology of this crazy world, I say we drop Wheeler's popular “It from bit” — loaded as it is with the intentionality and platonic realism of information — and adopt: “Survival of the it-est.” For an epistemological slogan, we can ditch “might is right” and go with “do is true”.)

(Again, if I were a physicist or chemist, I'd try to work this all out in terms of potential energies and energy landscapes, and try to show that aboutness is disqualified on conservation-of-energy grounds alone.)

In my scheme, there clearly can’t be anything like truth (at least nothing like a correspondence theory of truth). One thing can’t “map onto” reality in a better or worse way. There is no connection or relation between one thing and another like truth value or logical deduction or even mathematical truths.

So how can we know anything? Well, there are non-aboutness (non-referential, non-representational, non-intentional) ways of knowing. Casting around contemporary cognitive science, evolutionary psychology, neuroscience, and AI research I find there are some good candidates for how knowledge can happen in a world without any spooky action at a distance, without any aboutness.

Taking vision as the starting point again: clearly a light-sensitive cell is physically bombarded by a photon that comes from “over there”, and purely through trial and error (natural selection) it falls into a pattern of responding (in a purely cause-and-effect manner) to photons of a certain intensity one way, and of another intensity another way. Obviously visual systems get more sophisticated and can tell how a bunch of photons of a particular wavelength typically associates with certain things being a certain distance or angle away from them.
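Here is a toy version of that trial-and-error story (my own illustrative assumptions throughout; real photoreceptor evolution is vastly messier): a population of bare stimulus-response rules whose firing thresholds get tuned by selection alone, with nothing anywhere “representing” light.

```python
import random

# Toy sketch: each "cell" is just a firing threshold. Cells whose
# responses happen to track a survival-relevant condition leave more
# offspring. No representation anywhere; thresholds just get selected.

def useful_response(intensity):
    # The environment (an assumption): responding only pays off in
    # bright light, say because brightness correlates with food.
    return intensity > 0.7

population = [random.random() for _ in range(100)]  # random thresholds

for generation in range(200):
    scored = []
    for threshold in population:
        score = 0
        for _ in range(50):
            light = random.random()
            fired = light > threshold
            if fired == useful_response(light):
                score += 1
        scored.append((score, threshold))
    # top half survives and reproduces with mutation; the rest dissipate
    survivors = [t for _, t in sorted(scored, reverse=True)[:50]]
    population = survivors + [
        min(1.0, max(0.0, t + random.gauss(0, 0.05))) for t in survivors
    ]

print(sum(population) / len(population))  # drifts towards ~0.7
```

No cell in this sketch is “about” light; each is a disposition to respond, and the dispositions that keep their bearers in the game accumulate.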

Codes and “information”

So you want to send a coded message to someone. You need them to understand you clearly (difficult enough) and yet for others to not understand the message at all.

Say you’re in a foreign land where only you and your interlocutor speak your language: that’s easy, your language is effectively a code. Otherwise, you have to simulate having a shared language with your interlocutor, one that is not shared by your enemies.

You need a code and a cipher. You take your written message, and put it through a series of steps that encrypt it: a cipher. Then your friend gets the message and has to decode it. Before it’s decoded, it’s meaningless to them. As in, they don’t know what it means, they can’t use it, and they don’t yet know that it means anything (before they decode it, it could be a string of random symbols or gibberish sent by you as a prank). If they have the cipher, they can decode the message.

Here's the rub. How much information is in the message before it's decoded versus after it's decoded? Information theory says the “amount” of information doesn't change. (It also says there's the same amount of information in an equivalently entropic string that has no cipher.) Famously, information theory (based on the Shannon–Weaver formalism) doesn't deal with semantic meaning, only quantitative measures of information: how much can be communicated via a given channel, how redundancy improves accuracy, etc. It also means that the “amount” of information in a purely random string is maximally high. I don't think this does anything for us if we're trying to understand meaning. Not because it's quantitative, but because it fails to be relational, processual, interactive. (Note another application is to networks and complexity studies. People from all sorts of domains love modelling the network of connections between people, animals, airports, computers, historical actors, etc. But these connections (although they show as unambiguous lines on a graph) are imaginary. They stand in for some correlated behaviour, some physical events that were linked (via actual physical contiguity) to one another. But because network graphs and similar “representations” show connections that leap across space and time, this seems to gull people into thinking there actually are long-range, spooky connections.)
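Back to the cipher: a quick sketch of the invariance claim above (the message and shift-cipher are made-up examples). A substitution cipher merely permutes symbols, so the symbol distribution — and with it the Shannon entropy — of the ciphertext is identical to the plaintext's.

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Empirical per-symbol Shannon entropy, in bits."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plaintext = "attack at dawn"
alphabet = "abcdefghijklmnopqrstuvwxyz "
shift = dict(zip(alphabet, alphabet[3:] + alphabet[:3]))  # shift-by-3
ciphertext = "".join(shift[ch] for ch in plaintext)

print(ciphertext)                   # enciphered gibberish
print(shannon_entropy(plaintext))   # identical values: the "amount"
print(shannon_entropy(ciphertext))  # of information hasn't changed
```

Whether the string is meaningful to you depends entirely on whether you hold the complementary cipher; the quantity Shannon measures doesn't budge.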

In other words, quantifying the information “in” a signal is only half (or less than half) the story. The meaningfulness of that information can only be determined by the complementary information held by the receiver. With the cipher, the message is meaningful; without the cipher, it isn't. The message hasn't changed; what's changed is how it can be used.

(Consider the fact that the information sampled from a single die roll is higher than from a single coin flip. Under this conception of entropy, there is a greater reduction in uncertainty with a die, because it is a definite outcome drawn from a larger space of possibilities. This is madness. I mean, it makes perfect mathematical sense in terms of the enumeration of phase spaces, or combinations, permutations, etc. But it has little relevance to the meaningfulness of signals in communication and, more importantly, it masks the whole problem with the current notion of information, which is incorrigibly intentional or aboutness-laden. Namely, the quantity of information “in” the signal (a heads, rolling a five) is given only if one has complementary info: the size of the state space. Also, even in this idealised example, one needs some kind of detector or receiver that is constrained in special ways to discriminate the type of signal, which itself has to be part of a carefully constrained system, such as a die that can only land on six faces. In other words, the purely informational approach ignores the thermodynamic reality of, well, reality. Hence, folks like Landauer are on the right track by tethering platonic ideas of information entropy to thermodynamic ones.)

(This is clearly the case with codes/ciphers, but it holds for all information. E.g. an astronomer analysing a bunch of starlight from a region of the sky: what they get out of that info is dependent on their pre-existing knowledge, theories, history, other data already collected, etc. And even then, the info from the starlight can be correlated in arbitrary and infinite ways, yielding more meaning. Suppose a military draft comes into effect and, instead of drawing balls from a barrel, the lottery is based on astronomical data gleaned in the aforementioned example. Now the starlight “means” who gets conscripted, and someone with access to it who knows the “cipher” can predict who will fight in battle: a queer kind of astrology. But of course, that knowledge isn't “in” the starlight; it never was. It depended solely on how the starlight is correlated/entangled (I use these terms in a quasi-technical sense) with other information in brains, cultures, technologies. This is another lock/key example. For people jumping ahead: yes, that means some signals contain more potential information than others. But that potentiality is impossible to second-guess or know in advance, for the same reason David Deutsch says you can't predict what the next advances in knowledge will be; the same reason you cannot devise algorithms that know in advance whether certain mathematical questions have answers; and the same reason you don't know Wolfram's Rule 110 will lead to complexity without running the thing.)
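For the record, the die/coin arithmetic in the aside above is just the uniform case of the Shannon formula (for a uniform source with n outcomes, every p_i = 1/n):

```latex
H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i = \log_2 n
\qquad
H_{\text{coin}} = \log_2 2 = 1\ \text{bit},
\quad
H_{\text{die}} = \log_2 6 \approx 2.585\ \text{bits}
```

Note where the complementary info sits even here: the n, the size of the state space, is supplied by the receiver's side of the ledger, not by the flip or the roll itself.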

Importantly, some small packets of information (quantitatively small according to Shannon–Weaver, or physically small by brute atom count) appear to have outsize effects or outsize meaningfulness. This is what I call the seed metaphor. It can be misleading. A tiny seed “contains” all the information needed to make a giant tree. (Examples of scientists beguiled by this idea include Deutsch's knowledge probe, Dawkins' replicator bomb, the idea that E = mc^2 represents huge information compression, the idea that DNA “contains” information “about” the organism that develops, etc.)

I want to retake this metaphor, because it actually supports my view of things. It's actually another lock/key example. A seed cannot grow in the wrong medium. It needs soil, water, nutrients, gravity, a temperature range, and so on to thrive. The environment provides constant feedback and complementary info, in the form of mycelium, weather, chemicals in the soil, etc. A seed dropped in the vacuum of interstellar space is useless, meaningless. (Biologists are the most beguiled, and Dan Dennett has been reining them in for years; see especially Darwin's Dangerous Idea p. 195ff.)

I use the lock/key metaphor instead to emphasise the active nature of this relation and to make the complementarity clearer. A key can be tiny, simple, and low-info and yet unlock a gigantic, complex, high-info apparatus. To a naive viewer, it might look like the key contained an outsize amount of info, based on the effects it had. But, really, there was a lot of dormant energy/information/complexity in the lock. True, without the key it was useless. There are certainly asymmetries out there, ready to be exploited. This is what I think knowledge is. A new technique, a new technology, a new life form is really an exploited opportunity to unlock the power of relational information, to climb a fitness landscape, to minimise free energy, to solve an optimisation problem, to find a new stable pattern.

Back to the coded message. A message can instigate a lot, and yet be small, provided it finds the right “fertile ground”: complementary info (the receiver knows what to do with the message). Even without codes, every act of human communication is a lock/key or soil/seed relation. A word, gesture, phrase, action is meaningful to the extent that there is a rich enough context for it to act in.

There are other reasons I think the key is the best analogy. Imagine an ICBM silo. Turning a couple of keys is the final step in launching a thermonuclear weapon at an adversary, which in turn will trigger a nuclear war. The physical effects or impact could not be larger, at least at a terrestrial scale, and certainly on a biological and cultural one. Yet it's daft to say that those particular keys were especially loaded with meaning or power. They're keys cut like those for a car's ignition (here's a nifty, terrifying video from a Minuteman museum on the actual launch protocols). There's nothing in the key that makes it so potent. Sure, its teeth are the right pattern to turn the lock. But that pattern isn't intrinsically about nuclear war or anything else. What confers such power, such causal influence, on the keys is the network of highly complex technology and bureaucracy surrounding them. The latent energy of the nuclear warheads, the engineering of the vast silo, the intricate procedures and training — all of it means that a legion of dominoes is already set up, ready to fall in a complicated pattern, with the keys being the somewhat arbitrarily chosen initiating domino.

Items don’t “contain” information

No aboutness = no content. Certainly no mental content, according to the way philosophers use that term (thoughts aren’t about certain things). But also in the broader sense that certain media have content. The idea that they do is expressed, I think, in the container metaphor of meaning. (And the related conduit metaphor.)

When something is about something else, we imagine that it has taken on content, gained or subsumed something “about” the world. Really, it has physically changed — or, more commonly, the interpreter has physically changed: the signal remains the same but now has a different meaning.

Words are still the best example. One can simply invent a new noun: it’s born meaningless, and through use it becomes associated with some thing in the world. In no way did the letters or phonemes take on or contain anything from the world in that process (nor did the thing in the world take on anything from the word). The word itself didn’t accumulate or collect any content.

Genes can’t contain information about environments. A sequence of bases is just a sequence of bases: the ones coding for genes don’t have magic dust sprinkled on them. They’re simply the combinations which unlock strings of amino acids, whose properties make them fold into stable proteins, which perform chemical tasks that lead to the survival of the cells in which they were constructed, and hence the replication of the genome that produced them. Every stage of that is trial and error. If you could see, as if in some AR scaffold, the countless ghostly twins of all the genes that didn’t make it, the strings of amino acids that fell apart, the proteins that were ineffectual, the cells that died — you wouldn’t ask what a gene is “for” and wouldn’t ask what information “about” the world is “contained” in a gene. The genes around today are the ones that work in current environments, like keys fit locks; even more detachedly, those genes are matched to their environments only by forces as passive and impersonal and contiguous as those which mean planets tend towards spheres.

But it takes a lot more imagination, more calories, to continually think this. It's always easier to adopt the mental shortcut of aboutness, intentionality, teleology, animacy, causation-at-a-distance. (See the classic article by Schlottmann and Surian on how we perceive causation-on-contact events as causation-at-a-distance and how we parse the two types.)

No aboutness in objects or media — but surely there is in human thought?

People are still arguing over Kant. He said there’s the thing “out there” (presumably) and then there’s phenomenal knowledge “of” the thing in here; but you can never have “direct access” (in more modern language) to that thing.

Some people say you can have direct access (via mathematics, or logic, or even in some very well performed experiments); others say there’s no way to access the world so it’s only the phenomenal experience part that we can be sure of. Kant himself has some sinuous arguments for how we can get around it (synthetic a priori, categories of understanding). These can be shoehorned into a modern conception of how cognition works and sometimes people writing about psychology or the brain say that Kant has been vindicated.

But here's the real problem, as I see it. How would this “access” work, even in principle? One photon bounces off some object and then terminates in my retina. First off, I can't “access” the object directly, because it's only ever traces of photons and indirect effects that my brain uses to infer objects. So it's unclear, on the world side of things, how I access the object. Second, and more importantly, on the brain side of things, how would an object (let's use the photon for now, though even it is an indirect effect of the object) become directly accessed? Does it need to make contact with a certain part of my brain? Even parts of my brain aren't in “direct” contact with other parts. There is no Cartesian theatre where it all comes together, as Dennett constantly reminds us.

To put it plainly, do I even have “direct access” to my phenomenological perceptions, my thoughts? Who's the “I” in that sentence? The answer, of course, for anyone who's been listening to the brain science of the last 70 years (long enough, I feel, for there to be no more excuses for interested parties), is that there is no central, integral, discrete I, no Cartesian or Kantian subject. Such things are fictions (increasingly well understood) of introspection. (And yes, the real issue is a cognitive neuroscience explanation of introspection. Graziano's is a real candidate; it might not be the best one, but it's the right shape of explanation; we want a solution to the so-called “hard problem”.)

Back to the photons. Vision, like every other process relevant to thought, is essentially macroscopic. That is, it doesn't depend too much on quantum indeterminism, on superposition, or on entanglement. Everything we can see (and the fact of vision itself) is constrained by locality, distance, and space. Until physicists show that the airy locality-confounding connections of entanglement or wormholes (or some as yet undiscovered process) actually affect macroscopic reality, I'll assume that the way we see, think, and understand is based in locality, and hence any account of meaning must be too. I make this clear because it's the biggest barrier to understanding how information (of any kind) cannot have aboutness; and I put it in brute physical terms so that allies among hardheaded scientists might come on board.

The way that information — even in physics and computer science — is talked about is frequently in terms of causation-at-a-distance: that there is some way that information over here can be about some part of the universe over there. It can’t. Someone can infer something of over there if some matter or energy from over there makes contact with something over here that is complicated enough to make inferences (e.g. an eye, a nose, a motion-sensor, an insect). This is causation-on-contact.

Causation-at-a-distance, on the other hand, is teleological. When we see an animal going towards food, we think it is the food over there — the goal or end or purpose — that somehow pulls the animal toward it. We evolved this teleological sense because it is a good heuristic, a great shorthand. (Note for philosophers: teleology, when we talk of cosmic teleology or any notion that the future determines the present, is nothing more than a positing of causation-at-a-distance through time rather than space.) But it's iffy. We can't see all the intermediate, microscopic physical events that actually make the animal's movement a lot more like falling dominoes than we realise. In reality, some photons bounced off the food and went across the space and hit the animal's retina, or some molecules broke off from it and wafted through the intervening space to land inside the olfactory receptors of the animal. The animal's brain then clunked through a chain of causes — no gaps — and started actuating the muscles, etc.

In a world based on causation-on-contact, our brains (also based on causation-on-contact, being made of chemicals like everything else we can see) mocked up a shorthand that uses causation-at-a-distance to summarise super-complicated events taking place in our environment. It's wrong. Not because it doesn't work but because it glosses over all the details.

Wait a second. Aren’t I saying that there is no correspondence theory of truth? So what does it mean to be wrong if something simply works? Well, our heuristic idea of causation-at-a-distance is not a problem for anticipating the motion of a bird or a toddler. But for understanding things that are more exotic — including information, physics, brain science — or for understanding itself as a process (conscious introspection) it sucks. Doesn’t work. There’s no reason why it would for those domains: they weren’t applicable to terrestrial mammalian evolution, wherein the causation-at-a-distance heuristic evolved.

Alas, it means armchair philosophy doesn’t work for figuring out the how of consciousness, only the what, and even then only some of the what (the neuroscience of vision shows how mistaken we are even about the way things seem to us, let alone the way things are). Armchair philosophy is kneecapped by the illusion of aboutness applied to thought itself.

Non-aboutness ways of knowing

But how does all this very mechanical stuff bubble up to serious human cognition?

Well, I think inference is essentially the same process the eye is going through, but at a very sophisticated level of processing. A bunch of stimuli come at various parts of the brain (through an unbroken, domino-like chain of contiguous physical events — natura non facit saltum!), many from other parts of the brain, and — because of that evolutionary history of natural selection, and early development triggering selection events within the brain (learning through trial and error to distinguish shadow from object, etc.) — our brains learn to infer quite a lot from the signals hitting our cells (through smell, vision, touch, memory, etc.). (The active inference crowd will be getting excited now — I think they've got a great overall framework for how cells and brains establish a boundary and use it to mediate various essays “about” the world around them to aid action. I think this framework can be used without aboutness; this article is fascinating, and I think what it's advocating can be construed in non-aboutness terms (although it relies heavily on mathematical structures or formalisms being “realised” in cognitive systems, i.e. brains or cells); but then this article worries me.)

Inference seems like it’s inferring something “about” the outside world given a sample of data. That’s what it looks like to incorrigibly aboutness-loving brains like ours. We’re mistaken. Inferences are as mechanical as the photo-sensitive cell, or a thermostat (classic example).
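The thermostat case is worth spelling out, since it's the cleanest example of “inference” with no aboutness anywhere in it. A minimal sketch (the setpoint and hysteresis values are arbitrary choices):

```python
class Thermostat:
    """Bang-bang controller: a chain of causes, not a representer."""

    def __init__(self, setpoint=20.0, hysteresis=0.5):
        self.setpoint = setpoint
        self.hysteresis = hysteresis
        self.heating = False

    def step(self, sensor_reading):
        # Causation-on-contact all the way down: warm air deforms a
        # bimetallic strip (here, a float), which closes or opens a
        # circuit. Nothing in the device is "about" the room.
        if sensor_reading < self.setpoint - self.hysteresis:
            self.heating = True
        elif sensor_reading > self.setpoint + self.hysteresis:
            self.heating = False
        return self.heating

stat = Thermostat()
for temp in (18.0, 19.4, 20.6, 21.0, 19.6):
    print(temp, stat.step(temp))
```

Everything the device “knows” is a causal disposition to respond; there is no further fact about the reading referring to the temperature.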

Analogy is another great non-aboutness cognitive tool. An animal that can recognise similarities (along various dimensions) between superficially unlike events can learn a lot without aboutness. A primitive visual system notices another blue patch and chalks it up as being the bluebird it saw yesterday. That's a primitive kind of analogising (what we would typically call categorisation or the formation of concepts). Ratchet that up to a human who can see similarities that aren't only visual or sonic, but are more abstract, and they can categorise things together simply by noticing and comparing. (Effectively they're making inferences over time, spotting conventions, affinities, resemblances.) Crucial point: if I notice A and then notice B and detect some similarity, resonance, etc., I draw an analogy between them — but doing so doesn't affect them, nor does it imply that A mirrors/represents/refers to B. This is the fundamental fallacy we habitually commit. We project our own noticing of a resemblance, or our noticing of an entangled history between two things, onto the things themselves, and hence we go around believing that a word refers to something, an image is about something, a string of information represents something, and so on.

(Quick demo of why content isn't what it seems. I draw an analogy between two instances and label that thing a rabbit. Then someone else cottons on to a pattern repeated in nature and similarly draws an analogy between multiple instances of what they take to deserve the same label: they call the thing with a fluffy tail they saw yesterday and today a lapin (French). To the pro-aboutness person, this is because my concept of rabbit and their concept of lapin have the same content despite having different labels; it is precisely the content that picks out or characterises our concepts of rabbit and lapin. But this noticing of similarity between my and my Francophone friend's concepts is itself another analogy: the content-freak has noticed two things with some similarities, and some differences, and has analogised from one to the other. What's wrong here? Believe it or not, the problem of induction. Tomorrow I see another long-eared mammal but detect some features that are importantly different from the rabbit instances — I call it a hare instead and classify it as a new concept. The French person sees what I call a hare and doesn't notice the differences, and so calls it a lapin too; they're not just using a different label, they think it is the same thing they saw yesterday. Now our content-freak cannot say that lapin and rabbit share the same content. It turns out that “same content” was itself another analogy, always loose and fuzzy, and always open to being scrapped or re-formed. But wasn't there some real pattern in the world that, at least for a while, we both imperfectly tracked and corroborated, so that we both shared knowledge about the same feature of the world? We can't know. Content is in the eye of the beholder and open to constant refutation or revision, based on subsequent events in the world that can — like Hume's black swans — overturn any supposed regularity from the past: it can't be confirmed or verified.)

You can go around using the same words as other speakers of your language because you use a word as a tag for some analogy you've drawn between various instances of things. But as soon as your friends start using words differently, you'll follow suit (unless you want to be ostracised) because you're good at tracking conventions — which are analogies drawn between events that repeat across time. And this protean, shifting nature of our words is absolutely necessary for survival in a niche of protean and shifting conditions: Homo sapiens' evolution as uniquely social primates. Words with locked-in meanings wouldn't work; they wouldn't have the requisite flexibility. And it's not clear who would lock their meanings in, prior to the advent of lexicographers and grammar police. So we face a sublime irony: there must be regularities enough in the world for us to have evolved, and to have brains, and for analogies to ever have any usefulness; and yet we cannot ever know of (as in aboutness or content) such regularities, because to do so would require perfectly verified, divine, impossible knowledge that violates the problem of induction. (It's also the case that life couldn't have evolved in a perfectly regular environment.) The only way to check that an attribution of content is accurate would be to do something that is precluded by the arrow of time, whether or not the world has aboutness.

Finally, simulation can be non-aboutnessy — simulation as in episodic memory, mental time travel to the future, visualisation, and phenomenal consciousness itself. Your visual experience (the Grand Illusion) is a multimedia magic “show” that the brain puts on to aid a certain lifestyle for hyper-social hominins who lived and died by their ability to cooperate, conspire, and care. To do those things, it helps if you model others as being animated by an inner light, and if your own attentional processes are likewise attributed a kind of glow, an auratic nonphysical specialness: the inner magic show. (Again, see Graziano. Incidentally, this whole paradigm dooms thought experiments involving alien simulations, Boltzmann brains, evil demons, the Matrix, and so on. All of them doubt the world but not the integrity of the doubter, the one who can be deceived by “misrepresentations” about the world.)

Are the “qualia” of consciousness about anything? No, and I think this is where the illusion of aboutness actually comes from. Once you have that gulf between the arrival of photons and the arrival of the objects they bounced off, you have to mock up an experience of reality that is non-local. You have to assume there are objects out there divided by empty space, and that therefore when you see, hear, understand, communicate, or even just make eye contact, there are some airy filaments that somehow traverse the intervening space: filaments of aboutness.

Ironically, this illusion over-connects an already connected world; it's just that it bridges gaps and tries to take shortcuts (spooky action at a distance) because our eyes and visual systems (including the part responsible for phenomenal conscious vision) can't possibly reconcile the speed and behaviour of photons with the speed and behaviour of conspecifics and other macroscale objects that actually matter for survival. (Doing so would be computationally impossible anyway, I'm pretty sure.)

So qualia in our visual experience are not “about” the objects they seem to “depict”. Qualia are a bunch of inferences regarding what's in front of us (and I do mean in front: our vision is pinhole — and although other senses aren't just in front of us, vision dominates consciousness in the sighted population). The weird thing is that this process of high-level inference-making and analogy-seeking — paying attention to something — is imbued with a nonphysical, ineffable quality. (Why? I think Humphrey's answer is best: to make us more in love with the world, to make us value our and our kin's survival.) It's an impressive system, maintaining a “gapless” fluid stream of conscious experience. (There actually are gaps, but if you don't perceive them they might as well not have happened; see Dennett's discussion of Orwellian and Stalinesque memories in Consciousness Explained.) But even consciousness is not “about” anything. There are no intrinsic connections or relations to the events that populate conscious experience.

Now, perhaps the hardest pill to swallow. Qualia aren't about events in the world because qualia aren't real, or at least are nothing like the things we take them to be. There is no magical, discrete, animated object over there in front of you. There's a mess of fields, excitations that show up as particles, that don't have colour or essences or categories or content. All of that is just how our minds navigate this world, being themselves part of it, and made up of those selfsame undulating fields. Or superstrings or quantum loops or whatever. It doesn't matter, and you don't even need to be a reductionist to buy this. The point is that qualia are qualia, not the things they represent; and they couldn't represent anything anyway, because nothing can represent anything else, only resemble it. A subtle difference, but it means a lot for meaning.

How did this illusion arise?

Our bodies, including our thoughts, live in a world of chemical reactions. Chemical reactions rely on contact. All our senses depend on contact. No spooky action at a distance.

(Is there really no action at a distance? Quantum entanglement might be something like that, but it appears to have no macroscopic relevance and hints instead at an underlying reality from which space and distance are emergent. We live in the emergent cosmos, so I don't think entanglement is the road to aboutness. (For a while I entertained this idea, noting that David Deutsch posits that every particle contains “entanglement info” designating which branch of the multiverse it's in.) There's also gravity, the most famous action at a distance. But now we know that it too is an influence that propagates through space no faster than the speed of light. A gravitational wave can't exert influence over a distant region without rippling through the intervening space first. (Again, gravity is pretty special, even metaphysically interesting, given it's not like the other forces and seems to affect all types of particles, all mass and energy. But it can't get you to aboutness. Besides, it's all-pervasive — everything is attracted to everything else — not “connections” between two things in particular.) Wormholes are a theoretical way to shortcut spatial distance. They're interesting, and they point to some fundamental substrate from which our 3-space emerges. But unless someone has a wacko theory of how thoughts follow wormholes to get to their referents, I can't see them being relevant to anything in our lives, including aboutness.)

In short, you can't know about anything outside of your lightcone; indeed, you can't know about anything inside your lightcone. You only know what physically influences you. This rams home the point that aboutness is an illusion; otherwise we could potentially know about things beyond our lightcone of physical effects. You might say, “I can imagine something beyond our light cone quite easily and therefore have knowledge of it.” But how would you go about checking that the thing was really there? Only by physically investigating it, detecting some physical trace, bumping into a photon which had also bumped into it. That's what knowledge is. Nothing can have an effect, impact, cause, or influence over a distance. But the niceness of our senses, especially sight, builds up our mirage of having a perspective here that is of the world that is over there. (I have another post dedicated to this as the last illusion needing to be dispelled from the folk understanding of the mind, because it is the source of illusion itself.) But really we're just part of the world being bumped into by other parts. Through evolution, trial and error, we've got nervous systems that bump back in ways that keep us alive.

In our heads, we have neurons doing things in response to certain stimuli (from other neurons or photons, etc.). This is causation-on-contact; dominoes falling; one thing after another; stimuli and response; associative learning; operant conditioning — all those great, mechanistic, non-representational phenomena. And yet when we close our eyes or dream or simply look at the world, we have a cinematic experience. We seem to be representing the world and doing a pretty good job at it, right?

The mind-bending thing, which I still find hard to remember, is this. It's not that we lucked onto representation in a world where representations otherwise don't exist. (This view isn't crazy. Daniel Hutto gives a great account of late-arising content, i.e. content that evolved only in primates, or that happens only higher up in the chain of cognition and not in basic cognitive processes, which are content-free. A lot of 4E cognition people think somewhat along these lines. Frances Egan does great work and almost goes all-out eliminativist, but settles for a “deflationary” account of representation with no metaphysical commitments about intentionality but some elaborate features for representational vehicles that mean they are doing something that deserves the name.) It's that our weird conscious experience (whether it's best characterised as Dennett's “fame in the brain”, Nick Humphrey's Gregundrum, Thomas Metzinger's ego tunnel or, my favourite, Michael Graziano's attention schema), which is heavily tilted to the visual, is a hoax (a hallucination) that makes us think the world is made up of things like it, i.e. representation-like phenomena. Because we assume the world is populated by perspectives (things that have points of view of other things, like another person who has a certain view of the scene in front of them), we assume there is simply a way, a relation, that one thing has to another whereby it represents it, or maps it, or refers to it, or means it, or carries content about it, etc. So our brains make us think the world has representational things in it (media, symbols, words, blah blah) because the stream of consciousness itself is taken to be a representational thing (as opposed to just some strange experience that is orthogonal to the world rather than isomorphic to it… mixed mathematical metaphor).

For those in the back: our glitchy and illusory consciousness, which frames us and others as representers, lures us into thinking that representation is a thing that many other objects can also do. We especially apply this to thought itself, of course; but this framing is the true origin of the mistake.

We live in what seems like a simulation of the world. “We know it’s inaccurate in certain respects, but it’s still a crude or warped model, however limited, right?” That’s how we feel. But it is the very idea of a simulation that is crude. There is no such process. There are only non-intentional, associative, inferential, analogy-making, pattern-detecting processes in our brains. There are no representations there or anywhere in the world. The world is a fizz of particles; or it’s a roil of chemical reactions; or at a bigger scale it’s a stretching of space and a cooling of stars. Our consciousness has none of that. Yet we still assume it captures macroscopic reality — in fact we take macroscopic reality to be the mise en scène of our consciousness.

What I call the arena, that we take ourselves to be living in, is a construct or a metaphor (but, confusingly, not a metaphor for reality). It has properties that are orthogonal to reality. I once went into a K-hole and the part of my brain responsible for language bailed on me, while I had a terrifically slanted visual experience. The overall feeling, the qualia, the phenomenality, was just nothing like normal consciousness. It wasn’t like a dream or hallucination; it was “qualitatively” different. It was great. I think normal, sober, alert, noonday, default mode network, consciousness is as far from reality as a K-hole is from it. We hallucinate not only details of the world but whole properties, including intentionality and perspective.32)

Perspective = context

If I’m right — no intentionality or aboutness in language and thought, and perspective is a kind of illusion too — it doesn’t mean that what we think of as perspective is locked-in or unchanging. It should mean that what feels like perspective (one’s point of view, outlook, viewpoint, or worldview) is purely a function of context. In other words, the inferences you make about the world are entirely dependent on what inferences (broadly construed) you’ve already made. That’s a combination of what you’re born with (various innate schemata) and what you’ve experienced (conventions, patterns, associations, analogies that you’ve been exposed to). You can’t know anything you haven’t “bumped into” at some time. These contextual effects are the filter through which you take in new experiences, i.e. how you see the world.

In the ongoing debate over how much of this is innate versus learned, I lean closer to the likes of Cecilia Heyes, Dan Dennett and Michael Tomasello who emphasise the learning side. If someone grows up a “wild” child (raised without language or social cues modelled by adult conspecifics) they have an entirely different “worldview” to a typical human. They may well lack a worldview and perhaps even consciousness as we would normally talk about it. To a lesser extent, every person has a different “worldview” (especially nowadays in pluralistic societies) because they’re exposed to different stimuli, different contexts.

And if context is perspective, you can change your own perspective by changing your informational diet — although I think the word information is too tainted now, so maybe I should say you can change your diet of signals, or diet of stimuli. You can set up systems so that you’re only exposed to news stories of a certain political bent. You can make it so that you only read things in a certain language, or by certain authors, or from a certain time period. Doing so will absolutely change your perspective — although it may take a lot of time to override the perspectives formed in your early years and by the cultural milieu you were marinated in.
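
Here is a toy version of that claim, with a deliberately silly two-line “learner” of my own invention (the corpora and outputs are made up for the demo): the same prompt triggers different inferences depending entirely on the diet of stimuli that came before.

```python
# Two "informational diets" produce two "perspectives": identical prompt,
# different prior exposure, different inference.
from collections import Counter, defaultdict

def learn(corpus):
    model = defaultdict(Counter)         # word -> counts of next words
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def expect(model, word):
    nxt = model.get(word)
    return nxt.most_common(1)[0][0] if nxt else "?"

diet_a = "the market is free the market is efficient"
diet_b = "the market is rigged the market is cruel"

print(expect(learn(diet_a), "is"))  # 'free'   (ties broken by first seen)
print(expect(learn(diet_b), "is"))  # 'rigged'
```

Nothing in either model is about markets; each is a residue of what it bumped into, which is the whole point.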

Ok, so am I crazy?

I keep wondering, Why do I have the most hardcore view around? It must be that a grounding in abstruse literary theory and continental philosophy made me more able to accept claims that are too killjoy, too nihilistic, even for supposed hard-heads in the natural sciences and analytic philosophy. I’m just a pragmatic literary theorist, nowhere near as romantic as a physicist or computer scientist.

Allies

Continental philosophy is generally up for more radical conclusions than analytic philosophy. But it’s often less clear what people are claiming. E.g. Derrida: he happily jettisons reference. But then he seems to think that words can refer to other words; that is, that aboutness is unproblematic so long as it stays internal to the text.

Other continental theorists might actually come closer; but it’s often hard to figure out what they’re saying or to translate between different vocabularies (my life’s work). Honourable mentions to Karen Barad, Ray Brassier, Judith Butler, Gilles Deleuze, Paul Feyerabend, Michel Foucault, Elizabeth Grosz, Katherine Hayles, Jean-François Lyotard, Marshall McLuhan, Quentin Meillassoux, Richard Rorty (late-period), and Joseph Rouse. But none of them actually come out and talk about non-aboutness (actually Brassier & Meillassoux kind of do) as directly as analytic philosophers like WVO Quine, Dan Dennett, Alex Rosenberg, Rorty (in his analytic phase) or the Churchlands (although they stick mainly to the elimination of mental content, mental representation, folk psychology).

I suspect that a lot of people in neuroscience, molecular biology, cognitive science, AI, etc. are actually sympathetic to these ideas but either frame them differently or aren’t aware of the philosophical battles they entail, or just don’t care.

Anticipating objections

Performative contradiction, self-undermining argument. How can I write about the fact that aboutness doesn’t exist? “Performative contradiction” is a bad argument, ironically, because it assumes aboutness. Can I sing a song about not being able to sing? Absolutely. It’s something of a performative contradiction, but it (along with all paradoxes of the same self-referential structure) shows how aboutness has a tenuous grip on the world: doing the singing trumps the “content” of the song. The “meaning” of that song has no binding causal effect on its happening, clearly.

My arguing that aboutness doesn’t exist also has no effect on the world. The world will remain 100% impervious to the “content” of anything I ever say. My words will only have impacts as physical events. And where they do, it is typically because they trigger effects in listeners’ brains. Those effects are historically contingent depending on the conventions detected by the listeners: if I use words that they recognise, they will make certain inferences in a non-aboutness kind of way and I might be able to leverage analogies to trigger new patterns in them.

To be clear, I’m saying performative contradictions don’t exist either, because they rely on aboutness. So I can’t be performing one now.

This is horrible, nihilism, death of meaning, we live in a dead universe. Maybe. That’s not an argument against this, only a good reason why we might hope it’s not true.

There’s a more important reason why this nihilism argument is weak. I admit it’s scary; but isn’t it more amazing that we have this cognition, this experience of the world — from within that world, as part of that world — and that from nothing, our brains magic up a way of navigating the world that assumes there are abstract connections between people, events, words, objects, and symbols even though there aren’t? And how delightful that these fictions clue us into the real relationships between things: the natural cycles and repeating conventions, the similarities or analogies between disparate things, the causal weave of the world, “the correlated pattern in the game”. Meaning is real, but isn’t based on aboutness — it just seems that way to us.

Footnotes

↑ 1. I would love feedback on this because it’s the basis of a chapter in a book I’ve been working on for years. The problem is a doozy: How can one part of the world mean another part of it? That is, how can information, data, writing, maps, code, animal calls, artworks, genes, gestures, equations, thoughts, signals, or anything else somehow be about something else? There’s no magical connection between the word (or anything else in that list) and the thing it represents. The unshakable intuition that there is some kind of connection seems to me to be an artefact of how our brains, language, and culture evolved. This is a big topic that cuts across dozens of disciplines. But analytic philosophy is probably the place that has tackled it most head-on, although mainly in ways that I think are totally unconvincing. In fact, I’m convinced this is the single biggest limitation to most philosophical approaches, worldviews, philosophies of science, ideologies, etc. Almost every thinker ever has been under the delusion that we live in a world of aboutness. (And I know that every thinker ever has felt every other thinker ever has been deluded. Gotta rehearse the classics.) I’ve tried not to use jargon from any particular discipline. But I do want philosophers, in particular, to respond because I still feel like I’m missing some argument from somewhere in that literature that I’ve overlooked, because my conclusion (that there simply is no aboutness, representational content, semantic meaning, intentionality) seems to be a very rare position, considering how obvious and useful it seems to me. Also there are two related posts, one on doing science without the notion of observers and the other about consciousness and shaking the ultimate illusion: perspective. They’re all interrelated and have some overlap.
↑ 2. Depending on the passage it’s embedded in, there might be contextual cues. But imagine a simple sentence like, “This painting is awful.” You would need to know which century it was written in to get the meaning.
↑ 3. Probably the entire realm of normativity if you’re feeling expansive.
↑ 4. This SEP article is a great intro to intentionality, especially §9. I hold that all attempts at naturalising this are doomed; I think intentionality should be eliminated.
↑ 5. For the philosophically inclined, Rosenberg’s key article on the topic is here; but he outlines his view in the popular book The Atheist’s Guide to Reality (2011).
↑ 6. Philosophy mavens: yes, he has a whole thing on how you can use knowledge “about” the world to show how nothing can be about the world; you just need a more modern idea of epistemology, which he gets with an update on Quine’s ideas.
↑ 7. I’ve asked him who his allies are. He notes Bill Ramsey, Quine, and the Churchlands to some extent, and feels Dennett might be a closet eliminativist with respect to content (Rosenberg, personal communication). I know what he means; change Dennett’s terminology a little and he is an eliminativist in all but name, the name kept for PR reasons.
↑ 8. To be clear, if I had any formal training in physics or computer science I would dedicate my life to properly figuring this stuff out. How I wish I could put my money where my mouth is and not just be a guy on the internet with a theory that “physicists should take seriously” — surely the lamest fate for anyone blessed with literacy and web access. Also, to allay fears that I’m a crank, I don’t think the content-free version of information would transform our lives by offering faster-than-light travel, proving P = NP, or finding new particles. Merely that it might help eliminate some dead-end ways of thinking about, say, the black hole information paradox, and perhaps expose some questions as poorly posed. It does, however, make predictions, especially in the realm of communication.
↑ 9. This is the bit analytic philosophy people will probably hate the most. Even hardcore naturalists, materialists, etc. feel wobbly talking about mathematics, which has a beguiling, almost supernatural feel to it. Philosophy of mathematics used to be my thing. For mavens: I actually think there are versions of nominalism, fictionalism and even structuralism that, with minor tweaks, are totally compatible with eliminating intentionality, content, and representation. True, you have to jettison truth as being correspondence or propositional, but proof can be retained on purely physical, even computational terms. The seeming alignment of mathematical statements with timeless regularities in the world is best thought of counterfactually. If we lived in an irregular world, would mathematics be unreasonably effective? Nope. Neither would it ever have evolved.
↑ 10. People — especially scientists, especially physicists and other “hard” scientists — hate this. Understandable. It says that whatever laws, theorems, proofs physics has on the books are just that: on the books. Without being used for something, they’re not knowledge, they’re just entries in some ledger. Imagine a computer spits out permutations of mathematical symbols, and alights on Maxwell’s equations, or the Schrödinger equation or whatever. Knowledge? It would, according to most people’s ideas, be “about” the world and would “contain” knowledge of the universe. I think that’s mad. Comes down to the regrettable fact that most modern knowledge is descriptive: a linguistic or mathematical description of as broad a class of phenomena as possible. In this sense, it’s a bit like a work of history: good, solid, non-narrative history that simply attempts to record or account for what happened. What is its value going forward? It will have predictive value in the future in direct proportion to how uniform and regular the phenomena it describes are. For a theorem describing the behaviour of hydrogen atoms, it should do pretty damn well and never give out in our lifetimes. For one describing living systems (with their combinatorial explosion of genes, proteins, organic molecules, nervous systems, etc.) it won’t even get 100% of the phenomena currently known to biologists. For something on human culture (like the discipline of history itself) it will have approximately zero predictive value. Not because it isn’t “true” but because human culture is nonuniform, it keeps changing, and will be different in different ways in the future, so a record or description — even a complete one — of past events is not so much useless as irrelevant. Even predictive value is not the currency of knowledge. The best knowledge is embodied in technologies/practices that allow us to make new things happen. Much of it doesn’t even have theorems, formal statements, conjectures, formulae, canonical versions, etc. Those things are only ever aides-mémoire: part of a whole system involved in doing the knowledge. If somebody lost the documents on which the patents for CRISPR were written, it wouldn’t affect the use of CRISPR as a technology used by molecular biologists or indeed by bacteria (from which we borrowed it). The reason knowledge doesn’t lose power if you take away the written version of it is that the written version was not about anything. Neither was the version in people’s heads. An actual process of physically doing stuff is “where” the knowledge is. That’s because we live in a physical universe of stuff happening, not a storyworld that is described or narrated by authors, evil demons, simulators, or gods.
↑ 11. Is this a relational universe like Carlo Rovelli or Karen Barad advocate? Yeah, maybe. Is it some kind of process philosophy? I guess so.
↑ 12. If we want a slogan for the overall ontology of this crazy world, I say we drop Wheeler’s popular “It from bit” — loaded as it is with the intentionality and platonic realism of information — and adopt: “Survival of the it-est.” For an epistemological slogan we can ditch “might is right” and go with “do is true”.
↑ 13. Note another application is to networks and complexity studies. People from all sorts of domains love modelling the network of connections between people, animals, airports, computers, historical actors, etc. But these connections (although they show as unambiguous lines on a graph) are imaginary. They stand in for some correlated behaviour, some physical events that were linked (via actual physical contiguity) to one another. But just because network graphs and similar “representations” show connections that leap across space and time, this seems to gull people into thinking there actually are long-range, spooky connections. (A toy sketch of this follows these footnotes.)
↑ 14. Consider the fact that the information sampled from a single die roll is higher than from a single coin flip. Under this conception of entropy, there is greater reduction in uncertainty with a die because it is a definite outcome drawn from a larger space of possibilities. This is madness. I mean, it makes perfect mathematical sense in terms of the enumeration of phase spaces, or combinations, permutations, etc. But this has little relevance to the meaningfulness of signals in communication and, more importantly, masks the whole problem with the current notion of information, which is incorrigibly intentional or aboutness-laden. Namely, the quantity of information “in” the signal (a heads, rolling a five) is given only if one has complementary info: the size of the state space. Also, even in this idealised example, one needs to have some kind of detector or receiver that is constrained in special ways to discriminate the type of signal, which itself has to be part of a carefully constrained system, such as a die that can only land on six faces. In other words, the purely informational approach ignores the thermodynamic reality of, well, reality. Hence, folks like Landauer are on the right track by tethering platonic ideas of information entropy to thermodynamic ones. (The die/coin numbers are worked in a toy calculation after these footnotes.)
↑ 15. This is clearly the case with codes/ciphers but it holds for all information. E.g. an astronomer analysing a bunch of starlight from a region of the sky: what they get out of that info is dependent on their pre-existing knowledge, theories, history, other data already collected, etc. And even then, the info from the starlight can be correlated in arbitrary and infinite ways, yielding more meaning. Suppose a military draft comes into effect and, instead of drawing balls from a barrel, the lottery is based on astronomical data gleaned in the aforementioned example. Now the starlight “means” who gets conscripted, and someone with access to it who knows the “cipher” can predict who will fight in battle; a queer kind of astrology. But of course, that knowledge isn’t “in” the starlight, it never was; it depended solely on how the starlight is correlated/entangled (I use these terms in a quasi-technical sense) with other information in brains, cultures, technologies. (A toy version of this lottery follows these footnotes.)
↑ 16. For people jumping ahead: yes, that means some signals contain more potential information than others. But that potentiality is impossible to second-guess or know in advance, for the same reason David Deutsch says you can’t predict what the next advances in knowledge will be; the same reason you cannot devise algorithms that know in advance whether certain mathematical questions have answers; and the same reason you don’t know that Wolfram’s Rule 110 will lead to complexity without running the thing. (A minimal Rule 110 run follows these footnotes.)
↑ 17. Examples of scientists beguiled by this idea include Deutsch’s knowledge probe, Dawkins’ replicator bomb, the idea that E=mc^2 represents huge information compression, that DNA “contains” information “about” the organism that develops, etc.
↑ 18. Biologists are the most beguiled and Dan Dennett has been reining them in for years; see especially Darwin’s Dangerous Idea p.195ff.
↑ 19. Classic article by Schlottmann and Surian on how we perceive causation-at-a-distance events as well as causation-on-contact ones, and how we parse the two types.
↑ 20. And yes, the real issue is a cognitive neuroscience explanation of introspection. Graziano’s is a real candidate; it might not be the best one but it’s the right shape of explanation; we want a solution to the so-called “hard problem”.
↑ 21. Note for philosophers: consider that teleology, when we talk of cosmic teleology or any notion that the future determines the present, is nothing more than a positing of causation-at-a-distance through time rather than space.
↑ 22. The active inference crowd will be getting excited now — I think they’ve got a great overall framework for how cells and brains establish a boundary and use it to mediate various essays “about” the world around them to aid action. I think this framework can be used without aboutness; this article is fascinating, and I think what it’s advocating can be construed in non-aboutness terms (although it relies heavily on mathematical structures or formalisms being “realised” in cognitive systems, i.e. brains or cells); but then this article worries me.
↑ 23. Quick demo of why content isn’t what it seems (a coded toy version follows these footnotes). I draw an analogy between two instances and label that thing as a rabbit. Then someone else cottons on to a pattern repeated in nature and similarly draws an analogy between multiple instances of what they take to deserve the same label: they call the thing with a fluffy tail they saw yesterday and today a lapin (French). To the pro-aboutness person this is because my concept of rabbit and their concept of lapin have the same content despite having different labels; it is precisely the content that picks out or characterises our concepts of rabbit and lapin. But this noticing of similarity between my and my Francophone friend’s concepts is itself another analogy: the content freak has noticed two things with some similarities, and some differences, and has analogised from one to the other. What’s wrong here? Believe it or not, the problem of induction. Tomorrow I see another long-eared mammal but detect some features that are importantly different from the rabbit instances — I call it a hare instead and classify it as a new concept. The French person sees what I call a hare and doesn’t notice the differences, and so calls it a lapin too; they’re not just using a different label, they think it is the same thing that they saw yesterday. Now our content freak cannot say that our lapin and rabbit share the same content. It turns out that “same content” was itself another analogy, always loose and fuzzy, and always open to being scrapped or re-formed. But wasn’t there some real pattern in the world that, at least for a while, we both imperfectly tracked and corroborated? So didn’t we both share knowledge about the same feature of the world? We can’t know. Content is in the eye of the beholder and open to constant refutation or revision, based on subsequent events in the world that can — like Hume’s black swans — overturn any supposed regularity from the past: it can’t be confirmed or verified. You can go around using the same words with speakers of your language because you use that as a tag for some analogy you’ve drawn between various instances of things. But as soon as your friends start using words differently, you’ll follow suit (unless you want to be ostracised) because you’re good at tracking conventions — which are analogies drawn between events that repeat across time. And this protean, shifting nature of our words is absolutely necessary for survival in a niche of protean and shifting conditions: Homo sapiens’ evolution as uniquely social primates. Words with locked-in meanings wouldn’t work; they wouldn’t have the requisite flexibility. And it’s not clear who would lock their meanings in, prior to the advent of lexicographers and grammar police. So we face a sublime irony: there must be regularities enough in the world for us to have evolved and to have brains and for analogies to ever have any usefulness; and yet we cannot ever know of (as in aboutness or content) such regularities because to do so would require perfectly verified, divine, impossible knowledge that violates the problem of induction. It’s also the case that life couldn’t have evolved in a perfectly regular environment either. The only way to check that an attribution of content is accurate would be to do something that is precluded by the arrow of time, whether or not the world has aboutness.
↑ 24. Again, see Graziano.
↑ 25. Doing so would be computationally impossible anyway, I’m pretty sure.
↑ 26. Why? I think Humphrey’s answer is best: to make us more in love with the world, to make us value our and our kin’s survival.
↑ 27. There actually are gaps, but if you don’t perceive them they might as well not have happened; see Dennett’s discussion of Orwellian and Stalinesque memories in Consciousness Explained.
↑ 28. Is there really no action at a distance? Quantum entanglement might be something like that, but it appears to have no macroscopic relevance and hints instead at an underlying reality from which space and distance are emergent. We live in the emergent cosmos, so I don’t think entanglement is the road to aboutness (for a while I entertained this idea, noting that David Deutsch posits that every particle contains “entanglement info” designating which branch of the multiverse it’s in). There’s also gravity, the most famous action at a distance. But now we know that it too is an influence that propagates through space no faster than the speed of light. A gravitational wave can’t exert influence over a distant region without rippling through the intervening space first. (Again, gravity is pretty special, even metaphysically interesting, given it’s not like the other forces and seems to affect all types of particles, all mass and energy. But it can’t get you to aboutness. Besides, it’s all-pervasive: everything is attracted to everything else, not just “connections” between two things in particular.) Wormholes are a theoretical way to shortcut spatial distance. They’re interesting and they point to some fundamental substrate from which our 3-space emerges. But unless someone has a wacko theory of how thoughts follow wormholes to get to their referents, I can’t see it being relevant to anything in our lives, including aboutness. In short, you can’t know about anything outside of your light cone; indeed you can’t know about anything inside your light cone. You only know what physically influences you. This rams home the point that aboutness is an illusion; otherwise we could potentially know about things beyond our light cone of physical effects. You might say, “I can imagine something beyond our light cone quite easily and therefore have knowledge of it.” But how would you go about checking that the thing was really there? Only by physically investigating it, detecting some physical trace, bumping into a photon which had also bumped into it. That’s what knowledge is.
↑ 29. I have another post dedicated to this as the last illusion needing to be dispelled from the folk understanding of the mind, because it is the source of illusion itself.
↑ 30. This view isn’t crazy: Daniel Hutto gives a great account of late-arising content, i.e. content that evolved only in primates or happens only higher up in the chain of cognition, not in basic cognitive processes, which are content-free. A lot of 4E cognition people think somewhat along these lines. Frances Egan does great work and almost goes all-out eliminativist, but settles for a “deflationary” account of representation with no metaphysical commitments about intentionality but some elaborate features for representational vehicles that mean they are doing something that deserves the name.
↑ 31. Whether it’s best characterised as Dennett’s “fame in the brain” or Nick Humphrey’s Gregundrum or Thomas Metzinger’s ego tunnel or, my favourite, Michael Graziano’s attention schema.
↑ 32. And I’d add a whole bunch to that list. The old favourites like god, animacy, spirit, soul, essence. But also the ones that philosophers still believe in: modality, mathematics, morality, which along with meaning comprise Huw Price’s four ems that hardcore naturalism is meant to deal with.
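
Since a few of these footnotes gesture at things you can actually compute, some toy sketches follow. All are my own Python illustrations; every name and number in them is invented for the demo. First, footnote 13: a network “edge” is nothing but a tally of local contact events, and the long-range line on the graph summarises those events without adding any connection.

```python
# A network "edge" rebuilt from a log of local contact events (toy data).
from collections import Counter

contact_log = [                      # (who, whom) contacts, in time order
    ("alice", "bob"), ("bob", "carol"), ("alice", "bob"),
    ("carol", "dave"), ("alice", "bob"),
]

edges = Counter(frozenset(pair) for pair in contact_log)
for pair, count in edges.items():
    print(sorted(pair), "->", count, "contact events")
# The alice-bob "edge" with weight 3 is three bumps, nothing more; drawing
# it as one line is what makes it look like a spooky standing connection.
```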
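
Footnote 14’s die/coin comparison, with the numbers: the self-information of one outcome from a uniform source is log2(N) bits, so the figure depends entirely on the complementary info, the assumed state space N.

```python
# Self-information of a single outcome from a uniform source: log2(N) bits.
from math import log2

print(f"coin flip: {log2(2):.3f} bits")  # 1.000
print(f"die roll:  {log2(6):.3f} bits")  # 2.585

# Same kind of physical event, different assumed state spaces, different
# "information" -- the number is never in the signal alone.
for n in (2, 6, 20):
    print(f"one outcome from {n} possibilities: {log2(n):.3f} bits")
```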
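
Footnote 15’s draft lottery as a toy: the same “starlight” readings conscript different people under different codebooks, so the meaning lives in the correlation between signal and registry, not in the signal.

```python
# Toy draft lottery keyed to "starlight" data (all values invented).
starlight = [231, 14, 98, 177]       # stand-in photon counts

registry_a = ["Ana", "Bo", "Cy", "Dee", "Ed"]
registry_b = ["Fox", "Gil", "Hui", "Ira", "Jo"]

def conscripts(signal, registry):
    return [registry[reading % len(registry)] for reading in signal]

print(conscripts(starlight, registry_a))  # ['Bo', 'Ed', 'Dee', 'Cy']
print(conscripts(starlight, registry_b))  # ['Gil', 'Jo', 'Ira', 'Hui']
# Nothing about conscription is "in" the starlight; swap the registry
# (the cipher) and the same readings "mean" different people.
```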
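
Footnote 16’s point about Rule 110: the update rule fits in a line, but the only way to find out that it generates complexity is to run it.

```python
# Wolfram's Rule 110, the standard construction on a ring of cells.
RULE = 110  # binary 01101110: output bit per (left, centre, right) pattern

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 32  # a single live cell
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```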
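
And footnote 23’s rabbit/lapin story: two agents lump instances by similarity, a borderline instance splits one agent’s category but not the other’s, and “same content” dissolves into two different analogies. (Features and thresholds invented, obviously.)

```python
# Two agents categorise by analogy (similarity threshold), not by content.
instances = {
    "monday_animal":  {"ears": 0.6, "tail": 0.9, "size": 0.3},
    "tuesday_animal": {"ears": 0.6, "tail": 0.9, "size": 0.3},
    "maybe_a_hare":   {"ears": 0.9, "tail": 0.8, "size": 0.6},
}

def same_category(a, b, threshold):
    gap = max(abs(a[k] - b[k]) for k in a)  # biggest feature difference
    return gap <= threshold

anchor = instances["monday_animal"]
for name in ("tuesday_animal", "maybe_a_hare"):
    for agent, threshold in (("me", 0.2), ("francophone", 0.4)):
        verdict = same_category(anchor, instances[name], threshold)
        print(f"{agent}: {name} same as monday_animal? {verdict}")
# "me" splits off the hare (gap 0.3 > 0.2); the francophone lumps it in
# (0.3 <= 0.4). Neither tracks a shared "content"; each draws its own
# analogy, and tomorrow's instance can redraw either one.
```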