Wisdom Never Dies

The Meaning of 1010011

Quantifying Information

Here is a piece of information:

1010011

From an information-theoretic perspective, we can calculate the Shannon entropy of this information, as is done in the graphic at the top of this blog post. The Shannon entropy represents the average surprise of this message, given the possible symbols (0, 1) and the seven symbol positions. It can also be thought of as the number of bits per symbol necessary to encode the message. This is an extremely important and fundamental concept in information theory. Moreover, since Shannon entropy has been shown to be formally equivalent (up to a constant factor) to the thermodynamic entropy of a system, the importance of this concept is hard to overstate. As Carlo Rovelli says in “Reality Is Not What It Seems” (2014 in Italian; 2017 in English), information is “…a specter that is haunting theoretical physics, arousing enthusiasm and confusion.” It appears, he continues, at “… the foundations of thermodynamics, the science of heat, the foundation of quantum mechanics…” and, I will add, in biology as well.
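
To make the calculation concrete, here is a minimal sketch in Python (not part of the original graphic) that computes the entropy of this particular string from its empirical symbol frequencies:

```python
# Shannon entropy of the 7-symbol message "1010011" over the alphabet {0, 1},
# computed from the empirical frequency of each symbol.
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average surprise, in bits per symbol, of the message's symbol distribution."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

msg = "1010011"              # four 1s, three 0s
print(shannon_entropy(msg))  # ~0.985 bits per symbol
```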

Qualifying the Quantifying of Information

But what makes this so ‘confusing’, as Rovelli notes? For one thing, although we can calculate the information entropy of 1010011, and know the precise way in which it is related to thermodynamic entropy, the information 1010011 still means nothing. That is, while information theory helps us quantify information, it so far tells us nothing about what the information means. Shannon information assumes a common language at the source and the receiver of a message, and tells us nothing about that language. In Shannon information we have syntax, but no semantics. It is we who invest the information with meaning. In fact, in this particular example, when I read the string 1010011 it instantly reminds me of engineering school, where my fellow nerds and I would sing this string to the tune of the ’90s banger “Come Baby Come” by K7. It could also be the decimal number 83. The syntax alone can yield an infinite variety of semantics.
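
As a small illustration of this syntax-without-semantics point, here is a sketch showing the same seven bits read under two different assumed “languages” (the decodings are arbitrary conventions chosen for the example, nothing privileged):

```python
# The same bit string, decoded under two different conventions.
bits = "1010011"

as_integer = int(bits, 2)       # read as an unsigned binary number -> 83
as_ascii = chr(int(bits, 2))    # read the same value as an ASCII code point -> 'S'

print(as_integer)  # 83
print(as_ascii)    # S
```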

Consider the symbol 𝛑 (pi). We recognize this as the constant 3.14159… that is the ratio of a circle’s circumference to its diameter in Euclidean geometry. We can represent it with one symbol because of our common language. Strictly speaking, we cannot write pi out in its entirety as a decimal number: pi is irrational (indeed transcendental), so its decimal expansion never terminates or repeats. Thus we see that not only does 1010011 mean an infinite number of things depending on context, but even a single concept (pi) can take anywhere from one symbol to infinitely many to encode. Is there any way for physics and biology to make sense of this context and semantics surrounding information?
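
A tiny sketch of this encoding-length point, using the sympy library purely for illustration: in a shared symbolic language pi is a single exact token, while any finite decimal expansion, no matter how long, is only an approximation:

```python
# One exact symbol vs. an arbitrarily long (but still approximate) expansion.
from sympy import pi, N

exact = pi            # a single token, exact by convention of the shared language
approx = N(pi, 50)    # 50 significant digits -- still not pi, however many we take

print(exact)   # pi
print(approx)  # 3.14159265358979... (50 digits)
```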

Quantifying the Qualifying of Quantifying Information

One approach to quantifying the semantics of information for living systems is to attack the definition directly. Such is the approach of David Wolpert and Artemy Kolchinsky in their 2018 paper “Semantic information, autonomous agency, and nonequilibrium statistical physics“. They state clearly in the abstract, “We define semantic information as the syntactic information that a physical system has about its environment which is causally necessary for the system to maintain its own existence.“ The bulk of their work, then, is to define “causal necessity”, which they do quite nicely. In this approach, these ‘proto-living’ (“…high levels of semantic information is a necessary, though perhaps not sufficient, condition for any physical system to be alive.“) or even “pre-cognitive” (the term used by Kolchinsky in private correspondence with me) systems are complex physical entities which eventually build up into full-blown living / cognitive systems.
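
To get an intuition for the flavor of their definition, here is a toy cartoon of my own (not the authors’ actual formalism, which is framed in terms of viability, mutual information, and counterfactual interventions): an “organism” whose survival depends on acting in accord with its environment. Scrambling the correlation between its sensor and the environment, and watching its viability drop, is the rough spirit of information that is “causally necessary” for continued existence:

```python
# Toy sketch: how much of an organism's correlation with its environment
# actually matters for staying alive? Compare survival with an intact
# (noisy but informative) sensor vs. a scrambled one.
import random

def survival_rate(scramble_sensor: bool, trials: int = 100_000, noise: float = 0.1) -> float:
    survived = 0
    for _ in range(trials):
        env = random.choice([0, 1])                            # environmental state (e.g., food left/right)
        sensor = env if random.random() > noise else 1 - env   # noisy but correlated reading
        if scramble_sensor:
            sensor = random.choice([0, 1])                     # intervention: destroy the correlation
        action = sensor                                        # the organism acts on its sensor
        if action == env:                                      # acting correctly keeps it "alive"
            survived += 1
    return survived / trials

intact = survival_rate(scramble_sensor=False)     # ~0.90
scrambled = survival_rate(scramble_sensor=True)   # ~0.50
print(f"viability with correlation: {intact:.2f}, after scrambling: {scrambled:.2f}")
# The drop in viability marks the portion of the correlation that "matters" --
# a cartoon of semantic (as opposed to merely syntactic) information.
```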

As the 2017 Quanta article “How Life (and Death) Spring From Disorder“ puts it, “…the key point is that well-adapted organisms are correlated with that environment.” Further, from the same article, “Darwinian evolution can be regarded as a specific instance of a more general physical principle governing nonequilibrium systems.” Thus, in a sense, not only are early hominins our evolutionary progenitors, but structures such as sand ripples and crystals are our pre-evolutionary progenitors as well.

These insights stem more broadly from the field of non-equilibrium statistical mechanics, which treats life as a system that gains semantic / contextual information about its environment in order to pump entropy away from itself. Another approach from this field in which organisms correlate with their environment is Karl Friston’s Free Energy Principle. From Friston’s 2013 paper “Life as we know it”: “life—or biological self-organization—is an inevitable and emergent property of any (ergodic) random dynamical system that possesses a Markov blanket… The existence of a Markov blanket means that internal states will appear to minimize a free energy functional of the states of their Markov blanket.“
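
As a very rough sketch of the flavor of free-energy minimization (this toy example is mine, not Friston’s formalism; the Gaussian model and the numbers are arbitrary assumptions), an internal state can be updated by gradient descent on a free energy that balances prior belief against sensory evidence:

```python
# Minimal free-energy-style inference: an internal estimate mu explains a
# sensory observation under a simple Gaussian generative model by descending
# the gradient of F = precision-weighted prediction error + prior error.
prior_mean, prior_var = 0.0, 1.0   # prior belief about the hidden cause
obs, obs_var = 2.0, 0.5            # sensory observation and its noise variance

mu = prior_mean                    # internal (recognition) estimate
lr = 0.05
for _ in range(500):
    # dF/dmu for F = (obs - mu)^2 / (2*obs_var) + (mu - prior_mean)^2 / (2*prior_var)
    grad = -(obs - mu) / obs_var + (mu - prior_mean) / prior_var
    mu -= lr * grad

print(round(mu, 3))  # converges to the precision-weighted posterior mean, ~1.333
```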

From Life to Consciousness

All this research paints a picture of minute physical systems which build up via self-organization principles into what we call life. As Kolchinsky suggests, this is pre-cognitive behavior. Does this behavior eventually build up into and explain consciousness as well?

Before formally blurting out the answer, which is yes, let us pause for a moment to note that a yes answer does not mean panpsychism, or more accurately, protopanpsychism. There are no “intrinsic natures” here, and one can easily see that not all information qualifies as having an essence of experience or subjectivity, or even life. To drive the point home, ask yourself: does 1010011 have its own inner subjectivity? Clearly the answer is no. It has meaning, and subjectivity, only in context. This is physicalism (aka materialism), plain and simple.

Now I’m ready to state that some researchers do believe this is the path to consciousness. Friston, together with neuropsychologist Mark Solms, has created an extension of the Free Energy Principle which accounts for consciousness as a natural progression of nature from a non-equilibrium statistical mechanics point of view. Solms’s 2021 book “The Hidden Spring” provides the details, including why he believes this solves the “hard problem” of consciousness à la Chalmers.

This is a fascinating theory. As I state in my review of The Hidden Spring:

The heart of this model is a self-organizing Markov Blanket which serves as a sensory/motive boundary which sequesters everything inside from the world at large, yet allows for proper sampling of said world. That is the linchpin insight that solves the hard problem. Such a boundary is what gives us license to take an inner subjective perspective in the first place. Another crucial point, as Solms says, "The answer [to why feelings arose via evolution] starts from the fact that needs cannot be combined and summated in any simple way. Our multiple needs cannot be reduced to a single common denominator; they must be evaluated on separate, approximately equal scales, so that each of them can be given its due." That is, how could you quantify "hunger" vs "warmth" - "need to eat" vs "need for heat" - etc.? You cannot, hence we have feelings, aka affective consciousness. Affective consciousness naturally precedes cortical consciousness, and in fact is what gives the cortex consciousness.

Thus we’ve gone from seeing information at its most bare, a string of 1s and 0s untethered from meaning, to a system which makes its own meaning by first keeping itself from coming into thermodynamic equilibrium with its environment, and then seeing the world through affect-colored glasses. So it is that 1010011 is, to me, a melody. What is it to you?