Three Ways I've Changed my Mind on Mind

I’ve been studying philosophy of mind (consciousness) for a few years now, and I spent two decades before that in the adjacent field of AI.  What follows are three ways I’ve changed my mind on “mind” over that time, the last few years in particular.


1. The Cortical Fallacy is Real

I make this point first because it leads to the next two realizations; it’s that important.

It’s pretty natural for an AI researcher to begin by believing that consciousness is a product of the neocortex (the cortical fallacy), because traditional AI is built on artificial neurons, modeled in particular on those of our visual cortex.  This fallacy buys wholesale into another fallacy, that of the “triune brain”, in which the brain is built up evolutionarily in three largely independent stages, the last (and therefore best) of which is the neocortex.  The reality is more nuanced: the cortex is indeed a newer structure, but it developed in tandem with (not over and above) the older structures of the brain, such that the older and newer structures together form a newer unit, not an older system plus a newer system.

But things get worse for the idea that consciousness is a product of the cortex.  As thinkers such as Mark Solms have persuasively argued, consciousness qua feeling existed prior to the cortex’s evolutionary appearance (see also this blog post).  Thus, although the cortex can modify or enhance consciousness, it is not responsible for it.  At all.  The cortex is not necessary for consciousness.  This means that devoting the bulk of our research to studying the cortex (e.g., our vision system) will not lead to sentient AI.

This raises the question: if consciousness is created without a cortex, can it be created without any brain structures at all?  That is, is consciousness substrate-free?


2. Consciousness is (Essentially) Not Substrate-Free

This is perhaps the biggest course correction I’ve made in thinking about consciousness.  When one imagines silicon-based AIs becoming sentient, one is imagining that consciousness is not lashed to the mast of biological brains, and thus that a biological substrate is not required for consciousness, i.e. that consciousness is substrate-free.  How I’ve changed my mind on this particular aspect of mind goes something like the following.

The greatest difference between consciousness and AI, as I see it, is that while we seem to be the consumers of our own information, an AI can do no such thing.  When an AI (when any computer) generates information, we are the consumers of that information.  Computers, as we know them, do not generate meaning for themselves (discussed further here).  This is a hugely important point.  An example should make it clear.

When a self-driving car “decides” to accelerate around a slow-moving vehicle, this is akin to a thermostat “deciding” to turn on an air conditioner.  Both are Chinese Rooms, à la Searle.  Inasmuch as these systems contain models of the world, those models have no semantic meaning with respect to the inputs and outputs of the system, only syntax.  Moreover, neither system contains any model of itself.  For us, somehow, our models of the world have semantic as well as syntactic meaning, and they include models of ourselves.
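To make the point concrete, here is a minimal sketch of what both “deciders” look like stripped to their logic.  All names and thresholds are mine, purely illustrative, not taken from any real thermostat or driving stack:

```python
# A purely syntactic "decider": it maps input symbols to output symbols
# by rule, with no model of itself and no notion of what the symbols mean.

def thermostat_step(temp_reading: float, setpoint: float) -> str:
    """Return a control token based only on symbol comparison."""
    # To the system, 'temp_reading' is just a number to compare, not
    # "how hot the room feels" -- the semantics live in us, the designers.
    return "AC_ON" if temp_reading > setpoint else "AC_OFF"

def overtake_step(lead_speed: float, own_speed: float, gap_m: float) -> str:
    """The self-driving-car 'decision' has the same logical shape."""
    if lead_speed < own_speed and gap_m > 50.0:
        return "ACCELERATE_AND_PASS"
    return "FOLLOW"

# Nothing in either function consumes its own output as meaningful
# information; we do.  That is the Chinese Room point in miniature.
print(thermostat_step(26.5, setpoint=22.0))             # AC_ON
print(overtake_step(20.0, own_speed=30.0, gap_m=80.0))  # ACCELERATE_AND_PASS
```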

So how do nuts-and-bolts inputs to sensors become more than mere syntactic signals?  Although I’m not yet sure, I think the answer is something like what Artemy Kolchinsky and David Wolpert have come up with: Shannon-type information acquires meaning internal to a system when that information helps the system pump entropy away from itself in the course of staying alive (their paper is linked here).  If that is true, then something like our complex biological selves is a necessary precursor to consciousness, and we could hardly create consciousness with anything but biological wetware.  Substrate-free consciousness is thus possible in theory, but why swap out materials when nature has already discovered the best ones?
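A toy rendering of their core move, as I understand it: information is “semantic” to the degree that destroying its correlation with the world hurts the system’s viability.  The forager below is my own construction, far simpler than their formal framework, but it shows the intervention in action:

```python
import random

def run_agent(scramble_sensor: bool, trials: int = 10_000) -> float:
    """Fraction of trials in which a simple forager 'stays alive'.

    Each trial: food sits at one of 4 sites; the agent reads a sensor
    that reports the site, goes there, and survives iff it finds food.
    """
    survived = 0
    for _ in range(trials):
        food_site = random.randrange(4)
        reading = food_site
        if scramble_sensor:
            # Sever the correlation between world and sensor while
            # keeping the sensor's Shannon statistics identical.
            reading = random.randrange(4)
        if reading == food_site:   # the agent goes where the sensor says
            survived += 1
    return survived / trials

v_intact = run_agent(scramble_sensor=False)    # ~1.00
v_scrambled = run_agent(scramble_sensor=True)  # ~0.25
# The viability lost under scrambling is, on this toy view, the
# "semantic" share of the sensor's information.
print(f"intact={v_intact:.2f}, scrambled={v_scrambled:.2f}, "
      f"semantic value={v_intact - v_scrambled:.2f}")
```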


3. Consciousness and Intelligence are Not the Same Thing

At the end of the first realization (the cortical fallacy) I made the crucial statement, “The cortex is not necessary for consciousness.”  What, then, does the cortex do?  The role of the cortex is something like providing what we think of as intelligence to consciousness.  This is the point of Jeff Hawkins’ brilliant Thousand Brains Theory.  By “intelligence” I do not mean what is supposedly measured by IQ tests, nor do I mean intelligence as a comparator among species (a concept excellently encapsulated by umwelt).  Here, “intelligence” is the way we model our world and manipulate those models with predictions, analogies, etc., while “consciousness” gives rich qualia and affect to such things (for more on affect, see Lisa Feldman Barrett’s Theory of Constructed Emotion; good reference here).  Evolutionarily, what we think of as intelligence comes after consciousness; it is built on top of it.
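A loose cartoon of the Thousand Brains voting idea, as one sketch of cortex-style modeling.  This is my sketch, not Hawkins’ actual model: many cortical-column-like models each make a noisy guess about what is being sensed, and a vote settles the percept:

```python
import random
from collections import Counter

OBJECTS = ["cup", "pen", "phone"]

def column_guess(true_object: str, accuracy: float = 0.6) -> str:
    """One 'column': right with probability `accuracy`, else a random guess."""
    if random.random() < accuracy:
        return true_object
    return random.choice(OBJECTS)

def vote(true_object: str, n_columns: int = 1000) -> str:
    """Aggregate many unreliable column models into one reliable percept."""
    ballots = Counter(column_guess(true_object) for _ in range(n_columns))
    return ballots.most_common(1)[0][0]

print(vote("cup"))  # almost always "cup", though each column is unreliable
```

None of this voting machinery, note, supplies the qualia or affect; on the view above, that is exactly the part the cortex does not provide.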



So those are the three major ways in which I’ve revised my thinking on “mind” over the past few years.  These are significant changes, and they show that diving into multiple disciplines, including philosophy, neuroscience, and computer science, is the most fruitful path forward.

Here are two bonus aspects of mind and consciousness, which I’ve not necessarily changed my mind on recently, but which nonetheless remain important points of inquiry.

Bonus Item 1. Processing in the Brain is Not Necessarily Conscious

This seems obvious once you accept that the neocortex is not the seat of consciousness: a significant amount of processing happens in the brain without our conscious awareness.  The disorder known as blindsight is a particularly dramatic example.  Studies like “Size-contrast illusions deceive the eye but not the hand” (Aglioti, DeSouza & Goodale, 1995) demonstrate that conscious and non-conscious processing of the same events and interactions occurs all the time, and is often a feature (not a bug) of mindedness.

Bonus Item 2. It Might be Helpful to Think of Consciousness as a Non-Binary State

The varying degrees of awareness seen in locked-in and coma-like states suggest that, instead of being either ‘on’ or ‘off’, consciousness might better be thought of as a spectrum of physiological arousal.  A spectrum might fit better with models of consciousness that are “strongly emergent”, but I still tend to think of consciousness as mechanistically or algorithmically derived and only weakly emergent.  If I change my mind on mind again in the coming years, my thinking will likely turn on this point.

My mind continues its beautiful exploration of mind in general.