We generally have a naive picture of our selves --- the "I" that we each imagine being --- as something like a little person inside each of our heads. We envision a homunculus, a cute wee humanoid creature who looks out through our eyes and listens through our ears, takes in all the sensory information we get, perceives it as meaningful, decides with some kind of free will, and then acts by sending out nerve impulses to our muscles.
This model of the mind is, however, obviously flawed. What's inside that tiny guy's head? A still smaller dude? This can't go on forever! At some point, before we run out of atoms, there has to be a self-contained source of meaning that doesn't rely on wheels within wheels to make sense out of life. We need a meaner --- a source of reason and thought and consciousness and "I".
When challenged that way, we have a couple of options. We can invoke a non-physical explanation: a soul or spirit or some other escape from infinite regress. That works --- but it adds a new element to the equation and gives up the hope that we can understand ourselves without outside intervention from the spiritual realm. We also have to be careful not to fall into the same trap on the non-physical side that we are escaping on the physical one. It's not enough to say that meaning comes from the spirit. How does the spirit achieve "meaning"?
Alternatively, we can stick to the material world and entertain the hypothesis that there is no single central meaner --- rather, meaning emerges from complex feedback loops. These loops are instantiated in us by neurons and other physical components, each of which obeys the laws of nature. Individually, nerves, the connections among them, and their chemicals aren't magic; collectively, they may make a mind. Equivalent loops, under this theory, could be implemented using other mechanisms --- electrical circuits, or interacting nuclear spins, or vibrations in nonlinear media, or whatever. The pattern is what counts, not the method of building it.
Daniel Dennett wrestles with these issues at length in Consciousness Explained and pretty much comes down on the side of physics; so does Marvin Minsky in his book The Society of Mind. Various intelligent critics argue the other way: that meaning is so different from matter that mind can't possibly be an emergent phenomenon. That position seems, however, to be suspiciously parallel to ones that many folks once took (and some still do): that organic chemicals can't be synthesized by inorganic processes, that life can't come from non-life, and so forth.
But the big questions still exist. Is meaning an all-or-nothing proposition? We definitely tend to feel that way --- but are our opinions on the issue merely prejudices, based on our experience with almost-meaningless machines and the contrasts they show with meaningful organic systems? Can little loops exhibit a similarly little quantity of meaning --- a few bits of "sensing" or "knowing"? A thermostat is far from self-aware, but maybe it "knows" something about the temperature, in an exceedingly simpleminded (!) way. A worm may have only a few hundred neurons, and so it can't be expected to do much --- but perhaps it "knows" something. A dog surely knows much more, and people know more still.
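The thermostat analogy can be made concrete with a toy sketch. The function below is purely illustrative --- my own invention, not the logic of any real device --- but it shows how small a "knower" can be: its entire inner state is one bit.

```python
# A minimal thermostat: the simplest feedback loop that arguably
# "knows" one thing about the world --- whether the room is too cold.
# All names and numbers here are illustrative, not from any real device.

def thermostat_step(temperature, setpoint, hysteresis=1.0, heater_on=False):
    """Return the heater state after one sensing step.

    A small hysteresis band keeps the heater from chattering
    on and off right at the setpoint.
    """
    if temperature < setpoint - hysteresis:
        return True    # too cold: turn (or keep) the heater on
    if temperature > setpoint + hysteresis:
        return False   # too warm: turn (or keep) the heater off
    return heater_on   # inside the band: no change of "mind"

print(thermostat_step(15.0, 20.0))   # -> True  (cold room)
print(thermostat_step(25.0, 20.0))   # -> False (warm room)
```

Whether that single boolean deserves to be called "knowledge" is exactly the question at issue; the sketch only shows that nothing mysterious is required to build the loop.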
The most complex computer software yet written probably has fewer than a million significant relationships encoded in it. (It has many more lines of code, and may tap huge archives of raw data, but most of that arguably shouldn't count, in an information-theoretic sense.) Such programs can't be expected to be intelligent --- but may they not begin to display "meaning"? Chess players often anthropomorphize when playing against a computer: "It saw I was threatening its king and so it castled," and the like. But is anthropomorphization the wrong word? Might our perception of sensing and knowing and meaning and choosing by the machine be partially correct? Are we watching the program begin to turn into a meaningful, almost anthropic system?
Saturday, July 03, 1999 at 10:08:01 (EDT) = Datetag19990703