Sentience is indescribable

Arguably, one of the most nagging scientific questions is the nature of sentience. Can we build sentient computers? Is my cat sentient? What does that mean? Will a breakthrough in cognitive science tell us what consciousness, sentience, and free will are?

I conjecture that these topics will forever escape us, at least in part.

Near where I live, there is a forest. I can recognize this forest. If you were to drop me in it while I slept, I would recognize it on waking. I would recognize the way the trees have grown, the sounds, the smell, the species… But I could never “explain” this forest. That is, I cannot compress my experience of this forest down to a coherent document that I could share. My forest is indescribable: its entropy is too high.
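The compression framing can be made literal with a toy sketch (a loose analogy only: here zlib stands in for a brain's ability to summarize). Repetitive, structured data compresses down to almost nothing, while high-entropy data barely compresses at all:

```python
import os
import zlib

structured = b"the same tree, again and again. " * 64  # low entropy
noisy = os.urandom(len(structured))                    # high entropy

for label, data in (("structured", structured), ("noisy", noisy)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: {len(data)} bytes -> {ratio:.0%} of original size")
```

The structured bytes shrink to a few percent of their size; the random bytes do not shrink at all. A describable thing is, in this sense, a compressible thing.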

My brain is limited. I can only describe simple structures with a degree of complexity far lower than that of my brain. My brain cannot describe itself. Software appears sentient (though maybe not sapient) when its entropy becomes comparable to that of our own brains. To me, Gmail’s spam filter appears sentient. I know the science behind Gmail’s spam filter: it uses some kind of Bayes classifier. But that’s like saying that the brain is made of neurons.
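To make the contrast concrete, here is a minimal sketch of the kind of naive Bayes classification a spam filter performs (the training phrases and the fifty-fifty prior are made up for illustration; Gmail’s actual filter is vastly more elaborate):

```python
from collections import Counter
import math

# Toy training data (made up for illustration).
spam = ["win money now", "free money", "win prize now"]
ham = ["meeting at noon", "lunch at noon", "project meeting"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(message, counts, prior):
    # log P(class) + sum of log P(word | class), Laplace-smoothed
    # so that unseen words do not zero out the probability.
    total = sum(counts.values())
    score = math.log(prior)
    for word in message.split():
        score += math.log((counts[word] + 1) / (total + len(vocab)))
    return score

def classify(message):
    s = log_score(message, spam_counts, 0.5)
    h = log_score(message, ham_counts, 0.5)
    return "spam" if s > h else "ham"

print(classify("free money now"))           # spam
print(classify("project meeting at noon"))  # ham
```

The point is precisely that these few lines say as little about the trained filter’s behavior at scale as “neurons” says about a brain.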

Of course, software can run other software (e.g., your browser runs JavaScript). Similarly, my brain can predict what my wife will say under some circumstances (mostly when I screw up). But if I were to lay down on paper how I manage to predict my wife’s actions, the result would be illegible: I cannot communicate my understanding to another brain.

So, are my computer and my cat sentient? By this definition: absolutely. They are not sapient, that is, they cannot pass for human beings, but I cannot describe how they work. I can merely describe how some parts of them work and make some limited predictions. For example, I can tell how a CPU works, but my description quickly becomes fuzzy: I am constantly puzzled by how superscalar processors deal with computations. They reorder instructions in ways that are not intuitive to me. At least as far as my brain is concerned, it is magic! Similarly, a forest is sentient. Earth is sentient.

Of course, this definition of sentience is evolving. A simple engine may be indescribable (magic!) at some point, and then later perfectly describable after some training in mechanics. But as long as I can, eventually, understand how the engine works and communicate my understanding, then I have shown that its complexity is sufficiently below that of our brains.

If you accept my point of view, then it has some consequences for morality. Some say that sentient beings deserve respect. For example, you should not own sentient beings. Yet if sentience is nothing special, but merely a computing system with entropy approaching our own, then why should they deserve special consideration? Perhaps we just want to cling to the belief that sentience is somehow special.

16 thoughts on “Sentience is indescribable”

  1. Interesting, Daniel. As it so happens, I’m writing a paper called “A Theory of Ethics for Sentient Machines”. Part of my answer to your question “why should they deserve special consideration” has to do with the actor’s intentions. In other words, it isn’t so much that one’s ethical stance towards sentient machines is determined by *their* sentience as it is by *your intentions* towards them.

  2. @Paul

    So a rock is sentient, but a crystal is not? The solution space to the quintic polynomials is sentient, but the quartic is not?

    I obviously don’t claim to have defined what sentience is. (I am throwing a conjecture out there.) This being said… Neither a rock, nor a crystal, nor a solution space is a system, so I would reject them.

    the human brain’s capacity for understanding seems like an arbitrary yardstick.

    It is not arbitrary because I am the observer.

    Sentience is not, I conjecture, some absolute property. Some things appear sentient to us because we cannot wrap our head around them.

  3. So a rock is sentient, but a crystal is not? The solution space to the quintic polynomials is sentient, but the quartic is not? What would that mean, that a mathematical statement has sentience?

    At the very least, it doesn’t seem to me that complexity is sufficient for sentience. I’m not convinced it’s necessary either: the human brain’s capacity for understanding seems like an arbitrary yardstick.

  4. I obviously don’t claim to have defined what sentience is. (I am throwing a conjecture out there.)

    Understood. As you said, these topics may forever escape us. I offer counter-examples as an attempt to better see what the boundaries of this conjecture may be.

    Neither a rock, nor a crystal, nor a solution space is a system, so I would reject them.

    Which raises an interesting sub-question: what is a system? A rock can be broken into dissimilar, interconnected constituent components, and we can define inputs and outputs on it in terms of forces and reactions. The spam filter starts when an external force initiates the program, and it returns a result you could predict. We could define the input to the rock as a sharp blow with a hammer, and the output as the particular cracking pattern. With a perfect crystal we could predict the crack to certain quantum limits; with the rock we couldn’t. Which is a bit more realistic, I suppose, than posing the sentience of a rock: does the process of a rock breaking cause any “sensations” for the rock, or perhaps more properly, for the universe, in a way we’d recognize as sentient?

    Sentience is not, I conjecture, some absolute property. Some things appear sentient to us because we cannot wrap our head around them.

    Which is an interesting definition to consider. Instead of “does that object have subjective experiences?”, it becomes “does my subjective experience suggest that object also has subjectivity?” I agree that your question is far more practical and definable. But it’s also a less interesting question. In a sense, you’re searching for candidates for a subjective existence, but not addressing the more usual definition of sentience: “OK, the spam filter could have a spark of consciousness. But does it? Or am I just projecting my own subjective experience because I don’t understand the unconscious rules being followed?”

  5. @Daniel

    I agree that my point of view is less interesting in the sense that it constitutes a demystification: there is no magical spark of consciousness.

    And yet, isn’t there? I just sipped my coffee and had an experience of “taste”. I feel “pressure” as I type this post. I’m not convinced time exists, I’m not convinced my sense of self is divided from other senses of self, I’m not convinced of free will. But I am convinced “taste” as something emergent above and beyond the shuttling of chemicals and electricity around a biological computer exists. And that may or may not also occur in the chemical and electrical patterns of a mosquito.

    If a sufficiently advanced algorithm analyzed a drop of coffee, would it too have a sensation of “taste”? Or would it blindly flit 1’s and 0’s around, with no subjective, sentient experience?

    Returning to one of your original conjectures, this may just not be amenable to any sort of analysis or definition. But I personally suspect there’s something incorporating internal memory, self-modification and complexity going on with sentience.

  6. If I spend a week with a program and have the same sort of meaningful talks and emotions you’d have with a friend, then who am I to say that it’s not sentient?

    In my view, sentience, which I guess you call sapience, is just a measure of how close something is to a human. If it is exactly like a human, then we call it sapient. Complexity, limits, and so on don’t matter.

    I’m not sure what you think is forever going to escape us. If I make a sentient computer, then I will have understood what it takes to be sentient, and other people will easily figure out that I have built a sentient computer.

  7. Ha, I was half expecting the “illegible” link would be a photo of your attempt to predict your wife’s actions.

    Good post, I like this idea of relative sentience. It may not be as useful as an absolute, but it can certainly help us, the observers, get a better understanding of it.

  8. The Douglas Hofstadter book referenced in the first comment is “I Am a Strange Loop”, and it has a very deep and interesting perspective on this issue. Worth reading.

    Paul.

  9. Fascinating conjecture here. There’s one major counter-argument: the illegibility and high entropy may be illusory. It may be low algorithmic/Kolmogorov information masquerading as high Shannon information; i.e., a forest, a cat, or your wife (and you) might all be low complexity in the sense that pi is low complexity. At least Wolfram and the other digital-physics people (like Seth Lloyd) seem to believe so. Maybe we’re all Automaton #29 with different initial conditions or something. An even more intriguing thought is recursive self-description. Somewhere in the expansion of pi, is there a digit string that is also a description of an algorithm to generate pi?

    If so, then your brain could possibly be described by a string that is far smaller than the extensive form of the brain itself, and the brain could contain its own compact description and possibly truly understand itself.

    So I’d rephrase your conjecture to include the counter-argument in the generalized either/or form: what’s the true Kolmogorov complexity of the universe (from quarks to quasars and everything in between, including forests, cats, people…)? And is it increasing or decreasing?

    The Shannon-to-Kolmogorov ratio is, in a sense, a measure of the sentience level of a universe. If it is 1, the universe is maximally intelligent and entropic. This is one reason some people appear to like the idea that the 2nd law of thermodynamics can be interpreted as the universe evolving into an omniscient entity, a.k.a. God. Asimov has a story based on this premise.

    Alternatively (and this is the form in which I am considering the question) is the information capacity of the universe fully utilized? Underutilized information capacity shows up in our universe as symmetries. Some are obvious symmetries, others are deep symmetries. Find the symmetries of a forest and you’ll find out if it has as much information as you think it does.
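The Shannon-versus-Kolmogorov gap above can be illustrated with a toy sketch (a sketch only: zlib measures Shannon-style redundancy, and true Kolmogorov complexity is uncomputable). A thousand digits of pi look nearly incompressible to a generic compressor, yet the short program that prints them is itself a compact description:

```python
import zlib
from decimal import Decimal, getcontext

getcontext().prec = 1010  # carry ~1000 significant digits

def atan_inv(x):
    # arctan(1/x) by its Taylor series, to the current precision
    total, term, n, sign = Decimal(0), Decimal(1) / x, 1, 1
    eps = Decimal(10) ** -1005
    while term > eps:
        total += sign * term / n
        term /= x * x
        n += 2
        sign = -sign
    return total

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
pi = 16 * atan_inv(Decimal(5)) - 4 * atan_inv(Decimal(239))
digits = str(pi)[2:1002].encode()  # the first 1000 digits after "3."

# zlib can only squeeze out the digits-are-plain-ASCII redundancy;
# the real compact "description" of the digits is this program.
print(len(digits), len(zlib.compress(digits, 9)))
```

The compressed digits stay within a constant factor of their raw length, while the generating program is a few hundred characters and would print a million digits just as readily: high apparent Shannon information, low Kolmogorov information.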

    I am working on a very related topic… the illegibility/symmetry/information potential of “moves” rather than objects (i.e. I am asking your questions, but not about “noun” entities like cats and forests, but “verb” entities like a punch or a journey or a business decision).

    Anyway, apologies for riffing very metaphysically here.

  10. I guess I agree to a certain degree. We do tend to define as conscious those systems that escape our efforts to rationalize them. This is fairly obvious from our history, and even from the history of our religions.

    But I agree with some commenters here who note that not all systems that currently escape our ability to describe them are marked as conscious.

    Now, I suspect this is partly because we define consciousness in a very anthropocentric way: we really look for systems with a communication ability that we can understand. For example, I’m pretty persuaded that if we looked at the behavior of Earth’s ecosystem from the right perspective, we would start noticing intelligent reactions that we currently miss only because we are not looking at the right perspective or granularity.

    But still it’s undeniable that there are complex systems that we don’t grasp and that still appear very mechanical to us.

    Also, even if we were to accept the notion that the complexity of a system relative to another is what defines consciousness, and dismiss all the exceptions as conscious systems that we fail to recognize as such, I feel this is still a non-answer: it is the kind of answer that creates more questions, questions that are effectively the “meat” of the problem, and in the end it gives us very little information.

    Some such questions would be:
    – What is the property that makes a given system perceive another system as conscious? In other words, what is the complexity differential for consciousness?

    – More importantly, we define ourselves as conscious; we are self-aware. Is this just because we cannot explain ourselves within our own symbolic reasoning system? And if so, is it out of ignorance, or is there an inherent property that makes systems like us able to reason about themselves yet incapable of comprehending their own inner workings? Can consciousness be cracked? Could we, in the future, explain ourselves in a way that makes us perceive ourselves as mechanical beings?

    And those, I feel, are really the questions about consciousness. If we had answers to them, we could take a system and categorize it as self-aware or not, or understand how a system capable of reasoning about another system will perceive that second system as conscious.

    Now, the title of the post seems to hint that we cannot answer these questions, because if we could, we could then describe sentience, which we can’t. Indeed, I agree: as I wrote, if we could, we would then categorize ourselves as mechanical. The problem is to prove that we can’t (and thus that we will always perceive ourselves as conscious)!
