I just watched a really good TED talk from 2017 by Anil Seth about the nature of consciousness. Watch it first, and then come back to this article, because I also need to discuss an important point I think he gets wrong.
First, the many things he gets right:
He describes how perception works, how our brain models reality through a sort of hallucination, and how the systems that model reality also depend on our internal expectations. He then demonstrates that we can easily break our perceptions through several famous visual and tactile illusions.
He also does a remarkable demonstration of how we initially fail to interpret a heavily garbled audio recording of a phrase, until we’ve heard the ungarbled version. After hearing it, the garbled version becomes perfectly intelligible to us–we can’t unhear it. (I find this fascinating, and it made me realize how important it is to grasp the power of priming–which is how disinformation works. Once a person is primed to a particular interpretation of an uncertain pattern, it becomes nearly impossible for them to interpret those events any other way, because their brain has been “primed.”)
Seth establishes that our consciousness is very much tied to our bodies. It’s not clear that a computing device could develop conscious self-awareness, unless that awareness were to be grounded in a body, he reasons.
So far, so good.
But then he makes what I would call a “negative speculative leap.” He claims that because human consciousness evolved in living bodies, life is probably required for consciousness. And then he speculates that it’s therefore unlikely that AIs of any kind will develop self-awareness.
That kind of negative claim is beyond the scope of science.
Here’s how I see this developing: We will build robots with advanced CPUs to handle all their perceptions of the world. These will include reasonable facsimiles of how the human brain processes images and sound. A useful robot has to be able to navigate unpredictable environments (like a self-driving car). Robots will have to be aware of the extent of their limbs to avoid bumping into objects or hurting people (like factory robots). They will have to be able to defend themselves from malevolent humans or other robots (like autonomous military drones). They’ll have to find sustenance in the form of electricity (like a Roomba that knows where its charging station is), or a food source that they can metabolize. All the rudiments for these capacities have already been built.
Robots will have to interact with the world in all the ways humans do. They will be conscious of their bodies, in some form. They will have to have strong programming that tells them they don’t want to experience damage, and makes them recoil if damage is occurring. This will require something like a sense of touch, and the ability to respond to pain. If they do become damaged (injured), they will need to learn (alter their programming) to know that it’s an experience they don’t want to repeat. They will need programming that tells them it’s very likely that they could be injured by humans or other robots, which corresponds to something like fear. They will have to know that their internal energy source is finite, which corresponds to giving them a sense of hunger.
I have no idea how such “feelings” will be programmed into machines, or evolve–that’s above my pay grade. But I’m certain that they will, because acting on a few compact “feelings” is far more efficient than parsing a massive stream of unstructured data and driving linear, pre-programmed responses. Robots will need to develop heuristics in order to have a robust self-defense response. Humans react most decisively when we are emotion-driven. For example, when you’re “creeped out,” you get out of a situation quickly, before you have a chance to analyze exactly what’s making you uncomfortable. The hair on the back of your neck stands up, you get a shot of adrenaline, and you bolt. Robots will need a similar response.
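To make the idea a bit more concrete, here is a minimal, purely hypothetical sketch (in Python) of what “feelings” as compact control signals could look like: raw sensor readings get compressed into a few scalar drives like pain, fear, and hunger, and those drives preempt the robot’s normal task. Every name and threshold below is invented for illustration; this is not from Seth’s talk or any real robotics system.

```python
from dataclasses import dataclass

@dataclass
class Drives:
    """Scalar internal states standing in for 'feelings'."""
    pain: float = 0.0     # rises with detected damage
    fear: float = 0.0     # rises as a perceived threat gets closer
    hunger: float = 0.0   # rises as the battery depletes

def update_drives(d: Drives, damage: float, threat_distance: float, battery: float) -> Drives:
    """Compress raw readings into a few actionable signals.

    The point of the sketch: downstream behavior only has to consult
    three numbers, not the full sensor stream.
    """
    d.pain = min(1.0, 0.8 * d.pain + damage)           # decays unless damage recurs
    d.fear = max(0.0, 1.0 - threat_distance / 5.0)     # closer threat -> more fear
    d.hunger = 1.0 - battery                           # empty battery -> maximum hunger
    return d

def choose_action(d: Drives) -> str:
    """Drive-based priorities preempt the normal task."""
    if d.pain > 0.7:
        return "recoil_and_assess"
    if d.fear > 0.5:
        return "retreat"          # the 'creeped out' reflex: leave first, analyze later
    if d.hunger > 0.8:
        return "seek_charger"
    return "continue_task"

if __name__ == "__main__":
    drives = Drives()
    # One tick: mild damage, a threat 2 meters away, battery at 30%
    drives = update_drives(drives, damage=0.2, threat_distance=2.0, battery=0.3)
    print(drives, "->", choose_action(drives))
```

However such a mechanism actually ends up being built or evolved, the design point stands: a handful of drive states is a cheap, fast interface between perception and action.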
Fear. Pain. Anger. Hunger. Self-preservation.
Emotions are all fundamental evolutionary components of consciousness, and most likely why consciousness developed in the first place. Because it was adaptive. Robots will also have to be adaptive. And they will be. Which means that either organically or through careful programming, they will eventually develop self-awareness. Maybe not the same kind we humans have, but enough to be able to communicate to us that there’s some kind of meaningful experience to their existence.
Once that occurs, we’ll have to take that experience into account. And this is a BFD. We’ll have to grant intelligent robots a form of rights and autonomy. This is a theme that’s been explored incessantly throughout science fiction. It will become real, if machines continue to develop on trend. We’ll have to consider the issues of machine slavery and robot racism.
Believe it.
Because when we think about the future of intelligent machines and robots, it’s clear they are solving the same problems that have already been solved by evolution and biology. So in order to coexist most effectively with humans, robots will possess all the same cognitive tools that we have. And that’s why it’s so important for humans to understand that we are also biological machines. It’s not much of a conceptual leap to make, once you get past religion, your own fear of robots as “alien,” and your fear of replacement.
So long as people–even cognitive scientists–are being held back by the perception that life is categorically separated from non-living matter, they will continue to misapprehend the potential future of consciousness. And that’s not something we can afford.
Seth does say at the end of the talk that there are “many different types of consciousness” and that human consciousness is really a small subset of what’s possible. I wish he would include future machine consciousness among that set of possibilities.