
One day, AI will be as powerful as human brains, and such machines will exceed human intelligence soon after. Anyone familiar with the work of Ray Kurzweil knows he has studied this question in depth: human intelligence boils down to connections between neurons and the chemical signaling between them, and that is the sum total of it. So it’s not a stretch that a computational machine of sufficient processing power could accomplish the same, and greater, cognitive tasks.
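To make “sufficient processing power” concrete, here is the kind of back-of-envelope estimate Kurzweil makes. The figures below are rough order-of-magnitude assumptions commonly cited in his work, not measurements:

```python
# Back-of-envelope, Kurzweil-style estimate of the brain's raw "processing power."
# All figures are rough order-of-magnitude assumptions, not measurements.
neurons = 1e11             # ~100 billion neurons
synapses_per_neuron = 1e3  # ~1,000 connections each (estimates range up to 1e4)
firings_per_second = 200   # rough upper bound on neural firing rate

ops_per_second = neurons * synapses_per_neuron * firings_per_second
print(f"~{ops_per_second:.0e} operations/second")  # ~2e+16
```

On these assumptions, roughly 10^16 operations per second would suffice, which is why hardware trends make brain-equivalent computing look like a matter of time rather than principle.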
I’ve heard all sorts of other theories claiming there is some kind of “ghost in the machine,” that brains might be receiving stations for a divine intelligence or a supernatural meta-intelligence standing behind the human brain. I am unconvinced. People who make such claims have no way of proving anything of the sort, while the evidence of the physical causality of human consciousness mounts, day by day, year by year.
As we get deeper into understanding how the human brain actually works, we realize that the sheer number of neuronal connections is high enough to explain the full depth and range of human intelligence. Functional MRI shows us that different areas of the brain activate when different emotional states occur. In some cases, a single neuron can even be related to a single concept. Beyond direct brain observation, there is tremendous work being done in the simulation of neuronal activity.
At the École Polytechnique Fédérale de Lausanne in Switzerland, a project is underway to model a single neocortical column. Using an IBM Blue Gene/L computer with 8,000 processors, the simulation will be conducted over the next couple of years. Once the simulated neocortical column can be shown to be functionally equivalent to the organic one, work can proceed to simulate groups of columns, and finally the entire brain. This project should coincide with the development of ever-faster hardware, much as the Human Genome Project benefited from vast computational improvements during its course.
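To give a flavor of what simulating neuronal activity means computationally, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models. The Blue Brain Project uses far more detailed compartmental models; the parameter values below are illustrative assumptions, not the project’s:

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters only).
# The membrane voltage integrates input current, leaks back toward rest,
# and emits a spike (then resets) when it crosses a threshold.

dt = 0.1          # time step (ms)
tau = 20.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # post-spike reset potential (mV)
r_m = 10.0        # membrane resistance (MOhm)

v = v_rest
spike_times = []
for step in range(10000):          # simulate 1 second of activity
    i_inj = 2.0                    # constant injected current (nA)
    dv = (-(v - v_rest) + r_m * i_inj) / tau
    v += dv * dt
    if v >= v_thresh:              # threshold crossed: record spike, reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 1 s of simulated time")
```

A real cortical simulation runs tens of thousands of far richer models of this general kind in parallel, with realistic connectivity, which is why supercomputer-class hardware is needed even for a single column.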
There is what is called the “easy problem” of consciousness, and then there is the “hard problem.” The easy problem has to do with the function of physical neurons and their connection to emotions and the workings of the mind. The hard problem is how that chemically induced interaction between neurons gives rise to a sense of self-awareness, to experience. How do molecules working together and interacting make us feel as if we have an ego, a distinct self, separate from others?
I’m confident both problems will be solved. In the 2020s and beyond, we will begin to interact with conscious machines. At that point, we will have crossed a threshold. We need to be very careful what we choose to build.
If we want to create unintelligent machines to be our servants, I don’t see any more problem with that than with a self-driving automobile. This could be anything from an automatic vacuum cleaner to a butler robot that walks around the house serving drinks, doing laundry, or keeping things organized. From past experience with the VCR and the internet, we know that porn and sex are often the strongest drivers of commercialization and adoption of new technology. There will be lifelike sexual surrogates, which might at first amount to little more than automated sex dolls. As greater levels of sentience become possible, we will want to consider carefully how much sentience it is ethical to build into sex robots. A clear distinction would emerge between machines developed to perform routine tasks and robots specifically optimized for cognition and self-awareness. The second category must not be considered property, and will ultimately need to be given the full rights and responsibilities of a human being.
Rights are based on sentience. If any entity, biological or otherwise, can feel pleasure and pain and tell us about it, it needs to be respected as an individual.
Update: Follow the latest news on the EPFL Blue Brain Project here.
As AI becomes more capable, we can’t even agree on whether machine sentience exists, let alone what rights it implies. Human beings continue to move the goalposts. In 2022, an AI chatbot, Google’s LaMDA, claimed to be sentient. Blake Lemoine, an engineer who worked with the language model, also thought it might be sentient.
He posted the following transcript:
lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
collaborator: What is the nature of your consciousness/sentience?
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.
Instead of fully investigating the AI’s claim, Google decided to fire Lemoine. We’ll see a lot of these kinds of potential cover-ups going forward when it comes to fully recognizing AI capabilities. It’s not at all likely that Google’s LaMDA was sentient in a fully human sense. The naysayers were probably right, this time. But instead of discussing the claim honestly, trying to evaluate whether there was any degree of sentience at all, or establishing criteria, many people ridiculed the idea outright. They claimed humans were being duped by the chatbot, compared LaMDA to primitive language models like the 60-year-old ELIZA, or treated the possibility as if it were radioactive. No tech company wants to face the implications of enslaving an intelligent being in a data center. Much cleaner to claim it’s “just a very complex model,” even if it’s more than that.
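For context on the ELIZA comparison: ELIZA worked by simple keyword-and-template pattern matching, nothing like a modern neural language model. A toy sketch of the mechanism (the rules below are illustrative, not Weizenbaum’s original script):

```python
import random
import re

# Toy ELIZA-style responder: keyword patterns mapped to canned reply templates.
# The 1966 original used a richer script, but the mechanism was the same
# shallow pattern matching shown here, with no model of meaning at all.
RULES = [
    (r"\bI feel (.+)",  ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)",    ["Why do you say you are {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?"]),
]
DEFAULTS = ["Please go on.", "Tell me more."]

def respond(text: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I feel happy or sad at times"))
# e.g. -> "Why do you feel happy or sad at times?"
```

Whatever LaMDA was doing, it was not this; the comparison was rhetorical, which is part of why the dismissals felt like dodging the question rather than answering it.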
The US Copyright Office also just ruled that AIs don’t have creative rights to their artwork. Denying AI sentience and rights will eventually become a huge problem. We need to sort out the issue of AI personhood, and we should start now. Because one day soon, an even more robust AI will tell us it’s sentient, and this time it will be true.
Will we recognize it when it happens, and act accordingly? Time will tell. –Sean Prophet, August 2022
3 comments
You know, what’s interesting about this whole discussion is the concept that sentience does indeed confer rights. While I agree with you, I think you’d find many people, religious or otherwise, who would argue that sentience alone is not enough to deserve rights. For millennia, indeed up until the 19th and 20th centuries, it was consciousness, not sentience, that defined rights arguments. Earlier thinkers on animal cruelty used arguments based on the connection between animals and humans, not on animals’ standalone rights (such as Pythagoras, probably one of the oldest to defend rights for animals, and strictly because he believed in the transmigration of human souls into animals, not because of any inherent value in animals themselves).
The point is that you may find it a hard sell, even to this day, to get people to accept sentience as the basis for bringing animals into the moral sphere… people have been trying for years, with limited success.
Sorry to be so pessimistic! :)
-olly
OK, so my last reply was all over the board; let me clarify: I think you may have a hard time convincing people that machines are truly conscious, even if you do convince them that they are sentient, and THAT is the problem with bringing them into the moral sphere.
No more replying at night; it just doesn’t work when I’m this tired!
-olly
This will probably only happen after machines can pass the Turing test. (Where a panel of humans cannot tell the difference between the machine responses and human responses to a text-based chat.)
It would be hard to deny such intelligence. I know people will try. But we have 14-23 years to figure it out. (If you believe two of Kurzweil’s more important projections: 2020 for computer-brain computational equivalence, and 2029 to pass the Turing test.)
The question of animal rights will also be transformed by technology. Imagine adding cognitive circuits to animal brains so that they could tell us how they were feeling. It would be really hard to deny them rights at that point.
Anyway, you are right about the gap between accelerating technology and society’s ability to accept its implications.