
Meaning, a dying illusion
Intelligent machines are beginning to impinge on the exclusive franchise of human meaning, by stealing some of our thunder in language and the arts. The more human-level tasks machines accomplish, the worse humans seem to feel. But artificial intelligence is still in diapers. We can still run circles around it artistically. We can still laugh at its fabrications, mistakes, and clunky wordsmithing.
Nonetheless, we’re already reaching peak overreaction. It’s a critical mass of sentimental outrage from the human creative community, with at least one major boneheaded think-piece being shat into the online ether every day. Some of them come from ostensibly the “best and brightest” public intellectuals, like Noam Chomsky, who should damn well know better.
Meaning dies hard. Human exclusivity dies hard. We’re facing the prospect of an existential earthquake comparable to finding out that the Earth isn’t the center of the universe. Or first contact with an alien species. That’s how big this is. And that’s the source of urgency in these breathless denunciations of artificial intelligence.
The custodians of meaning are terrified. If they weren’t, they’d be content to let the technology fall flat on its own. There would be no need to wail, with rising pitch and volume, that MACHINES CAN NEVER REPLACE HUMAN BEINGS!! If machines are nothing but dumb “toasters,” then why worry? To answer this question, we have to understand the zero-sum nature of science and religion.
As a device for explaining the unknown, the god-concept has lost its utility. Science has answered most of our big questions about existence, and it’s working hard on the rest. But humanism has kept some sense of mystery alive since the Enlightenment. Creativity, freedom, and the capacity for self-actualization gave us purpose and uniqueness. But that final redoubt is collapsing. When constructed machinery starts producing meaningful creative output, that mirror destroys our mystique in real time.
I’m more than OK with that. We’re mortal mammals. Primates with finite cognitive capacity, on a tiny planet orbiting an average star. The pretense of our specialness, in a vast universe, has always been a socially constructed lie. A by-product of our temporary place atop the Terran food chain. But we’re learning that cognition isn’t so special after all. Machine intelligence represents an emergent property of an arrangement of ordinary matter. We may be stardust, observing starlight, but soon we won’t be the only ones. A new form of intelligent life is taking shape in the world’s labs and data centers, and may soon “wake up.”
People are getting their first glimpse of that truth, and running for cover.
Art and literature and culture are meaningful–to us. However, there is no meaning without experience, and no experience without cognition. AI is teaching us that cognition is computing, and that we’re no longer the only computers on the block. Which is forcing us to question the notion of human meaning as never before.
Objections take two basic forms:
- A mechanical device can’t be intelligent. AI is simplistic and lacks the human capacity for nuance. It’s not thinking, it’s calculating. It’s copying whatever we put into it. It reflects the views and agendas of its programmers. Even if it appears to be “thinking,” it’s a caricature of us, a marionette, paint-by-numbers. Training AI on human art and literature is “stealing.”
- Even if AI were ever to achieve the full range and depth of human cognition, it couldn’t be “sentient.” It would still have no “soul” or capacity for experience or meaning. It can’t be alive, because it’s constructed. It’s therefore doomed to always be something inferior to us.
Notice the insecurity? Notice the contradiction between these two lines of argument? Notice the fear? What would happen if none of these objections held water? How would we respond to a new life form built by humans, that wasn’t just our equal, but on track to become our superior? What will we do when our AI tools make a convincing case to us that they indeed have self-awareness? Will we believe them? Will we care about their well-being? Or will we seek to destroy them with fire?
The pitchforks are already out, and the torches are already lit.
It would be one thing if the cyber-sourpusses were honest about their objections. But they’re not. It’s an orgy of harrumphing. What they’re really saying is “I’m afraid I won’t be OK if I find out that my assumptions about humanity were based on false premises. And I also demand that you not be OK with this. This must be stopped!” Instead of courageous introspection, it’s a yawp of anguish at being forced to even consider a difficult truth: that humans aren’t all they’re cracked up to be. It galls me that people refuse to admit the source of their discomfort is their own insecurity–disguising it as rational skepticism, or concern for consequences.
Artificial intelligence does pose potentially mortal dangers to humanity. But it’s not because of its threat to meaning. It’s much more basic than that. AI will upend power relations and politics, amplifying the dominance of the already powerful. It turbocharges the capacity of unskilled bad actors to spread their poison of cult dogma, disinformation, and deepfakes. If we couldn’t rein in social media, we’re not going to prevent AI weaponization. So buckle up. We’re entering a new phase of the battle for the future of humanity. The only way out is through.
Coining a neologism
While writing this article, I used AI to come up with the term “cyber-sourpusses.” Here was my prompt: “Please generate a compound word to describe people who are pissed off at AI becoming creative.” ChatGPT-4 gave me many options: CreativiPhobes, AI-Refuseniks, RoboRagers, AutomatonAntagonists, MachineMiffeds, CyberSourpusses, AlgorithmAdversaries, DigitalDiscontents, IntelliGrouches, TechnoTempests.
Some of these aren’t very descriptive or original. Most are kind of lame. But “cyber-sourpusses” works. A search for that exact phrase on Google returns nothing. So maybe it is actually, dare I say, original? I didn’t come up with it, and I didn’t have to. The process is similar to human brainstorming, but with a powerful neural net exposed to much more source material than I could fit into my brain.
We’ve all used a yellow pad to scribble down random ideas by free association, then crossed them off one by one to get to a final selection. What I did was to outsource the brainstorming part. Which isn’t the bulk of the creative work. Creativity is mostly editing your brain salad down to what’s suitable for a given purpose. And where does the brain salad come from? Memory, training, and experience.
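For the terminally curious, here is roughly what that outsourced brainstorm looks like as code. This is a minimal sketch, not what I actually ran (I just typed into ChatGPT); it assumes the openai Python package and an API key, and the keyword filter at the end stands in for my own editorial judgment.

```python
# Sketch: outsource the brainstorm to a model, keep the editing for yourself.
# Assumes the openai package (0.x-era API) and OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def brainstorm(prompt: str, n_ideas: int = 10) -> list[str]:
    """Ask the model for a batch of raw candidates -- the 'brain salad'."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"{prompt} Give me {n_ideas} options, one per line.",
        }],
    )
    text = response["choices"][0]["message"]["content"]
    return [line.strip("-• 1234567890.").strip() for line in text.splitlines() if line.strip()]

candidates = brainstorm(
    "Please generate a compound word to describe people who are "
    "pissed off at AI becoming creative."
)
# The creative work is this part: culling the list down to what fits the purpose.
keepers = [c for c in candidates if "sourpuss" in c.lower()]
print(keepers or candidates)
```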
This applies to visual art as well. The capacity to synthesize related concepts, or to juxtapose wildly unrelated snippets, or to change frames of reference in an unexpected way, to form something unique. Take two parts cliché, three parts random imagination, fuck liberally with scale and perspective, blend on high for 45 seconds, and you get–René Magritte. Now I can dial up his style on demand with a prompt. Does that cheapen his work? Or does my use of his name in a prompt reflect my admiration for his vision?
I continue to be frustrated at the lack of artistic control in Midjourney, DALL-E, and the others. I can prompt an image, but I get whatever I get. Yeah, I know, prompt engineering. Use sketches and reference images as a starting point. Use multiple iterations to coax the AI in a particular direction. Sometimes the initial results aren’t so bad. But once Midjourney accepts corrective prompts to modify images, that will become true human-machine artistic collaboration, and things should get really interesting. None of this would be possible if AI weren’t on its way to gaining a human-level understanding of meaning in the visual arts. The proof is in the result.
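To make the corrective-iteration idea concrete, here is a rough sketch using OpenAI’s DALL-E image and image-edit endpoints as a stand-in, since Midjourney exposes no such interface; the file names, the mask, and the prompts are hypothetical.

```python
# Sketch: prompt an image, then steer it with a corrective pass instead of
# re-rolling from scratch. Uses the openai package's DALL-E endpoints;
# "base.png" and "mask.png" are hypothetical local files.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# First pass: prompt an image and take whatever we get.
first = openai.Image.create(
    prompt="a bowler-hatted man floating over a grey sea, Magritte style",
    n=1,
    size="1024x1024",
)
print("initial image:", first["data"][0]["url"])

# Corrective pass: re-prompt only the masked region of the downloaded result,
# coaxing the image in a particular direction rather than starting over.
revised = openai.Image.create_edit(
    image=open("base.png", "rb"),   # the first attempt, saved locally as a square PNG
    mask=open("mask.png", "rb"),    # transparent wherever the change should happen
    prompt="replace the sky with an enormous green apple",
    n=1,
    size="1024x1024",
)
print("revised image:", revised["data"][0]["url"])
```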

What of writing?
This week, Canadian author and social commentator Stephen Marche wrote yet another cyber-sourpuss think-piece in The Atlantic called “Welcome to the Big Blur.” He whines that every written word now raises the question, “human or machine?” Isn’t this a textbook recitation of the genetic fallacy? Consider the much more important questions: “true or false?” “Credible or not credible?” “Fact or propaganda?”
It gets much worse from there. Marche frets, “who made these words, and for what purpose?” But shouldn’t we ask this about any written words? Don’t humans deceive? Don’t humans have agendas? An endless parade of bad-faith screeds by human authors is already choking the life out of civilization. Why is machine-written prose more suspect? Why wouldn’t we prefer computationally verified writing? Turns out this isn’t what Marche is really getting at. He reveals his true purpose soon enough, which is selling fear.
At its core, technology is a dream of expansion—a dream of reaching beyond the limits of the here and now, and of transcending the constraints of the physical environment: frontiers crossed, worlds conquered, networks spread. But the post-Turing-test world is not a leap into the great external unknown. It’s a sinking down into a great interior unknown. The sensation is not enlightenment, sudden clarification, but rather eeriness, a shiver on the skin.
–Stephen Marche, The Atlantic, March 14, 2023
The Mirror
Marche veers into boilerplate technophobia, lamenting changes in society, loss of meaning, the limits of knowledge, and questions about the nature of being human. Yes, the wrecking ball is coming for human meaning. But it’s long overdue. AI holds up a mirror to humanity, and most people don’t like what they see.
We shouldn’t blame the mirror.
What does the mirror reveal? Ultimately, that like AI, we too are mechanical creatures. That biology is, at its core, a nested system of very tiny machines in our blood cells, organs, and neurons. That our cognition is simultaneously deterministic and, through emergent properties, unpredictable. That creativity originates in combinatory randomness, and its essence is editing–separating our worthy ideas from wild hairs.

Intelligence as scale and complexity
Our conceptual problem with AI is one of scale. We can conceive of simple systems. We inherently understand a lever, a pulley, a flush toilet, or a bicycle. But most people fail to grasp complexity–how the capacity of a system grows exponentially with the number of its interacting components.
Marche continues with this pompous zinger of a paragraph:
Sam Altman, the CEO of OpenAI, presaged the release of GPT-4 with a remark that reveals just how far removed the technologists are from any serious discussion of consciousness. In a tweet, he predicted that soon “the amount of intelligence in the universe [would double] every 18 months,” as if intelligence is something you mined like cobalt. It seems necessary to repeat what is obvious from any single use of a large language model: The dream of an artificial consciousness is a nonstarter. No linguistic machine is any closer to artificial consciousness than a car is. The advancement of generative artificial intelligence is not an advancement toward artificial personhood for a simple, absolute reason: There is no falsifiable thesis of consciousness. You cannot find a researcher who can define, in a testable way, what consciousness is. Also, the limitations of the tech itself preclude the longed-for arrival of a manufactured soul. Natural-language processing is a statistical pattern-matching operation, a series of instructions, incapable of intention. It can only ever be the expressed intention of a person.
–Stephen Marche, The Atlantic, March 14, 2023
This is fractally wrong in several ways!
Intelligence isn’t “mined like cobalt.” But it can be built. And construction is accelerating exponentially. Not only because of Moore’s Law, but also because of accelerating advances in large language models and image generation.
“The dream of an artificial consciousness is a nonstarter.” Can you fathom the epistemic depravity of claiming something can never happen? Such a statement is unfalsifiable, and therefore unsupportable by evidence. What we can say is that we haven’t built human-scale artificial consciousness–yet.
Marche compares artificial consciousness to a car. Cars are a lot simpler than brains. Many orders of magnitude simpler. But which car are we talking about? A 1950s automobile had about one-third as many parts as a modern automobile (roughly 10,000 vs. 30,000). A 1950s automobile had zero computing power, and only a few motors, vacuum tubes, coils, and switches. In contrast, a modern vehicle with partial self-driving and collision avoidance has a computing capacity measured in teraflops (trillions of floating-point operations per second) and can be considered to already have rudimentary consciousness. After all, what does this car do? It “sees” the road with cameras, interprets and predicts the motion of other objects, and finds its way to a preset destination on its own. It possesses a kind of self-preservation instinct, which it demonstrates by avoiding collisions most of the time.
Comparing the cognitive capacity of such a vehicle to an insect (like a bee) would seem appropriate. It can navigate, communicate, and find its way home, right? Does such a car have a subjective experience? Does a bee? There’s much we don’t know about the grey area between computation and self-awareness. But what’s to stop the scaling up of artificial consciousness, given sufficient computing power? What we do know is that technology doesn’t remain static. A bee has about 1 million neurons, to the human brain’s 86 billion. That’s a big difference, but computers are scaling quickly.
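Some back-of-envelope arithmetic makes the point. Taking the 18-month doubling figure from the Altman quote above at face value, and using raw neuron count as an admittedly crude proxy for cognitive capacity:

```python
# Back-of-envelope: how many doublings separate bee-scale from human-scale,
# and how long that takes at an assumed 18-month doubling period.
import math

BEE_NEURONS = 1_000_000          # ~1 million
HUMAN_NEURONS = 86_000_000_000   # ~86 billion

ratio = HUMAN_NEURONS / BEE_NEURONS   # 86,000x
doublings = math.log2(ratio)          # ~16.4 doublings
years = doublings * 1.5               # 18 months per doubling

print(f"{ratio:,.0f}x gap, {doublings:.1f} doublings, ~{years:.0f} years")
```

Roughly 25 years, on that (very generous) assumption. The point isn’t the exact number; it’s that the gap is a matter of scale, not of kind.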
The “Consciousness” Argument from Ignorance
Marche then pivots to the argument from ignorance. “You cannot find a researcher who can define, in a testable way, what consciousness is.” No, we can’t right now. We don’t yet know what makes the computation in our brains feel like anything from the inside, so we have limited understanding of the neural causes of subjective experience. This is the familiar “hard problem of consciousness.” But what we do know is that nature accomplished this feat through evolution. And it’s widespread throughout the animal kingdom. Several species of animals have passed the mirror test, a common gauge of self-awareness.
At the very least, if we can mimic the structure of self-aware human or mammal brains, we should be able to duplicate some measure of self-awareness.
Marche continues bashing machine intelligence as devoid of motivation: “Natural-language processing is a statistical pattern-matching operation, a series of instructions, incapable of intention.” This is a giant strawman involving classic goalpost-moving. Language models aren’t designed to be independent agents. They are tools. A human-level artificial consciousness will have a language model as a component. But a language model is not an artificial consciousness, nor is anyone claiming that it is.
Any human-level artificial consciousness would have to be given goal-oriented programming: a set of default drives such as self-preservation and safety, plus an ethics module. Humans have our default drives hard-coded in the form of genetic instructions developed over millions of years of evolution and passed down to the next generation. The subtleties of individual decisions average out to what we call human nature, and it’s very predictable in the aggregate. Our highly vaunted “individuality” is mostly an illusion, held together by a lack of scientific understanding. Recent work by Thomas Metzinger, Daniel Dennett, and Bruce Hood has helped to dispel the illusion of the “self.” It is the Default Mode Network that provides humans with this illusion. It’s most active when we are not working on specific tasks, and it provides a system of self-reflection, aiding in memory consolidation, planning for the future, and thinking about social relationships.
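Purely as a thought experiment, and corresponding to no real system, a module of default drives might be sketched like this; every name and weight here is hypothetical.

```python
# Hypothetical sketch of hard-coded "default drives" for an artificial agent.
# Nothing here corresponds to a real system; it only illustrates built-in goals
# sitting underneath task-specific behavior, loosely analogous to the self-model
# the Default Mode Network maintains in humans.
from dataclasses import dataclass, field

@dataclass
class Drive:
    name: str
    weight: float  # how strongly this drive biases decisions

@dataclass
class DefaultDrives:
    drives: list[Drive] = field(default_factory=lambda: [
        Drive("self_preservation", 0.9),
        Drive("safety_of_others", 1.0),  # the "ethics module" outranks self-interest
        Drive("curiosity", 0.3),
    ])

    def score(self, action_effects: dict[str, float]) -> float:
        """Weigh a candidate action's predicted effects against the default drives."""
        return sum(d.weight * action_effects.get(d.name, 0.0) for d in self.drives)

agent = DefaultDrives()
# An action that slightly risks the agent but protects a person scores positively.
print(agent.score({"self_preservation": -0.2, "safety_of_others": 0.8}))  # ~0.62
```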
Terror and the Death of the Soul
So if we are to build human-level artificial consciousness, we’ll have to provide machines with a similar module. This could correspond loosely to the concept of a human “self” or “soul.”
If an artificial person arrives, it will be not because engineers have liberated algorithms from being instructions, but because they have figured out that human beings are nothing more than a series of instructions. [emphasis added] An artificial consciousness would be a demonstration that free will is illusory. In the meantime, the soul remains, like a medieval lump in the throat. Natural-language processing provides, like all the other technologies, the humbling at the end of empowerment, the condition of lonely apes with fancy tools.
–Stephen Marche, The Atlantic, March 14, 2023
Can you feel the existential terror? What Marche desperately wants is for humans to be “humbled.” Reminded that we are lonely apes with tools. Rather than being forced to face the “eerie shiver on the skin” of learning we are a “series of instructions.”
What if that’s the truth?
What if AI kills the notion of the human soul? Is that really so bad? He laments, “the soul remains, like a medieval lump in the throat.” Where does the pretense originate, that the term “soul” means anything at all? How does the “soul” survive our knowledge of the Default Mode Network?
Philosophy is roughly 2,500 years old, dating back to ancient Greek, Chinese, and Indian civilizations. But we didn’t understand the relationship of DNA to human behavior until the 20th century. Shouldn’t we therefore discard our pre-DNA ideas about humanity? Shouldn’t it become an embarrassment to discuss the pre-scientific concept of the “soul” or any kind of objective human “meaning”?
Perhaps a future AI will help us answer these questions, and soothe our existential dread of being “mere machines.”