
A few months ago I was offered the chance to edit a brilliant, original book in which the author made a powerful case that humans should not trust artificial intelligence. To pique my interest, the author had sent a sample chapter that expertly explained the dangers of surrendering control to AI bots, which might give the illusion of understanding what they’re talking about but in reality know nothing.
As an example of how seductive this illusion can be, the author cited the famous case of Clever Hans, a horse that could seemingly perform simple calculations, delivering the correct answer by tapping his hoof. Clever Hans drew crowds of amazed spectators, but unbeknownst to everyone – including his owner – the horse could sense subtle postural changes in the person asking the questions, which signaled when he should start tapping his hoof and when he should stop.
In their sample chapter, the author wrote: “When asked, ‘What is two thirds plus one sixth?’ Hans would tap his hoof eight times, correctly signaling four-thirds.”
I already had my suspicions that the author of this book was not a living human. In my work as an editor, I have learned to detect the telltale signs of manuscripts written by zombies:
- In terms of sentence structure, grammar, spelling and so on, their text is flawless from start to finish (in other words, unlike any human I have ever met);
- They appear unusually well read and very smart;
- They provide no personal anecdotes;
- They have a tendency to repeat themselves;
- Sometimes they are startlingly, inexplicably wrong.
This highly polished book about AI ticked all these boxes, including the last. I’m rubbish at maths, but even I had noticed that two thirds plus one sixth couldn’t possibly equal four thirds (i.e. one and a third, or eight sixths). After a few seconds of scribbling, I concluded that the correct answer is in fact five sixths.
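For anyone who wants to check my scribbling, the working is short (this is my own arithmetic, not a quote from the manuscript):

$$\frac{2}{3} + \frac{1}{6} = \frac{4}{6} + \frac{1}{6} = \frac{5}{6}$$

Eight taps, by contrast, would signal eight sixths – which really does equal four thirds, but is not the answer to the question Hans was supposedly asked.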
From their biog, I could see that the human who claimed authorship of this book was an expert in their field and perfectly well intentioned, but all the evidence pointed to their using an AI such as ChatGPT to generate much of their text. If the real author was indeed a zombie, the irony was obvious: in describing the cautionary tale of a horse that could hoodwink humans with the illusion of intelligence, the AI had betrayed its own fallibility.
In their brief about what kind of editing they wanted me to do, the human asked for my help to make the text more positive about AI. Somehow their book had become overly critical – much too negative for their liking. They were “hugely excited” about the promise of this new technology, but for some reason their own book didn’t reflect this enthusiasm, and they needed my help to put a more positive spin on it.
So a book about the potential benefits and risks of AI – almost certainly written by an AI – had become too sceptical about AI for its human handler’s liking.
I politely declined the work, as I do all requests to edit manuscripts that have the distinct whiff of zombie about them.
The coming apocalypse
I’m not joking when I describe AIs as zombies. Like the lurching corpses in zombie flicks – which hunt the living in order to feast on their brains – AI chatbots are parasitic and appear to have many characteristics of living beings, but unlike us they are unconscious entities operating on autopilot.
This is why nonfiction books written by zombies are so very soulless. Like a vacant stare, their soullessness is difficult to define, but you know it when you see it. Their lack of warmth, personal anecdotes and perspectives, or any distinctive writing style, is probably to blame. And while they might give the impression of being flawlessly written by a breathtakingly knowledgeable entity, they can’t be trusted. In fact, they’re far too clever for our own good.
It’s true that predictive and pattern recognition AIs far exceed our own ability to perform data-intensive tasks such as predicting the 3D structure of proteins, identifying promising drugs and detecting cancer. The danger is that the zombie brilliance of AI can make mere humans complacent – not just as authors and readers, but also as musicians, artists, scientists, or indeed anyone else whose profession is being eaten alive by generative AI.
The physicist and science writer Mark Buchanan wrote recently in Nature Physics about the importance of having expert colleagues on hand to bat around ideas so that, collectively, they stand a better chance of hitting upon the correct answers. Could an AI chatbot fulfil that role? Buchanan described asking ChatGPT to prove a mathematical theorem, which it did in a flash and with absolute confidence.
But the proof was wrong. When Buchanan pointed this out, the bot responded: “Of course this proof is flawed, let me show you how to fix it,” and produced another proof. It took Buchanan 30 minutes to identify the problem with that one. “Yes, of course, that proof is also flawed, let me show you the correct proof,” said the bot.
This one really was sound, as far as Buchanan could tell, but while the collaboration between AI and human had been informative and ultimately arrived at the correct solution, this only happened because Buchanan challenged the supposed proofs churned out by the machine and carefully thought them through for himself.
“It was like having a professor who added mistakes into their lectures, just to see if anyone is paying attention,” he wrote.
But what if no one is paying attention, either because they can’t be bothered, or because they don’t have the necessary time or skills to distinguish truth from delusion?
Already there are reports of AI dragging otherwise perfectly sensible, mentally well humans down delusional rabbit holes, such as the businessman who became convinced he was some kind of mathematical genius as a result of conversations with the sycophantic ChatGPT.
Our increasing reliance on AI to create text may have equally troubling consequences.
Bizarre hallucinations
Weird mistakes or “hallucinations” in AI-generated books, research papers and news stories may go unnoticed, first by their supposed authors and then by their readers, and these mistakes will then be regurgitated by the next hungry bot that feeds on the text. That is because, despite their supposed intelligence, large language models such as ChatGPT know nothing about logic or mathematics, or anything else for that matter. They don’t understand what they consume and excrete; they are simply word-prediction machines – language algorithms that have been fed vast quantities of text, for now most of it created by humans, but an increasingly large proportion generated by other AIs.
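To make “word-prediction machine” concrete, here is a toy sketch in Python – my own illustration, vastly simpler than any real LLM. It merely counts which word follows which in its training text, then generates by sampling from those counts. Nothing in it represents meaning or truth:

```python
import random
from collections import Counter, defaultdict

# A toy "word-prediction machine": it counts which word follows
# which in its training text, then generates by sampling from
# those counts. It tracks statistics, not meaning.
corpus = ("the horse taps his hoof and the crowd gasps and "
          "the horse taps again and the crowd cheers").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:  # dead end: this word never had a successor
            break
        # Pick the next word in proportion to how often it followed
        # this one in training - pure pattern-matching, nothing more.
        word = random.choices(list(options),
                              weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the crowd gasps and the horse taps his hoof"
```

Real chatbots use enormously more sophisticated statistical machinery, but the underlying trick – predict the next word from patterns in past text – is the same.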
So what started out as tightly written, insightful prose will over time degenerate into “AI Slop”, a.k.a. AI Mush, Guff or Swill. As science author Philip Ball writes in The New World: “Increasingly, generative AI might end up training on data that other AI has produced, a cycle that has been shown to spiral rapidly into gibberish.”
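The degenerative cycle Ball describes has been studied under the name “model collapse”. Here is a minimal numerical sketch of the idea – a toy statistical model standing in for an LLM, under my own simplifying assumptions: fit a distribution to data, sample fresh “training data” from the fit, refit, and repeat:

```python
import random
import statistics

# Toy "model collapse": each generation of model is fitted to data
# generated by the previous generation's model. Here the "model" is
# just a normal distribution - a crude stand-in for an LLM.
random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # the "human" data

for gen in range(1, 16):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    print(f"generation {gen:2d}: mean={mu:+.2f}, spread={sigma:.2f}")
    # The next generation trains only on this generation's output.
    data = [random.gauss(mu, sigma) for _ in range(50)]

# Each refit adds sampling error; any single short run is noisy, but
# over many generations the fitted distribution drifts and its spread
# tends to collapse - diversity is lost, errors compound.
```

It is the statistical analogue of prose degenerating into slop: each generation inherits, and amplifies, the distortions of the last.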
In other words, without vigilant oversight by humans, the zombie army will multiply and spread through the body of human knowledge, gobbling facts and excreting fantasy.
“That’s not going to happen,” you might think: an opposing army of human overseers will fact-check and correct everything the zombies write. But as AI becomes ubiquitous, our ability to provide that oversight will diminish. Quite apart from the impracticality of a finite number of expensive humans overseeing the growing mountain of cheap, AI-generated crap, the bots make everything so quick and easy – and give such a convincing illusion of understanding and expertise – that most humans will lose the patience and skills needed to distinguish truth from fiction.
Lazy brains
Of course there’s nothing intrinsically wrong with “cognitive offloading” – using mental props such as a shopping list to aid memory, or a calculator to perform difficult sums quickly. But how would children ever learn arithmetic, for example, if there were always a calculator on hand to perform sums effortlessly, without any need for them to understand what they are doing?
How would you ever learn to write well if an AI zombie had always done it for you?
Cognitive offloading also erodes existing skills. There is evidence that using something as simple as a list to cognitively offload can impair memory and that offloading increases the likelihood of false memories. Similarly, our obsession with using smartphones to record and share experiences may harm our ability to remember them. For example, people who photographed museum exhibits were found to be worse at recalling them later compared with people who relied on their own memory.
Crucially, zombies rob us of our ability to find out things for ourselves. Authors who use AI to write their books are less likely to carry out their own research, as are students who use AI in their assignments. A study found that when students employed AI to synthesise information, as opposed to actively gathering information through a web search engine, their learning was more shallow and they delivered less persuasive arguments.
Other research using EEG (electroencephalography) found that students who relied solely on their own brains to write essays showed stronger brain-network connectivity than those who used ChatGPT. The brain-only students also found it easier to remember what they’d written.
And what’s true for students must surely be true for authors and anyone else who uses AI to generate text.
Impaired memory and brain connectivity aren’t the only hazards of getting too friendly with zombies. There is also evidence that overreliance on AI may erode critical thinking skills and that “knowledge workers” (such as writers, lawyers and doctors) who put their trust in AI do less critical thinking than those who are more sceptical.
Quite apart from the loss of vital cognitive abilities, authors who rely heavily on AI risk losing the joy of creation. Where’s the fun and satisfaction of getting a zombie to write your book for you?
It will also be less enjoyable to read. Again, this is a difficult quality to define, but it has a lot to do with AI zombies’ lack of embodied cognition (at least for now). This visceral quality is unique to living, breathing humans: we don’t think like computers, we think with our entire body – through the pleasure and pain that comes with having a nervous system, hormones, guts and all the rest. And as readers we empathise with a living author’s experiences of these things, because they are embedded in our shared psychology, culture and society.
This is lived intelligence in the real, physical, emotional world, and AI books will only ever generate a hollow echo of it.
Monstrous emissions
Finally, and perhaps most importantly for the future of humanity, hungry zombies gobble up vital resources and generate huge quantities of greenhouse gases.
To fuel the rapidly growing demand for AI, the likes of OpenAI, Amazon and Google are building vast, power-hungry datacentres. In the US alone, datacentres could account for more than 14% of total electricity demand by 2030, and the policies of the current administration will ensure that much of this extra electricity comes from burning fossil fuels. AI datacentres also require billions of gallons of precious water to cool their hardware, just as water scarcity is becoming an existential threat across the world as a result of climate change.
So please don’t let a zombie write your book for you. It might be quick, cheap and superficially fantastic, but on closer inspection it may turn out to be soulless, hallucinatory and planet-wrecking.