
harpers.org/archive/2024/07/the-gods-of-logic-benjamin-labatut-ai

Benjamín Labatut

We will never know how many died during the Butlerian Jihad. Was it millions? Billions?
Trillions, perhaps? It was a fantastic rage, a great revolt that spread like wildfire,
consuming everything in its path, a chaos that engulfed generations in an orgy of
destruction lasting almost a hundred years. A war with a death toll so high that it left a
permanent scar on humanity’s soul. But we will never know the names of those who
fought and died in it, or the immense suffering and destruction it caused, because the
Butlerian Jihad, abominable and devastating as it was, never happened.

The Jihad was an imagined event, conjured up by Frank Herbert as part of the lore that
animates his science-fiction saga Dune. It was humanity’s last stand against sentient
technology, a crusade to overthrow the god of machine-logic and eradicate the conscious
computers and robots that in the future had almost entirely enslaved us. Herbert
described it as “a thalamic pause for all humankind,” an era of such violence run amok
that it completely transformed the way society developed from then onward. But we know
very little of what actually happened during the struggle itself, because in the original
Dune series, Herbert gives us only the faintest outlines—hints, murmurs, and whispers,
which carry the ghostly weight of prophecy. The Jihad reshaped civilization by outlawing
artificial intelligence or any machine that simulated our minds, placing a damper on the
worst excesses of technology. However, it was fought so many eons before the events
portrayed in the novels that by the time they occur it has faded into legend and
crystallized in apocrypha. The hard-won lessons of the catastrophe are preserved in
popular wisdom and sayings: “Man may not be replaced.” “Once men turned their thinking
over to machines in the hope that this would set them free. But that only permitted other
men with machines to enslave them.” “We do not trust the unknown which can arise from
imaginative technology.” “We must negate the machines-that-think.” The most enduring
legacy of the Jihad was a profound change in humankind’s relationship to technology.
Because the target of that great hunt, where we stalked and preyed upon the very
artifacts we had created to lift ourselves above the seat that nature had intended for us,
was not just mechanical intelligence but the machinelike attitude that had taken hold of
our species: “Humans had set those machines to usurp our sense of beauty, our
necessary selfdom out of which we make living judgments,” Herbert wrote.

Humans must set their own guidelines. This is not something machines can do.
Reasoning depends upon programming, not on hardware, and we are the
ultimate program!

The Butlerian Jihad removed a crutch—the part of ourselves that we had given over to
technology—and forced human minds to develop above and beyond the limits of
mechanistic reasoning, so that we would no longer depend on computers to do our
thinking for us.

Herbert’s fantasy, his far-flung vision of a devastating war between humanity and the god
of machine-logic, seemed quaint when he began writing it in the Sixties. Back then,
computers were primitive by modern standards, massive mainframe contraptions that
could process only hundreds of thousands of cycles per second (instead of billions, like
today), had very little memory, operated via punch cards, and were not connected to one
another. And we have easily ignored Herbert’s warnings ever since, but now the Butlerian
Jihad has suddenly returned to plague us. The artificial-intelligence apocalypse is a new
fear that keeps many up at night, a terror born of great advances that seem to suggest
that, if we are not very careful, we may—with our own hands—bring forth a future where
humanity has no place. This strange nightmare is a credible danger only because so
many of our dreams are threatening to come true. It is the culmination of a long process
that hearkens back to the origins of civilization itself, to the time when the world was filled
with magic and dread, and the only way to guarantee our survival was to call down the
power of the gods.

Apotheosis has always haunted the soul of humankind. Since ancient times we have
suffered the longing to become gods and exceed the limits nature has placed on us. To
achieve this, we built altars and performed rituals to ask for wisdom, blessings, and the
means to reach beyond our capabilities. While we tend to believe that it is only now, in the
modern world, that power and knowledge carry great risks, primitive knowledge was also
dangerous, because in antiquity a part of our understanding of the world and ourselves
did not come from us, but from the Other. From the gods, from spirits, from raging voices
that spoke in silence.

At the heart of the mysteries of the Vedas, revealed by the people of India, lies the Altar
of Fire: a sacrificial construct made from bricks laid down in precise mathematical
proportions to form the shape of a huge bird of prey—an eagle, or a hawk, perhaps.
According to Roberto Calasso, it was a gift from the primordial deity at the origin of
everything: Prajapati, Lord of Creatures. When his children, the gods, complained that
they could not escape from Death, he gave them precise instructions for how to build an
altar that would permit them to ascend to heaven and attain immortality: “Take three
hundred and sixty border stones and ten thousand, eight hundred bricks, as many as
there are hours in a year,” he said. “Each brick shall have a name. Place them in five
layers. Add more bricks to a total of eleven thousand, five hundred and fifty-six.” The
gods built the altar and fled from Mrtyu, Death itself. However, Death prevented human
beings from doing the same. We were not allowed to become immortal with our bodies;
we could only aspire to everlasting works. The Vedic people continued to erect the Altar
of Fire for thousands of years: with time, according to Calasso, they realized that every
brick was a thought, that thoughts piled on top of each other created a wall—the mind,
the power of attention—and that that mind, when properly developed, could fly like a bird
with outstretched wings and conquer the skies.

Seen from afar by people who were not aware of what was being made, these men and
women must surely have looked like bricklayers gone mad. And that same frantic folly
seems to possess those who, in recent decades, have dedicated their hearts and minds
to the building of a new mathematical construct, a soulless copy of certain aspects of our
thinking that we have chosen to name “artificial intelligence,” a tool so formidable that, if
we are to believe the most zealous among its devotees, it will help us reach the heavens
and become immortal.

Raw and abstract power, AI lacks body, consciousness, or desire, and so, some might
say, it is incapable of generating that primordial heat that the Vedas call tapas—the ardor
of the mind, the fervor from which all existence emerges—and that still burns, however
faintly, within each and every one of us. Should we trust the most optimistic voices
coming from Silicon Valley, AI could be the vehicle we use to create boundless wealth,
cure all ills, heal the planet, and move toward immortality, while the pessimists warn that it
may be our downfall. Has our time come to join the gods eternal? Or will our digital
offspring usurp the Altar of Fire and use it for their own ends, as we ourselves stole that
knowledge, originally intended for the gods? It’s far too early to tell. But we can be certain
of one thing, since we have learned it, time and time again, from the punishing tales of
our mythologies: it is never safe to call on the gods, or even come close to them.

In the mid-nineteenth century, the mathematician George Boole heard the voice of God.
As he crossed a field near his home in England, he had a mystical experience and came
to believe he would uncover the rules underlying human thought. The poor son of a
cobbler, Boole was a child prodigy who taught himself calculus and worked as a
schoolteacher in Doncaster until one of his papers earned him a gold medal from the
Royal Society, and he secured an offer to become the first professor of mathematics at
Queen’s College, Cork, in Ireland. Under the auspices of a university, and relatively free
from the economic hardships that he had endured for so long, he could dedicate himself
almost entirely to his passions for the first time, and he soon managed something unique:
he married mathematics and logic in a system that would change the world.

Before Boole, the disciplines of logic and mathematics had developed quite separately for
more than a thousand years. His new logic functioned with only two values—true and
false—and with it he could not only do math but analyze philosophical statements and
propositions to divine their veracity or falsehood. Boole put his new type of logic to use on
something that to him, a deeply religious man, was a spiritual necessity: to demonstrate
that God was incapable of evil.

In a handwritten note that he titled “Origin of Evil,” Boole subjected four basic premises to
analysis using the principles of his logic:

1. If God is omnipotent, all things must take place according to his will, and
vice versa.

2. If God is perfectly good, and if all things take place according to his will, absolute
evil does not exist.

3. If God were omnipotent, and if benevolence were the sole principle of his
conduct, either pain would not exist, or it would exist solely as an instrument
of good.

4. Pain does exist.

He subjected these statements to logical analysis, by replacing them with symbols, and
combined them in different ways, through mathematical operations, till all he had left was
a result that, according to his system, was categorically true: absolute evil does not exist
and pain is an instrument of good.
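
For readers who want to see what such a symbolic treatment looks like in modern terms, here is a minimal sketch in Python. The propositional encoding of the four premises is an illustrative reconstruction, not Boole’s own algebra of classes; it simply checks, by brute force, what the premises leave open once we also grant that God is omnipotent and perfectly good.

```python
# A minimal sketch, in modern propositional terms, of the kind of symbolic
# analysis described above. The encoding of the four premises is an
# illustrative reconstruction, not Boole's own algebra of classes.
from itertools import product

def premises(O, G, W, E, P, I):
    """O: God is omnipotent; G: God is perfectly good;
    W: all things take place according to his will;
    E: absolute evil exists; P: pain exists; I: pain is an instrument of good."""
    p1 = (O == W)                       # premise 1: omnipotence <-> all per his will
    p2 = not (G and W) or not E         # premise 2: goodness and will -> no absolute evil
    p3 = not (O and G) or (not P or I)  # premise 3: either no pain, or pain serves the good
    p4 = P                              # premise 4: pain does exist
    return p1 and p2 and p3 and p4

# Enumerate every truth assignment consistent with the premises, granting
# additionally that God is omnipotent (O) and perfectly good (G).
consistent = [v for v in product([True, False], repeat=6)
              if premises(*v) and v[0] and v[1]]

# In every remaining world, absolute evil does not exist and pain serves the good.
print(all(not E and I for (O, G, W, E, P, I) in consistent))  # True
```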

Boole was a man inhabited by the spirit of his time, a spirit that was very different from
ours: he believed that the human mind was rational and functioned according to the same
laws that shape the larger universe; by painstakingly uncovering those laws, not only
could we understand the world and reveal the hidden mechanisms that produce and
guide our own thoughts, we could actually peer into the mind of Divinity. After confronting
the problem of evil, he continued to develop his ideas, trying to create a calculus to
reduce all logical syllogisms, deductions, and inferences to the manipulation of
mathematical symbols, and to cast a precise foundation for the theory of probability. This
resulted in his greatest work: An Investigation of the Laws of Thought, a book that laid out
the rules of his new symbolic logic and also outlined, in the opening chapter, his grand
intention to capture, with mathematics, the language of that ghost that whispers within the
tortuous pathways of our minds:

The design of the following treatise is to investigate the fundamental laws of those
operations of the mind by which reasoning is performed; to give expression to them
in the symbolical language of a Calculus, and upon this foundation to establish the
science of Logic and construct its method.

Boole was convinced that our minds operate on a fundamental basis of logic, but he died
without having reached his goal of creating a system to understand thought; ten years
after publishing his masterpiece, he walked the two and a half miles that separated his
home from the university, got drenched by the rain, lectured all day in wet clothes, and
developed a cold that later became fatal pneumonia. He fell into delirium due to fever and
told his wife, Mary Everest, that he could perceive the whole universe spread before him
like a great black ocean, with nothing to see and nothing to hear except a silver trumpet
and a chorus that sang, “Forever, O Lord, Thy word is settled in Heaven.” He died on the
eighth of December 1864, not long after Mary (according to a story that may be
apocryphal) had wrapped him in wet blankets—following the strange logic of homeopathy,
wherein cures must mimic causes—unwittingly hastening her beloved’s demise.

His work was inconsequential during his lifetime and ignored for more than eighty years
after his death, until one day a young graduate student at MIT chanced upon The Laws of
Thought, immersed himself in Boole’s strange algebraic logic, and created a practical
application that has, since then, affected every aspect of our lives.

His name was Claude Shannon, a mathematician and electrical engineer who was
working on the most advanced thinking machine of his time (Vannevar Bush’s differential
analyzer, an early computer as big as an entire room), when he realized that Boole’s two-
value logic was the perfect system with which to design electronic circuits. Electrical
switches use binary values (0 for off and 1 for on), and they can be controlled by the
logical operations created by the English mathematician. Incredibly complex
computations can be made just by exploiting a simple duality: true or false, on or off, 1 or
0. That duality is the cornerstone of the Information Age.
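
A small illustration of that duality, offered as a modern sketch rather than anything Shannon himself wrote: with nothing more than AND, OR, and NOT acting on 1s and 0s, one can already compose arithmetic, here a one-bit half adder of the kind realized physically by switching circuits.

```python
# Sketch only: the on/off duality expressed as Boolean operations, then
# composed into a one-bit half adder -- the sort of construction Shannon
# showed could be wired directly out of electrical switches.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):  # exclusive-or, built from the three primitives above
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Add two single bits, returning (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
# 0 0 -> (0, 0)   0 1 -> (1, 0)   1 0 -> (1, 0)   1 1 -> (0, 1)
```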

Boole’s seemingly useless and highly abstract ideas roared to life in digital circuitry,
completely transforming our technological landscape; the vast majority of technology that
uses electricity relies on them, from vacuum cleaners to intercontinental ballistic missiles.
But their most important application is that they form the basis of how modern
computers “think.” These computers “speak” to one another in Boolean, and they use it to
calculate; every single task they perform boils down to a series of yes-or-no questions
that are processed using Boolean logic. All of their software, every word of their code,
depends on it. Boole’s strange logic is the invisible alchemy that powers the modern
world, and the mud that forms the bricks of the new Altar of Fire, because it is
fundamental to the basic unit that inspired today’s most advanced AI systems: the
artificial neuron.

In 1943, Warren Sturgis McCulloch and Walter Pitts published the first mathematical
model of a neuron. It was extremely simplified and abstract, with none of the convoluted
processes of real biology, but from that simplicity came enormous power. McCulloch, a
neurophysiologist and one of the founders of cybernetics, and Pitts, a brilliant young
polymath who excelled at logic, built on the fact that, in essence, the behavior of a neuron
is binary: when excited, it either fires an electrical impulse or it doesn’t. Based on this
premise, their landmark paper, “A Logical Calculus of the Ideas Immanent in Nervous
Activity,” used a Boolean scalpel to pry open the inner workings of the neuronal
mechanism. According to their scheme, each artificial neuron receives multiple electrical
signals from its neighbors, just as biological ones do; if, together, those signals exceed a
certain threshold, the neuron fires; otherwise it remains inactive. Having created this
mathematical construct, they took their ideas one step further and showed that, since
both the input and the output of a neuron are Boolean, by stringing together these binary
units into chains and loops, a network made up of them could calculate and implement
every possible operation of Boolean logic. What arose from this model was a new
understanding of the brain and the mind: viewed from their perspective, the brain could
be understood as a computing device, a machine that used neurons to perform logic.
Mental activity in humans, therefore, was nothing but binary information processed by
neurons following mathematical rules. Before Pitts and McCulloch, not even Turing had
thought of using the notion of computation to build a theory of the mind. But while their
model demonstrated that artificial neurons are able to simulate complex cognitive
processes, it turned out to be far too limited to capture the full intricacy of real biological
brains. It was, nevertheless, a monumental insight, because it presented the first modern
computational theory of the mind and offered an answer to one of the great questions in
neuroscience—namely, how a brain can be intelligent. McCulloch and Pitts’s work in
neural networks kicked off the computational approach to neuroscience that led John von
Neumann to create the logical design of modern computers. It opened up a new vista into
how our brains may function and seemed to show how neurons process and transmit
information. Because of its far-reaching consequences, their demonstration that neural
networks can do logic may be one of the most important ideas in the history of human
thought, but with time, their highly idealized neurons were either dismissed or ignored by
scientists working to understand the brain and replaced by different schemes. McCulloch
spent years trying to develop a full mechanistic model of the mind and continued to
search for the logic of the nervous system until his death in 1969, whereas Pitts—who
had devoted his life to the conviction that the mysterious workings of the human mind, our
many psychological feats and shortcomings, found their source in the pure mechanics of
neurons firing electrical impulses in the brain—fell into depression and alcoholism,
suffered delirium tremens, seizures, and episodes of unconsciousness, and died of
bleeding esophageal varices, a condition associated with cirrhosis, alone in a
boardinghouse in Cambridge, Massachusetts, after setting fire to his work on a model of
three-dimensional neural networks. Their artificial neurons, however, survived them and
sparked an approach to computer learning that, some four decades after that landmark
paper, was doggedly and fervently championed, against the prevailing wisdom of the
time, by none other than George Boole’s great-great-grandson Geoffrey Hinton.
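
The unit McCulloch and Pitts describe is simple enough to write out in a few lines. The sketch below is illustrative only, with weights and thresholds chosen arbitrarily rather than taken from the 1943 paper, but it shows how binary threshold neurons, strung together, implement Boolean operations:

```python
# Illustrative sketch of a McCulloch-Pitts-style binary threshold unit.
# Weights and thresholds are arbitrary choices, not values from the 1943 paper.
def mp_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def AND(x, y): return mp_neuron([x, y], [1, 1], threshold=2)
def OR(x, y):  return mp_neuron([x, y], [1, 1], threshold=1)
def NOT(x):    return mp_neuron([x], [-1], threshold=0)

# Stringing units together: XOR, which no single threshold unit can compute,
# falls out of a small network of them.
def XOR(x, y):
    return AND(OR(x, y), NOT(AND(x, y)))

print([XOR(x, y) for x, y in ((0, 0), (0, 1), (1, 0), (1, 1))])  # [0, 1, 1, 0]
```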

Hinton is widely considered the godfather of AI. He is perhaps the single person who has
had the greatest influence on the field in the past several decades. In the Eighties, he
championed an approach based on deep neural nets, mathematical abstractions of the
brain in which neurons are represented with code; just by altering the strength of the
connections between those neurons—changing the numbers used to represent them—
the network could learn by itself. Before him, the dominant paradigm was quite different:
most researchers believed that, for machines to think, they would have to mimic the way
humans reason, by manipulating symbols (words or numbers, for example) following
logical rules, which is what Boole himself believed. But his descendant disagreed: “Crows
can solve puzzles,” he said in an interview with MIT Technology Review last year, “and
they don’t have language. . . . They’re doing it by changing the strengths of connections
between neurons in their brain. And so it has to be possible to learn complicated things
by changing the strengths of connections in an artificial neural network.”
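
What “changing the strengths of connections” amounts to in code can be made concrete with a toy example: a single artificial neuron that learns the Boolean OR function by repeatedly nudging its weights to shrink its error. This is an illustrative sketch of the principle Hinton describes, not any particular system of his; the learning rate, number of passes, and task are arbitrary choices.

```python
# Toy sketch of learning by adjusting connection strengths: a single
# sigmoid neuron learns the OR function by nudging its two weights and
# bias to reduce its error. All settings here are arbitrary illustrations.
import math, random

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # the OR function

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection strengths
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

def forward(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))                   # squash to (0, 1)

for _ in range(2000):                               # many small corrections
    for x, target in data:
        y = forward(x)
        grad = (y - target) * y * (1 - y)           # direction and size of the nudge
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b    -= lr * grad

print([round(forward(x)) for x, _ in data])         # expect [0, 1, 1, 1]
```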

For the longest time, Hinton’s neural networks could not come alive. There was simply not
enough computing power or training data for them to exhibit intelligence. But then things
changed, violently. Beginning in 2010, he saw his ideas bloom in ways he could never
have imagined, and neural networks became the main focus of international research.
“We ceased to be the lunatic fringe,” he said. “We’re now the lunatic core.” In the
following years, absolutely stunning systems were developed: AlphaGo pummeled the
world champion of Go, Lee Sedol; AlphaFold predicted the shape of virtually every known
protein structure; and programs like DALL-E 2 gave us photorealistic images conjured
from pure noise. And then came ChatGPT, an AI able to do so many things Hinton had
thought were decades away that it put the fear of God in him.

In spring 2023, Hinton quit his job as a vice president at Google to warn the world about
the dangers of his brainchild.

He feels, as he explained at an annual MIT conference, that AI is developing too fast:

It’s quite conceivable that humanity is just a passing phase in the evolution of
intelligence. You couldn’t directly evolve digital intelligence. It would require too
much energy and too much careful fabrication. You need biological intelligence to
evolve so that it can create digital intelligence, but digital intelligence can then
absorb everything people ever wrote in a fairly slow way, which is what ChatGPT is
doing, but then it can get direct experience from the world and run much
faster. It may keep us around for a while to keep the power stations running. But
after that, maybe not.

Hinton has been transformed. He has mutated from an evangelist of a new form of
reason into a prophet of doom. He says that what changed his mind was the realization
that we had, in fact, not replicated our intelligence, but created a superior one.

Or was it something else, perhaps? Did some unconscious part of him whisper that it was
he, rather than his great-great-grandfather, who was intended by God to find the
mechanisms of thought? Hinton does not believe in God, and he would surely deny his
ancestor’s claim that pain is an instrument of the Lord’s will, since he was forced to have
every one of his meals on his knees, resting on a pillow like a monk praying at the altar,
because of a back injury that caused him excruciating pain. For more than seventeen
years, he could not sit down, and only since 2022 has he managed to do so long enough
to eat.

Hinton is adamant that the dangers of thinking machines are real. And not just short-term
effects like job replacement, disinformation, or autonomous lethal weapons, but an
existential risk that some discount as fantasy: that our place in the world might be
supplanted by AI. Part of his fear is that he believes AI could actually achieve a sort of
immortality, as the Vedic gods did. “The good news,” he has said, “is we figured out how
to build things that are immortal. When a piece of hardware dies, they don’t die. If you’ve
got the weights stored in some medium and you can find another piece of hardware that
can run the same instructions, then you can bring it to life again. So, we’ve got
immortality. But it’s not for us.”
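
The mechanism behind that claim is mundane in practice: a network’s weights are numbers, and numbers can be copied. Here is a minimal sketch, assuming the PyTorch library and using an illustrative model and file name, of storing weights in “some medium” and reviving them on a different machine:

```python
# Minimal sketch of the "immortality" Hinton describes: learned weights are
# serialized to a file, and any other hardware that can run the same
# instructions can load them and carry on. Assumes PyTorch; the model
# architecture and file name are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# "the weights stored in some medium"
torch.save(model.state_dict(), "weights.pt")

# Later, possibly on an entirely different piece of hardware:
revived = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
revived.load_state_dict(torch.load("weights.pt"))  # "bring it to life again"
```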

Hinton seems to be afraid of what we might see when the embers of the Altar of Fire die
down at the end of the sacrifice and the sharp coldness of the beings we have conjured
up starts to seep into our bones. Are we really headed for obsolescence? Will humanity
perish, not because of the way we treat all that surrounds us, nor due to some massive
unthinking rock hurled at us by gravity, but as a consequence of our own irrational need to
know all that can be known? The supposed AI apocalypse is different from the
mushroom-cloud horror of nuclear war, and unlike the ravages of the wildfires, droughts,
and inundations that are becoming commonplace, because it arises from things that we
have, since the beginning of civilization, always considered positive and central to what
makes us human: reason, intelligence, logic, and the capacity to solve the problems,
puzzles, and evils that taint even the most fortunate person’s existence with everyday
suffering. But in clawing our way to apotheosis, in daring to follow the footsteps of the
Vedic gods who managed to escape from Death, we may shine a light on things that
should remain in darkness. Because even if artificial intelligence never lives up to the
grand and terrifying nightmare visions that presage a nonhuman world where algorithms
hum along without us, we will still have to contend with the myriad effects this technology
will have on human society, culture, and economics.

In the meantime, the larger specter of superintelligent AI looms over us. And while it is
less likely and perhaps even impossible (nothing but a fairy tale, some say, a horror story
intended to attract more money and investment by presenting a series of powerful
systems not as the next step in our technological development but as a death-god that
ends the world), it cannot be easily dispelled, for it reaches down and touches the fibers
of our mythmaking apparatus, that part of our being that is atavistic and fearful, because it
reminds us of a time when we shivered in caves and huddled together, while outside in
the dark, with eyes that could see in the night, the many savage beasts and monsters of
the past sniffed around for traces of our scent.

As every new AI model becomes stronger, as the voices of warning form a chorus, and
even the most optimistic among us begin to fear this new technology, it is harder and
harder to think without panic or to reason with logic. Thankfully, we have many other
talents that don’t answer to reason. And we can always rise and take a step back from
the void toward which we have so hurriedly thrown ourselves, by lending an ear to the
strange voices that arise from our imagination, that feral territory that will always remain a
necessary refuge and counterpoint to rationality.

Faced, as we are, with wild speculation, confronted with dangers that no one, however
smart or well informed, is truly capable of managing or understanding, and taunted by the
promises of unlimited potential, we may have to sound out the future not merely with
science, politics, and reason, but with that devil-eye we use to see in the dark: fiction.
Because we can find keys to doors we have yet to encounter in the worlds that authors
have imagined in the past. As we grope forward in a daze, battered and bewildered by
the capabilities of AI, we could do worse than to think about the desert planet where the
protagonists of Herbert’s Dune novels sought to peer into the streaming sands of future
time, under the heady spell of a drug called spice, to find the Golden Path, a way for
human beings to break from tyranny and avoid extinction or stagnation by being more
diverse, resilient, and free, evolving past purely logical reasoning and developing our
minds and faculties to the point where our thoughts and actions are unpredictable and not
bound by statistics. Herbert’s books, with their strange mixture of past and present,
remind us that there are many ways in which we can continue forward while preserving
our humanity. AI is here already, but what we choose to do with it and what limits we
agree to place on its development remain decisions to be made. No matter how many
billions of dollars are invested in the AI companies that promise to eliminate work, solve
climate change, cure cancer, and rain down miracles unlike anything we have seen
before, we can never fully give ourselves over to these mathematical creatures, these
beings with no soul or sympathy, because they are neither alive nor conscious—at least
not yet, and certainly not like us—so they do not share the contradictory nature of
our minds.

In the coming years, as people armed with AI continue making the world faster, stranger,
and more chaotic, we should do all we can to prevent these systems from giving more
and more power to the few who can build them. But we should also consider a warning
from Herbert, the central commandment he chose to enshrine at the heart of future
humanity’s key religious text, a rule meant to keep us from becoming subservient to the
products of our reason, and from bowing down before the God of Logic and his many
fearsome offspring:

Thou shalt not make a machine in the likeness of a human mind.
