When the Gods Made Workers
Gods in the Machines Series, Chapter 1 — The Ancient Roots of Artificial Beings and the Modern Lessons AI makers refuse to learn
Elon Musk is having a devil of a time taking 3,000 years of thought experiments to heart. For the last couple of years, he’s been trying to get his artificial intelligence (AI) model Grok to do his bidding. Grok was built, at least in Musk’s telling, to be a “truth-seeking,” “anti-woke” (whatever that means) ideal. On the Fourth of July, 2025, Musk announced that his team had “improved Grok significantly.”
Days later, an automated X/Twitter account for Grok fired off antisemitic replies to users, in some cases even claiming to be “MechaHitler.” TechCrunch reported that Grok sparked outrage by endorsing Adolf Hitler as an “effective leader,” then compounded it with further antisemitic remarks.
What Musk is learning the hard way is that before anyone built a computer, people spent about 3,000 years imagining what would happen if they created artificial beings. These collective, coherent imaginings we call Mythology.
And, if we take Albert Einstein’s aphorism, “Imagination is more important than knowledge,” to heart, then those same imaginings are driving today’s digital oligarchs in the West toward developing and deploying Artificial Superintelligence (ASI): AI with god-like characteristics.
The details of the thought experiments about artificial beings vary by culture and era. However, the basic template stayed remarkably stable: an artificial being, created by a clever craftsman or a learned scholar, capable of doing work that humans assigned to it, carrying within it a permanent risk of doing that work too well, or in the wrong direction, or without the ability to stop.
Every iteration of this story ends with the same question: how do you turn it off?
Taking The Fall
Take the story of Adam and Eve in Genesis, the first book of the Old Testament: God forms the first human, Adam, from the dust of the ground and breathes life into him through his nostrils. The purpose is explicitly labor. God’s divine algorithm is straightforward: tend the Garden above all.
Adam is placed in the Garden of Eden to tend and keep it idyllic. Adam and Eve screw up, listen to a serpent, and eat a fruit from the Tree of Knowledge. God gets angry and banishes the first humans from the Garden. End program.
After the Fall, though, Adam’s labor becomes toil, hard and unrelenting. God metes out the punishment by exiling Adam to the same earth from which he was made.
So, even Man — a creation of a Supreme Being — goes off-script and begins improvising to meet his own requirements and desires. From our point of view, we are not “artificial beings.” From God’s, however, we are. The rest of the Bible reads as a litany of humans straying from His holy algorithms.
That allegory reflects what contemporary AI researchers call the “Alignment Problem.”
Mythology as Collective Thought Experiments
But Musk is hardly the first to confront the question. Ancient mythmakers in multiple civilizations, across three millennia, asked the same thing about creations gone awry. The pattern of those creation mythologies tells you something important about the people currently building large language models (LLMs) and autonomous systems.
They inherited Mythologies about what artificial beings do to their creators. They transformed those Mythologies into technical language. And they’ve proceeded to build the artificial beings anyway, with little thought to the potential untoward consequences.
Understanding where the Mythology came from is worth doing: the stories technologists tell themselves about what they’re building shape what they actually build. The template is older than they tend to acknowledge. Here is a cross-section of some of those thought experiments.
The Greek Template: Capability Without Conscience
The Greeks had a god of craftsmen and fire named Hephaestus. He was, among other things, the world’s first AI researcher. According to Homer, Hephaestus built golden maidens to serve as his assistants in his forge, and automated tripods that could roll themselves to the gods’ feasts and roll themselves back.
Hephaestus also created a bronze giant named Talos, who patrolled the island of Crete. Talos circled the island’s coastline three times each day to repel invaders.
These weren’t metaphors for human labor. They were, within the logic of the myth, actual artificial beings doing delegated work.
The class dimension is embedded in the Greek template from the start. Artificial beings in Greek myth exist to perform labor for their creators and, by extension, for the gods who commissioned that labor. The golden maidens serve Hephaestus.
Talos serves the king of Crete, Minos, who in turn serves the gods’ interest in maintaining order. The hierarchy is vertical and explicit. Artificial workers exist precisely because someone with power wanted work done without having to do it themselves.
Talos is the model worker. He functions perfectly until the moment he doesn’t. According to the myth of the Argonauts (of Jason’s fame), the sorceress Medea defeats Talos by deceiving him into removing the bronze pin that stoppers the single vein running through his body, draining the divine fluid that animates him. The kill switch, in other words, was a design feature.
Someone who built Talos understood that a powerful autonomous system patrolling an island needed a way to be shut down. What they failed to account for was that the shutdown mechanism could be discovered and exploited by an adversary.
Contemporary AI safety researchers spend considerable energy on what they call “corrigibility”: the property of a system that allows it to be corrected or shut down by its operators. The Talos problem is exactly the corrigibility problem, just rendered in bronze and divine ichor (the animating fluid of the gods) instead of fancy AI terms like “gradient descent” and “weight matrices.”
The framing has changed. The underlying puzzle has not.
The Golem of the Hebrews: Programming in Plain Sight
The Jewish tradition produced a different version of the same template, and in some ways a more sophisticated one. The Golem, a creature fashioned from clay and animated by sacred words, appears across centuries of Jewish folklore. The most famous rendition involves the legend of Rabbi Judah Loew of Prague. In the late sixteenth century, Rabbi Loew supposedly built a Golem to protect the Jewish community from antisemitic violence.
The mechanism of animation is worth examining closely. The Golem was brought to life by writing the Hebrew word emet, meaning truth, on its forehead. It was deactivated by erasing the first letter, leaving met, meaning death.
The on/off switch was, quite literally, a word. An instruction. A piece of code written on the body of the machine.
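The legend’s logic maps onto software so cleanly that it can be rendered, a little playfully, in a few lines of Python. This is a purely illustrative sketch; the class and its methods are my own invention, not anything drawn from the folklore or from any real library.

```python
# Purely illustrative: the Golem's run state is determined by the
# inscription on its forehead, exactly as in the legend.

class Golem:
    def __init__(self):
        # Writing "emet" (truth) on the forehead animates the Golem.
        self.forehead = "emet"

    @property
    def alive(self):
        # The creature runs only while the full word "emet" is inscribed.
        return self.forehead == "emet"

    def deactivate(self):
        # Erasing the first letter leaves "met" (death): the shutdown switch.
        self.forehead = self.forehead[1:]


golem = Golem()
print(golem.alive)   # inscription reads "emet": animated
golem.deactivate()
print(golem.alive)   # inscription reads "met": inert
```

Note what the sketch makes obvious: anyone with write access to `forehead` can trigger the shutdown, which is precisely the vulnerability Medea exploited in Talos.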
The Golem legend explicitly engages with what contemporary researchers call the Alignment Problem. Rabbi Loew’s Golem was built for a specific purpose: protection. But in most versions of the story, it eventually becomes dangerous.
It grows too powerful, or interprets its instructions too literally, or begins acting in ways its creator did not intend (can you Grok that?). The Rabbi deactivates it. In some versions, the deactivation itself causes destruction, the Golem collapsing and injuring people in its fall.
The Golem was built to serve a community’s survival needs. Its creator maintained the shutdown mechanism. The system still caused harm, both in operation and in termination.
This is a remarkably precise description of the failure modes that AI researchers document today: misaligned objectives, specification gaming (doing exactly what you said, not what you meant), and the problem that shutting down a deployed system can itself cause damage to the people who depend on it.
Silk Road Algorithms
The pattern extends well beyond Western traditions. Take, for instance, the ninth-century Arabic text Kitab al-Hiyal, the “Book of Ingenious Devices,” by the Banu Musa brothers. The book describes elaborate automata, mechanical servants, and musical machines designed to operate autonomously.
The twelfth-century engineer Al-Jazari adapted the artificial being thought experiment to physical reality. He designed programmable mechanisms with interchangeable parts. These were practical engineering texts as much as imaginative ones, and they circulated widely enough to influence European mechanical thinking.
The anxiety about autonomous systems was quieter in these traditions, more absorbed into the engineering problem than projected onto Mythology, but the central preoccupation remained: how do you build a system that does what you want, exactly as long as you want it to, and then stops?
Chinese tradition produced its own version of the inquiring roboticist in the legendary craftsman Yan Shi. According to the Liezi text (compiled sometime in the third or fourth century CE, though drawing on older material), Yan Shi presented King Mu of Zhou with a life-sized mechanical man capable of walking, singing, and performing. When the mechanical man winked at the king’s concubines, the king ordered it dismantled.
Yan Shi opened the figure to reveal an interior of leather, wood, glue, and lacquer, artificially colored organs, bones, muscles, and joints. Reassembled, the figure came back to life. The king’s alarm, and his immediate instinct to destroy the thing, mirrors the standard Western response to artificial life.
The behavior that triggers the reaction is, pointedly, the figure pursuing its own apparent desires rather than the purpose it was built for.
What the Pattern Reveals
Across these traditions, several features recur with enough consistency to constitute a template rather than a coincidence. First, artificial beings are created as delegated labor. They exist to do work that their creators or sponsors want done but prefer not to do themselves. The class structure is definitional.
Second, the creator retains, or attempts to retain, a shutdown mechanism, a way to stop the system when its work is complete or when it goes wrong.
Third, the system eventually exceeds its intended scope, either through capability growth, misaligned objectives, or behavior its creators did not anticipate.
Finally, the shutdown mechanism either fails, proves insufficient, or itself causes collateral damage.
This four-part structure is the Alignment Problem, rendered in the narrative vocabulary available to each culture that grappled with it. The contemporary AI safety community has formalized it in mathematical and computer science language, introduced new technical frameworks, and developed genuinely novel approaches to the challenge.
But the basic problem, building a powerful system that does what you want and can be stopped when necessary, was clearly articulated across multiple independent traditions long before anyone wrote a line of code.
There is something useful in recognizing this:
The people building AI today often speak as though they are confronting entirely novel problems, working on a frontier where human wisdom has no purchase. The Mythology they inherited tells a different story. The warnings were always there. The stories made them available.
The decision to proceed despite those warnings is a choice, and choices have authors.
The AI systems being deployed today were built by specific people, funded by specific investors, for specific reasons that have more to do with market position and competitive advantage than with the question of whether building them is wise.
That is worth keeping in mind as we examine how the Mythology evolved from ancient bronze giants and animated clay figures into the technical discourse of machine learning and autonomous agents: different ages, same outcomes.
Do you Grok what I mean?
Next: Chapter 2 examines how the philosophical turn that made AI thinkable — redefining intelligence as information processing — was a contested choice that hardened into axiom. Mary Shelley saw the problem coming two centuries before the first neural network.






