The Frankenstein Disclaimers: How Irresponsibility Became Big Tech's Defense
Gods in the Machine series, Chapter 2 — How Tech Oligarchs Escape Responsibility for Their Toxic Digital Products.
On February 18, Mark Zuckerberg took the witness stand in a Los Angeles courtroom to defend Meta against claims that Instagram was deliberately engineered to addict children, some as young as ten.
His core argument was that the platform reflects user behavior rather than causing it. He claimed that the harms are complex and multifaceted, and that the company has always acted in good faith.
His argument, and that of other social media and AI vendors, follows a historical pattern so familiar that I’ve given it a name: the Frankenstein Disclaimer. Mary Shelley, author of the novel Frankenstein, described it in 1818: the creator’s insistence that the inevitability of Progress shields them from accountability for what they have built.
Ahead of Her Time
Mary Shelley was 19 years old when she wrote the novel Frankenstein, which the technology revolution has spent two centuries failing to understand. She was also steeped in the classical tradition we examined in Chapter 1, not as a polished scholar but as someone who had grown up inside it.
Her father, William Godwin, one of the leading radical intellectuals of the age, educated her through his library and his circle, and the mythology of ancient creation sat on those shelves alongside everything else.
She announced her training in the Classics in the novel’s full title — Frankenstein; or, The Modern Prometheus. Prometheus was the Titan of Greek mythology who fashioned humans from clay and stole fire from the gods to give it to them. He is precisely the archetype of the creator who grants agency and then loses control of what that agency makes possible.
When Shelley sat down in the summer of 1816 to write what became Frankenstein, she was working within that inheritance consciously enough to name it in her subtitle. More importantly, she was translating it into the idiom of the Industrial Revolution.
That translation matters enormously, because 1816 was also the tail end of the Luddite uprisings, the height of the early Industrial Revolution, and a moment when the ancient logic of created workers was being industrialized at scale.
Steam power and textile mills had replaced divine forges in the late 1700s. The relationship between creator and created worker, though, remained structurally identical to what it had always been: you build something to do labor you prefer not to do yourself, you retain control over it, and you discard it when it stops being useful.
What Shelley added to that ancient template was something the myths had carefully avoided: self-awareness of the Created. The Golem in Chapter 1 of this series does not know it has been abandoned when the Rabbi removes the animating shem from its forehead. Talos does not articulate the injustice of having a pin pulled from his heel.
The ancient created workers have no inner life that the myths acknowledge. This is almost certainly deliberate, because self-awareness in the created worker would have made the myths unpalatable to listeners and readers.
Shelley’s creature, by contrast, knows exactly what has happened to him.
He can describe his abandonment with precision and feeling. He is, in this sense, the first created worker in the Western tradition granted the narrative space to name his own expendability.
The industry keeps misreading this as a story about the creature’s danger. The actual story is about the creator’s hubris.
The Ancient Logic, Industrialized
Every created being in the ancient traditions exists to do work the creator either cannot or prefers not to do. The delegation of labor is the whole point, and the hierarchy embedded in that delegation, gods above workers, creators above created, is foundational to how the mythology functions.
American Exceptionalism, as we traced in a companion series, operates on a similar logic of expendability: certain people are sorted into the category of those whose labor and suffering enable the advancement of others. The ancient creation myths are an early and remarkably stable template for exactly that sorting.
Victor Frankenstein is a creator in that tradition, but he is also a creature of the Industrial Revolution. He builds his monster at the precise historical moment when factory owners were discovering that mechanical labor could reduce dependence on human workers.
The creature appears as the newly mechanized laboring class: powerful, necessary, morally inconvenient, and ultimately disposable. Shelley watched this process from close range. Her circle included people actively debating what industrialization was doing to working people. The novel encodes those debates in its central relationship. Victor Frankenstein’s refusal to take responsibility for what he built is legible as both personal psychology and class politics.
The creature’s violence, in this reading, flows directly from abandonment. Victor made him and then treated his existence as a problem to be managed rather than a life to be accounted for. The created worker, granted interiority for the first time in the tradition, responds to expendability with rage rather than compliance. This was a warning. The tech industry instead interpreted the monster’s response as a plot summary.
Intelligence Becomes Information Processing
Understanding how the AI industry engineered its own irresponsibility requires tracing a specific philosophical move that happened in the mid-twentieth century: the redefinition of intelligence as information processing.
For most of human history, intelligence was understood as bound up with consciousness, intention, and the capacity for genuine understanding. The broad intuition was that thinking made the thinker.
Then a group of mathematicians, logicians, and early computer scientists proposed something different: that intelligence could be operationalized as the ability to process information and produce appropriate outputs. What happened inside the system became a secondary question or no question at all.
Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” is the clearest statement of this move. Turing proposed that “can machines think?” was too philosophically murky to be useful. He suggested replacing it with a behavioral test: if a machine’s responses are indistinguishable from a human’s, call it intelligent. The definition sidesteps everything awkward about consciousness by simply declining to ask about it.
This was genuinely useful for building things. You can engineer a system to pass behavioral tests. You cannot engineer a system to have genuine inner experience, partly because nobody knows what that would require. The redefinition made artificial intelligence a tractable engineering problem rather than an impossible philosophical one.
It also severed the connection between agency and responsibility.
If intelligence is purely behavioral, then the system’s outputs are what matter, and those outputs are the product of training data, architecture, and optimization targets, all choices made by engineers and executives. The system itself becomes inert infrastructure. It processes; it does not choose. Responsibility for what it produces therefore flows not to the system but to...
And here is where things get interesting, because the industry spent decades making sure that the previous sentence never completed cleanly.
This is, structurally, the same move the ancient myths made when they denied an interior life to created workers. If the Golem has no inner life, no capacity to suffer or to object, then the Rabbi who deploys it bears no particular obligation to it.
If the AI system is purely behavioral, purely mechanical, then the engineers who deploy it bear no particular obligation for what it does in the world. The philosophical move is three thousand years old. The legal architecture built on top of it is contemporary.
The Disclaimer as Architecture
When you talk to AI companies about the harms their systems produce, you encounter a remarkably consistent set of responses. The system is a tool. Tools are neutral. Blame the users who misuse them.
Or: the system reflects the data it was trained on, and that data comes from human society, so blame society.
Or: the system is still developing, and its errors will improve with scale, so the appropriate response is more investment, not accountability.
Each of these is a version of what you might call the Frankenstein Disclaimer: the creator’s assertion that what has been built is separate from what the creator is responsible for.
Victor’s version was more dramatic (the guy literally fled the laboratory), but the structure is identical. The Thing exists. The creator had reasons for building it. What it does in the world is a different matter, in the creator’s estimation.
The disclaimer was constructed with deliberate care. The industry did not stumble into irresponsibility. It engineered the conditions for it.
The liability frameworks governing AI deployment are structured to insulate developers from consequences. Section 230 of the Communications Decency Act, written in 1996 to shield platforms from liability for content their users post, gets interpreted broadly enough to protect AI companies from responsibility for outputs their systems generate.
Terms of service agreements shift risk to users. The “research” framing of early deployment means systems get released into the real world while still classified as experimental, a classification that carries different legal weight than a commercial product subject to consumer protection law.
These are architectural choices. Someone designed them. The same people making decisions about what systems to build and how fast to deploy them made decisions about how to structure the legal and regulatory environment around those deployments.
The irresponsibility was, in this sense, as engineered as the intelligence.
The Cassandra Conundrum
Norbert Wiener, the mathematician who founded the field of cybernetics in the late 1940s, saw this coming and said so clearly. He warned that automated systems operating at speeds and scales beyond human oversight would generate harms that humans would be unable to monitor or correct in real time.
He called for serious regulatory attention and for the field to treat its social responsibilities with the same rigor it applied to its technical problems. His colleagues found this inconvenient. The funding for early AI research came substantially from military sources that wanted capable systems, not careful ones. Wiener was marginalized.
The warning went unheeded, and the field proceeded on the assumption that capability was the primary objective and responsibility was somebody else’s department.
The Choice That Hardened Into Assumption
What makes the intelligence-as-information-processing definition worth examining is that it was a choice that got treated as a discovery.
Choices can be revisited. Discoveries cannot. If intelligence just is information processing, then building systems that process information very well just is building intelligence, and the question of what those systems owe to the humans they affect becomes a category error. The systems are tools; tools have no obligations; the creators are explorers; explorers discover what is there.
But if the definition was a choice made for practical and political reasons, then it can be contested. Alternative framings of what intelligence requires, what building intelligence-like systems means, and what obligations follow from that building, all become available. The responsibility does not evaporate. It stays with the people who made the choices.
That is what Shelley understood and what the industry has worked hard not to understand. Victor Frankenstein did not discover his creature. He made decisions, hundreds of them, about materials and methods and timing and scale.
His attempt to treat his creation as something that had simply arrived, separate from himself and his choices, was what we would today call a cop-out. The ancient Rabbi who activated the Golem and later deactivated it when it became dangerous was at least honest about the relationship. He made the thing; he ended the thing; the accountability was clear.
Unaccountable Still
Tech’s defense mechanism has since been refined into legal frameworks, liability structures, research ethics guidelines with no enforcement teeth, and a vocabulary of disruption and inevitability that converts specific choices into the appearance of natural forces.
The kill switch problem, which Chapter 1 traced from the Golem forward, looks different once you recognize that the people who most need a kill switch are the ones who have spent the most energy making sure no regulatory body can pull it.
Understanding this is the first step toward doing something about it. The Alignment Problem — the question of whether these systems will serve human welfare or undermine it — looks different once you recognize that it was always a human problem, rooted in human decisions made by specific people with specific incentives.
So while the Machine may have a God, the God has a board of directors. That distinction matters, because boards of directors can be regulated, and natural forces cannot.