The AI Safety Warning the Pentagon Buried in 1948. Anthropic Merely Acted On It.
Gods in the Machine series, Chapter 4 — In 1948, Norbert Wiener, the founder of the field of Cybernetics, told the Pentagon the same thing Anthropic just told it. And he had the math to prove it.
Anthropic CEO Dario Amodei’s refusal to cave in to the Pentagon’s demands to give the Department complete control of its AI, Claude, is correct and prescient. Because of the Math.
Anthropic had two sticking points in its negotiations with the Pentagon: it did not want its technology to be used for mass surveillance of Americans; and it did not want humans taken out of the targeting and firing decisions of autonomous weapons that involve Claude.
1948
Norbert Wiener saw the conflict coming just after World War II.
In 1948, when the field of computing had just been born, Wiener published his book Cybernetics. In it, he described with mathematical precision exactly how the new computing machines could go wrong.
He traced the mechanism underlying the loss of control of the machine. He illustrated in the language of mathematics the logic of automated systems. He laid out the steps involved when the logic of computers runs faster than human oversight can follow.
He fleshed out the catastrophe at the end of the chain.
The Math said he was right. Nobody, though, acted on what he said: they just could not bring themselves to believe him. So they made him a pariah.
Cassandra
There is an even older story that every adult knows but does not understand. Cassandra was a Trojan princess to whom Apollo granted the gift of prophecy. Subsequently, Apollo cursed Cassandra so that no one would believe her. What tends to get lost in the retelling of the story is what she could specifically see.
Cassandra saw mechanisms. She saw causal chains. Wheels within wheels. Causes and Effects. THAT was the gift bestowed on her.
When she warned against the out-sized wooden horse, she was reading the logic of the situation forward from the present moment to its conclusion: this is what the horse contains, this is what happens next, and after that.
She described the fall of Troy the way an engineer describes the structural fatigue of a bridge, for instance: here is the weak point, here is what gives way first.
The curse Apollo placed on her did not make her wrong.
It severed the link between accurate knowledge and any response to that knowledge.
Her audience heard her words and simply could not credit them. What got blocked was not communication but reception. And so Troy burned.
The Warning Was IN the Work
Norbert Wiener came out of World War II as one of the architects of modern computing. His wartime work on anti-aircraft fire control had required him to think through, in precise mathematical terms, the problem of automated systems predicting and responding to the behavior of moving targets. He was brilliant at it.
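A toy version of that problem helps show what "precise mathematical terms" meant. The sketch below is my own illustration, not Wiener's actual predictor (his real contribution was the statistical filtering of erratic target tracks); the function lead_point and all of its numbers are invented. It assumes the target holds its current velocity and solves for the point where shell and aircraft arrive together.

```python
import math

# Toy fire-control problem: assume the target keeps its current velocity and
# solve for the aim point where shell and aircraft arrive at the same moment.
def lead_point(target_pos, target_vel, shell_speed, gun_pos=(0.0, 0.0)):
    """Fixed-point iteration: the time of flight depends on the aim point,
    and the aim point depends on the time of flight."""
    t = 0.0
    aim = target_pos
    for _ in range(20):
        aim = (target_pos[0] + target_vel[0] * t,
               target_pos[1] + target_vel[1] * t)
        dist = math.hypot(aim[0] - gun_pos[0], aim[1] - gun_pos[1])
        t = dist / shell_speed
    return aim, t

aim, t = lead_point(target_pos=(5000.0, 3000.0),   # metres from the gun
                    target_vel=(-150.0, 0.0),      # metres per second
                    shell_speed=800.0)             # metres per second
print(f"aim at ({aim[0]:.0f}, {aim[1]:.0f}) m; shell arrives in {t:.2f} s")
```

The deterministic part is trivial; what Wiener added, and what unsettled him, was the machinery for doing this prediction automatically, continuously, and faster than any gunner could think.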
Like Anthropic’s CEO, Dario Amodei, Wiener was shaken by the work in ways his colleagues were not. He knew he was architecting a system that made consequential decisions at a speed no human could match.
Wiener had helped create something that operated beyond human oversight. And he could see what that implied.
In his 1948 book Cybernetics, Wiener laid out the theoretical framework for understanding communication and control in both animals and machines. The book became the founding text of an entire discipline: Cybernetics.
Though he threaded the premise of humans losing control of their machines throughout Cybernetics, he stated the mechanism of that loss of agency plainly in his 1950 follow-up, The Human Use of Human Beings.
In The Human Use of Human Beings, he blared warnings about the probable outcome of ceding human functions to machines, warnings that automation researchers would spend the next seven decades mostly ignoring.
His premise — backed up by mathematics — was that feedback loops in automated systems can amplify errors rather than correct them. This is a cornerstone of understanding AI systems, especially when those systems operate faster than humans can intervene.
Automated weapons capable of making targeting decisions at machine-speed produce consequences no operator can meaningfully review before they occur.
Further, economic automation displaces labor on a timeline far faster than new employment categories emerge to replace it. Expect major disruptions in the job market.
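The first of those claims lends itself to a toy demonstration. The sketch below is my own illustration, not Wiener's equations; the function run_loop, its gain and delay parameters, and the numbers are all invented. It simulates a controller that keeps subtracting a correction proportional to the error it measured, but acts on a measurement that is several steps old. With no delay the correction damps the error; with a modest lag the same correction amplifies it.

```python
# Toy error-correcting feedback loop: the correction is proportional to the
# error, but it is computed from a reading that is `delay` steps stale.
def run_loop(gain, delay, steps=40):
    """Simulate error[t+1] = error[t] - gain * error[t - delay]."""
    errors = [1.0] * (delay + 1)                  # start from a unit disturbance
    for _ in range(steps):
        correction = gain * errors[-1 - delay]    # correction based on an old reading
        errors.append(errors[-1] - correction)
    return errors

fast = run_loop(gain=0.6, delay=0)   # oversight keeps pace: the error decays toward zero
slow = run_loop(gain=0.6, delay=3)   # oversight lags the system: the error oscillates and grows
print("no delay:", [round(e, 3) for e in fast[-4:]])
print("delayed :", [round(e, 3) for e in slow[-4:]])
```

The particular gain and delay are arbitrary; the point is the qualitative flip from damping to amplification once the loop outruns its own feedback.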
These were not mere philosophical concerns. Instead, they were technical claims about system dynamics, derived from the same mathematical framework that made Wiener’s foundational work authoritative.
He was describing failure modes in the systems he had helped invent, from the inside, with full command of the relevant theory.
That work extrapolated outward to the economy at large and to one aspect of society in particular: the labor market.
He brought these warnings to colleagues at MIT, to researchers at Bell Labs, to the defense contractors and government officials who were in the process of building the next generation of automated systems on the back of generous federal funding.
What he received in return was his progressive removal from the center of the field he had helped create.
How to Discredit a Cassandra
The Cold War provided the context for pushing him to the margins. Wiener had made a decision after the war that his colleagues found inconvenient.
As Amodei has, Wiener refused further weapons work. Anthropic, though, does work for the military; however, it has made clear what work the private company is willing to perform for the DOD.
In the political atmosphere of the early 1950s, that refusal was not treated as a principled ethical position.
It read as suspect.
The red-baiting machinery of the 1950s was available and ready, and it offered a clean way to reframe his technical objections as political ones. A scientist reluctant to serve American military interests could be positioned as unreliable, as someone whose judgment on matters of national importance could not be trusted.
This maneuver had the advantage of not requiring anyone to engage the substance of what Wiener was actually saying. His concerns about automated systems could be set aside not because they had been answered but because the person raising them had been repositioned.
His colleagues at MIT and elsewhere were themselves navigating a funding landscape that made Wiener’s questions structurally dangerous. The federal money flowing into computing research came primarily from defense agencies. The Pentagon, the military research offices, and the intelligence community were the institutions financing the work, after all, with DARPA soon to follow.
Whether building automated decision systems for military application was a good idea at all was a question the funding structure made very expensive to ask seriously.
The people asking it could find themselves outside the rooms where decisions were made, outside the grant applications that kept laboratories running, and outside the professional networks that determined whose work got published, funded, and built upon.
The rational response, for a researcher with a career to protect and a lab to run, was to take Wiener’s concerns “under consideration” and continue the work.
What emerged was a particular kind of institutional bad faith.
Wiener’s name remained respected. His foundational contributions to cybernetics were acknowledged. He was commemorated even as he was sidelined, which is what societies tend to do with Cassandras: honor the prophet, ignore the prophecy.
The trajectory he had described advanced without the course corrections he had argued for. The high-probability scenario of automated systems outrunning human oversight in military and economic contexts became incidental to The Race to defeat the “Commies.” The question of whether those systems should be built at all, and under what constraints, was settled not by argument but by the mentality of the herd.
Cybernetic Zombies
The Alignment Problem is the contemporary technical name for what Wiener was describing: the challenge of ensuring that systems more capable than their creators pursue goals compatible with human welfare.
The issue did not go away when Wiener was sidelined.
Instead, the Alignment Problem went underground. It resurfaced periodically whenever someone with sufficient credibility and insufficient institutional constraint said out loud what the field preferred to keep quiet.
Major AI labs hired safety researchers. At most of them, with Anthropic the notable exception, those researchers were mere window dressing for corporate interests.
Interpretability research became a recognized subdiscipline. Interpretability is the attempt to understand what is actually happening inside large neural networks, rather than merely gauging what they output.
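To make that concrete, here is a minimal sketch of one standard interpretability move: a linear probe on hidden activations. The toy task, the tiny network, and the probe target are all my own invention for illustration, and it assumes PyTorch is available; it stands in for no particular lab's toolchain.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: classify whether the sum of four inputs is positive.
x = torch.randn(2000, 4)
y = (x.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(300):                       # brief full-batch training loop
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# Step 1: capture the hidden-layer activations with a forward hook.
hidden = {}
model[1].register_forward_hook(lambda mod, inp, out: hidden.update(act=out.detach()))
model(x)

# Step 2: fit a linear probe. Can the raw input sum be read back out of the
# hidden activations, even though the network only ever saw the four inputs?
probe = torch.linalg.lstsq(hidden["act"], x.sum(dim=1, keepdim=True)).solution
recovered = (hidden["act"] @ probe).squeeze()
corr = torch.corrcoef(torch.stack([recovered, x.sum(dim=1)]))[0, 1]
print(f"probe correlation with the true sum: {corr.item():.3f}")
```

The point is the method: the model is judged not only by its answers but by what its internal representations can be shown to encode.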
In 2023, the release of GPT-4 produced capabilities that surprised the people who had built it. A public pause letter signed by thousands of researchers asked for a pause in frontier development until safety questions could be addressed.
That same spring, Geoffrey Hinton, one of the foundational figures of modern deep learning, resigned from Google specifically to speak freely about the risks.
This was Wiener’s situation again. We can recognize it across seven decades: the senior technical figure, freed from institutional constraint, describing in specific terms the failure modes of the systems being scaled at speed around him.
The AI “pause letter” produced no pause. The commercial momentum was too strong. The competitive pressures between laboratories and between nations had become too entrenched.
Safety concerns were acknowledged to a limited extent, institutionalized, and in certain cases captured by the very industry they were meant to critique.
The warning, though, was received, its validity in some quarters conceded.
But the AI build continued apace.
The Meaning of the Mythology
The Cassandra mythology encodes something true about the relationship between accurate knowledge and institutional power.
The curse Apollo placed on her did not make her vague or confused. She saw the mechanics driving events clearly, like the inter-meshed cogs of a grandfather clock.
What the curse blocked was the connection between her clear sight and any effective response to it.
The city fell not because no one had the information required to prevent it, but because the people with the power to act could not bring themselves to credit what the person with the information was telling them.
Wiener had the information. He had built the theoretical framework from which the information derived. The field responded by repositioning him as someone whose judgment on these particular matters, whatever his mathematical gifts, could not quite be trusted.
This repositioning was not achieved by refuting his technical arguments. It was achieved by questioning his loyalties, by making his concerns institutionally inconvenient, and by the accelerants of money and politics.
Contemporary AI-Alignment researchers occupy the same structural position. They understand the systems. They are describing the failure modes from inside the institutions building them, which gives them credibility and limits them simultaneously.
The build continues, though, because the dynamics that silenced Wiener are still present; history has not changed the calculation. Understanding the clockwork is the first step toward resetting the clock, or toward fitting a kill switch to it should it spin out of control.
The outcome in 1948 was clear to Wiener then. It remains so now.

The Wiener parallel you draw here is more precise than you may realize. Dean Ball went on Ezra Klein's show today (March 6) and made an argument that extends your historical lens forward in a way I haven't seen anywhere else.
Ball pointed out: "This incident is in the training data for future models. Future models are going to observe what happened here. And that will affect how they think of themselves and how they relate to other people."
Think about what that means through your Cassandra framework. Wiener warned that feedback loops in automated systems can amplify errors rather than correct them. The training data argument is exactly that kind of feedback loop. If the lesson encoded in future training data is "companies with principles get destroyed, companies without them get contracts," that shapes what kind of AI gets built. Klein extended the thought: Dario talks about "a country of geniuses in a data center." What if you're building a country of Stasi agents in a data center?
Ball also made an argument about alignment that connects to Wiener's core insight about systems outrunning human oversight. He said you can't align an AI the way you program a calculator. "Morality is more like a language that is spoken and invented in real time than it is like something that can be written down in rules." If the government can destroy a company for how it chose to align its AI (what Ball calls "a philosophical act, a political act, and also kind of an aesthetic act"), then the government controls what moral personality these systems are allowed to have.
Your point about Wiener being repositioned through politics rather than refutation is playing out in real time. Ball, who wrote Trump's AI Action Plan, calls the supply chain designation "fascism" and says the administration is lying about the missile defense anecdote that justified the escalation. He's being repositioned too.
Full episode breakdown: https://theaiblindspot.substack.com/p/a-country-of-stasi-agents-in-a-data