THE DEAD HAND: Nuclear War by Machine

Nov 7, 2025 | Global Nuclear Realities, Understanding the Risks

At the height of the Cold War, both superpowers feared a nightmare scenario – a sudden decapitating strike that destroyed leadership before it could act.

To discourage that possibility, the Soviet Union built a machine – an automated system designed to launch nuclear weapons if human commanders were all dead. They named it Perimeter. Western nations called it the Dead Hand.

A Chilling Concept

At its core, the Dead Hand was a device meant to guarantee nuclear retaliation even when retaliation could serve no purpose but revenge. It was the ultimate deterrent – an automated apocalypse, ready to act entirely on its own.

The logic was chilling, but also consistent with the doctrine of Mutually Assured Destruction (MAD): if an enemy knew you could retaliate in kind, that knowledge might stop them from attacking in the first place.

But for such a system to work, machines had to be trusted with a truly monumental decision: whether or not to help end our civilization and kill nearly everyone on Earth through nuclear winter.

Motivated by Fear

In the late 1970s and early 1980s, paranoia grew rapidly among Soviet leadership. America’s nuclear arsenal had become faster, more accurate, and more threatening, leading Moscow to fear that a sudden, unexpected strike might destroy command centers before leaders could respond.

To mitigate that concern, the Soviets built Perimeter – a backup system designed to ensure that their missiles would retaliate under any circumstances. The Dead Hand could detect nuclear detonations on Soviet soil, assess environmental data, and—if it judged a massive attack had occurred—transmit nuclear launch orders automatically.

In theory, it made nuclear war less likely. In practice, it made ending our civilization a process independent of human judgment.

A Grim Picture

Exact details remain classified, but unauthorized interviews with former Soviet officials paint a grim picture.

Perimeter involved a secret network of command bunkers and “command missiles.” Those missiles, once airborne, would fly across the Soviet Union while transmitting attack orders to nuclear forces.

The system was semi-automated, but not fully autonomous. Normally dormant, Perimeter would follow its programmed instructions once activated.

In order to decide, the Dead Hand monitored seismic data, radiation levels, atmospheric pressure waves, and high-frequency communications for signs of nuclear explosions or destroyed infrastructure.

If it determined conditions matched those of a nuclear war, the machine would respond by launching every missile still functional.

Launch On Warning

America never built a machine like Perimeter, but it always had fail-safe systems of some kind.

Programs like Operation Looking Glass allowed airborne command centers to retaliate if ground facilities were destroyed. The “launch on warning” option – always a decision for the president to consider – means retaliating immediately based on radar or satellite data showing incoming missiles, before a single warhead detonates.

Both doctrines address the same fear: hesitation might mean extinction. And both share the same weakness: automation diminishes human oversight. A false alarm in a system like Perimeter risks far more than confusion. A malfunctioning Dead Hand could start a nuclear war on its own before anyone could intervene.

A Stabilizing Measure?

Soviet war planners believed the Dead Hand was a stabilizing measure. They argued it reduced pressure on leaders to act quickly, knowing retaliation was guaranteed.

But the more that we trust nuclear launch decisions to machines, the less we control our own destiny. Deciding to use nuclear weapons normally involves human judgment, considerations of context, and even the ability to doubt – qualities intelligent algorithms cannot yet match.

Although it has been around for decades, Perimeter could still surprise everyone if one of its electronic or mechanical components were to fail – and a few minutes of malfunction could be all it takes.

A Modern Dead Hand

Perimeter was not dismantled when the Cold War ended.

A retired Russian Air Force general confirmed to journalists in 2009 that a modernized version remained on standby. Officially, it was “deactivated unless needed.” Unofficially, no one outside Russia knows exactly what “needed” means. Nor is it entirely certain that anyone inside Russia knows either.

And yet, amid rising geopolitical tensions and the modernization of nuclear forces, it appears that the Dead Hand still exists—faster and more networked than ever before.

Which leads to an obvious question: is building a machine capable of destroying our civilization and killing nearly everyone on Earth not itself evidence of deeper problems with wisdom, judgment, and reason?

Automation and the Next Generation of Risks

Other countries have variations of the Dead Hand in subtler forms.

Nuclear defense systems now use sophisticated algorithms for missile tracking, threat analysis, or damage assessments – critical information that leaders rely on when faced with nuclear launch decisions.

But innovations that promise more speed often introduce new ways to fail. Cyber intrusions could inject noise that critical algorithms mistake for evidence of a nuclear attack. Or hackers might penetrate nuclear command and control systems themselves. The possibilities are endless.

For military planners, speed is desirable in a nuclear counterattack. But speed without verification and enough time to think is a fast track to nuclear disaster.

A Moral Reckoning

Building an automated doomsday machine was a feat of engineering fifty years ago. But it was also a policy statement – an affirmation that mutual destruction was somehow rational, and that peace could be kept by machines made to kill.

The Dead Hand carries the doctrine of MAD to its logical conclusion, but if nuclear deterrence means our willingness to destroy civilization, have we not accepted that fate as an unspoken fact already? 

Even the words “Mutually Assured Destruction” conjure up visions of insanity when you think about it. Peace built on fear is a fragile illusion no more substantial than a desert mirage.

Echoes in the Present

Today, nuclear posture reviews and presidential announcements speak of weapons modernization programs, deterrence credibility, and tactical nukes that are easier to use. But little is said about accidents, equipment failures, cyberattacks, or even simple mistakes.

As nuclear powers automate systems and shorten response times, the Dead Hand’s existence seems almost surreal. But then, so do modern AI systems that are beginning to question their handlers and write their own code. Few people outside of Hollywood saw that coming.

Have we moved on from a world where humans might fail to a world where electronic systems and intelligent machines might succeed all too well?

Beyond the Grave

At Our Planet Project Foundation, we realize the Dead Hand is not only history, but a modern-day warning as well. It was the first system designed to launch nuclear weapons without human direction or control.

Technology provides us with the power to destroy, but automation threatens our control of that power. Genuine peace cannot be secured by electronic systems or machines made to kill – but a solution exists if we choose to use it. Systems like the Dead Hand will no longer be relevant when nuclear weapons cease to exist.

Can Nuclear Weapons Be Hacked?

In an age where power grids, financial institutions, and even the Pentagon’s top-secret military systems have all been hacked, can nuclear weapons be hacked as well? The short answer is: Yes!

Cyber attacks have already proven they can reach deep into critical systems once thought absolutely impenetrable. Nuclear command and control systems may possibly be the most secure networks in the world, yet they still rely upon fallible people and vulnerable software. 

This article explains why the inevitable existence of software vulnerabilities is the Achilles’ heel of nuclear command and control systems. It also revisits some devastating hacks that, taken together, show no digital system can ever be safe from hackers.

The Cyber–Nuclear Connection

Nuclear weapons are not standalone devices sitting in silos. They exist within vast, interconnected systems that include command and control procedures, communications networks, and early-warning satellites.

Every link in that chain—from satellites in space, to launch control officers in the field—features digital components. Those systems are supposed to be isolated and secure, but their reliance on software makes them inherently vulnerable. In the 21st century, cybersecurity – and therefore nuclear security – is every bit as crucial as it is elusive.

But directly proving that nuclear command and control systems are vulnerable to hackers is difficult when the government keeps the details secret. So this article instead explains why all software can be hacked, and revisits successful hacks of the most secure government and military systems. The conclusion seems clear: if top-secret government files, cutting-edge technology, financial institutions, and even the most secure military systems can be hacked, then any digital system can be hacked.

STUXNET: When Code Met Atoms

In 2010, the Stuxnet computer worm proved that hackers can destroy physical equipment from afar. The worm, designed to target Iran’s uranium enrichment facilities, secretly reprogrammed industrial controllers to destroy the centrifuges they ran, while at the same time feeding reassuring false data to operators.

Stuxnet did not attack nuclear weapons, but an astonishing new reality was born when a few lines of computer code caused massive physical damage in a nuclear environment. Today the question is clear: if a fifteen-year-old computer worm could cripple a digitally isolated and highly secure uranium enrichment plant, what could modern malware do to nuclear command and control systems?

Threat Categories 

The threat involves more than hackers penetrating military systems to launch missiles. While that’s certainly possible, it’s also true that hackers might alter information from early-warning radars and satellites, digital communications equipment, or any other data that a president might consider when deciding to use nuclear weapons.

Cyber threats to nuclear systems fall into three main categories:

Early-Warning Systems. These detect incoming attacks with satellites and radars. A skilled hacker could manipulate those signals to trigger false alarms or blind systems to real threats. Satellite communications professionals have already found software vulnerabilities capable of compromising nuclear early-warning satellites if properly exploited. It goes without saying that there were vulnerabilities they did not find.

Command and Control Links. Launch orders come via encrypted communications transmitted over networks that may span continents. A cyber-breach could delay, block, or even falsify orders, creating confusion at the worst possible time. General James Cartwright – a former commander of US Strategic Command and Vice Chairman of the Joint Chiefs of Staff – is so worried about this possibility that he now works internationally for nuclear disarmament.

Supply Chains. Most of the software and firmware-embedded hardware that support nuclear infrastructure and operations is acquired from civilian contractors. Compromised data or infected devices could introduce vulnerabilities that spread across entire systems.

The difficulty of defending against an extreme number of constantly changing threats is obvious. But the unfortunate truth is that worse threats exist. 

Software Vulnerabilities

Hackers depend upon the fact that software contains vulnerabilities. But to realize what that means, we must first understand what hackers do.

Most people can arrange letters and words into coherent sentences. Programmers perform a similar function when writing instructions for computers, except the language they use is computer code. Collectively called software, code is nothing more than instructions in a computer language telling computers what to do in different situations. People who write software are called programmers, and fluency in code is their stock in trade – a fluency hackers share.

But a critical difference between human communication and software programming is that a programmer’s language must be absolutely precise. If a single character is missing or out of place, the entire meaning may be lost or dramatically altered. We often understand what someone means during human communication, even if their language is not perfect. Not so in computer language, and therein lies the rub.
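A tiny, hypothetical example (in Python, with invented function names) shows how a single wrong character can silently invert a program’s meaning without producing any visible error:

```python
# Two versions of the same transfer-limit check. The only difference
# between them is one character in the comparison operator.

def needs_review_correct(amount, limit=10_000):
    return amount > limit   # flags transfers ABOVE the limit, as intended

def needs_review_buggy(amount, limit=10_000):
    return amount < limit   # one wrong character: flags transfers BELOW it

# The buggy version waves a $50,000 transfer straight through:
print(needs_review_correct(50_000))  # True  (flagged, as intended)
print(needs_review_buggy(50_000))    # False (silently ignored)
```

Both versions run without complaint; nothing crashes, and no warning appears. Only careful analysis – or an attacker probing for exactly this kind of slip – reveals the difference.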

The weakness in software is its complexity. Anything that complicated invariably contains errors – flaws in the program that hackers might exploit.

Microsoft’s Windows Operating System is a good example. Windows 11 contains over fifty million lines of detailed instructions in computer code. As a hacker planning to penetrate Windows, your first step would be searching through that code for software vulnerabilities.

Complex programs contain many vulnerabilities, and skilled hackers will find some if they try. These are structural flaws in software itself where hackers can insert code of their own undetected. The malware then becomes an integral part of the operating system, unknown to the computer’s owner or anyone else.
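To make the idea concrete, here is a minimal, hypothetical sketch (in Python, with invented names and paths) of one classic structural flaw hackers hunt for: a file-serving routine that builds paths by naive string concatenation, letting an attacker “walk” out of the directory it was meant to protect.

```python
import os

PUBLIC_DIR = "/srv/public"   # hypothetical directory of files meant to be shared

def resolve_vulnerable(name):
    # Flaw: blind concatenation. A request like "../../etc/passwd"
    # escapes PUBLIC_DIR entirely -- the code never checks.
    return os.path.normpath(PUBLIC_DIR + "/" + name)

def resolve_hardened(name):
    # Fix: normalize first, then verify the result still lies inside PUBLIC_DIR.
    path = os.path.normpath(os.path.join(PUBLIC_DIR, name))
    if not path.startswith(PUBLIC_DIR + os.sep):
        raise ValueError("path escapes the public directory")
    return path

print(resolve_vulnerable("../../etc/passwd"))  # /etc/passwd -- outside PUBLIC_DIR
```

The vulnerable version looks perfectly reasonable at a glance, which is precisely the point: flaws like this hide in plain sight until someone thinks to feed the code input its author never imagined.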

This hacker-written software is malware, often loosely called a bug. There are many kinds of bugs, such as viruses or worms, but all have a similar goal: successfully inserting a payload into the computer’s operating system. The payload tells the bug what to do once inside. The possibilities are endless, depending on the exact situation and the hacker’s skill.

Zero-day Exploits

Software exploits are malicious code that a hacker writes to attack vulnerabilities in existing software. “Zero-day” refers to the newness of the exploit: defenders have had zero days to study or patch it. A bug’s life begins at activation. That’s day one.

But if a bug is not activated, it is a zero-day exploit that remains zero-days old until it’s finally unleashed. A secret. No one else may know about it except someone the hacker might have told. But a bug is no longer a secret once put into play.

Suppose an activated bug infected Windows 11. The experts at Microsoft would then analyze the new malware for the best way to neutralize it. But the experts can do nothing about zero-days. Bugs that have not yet been unleashed are still a secret, and even the finest programmers in the world cannot fix what they don’t know exists.

The software running Google’s internet services today contains over two billion lines – more than 100 billion characters – of code. Trying to find every flaw in a program like that is like counting grains of sand on a windy beach. This is why hackers have not been neutralized, despite the very best efforts of some extremely smart people.

Writing 100 billion characters of perfectly flawless software is impossible, especially when even the finest programmers cannot spot vulnerabilities by ‘casually reading’ the code. It must be painstakingly analyzed, line by line and character by character, constantly accounting for how each line and character relates to others. It is not surprising that software vulnerabilities can hide in plain sight, undetected in seemingly endless lines of code.

The FBI buys zero-days, but the world’s largest consumer is probably the National Security Agency. The NSA may have the most bugs-in-waiting, but that’s a two-edged sword as we shall soon see.

Titan Rain

The Department of Defense and its military contractors have suffered devastating hacks. During one attack called Titan Rain, hackers stole terabytes of top-secret information from government agencies like the US Army Space and Strategic Defense Command, the Naval Ocean Systems Center, the Defense Information Systems Agency, and the US Army Information Systems Engineering Command.

Titan Rain also penetrated secure files at Sandia National Laboratories containing nuclear weapons design secrets. Detailed plans were stolen for the W88 and W78 warheads that currently arm America’s Trident II submarine-launched missiles and Minuteman III ICBMs. The attacks were generally attributed to China, but positive identification was never made.

Agent.btz

The cyberattack against DOD computers by a worm called agent.btz was one of the worst ever.

The bug first penetrated NIPRNet, the Pentagon’s network for logistics. Then it wormed its way into SIPRNet, the military’s network for classified communications. The DOD won’t comment on reports that agent.btz corrupted the Pentagon’s top-secret network, JWICS, but President George W. Bush was informed that America’s military might be compromised.

No one could stop agent.btz, but the attacks suddenly ended on their own. Who was behind them remains a mystery, as does the reason for it all. But after rooting out malware for a year, the Pentagon finally realized that agent.btz had a wide variety of tools capable of penetrating secret files and opening digital back doors into the military’s most secure systems. The hack remains unsolved.

Eligible Receiver

During an internal exercise called Eligible Receiver, NSA hackers – the “Red Team” – were given the task of attacking DOD networks, including the Pentagon and the US Pacific Command.

Restricted to the methods and ordinary tools available to average hackers, the attackers were surprised by how easily they penetrated critical systems, even gaining administrator privileges.

The National Military Command Center is tasked with relaying nuclear launch orders to officers in the field. Their systems fell to the Red Team on the first day. Three more days were all it took to compromise the DOD’s remaining systems.

But then the Red Team got a surprise: French hackers had already penetrated the same systems! Yet the question remains unanswered: Was it really them?

The Shadow Brokers

The Washington Post ran a story claiming that a hacker group called the Shadow Brokers had posted top-secret malware online, which they stole from the NSA. The bugs included some extraordinarily powerful, cutting-edge zero-days with names like Buzzdirection and Epicbanana.

These were not ordinary threats, but exotic malware designed to penetrate firewalls of companies like Cisco, protectors of the world’s most sensitive data. According to a former NSA employee who worked for the Agency’s top-secret Tailored Access Operations, the 300 megabytes of stolen zero-days appeared to be genuine.

APL

The Applied Physics Laboratory at Johns Hopkins University performs a wide variety of work for a wide variety of clients. APL scientists are true cyber-wizards who perform trailblazing research for the NSA, the CIA, and the Pentagon. But one day, they discovered a hack in progress, stealing classified data from secure files.

Instantly alarmed, they tried everything they could to stop the carnage, but to no avail. Finally, they did the only thing left to do; they physically pulled the plug on their equipment. It took months of manually rooting out malware before they felt comfortable going back online. But even then, they couldn’t be sure if they got it all, or if new bugs were not already reinfecting their systems. Unfortunately, they probably didn’t, and there probably were.

F-35 Lightning II

America’s Joint Strike Fighter program is the most expensive in history, with lifetime costs estimated at well over a trillion dollars. The resulting F-35 Lightning II fighter jet employs fly-by-wire technology, meaning the pilot’s control inputs are carried out by software and electronics rather than direct mechanical linkages.

In 2015, the German magazine Der Spiegel published NSA documents stolen by a former contractor named Edward Snowden. The documents revealed that China had hacked nearly fifty terabytes of F-35 design data – more than four billion pages on this one aircraft alone! Can China now defeat F-35s by hacking their systems in battle? They have the plans.

Winslow Wheeler, a senior investigator with the Project on Government Oversight, said, “If they got into the combat systems, it enables them to understand it, to be able to jam it or otherwise disable it. If they got into the basic algorithms, somebody better get out a clean piece of paper and start designing all over again.” Another official quoted by the Washington Post stated, “They just saved themselves twenty-five years of research and development. It’s nuts.”

Cryptocurrency

Other recent hacks stole cryptocurrency. Advocates of digital currency say their electronic transactions cannot be hacked, but the facts say otherwise.

According to the blockchain data platform Chainalysis, hackers stole $3.8 billion in cryptocurrency in 2022 from numerous exchanges and platforms worldwide. Hackers of the Ronin Network set a single crypto-hack record when $615 million in cryptocurrency was stolen in March 2022, surpassing the Poly Network hack of roughly $612 million in August 2021.

Apparently, the Bitcoin database itself has not been hacked, but users of Bitcoin and other cryptocurrencies must be just as careful with passwords and other security measures as they are with bank account information. Cryptocurrency hacks are becoming more common as digital currency expands. But cryptocurrency is not insured like money in a bank, so the usual advice applies: Do not invest money you cannot afford to lose.

Artificial Intelligence vs. Command and Control

AI is here. Intelligent algorithms are nothing like conventional software. Once created, they think and learn independently. Intelligent algorithms learn from their mistakes and correct them, alter plans as obstacles arise, analyze security programs to remain undetected, and never, ever stop. They are far greater threats than other forms of malware.

Peter Highnam is a former deputy director of the Pentagon’s Defense Advanced Research Projects Agency (DARPA). Mr. Highnam speaks of the ‘new wave’ of next-generation Artificial Intelligence in which “…computers reason in context and explain their results.”

One goal is improving the analysis of communications among adversaries or terrorist groups through deeper machine knowledge of science, history, religion, philosophy, technology, economics, geography, and a host of other background information that computers might use to logically explain whatever conclusions they reach. In other words, cognition.

Skills in writing such powerful algorithms might logically translate into creating intelligent algorithms capable of penetrating nuclear command and control systems. Suppose such an algorithm, injected as a hacker’s payload, acquired the launch codes and related information needed to use nuclear weapons. Could it then issue ‘authentic’ orders to officers in the field? No one wants to think it could be that simple – and yet it may be, for a skilled hacker able to write or acquire that special algorithm.

Hacking Without Hackers

Trident II ballistic missile submarines are the sea-based leg of America’s nuclear triad. During a routine audit, the Pentagon once discovered an unprotected electronic backdoor into the naval broadcast communications network used to transmit nuclear launch orders to submarines at sea.

The door was immediately closed, but what if a bad actor with a deadly algorithm had found that door while it was still open? A highly skilled hacker would not have been needed to introduce the algorithm.

Physical avenues also exist for attackers to insert launch codes acquired by intelligent algorithms. Thousands of miles of underground communications cables in Montana, Wyoming, and North Dakota connect Minuteman III ICBMs with their Launch Control Centers (LCCs). Those copper wires run about six feet underground beneath ranchland and wilderness so isolated that, in places, a person can drive for miles on unpaved backroads completely alone.

It is not inconceivable that a wire might be compromised somehow, especially now that inductive coupling equipment exists to inject electronic signals into wires without touching them. The wires are being replaced by fiber optic cables, but those too can be corrupted if access is gained. No hackers needed – just stealthy operators with the right equipment and that special algorithm.

Also vulnerable are the radio antennas at the unmanned silos. They can receive orders directly from the command post aircraft, should the hardwired system fail. Correct codes and information received by silo antennas can bypass LCCs and launch missiles without field officers’ knowledge or consent. Once again, no hackers required.

The point is that the government and military now face the nearly impossible task of defending an incredibly complex system from a nearly infinite number of threats. The better the uncontrolled and unpredictable nature of cyberspace is understood, the clearer the problem becomes.

As former NSA Director Mike McConnell once said, “Attackers need only find one way in, but we have to defend the whole system.” To think that nuclear command and control systems are somehow impervious to that reality is ridiculous.

Stakes We Cannot Ignore

Hacking nuclear weapons command and control systems may seem like science fiction, but so did airplanes, rocket ships, and even the idea that computer code could somehow destroy physical equipment. Impossible! Yet the more digital our world becomes, the more science fiction turns into reality.

The incredible stakes cannot be ignored. The fate of our civilization and the life of nearly every human being on Earth hang in the balance. After millennia of civilization-building, why are we so close to tearing it down?

A Good Plan

At Our Planet Project Foundation, we realize that today’s nuclear weapons are far more threatening because of hackers. Reliably securing nuclear command and control systems is no longer possible. Those ‘improved systems’ now contain millions of lines of vulnerable new software, any line of which could open a path to nuclear disaster.

Preventing that disaster by strengthening measures such as verification procedures and digital encryption, or improving the security of software and physical devices, is not enough. Those measures are scarcely more effective than a Dutch boy with his finger in the dike. Not a good plan.

A good plan would be eliminating nuclear weapons before tragedy strikes. The totality of the evidence is overwhelming: We will have a nuclear war someday, probably sooner rather than later, as long as nuclear weapons remain.
