The Trident II D5 missile is powered, in the first moment of its flight, by steam. Inside the launch tube, an explosive charge vaporises a tank of water in a flash. This creates a sudden huge increase in pressure that pushes the missile – which weighs almost 60 tons and is one and a half times the length of a Routemaster bus – out of the submarine, up through the water and a short distance into the air above the surface of the ocean. As soon as the missile senses that it has begun to fall back towards the waves, the first of its rocket engines ignites and it begins to climb. The power of these engines is phenomenal; in less than two minutes, the missile reaches 24 times the speed of sound, covering five miles a second.
The last publicly known Trident missile test by the UK was in June 2016. HMS Vengeance, then submerged off the coast of Florida, released a missile programmed to head south-east across the Atlantic, crossing thousands of miles of unpopulated ocean to a point below the southern tip of Africa. That is not what happened. According to defence sources, the missile headed instead towards the mainland United States.
Dr Beyza Unal, a senior research fellow in the International Security Department at Chatham House, remembers discussing the malfunction with colleagues at the UN. “There were rumours about it,” she recalls. The MoD’s explanation was that the missile (which did not contain nuclear warheads) was not faulty, but that it had been supplied with the wrong information; the missile itself quickly recognised the mistake and self-destructed. While Downing Street claimed that the test had therefore been “successful”, the pattern of events was of the kind Dr Unal and her colleagues look for. “If a cyberattack happened to a missile system,” she explains, “that is the kind of consequence that we would see – the ballistic missile or the cruise missile going off from its route.”
It is, however, far from the only scenario being discussed in the increasingly pressing field of nuclear cyber security. For more than 70 years, a small group of nations has used its exclusive control of weapons capable of killing vast numbers of civilians to maintain what Winston Churchill described as “the delicate balance of terror”. But in recent years, as cyberattacks have become more sophisticated and effective, it has become increasingly likely that they will in some way compromise the world’s most dangerous weapons.
Another recent event with all the hallmarks of a hack took place on the morning of 13 January this year, when every smartphone in the state of Hawaii suddenly displayed the message: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” A second message, sent 38 minutes later, acknowledged that the alert had been a false alarm. While the incident was blamed on an individual pressing the wrong button, Unal says it illustrates that a cyberattack on a nuclear defence system need not directly affect munitions. For Unal the most vulnerable parts of the nuclear weapons complex are “communications, and command and control. The vulnerability relies on the communication channel, and based on misinformation, the decision maker makes a faulty decision. That is, I think, the most worrisome part.”
Dr Andrew Futter is the director of research for politics and international relations at the University of Leicester. For his third book on nuclear arms policy, Hacking the Bomb: Cyber Threats and Nuclear Weapons, Futter interviewed people in both the cyber security and nuclear parts of the MoD and the Royal Navy, former officials of the US Defense Science Board and the Obama administration, as well as experts from Russia, China, India and Pakistan. With many of these people he discussed what would happen “if the Cuban Missile Crisis happened today – would JFK have the same amount of time? Would you have tweets coming out all the time, and CNN watching the ships moving in?” The broad consensus, he says, is that a modern nuclear crisis would take place in a “different information environment,” one in which “others may interfere with or obscure” the information needed to make the right decision. In such a crisis today, “the time between discovery and decision would be compressed, and that doesn’t normally make for good decisions.”
Misinformation poses the most serious risk, says Futter, to “those ICBMs in the US and Russia that only need a few minutes to go.” Simple interference in communications – Unal points to satellites as a potential weak point – could be enough to stop the most important military decisions being made with a cool head. “Keeping weapons on high alert in a cyber environment,” says Futter, “is an enormous risk.”
Beyza Unal recalls the story – related memorably in David E. Hoffman’s Pulitzer Prize-winning investigation of automatic nuclear systems, The Dead Hand – of one of the most cool-headed decisions of the Cold War. The Soviet lieutenant-colonel Stanislav Petrov was in charge of the Serpukhov-15 early warning station on the night in September 1983 when the Soviet Union’s satellites, sending data to the country’s most powerful supercomputer, registered a nuclear attack by the US. Despite being warned that five ICBMs were on their way to the USSR, Petrov told the decision-makers above him that the signals were a false alarm. “And he was right,” says Unal. “But a cyberattack could look like that, a spoofing of the system. Some say that humans are the weakest link in cyber issues. I say humans are both the weakest link and the strongest link. It depends on how you train them.”
Petrov was able to make the right decision because he had spent a decade working on the base, developing its systems. A good sense of what a glitch looked like, paired with his contextual knowledge of how the US would be expected to act, allowed him to read the situation correctly. An automated system – devoid of hunches, experience or wider geopolitical understanding – may not have made the same call.
Despite this, armies around the world are upgrading their nuclear weapons with greater automation and connectivity, potentially at the expense of training. “In the US,” says Futter, “there’s a real enthusiasm towards moving to, and I quote, ‘internet-based or internetted systems’ for use in command and control. That sends shivers up my spine.”
Misinformation is very dangerous, but a hardware vulnerability would be more serious still – and both experts agree that such vulnerabilities exist. Unal is concerned by what she describes as “the supply chain vulnerability,” because it is “something that states generally can’t do much about. The nuclear weapons system is composed of small components. All those components cannot come from the nuclear laboratories. How many computer chips can a lab produce? So, you need to rely on the supply chain for certain components. Do you know that those components are all secure?”
In 2014, a Pentagon investigation found Chinese-made components and materials in Boeing and Lockheed military planes and in Raytheon missiles. But it is the smaller subcontractors used by large defence companies that worry Futter and Unal. “Even if [the component] is made in the UK,” says Unal, “how do we know that the company that produces those chips is paying attention to the cyber security risks? It’s hard to regulate these things, but there should be a reporting structure on how they are securing components.”
Data breaches and other incursions have been reported by defence companies, but smaller subcontractors could be wiped out by the reputational damage of going public. For this reason, Unal says we “probably don’t know the vast majority of the vulnerabilities” that exist in the supply chain.
Futter says it’s important also to remember that in the case of the US-built Trident system, “we don’t write the coding in the missiles”. In a world in which a new car comes with over 100 million lines of code, Futter says he doubts that “anyone in the UK could really check through that coding, to see that it is exactly what we think it is. If anybody managed to find a vulnerability in Lockheed or one of their subcontractors in the US and compromised the Trident SLBM, there would be nothing we could do about it.”
Futter says the attitude of “a number of British officials” he’s spoken to “was that Trident submarines can’t be hacked because they’re lying on the bottom of the North Atlantic somewhere. But these submarines rely on different systems for the nuclear propulsion plant, navigation, and even for things like the toilets, fresh air and fresh water. All these different computer systems have to be written and built by somebody. The submarines come back into port and have to be updated.” Unal agrees: “how do you do maintenance? You infiltrate the system. You do that using a [back]door, and that is an attack vector.”
Trident, however, is low on the list of concerns in this field. Both experts point to the India-Pakistan region as an area in which, says Unal, “there is a high likelihood of nuclear weapons use. The threshold is really low. Any uncertainty created through jamming or spoofing information could create an attack.” Between the world’s only hostile nuclear neighbours, Futter says the “timelines are so short for decision-making that even something like a denial of service attack, if it happened in the middle of a crisis, could escalate things quickly.”
Futter says the relationship between the US and China could also be compromised due to the nature of their systems. “China has linked in the support systems for conventional and nuclear weapons,” he explains, “so an attack to try to shut down a conventional Chinese weapons system could accidentally compromise a nuclear weapon. And then you’ve got all sorts of escalation.”
Of the nine nuclear powers, one state values nuclear cyber security more than any other. “Russia now sees the possibility of this happening – it’s even stated in its national security documents – as an enormous threat,” says Futter – “one of the biggest threats it faces.”
This concern is well informed; Russia is probably the only state to have hacked an adversary’s weapons during conflict.
In the spring of 2013, a Ukrainian army officer called Yaroslav Sherstuk developed an Android app to speed up the targeting process of the Ukrainian army’s Soviet-era artillery weapons. The app reduced the time to fire a howitzer from a few minutes to 15 seconds. Distributed on Ukrainian military forums, it was installed by over 9,000 military personnel. By late 2014, however, a new version of the app began circulating. The alternate version contained malware known as X-Agent, a remote access toolkit known to be used by Russian military intelligence. The cyber security firm CrowdStrike, which discovered the malware, said that X-Agent gave its users “access to contacts, SMS, call logs and internet data,” as well as “gross locational data”. In the critical battles in Donetsk and Debaltseve in early 2015, the app could have shown Russian forces where Ukraine’s artillery pieces were, who the soldiers operating them were talking to, and some of what they were saying. It may be, then, that Russia’s concern – Futter describes it as “panic” – about the risks of hybrid warfare is based on the knowledge that it has been used in battle, and it works.
In the UK, by contrast, there has been little public debate on the cyber security of nuclear weapons. Text searches show that in six years, none of the Updates to Parliament prepared by the MoD on the nuclear deterrent contains a single mention of cyber security. Hansard shows fewer than five mentions of the subject in a decade; no cabinet member has ever spoken publicly on this issue.
This may be partly because nuclear weapons are already hugely expensive and politically divisive. “If they ask for the extra money for cyber security,” says Futter, “then someone will ask them how secure it really is.”
But Beyza Unal underlines why this question must be asked anyway. On the question of why states haven’t yet hacked each other’s nuclear weapons systems, she says the first point to recognise is that “we don’t know if they have or not. Maybe they have already. Even if they haven’t done it, probably they will, because there is no system that prevents them hacking each other’s weapons systems.”
Nuclear weapons are supposed to be political rather than military ordnance: they keep the peace without ever being used. But without open discussion of the risks, without preparation and training at all levels to defend against them, and without international agreement on the boundaries of such actions, the barriers to their use are being silently and invisibly eroded.