6 September 2019

GCHQ’s real agent “Q”

By Will Dunn

For somebody who has worked at GCHQ for almost 19 years, Ian Levy is disarmingly frank about his background. He has been “breaking computers”, as he calls it, since his parents bought him a Sinclair ZX81 – “a lovely thing” – in 1981. Ten years later, at the University of Warwick, he completed a maths degree, followed by a PhD and postdoctoral research in computer science, until he was faced with a choice: “become an academic, [or] get a real job.” With a grin he observes that “the civil service is halfway between, right?”

Of his entry into the intelligence services, Levy remembers that “you had to do a maths test. If you got one answer correct, you got an interview.” Candidates had three hours to complete the exam, but the test was not whether the interviewee could find a solution – it was how they found it. “All of the problems had a trick. They’d take you an hour to do, or five minutes, if you could spot the trick.” It is this mindset – looking around rules, finding holes – that Levy says is valuable in security. The real question GCHQ wants to know of its people is, he says: “can you be a devious little sod?”

One vital use for this deviousness is to discover security vulnerabilities, which Levy describes as “mistakes that haven’t been exposed yet”. When companies approach the National Cyber Security Centre for advice on their products, it is the job of Levy’s people to look at the software as a set of rules, and to ask “are they complete? Are they consistent? Is there a way through them?” These are the questions that criminals will ask of a company’s products, so it’s important that the NCSC asks them first.

The new networks
Such questions are particularly pertinent now, as governments, businesses and individuals prepare for the next big shift in communications technology – 5G. Faster, more distributed mobile communication may enable a variety of new technologies in transport, healthcare, business and public services. As more and more devices become “smart” and connected, however, there is some concern that this will greatly increase the opportunities for criminals to gain access to these devices.

But Levy says it’s important to remember that, as with many technological advances, a lot of what’s being said about 5G is hype. “For the next couple of years,” he points out, “the vast majority of 5G use cases are faster cat videos”. The promises of 5G could easily remain unfulfilled, he says. “Unless we can get the use cases right, and the commercials around that – I don’t see how it’s going to take off in the way that everybody says.”

It’s also important to consider which of the 5G promises we actually want. “Some of them make sense – smart cities, autonomous vehicles, smart manufacturing,” Levy says, but remote surgery – which was put forward as a use case for 5G in a recent report produced by Deloitte for DCMS – he finds “really scary”, because “you’ve got a guy on a beach in Mexico, on holiday, and he’s doing remote surgery… that doesn’t sound like a great idea. First, you don’t know how many Mai Tais he’s had.”

Secondly, in a quick demonstration of devious thinking, he suggests that one way to disrupt a remote medical procedure would be simply to “sit outside a hospital with a wideband RF generator, and just deny [the hospital] service.”

But the technologies behind 5G are not entirely new. Levy says that while the radio technology used for 5G is new, “it’s broadly an evolution of the 4G radio. It’s not new physics.” The big difference is in “the way the networks are laid down, and where the intelligence sits”. While a 4G network is typically controlled by a single “core” (which Levy describes as “the brain” of the network), 5G networks are likely to use a “virtual” core that is hosted on a number of different servers. Levy explains that, from a security point of view, this is actually a benefit. “Virtualisation, we understand as a community, because it’s what Amazon and Azure and Google run on. The amount of security research that’s already been done against the infrastructure that we’re going to run the 5G stuff on – x86 servers, virtualisation platforms, orchestration platforms – is huge compared to the amount of research that’s been done against proprietary telecoms kit.” If 5G networks employ more “commodity hardware and software”, he says, they could actually be easier to secure.
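
For readers who think in code, the architectural difference can be caricatured in a few lines of Python. This is a toy model, not telecoms software: the 5G function names (AMF, SMF, UPF, AUSF) are real core components, but the scheduling logic is purely illustrative.

```python
# Illustrative toy only: the contrast Levy describes, not real telecoms code.
# A 4G-style network has one physical "core"; a 5G-style virtual core is a
# set of network functions scheduled across ordinary x86 hosts, the way
# cloud workloads are.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    functions: list = field(default_factory=list)

def place_virtual_core(functions, hosts):
    """Spread core functions across commodity hosts round-robin, so no
    single box holds the whole 'brain' of the network."""
    for i, fn in enumerate(functions):
        hosts[i % len(hosts)].functions.append(fn)
    return hosts

legacy_core = Host("single-4g-core", ["all control-plane logic"])
virtual_core = place_virtual_core(
    ["AMF", "SMF", "UPF", "AUSF"],  # real 5G core function names
    [Host("x86-server-1"), Host("x86-server-2"), Host("x86-server-3")],
)
for host in virtual_core:
    print(host.name, host.functions)
```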


At the same time, he adds, “you could easily build a 5G network that was badly architected, and stupid, and easy to break”. Does he worry that telecoms companies, in their rush to capitalise on the 5G hype, will do just that? “There’s always a risk, but I don’t think it’s a high risk in the UK.”

That said, Levy agrees he would welcome more security in the telecoms networks, although this could be as much a matter of skills, management and organisation as hardware and software. Last April, an attacker the NCSC identified as Russian military intelligence gained access to network infrastructure in the UK. “The Russians were scanning parts of the internet, looking for things that looked like certain types of equipment – equipment that’s used in telecoms networks. And when they found some, they tried a bunch of things to try to get access to them. Simple things, not massively sophisticated. And in a few cases, they were successful.”

Levy won’t say how unsophisticated the attack was – but when asked if it was something as simple as a network employee using “12345” as a password, he says it “wasn’t quite that bad, but along those lines”. The crucial mistake, however, was that sensitive parts of the network – the “management plane” – were accessible from the internet. “There was no Russian kit in that network,” he says, but “poor operational security” left the door open enough that only a light push was needed.

Levy clearly empathises with the people whose job it is to update and patch networks, a process he describes as “very, very risky”. When the O2 data network was taken down by an expired certificate last year, the company claimed the outage cost tens of millions of pounds.

The question of whether telcos could be made more secure by regulation is “a matter for the ministers”, but the mobile standards that operators follow are, Levy explains, designed to make sure that phones and equipment from different providers work together. Standards alone do not confer security. “A vendor following the 5G standard doesn’t automatically make great products – as we’ve seen with Huawei.”
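
The underlying hygiene point can be illustrated with a short Python sketch using only the standard library: from an external vantage point, check whether a device’s TCP management ports answer at all – the kind of exposure the attackers were scanning for. The address is a placeholder, and UDP services such as SNMP would need a different probe.

```python
# Minimal sketch: from an external vantage point, check whether
# management-plane TCP ports on your own equipment answer at all.
# Only scan infrastructure you own or are authorised to test.
import socket

MANAGEMENT_PORTS = {22: "ssh", 23: "telnet", 443: "https-mgmt", 830: "netconf"}

def exposed_management_ports(host, timeout=2.0):
    """Return the management ports that accept a TCP connection from here."""
    open_ports = []
    for port, name in MANAGEMENT_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append((port, name))
        except OSError:
            pass  # closed, filtered or unreachable: the desired outcome
    return open_ports

if __name__ == "__main__":
    print(exposed_management_ports("192.0.2.1"))  # RFC 5737 documentation address
```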

The Chinese puzzle
Diplomacy and espionage have always been about communication. As the internet has grown and become mobile, these activities have moved online and the infrastructure behind the internet has become a critical strategic asset. Nowhere is this more evident than in the debate on the safety of using equipment made by Huawei, a company closely connected to the Chinese government.

While the NCSC can say with “high confidence” that Russia is “almost certainly” to blame for certain cyberattacks, it is much harder to say what Huawei is or isn’t doing, because Huawei’s equipment has been bought and installed deliberately by telecoms providers, who would like to use it, if it can be shown to be safe. One of the techniques by which the NCSC tries to establish that the software in a mobile phone mast or a network switch will securely handle your texts and browsing data – without quietly copying in the People’s Liberation Army – is called “binary equivalence”. Levy explains that this means testing the source code for the products Huawei is running in the UK, by “building” this source code into the machine code that operates the infrastructure. If the NCSC’s Huawei Cyber Security Evaluation Centre (HCSEC) can “come up with the same answer” as the binary code running in the equipment, then, says Levy, they can prove “these binaries are equivalent, they do the same thing and nothing more”.

The problem is, the NCSC can’t get Huawei’s source code to produce the same binary twice. In fact, he says, “Huawei’s build process is so arcane and complex that even they can’t build to the same output twice.” With 85,000 engineers and a huge variety of products, which are often customised for individual customers, it is extremely difficult for the British security services “to say that the source code… is or is not the thing that is running in the network today”.
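
What HCSEC is attempting is, in essence, a reproducible-build check. A bare-bones sketch of the idea in Python – with a placeholder build command and file paths, and glossing over the genuinely hard part, which is making the toolchain deterministic at all – might look like this:

```python
# Bare-bones sketch of a reproducible-build ("binary equivalence") check.
# Build command and paths are placeholders; the hard part in reality is
# making the toolchain deterministic (pinned compilers, flags, timestamps).
import hashlib
import subprocess

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def binaries_equivalent(build_cmd, fresh_output, reference_binary):
    """Rebuild from source, then compare the fresh output byte-for-byte
    (via hash) against the binary taken from deployed equipment."""
    subprocess.run(build_cmd, check=True)
    return sha256_of(fresh_output) == sha256_of(reference_binary)

# Hypothetical usage:
# binaries_equivalent(["make", "firmware"], "out/fw.bin", "deployed/fw.bin")
```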

But this doesn’t mean it’s definitely spyware? “No, no,” he replies, breezily. “It’s just terrible engineering.” In cyber security, as in medicine, people go to experts looking for definitive answers and return with probabilities. Levy points out that in the offices of HCSEC, in Banbury, Oxfordshire, there are around 20 technical experts; a single Huawei product can contain 300 million lines of code. “I’m not going to guarantee you anything.”

Perhaps more reassuringly, Levy says his approach is therefore to “assume that there is a problem in every bit of kit, regardless of whether it was made by Huawei, Cisco, Nokia, Ericsson… assume it’s the worst possible vulnerability that could be in there, and try to build a network so that it doesn’t fail catastrophically if that’s true.”

But the government has not created new specialised risk management units for Ericsson and Nokia. Alongside the question of whether or not a piece of Huawei kit is, as Levy puts it, “a bag of spanners” is the fact that it was made by a Chinese company. Levy says that GCHQ began looking at Huawei equipment in 2003, under the assumption “that any company in China could be compelled by the Chinese state to do anything they want… to assume anything else would be crazy. They helpfully codified that into law, a couple of years ago.” China’s “geopolitical and legal differences”, he says, mean that from a security point of view, “Huawei is riskier than other vendors”.

However, it would also be simplistic to think that espionage by the Chinese state would only come via Chinese equipment. Levy points to Juniper Networks, a large American producer of network technology. “Juniper’s source code system was hacked,” says Levy, “probably by the Chinese, and back doors were put into Juniper’s code. Nobody noticed for a year.” No company is above suspicion. “It’s a different set of risks, depending on how you use different vendors.”

Active defence
At the other end of the scale from state actors and “advanced persistent threats” are the huge number of low-level cyber threats – which Levy has described as “adequate pernicious toerags” – that, cumulatively, constitute a significant threat. Combating these has involved looking “broadly [at] how the internet works”, and finding ways to secure “all these protocols that people have always said are terrible”. The NCSC has worked with HMRC, for example, to fight the huge number of phishing and spoofing attacks that were directed at British taxpayers. When the programme began, “HMRC was the 16th most phished brand in the world, which is significant, given you’ve got people like PayPal. A couple of weeks ago, it was 146th.”
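
Much of that anti-spoofing work publicly rested on email authentication records such as DMARC, which let a domain owner tell receiving mail servers to reject messages that fail authentication. A sketch of checking a domain’s published DMARC policy, using the third-party dnspython package, might look like this (the domain is illustrative; the output depends on the live DNS record):

```python
# Hedged sketch: look up a domain's published DMARC policy, the record
# that lets a brand like HMRC tell receiving mail servers to reject
# spoofed messages. Requires the third-party dnspython package
# (pip install dnspython).
import dns.resolver

def dmarc_policy(domain):
    """Fetch the _dmarc TXT record and return the published policy tag."""
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.startswith("v=DMARC1"):
            tags = dict(
                part.strip().split("=", 1)
                for part in record.split(";") if "=" in part
            )
            return tags.get("p")  # none / quarantine / reject
    return None

print(dmarc_policy("gov.uk"))  # a 'reject' policy tells receivers to drop spoofed mail
```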

The NCSC is also looking at the SS7 standard, which carries information about phone calls, and which is currently abused by attackers to make phone calls appear to be coming from other numbers, such as banks, giving overseas gangs a foothold in telephone scams. The NCSC’s innovations in “active cyber defence” can also, in some cases, be transferred to the private sector. Levy says BT is beginning to protect its customers in the same way that his team protects the public sector, and “they’re stopping 135 million malware connections a month”. But he also appears to have a realistic sense of how people see GCHQ, and to what extent they are prepared to trade security for privacy.

Trusting the spies
Last November, the NCSC published a rare glimpse of the decision-making that happens within the security services. The Equities Process is a flow chart that describes what the UK’s digital spies are supposed to do when they discover a vulnerability, a mistake in someone else’s code that allows them to access and even operate computers and equipment without permission. For people working in intelligence, the temptation to hold on to these vulnerabilities and quietly use them for surveillance must be extremely strong. But, says Levy, that “intelligence benefit… has to be balanced with the potential harm of leaving this thing unfixed”.
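
The published flow chart does not reduce neatly to a few lines, but the shape of the trade-off can be caricatured in code. In this toy Python sketch every field and threshold is invented; the one detail taken from the source is that disclosure is the default outcome:

```python
# Illustrative toy of the trade-off the Equities Process formalises,
# not the real flow chart. All fields and scales below are invented;
# the faithful detail is that disclosure is the default.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    intelligence_value: int   # 0-10, invented scale
    harm_if_unfixed: int      # 0-10, invented scale
    overriding_case: bool     # has a senior review signed off retention?

def equities_decision(v: Vulnerability) -> str:
    if v.overriding_case and v.intelligence_value > v.harm_if_unfixed:
        return "retain (subject to periodic re-review)"
    return "release: tell the vendor"  # the stated default

print(equities_decision(Vulnerability(9, 7, overriding_case=False)))
# -> release: tell the vendor (the default wins without an overriding case)
```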

If GCHQ discovers a vulnerability that lets it into an intelligence target’s phone, does the intelligence come first, or the security of everyone else who has that model of phone? Levy says it is the latter. “Our default is to release – to go and tell the vendor that there is a problem. There’s got to be an overriding intelligence case for that not to be the case.” The NCSC has disclosed, in the past year, vulnerabilities that would have created “an awesome intelligence capability… and yet we still released it”.

This changes, however, if the vulnerability is discovered by another country. Exploits shared with our spies – such as EternalBlue, which was used by the WannaCry ransomware and which privacy campaigners say was developed by the NSA and shared with GCHQ – must, in the interests of diplomacy, remain hidden. “We’d go back and make representations that we’d like it fixed,” Levy offers, and says it would theoretically be possible to disclose an ally’s exploit, “but it would be in extremis”.

Levy himself has repeatedly said that for government agencies to request “backdoor” access to encrypted services would be “monumentally dumb”, because a backdoor for one country’s spies is a backdoor for everyone else, if they can get it open. But he has suggested ways to create the conditions for “exceptional access” to people’s conversations – subject to permission obtained by a court order – in the same way that previous generations of spies and police officers have tapped phones. Edward Snowden tweeted that Levy’s idea was “absolute madness… anyone on your friend list would become your friend plus a spy”, but Levy says the paper in question was intended as a way to have less emotive, more evidence-based conversations about the trade-off between privacy and security.

“The other question,” he adds, is “privacy from whom. Privacy from government is one thing. One of the things that worries me a bit is that people don’t understand the privacy impacts of just using the service. You saw the announcement recently about Facebook and WhatsApp – now, Facebook do have access to the WhatsApp data. What does that mean for the privacy of WhatsApp users? Nobody’s explained that to them. You’re going from a position of incomplete knowledge in the user base, and then trying to build on top of that, and that’s dangerous. That’s the sort of conversation we want to have.”

But does he understand why people might trust a social media company more than their own government? Levy laughs. “I really have no question about why they don’t trust me, don’t worry!”

The serious question, he says, is one of “informed consent. You understand what you’re doing when you interact with the government; do you understand what you’re doing when you interact with some of these big services? When you try to explain to people what the change in their personal risk is, if we do or do not do this – here’s the change in the risk of your communications being intercepted; here’s the change in the risk of you being blown up… When you have that conversation, you need to be able to explain where they are today, and what difference this brings. Rather than [saying], you have no risk today, and this is massive risk – which is how people like to talk about it”.
