
Spotlight on Policy
2 March 2020

With facial recognition technology, how safe is your data?

The Met Police has defended its use of this technology amid criticism and controversy, but all online databases have vulnerabilities. 

By Samir Jeraj

Metropolitan Police Commissioner Cressida Dick recently defended the routine use of facial recognition technology by the police to monitor and identify people. During a speech at the Royal United Services Institute, Dick said she supports the “proportionate use of tech in policing”, and claimed the “loudest voices” in the debate over this technology are “sometimes, highly inaccurate and/or highly ill-informed.” [Read: Met chief Cressida Dick sparks row after calling facial recognition critics “ill-informed”]

The Met’s move to deploy facial recognition technology, announced earlier this year, has been controversial. Critics argue that the technology breaches civil liberties laws and the long-held principle of “policing by consent”. It has also been widely reported that the system the Metropolitan Police uses is unreliable and disproportionately misidentifies women and people of colour.

But beyond these controversies, there is another concern. Is the biometric data gathered by facial recognition technology being stored securely enough? Could your “face” be stolen like your credit card details?

Much biometric data of the kind gathered by facial recognition systems is already online. Whether unlocking a phone with a fingerprint or sending off a DNA testing kit, people regularly hand such data over to private companies by consent. This poses clear problems. The credit agency Equifax says facial recognition increases the risks of identity fraud, stalking and “predatory marketing”, where data is used to provoke extreme emotion in order to manipulate people. A password can be changed; your DNA, face and fingerprints are practically unchangeable.

Methods of storing biometric data are far from secure, and all online databases have vulnerabilities. In 2019 hackers accessed an estimated 7.9 billion consumer records. In August of that year, a breach in BioStar 2, an online biometric database run by the private company Suprema and used by, among others, the Metropolitan Police, exposed some 27.8 million records and 23 gigabytes of biometric data. A year earlier, details of 92 million accounts on MyHeritage, a genealogy and DNA testing service, were found online, eight months after the hack itself had taken place.

Raw DNA data can be downloaded from platforms such as AncestryDNA, meaning the service is only as secure as the device and connection used to access it. Norton Security, maker of popular anti-virus software, notes that “as biometrics become more common, your biometric information will likely be available in more places which may not employ the same level of secure storage.”

When contacted, the Met stressed that whenever it deploys facial recognition technology it creates a “bespoke watchlist”. This includes people who are “wanted for serious offences including knife and gun crime as well as those with outstanding warrants who are proving hard to locate.” The Met went on to say that when a person is scanned and does not match the watchlist, their data is automatically deleted. Data from matches is deleted after 31 days if there is “no legitimate policing purpose for retention”. When data is retained as evidence in a case or complaint, it is deleted at the conclusion of proceedings. However, the Met did not answer questions about what type of database is used, whether Excel, Access or something bespoke, or about how information is stored, whether on a cloud-based system or on hardware physically isolated from the internet.


As well as databases being hacked, the technology used to collect facial recognition data, such as the CCTV cameras themselves, could be breached. In 2018, the US company NUUO was warned that its commonly used CCTV software was vulnerable to attack: a cyber security firm was able to access the cameras, control them, view and alter content, and reach the wider network to which they were connected. A year earlier, the Surveillance Camera Commissioner, Tony Porter, warned in his annual report that “individual or state actors” posed a hacking risk to security cameras.

If all this is making you reach for the nearest ski mask and wish for a giant delete button, you are not the only one. A number of tactics have emerged to disrupt facial recognition technology, from 3D face masks and “invisibility cloaks” to specially designed camouflage, extreme make-up and radical styling. Nowhere is the state’s use of facial recognition technology more fiercely resisted than in Hong Kong. The city has relatively strict policies on taking and keeping biometric data, but protesters fear they could soon be subject to the highly invasive technological surveillance used over the border in mainland China. In response, they have attacked and sabotaged cameras with paint and laser pointers, and the police have in turn mounted their cameras on poles to keep them out of reach.

Whether we consent to it or not, our facial data is almost certainly online, and the risks of theft, fraud and state repression are real. By routinely deploying facial recognition, the UK police are normalising its use, and we can expect private organisations to follow. For the moment, there is neither an adequate policy and legal framework to keep this in check nor any assurance that the data will remain safe.
