
Spotlight on Policy
10 July 2017

The ID card you didn’t know you had

Law enforcers and businesses can now routinely identify you, at a distance and without your knowledge, using a data source you can’t change: your face. 

By Oscar Williams

Just over a year ago, the Guardian published a story headlined: “Face recognition app taking Russia by storm may bring end to public anonymity.” If it sounds like clickbait, it isn’t. The story featured FindFace, a Russian dating app that lets users identify people with 70 per cent reliability simply by capturing a photograph of a crowd.

At present, FindFace is limited to users of a Russian social media site – but facial recognition is hardly confined to Russia. Police, retailers and social media firms in the UK are increasingly using the technology to identify members of the public without, some critics say, adequate scrutiny.

The issue came under the spotlight in April when it was revealed that South Wales police force was set to deploy facial recognition to monitor football fans who were in Cardiff for the Champions League final. Faces were scanned and cross-referenced against 500,000 custody images stored by local police forces.
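Neither the force nor its vendor has published how the matching works, but systems of this kind typically reduce each face to a numerical “embedding” and search the custody gallery for the closest one. The sketch below is purely illustrative, not a description of the South Wales system: the embed() model is a stand-in, and the gallery size and 0.6 threshold are invented for the example.

```python
# Minimal, illustrative sketch of watchlist matching; NOT the South Wales
# system. embed() stands in for a real face-embedding model (typically a
# CNN mapping a face crop to a fixed-length vector); random vectors are
# used here so the script runs on its own.
import numpy as np

rng = np.random.default_rng(0)

def embed(face_image) -> np.ndarray:
    # Assumption: the model returns a unit-length 128-dimensional vector.
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

# Gallery of embeddings precomputed from custody images (500,000 in the
# real deployment; 1,000 here to keep the demo fast).
gallery = np.stack([embed(None) for _ in range(1000)])

def match(probe_face, threshold: float = 0.6):
    """Compare one face from the live feed against every gallery entry."""
    probe = embed(probe_face)
    scores = gallery @ probe  # cosine similarity, since vectors are unit-length
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return best, float(scores[best])  # candidate custody record and score
    return None  # nobody on the watchlist resembles this face closely enough

print(match("frame_0173_face_02.png"))
```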

It’s not the first time facial recognition has assisted with policing at major events. Over the last two years, the technology has also been deployed at Download Festival and Notting Hill Carnival. British forces are considered to be at the forefront of the field internationally.

Luciano Floridi, professor of philosophy and ethics of information at the University of Oxford, says the application is still in its infancy, but advancing rapidly: “The hype is on the capacity of this technology to identify people through facial recognition without knowing what you’re looking for, automation throughout. The hype is still slightly unjustified, to be honest. But the hope [is] that one day, we may just be a step away [from automating the process]. It’s difficult to tell, because technology is developing so quickly that, and I’m serious about this, it could really be around the corner.”

One of the most pressing ethical issues, Floridi suggests, is not whether the police should be using the technology, but what happens to the thousands of newly collected images after the event: “If the police were to say: ‘The data would be destroyed after 24 hours. We just check them. We’re not interested in finding you. We’re interested in finding these five people, who we are worried might be carrying a bomb into the stadium.’ Okay, well I can live with that. It’s the boundless, endless repurposing of data that worries everyone. In short, we are entering into the unknown. It’s like saying – trust me, I’m going to use the data forever, but I’m not going to misuse them. That’s a bit of a blank cheque.”

It’s a concern shared by Millie Graham Wood, legal officer at Privacy International, who says the technology goes much further than CCTV: “We are talking about sophisticated tracking technology that might be cross-referencing our faces with custody records and other datasets, to build up a detailed profile of you. How can we be confident that this is not dragnet surveillance, where every single face that is recorded ends up on a permanent searchable database?” She adds that the technology is developing rapidly and the police are demonstrating a real appetite for using it: “We need an urgent public debate about the use of such intrusive surveillance.”


South Wales police force did not respond to a request for interview, but shared public videos in which officers discuss how the technology works. Assistant chief constable Richard Lewis said facial recognition had already helped the force to catch a man who was wanted on recall to prison.

A report published by the Royal Society and British Academy last month found that the UK’s current framework for data management is failing to keep pace with technological progress. The institutes called for the government to establish a new, independent body to oversee the management of data by the public and private sector. Floridi, who was part of the working group that wrote the report, said the organisation could assist law enforcement agencies. “It would be for that body to advise the police for an ethical treatment of the deployment and use of facial recognition. It would go a little bit beyond the strictly legal context that would ensure compliance.”

The independent body could also help retailers to deploy the technology with greater transparency. In 2015, a survey of 150 retail executives by the IT services firm Computer Sciences Corporation found that a quarter of shops used the technology, a figure that rose to 59 per cent among fashion retailers. But despite the proliferation of facial recognition, shoppers remain largely unaware of when they are within its gaze.

Duncan Mann, chief operating officer at retail analysis firm Hoxton Analytics, told the BBC in March that the technology is popular among shops that want to compete with online retailers: “Online retailers gather all kinds of information about shoppers, and physical stores also want to understand how people behave in a shop.”

So how might it work? Maja Pantic, a professor of affective and behavioural computing at Imperial College London and a world leader in facial recognition research, says that shops may be able to link a face to a name and email address when a customer makes a purchase in store and signs up for a loyalty scheme. If the shopper then returns and CCTV identifies them, the retailer could send a follow-up email should they leave without making a purchase.
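Pantic describes the pipeline only in outline. A sketch of how that data flow might look follows; the record layout, the 0.7 threshold and every helper name are hypothetical, chosen only to show how a loyalty sign-up could tie a face embedding to an email address.

```python
# Hypothetical sketch of the loyalty-scheme scenario Pantic describes; the
# record layout, threshold and helper names are illustrative, not any
# retailer's real system.
from dataclasses import dataclass

import numpy as np

@dataclass
class CustomerRecord:
    name: str
    email: str
    embedding: np.ndarray  # unit-length face embedding captured at sign-up

customers: list[CustomerRecord] = []

def enroll(name: str, email: str, face_embedding: np.ndarray) -> None:
    # At the till: a purchase plus a loyalty sign-up links a face to an identity.
    customers.append(CustomerRecord(name, email, face_embedding))

def recognise(cctv_embedding: np.ndarray, threshold: float = 0.7):
    # On a later visit: compare a face seen on CCTV against enrolled customers.
    for record in customers:
        if float(cctv_embedding @ record.embedding) >= threshold:
            return record
    return None

def on_exit_without_purchase(cctv_embedding: np.ndarray) -> None:
    record = recognise(cctv_embedding)
    if record is not None:
        print(f"Queue follow-up email to {record.email}")

# Demo with a made-up embedding.
rng = np.random.default_rng(0)
face = rng.standard_normal(128)
face /= np.linalg.norm(face)
enroll("A. Shopper", "a.shopper@example.com", face)
on_exit_without_purchase(face)  # prints: Queue follow-up email to a.shopper@example.com
```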

Under the EU’s General Data Protection Regulation, which comes into force next year, facial images processed to uniquely identify a person are explicitly classed as biometric data, alongside fingerprints and iris scans, and must be treated accordingly. A spokesperson for the Information Commissioner’s Office said: “Any organisation must make sure that any images are only used for a specific purpose and that people are aware they may be recorded and that appropriate measures are in place to keep the recorded images secure.”

At Imperial, Pantic is working on FACER2VM, the largest UK project in the field of biometrics. Her work could revolutionise how facial recognition is deployed: “My group is working on a very specific topic and that is recognising people through facial expression. It means that even if you have a very, very sophisticated mask, the kind Hollywood produces for actors to look like another person, we would be able to recognise people by the dynamics of their faces for as long as we observe them. This is for the highly secure systems, where you really want to exclude the possibility of having an intruder, such as for pilots or the FBI. […] We are working very closely with the Home Office.” Nevertheless, she says that the technology should only be used for the initial identification, rather than in a court of law, because accuracy can’t be guaranteed.
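Pantic does not give implementation details, and FACER2VM’s methods are considerably more sophisticated, but the core idea, that identity lives in how a face moves rather than in how it looks in a still image, can be illustrated with a toy motion-signature comparison. Everything below, from the landmark counts to the threshold, is invented for the example.

```python
# Toy motion-signature comparison illustrating dynamics-based recognition;
# this is NOT the FACER2VM method. Landmark counts, thresholds and clip
# lengths are invented for the example.
import numpy as np

def motion_signature(landmark_frames: np.ndarray) -> np.ndarray:
    """landmark_frames has shape (T, K, 2): K facial landmarks tracked over
    T video frames. Returns the mean frame-to-frame displacement (K, 2),
    a crude summary of how this face moves."""
    return np.diff(landmark_frames, axis=0).mean(axis=0)

def same_person(seq_a: np.ndarray, seq_b: np.ndarray, threshold: float = 0.05) -> bool:
    # Attribute two clips to the same person if their motion signatures are
    # close; the intuition is that a mask changes appearance, not dynamics.
    distance = np.linalg.norm(motion_signature(seq_a) - motion_signature(seq_b))
    return bool(distance <= threshold)

rng = np.random.default_rng(1)
clip = rng.standard_normal((30, 68, 2)) * 0.01  # 30 frames, 68 landmarks
noisy_clip = clip + rng.standard_normal((30, 68, 2)) * 0.001
print(same_person(clip, noisy_clip))  # True: same dynamics, slight noise
```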

Despite recent breakthroughs in her own team, Pantic is concerned about facial recognition research being conducted behind closed doors in Silicon Valley. Machine learning, the technique that underpins facial recognition, improves as it analyses new data. As such, Google and Facebook have a major advantage over academics in the form of their vast image banks. The disparity is widened, Pantic says, by the brain drain from academia to industry and the ease with which big firms can buy up startups: “This is very scary. It could mean complete innovation concentration in a very few companies.”

A Facebook spokesperson pointed to the firm’s Facebook AI Research (FAIR) division, which they said “actively engages with the research community through publications, open source software, participation in technical conferences and workshops, and collaborations with colleagues in academia”. The spokesperson added that the firm complies with EU data protection law. Google did not respond to a request for comment in time for publication.

The opportunities and risks associated with artificial intelligence may finally be gaining traction across Westminster and Whitehall. At the end of June, the House of Lords appointed an ad hoc committee to consider issues related to the economic, ethical and social implications of the technology. Floridi says he’s optimistic that the proposal for an independent body to monitor data will be welcomed by government: “I would be very surprised if it was not well received and I expect things to move in that direction in a year or two.”

But he’s more cautious about the government’s approach to surveillance, in which facial recognition is likely to play an increasing role. Floridi concludes that in the wake of recent terrorist attacks, the issue has been “politicised and for the wrong reasons”: “This is coming top down. It’s an instinctive reaction; more risks, more monitoring, which is just not what we need. We don’t need a bigger surveillance state. We need more social acceptability and preferability for the solutions we need to have. You can’t just impose it because you say so. That’s the major opportunity [the government] is missing.”
