Face-Off: Live Facial Recognition Technology & the GDPR

The Information Commissioner’s Office (ICO) has announced that it has launched an investigation into the use of live facial recognition technology in the King’s Cross area of London. This follows the ICO’s ongoing investigation into the use of the same technology by South Wales Police.

The recording of citizens in public spaces in the UK is nothing new. Nor is the controversy surrounding the practice, from Orwell’s Big Brother to the outcry that Google Glass caused shortly after its launch. Striking an adequate balance between individual privacy and public safety has always been difficult.

The most recent controversy in this area is the creeping use of live facial recognition technology by both public bodies and private actors. Live facial recognition differs from traditional CCTV because it does not simply record people: it extracts their biometric data and compares it, in real time, against that held in databases.
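To make the distinction concrete, here is a deliberately simplified sketch of the compare-against-a-database step that takes the technology beyond mere recording. It is illustrative only: the names (extract_embedding, WATCHLIST, check_face) are hypothetical, and a toy random projection stands in for the trained face-recognition model a real deployment would use.

```python
# Illustrative sketch only: a toy stand-in for live facial recognition.
# Real systems use trained neural networks; the point here is the pipeline:
# face -> biometric template -> comparison against a watchlist database.
import numpy as np

rng = np.random.default_rng(0)
_PROJECTION = rng.standard_normal((128, 64 * 64))  # toy stand-in for a model

def extract_embedding(face_image: np.ndarray) -> np.ndarray:
    """Map a 64x64 face crop to a unit-length biometric template."""
    pixels = face_image.reshape(-1) - face_image.mean()  # crude normalisation
    vec = _PROJECTION @ pixels
    return vec / np.linalg.norm(vec)

# The "database" of the article: identity -> precomputed template.
WATCHLIST: dict[str, np.ndarray] = {}

def check_face(face_image: np.ndarray, threshold: float = 0.9) -> str | None:
    """Return a watchlist identity if the face matches, else None."""
    probe = extract_embedding(face_image)  # biometric data is created here
    for name, template in WATCHLIST.items():
        if float(probe @ template) >= threshold:  # cosine similarity
            return name
    return None  # no match: the probe template simply goes out of scope

if __name__ == "__main__":
    suspect = rng.random((64, 64))
    WATCHLIST["wanted person"] = extract_embedding(suspect)
    print(check_face(suspect))               # "wanted person"
    print(check_face(rng.random((64, 64))))  # None: nothing is retained
```

Note that even the no-match branch creates and compares a biometric template, which is why, on the ICO’s analysis, data protection law is engaged for every passer-by and not only for those on a watchlist.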

Data Protection Law

In the opinion of the Information Commissioner’s Office, the use of “software that can recognise a face amongst a crowd then scan large databases of people to check for a match…is processing personal data.” Accordingly, data protection law applies.

As with any other personal data, processing via facial recognition technology must have a lawful basis and pursue a legitimate aim. Moreover, because the technology is used to uniquely identify individuals, the facial images it captures are biometric data and therefore “special category” data under the GDPR, which may be processed only where an additional condition is met.

The Police use this technology in an attempt to catch criminals. However, it captures the biometric data of thousands of people as they go about their daily lives, raising concerns both about privacy and about how that data is subsequently processed.

R (Bridges) v Chief Constable of South Wales Police

The use of facial recognition technology by South Wales Police sparked calls for the force to end the practice. In particular, the United Nations’ Special Rapporteur on the Right to Privacy, Joseph Cannataci, described the force’s use of the technology as “chilling”. In response, a Cardiff resident took legal action, supported by the privacy and civil liberties organisation Liberty.

In brief, Mr Bridges was concerned that the Police had captured his image whilst he was out shopping in Cardiff city centre. On the one hand, the Police argue that their use of the technology is proportionate and lawful. On the other, Mr Bridges and his supporters argue that it is indiscriminate and raises serious privacy concerns.

Conclusion

The case was heard over several days in May 2019, with the Information Commissioner’s Office intervening. The judgment is expected later this year, so keep an eye on our website for an update on the outcome.

The controversy surrounding this technology is not unique to the UK. The city of San Francisco has recently voted to ban the use of facial recognition technology by local agencies such as police or transport authorities. Other US cities are considering following suit.

Even if the technology is found to be lawful in the UK, issues remain. For example, one study has shown that the technology is error-prone and biased, particularly when identifying women and people with darker skin.

The use of new technology to keep the public safe is usually to be applauded. However, anyone considering the use of facial recognition technology should seek advice lest they fall foul of data protection laws.


One comment

  1. “The algorithms of the law must keep pace with new and emerging technologies,” so said the Divisional Court in R (Bridges) v The Chief Constable of South Wales Police & Others [2019] EWHC 2341 (Admin). In handing down its judgment, the Divisional Court refused Mr Bridges’ application for judicial review on all grounds. The full judgment is available here – https://www.judiciary.uk/wp-content/uploads/2019/09/bridges-swp-judgment-Final03-09-19-1.pdf

    The court concluded that, although the automated facial recognition technology engaged Article 8 of the European Convention on Human Rights, its use was subject to specific legal controls and was legally justified. It was not disputed that the technology was deployed for a legitimate aim and for specific and limited purposes connected to that aim. The court considered whether a less intrusive method could have been used and whether a fair balance had been struck. It decided that the use struck a fair balance and did not disproportionately interfere with Article 8 rights. The court appeared to place weight on the fact that, if there was no match, the individual’s data was deleted immediately after processing.

    It is notable that the court determined that the use of the technology does entail sensitive processing of personal data within the meaning of section 25 of the Data Protection Act 2018. However, the court decided that the processing and collection of the data was lawful and met the conditions in the 2018 Act which apply to law enforcement agencies.

    The court was also satisfied that there was no breach of the public sector equality duty and that the software did not produce results that were indirectly discriminatory.

    The Information Commissioner’s Office had intervened in the case, arguing that the legal framework for police use of automated facial recognition technology was insufficient. While welcoming the finding that use of the technology amounts to processing personal data, a spokesperson commented that the ICO will be reviewing the judgment carefully. Mr Bridges has also reportedly said that he will continue his fight against the use of the technology.
