The US-based facial recognition company Clearview AI once told investors that “almost everyone in the world” could soon be identifiable via its biometric database. The company’s methods have been declared illegal in multiple jurisdictions—most recently in the UK. But do European regulators actually have any power to stop Clearview AI?


On 26 May, the UK’s Information Commissioner’s Office (ICO) joined an increasingly long list of regulators—including in Canada, Australia, Italy and France—in ordering the company to stop collecting data about their residents.

Clearview AI “scrapes” images of people’s faces from publicly available sources and derives biometric information from each picture using its proprietary facial recognition algorithm. The company says it has amassed over 20 billion facial images in this way. 

Clearview AI enables its clients—mostly law enforcement agencies—to upload a picture of someone’s face, which can then be matched with an entry in its database. The company then provides a link to the source of the image, which should identify the person in the client’s uploaded photo.
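Clearview AI’s algorithm is proprietary, but a search of this kind generally works as a nearest-neighbour lookup: each scraped face is reduced to a numeric “embedding” vector, and a query photo is matched against the stored vector with the highest similarity. The sketch below is purely illustrative—the vectors, URLs and threshold are invented, and real systems use embeddings with hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query, database, threshold=0.9):
    """Return the source URL of the most similar stored embedding,
    or None if nothing clears the similarity threshold."""
    best_url, best_score = None, threshold
    for url, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_url, best_score = url, score
    return best_url

# Toy database mapping a source URL to a (hypothetical) face embedding.
db = {
    "https://example.com/profile/alice": [0.9, 0.1, 0.3, 0.4],
    "https://example.com/profile/bob": [0.1, 0.8, 0.5, 0.2],
}

print(best_match([0.88, 0.12, 0.31, 0.39], db))
# → https://example.com/profile/alice
```

Returning the source URL rather than a name is the key design point: the system itself only finds a similar stored face, and identification comes from whatever the linked page says about the person.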

“Agencies that use our platform can expect to receive high-quality leads with fewer resources expended,” Clearview AI states on its website.

“These leads, when supported by other evidence, can help accurately and rapidly identify suspects, persons of interest, and victims to help solve and prevent crimes.”

But Clearview AI’s business model is highly controversial, and the company is subject to multiple legal challenges from people who are trying to stop it.

Lucie Audibert, a lawyer and legal officer with the campaign group Privacy International, was involved in a complaint against Clearview AI that was considered as part of the UK ICO’s investigation into the company.

“They keep arguing that their tech is useful to law enforcement—but that’s not a reason to close your eyes on the harms and the major societal shift that this tech can cause,” Audibert told me.

“Courts have, time and again, found mass surveillance systems unlawful and in violation of fundamental rights, even though they were very useful to law enforcement and intelligence agencies.”

Investigations Under the GDPR

Clearview AI has faced complaints in at least six European countries, where its methods are allegedly illegal under the General Data Protection Regulation (GDPR).

European regulators have invariably agreed that Clearview AI’s methods are incompatible with EU law and have imposed a range of sanctions—some more severe than others.

In Hamburg, for example, in January 2021, the state’s data protection authority found Clearview AI had unlawfully processed an individual’s data by deriving biometric information from his facial image. But the regulator’s decision was only to order Clearview AI to delete the complainant’s biometric data—not the image itself.

Other regulators have gone further. In March, the Italian data protection authority fined Clearview AI €20 million, banned the company from processing any images or biometric data of people in Italy, and required it to delete the information it already possessed.

The recent decision from the UK’s ICO involved a similar set of orders to those issued by the Italian regulator, along with a £7.5 million (€10 million) fine against Clearview AI.

However, Clearview AI maintains that, because it does not offer its services in Europe, neither the ICO nor any other European regulator has the power to impose such sanctions or penalties.

Doing Business in Europe

Clearview AI has catered to European clients in the past. 

The Swedish regulator fined the country’s police authority €10,000 for using Clearview AI unlawfully. Police services in the UK have also used Clearview AI on a “free trial” basis.

But the company claims that it no longer accepts European customers and thus does not fall under the jurisdiction of European regulators. 

Legally speaking, Clearview AI is arguably missing the point. 

“They dispute the applicability of GDPR to their activities, arguing that they’re based in the US,” said Privacy International’s Audibert. 

“But GDPR was drafted specifically to protect all people in Europe against abuse of their data, including by companies that do not do business in Europe.”

The GDPR’s broad territorial reach extends to companies based overseas if they “monitor the behaviour” of people in the EU or the UK. Several European regulators have asserted that this provision applies to Clearview AI’s activities (though this has yet to be tested in court).

And legal text aside, it remains unclear how European regulators can enforce their orders against Clearview AI—or recover the millions of dollars they assert the company owes them.

Clearview AI did not respond to a request for comment.

Technical Issues

On a technical level, it’s unclear whether Clearview AI could actually comply with regulators’ other demands—even if it wanted to—without shutting down altogether.

“Clearview’s main problem is that they simply cannot comply with regulators’ orders to delete and stop collecting data,” Audibert said.

“Their data scraping is, by definition, indiscriminate—so they cannot possibly figure out which photo is one of a UK or French resident, and which isn’t.

“No technical adjustment can possibly allow it to filter out faces of people in certain countries. Even if they tried to filter out based on the IP address location of the website or the uploader of the picture, they would still end up with faces of people who reside in the UK.”

Beyond Europe

The walls are closing in on Clearview AI outside of Europe, too, with authorities in Canada and Australia also having found the company’s operations incompatible with their domestic privacy laws.

And Clearview AI also has legal issues at home.

In May 2020, the American Civil Liberties Union (ACLU) of Illinois launched a court case against Clearview AI under the state’s Biometric Information Privacy Act (BIPA).

In a settlement with the ACLU in May 2022, Clearview AI agreed to several restrictions on its activities—including that it would not offer access to its database to any private entities across the whole of the US.

But two weeks after reaching the settlement, Clearview AI announced a new product: Clearview Consent.

According to the company’s website, Clearview Consent is intended for commercial purposes: “airport and travel identity checks, secure building access, in-person payments, online identity verification, multi-factor authentication, fraud detection, user onboarding and more.”

However, Clearview AI says access to its 20 billion facial images won’t be included in this product.

“They must have fought hard in the settlement to be able to offer derivative products without access to the database,” Audibert said.

Clearview AI’s numerous legal battles may have slowed and limited its rate of expansion. Compliance with some regulatory demands would, arguably, be fatal to the company’s business model. 

But facial recognition is a booming industry, and Clearview AI has ambitious plans. 

Below is a list of ongoing or concluded legal and regulatory actions involving Clearview AI:

Date | Region | Headline
10 March 2020 | United States: Vermont | Attorney General files lawsuit
28 May 2020 | United States: Illinois | ACLU files lawsuit
9 July 2020 | United Kingdom; Australia | ICO and OAIC open joint investigation
18 August 2020 | Germany: Hamburg | Data protection authority (DPA) demands Clearview AI answer questions about its operations or face €10,000 fine
21 January 2021 | Germany: Hamburg | DPA orders Clearview AI to delete biometric data about an individual complainant
3 February 2021 | Canada | OPC declares Clearview AI unlawful
12 February 2021 | Sweden | DPA sanctions police for using Clearview AI to identify individuals
27 May 2021 | United Kingdom | Privacy International files complaint with ICO
27 May 2021 | France | Privacy International files complaint with CNIL
27 May 2021 | Italy | Hermes Center files complaint with Garante
27 May 2021 | Greece | Homo Digitalis files complaint with Hellenic DPA
27 May 2021 | Austria | Noyb files complaint with DSB
14 October 2021 | Australia | OAIC concludes investigation, orders Clearview AI to stop processing data of individuals in Australia
14 December 2021 | Canada: British Columbia, Alberta and Quebec | Regulators order Clearview AI to stop processing data
10 March 2022 | Italy | Garante fines Clearview AI €20m and orders the company to stop processing data about individuals in Italy
11 May 2022 | United States: Illinois | ACLU and Clearview AI settlement announced, banning Clearview AI from offering services to US private entities (among other settlement terms)
23 May 2022 | United Kingdom | ICO announces £7.5m fine and orders Clearview AI to stop processing data of individuals in the UK

