After Canada, now Australia has found that controversial facial recognition company, Clearview AI, broke national privacy laws when it covertly collected citizens’ facial biometrics and incorporated them into its AI-powered identity matching service — which it sells to law enforcement agencies and others.
In a statement today, Australia’s information commissioner and privacy commissioner, Angelene Falk, said Clearview AI’s facial recognition tool breached the country’s Privacy Act 1988 by:
collecting Australians’ sensitive information without consent
collecting personal information by unfair means
not taking reasonable steps to notify individuals of the collection of personal information
not taking reasonable steps to ensure that personal information it disclosed was accurate, having regard to the purpose of disclosure
not taking reasonable steps to implement practices, procedures and systems to ensure compliance with the Australian Privacy Principles.
In what looks like a major win for privacy down under, the regulator has ordered Clearview to stop collecting facial biometrics and biometric templates from Australians; and to destroy all existing images and templates that it holds.
The Office of the Australian Information Commissioner (OAIC) undertook a joint investigation into Clearview with the UK data protection agency, the Information Commissioner’s Office (ICO).
However, the UK regulator has yet to announce any conclusions of its own.
In a separate statement today — which possibly reads slightly flustered — the ICO said it is “considering its next steps and any formal regulatory action that may be appropriate under the UK data protection laws”.
A spokeswoman for the ICO declined to elaborate further — such as on how long it will be thinking about maybe doing something.
UK citizens should be hoping the regulator doesn’t take as long “considering” Clearview as it has chewing over (but failing to act against) adtech’s lawfulness problem.
Meanwhile, other European regulators have already hit users of Clearview with sanctions…
Back on the other side of the world, the OAIC isn’t wasting any time in acting against Clearview, nor is it mincing its words.
In public comments on the OAIC’s decision (pdf) finding Clearview breached Australian law, Falk said: “The covert collection of this kind of sensitive information is unreasonably intrusive and unfair. It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”
“By its nature, this biometric identity information cannot be reissued or cancelled and may also be replicated and used for identity theft. Individuals featured in the database may also be at risk of misidentification,” she also said, adding: “These practices fall well short of Australians’ expectations for the protection of their personal information.”
The OAIC also found the privacy impacts of Clearview AI’s biometric system were “not necessary, legitimate and proportionate, having regard to any public interest benefits”.
“When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes,” said Falk.
“The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”
Australia’s regulator said that between October 2019 and March 2020 Clearview AI provided trials of its tool to some local police forces — which conducted searches using facial images of individuals located in Australia.
The OAIC added that it is currently finalising an investigation into the Australian Federal Police’s trial use of the tech to decide whether the force complied with requirements under the Australian Government Agencies Privacy Code to assess and mitigate privacy risks. So it remains to be seen if local law enforcement will get a sanction.
Earlier this year, Sweden’s data protection watchdog warned the country’s cops over what it said was unlawful use of Clearview’s tool — issuing a €250,000 fine in that instance.
Returning to the OAIC, it said Clearview defended itself by arguing that the information it handled was not personal data — and that, as a company based in the US, it did not fall under the jurisdiction of Australia’s Privacy Act. Clearview also claimed to the regulator that it had stopped offering services to Australian law enforcement shortly after the OAIC’s investigation began.
However, Falk dismissed Clearview’s arguments, saying she was satisfied it must comply with Australian law and that the information it handled was personal information covered by the Privacy Act.
She also said the case reinforces the need for Australia to strengthen protections through a current review of the Privacy Act, including restricting or prohibiting practices such as data scraping personal information from online platforms. And she added that the case raises additional questions about whether online platforms are doing enough to prevent and detect scraping of personal data.
“Clearview AI’s activities in Australia involve the automated and repetitious collection of sensitive biometric information from Australians on a large scale, for profit. These transactions are fundamental to their commercial enterprise,” said Falk. “The company’s patent application also demonstrates the capability of the technology to be used for other purposes such as dating, retail, dispensing social benefits, and granting or denying access to a facility, venue or device.”
Clearview was contacted for comment on the OAIC’s decision.
The company confirmed that it will be appealing — sending this statement (below), attributed to Mark Love, BAL Lawyers, representing Clearview AI:
“Clearview AI has gone to considerable lengths to co-operate with the Office of the Australian Information Commissioner. In doing so, Clearview AI has volunteered considerable information, yet it is apparent to us and to Clearview AI that the Commissioner has not correctly understood how Clearview AI conducts its business. Clearview AI operates legitimately according to the laws of its places of business.
“Clearview AI intends to seek review of the Commissioner’s decision by the (Australian) Administrative Appeals Tribunal. Not only has the Commissioner’s decision missed the mark on the manner of Clearview AI’s manner of operation, the Commissioner lacks jurisdiction.
“To be clear, Clearview AI has not violated any law nor has it interfered with the privacy of Australians. Clearview AI does not do business in Australia, does not have any Australian users.”
The controversial facial recognition company has faced litigation on home soil in the US — under Illinois’ Biometric Information Privacy Act.
Meanwhile, earlier this year, Minneapolis voted to ban the use of facial recognition software by its police department — effectively outlawing local law enforcement’s use of tools like Clearview.
Clearview AI scraped the public web and social media sites to amass a database of over 3 billion images, which underpins the global identity-matching service it sells to law enforcement. The fallout from that practice may have contributed to an announcement made yesterday by Facebook’s parent company Meta, which said it would be deleting its own mountain of facial biometric data.
The tech giant cited “growing concerns about the use of the technology as a whole”.
Update: In addition to the above statement, Clearview’s founder, Hoan Ton-That, has also issued a personal response (pasted below) to the OAIC’s decision — in which he expresses his disappointment and argues that the privacy commissioner’s decision misinterprets the value of his “crime fighting” technology to society.