Clearview AI Ordered To Delete All Facial Recognition Data of Australians, Set To Appeal

James Dargan

Facial recognition firm Clearview AI has been ordered to cease collecting photos of Australians from the internet, after it was revealed police in some states had trialled the technology.

On Wednesday, the information and privacy commissioner determined Clearview AI had breached the privacy of Australians by collecting images of them online, and ordered the company to delete all images of people in Australia within 90 days and not collect any more.

“When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes,” commissioner Angelene Falk said.

“The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”

But Clearview AI has stood its ground, saying it operates legitimately in Australia and intends to appeal the decision.

Clearview AI is a facial recognition service that claims to have built up enormous databases — containing more than 3bn labelled faces — through the controversial practice of scraping photos from Facebook and other social media sites.

Last year, it was revealed the US-based company had offered trial services to police in Australia — specifically Queensland police, Victoria police and the Australian federal police. Reports suggested more than 2,000 law enforcement agencies across the globe had been using Clearview’s services in early 2020.

In response to the scandal, the Office of the Australian Information Commissioner (OAIC) launched an investigation in July 2020 in cooperation with the UK's Information Commissioner's Office (ICO).

Clearview AI stands ground

Mark Love, a lawyer at BAL Lawyers acting for Clearview AI, said the company had gone to "considerable lengths" to cooperate, and claimed "the commissioner has not correctly understood how Clearview AI conducts its business".

He said Clearview AI intends to appeal to the Administrative Appeals Tribunal.

“Clearview AI operates legitimately according to the laws of its places of business,” he said.

“Not only has the commissioner’s decision missed the mark on the manner of Clearview AI’s operation, the commissioner lacks jurisdiction.”

The Australian founder and CEO of Clearview AI, Hoan Ton-That, said he respects the effort the commissioner put into evaluating the technology he built but said he is “disheartened by the misinterpretation of its value to society”.

In its response to the OAIC, Clearview AI argued that the images it collects are publicly available on the internet, requiring no password or other security clearance to access. The company also said the images it holds were published in the US, not Australia, so Australian privacy law should not apply.

The OAIC report noted Clearview AI has not offered services to Australian organisations since March 2020, but the commissioner said that didn’t go far enough and all images of people in Australia must be removed.

Police forces in Australia have downplayed their use of the service, and the OAIC’s investigation found Clearview did not have any paid customers in Australia. However, the OAIC report found officers in Australia had successfully searched for suspects, victims and themselves in the Clearview databases.

The OAIC is still finalising a report on the AFP’s trial of the technology and whether it complied with the federal privacy code for government agencies.

The ABC reported last year through documents obtained under freedom of information law that at least one officer tested the software using photos of herself and another member of staff, while the Australian Centre to Counter Child Exploitation conducted searches for five “persons of interest”.

The UK’s ICO is separately considering its next steps and any formal regulatory action under the UK’s data protection laws.

It is the second ruling the OAIC has made against facial recognition technology this year. Last month the commissioner made a similar finding against convenience store giant 7-Eleven, which had collected facial images of customers filling out surveys across 700 stores in a bid to weed out fake responses.

Facebook this week also announced a move away from the use of facial recognition technology on its platform, citing concerns over privacy and transparency around the use of the technology.
