Facial recognition company faces £17m fine over privacy concerns
A US-based company that claims to have a database of more than 10 billion facial images is facing a potential £17 million fine over its handling of personal data in the UK.
The Information Commissioner’s Office (ICO) said it had significant concerns about Clearview AI, whose facial recognition software is used by law enforcement agencies.
It told the company to stop processing the personal data of people in the UK and to delete any it holds.
Clearview said the regulator’s claims were “factually and legally incorrect”.
The company, which has been invited to comment, said it is considering an appeal and “further action”.
It has already been found to have violated Australian privacy law, but is seeking a review of that ruling.
“Google search for faces”
The Clearview AI system lets a user, such as a police officer trying to identify a suspect, upload a photo of a face and find matches in a database of billions of images collected from the internet and social media.
The system then provides links to where the corresponding images appeared online.
The company promoted its service to police as being like a “Google search for faces”.
But in a statement, the UK Information Commissioner said Clearview’s database could include “a substantial number of UK people” whose data may have been collected without their knowledge.
The company’s services are believed to have been trialled by a number of UK law enforcement agencies, but those trials have since been discontinued and Clearview AI has no UK customers.
The ICO said its “preliminary view” was that the company appeared to have failed to comply with UK data protection laws by:
- Failing to process the information of people in the UK fairly
- Failing to have a process in place to stop the data being retained indefinitely
- Having no lawful reason to collect the information
- Failing to inform people in the UK about what is happening to their data
UK Information Commissioner Elizabeth Denham said: “I have significant concerns that personal data has been processed in a way that no one in the UK would have expected.
“UK data protection legislation does not prevent the effective use of technology to fight crime. But to enjoy the public’s trust in their products, technology providers must ensure that people’s legal protections are respected and complied with.”
The decision is provisional, and the ICO said any representations made by Clearview AI would be carefully considered before a final ruling is issued in the middle of next year.
“Best interests of the UK”
Hoan Ton-That, chief executive of Clearview AI, said: “I am deeply disappointed that the UK Information Commissioner has misinterpreted my technology and my intentions.
“My company and I have acted in the best interests of the UK and its people by assisting law enforcement in solving heinous crimes against children, the elderly and other victims of unscrupulous acts … We only collect public data from the open internet and comply with all standards of privacy and law.”
There are signs that big tech companies are becoming increasingly wary of facial recognition.
In early November, Facebook announced it would no longer use facial recognition software to identify faces in photos and videos.
But privacy campaigners warn that online tools and search engines using facial recognition technology continue to operate.
Related topics
- Facial recognition
- Information Commissioner’s Office
- Data protection
- Privacy
- Australia
- The database company must remove snaps taken in Australia (published 3 November)
- The legality of online face collection called into question (published 27 May)
- Twitter asks AI company to stop “collecting faces” (published 23 January 2020)
- Company’s face-collection database hacked (published 27 February 2020)