Clearview AI is back in the privacy headlines, this time for a roughly $10 million fine in the UK for building a database of 20 billion facial images, identifying the people in them, and selling access to that database. UK authorities said the practice violates UK privacy law because the Britons likely to appear in the database never consented to this use of their photographs, many of which were scraped from social media and other websites. Clearview AI sells its facial recognition services to some law enforcement agencies, but it has repeatedly run afoul of privacy laws over consent and related concerns: the company has previously been fined or otherwise penalized for privacy violations in France, Australia, Italy, and the US.
Why It Matters
Clearview's algorithm is a "big data" tool that clearly gives regulators and consumer rights groups pause. The fact that Clearview can take unidentified photos, assemble a database of billions of them, and use its tools to put a name to each face is unnerving to many. The bigger takeaway from the UK fine, however, is this: the EU and UK take seriously the idea that personal information -- even when willingly shared by the person it describes -- is not up for grabs by companies that were never given permission to use it. That is a lesson other companies can heed without making headlines: if you do not give notice and obtain any required consents before using someone's personal data, you may face a privacy problem.