Clearview's Facial Recognition Software Exposed by Server Error

Clearview AI is increasingly viewed as a significant privacy threat, even among other privacy-challenged tech giants like Google. A recent report from a tech media outlet highlights a concerning incident: a server misconfiguration left Clearview AI's source code exposed to anyone with internet access. A security researcher at Dubai-based SpiderSilk identified the issue, uncovering a repository that contained source code for the company's Windows, Mac, iOS, and Android apps, along with pre-release versions intended for testing.
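
The report does not spell out every detail of the misconfiguration, but the failure mode is a familiar one: a server that answers requests without demanding credentials. The sketch below shows how a researcher might confirm such an exposure; the endpoint is a placeholder for illustration, not Clearview's actual infrastructure.

```python
# Minimal sketch, hypothetical endpoint: probe whether a repository host
# answers a request that carries no credentials. A properly configured
# server should respond 401/403 here rather than serving content.
import requests

REPO_URL = "https://repos.example-internal.com/api/v1/repos"  # placeholder URL

try:
    resp = requests.get(REPO_URL, timeout=10)  # note: no auth header attached
except requests.RequestException as exc:
    print(f"Request failed: {exc}")
else:
    if resp.status_code == 200:
        print("Server returned repository data without authentication")
    else:
        print(f"Unauthenticated access refused: HTTP {resp.status_code}")
```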

The breach also exposed Clearview's Slack tokens, which could grant access to the company's internal communications without the need for a password. The leak additionally disclosed details about Clearview's discontinued prototype "Insight" camera, which was found to have captured 70,000 videos from a Manhattan residential building. Clearview stated that it collected the footage solely for debugging purposes and with permission from the building's management.
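
Slack's Web API illustrates why a leaked token is so sensitive: the token alone authenticates the caller, so no password or login flow is required. The sketch below calls Slack's real `auth.test` method with a placeholder token to show what someone holding a leaked token could immediately learn.

```python
# Minimal sketch: a bearer token by itself authenticates calls to Slack's
# Web API, which is why a leaked token is roughly as sensitive as a password.
# The token below is a placeholder, not a real credential.
import requests

LEAKED_TOKEN = "xoxb-PLACEHOLDER-NOT-A-REAL-TOKEN"

resp = requests.post(
    "https://slack.com/api/auth.test",
    headers={"Authorization": f"Bearer {LEAKED_TOKEN}"},
    timeout=10,
)
data = resp.json()
if data.get("ok"):
    # A valid token identifies the workspace and account it belongs to;
    # further calls within the token's scopes could read message history.
    print(f"Token is live: workspace={data.get('team')}, user={data.get('user')}")
else:
    print(f"Token rejected: {data.get('error')}")
```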

Clearview AI is known for its facial recognition technology, which identifies individuals using images scraped from public-facing platforms such as Facebook and Instagram. The company markets its service primarily to law enforcement agencies and businesses, which can identify a person simply by uploading a photo. This is not the first time Clearview has faced scrutiny; a list of the companies using its service was leaked earlier.
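
Clearview has not published how its matching pipeline works, but systems of this kind generally follow the same pattern: reduce every scraped photo to an embedding vector, then answer a query photo with a nearest-neighbour search over the stored vectors. The sketch below illustrates that general pattern with synthetic data; the profile URLs and 128-dimensional embeddings are invented for the example.

```python
# Generic illustration of embedding-based face search, not Clearview's code.
# Every stored photo is represented by an embedding; a query is identified
# by finding the stored embedding with the highest cosine similarity.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in database: one embedding per scraped profile URL.
database = {
    "https://social.example/profiles/alice": rng.normal(size=128),
    "https://social.example/profiles/bob": rng.normal(size=128),
    "https://social.example/profiles/carol": rng.normal(size=128),
}

def identify(query: np.ndarray) -> tuple[str, float]:
    """Return the profile URL whose embedding is closest to the query."""
    best_url, best_score = "", -1.0
    for url, emb in database.items():
        score = float(np.dot(query, emb) /
                      (np.linalg.norm(query) * np.linalg.norm(emb)))
        if score > best_score:
            best_url, best_score = url, score
    return best_url, best_score

# A slightly perturbed copy of "bob" resolves back to his profile URL.
query = database["https://social.example/profiles/bob"] + rng.normal(scale=0.05, size=128)
print(identify(query))
```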

CEO Hoan Ton-That has defended Clearview's data practices, asserting that the company should have the same right to store publicly available information that Google enjoys. However, this latest incident raises significant concerns not only about potential privacy violations but also about Clearview's ability to secure its own data.
