Privacy Experts Warn: Google’s Call-Scanning AI Could Make Censorship the Default

At its recent I/O conference, Google demonstrated a feature that uses generative AI to monitor voice calls in real time for patterns associated with financial scams. The demo has alarmed privacy and security experts, who argue it opens the door to centralized censorship: once client-side scanning is baked into mobile infrastructure, they warn, the same capability could be turned on content far beyond scams.

Google’s demo showed a scam-call detection feature expected to ship in a future version of Android, the operating system that runs on an estimated three-quarters of the world’s smartphones. The feature is powered by Gemini Nano, the smallest of Google’s AI models, designed to run entirely on-device.

This is a form of client-side scanning, a technology that has sparked fierce debate in recent years, particularly around detecting child sexual abuse material (CSAM) and grooming activity on messaging platforms. Apple scrapped a plan to deploy client-side scanning for CSAM in 2021 after a major privacy backlash. Policymakers have nonetheless kept up the pressure on technology companies to find ways to detect illegal activity on their platforms, and experts worry that this pressure, whether governmental or commercial, could drive widespread content scanning.
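To make the term concrete, the sketch below shows what a client-side scanning loop looks like in principle: audio is transcribed and classified locally, and the result surfaces only as an on-screen warning. This is a purely illustrative Kotlin sketch, not Google’s implementation; every name in it (AudioChunk, OnDeviceScamClassifier, CallScanner) is a hypothetical stand-in, as Google has not published an API for the Gemini Nano-powered feature.

```kotlin
// Purely hypothetical sketch of an on-device (client-side) scanning loop.
// None of these types correspond to a real Android or Gemini Nano API.

class AudioChunk(val samples: ShortArray)

interface OnDeviceScamClassifier {
    // Probability in [0.0, 1.0] that the transcript matches known scam patterns.
    fun score(transcript: String): Double
}

class CallScanner(
    private val transcriber: (AudioChunk) -> String,  // local speech-to-text
    private val classifier: OnDeviceScamClassifier,   // local model, e.g. a small LLM
    private val onAlert: (String) -> Unit,            // e.g. show a warning banner
    private val threshold: Double = 0.85              // assumed alert cutoff
) {
    // The defining property of client-side scanning: neither the audio nor the
    // transcript leaves the device; only the user sees the result. Critics note
    // that the same hook could later be pointed at other content categories.
    fun processChunk(chunk: AudioChunk) {
        val transcript = transcriber(chunk)
        if (classifier.score(transcript) >= threshold) {
            onAlert("This call shows patterns associated with financial scams.")
        }
    }
}
```

The privacy debate hinges on the last step: a hook that today triggers a local warning could, in principle, be rewired to block content or report it off-device, which is exactly the escalation the critics quoted below describe.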

Response to Google’s demo was swift. Meredith Whittaker, president of the encrypted messaging app Signal, voiced her concerns on X, stating: “This is incredibly dangerous. It paves the way for centralized device-level client-side scanning. From detecting ‘scams,’ it is a small step to monitoring ‘patterns associated with reproductive care’ or resources for LGBTQ communities or whistleblowers among tech workers.”

Cryptography expert Matthew Green, a professor at Johns Hopkins University, echoed these concerns on social media, warning of a future in which AI models scan texts and voice calls to detect and report illicit behavior. He added that service providers could demand a “zero-knowledge proof” that scanning had been carried out, which would effectively block open clients. “We’re not far from this tech becoming effective enough, likely within the next decade,” he predicted.

European privacy and security experts were also quick to object. Lukasz Olejnik, a Poland-based independent privacy consultant, acknowledged that Google’s anti-scam feature could be genuinely useful, but warned that the underlying infrastructure could be turned toward social surveillance. “This indicates that capabilities are being developed to surveil calls or documents in search of illegal or harmful content, according to subjective standards,” he cautioned.

Olejnik added that detection could escalate from warning notifications to blocking certain content, or even to reports being sent to the authorities. “This poses a serious threat to privacy and fundamental freedoms,” he said.

While on-device detection can offer better privacy than shipping data to a server, Olejnik stressed that much more is at stake as AI and large language models are embedded into operating systems. He warned of a future in which such technologies shape societal behavior at scale. “If this capability exists and is built into systems, we might be on the brink of significant risks regarding the use of AI to control societal behavior. How can we effectively govern this? Are we approaching a dangerous precipice?”

Michael Veale, an associate professor of technology law at UCL, also warned of function creep, suggesting Google’s call-scanning AI lays down infrastructure that regulators could later press into service for other purposes.

Privacy advocates in Europe have particular reason for alarm: the EU has a controversial legislative proposal on the table that would mandate message scanning, which critics argue could undermine democratic rights by requiring platforms to intercept and analyze private messages by default.

Although the proposal claims to be technology-neutral, the expectation is that platforms would comply with detection orders by deploying client-side scanning to identify both known and unknown CSAM, as well as grooming activity in real time.

Recently, hundreds of privacy and security experts signed an open letter warning that the plan could generate millions of false positives every day, since the client-side scanning technologies it would rely on are unproven, flawed, and susceptible to manipulation.

At the time of writing, Google had not responded to questions about the privacy implications of its call-scanning AI.
