On Tuesday, the BBC reported on the case of Pa Edrissa Manjang, a Black Uber Eats courier who received a financial settlement from Uber after "racially discriminatory" facial recognition checks denied him access to the app. Manjang had been making food deliveries on the platform since November 2019, and his case raises questions about the adequacy of U.K. law for governing AI systems. Automated systems are being deployed rapidly, and with little transparency, on the promise of boosting user safety and service efficiency; when they inflict individual harm instead, affected users often struggle to obtain redress for AI-driven bias.
This lawsuit stemmed from numerous complaints regarding failed facial recognition checks after Uber introduced its Real Time ID Check system in the U.K. in April 2020. Utilizing Microsoft’s technology, the system requires users to submit a live selfie that is matched against a stored photo for identity verification.
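To make concrete what a check like this does, here is a minimal sketch of selfie-to-stored-photo verification, written against the open-source face_recognition library. It is purely illustrative: Uber's and Microsoft's actual pipeline is proprietary, and the function name, threshold, and flow below are assumptions, not their implementation.

```python
# Illustrative sketch only -- not Uber's or Microsoft's actual system.
# The open-source face_recognition library stands in for the matcher.
import face_recognition

def verify_selfie(stored_photo_path: str, live_selfie_path: str,
                  tolerance: float = 0.6) -> bool:
    """Return True if the live selfie appears to match the photo on file."""
    stored = face_recognition.load_image_file(stored_photo_path)
    live = face_recognition.load_image_file(live_selfie_path)

    stored_encodings = face_recognition.face_encodings(stored)
    live_encodings = face_recognition.face_encodings(live)
    if not stored_encodings or not live_encodings:
        # No face detected in one of the images: the check fails outright.
        return False

    # compare_faces measures the distance between 128-dimension face
    # embeddings and applies the tolerance threshold to call a match.
    result = face_recognition.compare_faces(
        [stored_encodings[0]], live_encodings[0], tolerance=tolerance)
    return bool(result[0])
```

Two details in a sketch like this carry the whole story: where the tolerance threshold is set, and how evenly the underlying model performs across skin tones. If either is off, a legitimate user can fail check after check, which is exactly the pattern Manjang reported.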
Failed ID Checks
According to Manjang’s complaint, Uber suspended his account following a failed ID verification process, citing "continued mismatches" in the selfies he provided. In October 2021, he initiated legal action against Uber, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).
Years of litigation followed, with Uber unsuccessfully attempting to have Manjang's claim struck out. The EHRC described the case as still in its "preliminary stages" as of fall 2023, noting its complexity. A final hearing had been scheduled for November 2024 but was canceled after Uber offered a settlement, the details of which remain undisclosed. Uber also declined to comment on what went wrong during the ID checks.
Despite the settlement, Uber did not concede that its processes were flawed. In a statement, the company asserted that account terminations are never based on AI assessments alone and emphasized that its facial recognition checks include thorough human reviews. "Our Real Time ID Check aims to provide safety for app users, ensuring decisions are made with oversight, not in isolation," Uber stated, reinforcing its position that automated checks were not behind Manjang's loss of access.
However, it is evident that serious issues occurred during Uber’s ID verification process in this case.
Pa Edrissa Manjang (Photo: Courtesy of ADCU)
Worker Info Exchange (WIE), a digital rights advocacy group supporting Manjang, successfully obtained his selfies from Uber through a Subject Access Request under U.K. data protection law, demonstrating that all submitted photos were indeed of him.
“After his dismissal, Pa repeatedly communicated with Uber, urgently asking for a human review of his submissions. Each time, he was told, 'we were unable to confirm whether the provided photos were yours, and due to continued mismatches, we have made the final decision to end our partnership with you,'” WIE noted in a report on “data-driven exploitation in the gig economy.”
The details disclosed concerning Manjang’s complaint reveal that both Uber’s facial recognition process and the claimed human review system fell short in this instance.
Legal Perspectives on Equality and Data Protection
This case underscores the inadequacies of U.K. law when it comes to regulating AI. Manjang's settlement was reached through equality law: a discrimination claim under the U.K. Equality Act 2010, which lists race as a protected characteristic.
Baroness Kishwer Falkner, Chair of the EHRC, criticized the necessity for Manjang to initiate legal action to comprehend the opaque processes impacting his employment. “AI is inherently complex and poses unique challenges for employers, lawyers, and regulators. As AI usage grows, it can contribute to discrimination and human rights violations,” she stated. “We are especially troubled that Mr. Manjang was unaware his account was being deactivated and lacked a clear and effective means to contest the technology used against him. Transparency in AI usage by employers is essential.”
From a data protection perspective, the U.K. GDPR is supposed to offer robust safeguards against opaque AI processes. Manjang's access rights were crucial in acquiring the evidence that demonstrated the flaws in Uber's ID checks; without it, the company might never have settled. Proving a proprietary system deficient without access to one's own personal data is close to impossible, which stacks the odds in favor of larger, resource-rich platforms.
Enforcement Gaps
Beyond data access rights, the U.K. GDPR is supposed to protect individuals against automated decisions that significantly affect them. It mandates a lawful basis for processing personal data and requires entities to carry out data protection impact assessments where processing is likely to pose a high risk, which should force proactive scrutiny of harms from AI systems.
However, these protections are only as strong as their enforcement, and the Information Commissioner's Office (ICO), the U.K.'s data protection regulator, has not investigated complaints about Uber's flawed ID checks despite reports dating back to 2021.
Jon Baines, a senior data protection specialist at Mishcon de Reya, remarked, “The ICO’s lack of enforcement has weakened legal protections for individuals. Existing frameworks should be capable of addressing some potential harms from AI systems.” He underlined that the ICO has the authority to examine both individual claims and broader compliance under the U.K. GDPR.
“We should expect the ICO to adopt a more proactive stance,” he insisted, questioning its apparent inaction regarding Uber’s practices.
We reached out to the ICO for clarification on whether it is scrutinizing Uber's AI usage concerning ID checks in light of ongoing complaints. A spokesperson emphasized the necessity for organizations to understand and mitigate risks stemming from biometric technology but did not address specific inquiries related to Manjang's case.
Meanwhile, the government is currently revising data protection regulations through a post-Brexit data reform initiative while also confirming it will not introduce dedicated AI safety legislation at this time. This decision comes despite Prime Minister Rishi Sunak's declarations that AI safety is a priority for his government.
Instead, the government plans to lean on existing laws and regulatory bodies to manage emerging AI risks, with a modest allocation of £10 million to regulators for research and tools aimed at examining AI systems—far from sufficient for addressing fast-evolving challenges.
Overall, this case highlights both the pressing need for comprehensive regulatory frameworks addressing AI’s ethical implications and the urgency for effective enforcement to protect individuals from biases inherent in these technologies. A stronger legal approach similar to the EU’s risk-based AI regulation may better signal the seriousness of these issues, but such measures require genuine political will and commitment to enforcement.