Decolonizing Artificial Intelligence: Addressing Bias Through Historical Context
Technological advancements often seem like new beginnings, offering opportunities to reshape the future. However, they invariably enter a world rife with persistent issues, such as racism. In a time when society is confronting the deep-seated roots of racial inequality, a recent paper from researchers at DeepMind and the University of Oxford advocates for "decolonizing" artificial intelligence. The goal is to prevent the amplification of societal prejudices in today's powerful machine learning systems.
Published this month in Philosophy & Technology, the paper emphasizes the need for historical context to understand and mitigate technology's biased outcomes. Co-author Marie-Therese Png, a PhD candidate at the Oxford Internet Institute and former UN technology advisor, remarked, "To address racial and gender biases in technology, one must recognize that these systems of oppression are deeply entrenched in histories of colonialism."
The authors highlight how past injustices shape contemporary technologies. For instance, understanding the disproportionate impact of predictive policing on African Americans requires knowledge of the historical context of slavery and colonialism, which has conditioned societal values.
Because colonial powers once held sway over much of the world, decoloniality begins with acknowledging those exploitative dynamics and their lasting effects on modern society. The paper points to algorithmic discrimination in U.S. law enforcement, which disproportionately affects people of color, as a current example. It also critiques the exploitative treatment of “ghost workers” in developing countries, who annotate data for tech companies—a modern parallel to colonial labor extraction.
In a similar vein, the authors compare the beta testing of potentially harmful technologies in non-Western nations to the medical experiments colonial powers once conducted on colonized populations. Png underscores a key tenet of coloniality: lives are valued differently depending on race and nationality. Co-author Shakir Mohamed posed a critical question in an earlier blog post: “How do we make global AI truly global?” In other words, can AI serve marginalized communities as equitably as it serves privileged ones?
The paper outlines a "critical technical practice" for AI developers, urging them to examine the cultural assumptions embedded in their technologies and to weigh their societal impacts with ethical foresight. The suggested strategies range from algorithmic fairness to inclusive hiring practices and conscientious AI policymaking. Notably, the authors call on technologists to learn from marginalized communities, drawing inspiration from grassroots organizations like Data for Black Lives to challenge prevailing narratives of "technological benevolence."
Implicitly, the authors argue for a departure from the longstanding notion of technological neutrality, which holds that developers bear no responsibility for how their tools are used. Though the paper was begun before George Floyd’s death and the national discourse on racial equity that followed, the urgency of examining technology’s role in social injustice has never been clearer. Prominent AI institutions, including OpenAI, have signaled a willingness to engage with these issues by publicly supporting movements like Black Lives Matter.
As Png highlights, this evolving discourse allows for conversations about race within technological spaces without fear of dismissal or professional repercussions. Her co-author, William Isaac, expresses hope that this momentum toward understanding and advancing racial equity within the tech industry will endure.
The paper serves as a roadmap for navigating often superficial discussions about race in technology, linking advanced machine learning with centuries of global history. Yet, Png notes that decolonization goes beyond academic discourse; it requires dismantling technologies that perpetuate marginalization. “We aim for a genuine transfer of power,” she states.
AI has the potential to replicate historical injustices if its design ignores the complexities of the past. As AI systems increasingly infiltrate our lives, understanding their implications becomes paramount. "It’s vital to articulate and identify these systems," Png explains, "for they are manifestations of coloniality, white supremacy, and racial capitalism."
The research raises important questions about what genuinely decolonial AI would look like. Initiatives like Deep Learning Indaba and Mechanism Design for Social Good are pioneering this vision, yet the path ahead is largely uncharted. Could a decolonized AI incorporate non-Western philosophies of fairness? And how should projects that involve programming in Arabic or other non-English languages be classified?
Png acknowledges uncertainty on these questions but emphasizes that the immediate challenge is to decolonize the technological landscape we already have. The vision of an AI free from colonial influences—one that moves beyond resistance to build a fair and equitable foundation—remains speculative, much like the broader societal ambitions for equity and justice.