Nearly 200 AI leaders, researchers, and data scientists signed an open letter released on Tuesday by the Responsible AI Community. The letter condemns Israel's recent violence against Palestinians in Gaza and the West Bank and criticizes the use of AI technologies in warfare, arguing that such technologies are designed to make the loss of human life more efficient and that they perpetuate anti-Palestinian biases in AI systems.
The letter, initially circulated by Tina Park, head of Inclusive Research and Design at the Partnership on AI, calls on the technology industry to stop supporting the Israeli government and to end its defense contracts. It emphasizes that the current crisis is rooted in a historical context predating October 7, 2023, and notes that AI-driven technologies have contributed to more than 11,000 strikes on targets in Gaza since the conflict began.
Prominent signers of the letter include Timnit Gebru, AI ethics researcher and DAIR founder; Alex Hanna, director of research at DAIR; Abeba Birhane, senior fellow at Mozilla; Emily Bender, linguistics professor at the University of Washington; and Sarah Myers West, managing director of the AI Now Institute.
Israeli and Jewish AI leaders, however, have vocally opposed the letter, arguing that it is the latest in a series of instances in which prominent AI ethicists have either celebrated or ignored the Hamas attacks of October 7. The letter neither addresses the Israeli hostages held in Gaza nor condemns Hamas for its actions.
Jules Polonetsky, CEO of the Future of Privacy Forum, expressed distress that the letter fails to condemn the atrocities committed by Hamas. He emphasized the moral complexity of military conflicts and questioned whether a one-sided narrative can help resolve the violence.
Yoav Goldberg, a professor at Bar-Ilan University, noted that some AI technologies have likely "saved countless Palestinian lives" by aiding intelligence and surveillance work. Even before the October 7 attacks, he said, these systems likely helped avert many planned assaults.
AI engineer Shira Eisenberg pointed to systems such as Israel's Iron Dome to argue that AI plays a critical role in wartime operations, and stressed the importance of responsible AI use amid the ongoing conflict.
Several Israeli and Jewish leaders in the AI community have also said they were shocked and disappointed by the anti-Semitic sentiment that surfaced in social media discussions after October 7, and that they feel betrayed and isolated by people they believed shared their values.
Eran Toch, an associate professor at Tel Aviv University, said the lack of empathy from the broader AI community is especially painful for Israeli members who have invested in resolving the Israeli-Palestinian conflict and advocating for human rights.
These leaders have also raised concerns about anti-Semitic conspiracy theories surrounding Israel's use of AI technology, arguing that such narratives revive historical tropes that cast Jews as agents of oppression.
In response to media inquiries, Gebru declined to comment further, emphasizing her focus on supporting Palestinians.
The letter has deepened a schism within the AI community and the broader tech sector, a rift already widened by some AI leaders' decisions to withdraw from prominent events such as Web Summit over controversial comments about Israel's actions.
Dan Kotliar, a researcher at the University of Haifa, argued that the letter's implicit support for Hamas's actions calls into question the ethics, and ultimately the credibility, of those who promote responsible AI.
Talia Ringer, an assistant professor at the University of Illinois at Urbana-Champaign, described a deep personal connection to the events following the October 7 attacks, which affected her family. She pointed to a lack of space for mourning within the AI research community and to her own struggle with anti-Semitic remarks made by former friends.
While Ringer sees value in the letter's concerns about AI in warfare, she criticized its suggestion that history did not begin on October 7 as dismissive of the traumatic impact of that day. Ultimately, she said, her personal devastation has made it difficult to concentrate on AI issues, as her focus remains on the human toll of the ongoing conflict.