In May 2024, around 200 Google DeepMind employees, roughly 5 percent of the division, signed a letter urging the company to end its contracts with military organizations, citing concerns that its AI technology is being used for warfare. The letter stresses that these concerns are not tied to the geopolitics of any particular conflict, but it points to Google's defense contract with the Israeli military, known as Project Nimbus. It also notes reports that the Israeli military uses AI for mass surveillance and for target selection in its bombing campaign in Gaza, and that Israeli weapons firms are required to purchase cloud services from Google and Amazon.
The episode underscores tensions within Google, particularly between its AI division and its cloud business, which sells AI services to militaries. At the 2024 Google I/O conference, pro-Palestine protesters gathered at the entrance, voicing opposition to the Lavender and "Gospel" AI targeting systems and to Project Nimbus.
The rapid integration of AI into warfare has prompted some of the technologists building these systems to protest. When Google acquired DeepMind in 2014, it committed that the lab's AI technology would never be used for military or surveillance purposes, a pledge the internal letter invokes: "Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI and contradicts our mission statement and AI Principles."
The DeepMind staff urge leadership to investigate claims that militaries and weapons manufacturers are using Google cloud services, to cut off military access to DeepMind's technology, and to establish a governance body that would prevent future military applications of its AI. Despite these demands, reports indicate that Google has yet to offer a meaningful response.