Concerns Raised by OpenAI and DeepMind Employees Over AI Industry Practices
On June 4, 2024, thirteen current and former employees of OpenAI and Google DeepMind released a joint open letter voicing concern that the artificial intelligence (AI) industry is developing rapidly without sufficient whistleblower protections. The letter has drawn attention from international media outlets.
The authors argue that the profit-driven incentives of AI companies discourage adequate oversight, and that current corporate governance frameworks are ill-equipped to handle the industry's challenges. They highlight several risks of insufficiently regulated AI, including the spread of misinformation, loss of control over autonomous AI systems, and the deepening of social inequalities.
The letter emphasizes that most external stakeholders remain largely unaware of these risks. AI companies have only minimal obligations to share information with governments and none to engage with civil society. The signatories argue that voluntary information sharing is insufficient for effective oversight and that existing legal protections do not adequately shield whistleblowers: many employees are bound by confidentiality agreements that prevent them from disclosing what their companies are building.
The signatories include individuals with ties to DeepMind, while several current OpenAI employees chose to remain anonymous. Notable organizers include former OpenAI engineers Daniel Ziegler and Daniel Kokotajlo. Ziegler, who worked at OpenAI from 2018 to 2021, expressed concern that the industry is pursuing commercialization at the expense of addressing potential risks. He stated, "Rather than blaming OpenAI, I encourage all leading AI firms to genuinely invest in enhancing regulation and transparency to build public trust."
Kokotajlo echoed these sentiments, saying he lost confidence earlier this year that OpenAI would act responsibly, particularly in its pursuit of general-purpose AI systems. He warned about how quickly companies abandon established protocols when dealing with powerful yet poorly understood technologies.
In response to the open letter, OpenAI reiterated its commitment to protecting whistleblowers, pointing to its anonymous reporting hotline and its safety committee, and affirmed that addressing risks through scientific methods is essential.