Former NSA Chief Joins OpenAI Board Amid Growing Security Concerns
OpenAI announced Thursday that retired Gen. Paul Nakasone, the former head of the National Security Agency (NSA), will join its board of directors. He will also sit on the board's Safety and Security Committee.
This strategic appointment appears aimed at addressing concerns from critics who argue that OpenAI is advancing too rapidly without adequately assessing the potential risks to its users and society at large.
Nakasone brings extensive military and cybersecurity experience, having spent decades in the Army, at U.S. Cyber Command, and at the NSA. Whatever one's opinion of those agencies' operations, his expertise is undeniable.
As OpenAI solidifies its role as a key AI provider not only for the tech sector but also for government, defense, and major enterprises, such institutional knowledge is invaluable. The move may also reassure apprehensive investors, particularly given his connections within federal and military organizations. Nakasone stated, “OpenAI’s dedication to its mission resonates with my own values and experiences in public service.”
The alignment is noteworthy: both Nakasone and the NSA have recently defended controversial practices, such as purchasing data of dubious origin for surveillance purposes, on the grounds that doing so is legal. OpenAI, similarly, has drawn scrutiny for harvesting vast amounts of internet data without apparent authorization, often justifying the practice by arguing that no law prohibits it.
According to OpenAI's release, Nakasone's insights will bolster the company's efforts to use AI for cybersecurity, improving the detection of and response to cyber threats. Applying AI to protect vulnerable institutions like hospitals, schools, and financial centers could also open new markets for the company.
As a member of the Safety and Security Committee, Nakasone will help guide the board on key decisions about OpenAI's safety processes and practices. Exactly how the newly formed committee will function, however, remains unclear; several key safety personnel have recently left the company, and the committee is currently conducting a 90-day review of its operational safeguards.