OpenAI Updates Policy to Permit Military Applications: What You Need to Know

Update: OpenAI has released a statement clarifying that recent changes to their policy were made to better accommodate military customers and approved projects. The company emphasizes that its tools cannot be used to harm individuals, develop weaponry, conduct surveillance, or damage property. However, OpenAI acknowledges that national security use cases align with its mission. For instance, the company is collaborating with DARPA to develop innovative cybersecurity tools aimed at protecting the open-source software that critical infrastructure and industries rely on. The previous policy's wording left some ambiguity about whether these beneficial applications would be permissible under the term “military.” Thus, the update aims to provide clarity and promote constructive discussions.

Original story follows:

In a quiet update to its usage policy, OpenAI has opened the door to potential military applications of its technologies. The policy previously prohibited use of its products for "military and warfare" purposes explicitly; that language has now been removed, and when asked, OpenAI did not deny that it is now open to military uses. The change was first reported by The Intercept and appears to have taken effect on January 10.

Such unannounced policy updates are not uncommon in the tech industry, as companies adapt to the evolving landscape of their products. OpenAI’s recent announcement regarding the public rollout of customizable GPTs and a vaguely defined monetization strategy likely prompted revisions to existing policies.

However, the removal of the no-military clause cannot be attributed solely to the introduction of this new product. Nor is it credible to claim, as OpenAI's statement about the update does, that dropping "military and warfare" simply makes the policy "clearer": this is a significant, consequential change of policy, not a rephrasing of the same one.

For reference, you can view the current usage policy and the previous one; highlighted screenshots of the relevant sections appear below.

Before the policy change:

After the policy change:

The entire policy has clearly undergone a rewrite. Whether the rewrite improves readability is subjective; I would argue that a bulleted list of clearly prohibited practices is more straightforward than the generalized guidelines that replaced it. But OpenAI's policy writers evidently prefer the new structure, perhaps in part because it gives them latitude to interpret certain previously barred practices more favorably. As the company puts it, "Don't harm others" is a principle that is "broad yet easily grasped and relevant in numerous contexts."

As OpenAI representative Niko Felix pointed out, there remains a categorical restriction against the development and use of weapons, which is listed separately from "military and warfare." This distinction recognizes that military activities extend beyond mere weapon production, also encompassing research, investment, and infrastructure support.

It is here that I speculate OpenAI might explore new business avenues. The Defense Department engages in numerous activities that are not solely focused on combat; military entities are significantly invested in basic research and small business funding.

OpenAI's GPT platforms could be invaluable to, say, Army engineers tasked with summarizing decades of records on a region's water infrastructure. Companies often struggle to define their relationship with government and military money. The controversy over Google's "Project Maven" underscored how fraught the issue can be, even as the multibillion-dollar JEDI cloud contract drew far less public outcry. An academic researcher on an Air Force Research Laboratory grant might be permitted to use GPT-4, while a researcher inside AFRL working on the same project might not. Where should the line be drawn? Even a strict "no military" policy has to allow for some flexibility.

Still, the complete removal of "military and warfare" from OpenAI's prohibited uses suggests that the company is, at the very least, open to working with military customers. I asked the company whether this interpretation is accurate, noting that anything short of a denial would amount to confirmation.

As of now, there has been no response. I will update this post should I receive any further information.
