DARPA Unveils Two-Year Competition to Develop AI-Driven Cyber Defense Solutions

As part of an ongoing initiative to enhance software security, the Defense Advanced Research Projects Agency (DARPA) is set to launch the AI Cyber Challenge, a two-year competition that will use AI to identify and fix software vulnerabilities. The contest invites U.S.-based teams to harden "vital software," particularly critical infrastructure code, in collaboration with AI startups Anthropic and OpenAI and technology giants Microsoft and Google. The Linux Foundation's Open Source Security Foundation (OpenSSF) will serve as an advisor to the challenge, which carries a prize pool of $18.5 million for top performers.

DARPA also plans to allocate $1 million each to up to seven small businesses eager to join the competition. “Our goal is to develop systems that can autonomously protect any software from cyberattacks,” stated DARPA program manager Perry Adams, who initiated the AI Cyber Challenge. He emphasized the impressive potential of responsibly applied AI in bolstering code security.

Adams pointed out that open source code is becoming increasingly prevalent in critical software solutions. According to a recent GitHub survey, an astonishing 97% of applications utilize open source code, while 90% of companies integrate it in some capacity.

While the rise of open source fosters innovation, it also introduces significant security risks. A 2023 Synopsys analysis revealed that 84% of codebases contained at least one known open source vulnerability, and 91% contained outdated open source components. Moreover, a Sonatype study noted a staggering 633% year-over-year increase in supply chain attacks targeting third-party, often open source, components.

In response to high-profile security incidents, such as the Colonial Pipeline ransomware attack and the SolarWinds supply chain breach, the Biden-Harris Administration issued an executive order in May 2021 aimed at strengthening software supply chain security. That order established the Cyber Safety Review Board to evaluate significant cyberattacks and recommend improved defenses. Additionally, in May 2022, the White House joined the Open Source Security Foundation and the Linux Foundation in calling for $150 million in funding over two years to address persistent open source security challenges.

With the launch of the AI Cyber Challenge, the Biden Administration clearly recognizes the pivotal role AI can play in cybersecurity defense. According to Adams, “The AI Cyber Challenge offers an opportunity to explore the possibilities when cybersecurity experts and AI specialists collaborate with a unique suite of resources. If successful, we anticipate this challenge will not only lead to innovative cybersecurity tools but also showcase how AI can reinforce societal resilience by protecting critical systems.”

While discussions often highlight AI’s potential for misuse in cyberattacks—such as generating malicious code—many experts believe that advancements in AI can fortify organizations’ defenses, allowing security professionals to perform tasks more effectively. A survey by Kroll indicated that more than half of global business leaders are currently integrating AI into their cybersecurity initiatives.

The AI Cyber Challenge will kick off with a qualifying event in spring 2024, where up to 20 top-scoring teams will secure a place in the semifinal round at the DEF CON conference later that year. Up to five finalists will each receive $2 million and advance to the final competition at DEF CON 2025, where the top three teams will be awarded additional prizes, including a $4 million grand prize for first place.

Winners will be encouraged, though not required, to open source their AI solutions. The AI Cyber Challenge builds on a model assessment the White House announced earlier this year for DEF CON, which aims to identify vulnerabilities in large language models such as OpenAI's ChatGPT and surface fixes for potential exploits. That assessment will also examine how these models align with the principles set forth in the Biden-Harris administration's "AI Bill of Rights" and the National Institute of Standards and Technology's AI risk management framework.
