Microsoft and OpenAI have reported that various state-sponsored hacking groups are leveraging generative AI (GAI) tools to enhance their cyberattack strategies. Recent research reveals how hackers affiliated with foreign governments, including those from China, Russia, North Korea, and Iran, are utilizing GAI.
These state actors are employing GAI for several purposes: debugging code, conducting open-source research on targets, developing social engineering techniques, crafting phishing emails, and translating text. In response to this misuse, OpenAI, whose models power Microsoft's GAI products such as Copilot, has terminated the accounts associated with these groups.
Among the identified threat actors is the notorious Russian group Forest Blizzard, also known as Fancy Bear or APT28. The group reportedly used OpenAI's platform mainly to research satellite communication protocols and radar imaging technology, and for help with scripting tasks.
Microsoft tracks more than 300 hacking groups, including 160 associated with nation-states, and has shared its insights with OpenAI to help detect these hackers and shut down their accounts. OpenAI is committed to identifying and disrupting malicious actors on its systems. Its team employs several strategies, including using proprietary models to follow up on leads, analyzing how suspect users interact with OpenAI tools, and assessing their broader intentions. When illicit use is detected, OpenAI disables accounts, revokes services, or restricts access to resources.