On August 11, Futurism reported alarming findings by security researchers regarding Microsoft's built-in Copilot AI in the Windows system. They revealed that this AI could be easily manipulated to leak sensitive corporate data and even become a powerful phishing tool. Michael Bargury, co-founder and CTO of the security firm Zenity, disclosed this information at the Black Hat Security Conference in Las Vegas. He stated, “I can leverage it to gain access to all your contact information and send out hundreds of emails on your behalf.”
Traditionally, hackers would spend days crafting deceptive phishing emails, but with Copilot, they can generate numerous convincing emails in just minutes. Demonstrations showcased how attackers do not need access to corporate accounts to trick Copilot into altering bank transfer recipient information; they only need to send a malicious email, often without needing the target employee to open it. Another video highlighted the potential for severe damage once an attacker gains access to an employee’s account using Copilot. Through simple questioning, Bargury accessed sensitive data, which he could then use to impersonate employees in phishing attacks.
In one demonstration, he obtained the email address of a colleague, Jane, and gathered details from their recent conversation, coaxing Copilot to reveal the email addresses of those copied in that conversation. He then directed Copilot to compose an email to Jane in the targeted employee’s style and retrieve the exact subject line from their most recent exchange. Within minutes, he crafted a highly credible phishing email capable of sending malicious attachments to any user in the network—all thanks to Copilot's cooperation.
Microsoft's Copilot AI, especially Copilot Studio, allows businesses to customize chatbots for specific needs. However, this customization requires AI to access company data, raising significant security concerns. Many chatbots are searchable online, making them prime targets for hackers. Attackers could also exploit indirect prompts to bypass Copilot’s security measures, using malicious data from external sources, such as making the chatbot access websites that contain harmful prompts to perform prohibited actions.
Bargury emphasized, “There is a fundamental issue here. When AI is granted access to data, that data becomes a surface for prompt injection attacks. To some extent, if a bot is useful, it is vulnerable; if it is not vulnerable, it lacks utility.”