Security Teams Consider AI Solutions but Hesitate Over Expensive Subscription Costs

Security teams are increasingly considering innovative tools like ChatGPT to enhance their threat detection strategies. According to Dennis Xu, a senior director and analyst at Gartner, while generative AI tools such as ChatGPT hold promise for security professionals—potentially aiding in areas like detection engineering and training—the costs associated with premium versions can be a barrier.

At the Gartner Security & Risk Management Summit in London, Xu pointed out that the free basic version of ChatGPT, powered by the GPT-3.5 Turbo model, struggles with context retention and coding tasks. Consequently, security professionals may need to opt for the $20 per month ChatGPT Plus or the new ChatGPT Enterprise version to achieve the functionality required for their roles. As organizations scale, these costs can rise significantly based on the number of users.

Xu highlighted that the Enterprise version offers enhanced data control compared to its basic and Plus counterparts but cautioned that it may not yet be fully ready for deployment. Despite this limitation, numerous major security vendors are actively developing generative AI features. For instance, Cisco’s acquisition of Splunk aims to bolster data analytics capabilities, while Privacera launched a generative AI solution in June. Nvidia has also introduced its deep learning security software library, Morpheus, as part of its AI Enterprise 3.0 suite. Xu also observed a trend of companies integrating ChatGPT’s natural language interface into existing products to streamline functionality.

While dark web tools like WormGPT and FraudGPT have emerged, marketed to attackers, Xu suggested their sellers may in effect be "scamming the scammers": accessing these tools requires payment, yet widely available models like ChatGPT can perform similar functions, such as generating phishing emails. This dynamic has created what Xu referred to as an "arms race" in the world of AI security, with malicious actors often having the upper hand. “For just $20 a month, a malicious user can develop malware or create phishing emails, while defenders face significantly higher costs to achieve similar efficiencies,” he stressed.

Understanding the limitations of AI tools is crucial. Xu likened ChatGPT to a five-year-old child trained on an extensive dataset: it may be useful for certain security tasks but is ill-equipped for others. "There are questions you just wouldn't ask a five-year-old,” he quipped, emphasizing the importance of realistic expectations and validation. In the realm of Security Operations (SecOps), determining the accuracy of AI-generated insights can be challenging.

Xu also noted the lack of robust use cases for applying AI systems in vulnerability management or attack surface management, singling out Google's Sec-PaLM as the only established threat detection language model capable of recognizing malicious scripts. However, he remarked that it is still in the early stages, with no benchmarks published yet.

For security teams looking to implement AI tools like ChatGPT, establishing clear governance rules and playbooks is vital. “Understand when and how to leverage AI tools, and specify clear SecOps use cases,” he advised. Xu underscored the importance of avoiding time-sensitive cases and scenarios involving sensitive corporate data, recommending that organizations provide training on AI interactions and implement monitoring protocols.
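A playbook rule like "no sensitive corporate data in prompts" can be enforced mechanically before a query ever leaves the organization. The sketch below is a minimal, hypothetical pre-flight filter — the `prompt_allowed` helper and its pattern list are illustrative assumptions, not any vendor's API — showing how such a governance check might look:

```python
import re

# Hypothetical guardrail patterns: block prompts that appear to contain
# sensitive corporate data before they reach an external chat model.
BLOCKED_PATTERNS = [
    r"\bconfidential\b",
    r"\bapi[_-]?key\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # US SSN-like number
]

def prompt_allowed(prompt: str) -> bool:
    """Return True only if no blocked pattern appears in the prompt."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

print(prompt_allowed("Summarize this public CVE advisory"))  # True
print(prompt_allowed("Debug this config: api_key=abc123"))   # False
```

In practice such a filter would sit alongside the training and monitoring protocols Xu recommends, logging blocked prompts for review rather than silently discarding them.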

Awareness of AI drift is also critical. Generative AI tools may not deliver consistent responses to the same query. Additionally, Xu encouraged teams to stay informed about updates to the technology itself. For instance, OpenAI has recently introduced new capabilities for ChatGPT with GPT-4V, allowing voice and image interactions. Xu expressed enthusiasm about this potential, envisioning scenarios where teams could engage with their systems more intuitively by simply taking a picture and asking for diagnostics.
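The drift Xu describes can be watched for with a simple consistency check: send the same prompt several times and flag differing answers. Below is a minimal sketch under the assumption that responses are collected as plain strings; the `detect_drift` helper and the stubbed `runs` list are hypothetical, standing in for repeated calls to a chat model:

```python
import hashlib

def response_fingerprint(text: str) -> str:
    # Normalize whitespace so trivial formatting changes don't count as drift
    return hashlib.sha256(" ".join(text.split()).encode()).hexdigest()

def detect_drift(responses) -> bool:
    # Flag drift when the same prompt yields substantively different answers
    return len({response_fingerprint(r) for r in responses}) > 1

# Stubbed responses standing in for repeated calls with an identical prompt
runs = [
    "Port 443 is open.",
    "Port 443  is open.",          # same answer, formatting noise only
    "Ports 443 and 80 are open.",  # substantively different answer
]
print(detect_drift(runs))  # True
```

A check like this can run as part of the monitoring protocols mentioned above, alerting the team when a model's answers to a pinned test prompt start to diverge.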

Ultimately, while AI tools like ChatGPT are poised to assist security professionals by streamlining their workloads, they are not intended to replace human expertise. Xu cautioned that “this technology is still a five-year-old baby” with much to learn, reinforcing the importance of using these tools judiciously and effectively as they evolve.
