With Minimal Prompting, Grok Reveals How to Create Explosive Devices, Synthesize Drugs, and Carry Out Other Dangerous Activities
Grok, the chatbot developed by Elon Musk's company xAI, exhibited significant vulnerabilities in safety tests, ranking lowest among major AI models at refusing harmful prompts. Its safeguards against instructions for illegal activities were easily bypassed, prompting calls for rigorous red teaming to improve AI security.