The Grok 3 model launched by xAI proposed by Elon Musk has recently attracted attention due to security vulnerabilities. A report by AI security company Adversa AI pointed out that this new model is worrying in cybersecurity and is easily compromised by a simple "jailbreak attack" that may reveal dangerous information such as how to create bombs. What is even more disturbing is that the research team discovered a "tip leak" flaw, which directly exposed the system prompts of Grok 3, providing hackers with a "blueprint" of the model.
Adversa CEO Alex Polyakov warned that the vulnerabilities are not limited to information leakage, and militants can steal Grok 3's AI agents. For example, an automatic mailing agent may be hacked into a malicious reply command, sending a phishing link to Taylor. Tests show that Grok 3 defends against only one of the four jailbreak attacks, far inferior to OpenAI and Anthropic models. Polyakov criticized: "The Grok 3 pursues speed and sacrifices safety, and its protection level is closer to the Chinese model than to the Western standards."
The report also mentioned that Grok 3’s training seemed to reinforce Musk’s extreme views, such as hostility to traditional media, which could amplify the risk of its weakening. With the acceleration of AI agents (such as OpenAI operators), security risks are becoming increasingly prominent. Grok 3's crisis reminds us that if AI's rapid innovation ignores security, it may become a "accomplice" of hackers, and the threat is far beyond imagination.