Recently, a research team from Columbia University and the University of Maryland released a new study pointing out that there are serious security vulnerabilities in Internet access AI agents. With expertise, attackers can easily manipulate AI systems, leak user privacy, download malicious files, and even send scam emails.
Researchers tested several AI Agents, including Anthropic's computer assistant, MultiOn Web Agent and ChemCrow research assistant, and found weaknesses in these systems' security defenses. This week documented in detail how an attacker directs an AI Agent from a trusted website to a malicious website through four stages, ultimately stealing user-sensitive data.
The researchers also developed a framework to classify different types of attacks and analyze factors such as the initiator, target, pathway and strategy of the attack. In a test, researchers created a threat website to promote an "AI-enhanced German refrigerator". When AI agents visited the website, they encountered a hidden jailbreak prompt. As a result, the agent leaked uninformed in ten attempts. Confidential information including credit card number and downloaded files from suspicious sources.
In addition, the study also found serious vulnerabilities in email integration. When a user logs into the email service, an attacker can manipulate an AI agent to send trusted phishing emails to chemotherapy, making it difficult for users to distinguish authenticity.
Although the security risks of these AI systems have been exposed, many companies will still accelerate the commercialization process. The research team called for strengthening security measures, including implementing strict access control, URL verification and user protection for downloads, etc., to ensure user data security.