
Man Used ChatGPT to Plan Bombing Outside Trump Hotel; OpenAI Responds

Author: LoRA | Time: 08 Jan 2025

On January 1, a shocking incident occurred in Las Vegas: a man detonated a Tesla Cybertruck outside the Trump Hotel. Following an investigation, the Las Vegas Police Department revealed that the man had used the artificial intelligence chat tool ChatGPT to plan the attack before carrying out the explosion.


Police said at a press conference that the man involved, Matthew Livelsberger, had posed more than 17 questions to ChatGPT in the days before the incident. These covered how to obtain the materials needed for the explosion, related legal issues, and how to detonate his chosen explosive with a firearm. Over the course of a full hour, Livelsberger interacted with ChatGPT in plain English, asking, among other things, whether fireworks are legal in Arizona, where to buy guns in Denver, and what kind of gun would most effectively detonate explosives.

Assistant Sheriff Dori Koren confirmed that ChatGPT's answers played a key role in the bombing plan: the tool provided information about ammunition muzzle velocity that helped Livelsberger carry it out. Although the final explosion was less powerful than he expected and some of the explosives failed to ignite, the incident still shocked law enforcement.

Las Vegas Sheriff Kevin McMahill said: "We've known for a long time that artificial intelligence would change our lives at some point, but this is the first time I've seen someone use ChatGPT to build such a dangerous plan." He noted that there is currently no government oversight mechanism that flags such inquiries about explosives and firearms.

Although the Las Vegas police have not disclosed ChatGPT's specific responses, the questions displayed at the press conference were relatively simple and did not use traditional "jailbreak" phrasing. It is worth noting that this use clearly violates OpenAI's usage policies and terms of service, but it remains unclear whether OpenAI's safety measures were triggered while Livelsberger was using the tool.

In response, OpenAI stated that it is committed to having users employ its tools "responsibly" and designs its AI tools to refuse harmful instructions. OpenAI explained further: "In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities. We're working to make AI smarter and more responsible." The company added that it is working with law enforcement to support the investigation.