
The U.S. Pentagon is using AI to accelerate its "kill chain"

Author: LoRA | Date: 20 Jan 2025

As artificial intelligence technology advances rapidly, leading AI developers such as OpenAI and Anthropic are working to partner with the U.S. military, seeking to improve the Pentagon's efficiency while ensuring that their AI technology is not used in lethal weapons.

Dr. Radha Plumb, the Pentagon's chief digital and AI officer, said in an interview with TechCrunch that AI is not currently used in weapons, but it provides the Department of Defense with significant advantages in the identification, tracking and assessment of threats.


Dr. Plumb said the Pentagon is using AI to accelerate the execution of the "kill chain" — the process of identifying, tracking and neutralizing threats, which draws on a complex array of sensors, platforms and weapons systems. Generative AI is showing its potential in the planning and strategy stages of the kill chain, she noted, helping commanders respond quickly and effectively when faced with threats.

In recent years, the Pentagon has grown increasingly close to AI developers. In 2024, companies including OpenAI, Anthropic, and Meta relaxed their usage policies to allow U.S. intelligence and defense agencies to use their AI systems, while still prohibiting the use of these technologies to harm humans. This shift has led to a rapid expansion of cooperation between AI companies and defense contractors.

Meta, for example, partnered in November with companies including Lockheed Martin and Booz Allen to bring its Llama AI models to the defense sector, and Anthropic reached a similar partnership with Palantir. Although the specific technical details of these collaborations remain unclear, Dr. Plumb acknowledged that applying AI in the planning stage may conflict with the usage policies of several leading developers.

The industry has been debating whether AI weapons should have the ability to make life-and-death decisions. Anduril CEO Palmer Luckey has noted that the U.S. military has a long history of purchasing autonomous weapons systems. Dr. Plumb, however, rejected that framing, stressing that in any case a human must be involved in the decision to use force.

She pointed out that the idea of automated systems independently making life-and-death decisions is too binary, and that the reality is far more complex: the Pentagon's AI systems operate as a collaboration between humans and machines, with senior leaders involved throughout the decision-making process.