The US used Anthropic's Claude AI during a mission to capture former Venezuelan President Nicolas Maduro. Anthropic CEO Dario Amodei has previously raised concerns about AI being used for lethal purposes.

Anthropic's Claude AI was used by the US during its military operation to capture former Venezuelan President Nicolas Maduro, according to a report by the Wall Street Journal. Claude was reportedly deployed via Anthropic's partnership with data company Palantir Technologies, whose tools are commonly used by the US Defense Department and federal law enforcement.
During the mission, the US captured Maduro along with his wife and bombed several sites in Caracas. However, Anthropic's usage guidelines explicitly prohibit Claude from being used to facilitate violence, develop weapons or conduct surveillance.
An Anthropic spokesperson, speaking to WSJ about the deployment, said, “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise.”
“Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance,” the spokesperson added.
The report also noted that Anthropic's concerns about how Claude can be used by the Pentagon have led US administration officials to consider cancelling its contract, which totals around $200 million.
Anthropic first AI company to be used by US DoD:
Anthropic is said to be the first AI company whose tools were used by the US Department of Defense during a classified operation. The report also does not discount the possibility that other AI tools were used during the Venezuela operation for unclassified tasks, ranging from summarizing documents to controlling autonomous drones.
At an event last month, US Defense Secretary Pete Hegseth said he is creating an “AI-first, war-fighting force”.
“Responsible AI at the War Department means objectively truthful AI capabilities employed securely and within the laws,” he said.
“We will not employ AI models that won’t allow you to fight wars,” Hegseth added, in a remark said to be referring to discussions US officials have had with Anthropic.
Notably, Anthropic was awarded the $200 million contract by the Department of Defense last year. However, Anthropic CEO Dario Amodei has often raised concerns about the use of AI in lethal operations and surveillance.
“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it,” Amodei said during an event last month.
