AI for warfare: West Asia’s flare-up should focus attention on red lines for autonomous weapons

Summary

The US-Israel strikes on Iran not only deploy AI, but also come amid a standoff between Anthropic and the Pentagon over curbs on AI use. Liberal democracies like India need clear laws and standards for state use of AI so that such disputes don’t arise at moments of crisis.

A rift between the AI firm Anthropic and the US federal government has broken into the open at a fraught geopolitical moment. Negotiations between the AI firm and the Pentagon over deployment of frontier models by the US military stalled after Anthropic refused to permit the use of its models for two specific use cases: domestic surveillance and lethal autonomous weapons.

OpenAI has struck a deal with the Pentagon in the meantime, even as the use of AI on the battlefield accelerates with deployments by Israel and the US in the ongoing war in West Asia.

Dario Amodei, co-founder of Anthropic, has argued that his proposed limits are non-negotiable. When talks stalled, the disagreement turned political. Trump criticized the firm in sharp terms, calling it a “Left Wing, Woke” company in a Truth Social post. Secretary of War Pete Hegseth labelled Anthropic a “Supply Chain Risk,” a designation that bars companies doing business with it from engaging with the US military.

This dispute has no precedent. Never in American history has a leading company been treated, in effect, as an enemy of the state for declining military cooperation.

It is easy to view this episode as another instance of industrial policy under US President Donald Trump, in which he seems to have picked OpenAI at the expense of a rival. His administration has intervened aggressively in technology markets several times before. For instance, the US government provided billions in support to chip-maker Intel and took an equity stake to stabilize domestic chip capacity late last year. But this moment goes beyond picking winners.

AI capabilities are improving at a pace that neither governments nor companies fully comprehend. No one knows where the capability frontier lies. Removing one of the most capable AI firms from the national security ecosystem is therefore a high-risk gamble.

Some researchers argue that AI improvement is exponential. A recent study by METR (Model Evaluation & Threat Research) found that the length and complexity of tasks AI systems can autonomously complete have doubled roughly every seven months since 2019. The implications of supply-chain exclusion decisions made today will likely compound over time.

A second layer complicates the picture. As AI systems scale, the values of their creators will shape their deployment boundaries. Founder philosophy will bleed into national strategy. Amodei has consistently argued for collective responsibility in the AI community. Anthropic’s Responsible Scaling Policy encourages greater transparency around model development, risk standards and thresholds. But these simultaneously embed leadership-driven judgments about acceptable use.

Third, this debate should not be read as uniquely American. Amodei’s two red lines rest on two institutional gaps of wider significance: the US lacks a comprehensive federal privacy law that would limit domestic surveillance, and it has no universally accepted testing standards that can certify AI systems as reliable enough for lethal or high-stakes battlefield contexts. In one case, there is no law; in the other, there is no foolproof technical benchmark.

Other democracies face similar deficits. In India, public sector adoption of AI is accelerating across law enforcement and intelligence operations. Yet our data protection framework grants broad exemptions to the state. We have also not developed procurement safeguards that tie public sector AI deployment to enforceable reliability and accountability standards.

Finally, the dispute implicates the global debate on tech sovereignty. For years, the US has urged allies to trust its firms as suppliers of critical infrastructure, platforms and services. If Washington questions the loyalty of its own AI companies, other governments will certainly claim a licence to do the same, at times without clear strategic gain.

The Financial Times recently reported that European discussions about reducing dependence on US technology have “alarmed” its own military officials, given the continent’s reliance on American software and networks to run critical systems.

In Dennis Feltham Jones’s 1966 novel Colossus, the US builds a supercomputer to control its nuclear arsenal. The Soviet Union builds its counterpart. Each system can command missile silos and manage military infrastructure. Designed to eliminate human error, they instead outgrow human control. Colossus is science fiction, but its core anxiety is not. It asks what happens when states outsource existential decisions to systems they do not understand—and when strategic competition accelerates deployment before governance catches up.

For liberal democracies like India, the immediate lesson is to establish clear laws and procurement standards for state use of AI, so that political authority and market innovation operate within defined boundaries instead of colliding in moments of crisis.

Over the longer term, the state must also tolerate ideological divergence from its most advanced firms because frontier innovation depends on the freedom to disagree—a strength the US cultivated for decades and has now put at risk of erosion.

The author is a public policy expert and partner at Koan Advisory Group, New Delhi.
