Anthropic’s warning raises urgent questions about the use of AI for mass surveillance by the state


Summary

Anthropic’s clash with the Pentagon has brought into focus a troubling prospect: AI could help the state stitch together scattered digital traces and surveil citizens at scale. This power raises questions about privacy, freedom and the future of dissent in every democracy.

Late last month, the US Department of War labelled Anthropic, creator of the artificial intelligence (AI) system Claude, "a supply chain risk," and began taking steps to remove it from all use in the department, including by contractors and subcontractors across its roughly $1 trillion in annual spending.

There are news reports that no company that does work with the Pentagon will be allowed to engage in “commercial activity” with Anthropic. President Donald Trump called Anthropic a “radical left, woke company” that he “fired… like dogs.”

OpenAI, creator of ChatGPT, the leading system in the AI race, swooped in within a day. It is Anthropic’s main rival, and both companies are far ahead of challengers like Alphabet, DeepSeek, Meta and Microsoft. Anthropic is estimated to lose about $200 million in direct contracts. Its preference for the public good over private gain is widely viewed as a heroic, almost quixotic, act.

Why did matters come to a head? In a public statement, Dario Amodei, CEO and co-founder of Anthropic, said it is because Anthropic demanded two safeguards on the use of its product. One, that it should not be used for mass surveillance against Americans. Two, that it should not be deployed for fully autonomous weapons. Amodei’s own words are worth quoting at length because they summarize the argument best.

First: “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties… Under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant… Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.” Today’s column focuses on this issue.

Second: “Fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons… without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that [trained professionals] exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.”

This is a frightening prospect, especially given that makers of AI systems themselves do not understand how their systems make choices. This subject deserves a separate discussion.

The issue of mass surveillance and erosion of individual privacy by the state is one that has become increasingly urgent, almost in lock-step with the digitalization of information.

The discourse is distressingly familiar. According to the state, there are rising threats from terrorists, bad actors, rogue nations, hostile governments and “the enemy within.” To keep its ‘good’ citizens safe from these threats, the state asks (in not so many words) for a tradeoff: give up some individual rights (especially on privacy) to get more security.

Critical security theorists call this approach ‘securitization,’ implying that security threats are socially constructed. The state creates narratives of fear (“only we can save you”) that are mostly bogus to enlarge its power to jail opponents or dissidents and curb civil liberties.

The so-called ‘war on terror’ in the US after the 9/11 attacks led to ‘enhanced interrogation’ of unlawfully detained individuals, the Patriot Act and large-scale warrantless wiretapping by units like the National Security Agency to ostensibly identify ‘terrorists’ among the populace.

In India, digital tools have already massively expanded the state’s armoury of surveillance and control. Some attempts have failed, such as the ham-handed effort to mandate a state-run cybersecurity app (Sanchar Saathi) on phones. Others have been scandalous, such as the alleged use of Pegasus spyware to surveil journalists and political opponents. Still other, ongoing efforts have been far more successful, especially the use of the Aadhaar system for large-scale data collection.

Yet many other efforts are under-recognized, often covert and almost always without public oversight. Laws such as the Telecommunications Act and the Digital Personal Data Protection Act (both from 2023) are meant to safeguard the interests of data users.

Perhaps they do, but they also empower the state over individual privacy in the guise of security. The state uses its Central Monitoring System to monitor communication across mobile, landline and internet platforms, and Network Traffic Analysis to surveil emails, social media posts and Voice over Internet Protocol calls, apart from facial recognition technologies and drone surveillance in public spaces.

Amodei warns that AI can be used to stitch this “scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.”

This is not a sci-fi fantasy. The Chinese state is already using AI to systematize data, a project fed by 700 million cameras, biometrics, facial recognition, voice identification, ID cards, WeChat, Alipay, e-commerce, medical records, hotel stays and so on. But that is China, a regime like no other, governing a citizenry that has never had individual freedom of the sort that India, the US and most of Europe are justifiably proud of.

The Indian state, however, has a relatively dubious record on handling dissidence. A sedition law designed by the colonial state was used to jail figures such as Mahatma Gandhi and Bal Gangadhar Tilak. The Bharatiya Nyaya Sanhita retains a provision to punish sedition (even though the word has been dropped), while the Unlawful Activities (Prevention) Act continues to be used to jail activists; think of the cases of Umar Khalid, Disha Ravi and Kanhaiya Kumar.

The use of AI for mass surveillance is a temptation for all ‘securitized’ states, India included. How far the Indian state has already gone in applying AI to its massive data collection, and how much further it will go, are unknown. It may be necessary to begin thinking now about ways for civil society to monitor the Indian state’s use of AI (itself improving by leaps and bounds) as it adds unprecedented new power to surveil the thoughts and behaviors of citizens.

The author is a professor of geography, environment and urban studies and director of global studies at Temple University.
