
Summary
Declarations and lofty rhetoric masked a stark reality at New Delhi’s AI summit: the US and China won’t slow down and tech firms won’t restrain themselves. With risks mounting, can middle powers like India and Canada unite on AI safety rules?
Last week’s AI Impact Summit ended the way these gatherings routinely do: this time with a ‘New Delhi Declaration,’ a non-binding hymn to cooperation and the hope that “AI could be made to serve humanity.” It’s the sort of empty language that dozens of countries and international organizations can sign up to without changing a thing.
The most revealing statement came from the industry. Hours before the declaration, OpenAI CEO Sam Altman offered a bit of moral arithmetic in an interview with the Indian Express. “People talk about how much energy it takes to train an AI model,” he said, “but it also takes a lot of energy to train a human. It takes, like, 20 years of life, and all of the food you eat during that time before you get smart.”
Altman likely meant it as a quip. It landed, however, as a sobering reminder that the people steering the AI race are starting to talk about raising children the way they talk about training machines. So much for human-centred AI.
New Delhi should have been a turning point for middle powers, from India and Brazil to Canada. Instead, it showcased the deadlock that has come to define global AI governance. AI superpowers won’t meaningfully restrain themselves, AI companies won’t elect to slow down and everyone else is signing empty statements while being propelled by a fear of missing out.
The drift is apparent in the meetings themselves. The first, at Bletchley Park in 2023, was branded as an AI ‘Safety’ summit. That word was dropped from the title in Seoul’s ‘AI Summit.’ The theme then shifted to ‘Action’ in Paris and ‘Impact’ in New Delhi. The word that started the series has been edited out. India this year got frontier firms to sign on to broad commitments to study the impact of AI, but even these are voluntary.
Middle powers, meanwhile, can’t wait for Washington or Beijing to take the reins. This year alone, American tech giants are expected to collectively invest some $650 billion in AI. Such astronomical spending accelerates deployment, but it also distorts incentives away from safety and towards recouping a return. And with so much of the US economy now riding the tech boom, the White House has little appetite for rules that might slow it down.
China has its own safety labs and voluntary commitments from companies. But the government leaves scant room for plurality of opinions or public debate about risk—especially if it collides with President Xi Jinping’s ambition to lead the world in technology. Safety leadership is unlikely to emerge from Washington or Beijing.
At the same time, the harms are already piling up. Women and girls are digitally undressed, cyber attackers exploit new tools at scale and reports link teen suicides with the use of chatbots. AI systems are becoming exponentially more powerful, and the rush for agents only encourages humans to cede more control to machines, fueling fears of existential risk.
In that geopolitical race, hopes for meaningful US-China collaboration on safety are increasingly a fantasy. As was hinted at in Davos, each side can use the other’s acceleration as an alibi for why it can’t slow down even if it wants to. It’s why middle powers matter more than ever. India hosted this year’s gathering explicitly to position itself as a bridge between the rest of the world and the US-China rivalry.
During a side event on safety, computer scientist and ‘AI Godfather’ Yoshua Bengio said that it’s ultimately up to these governments to unite and break the superpower deadlock before AI concentrates power. Courting favour from Washington or Beijing in a bid to get ahead is a self-defeating strategy that cements dependence, not sovereignty—let alone safety.
A middle-power coalition needn’t beat the US or China on frontier AI. It just needs to make access to markets of billions, as well as their schools, hospitals, courts and power grids, conditional on measurable safety commitments. They can start with near-term essentials: disclosures of the data that goes into these tools and the energy use needed for training and running models. Mandate standardized, independent safety evaluations before deployment in sensitive domains like policing or politics. Insist on incident reporting and public transparency around model failures and risks.
The easiest thing policymakers can do right now, Bengio warned, is listen to the voices that make them feel good—which overwhelmingly belong to those selling the technology. But organized backlash is growing, uniting people across identities and political lines. “Governments won’t do anything until the general population wakes up,” he said.
Delhi’s traffic gridlocks last week became a metaphor for the global AI safety debate: We keep convening, everyone is trying to get ahead and nothing moves. Declarations don’t protect, rules do. ©Bloomberg
The author is a Bloomberg Opinion columnist covering Asia tech.
