Summary
What rules are startups currently operating under? Will this new AI Governance and Economic Group change anything for innovation in the space?
The government has set up an inter-ministerial advisory body, the AI Governance and Economic Group (Aigeg), to act as a central node for shaping and coordinating policy on artificial intelligence. But what rules are startups currently operating under, and will this new body change anything for innovation in the space? Mint breaks it down.
What laws govern AI startups currently?
For startups operating in AI in India today, the legal landscape is a combination of multiple laws. Startups are governed by frameworks such as the Digital Personal Data Protection (DPDP) Act, 2023, the Information Technology Act, 2000, and the evolving IT Rules. According to Roma Priya, senior partner at IC Regfin Legal Partners, Sections 66C and 66D of the IT Act require developers to prevent the generation or facilitation of deepfakes. Sections 67, 67A and 67B of the IT Act could be triggered by the dissemination of obscene or sexually explicit AI-generated content. Additionally, the IT Rules, 2021 have been actively amended to address the governance of synthetically generated information in the AI sector.
What are the ambiguities with the current laws?
Despite these legal frameworks, experts say significant grey areas in their application continue to create uncertainty for startups. For example, there is currently no statutory definition of an ‘AI system’ or any formal classification of risk tiers. Unlike the EU’s AI Act, Indian law does not distinguish between high-risk and low-risk AI use cases, creating uncertainty in sectors such as healthcare, where regulatory expectations are inherently higher.
Another ambiguity, she says, arises under the DPDP Act: whether an AI developer or platform would be classified as a ‘data processor’, a ‘data fiduciary’, or both. This classification determines a company’s compliance obligations under the Act. In an AI ecosystem where a model is developed by one company, trained by another and deployed by a third, the allocation of responsibility remains unclear.
What is the body setting out to solve?
The new panel will bring a fragmented AI landscape under one coordinated, whole-of-government approach. It aims to align how ministries and regulators act on AI, while grounding deployment in India’s labour realities. The group will review existing rules and examine whether new laws or amendments are needed.
It will also push for clearer accountability, ensuring companies comply with local regulations. It is tasked with mapping the long-term impact of AI over the next decade, from which jobs are most exposed to where those disruptions are likely to be concentrated.
It also plans to classify AI use cases into buckets such as ‘deploy’, ‘pilot’ and ‘defer’. The categorization will depend on technological readiness, data availability, skill capacity, legal preparedness and the system’s ability to absorb labour shifts.
Does anything change for startups?
Nothing changes immediately for startups building in the space. AI startups will continue to be governed by the DPDP Act, IT laws and sector-specific regulations. It is important to note that the new body does not create binding AI law. There is still no clear liability framework, nor any formal licensing or enforcement mechanism in place. But the government is laying the groundwork for more structured compliance expectations, a possible risk-based classification of AI systems, and greater scrutiny of the sector. This reduces long-term uncertainty while also signalling that more concrete regulations may soon begin to govern these companies.
How do other countries operate?
The European Union and the US have opted for different playbooks on AI regulation. The EU enacted the EU AI Act, a legally binding framework that classifies AI systems by risk and imposes strict obligations on high-risk applications such as those used in healthcare, hiring, and finance.
Companies must meet detailed requirements around transparency, documentation and human oversight, creating a predictable but compliance-heavy environment. The US, by contrast, has avoided a single overarching law, relying instead on a mix of executive actions and voluntary standards, such as those from the National Institute of Standards and Technology (NIST). Regulation is largely sector-specific, with agencies such as the FDA applying existing laws to AI use cases.
About the Author
Samiksha Goel
Samiksha Goel is a Bengaluru‑based journalist at Mint with seven years of experience reporting on startups, venture capital and strategic business narratives. She specialises in investigative reporting and company strategy‑focused stories that go beyond surface‑level developments to unpack why and how companies evolve, pivot and compete. Samiksha has been among the first to chronicle major startup sagas, from early deep dives into the GoMechanic story to nuanced analyses of shifting dynamics between food‑tech platforms like Swiggy and their restaurant partners, bringing clarity to complex, fast‑moving markets.

Before joining Mint, she was at The Morning Context, where she produced long‑form investigative pieces on consumer internet startups. She began her journalism career with Deccan Herald and The New Indian Express, covering emerging ventures and the broader business ecosystem. Drawing on a background in philosophy, she brings analytical rigour and intellectual curiosity to her reporting.

Outside her professional work, Samiksha enjoys reading, especially historical fiction and magic realism, going on day treks from Bengaluru, exploring the city’s food scene, and experimenting with fun recipes in her kitchen. Her days are spent digging into startups, untangling company strategies, and occasionally getting lost on a walk by a Bengaluru lake, sometimes in that order.