Summary
Indian high courts have issued clear instructions barring the use of artificial intelligence (AI) in important judicial processes, while allowing its controlled use for low-risk tasks. Policymakers have much to learn from the judiciary’s approach to this technology.
Earlier this month, when the Gujarat high court and the Punjab and Haryana high court barred judicial officers from using artificial intelligence (AI) for drafting judgements or conducting legal research, they set out a clear framework for governing AI in high-stakes settings.
Judges and court staff are prohibited not only from using AI to decide cases, but also from using it to draft orders, evaluate evidence or assist in judicial reasoning.
The Kerala high court took a similar stance in 2025. India’s Supreme Court has also kept AI away from the sphere of legal logic.
These restrictions follow years of digitization under the e-Courts programme. E-filing, video hearings and real-time case access are routine now, with AI layered in for support. AI is used for transcription, translation and flagging defects in filings, but with human oversight. Tools like Suvas, LegRaa and Supace assist research and access, but do not influence judicial outcomes.
Hence, the recent curbs are best understood as a careful line being drawn around AI. They do not reject this technology; they confine it. AI is being embraced as a tool for efficiency, but kept out of decision-making to preserve human agency in matters of justice. After all, this is a domain where errors—especially AI ‘hallucinations’—can be disastrous.
The Gujarat high court’s policy is illustrative for its clarity of demarcation. The evaluation of evidence, writing of rulings and the rationale behind them are treated as non-negotiable human domains. However, administrative tasks such as transcription and case management remain open to tech assistance.
This approach stands in contrast with the government’s tightening grip on the internet. Faced with deepfakes and synthetic media, policymakers are moving from a ‘techno-legal’ stance to enforceable mandates, including labelling requirements.
The intent is understandable, but its application raises practical challenges. Who bears responsibility for labelling—the creator, platform or distributor? How is compliance enforced at scale, especially when content is modified or stripped of identifiers? Courts sidestep these ambiguities by regulating points of decision, not flows of content. Accountability lies with the judge who is responsible for the outcome.
By excluding AI from influencing that outcome, courts preserve due process and institutional legitimacy.
This caution is grounded in the limitations of current AI tools. AI can generate fluent and persuasive outputs but might also fabricate facts. A ‘hallucinated’ addition to case law, a body of law built from court precedent rather than legislation, could imperil justice.
This is not to suggest that the government should replicate the judicial model. Courts have a narrower mandate and operate within a controlled setting, while governments must balance innovation, economic growth, public safety and fundamental rights.
While courts can keep AI out of decision-making, businesses and other operations may find its use worthwhile at various levels. Even so, the judiciary’s approach offers a useful template. It has identified where the risks are most acute and least reversible and sought to curb exactly those.
Yet, rather than stifle AI adoption, it allows the controlled use of this technology in lower-risk areas. Crucially, it anchors all AI use in a clear bedrock of human accountability.
The country’s broad policy challenge is to define where human judgement is an absolute must, and then enforce restraints effectively, rather than rely on rules that may come to exist only on paper.