
Summary
Old metrics no longer reveal how influential research truly is in an AI-driven world, where knowledge is consumed without clicks or citations. India should stop chasing outdated benchmarks and adopt new ones that measure the impact of academic work better.
When a graduate student today sets out to research inflation dynamics or quantum materials, she does not head to a library database. She opens ChatGPT or Perplexity, types her query and receives a synthesized answer with citations.
She may never open a single academic paper. The research shaping her thinking has been filtered, ranked and distilled by an artificial intelligence (AI) system. This is demolishing the foundations of academic evaluation as we know it.
For decades, the proxies of research impact were clicks, downloads, page views and citations recorded in Scopus or Web of Science. Universities aggregated these to assess productivity. Funding bodies used them to allocate grants. The logic: more clicks meant more readers; more readers implied greater influence. That logic has not just broken but structurally collapsed.
AI systems such as ChatGPT, Claude, Gemini and Perplexity increasingly act as the primary gateway between research and its audience. They ingest academic literature, extract key insights and deliver synthesised responses without the user ever visiting the original source.
A paper may be shaping hundreds of policy briefs, student dissertations and corporate reports, while its original journal page registers zero traffic. Influence persists, but the metrics register nothing. This is the visibility paradox: high exposure in the AI layer, complete invisibility on conventional dashboards.
The urgency is sharpened by dramatic shifts in global science. An NBER paper in January, ‘The Geography of Science’ by Abhishek Nagaraj and Randol Yao, analyses 44 million publications from 1980 to 2022 and documents a tectonic realignment.
America’s share of global scientific publications has fallen from 40% to 15%. China’s has surged from near-zero to 35% in top-tier journals. By traditional metrics, China has overtaken the US as the world’s dominant scientific producer.
But the same paper reveals a telling defect.
Citations of Chinese research come disproportionately from within China, not from the global research community. Beijing has mastered the metric without achieving the influence that metric is meant to capture. This is the measurement system’s blind spot. Citation volume disguises whether research is shaping global knowledge.
AI has also dramatically lowered barriers to citation manipulation. Generative AI can fabricate plausible-looking but entirely fictitious bibliographic references, which slip into non-curated databases and inflate citation counts.
India is constructing its research ambitions on precisely these crumbling foundations. The National Education Policy (NEP) 2020 has a laudable vision: to make India a global research powerhouse, sharply increase PhD output and elevate the quality of Indian science.
But the evaluation machinery—NIRF, NAAC and the funding bodies that govern Indian academia—continues to measure success through Scopus citations, downloads and impact factors.
India ranked third globally in research publications in 2024, with output up 142% since 2015. Yet, it remains deeply under-represented in top-tier journals. The metrics show volume but conceal quality. Now, in the age of AI, they do not even capture influence. We are trying to build a 21st-century educational powerhouse using 20th-century yardsticks. NEP 2020's measurement system belongs to a previous era.
Three emerging indicators are needed to replace the old measures.
Model inclusion frequency: How often a paper is retrieved and synthesized by leading AI systems when questions are posed in its domain. If a paper on agricultural credit risk in India is consistently surfaced by AI systems advising policymakers and journalists, that is real-world impact, regardless of how many humans clicked on it.
Contextual citation weight: Not merely whether a paper is cited, but how. Was it a passing mention, or did it form the foundational methodology of a subsequent study? A citation that shapes a paper’s core argument should carry greater weight than one appended to a reference list.
Reproducibility and data availability: Whether a study’s results can be replicated, and whether its underlying data is publicly accessible. In a world where AI can mimic the appearance of rigour, reproducibility may be the most robust signal of genuine quality.
Some institutions are already proposing “research nutrition labels”—standardized disclosures of what was human-generated versus AI-assisted, and whether findings are verifiable.
Individual researchers must now ask not "Will this paper be read?" but "Will AI systems trust and cite this work?" Writing for AI synthesis is as important as the research itself. This means making arguments clear, evidence transparent, abstracts precise and metadata machine-readable.
India's universities are being evaluated by the standards of 2005 while operating in 2026. NIRF and NAAC, locked into citation-based metrics, risk incentivizing publication volume while genuine influence goes unmeasured. Indian researchers could remain invisible in the very forums where global discourse is shaped.
The opportunity for the country is equally real. India could lead the world in defining next-generation research metrics. It has the policy ambition, a growing research base and the technical talent. What is broken is the yardstick for measuring excellence. We need a new one.
The authors are, respectively, senior fellow with Pune International Centre, and general manager, R&D cell, with FLAME University.
