ABSTRACT
India stands at a critical juncture in its research and innovation trajectory. With growing ambitions to become a global knowledge economy, the way research is assessed in Indian institutions warrants serious introspection. Globally, the research community is increasingly adopting the principles of Responsible Research Assessment (RRA) to ensure that evaluation mechanisms foster quality, equity, integrity, and societal relevance. The Indian research ecosystem, however, remains tethered to outdated metrics and bureaucratic inertia. This opinion paper explores whether India is truly ready for reform in research assessment and what systemic shifts are needed to realize a more responsible, context-sensitive, and robust framework.
THE CURRENT CRISIS
Research assessment emerged as a structured mechanism to systematically evaluate the quality, impact, and progression of scientific work, thereby enabling informed decision-making in science policy and institutional development.[1,2] Historically rooted in disciplinary peer judgment, research evaluation practices evolved into structured mechanisms designed to ensure accountability, uphold quality standards, and guide strategic decisions concerning resource allocation, institutional visibility, and policy legitimacy.[3,4] With the advancement of open science, Responsible Research and Innovation (RRI), and participatory models such as citizen science, the contemporary scholarly landscape increasingly demands evaluative frameworks that are not only rigorous but also socially responsible, inclusive, and aligned with sustainability goals.
Despite these normative shifts, the global proliferation of metric-based evaluation (“metric tide”) has provoked a widespread crisis of confidence. Policymakers, academic recruiters, and funding agencies across national contexts now grapple with the challenge of selecting appropriate, context-sensitive, and multidimensional assessment mechanisms that can capture the breadth of research contributions beyond conventional bibliometrics. The problem is particularly acute in India, where an entrenched culture of metric fetishism has institutionalized overreliance on simplistic quantitative indicators such as Impact Factor (IF), h-index, aggregate citation counts, and publication volume.[5–7]
This metric-centric paradigm dominates decision-making processes in major funding and regulatory agencies, including the University Grants Commission (UGC), Indian Council of Medical Research (ICMR), and Department of Science and Technology (DST)—despite sustained critique regarding the validity and contextual appropriateness of these metrics.[7,8] The consequences are far-reaching. The prioritization of quantity over quality has incentivized hyper-competitive academic environments, accelerated the rise of predatory publishing, entrenched a “publish-or-perish” ethos, and fostered unethical practices such as excessive self-citation, citation gaming, strategic collaboration, and questionable research conduct. These tendencies collectively threaten the transparency, integrity, and societal value of academic research in India.[9,10]
Despite repeated warnings from global scientometricians and Indian scholars, agencies continue to prioritize publication volume over societal impact, research rigor, and innovation in decisions related to recruitment, promotion, and funding. Recent studies[11–13] underscore the systemic nature of the crisis: they trace India’s transition from qualitative peer review to a metric-dependent research assessment system and warn that excessive reliance on rankings and bibliometric indicators has distorted research priorities, fostered poor-quality output, and undermined meaningful academic contribution. The overreliance on publication-based metrics is thus contributing to declining research quality, rising retraction rates, and a broader erosion of academic standards.
Contrasting international experiences offer viable alternatives. A recent working paper by the Research on Research Institute (RoRI, 2025) delineates a typology of global research assessment reforms, demonstrating how countries such as the Netherlands, the United Kingdom, Norway, France, Sweden, and Finland have adopted reflexive, pluralistic, and inclusive models of evaluation.[14] These systems acknowledge the diversity of research outputs and disciplinary contexts, incorporating responsible metrics, narrative CVs, environment statements, and community-driven standards; the countries concerned are actively reforming their national assessment regimes to reduce metric dependency and promote responsible, equitable, and context-sensitive evaluation practices aligned with research integrity and diversity. In contrast, India remains entrenched in a traditional audit-oriented culture dominated by reductionist indicators. Although frameworks such as the National Institutional Ranking Framework (NIRF) represent localized adaptations of global ranking models, they continue to rely heavily on bibliometric indicators, with insufficient emphasis on disciplinary heterogeneity, equity, interdisciplinarity, or societal relevance.[15–17]
India’s academic governance bodies need to recognize that metrics are tools or instruments, not substitutes for scholarly excellence. Analogous to assessing individual health solely through body mass index, the use of citation counts and impact factors as surrogate measures of merit is expedient but fundamentally flawed. In an era of epistemic plurality and knowledge democratization, responsible research assessment demands contextual adaptation, transparency, and a commitment to values beyond numerical performance. India stands at a critical juncture: to either perpetuate an outdated model of metric-dominated evaluation or to align with the global momentum toward more reflexive, inclusive, and integrity-based research assessment practices.
Emerging Voices and Institutional Responses
There is, however, a growing recognition of the need for reform. The San Francisco Declaration on Research Assessment (DORA) and the Leiden Manifesto have been widely endorsed globally, yet Indian institutions have been slow to adopt them meaningfully. Exceptions include progressive centers such as the National Centre for Biological Sciences (NCBS) and the Indian Institute of Science (IISc), where peer review and qualitative judgment still play a pivotal role. Koley (2025)[6] has emphasized the intersection of responsible assessment and open science, advocating for multi-stakeholder dialogues and contextual metrics that recognize diversity in research practices.
Echoing these concerns, several studies[6,13,18] have criticized the blunt use of citation metrics and advocated for more robust recognition of originality, peer insight, and societal contribution in research evaluation. The Indian National Science Academy (INSA) presents a comprehensive critique of India’s current research assessment practices and proposes a strategic policy framework to promote quality-driven, ethical, and context-sensitive research dissemination and evaluation mechanisms.[11] Bhattacharjee and Koley (2022) conducted a critical examination of India’s current research assessment ecosystem, identifying systemic deficiencies such as metric overdependency, inadequate transparency, and weak contextual sensitivity.[13] Building on this discourse, Pal and Kar (2025)[19] proposed a rational, evidence-based framework for assessing social science research that accounts for its epistemological and methodological divergence from STEM disciplines.
Complementing this academic leadership, the DST Centre for Policy Research (DST-CPR), Indian Institute of Science (IISc), received the 2022 DORA Community Engagement Grant to stimulate national conversations on reforming India’s research evaluation ecosystem. As part of this initiative, a DORA-hosted online panel discussion titled “Implementing Responsible Research Assessment in India” was held on June 6, 2025, featuring eminent speakers from India and across the globe. The session aimed to foster critical discussion of the principles and implementation of Responsible Research Assessment (RRA) within the Indian context and emphasized the urgency of establishing transparent, fair, and adaptable evaluation systems. The panelists highlighted the structural flaws in India’s assessment culture, attributing overreliance on metrics to ethical lapses within academic committees and arguing that uniform evaluation models fail to account for institutional diversity. They also criticized the metric-driven pressure on researchers, particularly in resource-constrained environments, and called for qualitative evaluation of selected works alongside systemic reforms in recruitment and promotion processes. Michael Arentoft outlined Europe’s policy-led transition toward responsible assessment, emphasizing the CoARA framework’s rejection of journal metrics, support for diverse outputs, and commitment to alignment across funding and institutional systems. Overall, the panel collectively reflected on existing evaluative paradigms, identified systemic challenges, and proposed actionable strategies to align assessment practices with the values of equity, research integrity, and academic inclusivity.
Additionally, institutional voices have begun to surface from within India’s funding ecosystem. On April 21, 2022, the DST organized a national workshop[20] to critically examine the existing research assessment frameworks used by Indian research funding agencies in the evaluation of research projects. The workshop emphasized the inadequacy of one-size-fits-all, metric-heavy frameworks and advocated for responsible, value-driven assessment aligned with national goals and Sustainable Development Goals (SDGs). Using the SCOPE framework developed by INORMS, participants called for context-sensitive models that incorporate qualitative indicators such as ethical conduct, innovation potential, translational impact, and equity considerations.
The theme of RRA has also gained traction in leading academic conferences. The Conclave on “Policy Deliberations for Strengthening South–South Cooperation” served as a prelude to the Science Summit at the 79th United Nations General Assembly (New Delhi, September 11, 2024), positioning responsible governance of research and innovation as a central theme and emphasizing the ethical, inclusive, and collaborative dimensions of research evaluation across the Global South.[21] Through its technical sessions on Responsible Governance for Research and Innovation, Diversity, Equity and Inclusion in Science, and Funding Mechanisms and Capacity-Building for R&D Cooperation, the Conclave underscored the importance of contextualized research assessment frameworks that integrate gender equity (SDG 5), open science, and equitable partnerships (SDG 17). As part of India’s institutional and academic efforts to engage researchers and sensitize policymakers, the event reflected a growing policy advocacy movement within the Global South, with active participation from several UN bodies and a large number of South–South countries. These collaborations highlight India’s emerging role in bringing international attention to responsible research assessment as a key component of global STI governance. Moreover, building on this momentum, the STIiG-2025 International Conference (CSIR-NIScPR, January 2025) advanced national dialogues on refining STI indicators for ethical and performance-based R&D governance.[22] Subsequently, the 24th COLLNET Meeting and 19th International Conference on Webometrics, Informetrics, and Scientometrics (September 19–21, 2025) further included fair and responsible evaluation methods as a key thematic area.
These emerging initiatives signal a gradual yet significant shift toward responsible research assessment in India. While challenges remain, the growing engagement of academic institutions, funding agencies and policy forums reflects a collective readiness to move beyond metric-centric models toward more inclusive, contextual, and integrity-driven evaluation practices.
Policy Pathways and Systemic Recommendations
Institutions and funding bodies should adopt a diverse set of indicators, rather than relying on a one-size-fits-all model. These should include a balanced mix of qualitative peer review, societal impact narratives, policy influence, interdisciplinary engagement, methodological rigor, and adherence to open science practices to holistically evaluate the multifaceted nature of research performance. Moreover, integrating principles of Scientific Social Responsibility (SSR) further strengthens this approach by recognizing researchers’ contributions to community engagement, ethical conduct, knowledge equity, and the broader public value of science, thereby enabling a more comprehensive and context-sensitive assessment framework.
As highlighted in the RoRI and RESSH 2025 conference proceedings, data stewardship and the interoperability of research information systems are essential for building a robust and transparent research assessment infrastructure. In this context, India needs to advance a national Persistent Identifier (PID) strategy and a PID Graph that interconnects all research entities (researchers, institutions, projects, datasets, publications, and other research outputs) through globally recognized identifiers such as ORCID, DOI, ROR, and GrantID. Such a PID-enabled infrastructure would enhance data integrity, traceability, and interoperability across systems, enabling machine-actionable and transparent assessment workflows.[23]
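To make the PID Graph idea concrete, the following is a minimal, purely illustrative sketch in Python: typed nodes keyed by persistent identifiers and typed edges expressing relations between research entities. All identifiers, entity types, and relation names here are hypothetical examples for illustration, not real ORCID, ROR, DOI, or grant records, and this is not a description of any actual national system.

```python
from collections import defaultdict

class PIDGraph:
    """A toy PID Graph: entities keyed by persistent identifiers,
    linked by typed relations (illustrative sketch only)."""

    def __init__(self):
        self.nodes = {}                  # pid -> entity type
        self.edges = defaultdict(list)   # pid -> [(relation, target pid)]

    def add_node(self, pid, entity_type):
        self.nodes[pid] = entity_type

    def add_edge(self, source, relation, target):
        # Both endpoints must already be registered, which is what
        # gives a PID infrastructure its integrity guarantees.
        if source not in self.nodes or target not in self.nodes:
            raise KeyError("both PIDs must be registered nodes")
        self.edges[source].append((relation, target))

    def related(self, pid, relation):
        """Return all PIDs linked from `pid` by the given relation."""
        return [t for r, t in self.edges[pid] if r == relation]

# Hypothetical records: a researcher (ORCID), an institution (ROR),
# a funded project (GrantID), and a resulting publication (DOI).
g = PIDGraph()
g.add_node("orcid:0000-0000-0000-0001", "researcher")
g.add_node("ror:00x0x0x00", "institution")
g.add_node("grant:IN-2025-0042", "project")
g.add_node("doi:10.0000/example.2025.1", "publication")

g.add_edge("orcid:0000-0000-0000-0001", "affiliated_with", "ror:00x0x0x00")
g.add_edge("orcid:0000-0000-0000-0001", "authored", "doi:10.0000/example.2025.1")
g.add_edge("grant:IN-2025-0042", "funded", "doi:10.0000/example.2025.1")

# A machine-actionable query: which outputs did this researcher author?
print(g.related("orcid:0000-0000-0000-0001", "authored"))
```

Because every entity carries a globally unique identifier, queries such as “which outputs did this grant fund?” become simple graph traversals rather than error-prone name matching, which is the interoperability benefit the text describes.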
Alongside the PID Graph, India must prioritize the development and strengthening of institutional repositories, national-level Open Research Information (ORI) systems, and innovation-enabling bibliographic databases that are inclusive, flexible, transparent, and governed by the research community.[23] These infrastructures should support a wide range of scholarly outputs, promote open scholarly practices, and foster experimentation in responsible research communication and assessment.
Global frameworks such as the San Francisco Declaration on Research Assessment (DORA) and the Hong Kong Principles must be interpreted and implemented within the Indian context, taking into account the country’s linguistic, social, and disciplinary diversity, and its distinctive research ecosystem.
Research assessment must shift from rigid compliance checklists to a culture of reflective self-evaluation and continuous improvement, fostering a learning-oriented environment for institutions and researchers alike.
Evaluative frameworks must include dimensions such as social responsibility, research integrity, open science practices, and broader societal impact, extending beyond traditional academic metrics and citation-based indicators.
The NIRF should incorporate qualitative dimensions by integrating independent data validation mechanisms, contextual peer evaluation, FAIR-compliant data, and transparency measures to reduce overreliance on self-reported metrics and enhance trust in institutional assessment.
Academic evaluators and bibliometricians must exercise caution when conducting research performance assessments across levels (be it individual researchers, disciplines, institutions, or national systems). They should clearly understand the purpose of evaluation and adopt context-sensitive parameters, data sources, and metrics that are suited to capturing the depth, breadth, and multidimensional character of research. Responsible use of metrics requires methodological awareness and alignment with the values of fairness, transparency, and research quality.
CONCLUSION
India’s research ecosystem is vibrant, but its evaluation regime remains mired in bureaucratic metrics and outdated paradigms. The future of Indian research hinges not just on how much we produce, but on how wisely we assess its quality and impact. If the nation seeks to truly maximize its scientific potential, it must adopt responsible research assessment not as a policy accessory but as a foundational reform. The road ahead demands courage, collaboration, and a principled commitment to embedding quality, societal relevance, economic impact, and responsibility at the heart of all scientific endeavors. While the ecosystem may not yet be fully prepared for this transformation, a growing body of scholarly engagement, institutional experimentation, and policy discourse signals readiness. The question is no longer whether India needs reform, but how swiftly and sincerely it will respond. As scholars and policymakers, we must rise to this moment with evidence, empathy, and ethical conviction to reshape India’s research evaluation paradigm for the future.
Cite this article:
Kar S. Rethinking Research Evaluation in India: Is the Ecosystem Ready for a Responsible Reform Agenda?. J Scientometric Res. 2025;14(3):x-x.
References
- de Solla Price DJ. Little science, big science. 1963.
- Garfield E. Citation indexing: its theory and application in science, technology, and humanities. 1979.
- Reconfiguring knowledge production: changing authority relationships in the sciences and their consequences for intellectual innovation. 2010:51-80.
- Moed HF. Citation analysis in research evaluation. 2005. [CrossRef]
- Balaram P. Citations, impact indices and the fabric of science. Curr Sci. 2010;99:857-9. [CrossRef] | [Google Scholar]
- Balaram P. Research assessment: declaring war on the impact factor. Curr Sci. 2013;104:1267-8. [CrossRef] | [Google Scholar]
- Madhan M, Gunasekaran S, Arunachalam S. Evaluation of research in India: are we doing it right? Indian J Med Ethics. 2018;3:221-9. [CrossRef] | [Google Scholar]
- Koley M. Academic excellence or metric obsession? A look at Indian research assessment. Panel presentation at: DORA Online Panel Discussion: Implementing Responsible Research Assessment in India; 2025 Jun 6. [CrossRef] | [Google Scholar]
- Seethapathy GS, Kumar JUS, Hareesha AS. India’s scientific publication in predatory journals: need for regulating quality of Indian science and education. Curr Sci. 2016;111(11):1759-64. [CrossRef] | [Google Scholar]
- Lakhotia SC. Predatory journals and academic pollution. Curr Sci. 2015;108(8):1407-8. [CrossRef] | [Google Scholar]
- Chaddah P, Lakhotia SC. A policy statement on “dissemination and evaluation of research output in India” by the Indian National Science Academy (New Delhi). Proc Indian Natl Sci Acad. 2018;84(2):319-29. [CrossRef] | [Google Scholar]
- Bhattacharjee S. Does the way India evaluates its research doing its job? The Wire Science. 2022. [CrossRef] | [Google Scholar]
- Bhattacharjee S, Koley M. Research assessment in India: what should stay, what could be better? 2022. [CrossRef] | [Google Scholar]
- Rushforth A, Sivertsen G, Bin A, Firth C, Fraser C, Gogadze N, et al. A new typology of national research assessment systems: continuity and change in 13 countries. RoRI Working Paper No. 15. 2025. [CrossRef] | [Google Scholar]
- Lakhotia SC. Societal responsibilities and research publications. Proc Indian Natl Sci Acad. 2014;80(5):913-4. [CrossRef] | [Google Scholar]
- Scientific scholarly communication: the changing landscape. 2017:117-32. [CrossRef] | [Google Scholar]
- Koley M, Lawrence R, Lakhotia SC, VijayRaghavan K. Fair and responsible research evaluation in India. Curr Sci. 2025;129(4):297-8. [CrossRef] | [Google Scholar]
- Zare RN. Assessing academic researchers. Angew Chem Int Ed. 2012;51(30):7338-9. [CrossRef] | [Google Scholar]
- Information infrastructure of social science research in India. 2025. [CrossRef] | [Google Scholar]
- Bhattacharjee S, Koley M, Bharadwaj J. Workshop on research assessment practices in Indian funding agencies. J Sci Policy Gov. 2023;22(1). [CrossRef] | [Google Scholar]
- Aggarwal R, Bhattacharya S, Kumar N, Nishad SN. 2024. [CrossRef] | [Google Scholar]
- Press Information Bureau. CSIR-NIScPR’s 4th Foundation Day & “STIiG-2025” International Conference kicks off to refine R&D indicators [press release]. 2025. [CrossRef] | [Google Scholar]
- Kar S. Developing a framework for an open research information (ORI) system in India to enable responsible research assessment (RRA) practices. Unpublished manuscript; 2025. [CrossRef] | [Google Scholar]

