Journal of Scientometric Research, 2019, 8, 2, 72-78.
DOI: 10.5530/jscires.8.2.12
Published: August 2019
Type: Research Article
Aparna Basu, Deepika Malhotra, Taniya Seth, Pranab Kumar Muhuri*
Department of Computer Science, South Asian University, Akbar Bhawan, Chanakyapuri, New Delhi, INDIA.
Abstract:
Most currently available schemes for performance-based ranking of universities and research organizations, such as Quacquarelli Symonds (QS), Times Higher Education (THE) and the Shanghai-based Academic Ranking of World Universities (ARWU), use a variety of criteria that include productivity, citations, awards and reputation, while Leiden and SCImago use only bibliometric indicators. Research performance evaluation in the aforesaid cases is based on bibliometric data from Web of Science or Scopus, which are commercially available priced databases whose coverage includes peer-reviewed journals and conference proceedings. Google Scholar (GS), on the other hand, provides a free and open alternative for obtaining citations to papers available on the web, though it is not clear exactly which journals are covered. Citations are collected automatically from the web and are also added to self-created individual author profiles under Google Scholar Citations (GSC). This data was used by the Cybermetrics Lab, Spain, to create a ranked list of 4000+ institutions in 2016, based on citations from only the top 10 individual GSC profiles in each organization (the top profile is excluded for reasons explained in the text; the simple selection procedure makes the ranked list size-independent, as claimed by the Cybermetrics Lab). Using this data (Transparent Ranking, TR 2016), we find the regional and country-wise distribution of GS-TR citations. The size-independent ranked list is subdivided into deciles of 400 institutions each, and the number of institutions and the citations of each country are obtained for each decile. We test for correlation between institutional ranks in GS-TR and the other ranking schemes for the top 20 institutions. Finally, we discuss our results in the context of questions such as: (1) Is it necessary to have one more global ranking scheme? (2) What are the likely benefits of the GS size-independent formulation? (3) What are the likely sources of error? (4) Can a truncated sample, as in GS, give a representative ranking acceptable at the global level?
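The two computations summarized in the abstract, splitting the ranked list into deciles of 400 institutions with per-country tallies, and a rank correlation between two ranking schemes for the top 20 institutions, can be sketched as follows. The data and field names here are hypothetical stand-ins, not the paper's actual GS-TR 2016 records; Spearman's rho is used as a plausible choice of rank-correlation measure.

```python
# Hypothetical stand-in for the 4000+ Transparent Ranking records:
# (institution, country, citations), already sorted by citations in
# descending order. Countries and citation counts are invented.
records = [(f"Inst{i}", ["US", "CN", "IN", "UK"][i % 4], 5000 - i)
           for i in range(4000)]

def decile_tallies(records, decile_size=400):
    """Split the ranked list into deciles of `decile_size` institutions and
    tally the number of institutions and total citations per country in
    each decile, as described in the abstract."""
    tallies = {}  # (decile, country) -> [n_institutions, total_citations]
    for rank0, (_, country, cites) in enumerate(records):
        key = (rank0 // decile_size + 1, country)
        entry = tallies.setdefault(key, [0, 0])
        entry[0] += 1        # count of institutions
        entry[1] += cites    # sum of citations
    return tallies

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two rank lists without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

tallies = decile_tallies(records)
# e.g. compare GS-TR ranks 1..20 against another scheme's ranks for the
# same 20 institutions (placeholder values here):
rho = spearman_rho(list(range(1, 21)), list(range(1, 21)))
```

With real data, the second argument to `spearman_rho` would be the QS, THE or ARWU rank of each of the top 20 GS-TR institutions.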