Several commenters have raised serious questions about the ranking of Indian universities proposed by Gangan Prathap and B.M. Gupta in their article in Current Science. More specifically, the authors' use of SCOPUS as the source of their data has come in for criticism, because SCOPUS's coverage of the humanities and social sciences is not as extensive as its coverage of the sciences.
Now, I have very little experience with SCOPUS, so I am not able to comment directly on this specific issue.
But I do have some things to say on this topic.
In general, ranking universities is not a good idea -- especially if the exercise uses research output as the sole criterion. Universities have another important mission: education. The commitment to teaching and the quality of teaching should matter too. So should the number of graduates.
Even if we accept a research-focused ranking exercise, there is a difficulty: certain fields are better represented in databases than others, and institutions with a larger presence in those fields enjoy an advantage. In the work of Prathap and Gupta, the choice of SCOPUS appears to privilege the sciences over the humanities and social sciences.
Even within sciences, there are significant differences between fields. For the same number of faculty, a department of mathematics is likely to produce far fewer publications than, say, a department of biomedical science. The number of citations per paper, too, tends to be lower in mathematics than in biomedical science.
Then, there is computer science, where conference proceedings, rather than journals, are the preferred destination for research publications.
Bottom line: relying on a single source of data is a bad way of measuring the research effectiveness of whole institutions. Rankings based on this kind of analysis are misleading. In the hands of a clueless bureaucrat looking for a 'rational' funding methodology, such bogus quantitative data may even be dangerous!
Clearly, the above criticism points to a solution: rate the research performance of individual departments, as my colleagues M. Giridhar and J. Modak have done for chemical engineering. Even though publishing and citation traditions may differ across the subfields of chemical engineering, it is far fairer to compare departments than to compare universities.
Ratings for each university can then be computed from the scores received by its constituent departments. Needless to say, this is a far more difficult and intensive task. Ideally, each department should carry out such an exercise, and each university should display the results on its website.
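As a rough illustration of how such a roll-up could work, here is a minimal sketch. The faculty-weighted averaging scheme, the department names and the scores are all my own assumptions for illustration, not anything proposed by Prathap and Gupta or by Giridhar and Modak:

```python
# Sketch: roll up hypothetical department-level scores into a single
# university-level rating, weighting each department by its faculty strength.

# Hypothetical inputs: (department, per-faculty score, number of faculty).
departments = [
    ("Mathematics", 1.8, 25),
    ("Chemical Engineering", 4.2, 30),
    ("Biomedical Science", 7.5, 40),
]

def university_rating(depts):
    """Faculty-weighted average of per-faculty department scores."""
    total_faculty = sum(n for _, _, n in depts)
    return sum(score * n for _, score, n in depts) / total_faculty

print(f"University rating: {university_rating(departments):.2f}")
```

The weighting is just one possible choice; any aggregation of this sort still inherits whatever field biases are built into the underlying department scores.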
Accreditation bodies are meant to do precisely this: rate departments on several different metrics and issue a composite letter grade. While the letter grade is prominently displayed on department websites, the detailed evaluations are not available in the public domain. They should be.
Finally, these one-off exercises don't mean much to me. Sure, it is interesting to know that the chemical engineering department at University X is ahead of the one at University Y in year Z on some metric. But I am more interested in trends over time, which are far more important and useful than static data.
I would love to see how a department's per-faculty numbers of publications and citations have been changing over time. [To smooth out the possibly wild year-to-year swings in the data, one may use five-year moving averages, i.e., for each year, plot the average of the previous five years' figures.]
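To make the bracketed suggestion concrete, here is a small sketch of how such a five-year moving average could be computed. The per-faculty publication figures are invented purely for illustration:

```python
# Sketch: five-year moving average of per-faculty publications.
# The yearly figures below are made up for illustration only.

pubs_per_faculty = {
    2001: 2.1, 2002: 2.4, 2003: 2.0, 2004: 2.6, 2005: 2.8,
    2006: 3.1, 2007: 2.9, 2008: 3.4, 2009: 3.6, 2010: 3.9,
}

window = 5
years = sorted(pubs_per_faculty)
for year in years[window - 1:]:
    # Average over the window ending at `year`, i.e. the previous five years.
    recent = [pubs_per_faculty[y] for y in range(year - window + 1, year + 1)]
    avg = sum(recent) / window
    print(f"{year}: {avg:.2f}")
```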
Such trend lines would show which departments are growing and which ones are in decline. They contain far more useful information, and should be of interest to potential graduate students and faculty applicants.