Friday, March 26, 2010

HowTo: Account for teaching in university rankings

Phil Baty of Times Higher Education on the difficulty of arriving at a measure of the quality of teaching [so that this measure can be used in a ranking exercise]:

"To think that such a ratio [i.e., staff to student ratio, SSR] could signify 'teaching quality' shows how serious a problem we face with rankings that privilege the availability of a metric over its validity," the academic said.

He is, of course, right. The same point was made in a paper from the Russian Rectors' Union, handed to me by Victor Sadovnichiy, president of Moscow State University, earlier this month.

It argues that "good teachers always have a lot of students, bad teachers have few". In other words, a low SSR may signal unpopular teaching just as easily as attentive teaching.

SSR figures are also easy to manipulate and hard to verify.

David Graham, provost of Concordia University in Canada, opened the web discussion by highlighting research showing that ratios anywhere from 6:1 to 39:1 can be derived from the same institution's data.
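That 6:1-to-39:1 spread is easy to reproduce on paper. A minimal sketch, using entirely hypothetical numbers, of how counting choices alone (which staff count as "teaching staff", whether students are counted by headcount or full-time equivalent) swing the ratio:

```python
def ssr(students, staff):
    """Students per staff member, rounded for display."""
    return round(students / staff, 1)

# Hypothetical institution (all figures invented for illustration):
full_time_students = 18000
part_time_students = 12000      # counted as, say, 0.3 FTE each
teaching_staff = 900
research_only_staff = 1500
casual_tutors = 600

# Generous counting: every staff category, students as FTE
staff_broad = teaching_staff + research_only_staff + casual_tutors
students_fte = full_time_students + 0.3 * part_time_students
print(ssr(students_fte, staff_broad))      # 7.2 -- looks excellent

# Narrow counting: only full-time teaching staff, students by headcount
students_head = full_time_students + part_time_students
print(ssr(students_head, teaching_staff))  # 33.3 -- looks dire
```

Same institution, same raw data, a ratio of roughly 7:1 or 33:1 depending on definitions, which is exactly the kind of spread Graham points to.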


  1. Pranav Dandekar said...

    I wonder why someone hasn't figured out a smart way to crowdsource part of university rankings. Aren't students (in aggregate) the best evaluators of the quality of an institution -- its faculty, labs, other students, etc? They are both the customers and the products of the institution. At the very least, student ratings/reviews/rankings should form one element of the rankings done by USA Today, Outlook, etc. And yes, I am anticipating gaming here -- as with any crowdsourcing task, you need to account for the fact that some fraction of the crowd is trying to game the rankings and correct for it.

  2. Abi said...

    Some sanctified version of "Rate My Professors" should do the kind of crowdsourcing job you describe, no? After the students rate their professors, a crawler can then aggregate the ratings and condense them into a single metric.