
Research Evaluation 21 (2012) pp. 71–78 doi:10.1093/reseval/rvr003
Advance Access published on 8 February 2012

Opening the black box of QS World University Rankings

Mu-Hsuan Huang*

Department of Library and Information Science, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan (R.O.C.)
*Corresponding author. Email: mhhuang@ntu.edu.tw.

In the era of globalization, the trend of university rankings has gradually shifted from country-wide analyses to world-wide analyses. The relatively high weighting placed on reputational surveys has drawn criticism of the Quacquarelli Symonds (QS) World University Rankings over the years. This study provides a comprehensive discussion of the indicators and weightings adopted in the QS survey and presents several debates the ranking has stirred in academia. Firstly, a low return rate, together with an unequal distribution of returned questionnaires, has introduced regional bias. Secondly, some universities are listed in both the domestic and international reputation questionnaires, whereas others are listed only in the domestic part; universities evaluated only by domestic respondents are limited in their ranking performance. Thirdly, quite a few universities exhibit identical indicator scores or even full scores, rendering the assessment questionable. Lastly, enormous year-to-year changes in single indicator scores suggest that the statistical data adopted by the QS Rankings should be further questioned.

Keywords: World Universities; ranking; QS Rankings.

1. Introduction

Since US News and World Report published America's Best Colleges in 1983, university rankings have continued to be a subject of much interest. They play an important role not only in the recruitment of students but also in governmental budget allocation for higher education. In recent years, governments and academia have begun to place great emphasis on the assessment of research performance (Buela-Casal et al., 2007). Moreover, the waves of globalization encourage competition among universities on a global basis, and country-wide university rankings have become inadequate. In June 2003, the Academic Ranking of World Universities (ARWU) was published by Shanghai Jiao Tong University, which aroused heated discussion.

Six major university ranking systems currently exist in the world: ARWU (Academic Ranking of World University 2011); the Performance Ranking of Scientific Papers for World Universities released by the Higher Education Evaluation & Accreditation Council of Taiwan (2011) (HEEACT Rankings); the Leiden Ranking by the Centre for Science and Technology Studies (CWTS) (Aguillo et al., 2010; Centre for Science and Technology Studies 2011); the Webometrics Ranking of World Universities (Webometrics Ranking) published by the Spanish National Research Council (CSIC) (Webometrics 2011); the Times Higher Education World University Rankings (THE Rankings) (Times Higher Education 2011); and the Quacquarelli Symonds (QS) World University Rankings (Quacquarelli Symonds 2011). Each of them features different criteria and ranks the world's universities from different aspects. ARWU analyzes the outstanding research performance of universities, using indicators such as alumni or staff winning Nobel Prizes and Fields Medals; the HEEACT Rankings use both quantitative and qualitative indicators to rank universities' scientific publication performance; the Leiden Ranking ranks universities by bibliometric methods and provides a multiple-indicator approach, including the CWTS crown indicators CPP/FCSm and MNCS2, to assess performance; and the Webometrics Ranking is based on a university's web presence, visibility, and web access.


QS, cooperating with Times Higher Education (THE), had published world university rankings since 2004, before the two separated in 2010. The QS Rankings listed the top 500 world universities annually and started publishing the Asian University Rankings in 2009. The QS World University Rankings focuses on four dimensions: research quality, graduate employability, teaching quality, and international outlook. Among the four dimensions, the 'Academic Peer Review' under research quality was given the highest weighting, accounting for 40%. In the Asian University Rankings, QS revised the weight shares of the indicators: the weighting of 'Academic Peer Review' was reduced to 30%, and 'Citations per Faculty' was divided into 'Papers per Faculty' and 'Citations per Paper', accounting for 15% each. In 2010, the QS system followed the same criteria, indicators, and sources of data as before, while THE developed new indicators for the THE World University Rankings, in which the reputation survey accounts for 34.5%.

Though ranking universities worldwide attracts attention and enthusiasm, it remains difficult to create a universal set of measuring indicators and to apply them properly. This study examines the debates on the QS Rankings. In the next section, related studies on the arguments between bibliometrics and reputational surveys as tools for university rankings are presented.

2. Related studies

Numerous studies have discussed various aspects of university rankings over the years. Merisotis and Sadlak (2005) provided a detailed outline of the ranking process: the first step was data collection (including existing data or newly compiled data), and the second step entailed selection of the types of rankings and variables, followed by selection of indicators and weighting shares before executing the analysis. The most influential factors in the ranking process were the decisions on indicators and weightings (Buela-Casal et al., 2007; Van Raan 2005). The frequently used indicators were bibliometrics (especially referring to citation analysis in this paper), peer reviews, web visibility, and web flow. Any indicator adopted would impose great influence over the results of the rankings. Geuna and Martin (2003) point out that bibliometrics and peer review were the two predominant methods of academic evaluation, with bibliometrics being quantitative evaluation and peer review being qualitative evaluation.

Buela-Casal et al. (2007) conducted a comparison study of ranking institutions including Shanghai Jiao Tong University, Times Higher Education, the Swiss Science and Technology Council, and Asiaweek, Hong Kong. The indicators of three of the four rankings employed data from ISI, whereas two of the four used peer reviews. This further confirmed the dominance of bibliometrics and peer reviews. In ARWU, conducted by Shanghai Jiao Tong University in China, bibliometric indicators accounted for 40%. In the International Champions League of Research Institutions by the Swiss Science and Technology Council, all the indicators used were bibliometric ones. On the other hand, in Asia's best universities ranking by Asiaweek, peer reviews accounted for the heaviest weighting among the indicators (20%).

Since the types of measurements and indicators had substantial influences on the ranking results, the pros and cons of the bibliometrics and peer reviews adopted in world university rankings were topics of constant discussion (Adam 2002; Van Raan 2005; Buela-Casal et al., 2007; Aguillo et al. 2010; Bookstein et al. 2010). The major disagreement was over the appropriateness of the source data. Contrary to peer reviews, bibliometric approaches used existing data and tended to be more objective. Citation analysis was the most commonly used bibliometric method. At its early stage of development, it was used to calculate citation counts and citing motives (Folger et al., 1970; Virgo 1977; Nederhof and Van Raan 1993). However, many scholars raised opposing views toward citation analysis and posed questions about self-citations, citation errors, national bias, language bias, citation behaviors of different disciplines, and bias in types of document (Kokko and Sutherland 1999; Van Leeuwen et al., 1999; Leimu and Koricheva 2005; Liu et al., 2005; Wong and Kokko 2005).

Van Raan (2005) believed that experts' opinions were necessary for evaluating research performance and confirmed the importance of peer reviews. He insisted that peer reviews should be the foundation of academic evaluation, with bibliometric indicators used as supplements after appropriate adjustment. However, he also acknowledged that the rankings of new universities and the performance of new researchers may be affected by the subjective recognition embedded in reputational surveys.

In fact, bibliometrics and reputational surveys have their own advantages and disadvantages. Thus, it was suggested not to rely solely upon one approach (Nederhof and Van Raan 1993; Aksnes and Taxt 2004; Van Raan 2005; Bookstein et al., 2010; Qiu et al., 2010). The measurements and indicators play an essential role in world university rankings. This article argues that, in order to produce a proper ranking system, bibliometrics and reputational surveys should be employed in a balanced way. In the next part, the mechanism of criteria and indicators of the QS World University Rankings is introduced and examined. Subsequently, debates on the contents and forms of the QS Rankings are discussed.

3. Methodology of QS Rankings

As shown in Table 1, the QS Rankings adopted a reputational survey (academic peer review and employer review) and quantitative indicators (citations per faculty, faculty student ratio, international faculty, and international students), each group accounting for 50% of the weighting shares. Five fields were reviewed: Arts & Humanities, Life Sciences & Biomedicine, Social Sciences, Natural Sciences, and Technology. Rankings in each field were determined by the reputational survey results calculated from questionnaires, which are divided into academic peer review and employer review. The calculation methods of the main indicators are described as follows.

Table 1. Criteria, indicators, and weightings of QS Rankings 2008

Criteria                   Indicator                 Weighting (%)
Research quality           Academic Peer Review      40
                           Citations per Faculty     20
Graduate employability     Employer Review           10
Teaching quality           Faculty Student Ratio     20
International outlook      International Faculty      5
                           International Students     5

(Research quality totals 60%; international outlook totals 10%.)
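To make the weighting scheme in Table 1 concrete, the following is a minimal sketch of how a QS-style composite score could be computed from already-scaled (0–100) indicator scores. It illustrates the arithmetic only, not QS's actual code; the dictionary keys and the example values are our own assumptions.

```python
# Minimal sketch of a QS-style composite score using the Table 1 weights.
# Illustration only, not QS's implementation; QS also rescales raw indicator
# values to a 0-100 scale before weighting, which is omitted here.

WEIGHTS = {
    "academic_peer_review": 0.40,
    "citations_per_faculty": 0.20,
    "employer_review": 0.10,
    "faculty_student_ratio": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def composite_score(indicator_scores: dict[str, float]) -> float:
    """Weighted sum of 0-100 indicator scores; the result is also on 0-100."""
    return sum(WEIGHTS[name] * indicator_scores[name] for name in WEIGHTS)

# Hypothetical university with full marks on the reputational surveys only:
example = {
    "academic_peer_review": 100.0,
    "citations_per_faculty": 60.0,
    "employer_review": 100.0,
    "faculty_student_ratio": 70.0,
    "international_faculty": 50.0,
    "international_students": 40.0,
}
print(composite_score(example))  # 80.5
```

Because the two reputational indicators together carry 50% of the weight, a university with perfect survey scores is guaranteed half of the maximum composite before any quantitative indicator is considered, which is the imbalance criticized in Section 4.1 below.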

3.1 Academic peer review

The contact information of questionnaire respondents was derived from two databases: 'The World Scientific' and 'Mardev'. The World Scientific indexed about 180,000 e-mail records, and Mardev indexed about 12,000 e-mail records from the field of arts and humanities (Sowter 2007). The respondents ranged from lecturers to university presidents. They were asked to select 30 universities (excluding their own) that they regarded as the best in the field to which they are affiliated. The time span of data collection covered 3 years, comprising 9,386 responses (Baty 2009).

3.2 Employer review

The methodology of the Employer Review was similar to that of the Academic Peer Review and drew on three kinds of sources: QS's extensive corporate database, the network of partners with which QS cooperates, and institutions that submitted lists of professionals with whom they worked. There were no limitations on the nationality of employers or the types of enterprises. Active recruiters were consulted via online questionnaires; the survey for 2008 included 2,339 recruiters (Times Higher Education 2008).
3.3 Citations per faculty

Citations per Faculty was derived from two sets of data: the total citation count over the past 5 years and the number of full-time faculty. The total citation count was retrieved from Web of Science, Scopus, and Google Scholar; full-time faculty included professors and researchers. The number of citations was then divided by the number of full-time faculty.

3.4 Faculty student ratio

The student data were retrieved from websites of governments, the Higher Education Statistics Agency (HESA), and similar sources. When the number of FTE students could not be retrieved, it was replaced by the total number of students. If there were two sets of faculty data (e.g., teaching and research), the former set was selected.

3.5 International faculty and students

These indicators counted the number of faculty and students holding an overseas nationality.
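The ratio in Section 3.3 and the fallback rule in Section 3.4 are simple enough to state in code. The sketch below is illustrative: the data shape and function names are our assumptions, and only the rules themselves (5-year citations divided by full-time faculty; fall back to total students when the FTE count is unavailable) come from the text.

```python
# Illustrative sketch of the two ratio indicators in Sections 3.3-3.4.
# Data shape and names are assumptions; only the rules come from the text.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UniversityData:
    citations_5yr: int           # total citations in the past 5 years (Sec. 3.3)
    full_time_faculty: int       # professors and researchers (Sec. 3.3)
    fte_students: Optional[int]  # FTE student count, if retrievable (Sec. 3.4)
    total_students: int          # fallback when FTE students are unavailable

def citations_per_faculty(u: UniversityData) -> float:
    return u.citations_5yr / u.full_time_faculty

def faculty_student_ratio(u: UniversityData) -> float:
    # Section 3.4: use the total student count when the FTE count is missing.
    students = u.fte_students if u.fte_students is not None else u.total_students
    return u.full_time_faculty / students

u = UniversityData(citations_5yr=80_000, full_time_faculty=2_000,
                   fte_students=None, total_students=30_000)
print(citations_per_faculty(u))   # 40.0
print(faculty_student_ratio(u))   # 0.0666... (1 faculty member per 15 students)
```

Note that the citation average rewards a small but highly cited faculty as much as a large, productive one; this distortion is taken up in Section 4.1.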


4. Debates

The reputational survey adopted by the QS Rankings has stirred heated debate. The questionnaires were complicated and lacked the objectivity to reflect teaching and research performance, and the yearly fluctuation of the data and the distribution of indicator scores raised doubts about consistency. The following parts discuss these controversies.

4.1 Unbalanced indicator weighting share

The indicator weightings shown in Table 1 reveal that, by focusing on peer reviews, the QS Rankings regarded reputation as efficiently representative of the performance of a university. However, with such a heavy weighting on peer reviews, the result might reflect only the reputation of a university rather than its actual performance, and questionnaire respondents might merely enumerate internationally renowned universities. Although the QS Rankings also evaluated research performance through Citations per Faculty, the ranking did not take the number of published papers into consideration. By calculating average citations only, a university with a limited number of papers but high citation counts could easily obtain higher scores. As a result, the indicator could not fully reflect the academic productivity of a university.

4.2 Problems of return rate and respondents

The results of the QS Rankings were greatly impacted by peer reviews, yet the representativeness of the results raised general doubts. The estimated return rate in 2006 was less than 1% (Ioannidis et al., 2007; Federkeil 2008). The number of accumulated returned questionnaires was 3,069 in 2007, 6,354 in 2008, and 9,386 in 2009 (Quacquarelli Symonds n.d.). The accumulated number increased substantially each year, and such great increases in returned questionnaires might make the ranking results change enormously. As for the respondents, the QS Rankings lacked clear parameters: QS did not strictly control the qualifications of respondents, and any individual could fill out the questionnaire through the webpage, even if the individual was not an expert in an academic or industry field.

4.3 Reliability of data source

The QS Rankings adopted the Faculty Student Ratio as the indicator of a university's teaching quality, but the reliability of the data source remained questionable. In addition to the difficulty of obtaining the data, the definitions of faculty and student were not consistent across universities; a university might even inflate its numbers of faculty and students, so the indicator could fail to reflect teaching quality and the learning environment. As for the international outlook based on the numbers of international faculty and students, the indicator might be meaningful, but internationalization should not be measured by one indicator only; other internationalized performance factors should be considered, such as international collaboration between universities or scholars.

4.4 Regional bias

The way the questionnaires were distributed and calculated suggests that the QS Rankings generally tended to be more advantageous for the Commonwealth of Nations. Table 2 shows the numbers and proportions of returned questionnaires from each country for the Academic Peer Review and the Employer Review in 2008. The returned questionnaires from the USA, UK, and Australia numbered the highest. British Commonwealth countries (currently the UK, Australia, Canada, India, Malaysia, New Zealand, Singapore, and South Africa; Hong Kong and Ireland were former members) accounted for 32% of the total questionnaires; the USA accounted for 10%, while Asian countries (excluding those belonging to the British Commonwealth) accounted for only 12%, with Indonesia and the Philippines each accounting for more than 3%. A further comparison of the returned questionnaires from Asian countries with those from British Commonwealth countries shows that Taiwan had the fewest returned questionnaires, only 39, whereas there were 135 from Singapore and 100 from Hong Kong.

Table 2. Return status of questionnaires by QS Rankings 2008

Country            Academic Peer Review, N (%)   Employer Review, N (%)
United States      638 (10.04)                   346 (14.79)
United Kingdom     563 (8.86)                    269 (11.50)
Australia          286 (4.50)                    178 (7.61)
Italy              277 (4.36)                    29 (1.24)
Canada             239 (3.76)                    37 (1.58)
India              236 (3.71)                    64 (2.74)
Indonesia          228 (3.59)                    N/A
Philippines        201 (3.16)                    45 (1.92)
Germany            182 (2.86)                    56 (2.39)
Malaysia           180 (2.83)                    38 (1.62)
New Zealand        160 (2.52)                    36 (1.54)
Spain              142 (2.23)                    27 (1.15)
Singapore          135 (2.12)                    74 (3.16)
France             125 (1.97)                    36 (1.54)
Belgium            124 (1.95)                    19 (0.81)
China              116 (1.83)                    25 (1.07)
Hong Kong          100 (1.57)                    50 (2.14)
Sweden             100 (1.57)                    N/A
Japan              96 (1.51)                     37 (1.58)
Netherlands        93 (1.46)                     75 (3.21)
Switzerland        84 (1.32)                     22 (0.94)
Ireland            78 (1.23)                     41 (1.75)
Austria            72 (1.13)                     N/A
Denmark            69 (1.09)                     23 (0.98)
Brazil             63 (0.99)                     N/A
Norway             63 (0.99)                     N/A
Portugal           63 (0.99)                     N/A
Turkey             63 (0.99)                     N/A
Iran               59 (0.93)                     N/A
Mexico             59 (0.93)                     75 (3.21)
Greece             51 (0.80)                     59 (2.52)
South Africa       51 (0.80)                     19 (0.81)
South Korea        51 (0.80)                     32 (1.37)
Thailand           43 (0.68)                     23 (0.98)
Poland             42 (0.66)                     N/A
Russia             41 (0.65)                     69 (2.95)
Israel             40 (0.63)                     N/A
Taiwan             39 (0.61)                     17 (0.73)
Argentina          36 (0.57)                     60 (2.57)
Finland            33 (0.52)                     N/A
Chile              N/A                           28 (1.20)
Venezuela          N/A                           27 (1.15)
Ukraine            N/A                           18 (0.77)
Czech Republic     N/A                           16 (0.68)
Romania            N/A                           15 (0.64)
Other              1033 (16.26)                  354 (15.13)
Total              6354                          2339

Source: QS Top Universities: 2008 Academic Peer Review Response Analysis; QS Top Universities: 2008 Employer Review Response Analysis.
Peer Review and Employer Review in 2008. The returned Greece 51 (0.80) 59 (2.52)
questionnaires from the USA, UK, and Australia South Africa 51 (0.80) 19 (0.81)
South Korea 51 (0.80) 32 (1.37)
numbered the highest. British Commonwealth countries Thailand 43 (0.68) 23 (0.98)
(Currently UK, Australia, Canada, India, Malaysia, New Poland 42 (0.66) N/A
Zealand, Singapore, South Africa; Hong Kong and Russia 41 (0.65) 69 (2.95)
Ireland were former members) accounted for 32% of the Israel 40 (0.63) N/A
total questionnaires; the USA accounted for 10%, while Taiwan 39 (0.61) 17 (0.73)
Argentina 36 (0.57) 60 (2.57)
Asian Countries (excluding those belong to British Finland 33 (0.52) N/A
Commonwealth countries) only accounted for 12%, and Chile N/A 28 (1.20)
both Indonesia and Philippines accounted for more than Venezuela N/A 27 (1.15)
3%. Further comparison of the returned questionnaires Ukraine N/A 18 (0.77)
Czech Republic N/A 16 (0.68)
from Asian countries to British Commonwealth countries
Romania N/A 15 (0.64)
showed that Taiwan had the least number of returned Other 1033 (16.26) 354 (15.13)
questionnaires, only for 39, whereas there were 135 from Total 6354 2339
Singapore, and 100 from Hong Kong.
In order to know whether the number of returned ques- Source: QS Top Universities: 2008 Academic Peer Review response analysis, QS
tionnaires led to regional bias, the statistical tests were Top Universities: 2008 Employer Review Response Analysis.
conducted in the following procedure. First of all, the
number of returned questionnaires and the number of
schools at country level of the QS ranking were tested by were tested by Spearman rank correlation. The result
running the Pearson correlation coefficient. The result shows moderately correlated (P  0.01, r’s = 0.491),
shows a positive association (P  0.01, r = 0.794), meaning that for a country, more questionnaires collected
indicating that for a country more questionnaires collected result in higher ranking performance of its schools. The
lead to more schools listed on the QS ranking. two tests conducted with Pearson correlation coefficient
Next, the number of returned questionnaires and the and Spearman rank correlation above prove that the
ranking of countries where questionnaires were collected number of returned questionnaires of a country may be
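Both tests use standard routines. The sketch below shows the computation with SciPy, assuming per-country vectors assembled from Table 2 and the published ranking tables; the questionnaire counts are a Table 2 excerpt, but the school counts and country ranks are placeholders (the paper's full data are not reproduced here), so the printed values will not match r = 0.794 or r_s = 0.491.

```python
# Sketch of Section 4.4's two tests using SciPy's standard routines.
# Questionnaire counts are a Table 2 excerpt; the other two vectors are
# placeholders, so the outputs are illustrative, not the paper's values.

from scipy.stats import pearsonr, spearmanr

questionnaires = [638, 563, 286, 236, 135, 100, 39]  # Academic Peer Review, N (Table 2)
schools_listed = [40, 30, 20, 8, 2, 5, 4]            # placeholder: schools per country in the QS list
country_rank   = [1, 2, 3, 5, 6, 4, 7]               # placeholder: aggregate ranking of each country

# Test 1: questionnaires vs. number of listed schools (Pearson).
r, p1 = pearsonr(questionnaires, schools_listed)
# Test 2: questionnaires vs. country ranking performance (Spearman).
rho, p2 = spearmanr(questionnaires, country_rank)

print(f"Pearson r = {r:.3f}, P = {p1:.4f}")
print(f"Spearman rho = {rho:.3f}, P = {p2:.4f}")
```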


In the results of the Employer Review, the USA, UK, and Australia were likewise the top three in numbers of returned questionnaires. The Commonwealth of Nations countries accounted for 34.5%, the USA for 15%, and Asian countries (excluding Commonwealth members) for only 7.65%. The number of returned Employer Review questionnaires was only 17 from Taiwan, while 74 came from Singapore and 50 from Hong Kong. As a result, it is worth considering the possibility that the lower number of returned questionnaires led to the weaker performance of Taiwan in the QS Rankings.

In addition to the regional bias, the fields and industries of respondents were also distributed unequally. Most of the Academic Peer Review questionnaires came from three fields: engineering and IT (33%), natural sciences (31.5%), and social science (29.8%). The Employer Review came mainly from four industries: financial services/banking, consulting/professional services, manufacturing/engineering, and IT/computer services. These unequal distributions possibly impacted universities' performance in the Academic Peer Review and the Employer Review.

4.5 Inconsistency of university list in questionnaires

From 2008, the QS Rankings divided the questionnaires into two parts, international and domestic, based on the nation of the respondent's university. In the international part, the nation of the respondent's university was excluded from the list; in the domestic part, the respondent's own university was excluded. With these precautions, the QS Rankings intended to reduce the bias caused by intentional recommendations of respondents' own countries and affiliated institutes. Although this design might normalize respondents' subjectivities, the inconsistency of the listed schools in the two parts led to an unfair evaluation for numerous schools.

For instance, only 12 universities in Taiwan were included in the international part of the 2008 questionnaire, while 30 universities, including those 12, were listed in the domestic part (Quacquarelli Symonds 2008). The universities excluded from the international part of the questionnaire would not be evaluated by respondents from other countries. This could mean that only the universities listed in the international part of the questionnaire can enter the global QS Rankings, while those excluded are barred from entering. Table 3 shows the Taiwanese universities listed in the two parts of the questionnaire. Chang Gung University, excluded from the international part, was an outstanding university that ranked in the top 500 in both the ARWU and the Performance Ranking of Scientific Papers for World Universities by HEEACT; nonetheless, it was absent from the QS Rankings. This indicates that whether a university is listed in the international part influences its ranking performance. In 2009, QS revised the questionnaire from closed-ended to partly open-ended questions that allowed respondents to fill in universities not on the list, but the effect of the inconsistency still needs further attention.

In addition, the division of the questionnaire into two parts favored world-renowned universities. Universities obtaining more credit from other countries could gain an advantage in the ranking results, and universities that enrolled more international students probably ranked higher.

Table 3. Taiwan schools ranked in peer reviews: international and domestic questionnaires in QS Rankings 2008

School                                          International   Domestic
Natl Sun Yat Sen Univ.                          V               V
Natl Cent Univ.                                 V               V
Natl Chunghsing Univ.                           V               V
Natl Chiao Tung Univ.                           V               V
Natl Cheng Kung Univ.                           V               V
Natl Chengchi Univ.                             V               V
Natl Tsing Hua Univ.                            V               V
Natl Yang Ming Univ.                            V               V
Natl Taiwan Univ.                               V               V
Natl Taiwan Univ. Sci. & Technol.               V               V
Natl Taiwan Normal Univ.                        V               V
Fu Jen Catholic Univ.                           V               V
Natl Taiwan Ocean Univ.                         –               V
Natl Taipei Univ. Technol.                      –               V
Natl Dong Hwa Univ.                             –               V
Natl Kaohsiung First Univ. Sci. & Technol.      –               V
Natl Kaohsiung Normal Univ.                     –               V
Natl Changhua Univ. Educ.                       –               V
Natl Chi Nan Univ.                              –               V
Natl Chung Cheng Univ.                          –               V
Natl Yunlin Univ. Sci. & Technol.               –               V
Chang Gung Univ.                                –               V
Chang Yuan Univ.                                –               V
Feng Chia Univ.                                 –               V
Ming Chuan Univ.                                –               V
Soochow Univ.                                   –               V
Tamkang Univ.                                   –               V
Tatung Univ.                                    –               V
Tunghai Univ.                                   –               V
Chung Li Yuan Ze Inst. Technol.                 –               V

4.6 Ranking results

As mentioned above, the results of the QS Rankings were heavily impacted by the number of returned questionnaires from each country. Because of home bias, the number of returned questionnaires from the UK ranked the second highest, and the performance of its universities was correspondingly outstanding: in the 2008 ranking results, eight British universities ranked in the top 50 and 29 in the top 200. Moreover, the QS Rankings benefitted universities from the Commonwealth of Nations countries, with 22 universities in the top 50 (44%) and 64 universities in the top 200 (32%).


Some Asian universities that generally ranked low, or were absent from other ranking systems, ranked in the top 500 or much higher. These Asian universities had great numbers of returned questionnaires: India with 236, Indonesia with 228, the Philippines with 201, and Malaysia with 180. These countries all had more than three universities in the top 500, while only two Indian universities were ranked in the ARWU and HEEACT rankings.

In addition, there was a gap between the ranking results and general impressions. For example, the University of Edinburgh ranked 23rd in 2008, higher than the University of California, Berkeley (36th); Peking University ranked 50th, higher than the University of Washington, Seattle (59th), and Peking University had even been ranked 14th in the 2006 ranking. Although the QS Rankings adopted peer review as the main method of evaluation, the ranking results still differed from public impressions. This difference could be influenced by the respondents' subjective opinions and the small numbers of returned questionnaires.

4.7 Indicator scores

The scores of each indicator in the QS Rankings also raised doubts. There were many cases of identical indicator scores, or even full scores, calling the discriminating power of the assessment into question. In the 2008 results, 21 universities received full scores in the Academic Peer Review, 12 in the Employer Review, 10 in Citations per Faculty, 19 in Faculty Student Ratio, 18 in International Faculty, and 16 in International Students. However, the real performance of some universities receiving full scores was not as good as that of others. For instance, Peking University received a full score, the same as Harvard University, and this result seemed unreasonable. Comparing the results of the Academic Peer Review with Citations per Faculty, McGill University, the University of Melbourne, and Peking University all received full scores in the Academic Peer Review but differed greatly in Citations per Faculty (see Table 4). As a result, peer reviews might not fully reflect the actual performance of a university.

Table 4. Schools with perfect scores in 2008 QS Rankings by Academic Peer Review and Citations per Faculty

QS 08 Rank   School                      Country   Academic Peer Review   Citations per Faculty
1            Harvard Univ.               US        100                    100
2            Yale Univ.                  US        100                    98
3            Univ. Cambridge             GB        100                    89
4            Univ. Oxford                GB        100                    85
5            Caltech                     US        100                    100
8            Univ. Chicago               US        100                    91
9            MIT                         US        100                    100
10           Columbia Univ.              US        100                    94
12           Princeton Univ.             US        100                    100
15           Cornell Univ.               US        100                    96
16           Australian Natl Univ.       AU        100                    74
17           Stanford Univ.              US        100                    100
19           Univ. Tokyo                 JP        100                    78
20           McGill Univ.                CA        100                    51
30           Univ. Calif. Los Angeles    US        100                    100
30           Natl Univ. Singapore        SG        100                    75
34           Univ. British Columbia      CA        100                    67
36           Univ. Calif. Berkeley       US        100                    100
38           Univ. Melbourne             AU        100                    56
41           Univ. Toronto               CA        100                    100
50           Beijing Univ.               CN        100                    34
13           Johns Hopkins Univ.         US        99                     100
58           Univ. Calif. San Diego      US        98                     100

4.8 Data fluctuations

Frequently applied quantitative indicators, such as the faculty–student ratio, paper counts, and citation counts, are hard to change dramatically in a short time. However, the indicator scores announced by the QS Rankings in 2007 and 2008 showed enormous changes. The errors in the student and faculty numbers shown on the QS website (e.g., for the University of Arkansas, the University of Alabama, and the University of Alberta) were evidence of the inaccuracy of the statistical data employed by the QS Rankings.

4.9 Citations per faculty

Citations per Faculty is the only bibliometric indicator used in the QS Rankings (weighted at 20%). Although the indicator helps moderate the influence of university scale by lowering the advantage that large universities draw from total paper and citation counts, it carries a certain bias. For example, the ratio of citations to faculty in the social sciences is generally lower than that in the sciences, as a result of the different citation patterns practiced in different academic fields. Therefore, caution is needed when applying this indicator to the ranking of universities.

For instance, the faculty–student ratio score of Keio University, Japan decreased by 50 points from 2007 to 2008, whereas that of the University of Copenhagen, Denmark increased by 49 points. For Citations per Faculty, the score of the University of Alabama decreased significantly from 98 to 40, which dropped its ranking by 72 places. Similarly, National Taiwan University ranked 89th on Citations per Faculty in 2007 but 145th in 2008. The QS Rankings adopted the same database, Scopus, in two consecutive years, with the data overlapping for 4 years because citation performance was measured over 5 years.


The dramatic changes were unreasonable and implied an error in Citations per Faculty in the first or the second year.
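The checks in Sections 4.7–4.9 amount to two mechanical data-quality tests: counting score ties at the ceiling and flagging implausible year-over-year jumps. A minimal sketch follows, with assumed data structures and an arbitrary threshold; only the University of Alabama figures (98 to 40) come from the text.

```python
# Sketch of the data-quality checks implied by Sections 4.7-4.9: count
# full-score ties per indicator and flag large year-over-year score jumps.
# Data shape and threshold are assumptions; only Alabama's 98 -> 40 drop
# in Citations per Faculty is taken from the text.

FULL_SCORE = 100
JUMP_THRESHOLD = 30  # arbitrary alert level for a 0-100 scaled indicator

def count_full_scores(scores: dict[str, float]) -> int:
    """Number of schools sharing the maximum score on one indicator."""
    return sum(1 for v in scores.values() if v >= FULL_SCORE)

def flag_jumps(year1: dict[str, float], year2: dict[str, float]) -> dict[str, float]:
    """Schools whose score on one indicator moved by more than the threshold."""
    return {s: year2[s] - year1[s]
            for s in year1.keys() & year2.keys()
            if abs(year2[s] - year1[s]) > JUMP_THRESHOLD}

cpf_2007 = {"Univ. Alabama": 98, "Univ. X (hypothetical)": 72}
cpf_2008 = {"Univ. Alabama": 40, "Univ. X (hypothetical)": 70}
print(flag_jumps(cpf_2007, cpf_2008))  # {'Univ. Alabama': -58}
```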
5. Conclusion

World university rankings have been a focus of attention worldwide since Shanghai Jiao Tong University released its first ranking in 2003, and numerous academic institutions have turned to studying the issue. In the process of compiling world university rankings, the criteria and indicators adopted can greatly influence the outcome. The current evaluation approaches can be divided into two methods, bibliometrics and peer reviews, both of which are numeric calculations. The data used by bibliometrics are more objective, whereas the data used by peer review methods are more subjective and can easily raise problems stemming from unclear parameter definitions. This article took the attention-drawing QS Rankings as an example to discuss the mechanism of such a ranking system.

In the QS Rankings, 50% of the indicators were based on university reputation measured by peer reviews, while the other 50% evaluated research output, teaching quality, and international outlook. After a thorough examination of each indicator, this article concludes with two points. First, the indicators used in the QS Rankings might lack validity. In collecting returned questionnaires for university reputation, the QS Rankings failed to control the number and qualifications of respondents, leading to selection bias. Countries of the Commonwealth of Nations contributed 32% of the returned questionnaires, while Asian countries, concentrated in only a few nations, occupied 22%. Similar cases occurred in the Employer Review. The distribution of respondents' fields and industries was unequal, and the surveys were not comprehensive. Additionally, many universities had identical indicator scores, which failed to reflect differences in performance between universities. Moreover, there were dramatic changes in the adopted data, revealing the inaccuracy of the statistical data.

Secondly, the ranking results might not be fair. Since the Commonwealth of Nations countries were superior in numbers of returned questionnaires, universities in those countries ranked relatively better: these countries had 64 universities in the top 200 (32%). In Asia, the countries with more returned questionnaires also ranked higher, causing a gap between the ranking results and public impressions. This demonstrates how indicator selection and regional bias can greatly affect ranking results.

The concept of the QS Rankings has some merits, but the procedure may lead to subjective consequences. Although its indicators were not comprehensive, the peer review method distinguished the rankings as one of the renowned world university rankings. Among the world ranking systems, each system contains different concepts and indicators, leading to different ranking results and revealing different facets of university rankings and values. The intention of this article was to introduce the features and characteristics of the QS Rankings, and to assist readers in comprehending the differences and meanings behind the figures without being confused by the rankings or pursuing ranking advancement blindly.

References

Academic Ranking of World University. (2011) Academic Ranking of World Universities 2011, <http://www.arwu.org/index.jsp> accessed 3 November 2011.

Adam, D. (2002) 'Citation Analysis: The Counting House', Nature, 415/6873: 726–729.

Aguillo, I., Bar-Ilan, J., Levene, M. and Ortega, J. (2010) 'Comparing University Rankings', Scientometrics, 85/1: 243–256.

Aksnes, D. W. and Taxt, R. E. (2004) 'Peer Reviews and Bibliometric Indicators: A Comparative Study at a Norwegian University', Research Evaluation, 13: 33–41.

Baty, P. (2009) Rankings 09: Talking Points, Times Higher Education <http://www.timeshighereducation.co.uk/story.asp?storycode=408562> accessed 20 October 2009.

Bookstein, F., Seidler, H., Fieder, M. and Winckler, G. (2010) 'Too Much Noise in the Times Higher Education Rankings', Scientometrics, 85/1: 295–299.

Buela-Casal, G., Gutiérrez-Martínez, O., Bermúdez-Sánchez, M. and Vadillo-Muñoz, O. (2007) 'Comparative Study of International Academic Rankings of Universities', Scientometrics, 71/3: 349–365.

Centre for Science and Technology Studies. (2011) Leiden Ranking 2010, <http://socialsciences.leiden.edu/cwts/products-services/leiden-ranking-2010-cwts.html> accessed 6 November 2011.

Federkeil, G. (2008) 'Ranking Higher Education Institutions: A European Perspective', Evaluation in Higher Education, 2/1: 35–52.

Folger, J. K., Astin, H. S. and Bayer, A. E. (1970) Human Resources and Higher Education; Staff Report of the Commission on Human Resources and Advanced Education. New York: Russell Sage Foundation.

Geuna, A. and Martin, B. R. (2003) 'University Research Evaluation and Funding: An International Comparison', Minerva, 41/4: 277–304.

Higher Education Evaluation & Accreditation Council of Taiwan. (2011) 2011 Performance Ranking of Scientific Papers for World Universities, HEEACT Ranking <http://ranking.heeact.edu.tw/en-us/2011/homepage/> accessed 3 November 2011.

Ioannidis, J. et al. (2007) 'International Ranking Systems for Universities and Institutions: A Critical Appraisal', BMC Medicine, 5/1: 30.

Kokko, H. and Sutherland, W. J. (1999) 'What Do Impact Factors Tell Us?', Trends in Ecology and Evolution, 14: 382–384.

Leimu, R. and Koricheva, J. (2005) 'What Determines the Citation Frequency of Ecological Papers?', Trends in Ecology & Evolution, 20/1: 28–32.

Liu, N. C., Cheng, Y. and Liu, L. (2005) 'Academic Ranking of World Universities Using Scientometrics: A Comment to the "Fatal Attraction"', Scientometrics, 64/1: 101–109.


Merisotis, J. and Sadlak, J. (2005) 'Higher Education Rankings: Evolution, Acceptance, and Dialogue', Higher Education in Europe, 30/2: 97.

Nederhof, A. J. and van Raan, A. F. J. (1993) 'A Bibliometric Analysis of Six Economics Research Groups: A Comparison with Peer Review', Research Policy, 22: 353–368.

Qiu, J. P., Yang, R. X. and Zhao, R. Y. (2010) 'Competition and Excellence: Ranking of World-Class Universities 2009 and Advance of Chinese Universities', Journal of Library and Information Studies, 8/2: 11–27.

Quacquarelli Symonds. (n.d.) Methodology: A Simple Overview, QS Web Site <http://www.topuniversities.com/university-rankings/world-university-rankings/methodology/simple-overview> accessed 15 March 2010.

——. (2008) World University Rankings 2008, THE-QS Web Site <http://research.qsnetwork.com/qs_surveysystem/index.php?viewonly&order=normal&partnerset=0> accessed 18 October 2009.

——. (2011) QS Top Universities, <http://www.topuniversities.com/> accessed 3 November 2011.

Sowter, B. (2007) Methodology: The Peer Review, <http://www.topuniversities.com.dev.quaqs.com/worlduniversityrankings/university_rankings_news/article/methodology_the_peer_review/> accessed 17 July 2009.

Times Higher Education. (2008) Strong Measures, <http://www.timeshighereducation.co.uk/story.asp?storycode=407247> accessed 20 October 2009.

——. (2011) The World University Rankings, <http://www.timeshighereducation.co.uk/index.asp?navcode=92> accessed 3 November 2011.

Van Leeuwen, T. N., Moed, H. F. and Reedijk, J. (1999) 'Critical Comments on Institute for Scientific Information Impact Factors: A Sample of Inorganic Molecular Chemistry Journals', Journal of Information Science, 25/6: 489–498.

Van Raan, A. F. J. (2005) 'Fatal Attraction: Conceptual and Methodological Problems in the Ranking of Universities by Bibliometric Methods', Scientometrics, 62/1: 133–143.

Virgo, J. A. (1977) 'A Statistical Procedure for Evaluating the Importance of Scientific Papers', Library Quarterly, 47/4: 415–430.

Webometrics. (2011) Ranking Web of World Universities, <http://www.webometrics.info/index.html> accessed 3 November 2011.

Wong, B. B. M. and Kokko, H. (2005) 'Is Science as Global as We Think?', Trends in Ecology & Evolution, 20/9: 475–476.

