Evaluating the validity of SET in India
Education + Training, Vol. 57 No. 6, 2015, pp. 623-638
DOI: http://dx.doi.org/10.1108/ET-06-2013-0072
Downloaded by University of Southern Queensland At 03:32 22 March 2016 (PT)
Abstract
Purpose – There is a debate in the literature about the generalizability of the structure and the validity of the measures of Student Evaluation of Teaching Effectiveness (SET). This debate spans the dimensionality and validity of the construct, and the use of the measure for the summative and formative purposes of teacher evaluation and feedback. The purpose of this paper is to contribute to the debate on these issues. Specifically, the paper tests the relationship of the teacher’s “charisma” trait with a measure of SET consisting of the two dimensions of “lecturer ability” and “module attributes.” The study is set in an emerging-market, cross-cultural context, with specific reference to India.
Design/methodology/approach – In this study, a two-dimensional scale of SET, originally developed by Shevlin et al. (2000) in their study in the UK, was empirically tested with Indian students and modified. Empirical data were collected from Indian students pursuing their MBA program in a north Indian university, and statistical testing using exploratory and confirmatory factor analyses was undertaken. The proposed relationship between a teacher’s “charisma” trait and a reflective construct comprising the two dimensions of SET was tested with the software package AMOS 4.0.
Findings – The results indicate that the measure of SET is influenced by the teacher’s “charisma” trait, thus providing evidence of a halo effect. This raises the issue of the validity of SET as an instrument for measuring teaching effectiveness (TE). The results support the hypothesis that the structure of SET is multidimensional, and indicate the need to adapt the instrument to diverse cultural and market contexts.
Originality/value – This study contributes to the debate on the validity, structure and use of SET as
an instrument for measuring TE in a developing market with cross-cultural implications such as India.
Keywords India, Structural equation modelling, Higher education, Validity,
Student Evaluation of Teaching Effectiveness, Reflective model
Paper type Research paper
Introduction
Quality of teaching is measured through an assessment of “teaching effectiveness” (TE). This is evaluated in several ways, including influence on positive personal change and development in students, their academic achievements and work. In turn, these are reflected in how students rate the TE of their teachers (Shevlin et al., 2000). In higher education, the measurement of perceived service quality from the students’ perspective is becoming increasingly important (O’Neill and Palmer, 2004; Stodnick and Rogers, 2008). However, some issues relating to the construct and its measurement from the students’ perspective need to be examined. For example, responses to queries such as “what are the determinant dimensions of TE?”, “how should a quality management model based on the measurement instrument be designed?” and “how should issues related to its implementation be dealt with?” need to be addressed.
The research literature shows a lack of consensus on the attributes and dimensions that constitute TE, and there is debate about the psychometric qualities of the measurement instruments deployed (Shevlin et al., 2000). There is still a lack of consensus about the number of dimensions that constitute TE and whether they are discrete or representative of a single higher-order construct (Abrami et al., 1997; Marsh and Roche, 1997). Debate also exists regarding the merits of using an overall evaluation model vs using a multidimensional framework for evaluation of TE relating to personnel decisions. According to Abrami et al. (Abrami 1985, 1989; Abrami and
SET
SET measures are widely used to provide formative feedback to faculty for improving teaching, course content and structure; a summary measure of TE for promotion and tenure decisions; and information to students for the selection of courses and teachers (Marsh and Roche, 1993; Chen and Hoshower, 2003). SETs are collected and used to provide diagnostic feedback to faculty for improving teaching. Other applications of SETs are: as a measure of TE for personnel decisions; providing information to students for the selection of courses and instructors; as a component in national and international quality assurance exercises, to monitor the quality of teaching and learning; and as an outcome for research on teaching (e.g. studies designed to improve TE and student outcomes, effects associated with different styles of teaching, perspectives of former students) (Marsh, 2007).
Student ratings of instruction are extensively employed to evaluate TE in universities
and institutions worldwide (Seldin, 1985; Abrami, 1989; Wagenaar, 1995; Abrami et al.,
2001; Hobson and Talbot, 2001). SET is widely used in universities in the UK and the USA
to effect changes in course material and its delivery. In the USA, faculty decisions
regarding terms of employment, salary levels and promotion are influenced by
SET results.
Research on SET has generally focussed on the development of an evaluation
instrument (Marsh, 1987), testing of its validity (Cohen, 1981) and reliability (Feldman,
1977) and evaluation of the factors biasing the student ratings (Hofman and Kremer,
1980; Abrami and Mizener, 1983). The research literature shows that the number of
dimensions of SET varies between two and 11 (Table I). According to Marsh (2007),
many SET instruments are not based on a theory of teaching and learning, hence their
content validity is questionable. Though there is support for appropriately constructed
SET instruments and the multidimensionality of the SET construct, some instruments
have very few items and provide evidence of fewer factors. Hence the debate about
which specific components of TE can and should be measured has not been resolved.
There are widespread differences in the SET instruments regarding the quality of
items, the operationalization of the TE construct, and the specific dimensions selected
(Marsh, 2007). Marsh and Dunkin (1997) identified three overlapping approaches in the literature to the identification, construction and evaluation of multiple dimensions in SET instruments: empirical approaches, such as factor analysis and multitrait-multimethod analyses; logical analyses of the content of effective teaching and the purposes the ratings are intended to serve, supplemented by reviews of previous research and feedback from students and instructors; and a theory of teaching and learning.
According to Marsh (2007), SETs, as a measure of TE, are difficult to validate, since no single criterion of effective teaching is sufficient. While researchers have suggested student learning as the only criterion of effective teaching, according to Marsh (2007), this inhibits a better understanding of what is being measured by SETs, of what can be inferred from SETs, and of how findings from diverse studies can be understood within a common framework.
There is controversy over whether SET instruments measure effective teaching or merely behaviors or teaching styles which are correlated with effective teaching. Thus, a teacher could be judged a poor teacher if he/she does not use higher-order questions, does not give assignments back quickly, does not give summaries of the material to be covered, etc. (McKeachie, 1997). Abrami et al. (1996) argued that student opinions represent a partial and biased view of the “teaching competence” construct due to students’ very position in the teaching-learning process, i.e. they cannot be a reliable and valid source of information on those aspects of teaching that they cannot observe systematically, or in which conflicts of interest may clearly bias their perceptions and evaluations. Shevlin et al. (2000) argue that if students have a positive personal and/or social view of the teacher/lecturer, this may lead to higher ratings irrespective of the actual level of TE. They empirically established that students’ perception of the teacher’s/lecturer’s charisma significantly predicted TE ratings.
Reviews of SET research (e.g. Shevlin et al., 2000; Apodaca and Grad, 2005; Hobson and Talbot, 2001) also show that while there is general agreement on the multidimensional perspective of SETs for purposes of formative feedback and instructional improvement, there is disagreement about the most appropriate form of SET for summative purposes, i.e. overall ratings, a multidimensional profile of specific SET factors, or global scores based on weighted or unweighted specific factors (Marsh, 2007). Abrami and d’Apollonia (1990) defended the unidimensional approach and proposed, for summative purposes and on content validity grounds, the use of a single overall measure based on the overall rating items or on a weighted average of items reflecting specific teaching behaviors, since some dimensions of teaching competence would be affected by factors beyond the teacher’s control, such as discipline, instructor level and course level. Apodaca and Grad (2005) call for cross-cultural research, which they believe would contribute to the discussion of the dimensional aspects through contrast of the models and outcomes obtained in the English-speaking context with different instruments and populations (Table I). Some external factors have been found to affect SET ratings (Table II), thus raising issues of validity and the need for validation of the measure in various contexts.
Conceptual model
Factors that constitute TE vary widely in the research literature (Brown and Atkins, 1993; Marsh and Roche, 1997; Patrick and Smart, 1998; Ramsden, 1991). One possible explanation is that TE is influenced by a latent trait, i.e. the charisma or personal leadership of the teacher. Thus, the effectiveness of a teacher is affected by students’ ratings of the teacher’s “charisma” trait. Students rate specific attributes of teaching on the basis of their global evaluation of a teacher (D’Apollonia and Abrami, 1997). The underlying trait of charisma significantly accounts for SET scores (Shevlin et al., 2000). Charisma has been shown to affect voter judgments of politicians (Pillai et al., 1997), as well as leadership at work (Fuller et al., 1996).
It is therefore proposed that students’ overall perception of the teacher would significantly influence SET ratings. Based on the literature, TE is construed and evaluated as a two-dimensional construct of “lecturer ability” (LA) and “module attributes” (MA). It is hypothesized that students’ overall evaluation of the teacher on the trait of “charisma” would be significantly related to the specific dimensions of TE (Figure 1).
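The hypothesized model above can be written down in lavaan-style model syntax (a sketch only; the study used AMOS 4.0, and the item labels here are hypothetical, following the Appendix numbering described later in the text):

```python
# Hypothesized model: the latent "Charisma" trait influences the two
# reflective SET dimensions, "lecturer ability" (LA) and "module
# attributes" (MA), each measured by its own Likert items.
MODEL_SPEC = """
# measurement model
LA =~ lect1 + lect2 + lect3 + lect4 + lect5 + lect6 + lect11
MA =~ mod7 + mod8 + mod9 + mod10
Charisma =~ char12 + char13
# structural model: charisma drives both SET dimensions (halo effect)
LA ~ Charisma
MA ~ Charisma
"""

def factors(spec: str) -> dict:
    """Parse the '=~' measurement lines into {factor: [indicators]}."""
    out = {}
    for line in spec.splitlines():
        line = line.split("#")[0].strip()  # drop comments
        if "=~" in line:
            factor, items = line.split("=~")
            out[factor.strip()] = [i.strip() for i in items.split("+")]
    return out
```

A specification string like this could be passed to an SEM package; here the small parser simply documents the item-to-factor structure.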
Table II. Review of the relationship between external factors and student ratings of teaching effectiveness

Fernandez et al. (1998): There was a weak relationship between class size and student ratings.
d’Apollonia and Abrami (1997): SET ratings and variables are significantly related with student attributes, lecturer behavior, and course administration.
Marsh and Roche (1997): SET ratings are positively related to students’ prior interest and purpose of taking the course.
Greenwald and Gillmore (1997): Grading leniency has a significant relationship with SET ratings.
Marsh (1987) and Feldman (1976): SET ratings have a positive relationship with expected grades.

Source: Adapted from Shevlin et al. (2000)
Methodology
In this study, 209 graduate students were self-selected during the MBA program in January-March 2013 at the business school department of a Delhi state government university in New Delhi, India. The demographic profile of the participants was considered to be representative of students of other MBA programs. The final sample size used for data analysis was 201, after deletion of eight entries because of missing data and errors. The students in the first year of the two-year MBA program were selected for the study and were asked to rate their teachers in two marketing courses comprising a total of 20 sessions of 90 minutes’ duration each. The students rated two male and two female teachers who had taught them the mentioned courses.
Measurement scale
The initial 11-item scale of SET employed for the study was adapted from the study by Shevlin et al. (2000). The data on the 11-item self-reported scale of TE (Appendix) were collected by a member of the managerial staff. The scale consisted of seven items (items 1-6 and 11) to measure the “LA” factor and four items (items 7-10) to measure the “module attributes.” Students rated their perceptions on the items on a five-point Likert scale with options ranging from “strongly disagree” (1) to “strongly agree” (5). The charisma of the teacher/lecturer was measured with two items, i.e. items 12 (“The lecturer has charisma”) and 13 (“The lecturer helped in transforming me to take interest in studies”) (Appendix).
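The Likert coding and item-to-factor assignment described above can be sketched as a small helper (illustrative only, not the authors’ code; the `score` function and its names are hypothetical):

```python
# Five-point Likert coding used in the study:
# "strongly disagree" = 1 ... "strongly agree" = 5.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Item-to-factor assignment from the text: items 1-6 and 11 measure
# "lecturer ability" (LA); items 7-10 measure "module attributes" (MA);
# items 12-13 measure the teacher's charisma.
FACTOR_OF_ITEM = {**{i: "LA" for i in (1, 2, 3, 4, 5, 6, 11)},
                  **{i: "MA" for i in (7, 8, 9, 10)},
                  **{i: "Charisma" for i in (12, 13)}}

def score(responses: dict) -> dict:
    """Average the coded Likert responses per factor for one student."""
    totals, counts = {}, {}
    for item, label in responses.items():
        f = FACTOR_OF_ITEM[item]
        totals[f] = totals.get(f, 0) + LIKERT[label]
        counts[f] = counts.get(f, 0) + 1
    return {f: totals[f] / counts[f] for f in totals}
```

For example, `score({3: "agree", 4: "agree", 8: "strongly agree"})` averages items 3 and 4 into an LA score and item 8 into an MA score.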
Findings
CFA
The structural pattern of the two-dimensional measure of SET was further evaluated for construct validity through CFA using AMOS 4.0.

[Figure 1. Hypothesized model of the relationship of Charisma with Student Evaluation of Teaching Effectiveness (SET): Charisma influences the two SET dimensions, Lecturer Ability and Module Attributes. Source: Adapted from Shevlin et al. (2000)]

The parameters of the two-factor specified model (Figure 1) were evaluated with AMOS 4.0. The tested model was modified based on modification indices, wherein items with cross-loadings were deleted or respecified (Byrne, 2001). The modified model was found to have an acceptable fit (Figure 2). Hu and Bentler (1999) suggested that comparative fit index (CFI) values above 0.95 and root mean square error of approximation (RMSEA) values below 0.08 represent an acceptable fit. Based on the fit indices GFI = 0.979, CFI = 0.983, NFI = 0.950, RMR = 0.039 and RMSEA = 0.048 (Table IV), the modified model was found to be acceptable. The χ2 value of 16.07 was within two times the number of degrees of freedom (22 degrees of freedom), indicating acceptable fit (Bollen, 1989). The modified and accepted model of SET was the two-dimensional model (as initially hypothesized), with six attributes having significant loadings on their respective factors (Figure 2). The attributes and their respective dimensions (with path coefficients) are shown in Table V. The t-statistics were significant for each path, and the values of the critical ratios were more than twice those of the standard errors. The model was thus found to be of suitable fit and its parsimony was supported. The t-values of the estimated parameters were significant (p < 0.001) (Table V). The reliability of the two dimensions of SET as measured by Cronbach’s α was > 0.7, i.e. “LA” (0.725) and “module attributes” (MA) (0.720). Convergent validity was evaluated according to criteria identified by Fornell and Larcker (1981), wherein the results of the CFA revealed good to strong loadings of items/attributes of SET on their respective dimensions (ranging from 0.509 to 0.753) (Table V).
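Cronbach’s α, used above to assess reliability, can be computed from raw item scores as follows (a generic sketch with made-up demo data, not the study’s data):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    items: list of equal-length lists, one per scale item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent totals
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical responses for a 3-item subscale (rows = items, cols = students)
demo = [[1, 2, 3, 4, 5],
        [2, 2, 3, 4, 4],
        [1, 3, 3, 5, 4]]
```

With these made-up data, `cronbach_alpha(demo)` gives 0.9375; values above 0.7, as reported for LA (0.725) and MA (0.720), are conventionally taken as acceptable reliability.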
Structural model
The hypothesized model (Figure 1) was then tested using AMOS 4.0. The model was found to be of acceptable fit based on the goodness-of-fit indices. The fit index results of GFI = 0.979, NFI = 0.963, CFI = 0.996, RMR = 0.033 and RMSEA = 0.023 (Table VI(a)) indicate excellent fit of the model. The results thus show that the teacher’s trait of “charisma” significantly influences the SET dimensions of “LA” and “MA” (Figure 2 and Table VI(a-c)). The standardized regression coefficients of the paths from Charisma to the SET dimensions of LA and MA are significant, with values of 0.664 and 0.886, respectively (Table VI). “Charisma” of the teacher explains 40.2 percent of the variance of LA and 78.4 percent of the variance of MA (Table VI).
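The fit criteria applied to both models (Hu and Bentler’s (1999) CFI and RMSEA cutoffs, and Bollen’s (1989) rule that χ2 fall within twice the degrees of freedom) can be expressed as a simple check. The `acceptable_fit` helper is illustrative; the values are those reported in the text:

```python
def acceptable_fit(chi2, df, cfi, rmsea):
    """Rules of thumb used in the text: chi2 within 2x the degrees of
    freedom (Bollen, 1989); CFI above 0.95 and RMSEA below 0.08
    (Hu and Bentler, 1999)."""
    return chi2 <= 2 * df and cfi > 0.95 and rmsea < 0.08

# Modified CFA model as reported: chi2 = 16.07, df = 22, CFI = 0.983, RMSEA = 0.048
cfa_ok = acceptable_fit(16.07, 22, 0.983, 0.048)
# Structural model as reported: chi2 = 17.686, df = 16, CFI = 0.996, RMSEA = 0.023
structural_ok = acceptable_fit(17.686, 16, 0.996, 0.023)
```

Both reported models satisfy all three rules of thumb.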
[Figure 2. Empirically tested model of the relationship of the teacher’s trait of “Charisma” with the dimensions of SET. Notes: Char1 and Char2 are item nos 12 and 13 measuring charisma; lect3, lect4 and lect5 are items 3-5 for attributes of lecturer ability; Mod8, Mod9 and Mod10 are items 8-10 for module attributes (Appendix); LA, lecturer ability; MA, module attributes; e, error variances]
Table V. Dimensions of SET, their attributes and path coefficients

Lecturer ability:
The lecturer is able to explain difficult concepts in a clear and straightforward way (lect3): 0.660
The lecturer makes use of examples and illustrations in his or her explanations of concepts (lect4): 0.509
The lecturer is successful in presenting the subject matter in an interesting way (lect5): 0.753

Module attributes:
The references given were very useful (Mod8): 0.548
In this module I learnt a lot (Mod9): 0.629
In my opinion this module was enjoyable and worthwhile (Mod10): 0.740
Table VI. Results of structural equation model of SET dimensions and their relationship with Charisma

(a) Fit indices of the hypothesized relationship of SET dimensions with charisma
χ2 (degrees of freedom = 16): 17.686
Goodness of fit index (GFI): 0.979
Normed fit index (NFI): 0.963
Comparative fit index (CFI): 0.996
RMR: 0.033
RMSEA (p close fit = 0.775): 0.023

(b) Path coefficients of the SET dimensions (LA and Mod) on Charisma
Charisma → Mod: 0.886
Charisma → LA: 0.664

(c) Squared multiple correlations (SMC) of the SET dimensions
Module attributes: 0.784
Lecturer ability: 0.402

Notes: LA, lecturer ability; Mod, module attributes
SET ratings along with other measures. The validity of the SET scale of TE, and especially its use for decisions related to teachers’ performance evaluation, remuneration and promotion, is questionable. The two factors of “LA” and “module attributes” reflect a halo effect along with the effect of TE, which partially explains the variance in the number of factors/dimensions of TE identified in the literature (Brown and Atkins, 1993; Marsh and Roche, 1997; Patrick and Smart, 1998; Ramsden, 1991).
The results demonstrate the existence of a halo effect, and hence instruments which measure TE cannot be used for summative purposes in decisions on the performance evaluation of the teacher/lecturer and administrative decisions of promotion and reward, since their validity is questionable. The use of the instrument for feedback and formative purposes is also questionable, as a significant proportion of the scale’s variation reflects students’ perception of the lecturer’s/teacher’s charisma (leadership attributes) rather than their lecturing ability and course module attributes. Researchers need to test the relationship with alternative specifications to find out how to reduce the effect of extraneous variables so that the validity of SET ratings can be established.
Limitations
The specified model was adapted from the study by Shevlin et al. (2000), and the relationships were specified on the basis of theories of personality from psychology. However, alternative specifications of the model are possible, as suggested by Shevlin et al. (2000), wherein the direction of influence can be from the SET dimensions to Charisma. Thus, it can be hypothesized that students’ perception of teachers’ “Charisma” is formed through their judgment of LA and module attributes. Since the model of the relationship of SET with charisma has only been tested in two contexts, i.e. the UK and India, the results are not generalizable, especially as regards the dimensionality of SET, the attributes used in the measure and the results of the study.
References
Abrami, P.C. (1985), “Dimensions of effective college instruction”, Review of Higher Education,
Vol. 8 No. 3, pp. 211-228.
Abrami, P.C. (1989), “How should we use student ratings to evaluate teaching?”, Research in
Higher Education, Vol. 30 No. 2, pp. 221-227.
Abrami, P.C. and d’Apollonia, S. (1990), “The dimensionality of ratings and their use in personnel
decisions”, in Theall, M. and Franklin, J. (Eds), Student Ratings of Instruction: Issues for
Improving Practice, Jossey-Bass, San Francisco, CA, pp. 97-112.
Abrami, P.C. and d’Apollonia, S. (1999), “Current concerns are past concerns”, American
Psychologist, Vol. 54 No. 7, pp. 519-520.
Abrami, P.C. and Mizener, D.A. (1983), “Does the attitude similarity of college professors and their
students produce ‘bias’ in the course evaluations?”, American Educational Research
Journal, Vol. 20 No. 1, pp. 123-136.
Abrami, P.C., d’Apollonia, S. and Rosenfield, S. (1996), “The dimensionality of student ratings of
instruction: what we know and what we do not”, in Smart, J.C. (Ed.), Higher Education:
Handbook of Theory and Research, Vol. XI, Agathon Press, New York, NY, pp. 213-264.
Abrami, P.C., D’Apollonia, S. and Rosenfield, S. (1997), “The dimensionality of student ratings of
instruction: what we know and what we do not”, in Perry, R.P. and Smart, J.C. (Eds),
Effective Teaching in Higher Education: Research and Practice, Agathon Press, New York,
NY, pp. 321-367.
Abrami, P.C., Marilyn, H.M. and Raiszadeh, F. (2001), “Business students’ perceptions of
faculty evaluations”, The International Journal of Educational Management, Vol. 15
No. 1, pp. 12-22.
Adamson, G., O’Kane, D. and Shevlin, M. (2005), “Students’ ratings of teaching effectiveness:
a laughing matter?”, Psychological Reports, Vol. 96 No. 1, pp. 225-226.
Apodaca, P. and Grad, H. (2005), “The dimensionality of student ratings of teaching:
integration of uni and multidimensional models”, Studies in Higher Education, Vol. 30 No. 6,
pp. 723-748.
Asch, S.E. (1946), “Forming impressions of personality”, Journal of Abnormal and Social
Psychology, Vol. 41 No. 2, pp. 258-290.
Atkins, M.J. (1993), “Theories of learning and multimedia applications: an overview”, Research
Papers in Education, Vol. 8 No. 2, pp. 251-271.
Basow, S.A. (1995), “Student evaluations of college professors: when gender matters”, Journal of Educational Psychology, Vol. 87 No. 4, pp. 656-665.
Basow, S.A. and Silberg, N.T. (1987), “Student evaluations of college professors: are female and male professors rated differently?”, Journal of Educational Psychology, Vol. 79 No. 3, pp. 308-314.
Bollen, K.A. (1989), Structural Equations with Latent Variables, John Wiley and Sons, New York, NY.
Brown, G. and Atkins, M. (1993), Effective Teaching in Higher Education, Routledge, London.
Bruner, J.S. and Tagiuiri, R. (1954), “The perception of people”, in Lindzey, G. (Ed.), Handbook of
Social Psychology, Vol. 2, Addison Wesley, London, pp. 634-654.
Burdsal, C.A. and Harrison, P.D. (2008), “Further evidence supporting the validity of both a
Leadership: The Cutting Edge, Southern Illinois University Press, Carbondale, IL, pp. 189-207.
Hu, L.T. and Bentler, P.M. (1999), “Cutoff criteria for fit indexes in covariance structure
analysis: conventional criteria versus new alternatives”, Structural Equation Modeling:
A Multidisciplinary Journal, Vol. 6 No. 1, pp. 1-55.
Kelley, H.H. (1950), “The warm-cold variable in first impressions of persons”, Journal of Personality, Vol. 18, pp. 431-439.
Krishnan, A. (2011), “Quality in higher education: road to competitiveness for Indian business
schools”, Opinion Journal, Vol. 1 No. 1, pp. 9-15.
Kwan, K.P. (1999), “How fair are student ratings in assessing the teaching performance
of university teachers?”, Assessment & Evaluation in Higher Education, Vol. 24 No. 2,
pp. 181-195.
Lowman, J. and Mathie, V.A. (1993), “What should graduate teaching assistants know about teaching?”, Teaching of Psychology, Vol. 20 No. 2, pp. 84-88.
McKeachie, W.J. (1997), “Student ratings: the validity of use”, American Psychologist, Vol. 52
No. 11, pp. 1218-1225.
Marsh, H.W. (1987), “Students’ evaluations of university teaching: research findings,
methodological issues, and directions for future research”, International Journal of
Educational Research, Vol. 11 No. 3, pp. 251-387.
Marsh, H.W. (1991), “A multidimensional perspective on students' evaluations of teaching
effectiveness: reply to Abrami and D'Apollonia”, Journal of Educational Psychology, Vol. 83
No. 3, pp. 416-421.
Marsh, H.W. (2007), “Do university teachers become more effective with experience? A multilevel
growth model of students’ evaluations of teaching over 13 years”, Journal of Educational
Psychology, Vol. 99 No. 4, pp. 775-790.
Marsh, H.W. and Dunkin, M. (1992), “Students' evaluations of university teaching: a
multidimensional perspective”, in Smart, J.C. (Ed.), Higher Education: Handbook on
Theory and Research, Vol. 8, Agathon Press, New York, NY, pp. 143-234.
Marsh, H.W. and Dunkin, M.J. (1997), “Students’ evaluations of university teaching: a
multidimensional perspective”, in Perry, R.P. and Smart, J.C. (Eds), Effective Teaching in
Higher Education: Research and Practice, Agathon, New York, NY, pp. 241-320.
Marsh, H.W. and Hocevar, D. (1984), “The factorial invariance of students' evaluations of college
teaching”, American Educational Research Journal, Vol. 21, pp. 341-366.
Marsh, H.W. and Hocevar, D. (1991), “Students' evaluations of teaching effectiveness: the stability
of mean ratings of the same teachers over a 13-year period”, Teaching and Teacher
Education, Vol. 7 No. 4, pp. 303-314.
Marsh, H.W. and Roche, L.A. (1993), “The use of students' evaluations and an individually structured intervention to enhance university teaching effectiveness”, American Educational Research Journal, Vol. 30 No. 1, pp. 217-251.
Marsh, H.W. and Roche, L.A. (1997), “Making student’s evaluations of teaching effectiveness effective”, American Psychologist, Vol. 52 No. 11, pp. 1187-1197.
O’Neill, M. and Palmer, A. (2004), “Importance-performance analysis: a useful tool for directing continuous quality improvement in higher education”, Quality Assurance in Education, Vol. 12 No. 1, pp. 39-52.
Patrick, J. and Smart, R.M. (1998), “An empirical evaluation of teacher effectiveness: the
emergence of three critical factors”, Assessment and Evaluation in Higher Education,
Vol. 23 No. 2, pp. 165-178.
Pillai, R., Stites-Doe, S., Grewal, D. and Meindl, J.R. (1997), “Winning charisma and losing the presidential election”, Journal of Applied Social Psychology, Vol. 27 No. 19, pp. 1716-1726.
Ramsden, P. (1991), “A performance indicator of teaching quality in higher education: the course
experience questionnaire”, Studies in Higher Education, Vol. 16 No. 2, pp. 129-150.
Ryan, J.M. and Harrison, P.D. (1995), “The relationship between individual instructional characteristics and the overall assessment of teaching effectiveness across different instructional contexts”, Research in Higher Education, Vol. 36 No. 5, pp. 213-228.
Safer, A.M., Farmer, L.S.J., Segalla, A. and Elhoubi, A.F. (2005), “Does the distance from the
teacher influence student evaluations?”, Educational Research Quarterly, Vol. 28 No. 3,
pp. 27-34.
Seldin, P. (1985), Current Practices in Evaluating Business School Faculty, Center for Applied
Research, Lubin School of Business Administration, Pace University, Pleasantville, NY.
Shevlin, M., Banyard, P., Davies, M. and Griffiths, M. (2000), “The validity of student evaluation of teaching in higher education: love me, love my lectures?”, Assessment & Evaluation in Higher Education, Vol. 25 No. 4, pp. 397-405.
Stodnick, M. and Rogers, P. (2008), “Using SERVQUAL to measure the quality of the classroom experience”, Decision Sciences Journal of Innovative Education, Vol. 6 No. 1, pp. 127-146.
Vernon, P.E. (1964), Personality Assessment: A Critical Survey, Methuen, London.
Wagenaar, T.C. (1995), “Student evaluation of teaching: some cautions and suggestions”, Teaching Sociology, Vol. 64 No. 1, pp. 64-68.
Wankat, P.C. (2002), The Effective Efficient Professor: Teaching, Scholarship and Service, Allyn &
Bacon, Boston.
Westwood, P. (1998), “Reducing educational failure”, Australian Journal of Learning Disabilities,
Vol. 3 No. 3, pp. 4-12.
Further reading
Bryman, A. (1992), Charisma and Leadership in Organizations, Sage, London.
Quality Assurance Agency for Higher Education (1997), Subject Review Handbook: October 1998
to September 2000 (QAA 1/97), Quality Assurance Agency for Higher Education,
London.
Seldin, P. (1995), Improving College Teaching, Anker Publishing Company, Bolton, MA.
Shakleton, V. (1995), Business Leadership, Routledge, London.
Appendix

[Table AI. Student Evaluation of Teaching Effectiveness – questionnaire used for the study. Notes: SD, strongly disagree; D, disagree; N, neutral; A, agree; SA, strongly agree. Source: Adapted from Shevlin et al. (2000)]
Corresponding author
Dr Rajat Gera can be contacted at: geraim43@gmail.com