Language Testing
30(3) 403–412
© The Author(s) 2013
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/0265532213480338
ltj.sagepub.com

Communicating the theory, practice and principles of language testing to test stakeholders

Lynda Taylor
University of Bedfordshire, UK
Abstract
The 33rd Language Testing Research Colloquium (LTRC), held in June 2011 in Ann Arbor,
Michigan, included a conference symposium on the topic of assessment literacy. This event brought
together a group of four presenters from different parts of the world, each of whom reported
on their recent research in this area. Presentations were followed by a discussant slot that
highlighted some thematic threads from across the papers and raised various questions for the
professional language testing community to consider together. One point upon which there was
general consensus during the discussion was the need for more research to be undertaken and
published in this complex and challenging area. It is particularly encouraging, therefore, to see a
coherent set of studies on assessment literacy brought together in this special issue of Language
Testing, and it will undoubtedly make an important contribution to the steadily growing body of
literature on this topic, particularly as it concerns the testing of languages. This brief commentary
revisits some of the themes originally raised during the LTRC 2011 symposium, considers how
these have been explored or developed through the papers in this special issue and reflects on
some future directions for our thinking and activity in this important area.
Keywords
Assessment literacy, language assessment literacy, test stakeholders
LTRC 2011 in Michigan celebrated 50 years since the publication in 1961 of the
seminal work in our field by Professor Robert Lado, one-time Director of the English
Language Institute at the University of Michigan. His Language Testing volume
helped to inform and shape subsequent research and practice in our field perhaps
more than most publications. Although Lado himself made no explicit reference
in his book to assessment literacy (AL) or language assessment literacy (LAL), it could be argued that the publication was in its essence an exercise in developing assessment literacy, particularly as far as the testing of languages was concerned.

Corresponding author:
Lynda Taylor, Senior Lecturer in Language Assessment, Centre for Research in English Language Learning and Assessment (CRELLA), University of Bedfordshire, 47 Montague Road, Cambridge, CB4 1BU, UK.
Email: lynda.taylor@beds.ac.uk
Though he did not use the term ‘assessment literacy,’ Lado was quite clear about
the target readership for his book and its intended function. He was writing for a
variety of different audiences who would be involved in language teaching and/
or learning, who would be making use of language tests and who would, therefore,
need to understand something about language testing. The volume’s subtitle – The
Construction and Use of Foreign Language Tests – confirms that Lado was keen
to target not just ‘test makers’ (though these undoubtedly constituted an important
audience for him) but a much wider range of key stakeholder constituencies
including: existing teachers of foreign languages; prospective language teachers
and, presumably, those in training; linguists and language specialists; teachers; and
graduate students. Lado articulated his aims in writing his book as follows: “so that
increasingly we may speak on the basis of knowledge rather than from opinions
and hypotheses alone valuable as they are at proper stages in the development of
man’s [sic] thought” (1961, pp. vii–viii). His goal seems to have been to disseminate
language testing knowledge, skills and competence at a time when “great advances
in transportation and communication” were stimulating a need for “more effective
teaching of languages” along with the need for “more effective testing of their use”
(1961, pp. 1–2). Reading Lado’s words from half a century ago, it is tempting to
think that little has changed in some respects; the debate over what constitutes the
effective teaching and testing of languages continues to this day in a world that is
still characterized by globalization and technological advances.
Other notions of literacy to have emerged, mainly since the late 20th century, are simi-
larly metaphorical in nature, often associated with education or with society more
broadly. Teachers and lecturers in schools and universities, for example, sometimes com-
plain that their students fail to acquire the ‘biblical literacy’ (a form of ‘cultural literacy’)
that is fundamental for a full understanding and appreciation of so much Western litera-
ture, art and music from earlier centuries. Where once this knowledge and understanding
could be generally assumed among students within most Western educational contexts,
specially designed resources may be needed nowadays to familiarize students with the
great religious themes that once inspired writers, artists, composers and others in Western
culture and civilization (see Dyas & Hughes, 2005).
At a more applied level, notions of ‘technological literacy’ (e.g., to operate standard
household appliances) and ‘computer literacy’ are now acknowledged throughout soci-
ety; and in a technological age, we are sensitive to the reality that those who are not
computer-literate risk becoming increasingly marginalized or disadvantaged as a result.
Some social commentators express concern at increasingly low levels of ‘civic literacy’
in contemporary society; that is, citizens are seen as lacking the knowledge and skills
needed to participate effectively in the community, government and politics, leading to a
potential threat to democracy and good governance.
Other metaphorical expressions to have entered common usage in recent years include
‘risk literacy,’ ‘health literacy,’ ‘media literacy’ and ‘emotional literacy.’ In each case, the
use of the term ‘literacy’ stands for know-how and awareness linked to the preceding item
in the multi-word phrase. The focus is on the ability to understand the content and dis-
course associated with a given domain or activity and on being able to engage with and
express oneself appropriately in relation to this.
It is hardly surprising, then, that ‘assessment literacy’ (AL) has been added to the
growing list of ‘literacies’ to be acquired in contemporary life, together with ‘language
assessment literacy’ (LAL) as a potentially subordinate or overlapping category. Both
are relatively new fields of as far as theoretical and empirical research is concerned,
but both are generally acknowledged today as an important focus for attention, debate,
policymaking and action in education and wider society. Some writers highlight an
imperative for work in this area in response to the huge growth of testing and assess-
ment worldwide and the increasing number of people involved in it; the role of lan-
guage assessment in particular has expanded in education and wider society as a result
of globalization and migration (for a fuller discussion see Taylor, 2009). Outcomes
from empirical research investigating the nature and development of language assess-
ment literacy are urgently needed not just to inform and underpin existing policy and
practice but also to inspire and shape new and innovative initiatives for disseminating
core knowledge and expertise in language assessment to a growing range of test
stakeholders.
Questions such as the following arise:

• Who are the key stakeholder groups that need to develop AL/LAL?
• What sort of content input is needed for developing AL/LAL?
• What are the specific domains and contexts in which this needs to be done?
• When is the best time for this to be done?
• What methods or approaches are likely to be most effective?
The papers presented in the LTRC 2011 symposium demonstrated how a broad range
of research methodologies (quantitative, qualitative and mixed methods) could be suc-
cessfully employed to investigate such questions. Subsequent discussion during the sym-
posium highlighted four particular areas in which further research into AL and LAL
might generate important insights for our field. The four areas related to the following:
language testers and test developers need to be sensitive to many different types of test
stakeholder and the varying ways in which they find themselves engaging with and
understanding assessment issues. Identifying the range of relevant stakeholders and eval-
uating their specific needs in relation to what test scores mean in their context and, con-
sequently, how scores can or cannot be used, is becoming a priority in a world where
assessment occupies such a central role. Ideally the promotion and development of
assessment literacy will be achieved ‘by design’ rather than being a corrective after-
thought. This means that engagement with test stakeholder groups needs to take place at
the outset of any project to develop a new or revised test or testing system, not simply
initiated at a point where the test is ready to be implemented. Similarly, some degree of
familiarization with the principles and practice of assessment will ideally be embedded
from an early stage in the training of language teachers, not simply offered as a ‘bolt-on’
option. AL/LAL development activities may need to be integrated within professional
development programmes or briefing sessions for other stakeholder groups, such as civil
servants or politicians. In each case, the primary content relevant to the needs of that
stakeholder group (e.g., will it be measurement theory? practical know-how? ethical
principles?) has to be extracted and translated into a language that particular group can
access and understand.
One way of conceptualizing this differentiation is as a set of concentric circles expanding outwards from an ‘expert core’ of assessment knowledge, skill
and principles, with each successive ring (or segment of a ring) representing the level of
content/input that is required to meet the needs of a particular set of stakeholders.
Taking a more discriminating perspective in this way may assist us in identifying the
specific range and depth of testing expertise that is relevant when selecting the content
or input that will best match each group’s needs. It should also make the process of
developing AL/LAL more manageable and achievable. Not everyone needs to know or
be able to do everything to the same level. What is important is that they should be
competent in the knowledge, skills and understanding necessary for their context of
activity. The resulting profile for each stakeholder group is likely to look somewhat dif-
ferent. For example, a profile for test writers may cover a wide range of content dimen-
sions fairly evenly and in some depth. A profile for classroom language teachers,
however, may end up focusing strongly on the practical know-how needed for creating
tests but have a much lighter focus on measurement theory or ethical principles; the lat-
ter may need to be touched upon only briefly at a surface level. While a profile for
university administrators will address those aspects of the assessment literacy construct
that relate to understanding the nature of test instruments and the meaning of their
scores for decision-making purposes, other aspects such as how to construct and vali-
date tests need not receive much attention.
Figure 2(a–d) attempts to illustrate what differential assessment literacy might look
like for these three groups and for the community of professional language testing
experts. It should be noted that the labelled dimensions on the eight axes (i.e. knowledge
of theory, technical skills, etc.) are hypothesized from the discussion of possible AL/
LAL components across various papers in this special issue, while the values (i.e. 0–4)
are hypothesized according to the different stages of literacy suggested by Pill and
Harding. The diagrams are for illustrative purposes only, to show how it might be possible to vary any of the elements depending upon the nature and extent of the stakeholder’s involvement in assessment. Figure 2(a–d) attempts to operationalize the range of key components making up the AL/LAL construct and to grade them on some sort of a continuum of depth/intensity.

[Figure 2(a–d): radar diagrams profiling differential assessment literacy for four stakeholder groups across eight dimensions (knowledge of theory, technical skills, principles and concepts, language pedagogy, local practices, sociocultural values, personal beliefs/attitudes, and scores and decision making), each graded on a scale of 0–4.]
Finally, it is worth noting that several of the papers in this special issue make it clear that
language assessment literacy is not necessarily a value-free concept. It is important to
consider, therefore, what or whose values shape any programme to develop language
assessment literacy. Two of the papers commented on a certain tension experienced
between the perspectives of ‘professional language testers’ and of ‘non-language testers.’
This observation resonates with the tension commented on by Brumfit (2010) in his dis-
cussion of language education in British society. He described a tension between ‘exper-
tise and popular understanding’:
there has been an increase of linguistically expert academics at the same time as experts of all
kinds have been increasingly marginalised by governments (see Brumfit 2001, Chapter 6).
Several of the tensions mentioned above reflect differences between specialist and popular
views of language and its role. (Brumfit, 2010, p. 21)
Brumfit’s comments referred primarily to the tension that arose in the late 1990s between
specialist and popular views concerning primary and secondary language education
issues in the UK. A similar tension can be discerned in the world of language testing: there is always the danger that testing experts adopt a somewhat superior attitude concerning the value and significance of their professional expertise, which can lead to their being marginalized in a way that is counterproductive for all concerned.
The development of AL/LAL across a range of stakeholder groups is likely to be most
successful when it reflects a dynamic and iterative process informed by a collaborative
ongoing dialogue taking place at the interface between language testing experts and non-
specialist test stakeholders.
In her Guest Editorial to this valuable special issue, Inbar-Lourie highlights four
strands to emerge from the authors’ contributions: first, the breadth of scope in terms of
how the AL/LAL construct can be interpreted and defined; second, the nature and extent
of the language component within an AL/LAL construct; third, the site of authority for
determining the precise nature of AL/LAL; and fourth, what constitutes the essential
core components of AL/LAL. This final paper has offered some additional reflections on
these strands and has attempted to suggest some practical methods for conceptualizing
range and depth of content and for differentiating profiles of stakeholder need and levels
of proficiency. It is hoped that some of these ideas can be refined further to reveal new insights and approaches in this important area for our field.
Funding
This research received no specific grant from any funding agency in the public, commercial or
not-for-profit sectors.