
INVITED RESEARCH ISSUES

TESOL Quarterly invites readers to submit short reports and updates on their work.
These summaries may address any areas of interest to Quarterly readers.

Edited by CONSTANT LEUNG
King's College London

Reconceptualizing Language Assessment Literacy:
Where Are Language Learners?
JIYOON LEE
University of Maryland, Baltimore County
Baltimore, Maryland, United States
YUKO GOTO BUTLER
University of Pennsylvania
Philadelphia, Pennsylvania, United States

doi: 10.1002/tesq.576

The learner-centered approach in language teaching was one of the
major paradigm shifts in the past century. Learners' active engagement
in learning has been highly encouraged, and language education
researchers and teachers have investigated pedagogical approaches to
incorporate learners’ perspectives in learning (Lambert, Philp, &
Nakamura, 2017; Nunan, 1988). Language assessment is no exception.
Varying types of language assessments (e.g., self-assessment and peer
assessment) have now been introduced in classrooms in order to
engage learners as autonomous agents in language assessment.
The prevailing accountability discourse in education in many parts
of the world demands comprehensive information about learners.
Ranges of assessments are implemented to collect information about
learners and to facilitate their constructive learning. Among varied
subject-specific assessments, language assessment seems to play a
unique role not only in education but also in society, reflecting
language’s significance as a primary means of communication and

TESOL QUARTERLY Vol. 0, No. 0, 0000 1


© 2020 TESOL International Association
knowledge sharing. Language assessment has been used for high-
stakes decisions such as admissions and citizenship. The range of lan-
guage assessment practices requires a high degree of subject-specific
assessment literacy (AL), namely, language assessment literacy (LAL),
among stakeholders (Levi & Inbar-Lourie, 2019). LAL is broadly
defined as stakeholders’ knowledge about language assessment princi-
ples and its sociocultural-political-ethical consequences, the stakehold-
ers’ skills to design and implement theoretically sound language
assessment, and their abilities to interpret or share assessment results
with other stakeholders (Brindley, 2001; Davies, 2008; Fulcher, 2012;
Inbar-Lourie, 2008; Lee, 2019). While still at a relatively early stage,
the existing research on LAL has provided valuable information
regarding the nature of LAL. It has enhanced our understanding of the
current status of LAL among stakeholders and helped us envision the
ways to support their LAL development. However, previous studies on
LAL primarily have focused on teachers and administrators, while
learners’ perspectives have rarely been incorporated (Malone, 2017).
Considering the interconnected relationships among instruction,
learning, and assessment in language education as well as the central-
ity of learners’ needs in curriculum development, we argue that it is
critical to incorporate perspectives of one of the major stakeholders,
namely the perspectives of learners, in our understanding of LAL.
Based on a review of the current LAL models and corresponding
empirical studies, we address the importance of incorporating learn-
ers’ perspectives on language assessment in the conceptualization of
LAL. We also suggest that diversifying research goals and methods can
better reflect language learners’ perspectives.

LITERATURE REVIEW

Current Theoretical Models


While there is some disagreement with respect to the detailed com-
positional nature of LAL, the general consensus is that LAL is critical
for stakeholders, and that it can greatly influence stakeholders’ educa-
tional decisions related to assessment (Fulcher, 2012; Levi & Inbar-
Lourie, 2019). What stakeholders need to know is also largely agreed
upon; they need to have (1) knowledge of language assessment theo-
ries and context, (2) practical skills to develop and interpret assess-
ment, and (3) understanding of the social consequences of assessment
(including ethics and fairness of assessment). This consensus is
well represented in major LAL models.

The concept of LAL was first introduced in Brindley’s (2001) sug-
gestion for curriculum development in language teacher education.
He argued that teachers’ understanding of social consequences and
contextual information of language assessment needs to be the core of
teacher education curriculum. Aligned well with Brindley's view of
centralizing teachers' understanding of social and political consequences
and the role of sociocultural context in education, Inbar-Lourie
(2008) conceptualized LAL as the what, the how, and the why, placing
a strong emphasis on the why. She emphasized the social and political
consequences of language assessment in relation to context where lan-
guage assessment is implemented. Her model focused primarily on
classroom-based assessment, placing the teachers’ role in developing,
implementing, and interpreting classroom-based language assessment
at the center of the model. Inbar-Lourie posited that LAL should
reflect the epistemological shift in research from positivism to inter-
pretivism. Interpretivism considers reality as a socially constructed
product. Inbar-Lourie suggested that language assessment conse-
quences should be interpreted in relation to the educational context
where the language assessment was implemented.
Inbar-Lourie’s (2008) conceptualization of LAL was supported by
Davies’s model of LAL, which consisted of knowledge, skills, and princi-
ples (Davies, 2008). In his model, knowledge referred to stakeholders'
understanding of language learning theories, measurement theories, and
the contexts where language assessments are implemented. Skills referred to education
stakeholders’ ability to develop, administer, and use testing technol-
ogy, as well as conduct statistical analyses and interpret testing results.
Finally, principles meant one’s understanding of proper uses, impact,
fairness, and ethics of language assessment. Lee (2019) attempted to
connect learning and assessment more specifically. She placed lan-
guage teachers’ understanding of language learning and teaching the-
ories and practices at the core of LAL. She further argued that
teachers’ LAL should start from their knowledge about learners’ lan-
guage learning (e.g., learner characteristics, language learning pro-
cesses, characteristics of language forms) and should influence their
decision about learners’ language learning.
While acknowledging the importance of classroom-based assessment,
Fulcher (2012) expressed his concern that Inbar-Lourie’s interpretative
approach to LAL may potentially mislead those who have less experi-
ence with theoretical accounts of language assessment. According to Ful-
cher, Inbar-Lourie’s strong emphasis on LAL as being socially
constructed may lead such stakeholders to overlook psychometric
aspects of language assessment and develop a misunderstanding of LAL.
Unlike other primarily deductive models, Fulcher inductively devel-
oped an LAL model based on closed- and open-ended survey
responses from 278 in-service language teachers. His model included
layers of context, principles, and practices. In Fulcher’s model, context
referred to stakeholders’ ability to situate language assessment in his-
torical, political, and philosophical context to better understand lan-
guage assessment’s roles. Principles included stakeholders’
understanding of assessment processes, theoretical accounts, and ethics
of language assessment practices. Practices encompassed stakeholders’
knowledge, skills, and abilities to design, develop, maintain, and ana-
lyze not only large-scale standardized but also classroom-based assess-
ment (Fulcher, 2012).
The LAL models discussed above differed in the epistemological
frameworks they were based on and in their detailed compositions of
LAL. Nonetheless, these models all recognized that LAL is multi-
faceted and consists of not only theoretical accounts of language
assessment but also the practical skills, as well as understanding of the
sociocultural-political-ethical consequences of language assessment.
Although these models are mainly concerned with teachers’ LAL,
limited but meaningful attempts were made to include other stake-
holders in the discussions of LAL (Malone, 2017; Taylor, 2013). Taylor
(2013) categorized stakeholders into three groups who engage in lan-
guage assessment: test developers and researchers as a core group; lan-
guage teachers and course instructors as an intermediary group; and
the general public and policy makers as a peripheral group in relation
to the degrees of the required LAL. She examined each group’s LAL
needs in terms of the following eight components: knowledge of the-
ory, scores and decision making, personal beliefs/attitudes, local prac-
tices, technical skills, principles and concepts, language pedagogy, and
sociocultural values (p. 410). Taylor argued that not every component
should be equally emphasized for every stakeholder, and that ideal
LAL profiles should differ depending on the stakeholders’ needs and
involvement in language assessment. For example, Taylor suggested
that professional language test makers need to have solid and bal-
anced understandings of all eight aspects of LAL, whereas classroom
teachers need firm understanding of language pedagogy while knowl-
edge of assessment theory is less critical. Taylor’s arguments to con-
sider diverse stakeholders’ LAL development certainly broadened the
scope of LAL discussions. Nonetheless, students were not considered
among the stakeholder groups in her model.
The limited focus on test-takers in the current LAL discussions was
addressed by Malone in her plenary talk at East Coast Organization of
Language Testers in 2017. Malone stated that LAL studies largely
ignored test-takers or obtained information about test-takers' experi-
ences through the eyes of their teachers. She argued that students
have the right to understand the purpose, reasons, and logistics, as
well as decisions regarding assessment, and that their perspectives can
also enhance the validity of tests. Malone also discussed possible logis-
tical challenges associated with inviting test-takers’ views due to their
diverse characteristics and limited experiences.
In essence, the existing models primarily concern LAL for teachers,
testing professionals, and administrators. This tendency has critically
impacted the directions of empirical research studies.

Examining General Trends in Empirical Research

Using keywords including language assessment literacy and assessment
literacy + language on LLBA, ERIC, WorldCat, and ProQuest databases,
the authors searched for peer-reviewed journal articles and book chap-
ters of empirical studies. The search yielded 52 empirical studies on
LAL. The inclusion criteria we used were as follows:
1. Studies that investigated language stakeholders’ current status,
development, improvement, or needs in LAL.
2. Studies that used LAL or AL models to analyze the data or
studies that discussed their findings in relation to participants’
LAL or AL.
3. Studies that described participants (language stakeholders) and
data collection methods and/or instruments.
4. Studies that framed themselves as LAL studies.
5. Studies that were published between 2001 and Fall 2019.
Out of 52 studies, 46 studies (88.4%) investigated preservice and/or
in-service teachers’ LAL (e.g., Fulcher 2012; Levi & Inbar-Lourie, 2019;
Vogt & Tsagari, 2014). In-service teachers' LAL has been the main focus
(N = 37, 71.2%) of LAL research. These studies often revealed preservice
and in-service teachers' lack of LAL and pointed to a great need to
improve their LAL. Studies also commonly examined how preservice
and in-service language teachers developed LAL when they
enrolled in a language assessment course or a professional workshop
(e.g., Baker & Riches, 2018; Lee, 2019; Walters, 2010). A small number
of studies (N = 5, 9.6%) looked at other stakeholders' LAL, including
university admission officers, employers, and policymakers (e.g., Deygers &
Malone, 2019; O’Loughlin, 2013; Pan & Roever, 2016; Pill & Harding,
2013) or compared language teachers’ LAL with those of other stake-
holders (e.g., Malone, 2013). These studies typically document stake-
holders’ misconceptions or misinterpretations of purposes or uses of
language assessment, which in turn provides a strong rationale for offer-
ing LAL education to varied stakeholders as well as teachers.



With respect to data collection methods, surveys and interviews are
prominent (N = 37, 71%). Curiously, inventories, which are often
used in other fields of educational research, are not frequently used in
studies concerning LAL, perhaps reflecting the lack of any valid LAL
inventories. The limited number of inventories used in LAL studies
included a translated version of the Teachers’ Conceptions of Assess-
ment Inventory developed by Brown, Hui, Yu, and Kennedy (2011) or
the Assessment Literacy Inventory developed by Mertler and Campbell
(2005), among others. These inventories presented scenarios concerning
assessment-related issues and asked for stakeholders' perspectives on
the assessment practices in the scenarios. Other research methods,
although comparatively rare, included quality analyses of teacher-made
assessments (Koh, Burke, Luke, Wengao, & Tan, 2018; Levi & Inbar-
Lourie, 2019), think-aloud protocols (Yastibas & Takkaç, 2018), and a
metaphor analysis (Nimehchisalem & Nur Izyan Syamimi, 2018).
The Teachers’ Assessment Literacy Enhancement Project (Tsagari
et al., 2018) is worth mentioning here due to its large scale and potential
impact on the discussion of learners. This 3-year project, involving
language assessment experts from five European countries, was initi-
ated to train language teachers regarding assessment to better serve
language learners. The project’s philosophy posited that teachers’ LAL
is inclusive of their understanding of their learners. This project con-
ducted an extended needs analysis of in-service language teachers and
developed language assessment courses and a handbook for teachers.
Tsagari and her colleagues, however, did not report empirical evi-
dence indicating that the training did change the participating teach-
ers’ awareness of learners.
Finally, two other studies have de facto examined learners’ percep-
tions and understanding of assessment, although the authors did not
frame their studies as LAL research. One such study was conducted by Sato and
Ikeda (2015), who examined how Japanese and Korean university stu-
dents understood constructs of items in very high-stakes university
entrance exams. Study participants were asked to select one pre-identi-
fied ability that each testing item was intended to measure. Sato and
Ikeda found that the agreement between learners’ and researchers’
identification of constructs was not very high (71.8% for Japanese
learners and 59.1% for Korean learners), suggesting that many learners
have insufficient understanding of what the high-stakes tests actually
measure—tests for which these learners have invested substantial time
and effort to prepare.
Another study that examined learners’ perspectives of language
assessment is Vlanti (2012). In this study, Vlanti surveyed Greek junior
high school students and their teachers, comparing their perceptions
towards assessment practice. Multiple discrepancies between the two
groups were evident. Vlanti noted that teachers believed that their
assessment clearly reflected the curriculum requirement. Many also
believed that it may be useful to invite their students when developing
assessment, but in practice they rarely did. The students were mainly
concerned with grades (i.e., quantified summative results of the assessment)
and rarely felt they had received sufficient information on test
contents and formats, contrary to what the teachers claimed to provide.
Vlanti suggested that both teachers and learners need to better understand
the purpose of assessment and how learners could appropriately
engage with assessment procedures to gain greater
autonomy over their own learning.
When expanding our search to include studies that involved learn-
ers’ perspectives in language assessment outside of LAL research, we
found several studies on self- and peer-assessment (e.g., Butler, 2018;
Butler & Lee, 2006, 2010; Cohen & Upton, 2006; Ma & Winke, 2019).
Examining learner performance in self- and peer-assessments, these
studies highlighted and explored learners’ cognitive processes, deci-
sion-making processes, or test-taking strategies during the assessment.
We also found that language assessment validation studies began to
incorporate learners’ perspectives (e.g., Cheng & DeLuca, 2011; Fox &
Cheng, 2007). Cheng and DeLuca (2011), for example, analyzed learn-
ers’ assessment-taking experience in order to validate the administra-
tion and interpretation of assessment. They showed that learners are
capable of providing critical information to improve language assess-
ment practice.
The self- and peer-assessment studies, and the validation studies
mentioned above, certainly enhanced our understanding of learners’
cognitive and operative processes regarding language assessment. Fur-
thermore, the findings of these studies led us to believe that strong
validity arguments can come from test-takers’ participation in the test
development process (Malone, 2017). However, the results were not
discussed from the LAL point of view; none of the studies addressed
learners’ meta-perspectives of assessment as a critical part of teachers’
and test developers’ LAL.
After we completed the literature review above, we realized that a
few more studies had been published; however, they did not specifi-
cally investigate learners. One study that attempted to incorporate
learners in relation to LAL was Kremmel and Harding (2020). Krem-
mel and Harding tested if Taylor’s (2013) proposals on differentiated
LAL profiles were empirically supported. The results indicated that
her LAL profiles for test developers, researchers, and teachers were
largely supported. Their large-scale survey did include test-takers.
However, because test-takers comprised only 2.8% of the entire
data set, they were excluded from the analysis.



HOW STUDENTS’ VOICES CAN ENHANCE OUR
UNDERSTANDING OF LAL

It is a fundamental premise of the learner-centered approach to
place learners’ needs at the core of language instruction as well as at
the starting point of language learning. It is also essential to make
clear alignment among language instruction, learning, and assessment,
and to incorporate learners’ voices in language assessment. Meanwhile,
existing LAL models have largely been developed in a theory-driven,
deductive manner from the perspectives of test developers,
researchers, or teachers. In light of this situation, we suggest incorporating
bottom-up approaches to LAL. In doing so, we acknowledge that
it is challenging to invite learners into theory building and empirical
research on LAL. As Malone (2017) suggested, learners are not always
fully trained to share their experiences of language assessment
practice. Furthermore, the contexts that individual learners experience
vary widely, which makes it harder to develop generalizable LAL
models.
Taking learners’ voices more seriously, however, can allow us to
reexamine our current understanding of LAL. For example, if assess-
ment knowledge and practice are necessary for learners, they may
need to be presented differently from those of other stakeholders,
depending on their developmental levels (e.g., age) and experiences.
Learners’ voices in assessment may direct our attention to other ele-
ments of LAL, for which we may wish to revise existing LAL models to
incorporate new ideas gained from learners or even develop a separate
learner-specific LAL model. Incorporating learners’ voices in LAL may
give us a more direct pathway into the accountability discourse of educa-
tional policies. Instead of focusing on how learners can develop opera-
tive skills when setting and comprehending assessment criteria, as has
been done in the past, researchers and educators may also wish to
focus on how to nurture students’ awareness of social and ethical con-
siderations in language use and assessment. This approach may be bet-
ter aligned with the spirit of the learner-centered approach to
language education.
While there is no doubt that we need more empirical studies con-
cerning learners’ perspectives and understanding of language assess-
ment, the limited information available to date has already given us
some valuable insights. First, it is vital that learners be sufficiently
informed about language assessment; namely, to make the language
assessment experience fair, it is necessary for learners to understand
the purposes, goals, and constructs of language assessment. As Sato
and Ikeda’s study (2015) implies, learners’ understanding of
assessment constructs has the potential to influence their learning to a
great extent. Second, given the discrepancies in perceptions towards
language assessment between learners and their teachers, as Vlanti
(2012) notes, it is imperative for teachers and learners to have suffi-
cient communication about purposes, goals, constructs, and expected
consequences of language assessment; the teachers’ role in this pro-
cess appears to be critical. Teachers should create opportunities for
their students to discuss and negotiate the purposes and goals of the
assessment and the consequences of assessment in students’ learning.
Thus, one’s ability to negotiate with other stakeholders to maximize the
use of assessment to enhance students’ learning should be an important
element of LAL. As a way to enhance communication between learners
and teachers, teachers can proactively invite learners to be part of the
assessment processes and help learners understand the connections
between learning and assessment; such transparency in learning and
assessment would not only clarify their learning goals but also give learn-
ers greater agency and empowerment in their own learning.
With respect to research methodology, in addition to surveys and
interviews, the field of language assessment can benefit from incorpo-
rating other types of methods. The surveys and interviews employed in
existing studies were often conducted in somewhat holistic or decon-
textualized ways. However, individuals’ responses to decontextualized
questions may differ when they are asked to respond in a more con-
textualized fashion (Butler & Lee, 2006), depending on the respon-
dents’ age, learning and teaching contexts, and the power relations
between learners and their teachers, as well as between the researchers
and the respondents. Talking about or responding to assessment-related
questions may be particularly challenging for young learners. Because
of their stage of cognitive and affective development, as well as their limited
relevant experience, young learners may not be used to the metalanguage
which often appears in existing survey items and interview questions
(Christensen & James, 2017).
As an example of a contextualized approach to LAL, we can adapt
approaches used in the previous studies in assessment and other fields
of applied linguistics. For instance, teachers can invite learners to partic-
ipate in constructing assessment (e.g., Brown, 1994; Dann, 2002). Such
activities allow teachers to understand what students consider important
in the relevant instruction. They also provide the students with valuable
opportunities to reflect on what is important to learn while taking a tea-
cher’s perspective into consideration. By letting stakeholders engage in
assessment-related activities, researchers can observe how learners and
teachers interact and negotiate the goals and means of learning through
assessment in a concrete fashion. Similarly, incorporation of think-aloud
and stimulated recall when learners engage in concrete assessment items
can be encouraged in research on LAL, although they have their own
limitations (e.g., Ericsson & Simon, 1996). Collecting information on learners’
online processing during assessment has been a popular approach in
validation research, and such information can be more systematically
used to better understand stakeholders’ LAL.

CONCLUSION
Reviewing major existing LAL models and previous empirical studies,
we found that existing research on LAL has primarily focused on teach-
ers, while learners’ perspectives and voices are largely missing in its con-
ceptualization. We also found that previous research heavily relies on
surveys and interviews, and moreover, such surveys and interviews are
often conducted in somewhat holistic and decontextualized ways.
Based on these findings, the following suggestions can be made:
1. Learners’ perspectives and voices should be more seriously and
systematically incorporated to better understand LAL. Learners
are the central stakeholders, and LAL is important for learners,
in addition to teachers and other stakeholders, as current schol-
arship notes. We can expect that assessment-informed learners
will be more autonomous in their own learning. A comprehen-
sive understanding of LAL cannot be achieved without taking
the learners’ perspective seriously; a more learner-centered
approach is necessary in LAL research. A learner-centered
approach to LAL is critical not only for theory building but also
for assessment practice. Moreover, it can also shed light on
other educational practices beyond assessment, such
as teacher education and curriculum development. The remain-
ing question is how to incorporate learners’ voices in theoretical
discussions of LAL. Extensive empirical studies are needed to
decide whether there is room for learners’ voices in the current
LAL models by expanding definitions and scopes of LAL com-
ponents. Alternatively, we may need to develop a new LAL
model that connects both teachers’ and learners’ LAL.
2. While information on learners’ perspectives on language assess-
ment remains limited at this point, the information currently
available informs us that communication between teachers and
learners is critical for the development of comprehensive LAL.
Teachers should not only serve as facilitators, but also as negotia-
tors to reflect their learners’ perspectives in language assessment
development. One’s ability to communicate with other stakehold-
ers on assessment purpose and roles in learning appears to be an
important component of LAL, and thus it needs to be high-
lighted in the existing conceptualization of LAL.
3. A greater variety of research methods should also be encour-
aged. Given the relatively heavy reliance on surveys and inter-
views in previous studies, broadening data collection methods
would be beneficial not only for obtaining more information on
LAL but also for triangulating the data collected by other
means. For researchers interested in young learners, careful
consideration is necessary to see if the methods widely used in
LAL research, particularly surveys, are indeed age-appropriate,
and whether survey data is therefore valid and reliable. Future
LAL research can benefit from incorporating research methods
that have been already employed in other assessment research
(e.g., think-aloud in validation studies).
In conclusion, we believe that inviting learners’ perspectives in LAL
research while broadening data collection methods will enhance our
understanding of LAL and promote a greater degree of learner-centered
educational practice.

ACKNOWLEDGMENT

We would like to thank the anonymous reviewers as well as Constant Leung for
their very constructive feedback on this article. Any remaining errors in this article
are our own.

THE AUTHORS

Jiyoon Lee is an assistant professor of education at the University of Maryland,
Baltimore County (UMBC). She conducts research on learner-centered assessment,
language assessment literacy, and teacher education. She currently teaches courses
on linguistics and language assessment to preservice and in-service teachers in
UMBC’s TESOL program.

Yuko Goto Butler is a professor of educational linguistics in the Graduate School
of Education at the University of Pennsylvania. She is also the director of the
Teaching English to Speakers of Other Languages (TESOL) Program. Her
research interests include language assessment and second and foreign language
learning among children.

REFERENCES
Baker, B. A., & Riches, C. (2018). The development of EFL examinations in Haiti:
Collaboration and language assessment literacy development. Language Testing,
35(4), 557–581. https://doi.org/10.1177/0265532217716732



Brindley, G. (2001). Language assessment and professional development. In C.
Elder, A. Brown, K. Hill, N. Iwashita, T. Lumley, T. McNamara, & K.
O’Loughlin (Eds.), Experimenting with uncertainty: Essays in honour of Alan Davies
(pp. 126–136). Cambridge, England: Cambridge University Press.
Brown, G. T. L., Hui, S. K. F., Yu, F. W. M., & Kennedy, K. J. (2011). Teachers’
conceptions of assessment in Chinese contexts: A tripartite model of account-
ability, improvement, and irrelevance. International Journal of Educational
Research, 50(5–6), 307–320. https://doi.org/10.1016/j.ijer.2011.10.003
Brown, H. D. (1994). Teaching by principles: An interactive approach to language peda-
gogy. Englewood Cliffs, NJ: Prentice-Hall.
Butler, Y. G. (2018). The role of context in young learners’ processes for respond-
ing to self-assessment items. Modern Language Journal, 102, 242–261. https://doi.
org/10.1111/modl.12459
Butler, Y. G., & Lee, J. (2006). On-task versus off-task self-assessment among Kor-
ean elementary school students studying English. Modern Language Journal, 90,
506–518. https://doi.org/10.1111/j.1540-4781.2006.00463.x
Butler, Y. G., & Lee, J. (2010). The effects of self-assessment among young learners
of English. Language Testing, 27(1), 5–32. https://doi.org/10.1177/
0265532209346370
Cheng, L., & DeLuca, C. (2011). Voices from test-takers: Further evidence for lan-
guage assessment validation and use. Educational Assessment, 16(2), 104–122.
https://doi.org/10.1080/10627197.2011.584042
Christensen, P., & James, A. (Eds.). (2017). Research with children: Perspectives and
practices (3rd ed.). London, England: Routledge.
Cohen, A. D., & Upton, T. A. (2006). Strategies in responding to new TOEFL reading
tasks (TOEFL Monograph No. MS-33). Princeton, NJ: Educational Testing Ser-
vice.
Dann, R. (2002). Promoting assessment as learning: Improving the learning process. Lon-
don, England: Routledge.
Davies, A. (2008). Textbook trends in teaching language testing. Language Testing,
25(3), 327–347. https://doi.org/10.1177/0265532208090156
Deygers, B., & Malone, M. (2019). Language assessment literacy in university
admission policies, or the dialogue that isn’t. Language Testing, 36(3), 347–368.
https://doi.org/10.1177/0265532219826390
Ericsson, K., & Simon, H. (1996). Protocol analysis: Verbal reports as data (3rd ed.).
Cambridge, MA: MIT Press.
Fox, J., & Cheng, L. (2007). Did we take the same test? Differing accounts of the
Ontario secondary school literacy test by first and second language test-takers.
Assessment in Education: Principles, Policy and Practice, 14(1), 9–26. https://doi.
org/10.1080/09695940701272773
Fulcher, G. (2012). Assessment literacy for the language classroom. Language
Assessment Quarterly, 9(2), 113–132. https://doi.org/10.1080/15434303.2011.
642041
Inbar-Lourie, O. (2008). Constructing an assessment knowledge base: A focus on
language assessment courses. Language Testing, 25(3), 385–402. https://doi.
org/10.1177/0265532208090158
Koh, K., Burke, L. E. C., Luke, A., Wengao, G., & Tan, C. (2018). Developing the
assessment literacy of teachers in Chinese language classrooms: A focus on
assessment task design. Language Teaching Research, 22(3), 264–288. https://doi.
org/10.1177/1362168816684366
Kremmel, B., & Harding, L. (2020). Towards a comprehensive, empirical model of
language assessment literacy across stakeholder groups: Developing the
Language Assessment Literacy Survey. Language Assessment Quarterly, 17(1), 100–
120. https://doi.org/10.1080/15434303.2019.1674855
Lambert, C., Philp, J., & Nakamura, S. (2017). Learner-generated content and
engagement in second language task performance. Language Teaching Research,
21, 665–680. https://doi.org/10.1177/1362168816683559
Lee, J. (2019). A training project to develop teachers’ assessment literacy. In E.
White & T. Delaney (Eds.), Handbook of research on assessment literacy and teacher-
made testing in the language classroom (pp. 58–80). Hershey, PA: IGI Global.
https://doi.org/10.4018/978-1-5225-6986-2.ch004
Levi, T., & Inbar-Lourie, O. (2019). Assessment literacy or language assessment lit-
eracy: Learning from the teachers. Language Assessment Quarterly, 17(2), 168–
182. https://doi.org/10.1080/15434303.2019.1692347
Ma, W., & Winke, P. (2019). Self-assessment: How reliable is it in assessing oral
proficiency over time? Foreign Language Annals, 52, 66–86. https://doi.org/10.
1111/flan.12379
Malone, M. (2013). The essentials of assessment literacy: Contrasts between testers
and users. Language Testing, 30(3), 329–344. https://doi.org/10.1177/
0265532213480129
Malone, M. (2017, October). Unpacking language assessment literacy: Differentiating
needs of stakeholder groups. Paper presented at East Coast Organization of Lan-
guage Testers, Washington, DC.
Mertler, C. A., & Campbell, C. (2005). Measuring teachers’ knowledge and
application of classroom assessment concepts: Development of the “Assessment
Literacy Inventory.” Paper presented at the annual meeting of the American
Educational Research Association, Montréal, Quebec, Canada.
Nimehchisalem, V., & Nur Izyan Syamimi, M. H. (2018). Postgraduate students’
conception of language assessment. Language Testing in Asia, 8(1), 1–14.
https://doi.org/10.1186/s40468-018-0066-3
Nunan, D. (1988). The learner-centered curriculum. Cambridge, England: Cambridge
University Press.
O’Loughlin, K. (2013). Developing the assessment literacy of university proficiency
test users. Language Testing, 30(3), 363–380. https://doi.org/10.1177/
0265532213480336
Pan, Y., & Roever, C. (2016). Consequences of test use: A case study of employers’
voice on the social impact of English certification exit requirements in Taiwan.
Language Testing in Asia, 6(1), 1–21. https://doi.org/10.1186/s40468-016-0029-5
Pill, J., & Harding, L. (2013). Defining the language assessment literacy gap: Evi-
dence from a parliamentary inquiry. Language Testing, 30(3), 381–402. https://
doi.org/10.1177/0265532213480337
Sato, T., & Ikeda, N. (2015). Test-taker perception of what test items measure: A
potential impact of face validity on student learning. Language Testing in Asia, 5
(1), 1–16. https://doi.org/10.1186/s40468-015-0019-z
Taylor, L. (2013). Communicating the theory, practice and principles of language
testing to test stakeholders: Some reflections. Language Testing, 30(3), 403–412.
https://doi.org/10.1177/0265532213480338
Tsagari, D., Vogt, K., Froelich, V., Csepes, I., Fekete, A., Green, A., Hamp-Lyons,
L., Sifakis, N., & Kordia, S. (2018). Handbook of assessment for language teachers.
Nicosia, Cyprus: Teachers’ Assessment Literacy Enhancement (TALE).
Retrieved from http://taleproject.eu/.
Vlanti, S. (2012). Assessment practices in the English language classroom of Greek
junior high school. Research Papers in Language Teaching and Learning, 3(1), 92–
122.
Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers:
Findings of a European study. Language Assessment Quarterly, 11, 374–402.
https://doi.org/10.1080/15434303.2014.960046
Walters, F. (2010). Cultivating assessment literacy: Standards evaluation through
language-test specification reverse engineering. Language Assessment Quarterly, 7,
317–342. https://doi.org/10.1080/15434303.2010.516042
Yastibas, A. E., & Takkaç, M. (2018). Understanding language assessment literacy:
Developing language assessments. Journal of Language and Linguistic Studies,
14(1), 178–193.