Third Group of the Principles of Assessment


WRITING REPORT CHAPTER REVIEW

H. D. BROWN’S BOOK

Written by:
ALIYA IZET BEGOVIC YAHYA
EVI MALA WIJAYANTI
NAFRIANTI
RAIHANAH PERMATA SARI

MASTER’S PROGRAM IN ENGLISH


FACULTY OF LANGUAGE AND ART
STATE UNIVERSITY OF JAKARTA
2018
The identity of the book:

Title     : Language Assessment: Principles and Classroom Practices
Author    : H. D. Brown
Publisher : Pearson Education
Year      : 2004
Pages     : 324 pages
ISBN      : 0130988340, 9780130988348

A. Summary
Assessment is an ongoing process that encompasses a much wider domain than a test
(Brown, 2004, p. 4). This means that all of a student’s performance, whether written or spoken,
counts as evidence. The teacher therefore does not rely only on test scores; what happens
throughout classroom activity is also being assessed.
Regarding assessment, Brown explores the principles of assessment, which he divides
into five types: practicality, reliability, validity, authenticity, and washback. These five principles
should be applied in any assessment.
First, practicality relates to factors such as cost, time, administration, and
scoring/evaluation (Brown, 2004, p. 19). It concerns the relationship between the resources
required to design, develop, and use the test and the resources that will be available for
assessment.
Second, reliability refers to the extent to which a test produces consistent scores across
different administrations to similar groups of test-takers. Reliability is divided into four types:
student-related reliability, rater reliability, test administration reliability, and test reliability.
The first, student-related reliability, concerns psychological and physical factors, such as illness,
fatigue, or simply a bad day, that can mask a test-taker’s true score and yield only an ‘observed
score’; in other words, students’ performances may not fully reflect their ability during the test.
The second, rater reliability, comes in two forms that can affect the assessment: inter-rater
reliability and intra-rater reliability, which concern factors external and internal to the rater,
respectively. The third, test administration reliability, refers to the conditions under which the
test is administered, such as noise, the amount of light, or variations in temperature. The last,
test reliability, means that the test itself should fit the time constraints, being neither too long
nor too short, and should be clearly written.
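These distinctions can be made concrete with a small calculation. Inter-rater reliability, for example, is commonly estimated by correlating two raters’ scores on the same set of performances. The illustration below is not from Brown’s book, and the scores and rater names are hypothetical; it is only a minimal sketch of the idea in Python:

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical essay scores given by two raters to the same six students.
rater_a = [78, 85, 62, 90, 74, 88]
rater_b = [80, 83, 65, 92, 70, 86]

r = pearson(rater_a, rater_b)
print(f"inter-rater reliability estimate: r = {r:.2f}")
```

A value near 1.0 suggests the two raters rank the performances consistently; markedly lower values would signal an inter-rater reliability problem worth investigating (for example, by clarifying the scoring rubric).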
Third, validity concerns the extent to which inferences made from assessment results are
appropriate, meaningful, and useful in terms of the purpose of the assessment. To establish
validity, we have to consider five types of validity: content validity, criterion validity,
construct validity, consequential validity, and face validity. The first, content validity,
concerns the relationship between what the test actually measures and the conclusions to be
drawn from it; for example, in assessing listening, the teacher can use a multiple-choice test.
The second, criterion validity, refers to the extent to which performance on a test relates to a
criterion that is an indicator of the ability being tested; for example, a communicative
classroom test gains criterion validity if its scores are supported by other communicative
measures of the same grammar points. Criterion validity falls into two categories: concurrent
validity, in which test scores are supported by other concurrent performance, and predictive
validity, which concerns a prediction of a test-taker’s likelihood of future success. The third,
construct validity, is the extent to which a test actually taps into the theoretical construct as it
has been defined; proficiency and communicative competence, for example, are linguistic
constructs. The fourth, consequential validity, concerns the positive or negative consequences
of a particular test. The last, face validity, is the extent to which students view the assessment
as fair, relevant, and useful for improving learning; because it rests on the test-taker’s point of
view, it is more subjective than the other types of validity.
Fourth, authenticity refers to the degree of correspondence between the characteristics
of a given language test task and the features of a target language task; the aim is to identify
those target language tasks and transform them into valid test items. In short, a task is
authentic if it is likely to be enacted in the real world. In a test, authenticity may therefore be
present in the following ways: the language is as natural as possible; items are contextualized;
topics are meaningful for the learner; some thematic organization of items is provided, such as
a story line or episode; and tasks closely represent real-world tasks.
Fifth, washback refers to the effects that tests have on instruction in terms of how
students prepare for the test; it can be seen as a facet of consequential validity. It also covers
the effects of an assessment on teaching and learning prior to the assessment itself, such as
test preparation. To enhance washback, the teacher should comment generously and
specifically on test performance, that is, give feedback, which is far more useful than a single
letter grade or numerical score.

B. Review of Brown’s Book


The second chapter reviews five principles of testing: practicality, reliability, validity,
authenticity, and washback, along with the terms subsumed under them, such as rater
reliability and content validity. However, the chapter lacks discussion of how reliability can
actually be estimated, and reliability is not discussed anywhere else in the book. It is
questionable whether readers can fully appreciate and understand the meaning of test
consistency if they are not taught to envision how reliability can be estimated through basic
statistics. What H. D. Brown does do very well in this chapter is familiarize novice teachers
with strategies they can use in the classroom to make tests more user-friendly and less
anxiety-inducing. Unlike most language testing professionals, H. D. Brown uses the term
washback to refer primarily to various forms of feedback from tests in a classroom setting.
Pre-service and in-service teachers, and even undergraduate students in education, need to
know how to design language tests in the four skills. H. D. Brown’s book oversimplifies some
definitions, and a few weeks into a semester-long course on language testing, a student may
want more information on some of testing’s more controversial issues.

C. The supplementary books

Here are some books that also discuss the principles of assessment:
• Anderson, L. W. 2003. Classroom assessment. London: LEA Publisher
• Earl, L. M., & Katz, S. 2006. Rethinking classroom assessment with purpose in
mind: Assessment for learning, assessment as learning, assessment of learning. Manitoba
Education
• Russell, M. K., & Airasian, P. W. 2012. Classroom assessment: Concepts and
applications. McGraw Hill
• Stufflebeam, D. L., & Coryn, C. L. S. 2014. Evaluation theory, models, and
applications. Jossey-Bass
For more detail about the principles from these authors, a summary of each book follows.

1. (Russell & Airasian, 2012)


Generally, this book discusses classroom assessment with a focus on concepts and applications.
Assessment is a daily, ongoing, integral part of teaching and learning. Classroom Assessment:
Concepts and Applications explores how assessment is a key component of all aspects of the
instructional process, including organizing and creating a classroom culture, planning lessons,
delivering instruction, and examining how students have grown as a result of instruction. The text
also introduces preservice teachers to new tools and approaches to classroom assessment that
result from the infusion of computer-based technologies in schools. It presents itself as the most
teacher-friendly assessment textbook available, one that will inform a teacher’s assessment
practices for years to come.
Assessment is discussed in more detail in Chapter 2. Here is the explanation of
assessment proposed by Russell and Airasian (2012):

1. THE IMPORTANCE OF CLASSROOM ASSESSMENT


Assessment is an essential component of teaching. Assignments are used to provide
chances for students to develop knowledge and skills, and also to provide teachers with insight
into the challenges students are encountering (Russell & Airasian, 2012, p. 2).
Classroom assessment is the process of collecting, synthesizing, and interpreting information
to aid in classroom decision making. Classroom assessment takes many forms and is a
continuous process that helps teachers make decisions about classroom management,
instruction, and students (Russell & Airasian, 2012, p. 2).
2. PURPOSE OF CLASSROOM ASSESSMENT
According to Lopez, the three major domains are:
1. Cognitive Domain: encompasses intellectual activities such as memorizing, interpreting,
applying knowledge, solving problems, and critical thinking.
2. Affective Domain: involves feelings, attitudes, values, interests, and emotions.
3. Psychomotor Domain: includes physical activities and actions in which students must
manipulate objects such as a pen, a keyboard, or a zipper.

While the purposes of assessment are:

1. Establishing a classroom that supports learning
2. Planning and conducting instruction (placing students and providing feedback)
3. Diagnosing student problems and disabilities
4. Summarizing and grading academic learning and progress

3. PHASE OF CLASSROOM ASSESSMENT


The types of decisions teachers make based on assessment information can be
categorized into three general phases: early assessments, instructional assessments, and
summative assessments.

1. Purpose
- Early assessments: provide the teacher with a quick perception and practical knowledge.
- Instructional assessments: plan instructional activities and monitor the progress of instruction.
- Summative assessments: carry out the bureaucratic aspects of teaching.

2. Timing
- Early assessments: during the first week or two of school.
- Instructional assessments: daily throughout the school year.
- Summative assessments: periodically during the school year.

3. Evidence-gathering method
- Early assessments: largely informal observation.
- Instructional assessments: formal observation and student papers for planning; informal
observation for monitoring.
- Summative assessments: formal tests, papers, reports, quizzes, and assignments.

4. Type of evidence
- Early assessments: cognitive, affective, and psychomotor.
- Instructional assessments: largely cognitive and affective.
- Summative assessments: mainly cognitive.

5. Record keeping
- Early assessments: information kept in the teacher’s mind; few written records.
- Instructional assessments: written lesson plans; monitoring information not written down.
- Summative assessments: formal records kept in the teacher’s grade book or school files.

4. ASSESSMENT, TESTING, MEASUREMENT, & EVALUATION

1. Assessment: a process of collecting, synthesizing, and interpreting information in order to
make a decision. Depending on the decision being made and the information a teacher needs
in order to inform that decision, testing, measurement, and evaluation often contribute to the
process of assessment.
2. Testing: a formal, systematic procedure used to gather information about students’
achievement or other cognitive skills.
3. Measurement: a process of quantifying or assigning a number to a performance or trait, for
example when a teacher scores a quiz or test.
4. Evaluation: a product of assessment that produces a decision about the value or worth of a
performance or activity, based on information that has been collected, synthesized, and
reflected on.

5. THREE GENERAL WAYS TO COLLECT DATA


Teachers rely on three primary methods to gather assessment information for classroom
decisions:

1. Student Products. Among the many products students produce are homework, written
assignments completed in class, worksheets, essays, book reports, science projects, lab
reports, artwork, and portfolios, as well as quizzes and tests.

2. Observation Techniques. Observation is a second major method classroom teachers use to
collect assessment data. It involves watching or listening to students carry out a specific
activity or respond in a given situation. For example, a teacher assesses students when they
read aloud in a reading group; because such observations are planned, the teacher has time to
prepare the students and identify in advance the particular behaviors that will be observed.

3. Oral Questioning Techniques. These involve observing students through questioning about
the material provided before, for example: “Why do you think the author ended her story that
way?”, “Explain to me in your own words what the definition of a recount text is.”, or “Raise
your hand if you can tell me why the answer is incorrect.” Teachers use such questions to
collect information from students during and at the end of a lesson, to find out how the lesson
is being understood, and to engage students who are not paying attention.
6. STANDARDIZED AND NON-STANDARDIZED ASSESSMENT

a. STANDARDIZED ASSESSMENT
Administered, scored, and interpreted in the same way for all students, regardless of where
and when they are assessed. The main reason for standardizing assessment procedures is to
ensure that the testing conditions and scoring procedures have a similar effect on the
performance of students in different schools and states.

b. NON-STANDARDIZED ASSESSMENT
Constructed for use in a single classroom with a single group of students. It is important to
know that standardized tests are not necessarily better than non-standardized ones.
Standardization is only important when information from an assessment instrument is to be
used for the same purpose across many different classrooms and locations.

7. VALIDITY AND RELIABILITY

The appropriateness of an assessment is determined by its validity and reliability.

Key Aspect of Assessment Validity


1. Validity is concerned with this general question: “To what extent is this decision based on
appropriate assessment information?”
2. Validity refers to the decisions that are made from assessment information, not to the
assessment approach itself.
3. Validity is a matter of degree; it does not exist on an all-or-nothing basis. Think of
assessment validity in terms of categories: highly valid, moderately valid, and invalid.
4. Validity is always determined by a judgment made by the test user.

Key Aspect of Assessment Reliability


1. Reliability refers to the stability or consistency of assessment information and is
concerned with this question: “How consistent or typical of the student’s behavior is the
assessment information I have gathered?”
2. Reliability is not concerned with the appropriateness of the assessment information
collected, only with its consistency, stability, or typicality.
3. Reliability does not exist on an all-or-nothing basis, but in degrees: high, moderate, or
low.
4. Reliability is a necessary but insufficient condition for validity.

8. ETHICAL ISSUES AND RESPONSIBILITIES


Although assessment is often thought of as a technical activity, there are ethical concerns
associated with the assessment process. Since a teacher’s decisions can influence students’
self-perception and life chances, teachers must be aware of the many ethical responsibilities
involved in the assessment process:
1. informing students about teacher expectations and assessments before beginning
teaching and assessment;
2. teaching students what they are to be tested on before summative assessment;
3. not making snap judgments or attaching emotional labels to students;
4. avoiding stereotyping students;
5. avoiding terms and examples that may be offensive to students;
6. avoiding bias toward students with limited English or different cultural experiences.
2. (Earl & Katz, 2006)
Earl and Katz (2006) looked at assessment from the perspective of purpose rather than
method. The book describes three different assessment purposes: assessment for learning,
assessment as learning, and assessment of learning. The order (for, as, of) is intentional,
indicating the importance of assessment for learning and assessment as learning in enhancing
student learning. Assessment of learning should be reserved for circumstances when it is
necessary to make summative decisions.
Assessment for learning occurs throughout the learning process. It is designed to make
each student’s understanding visible, so that teachers can decide what they can do to help
students progress. In assessment for learning, teachers use assessment as an investigative tool to
find out as much as they can about what their students know and can do, and what confusions,
preconceptions, or gaps they might have. Assessment as learning focuses on students and
emphasizes assessment as a process of metacognition (knowledge of one’s own thought
processes) for students. Within this view of learning, students are the critical connectors between
assessment and learning. For students to be actively engaged in creating their own
understanding, they must learn to be critical assessors who make sense of information, relate it to
prior knowledge, and use it for new learning. Assessment of learning refers to strategies designed
to confirm what students know, demonstrate whether or not they have met curriculum outcomes
or the goals of their individualized programs, or to certify proficiency and make decisions about
students’ future programs or placements. It is designed to provide evidence of achievement to
parents, other educators, the students themselves, and sometimes to outside groups (e.g.,
employers, other educational institutions).
The role of assessment has shifted away from simply testing students because of social
change. In the past, learning focused on teaching basic skills and knowledge; now it goes
beyond those aspects to include critical thinking, problem-solving, and so on. Learning is now
viewed as a process of construction, whereas in the old days it was seen as the accumulation
of knowledge that is sequenced, hierarchical, and needs to be taught explicitly. Likewise,
educators used to rely on assessment to compare students with one another and so motivate
them to learn; now, students are motivated when they experience progress and achievement.
These changes affect the way teachers assess their students and have strong implications for
how teachers teach, what they teach, and especially how they apply classroom assessment
practices.
The sub-chapters are classroom assessment and societal change, the effects of classroom
assessment on learning, classroom assessment and its effects on motivation, using classroom
assessment for differentiating learning, and quality in classroom assessment. The chapter on
quality in classroom assessment explains four basic principles, or quality issues, that are
important in classroom assessment: reliability, reference points, validity, and record-keeping.
1. Reliability
There are many ways to promote reliability:
1. Teachers can use a variety of assessment tasks to provide a range of information. The
more information gathered, the clearer is the picture of a student’s learning profile.
2. Students can show their learning in many different ways. If teachers are to have a good
understanding of an individual student’s learning, they need to allow that student to
demonstrate his or her competence in a manner that suits his or her individual strengths
3. Teachers can work with other teachers to review student work.
2. Reference Points
In classroom assessment, there are three reference points teachers use when considering a
student’s performance:
1. How is the student performing in relation to some pre-determined criteria, learning
outcome, or expectation (criteria- or outcomes-referenced)?
2. How is the student performing in relation to the performance of other students in the
defined group (norm-referenced)?
3. How is the student performing in relation to his or her performance at a prior time
(self-referenced)?
3. Validity
Validity of classroom assessment depends on:
• analyzing the intended learning and all its embedded elements
• having a good match among the assessment approaches, the intended learning, and the
decisions that teachers and students make about the learning
• ensuring that the assessment adequately covers the targeted learning outcomes,
including content, thinking processes, skills, and attitudes
• providing students with opportunities to show their knowledge of concepts in many
different ways (i.e., using a range of assessment approaches) and with multiple measures,
to establish a composite picture of student learning
4. Record Keeping
High-quality record-keeping is critical for ensuring quality in classroom assessment. The
records that teachers and students keep are the evidence that support the decisions that are made
about students’ learning. The records should include detailed and descriptive information about
the nature of the expected learning as well as evidence of students’ learning and should be
collected from a range of assessments.
3. (Anderson, 2003)
This part discusses the summary of chapters one and two in Anderson’s
book (2003), which relate to the principles of classroom assessment.
Chapter one introduces classroom assessment, while chapter two
(Anderson, 2003) discusses the why, what, and when of assessment.
1. Chapter one: Introduction to Classroom Assessment
In this chapter, Anderson (2003) emphasizes that one of the keys to
being a good teacher lies in the decisions the teacher makes. He
argues that any teaching act is the result of a decision, whether
conscious or unconscious, that the teacher makes after complex
cognitive processing of available information. This reasoning leads to
the hypothesis that the basic teaching skill is decision making.
In line with teachers’ decision making, another concern is how we
are to understand teacher decisions. Teacher decision making is
closely related to how the teacher will place students in certain
categories. In the process of decision making, the teacher needs
sources of information to consider, such as the following:
- health information;
- transcripts of courses taken and grades earned in those courses;
- written comments made by teachers;
- standardized test scores;
- disciplinary referrals;
- correspondence between home and school;
- participation in extracurricular activities;
- portions of divorce decrees pertaining to child custody and visitation
rights;
- arrest records.
The quality of information
a. Validity
In general terms, validity is the extent to which the information
obtained from an assessment instrument (e.g., a test) or method
(e.g., observation) enables you to accomplish the purpose for
which the information was collected. In terms of classroom
assessment, the purpose is to inform a decision: for example, a
teacher wants to decide on the grade to be assigned to a student,
or a teacher wants to know what he or she should do to get a
student to work harder.
b. Reliability
Reliability is the consistency of the information obtained from
one or more assessments. Some writers equate reliability with
dependability, which conjures up a common-sense meaning of
the term. A reliable person is a dependable one—a person who
can be counted on in a variety of situations and at various times.
Similarly, reliable information is information that is consistent
across tasks, settings, times, and/or assessors.
c. Objectivity
In the field of tests and measurement, objectivity means that the
scores assigned by different people to students’ responses to
items included on a quiz, test, homework assignment, and so on
are identical or, at the very least, highly similar. If a student is
given a multiple-choice test that has an accompanying answer
key, then anyone using the answer key to score the tests should
arrive at the same score. Hence, multiple-choice tests (along
with true-false tests, matching tests, and most short answer
tests) are referred to as objective tests. Once again, as in the
case of test validity, this is a bit of a misnomer. It is the scores on
the tests, not the tests per se, that are objective.

2. Chapter two: The Why, What, and When of Assessment


This chapter addresses three critical questions concerning
assessment as it pertains to teachers’ decision making. First, why do
teachers assess students? After listing a few obvious answers, the
chapter moves on to the more interesting ones. Second, what do
teachers assess? Third, when should teachers assess their students?
a. Why do teachers assess students?
Teachers assess students for many reasons. They assess because
they are required to do so. For example, there are federal-, state-, or
district-mandated assessments, such as statewide achievement tests,
that must be administered. Teachers also assess their students to
justify their decisions. Student: “Why did I get a C−?” Teacher:
“Because even though you got a B and a B− on your unit tests
(achievement), you didn’t turn in any homework (effort).”
From a more positive perspective, one consistent with the
message of this book, teachers assess their students to inform the
decisions they make about them. “I’m not sure whether to spend more
time on this unit. The students seem to be working hard (effort), but
they don’t seem to be getting it (achievement). And, I have so much
more to cover before the end of the year (reality). Well, maybe if I
spend a couple more days on it, it will pay dividends in the long run.”
One of the recurring themes of this book is that informed decisions
enable teachers to do their jobs better.
In general, teachers’ assessments provide two types of
information. Some information allows teachers to describe things,
whereas other information allows them to begin to offer explanations
for what they describe. As an illustration, Binet had to formulate a
set of if-then hypotheses when explaining school failure. Basically,
the arguments were as follows:
1. If students score poorly on my test and fail in school, they are
failing because they lack the intellectual capacity to succeed in
school.
2. If students score well on my test and fail in school, they are
failing because they lack the motivation to succeed in school.
b. What do teachers assess?
1. Assessing student achievement
2. Assessing classroom behavior
c. When do teachers assess students?
1. Decision about learning
2. Decision about teaching
In Anderson’s book (2003), decision making is a critical component of
effective teaching. In fact, it may be, as Shavelson (1973) argued, the basic
teaching skill. Although good information does not necessarily produce wise
decisions, having access to good information is certainly an asset for the
decision maker. In the remainder of the book, the emphasis is on the use of
classroom assessment to enable teachers to improve their decision-making
capabilities.
Before deciding how to assess students, teachers must determine the
purpose of the assessment, the assessment information that is needed to
accomplish the purpose, and the timing of the assessment. The primary
purpose of assessment is to gather the information teachers need to make
sound, defensible decisions. To make these decisions, teachers may need to
describe students’ behavior, effort, and/or achievement. In many cases,
however, teachers need to explain the behavior, effort, and/or achievement.
If done well, assessment provides information that can be used to make a
variety of decisions—decisions that can substantially improve a teacher’s
effectiveness in working with his or her students.
4. (Stufflebeam & Coryn, 2014)
A program evaluation theory is a coherent set of conceptual, hypothetical, pragmatic, and
ethical principles forming a general framework to guide the study and practice of program
evaluation. Program evaluation theory has six main features:
1. overall coherence,
2. core concepts,
3. tested hypotheses concerning how evaluation procedures produce desired outcomes,
4. workable procedures,
5. ethical requirements,
6. a general framework for guiding program evaluation practice and conducting
research on program evaluation.
Here are some general criteria for evaluating program evaluation theories, organized
by category:
a. Professionalizing Program Evaluation
Is the theory useful for. . .
• Generating and testing standards for program evaluations?
• Clarifying roles and goals of program evaluation?
• Developing needed tools and strategies for conducting evaluations?
• Providing structure for program evaluation curricula?
b. Research
Is the theory useful for. . .
• Generating and testing predictions or propositions concerning evaluative actions and
consequences?
• Application to specific classes of program evaluation (the criterion of particularity) or a wide
range of program evaluations (the criterion of generalizability)?
• Generating new ideas about evaluation (the criterion of heuristic power)?
• Drawing out lessons from evaluation practice to generate better theory?
c. Planning Evaluations
Is the theory useful for. . .
• Giving evaluators a structure for conceptualizing evaluation problems and approaches?
• Determining and stating comprehensive, clear assumptions for particular evaluations?
• Determining boundaries and taking account of context in particular program evaluations?
• Providing reliable, valid, actionable direction for ethically and systematically conducting
effective program evaluations?
d. Staffing Evaluations
Is the theory useful for. . .
• Clarifying roles and responsibilities of evaluators?
• Determining the competencies and other characteristics evaluators need to conduct sound,
effective evaluations?
• Determining the areas of needed cooperation and support from evaluation clients and
stakeholders?
e. Guiding Evaluations
Is the theory useful for. . .
• Conducting evaluations that are parsimonious, efficient, resilient, robust, and effective?
• Promoting evaluations that clients and others can and do use?
• Promoting integrity, honesty, and respect for people involved in program evaluations?
• Responsibly serving the general and public welfare?
The program evaluation field could benefit by employing the methodology of grounded
theories as one theory development tool. In applying this approach, theorists would generate
theories grounded in systematic, rigorous documentation and analysis of actual program
evaluations and their particular circumstances. Besides that, researchers should use
metaevaluation reports systematically to examine the reasons why different evaluation
approaches succeeded or failed.
Sound theories of evaluation are needed to advance effective evaluation practices. The
history of formal program evaluation includes theoretical approaches that have proved useful,
limited, or in some cases counterproductive (for instance, objectives-based evaluation and
randomized controlled experiments). The definition of an evaluation theory is more demanding
than that of an evaluation model (an evaluation theorist’s idealized conceptualization for
conducting program evaluations). An evaluation theory is defined as a coherent set of
conceptual, hypothetical, pragmatic, and ethical principles forming a general framework to guide
the study and practice of program evaluation. Beyond meeting these requirements, an evaluation
theory should meet the following criteria: utility in efficiently generating verifiable predictions or
propositions concerning evaluative acts and consequences; provision of reliable, valid, actionable
direction for ethically conducting effective program evaluations; and contribution to an
evaluation’s clarity, comprehensiveness, parsimony, resilience, robustness, generalizability, and
heuristic power. Despite these demanding requirements of sound evaluation theories, theory
development must be respected as a creative process that defies prescriptions of how to develop a
sound theory.
D. Discussion
Each author explains the principles in their own way. The comparison below shows how
each author’s treatment of the principles of assessment differs from Brown’s book. In short:

(Anderson, 2003):
1. Anderson uses the term ‘quality of information’ in referring to the principles.
2. His term ‘objectivity’ overlaps with Brown’s definition of reliability.
3. The book mainly discusses issues of classroom assessment, so it is more practical than
Brown’s.

(Earl & Katz, 2006):
1. The same term ‘quality’ used by Anderson is used here.
2. Earl and Katz add two principles: reference points and record-keeping.

(Russell & Airasian, 2012):
1. The book emphasizes not only classroom assessment but also the action that follows it:
decision making.
2. The book covers the important questions related to classroom assessment: why, what,
when, and how classroom assessment needs to be conducted.
3. The book discusses the issues of classroom assessment, one of them being the ethics of
assessment.

(Stufflebeam & Coryn, 2014):
1. Overall coherence.
2. Tested hypotheses concerning how evaluation procedures produce desired outcomes.
3. Ethical requirements.
4. A general framework for guiding program evaluation practice and conducting research on
program evaluation.

E. Conclusion
In conclusion, all five books are well written and well conceptualized. Their explanations
of the principles can be understood by both beginning and advanced teachers. Overall, their
concepts of the principles of assessment are the same: how to evaluate in an appropriate way.
Brown’s book gives more of the theoretical basis of the principles of assessment than the other
books. On the other hand, the other books are useful for enriching knowledge about the
principles of assessment from various perspectives that can help develop assessment
appropriately. Each book has strengths and weaknesses, but together they fill the gaps among
one another. Reading all of these books is recommended.

References
Anderson, L. W. (2003). Classroom assessment: Enhancing the quality of teacher decision
making. Lawrence Erlbaum Associates.
Brown, H. D. (2004). Language assessment: Principles and classroom practices. Longman.
Earl, L., & Katz, S. (2006). Rethinking classroom assessment with purpose in mind:
Assessment for learning, assessment as learning, assessment of learning. Manitoba
Education. Retrieved from http://www.wncp.ca/media/40539/rethink.pdf
Russell, M. K., & Airasian, P. W. (2012). Classroom assessment: Concepts and applications.
McGraw-Hill.
Stufflebeam, D. L., & Coryn, C. L. S. (2014). Evaluation theory, models, and applications
(2nd ed.). John Wiley & Sons.
