T L409-Writtencritiquefinal
Written Critique
For my written critique I have decided to evaluate the Measure of Student Progress
(MSP) Science exam that is administered to all 5th graders attending Washington State public
schools. This yearly statewide exam was created and published by the Office of
Superintendent of Public Instruction (OSPI). This institution is also responsible for all
Washington State public school regulations and requirements, and it decides the amount of state
funding that is given to each Washington public school. This is important to note because one of
the purposes of the MSP is to serve as a reference when deciding how to distribute state
funding among Washington public schools: the schools that have higher test averages receive
more funding from the OSPI through the state. Aside from the business aspect of this state test,
the MSP is meant to decrease student dropout rates, increase student academic achievement,
enhance statewide assessment practice, and advance early learning opportunities. The MSP
seems to have fairly positive purposes and goals, but is this exam truly effective for achieving the
stated criteria? I believe not; this assessment is not authentic enough to achieve these goals.
A reliable assessment can be defined in a few different ways. The first definition is an
assessment that can be graded fairly by multiple teachers with the same results across the board
(O'Malley & Pierce, 1996). Another definition of reliability is ensuring that teachers use a fair
scoring rubric that allows them to score a student properly. For example, it is difficult to score a
student accurately using a holistic rubric, because a student can meet the criteria on some
things but not others, and that distinction cannot be properly graded through a holistic rubric, which is
not reliable. A more reliable rubric is an analytic rubric, because it is more precise for grading
students on multiple criteria. Throughout my research I have found that the Science MSP is in
fact reliable. The exam is primarily multiple choice, which ensures its inter-rater reliability: since
multiple-choice questions have only one right answer, there is no room for unfair grading, so any
teacher could grade the same test and end with the same results. The MSP is graded on a 1-
through-4 scale, like an analytic rubric, and the student is rated based on the number of questions
they answered correctly (OSPI, 2015). This is a reliable form of grading because it is a grading
scale that has been reviewed and agreed upon by a group of credible educators; the fact that
multiple people in the field agree with the way the exam is graded shows that it is a fair
grading scale. The MSP is also reliable in the sense that it focuses solely on the student's
performance in that content area. Because the exam is multiple choice and the rubric is
specific to the exam, there is no room for misgrading. Reliability is very important if an
assessment is to be graded as fairly as possible, but although the MSP is a reliable assessment, I feel
that reliability alone does not make it a sound assessment.
Assessments not only need to be reliable, but they must also have validity. There are two
main types of validity: content validity and consequential validity. Content validity is the
relationship between content objectives and the assessment's objectives (O'Malley & Pierce,
1996), which means the assessment must reflect the content in an appropriate manner. Content
validity ensures that teachers are not administering exams that have little to nothing in common
with the content being taught. Consequential validity is the effect that the content and assessment
have on the students, and whether or not it benefits the students' learning (O'Malley & Pierce,
1996). Based on these two types of validity, I have come to the conclusion that the science MSP
does not have validity. The MSP is a highly private exam that is not shown to any
teacher before the test is given to the students. This means teachers are unable to see what their
students are going to be tested on, which makes it difficult for them to teach all the content the
students need to know. This creates a gap between the content being taught to students and the
content they are going to be assessed on, and this gap results in the loss of validity for the
assessment. I recall being in my fifth grade science class learning about what birds eat and later
being tested on precipitation, which had nothing to do with what I was learning; this was a
result of my teacher not knowing what my class would be tested on, and unfortunately he was
unable to teach the proper content to match the assessment. Consequential validity is
important because it concerns how students benefit from the content and assessment: does the
assessment help the student in their life somehow? The answer is no. The science MSP is not
relatable to all students, causing the assessment to lose validity once again. For example, the 2012
science MSP contains questions about the lives of crickets (OSPI, 2012), but not all students in a
diverse classroom have seen or even know what a cricket looks like; assessing them on
a subject like this is therefore completely irrelevant to them, which does not display validity. It is unfair to
assess a diverse group of students on such a specific subject because not all the students will be
able to relate to the content. This gives some students an unfair advantage over others.
The MSP gives the same exam with the same content to all students, which is seen as equal, but
the MSP is not equitable at all. The MSP lacks validity in both senses of the word, which explains
why so many educators object to it.
Many educators disagree with the methods the MSP uses to assess their students
because, although it is equal, it is not equitable. The biggest limitation of the science MSP is its
inability to accommodate a diverse student body, resulting in low scores for students who are
either low income or part of a minority group. In a diverse classroom the students will all have
very different abilities and experiences, putting them at different levels of knowledge. Giving a
diverse classroom the exact same test is said to be equal because it is the same test for everyone,
but because each student holds a unique level of knowledge, the test will give some an unfair
advantage over others, which is not equitable. Inequitable assessments result in unfair
opportunities for students of different cultures and abilities (Woodward, 2001). Unfortunately, the
MSP is not relevant to the majority of students who complete the assessment. This is an
extremely serious issue for students and teachers, because students are falling behind based on the
low scores they received on the MSP, when in reality they were set up to fail from the start. This
inequitable form of assessment is holding students back for reasons that have nothing to do with
their abilities and intelligence. The struggle that students face when taking the science MSP lies
in the relevance of the content and the format of the test. Public schools in Washington are
rapidly growing more diverse, but the content of the MSP exams does not accommodate the
demands of a diverse student body. Students come from unique backgrounds and have
different skills and knowledge, yet these exams are created to test students as if they were all
the same. These exams discriminate against students of different backgrounds and do not allow them to
prove their qualifying abilities and knowledge. The one strength I can identify in the science
MSP is its reliability. The way the assessment is graded is fair and indeed satisfies the
definition of inter-rater reliability: any teacher from any school could grade these assessments and
get the same results. The real question, however, should not be whether the MSP is being fairly
graded but whether this assessment is fair for the students. I very strongly oppose the MSP and
feel that it should be changed to accommodate students of all backgrounds in order to become
a truly fair and equitable assessment.

References
Graves, K. (2000). Designing language courses: A guide for teachers. Boston: Heinle & Heinle.
Office of Superintendent of Public Instruction. (n.d.). Retrieved April 23, 2015, from
http://www.k12.wa.us
O'Malley, J., & Pierce, L. (1996). Authentic assessment for English language learners: Practical
approaches for teachers. Addison-Wesley.
Woodward, T. (2001). Planning courses and lessons: Designing sequences of work for the
language classroom. Cambridge: Cambridge University Press.