
Dion Figueroa

Grading and Interpretation of Assessment Data


Examples

Link to Assessments and Answer Key:


https://drive.google.com/open?id=1Jgofa5TUhT7BItnhnuqT9BrBrSfB9o67

Data Displays:

Patterns of Learning Narrative:

To best interpret the data I collected from the summative assessment, I decided to focus on the percent of questions answered correctly per learning objective. When creating the test, I designed it so that each learning objective being assessed is clearly displayed before the start of its section. This was intended not just to help students identify which skills are needed to do well on a particular section, but also to make it much easier to interpret how students performed on any given learning objective. I also decided to divide up the data to display how each student did individually on the learning objectives. This format of data visual allows each student's overall mastery of each learning objective to be clearly identified. I also felt it would be important to note whether there were any major distinctions between how males and females performed on the assessment, as well as to include a graph that details how the class performed on average on each of the learning objectives.
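To make that breakdown concrete, the sketch below shows one way the per-student and class-average percentages could be computed. The student names, learning-objective labels, and scores are invented placeholders rather than the actual gradebook data.

```python
from collections import defaultdict

# Hypothetical results: one entry per (student, learning objective) with the
# points earned and points possible on that section of the test.
scores = [
    ("Student A", "LO1: Identify metaphors", 5, 6),
    ("Student A", "LO2: Analyze figurative language", 2, 4),
    ("Student B", "LO1: Identify metaphors", 6, 6),
    ("Student B", "LO2: Analyze figurative language", 3, 4),
]

per_student = defaultdict(dict)             # student -> {objective: percent correct}
class_totals = defaultdict(lambda: [0, 0])  # objective -> [points earned, points possible]

for student, objective, earned, possible in scores:
    per_student[student][objective] = 100 * earned / possible
    class_totals[objective][0] += earned
    class_totals[objective][1] += possible

# Per-student mastery of each learning objective
for student, objectives in per_student.items():
    for objective, percent in objectives.items():
        print(f"{student} | {objective}: {percent:.0f}%")

# Class average on each learning objective
print("\nClass averages:")
for objective, (earned, possible) in class_totals.items():
    print(f"{objective}: {100 * earned / possible:.0f}%")
```

The same totals could just as easily be split by any demographic column (such as gender) before averaging, which is how the male/female comparison in the data displays would be produced.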

Parent Letter:

To the parents of STUDENT X,

I am writing to you regarding your student's performance on our recent end-of-unit assessment on figurative language. I wanted to start off by telling you what a great student STUDENT X is and how much of a joy they are to have in the classroom. For example, earlier in the unit STUDENT X was tasked with finding a poem on their own and creating a short analysis of the themes found in that poem. The class and I enjoyed hearing STUDENT X's analysis, as well as the connections they made with our ongoing unit. Students like STUDENT X make me proud to be a teacher!

Regarding the assessment, I am writing because STUDENT X did not do as well as we would have hoped at this point in the unit. While they did score a passing grade, I feel there are some key areas we could still improve on, as what really matters is what the student learned and not the grade at the top of the paper. STUDENT X did a great job of accessing low- and mid-tier knowledge for the assessment; however, STUDENT X did not do as well when it came to higher levels of application. STUDENT X struggled on questions that asked them to create their own metaphors or to analyze the specific reasons figurative language was being used in a particular context.

The first step to STUDENT X doing better is to go over the test corrections, not just to get a better grade but to reflect a little and get a better understanding of what it is they really need further instruction on. It is extremely difficult to help STUDENT X if they are not completely aware of what they do not understand. The next step will be for us to have a one-on-one meeting where we go over the test and I allow STUDENT X to ask any questions that may have come up while doing the test corrections. From there I will take over and ask STUDENT X some leading questions directly relating to the skills they will need to answer these types of questions better in the future. With that information I can assign STUDENT X extra practice that is specifically designed to help them achieve mastery, and I will also ask STUDENT X to go back and re-edit the test corrections for any answers that were still not answered completely.

I have also let STUDENT X know that I will be checking in with them over the course of the semester anytime similar high-level questions come up, and that I will provide some differentiated instruction in case these problems persist over a longer period of time. If I notice any other trends, I will let you know as soon as possible, along with any relevant data, to help us get STUDENT X on track.

Thank you for taking an interest in how your student is doing and for taking the time to read this letter. Please feel free to contact me with any further questions you may have.

Regards,

Dion Figueroa

Formal Discussion Paper:

Interpretation of Data

Introduction

The purpose of this paper is to analyze the data collected from a summative assessment for a fictional unit in a 7th-grade English Language Arts classroom that has spent the last week learning about figurative language and its uses in the English language. Grading and interpreting this assessment was fairly difficult because of the fictional nature of the unit. The data collected from the assessment was divided up by demographic and by student in order to properly identify the areas in which individual students, as well as the class, had achieved mastery. The levels of overall mastery would also be used to show the instructor which learning objectives need more attention or additional classroom time. The purpose of the assessment and its data analysis is to act as a reference guide for how to better scaffold students toward mastery.

Grading Assessment

The summative assessment was administered via Canvas, an online classroom application, and as stated in the introduction, grading this assessment was fairly difficult because there was no actual instruction time. Since the test-takers were largely unfamiliar with the subject of the test, assessing their mastery of the learning objectives was based on the percentage correct in a given section. Each learning objective had a dedicated section of the test where between two and six questions were asked in order to assess mastery. Within each subsection of the test were a number of questions that assessed proficiency not just in the learning objective, but also in its application at varying levels of Bloom's taxonomy. Multiple-choice and fill-in-the-blank questions were easily marked as correct or incorrect, and students were able to see whether they got the answer correct immediately after submitting the test in Canvas. For the essay questions, I had to keep in mind that the test-takers would very likely be unable to answer essay questions that sought to measure mastery of the learning objectives at the higher tiers of Bloom's taxonomy, so I decided to simplify the grading of these questions into two parts. Students were given half credit if they responded to the question in a way that demonstrated basic, low-level knowledge of the subject, and were given full credit if they answered the question completely and correctly. In a standard testing and classroom environment, students would have lost all points rather than half if their answer did not demonstrate any mastery.
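As a rough illustration of that simplified scheme (the question types, point value, and function name below are placeholders I invented, not the actual Canvas rubric), the scoring logic amounts to something like this:

```python
def score_question(question_type, is_correct=None, essay_level=None, points=2):
    """Score a single question under the simplified scheme described above.

    Multiple-choice and fill-in-the-blank items are all-or-nothing; essay items
    earn half credit for basic, low-level knowledge and full credit for a
    complete, correct response.
    """
    if question_type in ("multiple_choice", "fill_in_blank"):
        return points if is_correct else 0
    if question_type == "essay":
        if essay_level == "complete":   # answered wholly and correctly
            return points
        if essay_level == "basic":      # shows basic low-level knowledge only
            return points / 2
        return 0
    raise ValueError(f"Unknown question type: {question_type}")

# Example: a correct fill-in-the-blank item and an essay showing only basic knowledge
print(score_question("fill_in_blank", is_correct=True))   # 2
print(score_question("essay", essay_level="basic"))       # 1.0
```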

In order to avoid bias, the entire test was created through backwards design, based on the learning objectives highlighted before the creation of the assessment. Had this assessment been administered in an actual classroom, all of the questions on the assessment would have been directly tied to content covered over the course of the unit. Each of the learning objectives was also measured in at least two question formats that were aimed at gauging understanding in multiple applications, as well as at assessing mastery at both lower and higher levels. Each of the constructed-response sections was clearly labeled with directions and with exactly what was expected of the students in their responses. As stated in Wormeli (2006), “by using a variety of questions and prompts, we get a better picture of students’ mastery”; students may not be able to answer a question correctly, but the manner in which they answer it can give the instructor clues as to which specific skills a student needs support with to achieve better mastery.

Impacts and Implications

The implications of this assessment for my classroom are numerous. First, the assessment allows me to get a detailed look at how well students understand each of the learning objectives to varying degrees. This does not just highlight mastery, but also aims to show whether students have any difficulty expressing mastery across the varying levels of Bloom's taxonomy. Mastery can only be considered attained if students know how to apply their knowledge of the learning objectives rather than simply being able to identify correct answers. Understanding exactly where a student is struggling with a learning objective does not just help to get them caught up in class, but also helps to better track students' understanding over longer periods of time.

Breaking down students' understanding in this manner allows for easier teacher intervention. It is important to know exactly where a student is losing understanding, as that helps the instructor when it is time to intervene and offer more support, or even to make changes to our own lesson plans, perhaps implementing more differentiation, to help foster further understanding. This should be the aim of every assessment: we are not simply testing the students for a grade, but testing to establish a baseline of student understanding and learning patterns, and to help the educator streamline the learning process and make it better the next time around.

Feedback

The main feedback I gave was through the essay and short-answer sections of my assessment. Of course, in this scenario the students were, for the most part, way off in their answers, but I was still able to assess understanding at least on a surface level. The questions themselves were aimed at activating higher levels of Bloom's taxonomy, so they were written in a way that forced students to contemplate their answers. Within my feedback I made sure, first and foremost, to show students which parts of their answers were good. Over the course of the semester we learned that it is very important to frontload your feedback with positive comments so as not to shut the student off from criticism. After addressing the positives, I would then go into exactly where they missed points on their answer. For most of the students who took this particular exam, points were not lost because they failed to answer the question or answered it incorrectly; they simply did not answer the question in its entirety.

In a real-world scenario I would be checking these questions in two separate ways. First, I evaluated the answers for whether or not the student understood what the question was asking. This could be identified by whether the answer given was related to the question at hand or was completely off topic. If the answer is on topic, that means the student at least has a base knowledge of the topic being assessed. The next part of the evaluation process looks at whether the student was able to take that knowledge and apply it at the higher levels of Bloom's taxonomy. Within this I would also be checking to see whether their answers are inclusive of the skills that the learning objective is asking for. Questions that ask students to create are the best for identifying mastery of a learning objective, as they force students to demonstrate their understanding in their own words rather than simply answering the prompt.

In a real-world scenario I would also make sure to let students know exactly where they were missing the learning objectives. For some students, simply studying the material more would suffice, but other students may need to come back for supplemental instruction or may need to be assigned more in-class work to help them hone their skills. It is very important for teachers and parents to know exactly where their student needs to improve, but it is even more important for the student to be able to see an outline of exactly what they need to improve. This does not just help them take more accountability for their own learning, but also lets them know that just because they got a question wrong does not mean they are “too stupid” to answer it, but that maybe they just need to go back and revisit certain specific skills that will help them the next time they are assessed.

Strategies and Application

I have always felt that the most effective strategy for learning from feedback is to encourage students to reflect on the test, analyze the test and feedback to see what they got wrong and why, and then, building off of that, to identify how they can do better the next time. In the classroom scenario I would most likely approach this by allowing students to go over test corrections together in groups the day the assessment is returned. This would hopefully get students engaged with looking at their feedback, while also allowing them to do some peer teaching to help clarify anything they may have misunderstood. My reasoning for choosing peer feedback activities to foster motivation and engagement is supported by Lelis (2017), who states, “when students are able to value [peer-feedback] activities, they tend to describe them as challenging, creative, exciting and supportive.” Test corrections are something that not every classroom does, but I feel they are important for students, as the grade they got matters much less than their leaving the lesson having learned everything they need to learn. I would then allow students to get full points on any question they got wrong, with the condition that they write a three- to four-sentence explanation of why they got the question wrong, what they did to make it correct, and why that is now the correct answer. This will not just let students earn more points on the test, but will also give me another data point to look at in case, even after the corrections, there are still things they misunderstood or got wrong.

With this type of post-assessment support strategy, data is everything. With the use of data I could further support growth in my classroom by pairing students who had mastery of specific learning outcomes or sections with students who struggled. It is further noted in Lelis (2017) that research suggests students are much better able to understand difficult and complicated concepts when those concepts are explained by their fellow peers than by the instructor. For different types of assessments this may mean that I have to create data sets that are more in-depth than the one used for this particular assessment. It may be necessary to further break the learning objectives down into the specific skill sets that are necessary for mastery in order to further “micromanage” the support and scaffolding that students are receiving.
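As one possible sketch of how that pairing could be driven by the data (the cutoff values, student names, and percentages below are invented for illustration), students above a mastery cutoff on an objective can be matched with students who fell below a struggle cutoff:

```python
def pair_for_peer_support(objective_scores, mastery_cutoff=80, struggle_cutoff=60):
    """Pair students who mastered an objective with students who struggled on it.

    objective_scores maps student name -> percent correct on one learning objective.
    Returns (pairs, unmatched), where each pair is (mentor, mentee) and unmatched
    lists struggling students left over if there are not enough mentors.
    """
    mentors = sorted(s for s, pct in objective_scores.items() if pct >= mastery_cutoff)
    mentees = sorted(s for s, pct in objective_scores.items() if pct <= struggle_cutoff)
    pairs = list(zip(mentors, mentees))
    unmatched = mentees[len(mentors):]
    return pairs, unmatched

# Hypothetical percentages for "LO2: Analyze figurative language"
lo2 = {"Student A": 50, "Student B": 95, "Student C": 85, "Student D": 40}
print(pair_for_peer_support(lo2))
# ([('Student B', 'Student A'), ('Student C', 'Student D')], [])
```

A richer version of this would work over per-skill data rather than whole objectives, which is what breaking the objectives down into specific skill sets would make possible.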

Reflections and Bringing it Together

The key strengths of my assessment were really in its design and implementation. I felt that using backwards design to construct the assessment worked really well for me; it allowed me not just to create strong questions that are directly connected to each of the learning objectives, but also to design the test in a way that ensured extra transparency. I felt that, in creating the assessment, it was important for students to understand exactly which skills each section of the exam was asking them to apply. If students are being assessed on their ability to analyze a metaphor, it should be clearly stated before the beginning of that section that that is what they are doing. In this case I felt that using a reading selection as the basis for the questions also helped quite a bit. In a real-world scenario there will always be some sort of resource or activity that the test is based on, and making sure to give students an assessment that looks like what they have been doing in class is an extra way to eliminate potential bias. Research shows that students do not do as well on a test if it is presented in an unfamiliar format (Yüksel & Gündüz, 2017).

The key weakness of my assessment, I felt, was in the questions themselves. Throughout the entire process of creating the assessment I could never get over feeling self-conscious about whether my questions were of good enough quality. It always felt like I was reaching when creating the questions, or trying to be a little too outside of the box. The more I think about it, this may have simply been a consequence of the manner in which the test was constructed. In the real world I will be teaching subjects that I am much more knowledgeable about, and I will have had far more time to prepare for the lesson. In the case of this test my foundational knowledge of figurative language is somewhat lacking, so when it came time to create an assessment on figurative language I found it very difficult to create good-quality questions even with the help of the learning outcomes.

Conclusion

Breaking down an assessment and creating quality datasets from it are just as important as creating a good assessment. It is important for an assessment to be used not just as a grade for the students, but as a measuring stick to gauge what the students have learned and how the instructor can better help them attain mastery in the future. Using backwards design is an important aspect of this, as it ensures all of the questions on the assessment are directly related to the learning objectives that were determined beforehand. Giving constructive feedback is another component in increasing student learning in the classroom, as it is only through constructive feedback that students see how and in which ways they can better themselves in the classroom. Lastly, assessment should always be used to build students up, never to break them down!

References

Lelis, C. (2017). Participation Ahead: Perceptions of Master's Degree Students on Reciprocal Peer Learning Activities. Journal of Learning Design, 10(2), 14-24.

Wormeli, R. (2006). Fair Isn't Always Equal. Stenhouse.

Yüksel, H. S., & Gündüz, N. (2017). Formative and Summative Assessment in Education: Opinions and Practices of Instructors. European Journal of Education Studies, 3(8), 336-356.
