Tyler
Summer 2021
FRIT 7236: Technology Assessment and Data Analysis
Key Assessment Stage 2

Section 1 - Students:
The students in this data set are 6th graders, aged 10 to 12, in an ESOL co-taught class. This means that a portion of the students are English Language Learners (ELLs) and that an English as a Second Language support teacher is in the room along with the general classroom teacher. While the students are provided accommodations for assessments, they must still complete their assessments in English. The ELL students are given an assessment at the beginning of the year to determine their English proficiency, and their accommodations are decided based upon those results. While the ELL students have reading levels slightly lower than average for 6th grade, the other students in the class are considered academically average 6th grade students. The group is ethnically diverse and comes from an area of lower socioeconomic status.

Section 2 - Course:
This assessment was given in a 6th grade English/Language Arts course using the curriculum
assigned for all 6th grade students in the school district. The curriculum is built around the
Georgia Standards of Excellence (GSE) and course objectives were created based on the
corresponding standards. The objectives for this assessment cover a variety of English/Language
Arts concepts including vocabulary, literary reading, and writing. The assessment took place at
the end of a semester of instruction. The objectives are as follows:
1. When given a word, the student will be able to identify and label the Greek and Latin affixes and/or roots present.
2. Given a text that contains an unfamiliar word, the student will be able to infer the likely
meaning using context.
3. Given a list of words, students will be able to differentiate between words with positive
and negative connotations.
4. When given a short story or fable, the student will be able to determine the theme and
describe what details from the story support their idea.
5. After reading a fictional text, the student will be able to summarize the main events of the
story without including opinion.
6. After reading a fictional text, the student will be able to explain how the plot of the story
progressed by describing the exposition, rising action, climax, falling action, and
resolution.
7. Using a previously read fictional story and knowledge of literary point of view, the student will rewrite a scene from the story in a point of view different from the original.
8. When given a claim, students will be able to construct an argument supported by relevant
and credible evidence.
9. When given basic story elements such as character traits and setting, students will be able to organize and create a narrative story with descriptive scenes, dialogue, and a clear structure.
10. When presented with a figurative language concept, students will be able to accurately
create and integrate examples of figurative language into their writing.
11. Students will be able to create a digital portfolio to showcase their understanding of
writing skills developed throughout the academic year, including explanations of
concepts and examples of student work and growth.
12. After reading and discussing a novel, students will be able to collaborate with peers to
present a multimedia presentation on their understanding and analysis of the novel.
13. After selecting an author, students will be able to research and gather the necessary information from credible sources and use that information to construct a biographical text.

Section 3 - Descriptive Analysis:


The assessment, which covered multiple Language Arts concepts, featured items in multiple formats, including multiple choice, essay, short answer, higher-order thinking items, and performance items. Of the 27 students in the course, 22 were present and completed the assessment. For the 15-question assessment, the average score among students present was 67%, the median was 67%, and the standard deviation was 0.22 (22 percentage points). A chart displaying the full set of data can be found by clicking here: Student Data Analysis.
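As a rough illustration of how these descriptive statistics can be reproduced, the sketch below computes the mean, median, and standard deviation from per-student total scores. The score values here are hypothetical placeholders, not the actual class data, which lives in the linked analysis sheet.

import statistics

# Hypothetical per-student scores as proportions of the 15 items correct.
# The real values live in the linked Student Data Analysis sheet.
scores = [0.93, 0.87, 0.73, 0.67, 0.67, 0.60, 0.53, 0.40, 0.00]

mean_score = statistics.mean(scores)      # average score
median_score = statistics.median(scores)  # middle score
std_dev = statistics.pstdev(scores)       # population standard deviation

print(f"Mean:   {mean_score:.2%}")
print(f"Median: {median_score:.2%}")
print(f"SD:     {std_dev:.2f}")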

When examining the types of items within the assessment using the table linked above, it appears that the multiple choice items were the ones students struggled with the most, with the exception of one essay item. These items had the lowest accuracy rates of the 15 assessment items, with 50% or fewer of the students completing them accurately. The accuracy data for these questions is shown in the chart on the linked data analysis sheet. By contrast, students performed with the highest accuracy on the performance items and one essay item; more than 50% of the students who took the assessment answered these items accurately.
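A minimal sketch of this per-item accuracy calculation is shown below, assuming responses have already been scored as 1 (correct) or 0 (incorrect), as described in the reliability analysis that follows. The sample matrix is hypothetical, not the actual 22-student by 15-item class data.

# Hypothetical scored responses: one row per student, one column per item
# (1 = correct, 0 = incorrect or unanswered).
responses = [
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1],
]

num_students = len(responses)
for item in range(len(responses[0])):
    correct = sum(row[item] for row in responses)   # students answering this item correctly
    accuracy = correct / num_students
    flag = "  <- 50% or less" if accuracy <= 0.5 else ""
    print(f"Item {item + 1}: {accuracy:.0%} accurate{flag}")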

In order to determine the reliability of the assessment, a split-half analysis with the Spearman-Brown correction was conducted. To complete this analysis, each student's response to each question was scored as a 1 for an accurate answer or a 0 for an inaccurate or unanswered response. Each student's odd-item scores were added together, and their even-item scores were added together. These two half-test scores were then correlated, and the Spearman-Brown formula was applied to estimate a full-test reliability of 0.94 out of 1.0. Full calculations can be viewed by clicking on the second page of the above linked data analysis sheet. Because 0.94 is very close to the maximum of 1.0, it can be concluded that this assessment is highly reliable. Linking assessment items to objectives, which were carefully created from the state standards, helps to increase the reliability of assessments. Additionally, ensuring that classroom instruction is consistently aligned to the objectives and communicating feedback and expectations openly to students throughout the instructional process supports reliability.
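As a sketch of this calculation, again using a hypothetical 1/0 response matrix rather than the actual class data, the split-half correlation r and the Spearman-Brown step-up, r_SB = 2r / (1 + r), can be computed as follows.

import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical scored responses: one row per student, one column per item.
responses = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0, 1],
]

# Sum each student's odd-numbered items (indices 0, 2, 4, ...) and even-numbered items.
odd_halves = [sum(row[0::2]) for row in responses]
even_halves = [sum(row[1::2]) for row in responses]

# Pearson correlation between the two half-test scores.
r = statistics.correlation(odd_halves, even_halves)

# Spearman-Brown step-up to estimate full-test reliability.
reliability = (2 * r) / (1 + r)
print(f"Half-test correlation: {r:.2f}")
print(f"Spearman-Brown reliability: {reliability:.2f}")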

Section 4 - Student Strengths and Weaknesses Analysis:


Overall, students tended to struggle with the multiple choice assessment items more than any other item type. These items assessed objectives related to vocabulary acquisition and usage. While more analysis using conferencing and repeated assessments would need to be done to determine whether the cause of this weakness was the assessment items themselves or student understanding, this data gives a starting point for investigation. Another important factor to consider for these assessment items is that many students in the class are English Language Learners. Because these items assessed vocabulary acquisition and usage and, under the allowed accommodations, were given to the students in English, these items were likely more difficult for many of the students, especially depending on their English proficiency level.

In contrast, the students did particularly well on the performance task items of the assessment, which assessed objectives related to analyzing and presenting their understanding of various topics. This could be due to the way these items were written compared to other items, the skills and content being assessed, or the expression style of the students. Further analysis would need to be done to determine the specific reason for this stronger performance. However, because these items were scored with specific, clear rubrics or checklists, they were highly reliable, and they gave students freedom of expression, which allowed more students to show their true understanding of the content.

Upon analysis of the individual student data, student 22 would require individual conferencing, since they did not accurately answer any of the assessment items. Students 1 and 13 showed the most proficiency, with only 1 of the 15 questions answered inaccurately. Since the reliability of the assessment was very high, it can be concluded that students need more support on vocabulary-related objectives, have mastered objectives related to analyzing and presenting their understanding, and prefer performance task items over multiple choice items when expressing their knowledge.

Section 5 - Improvement Plan:


In order to improve the achievement rate of the instructional objectives, class discussion and conferencing about the vocabulary-related items would need to be conducted. This would allow me to determine what specifically about the items and their content was confusing to students. I would also go back and determine whether the assessment items accurately matched the learning objectives and the instructional content used to teach them. Review of these concepts would need to take place, and I would embed further practice opportunities into future instruction. Additionally, I would plan to reassess students on these objectives after remediation was complete in order to determine whether they had reached mastery or grown closer to it. Based on student feedback about the assessment from discussions and conferences, I would revisit the assessment items and revise them as needed to eliminate problems such as ineffective distractors and poorly worded question stems. Lastly, I would collaborate with the class's ESOL co-teacher to determine appropriate supports for the ELL students' vocabulary skills. Additional, targeted practice would occur separately for these students to help them build their English vocabulary, since they may encounter frequent interference from their first language when attempting to master these skills.

Next, I would conduct discussions about the performance task items to determine what specifically the students found clear and understandable about them. Each student would receive specific feedback on their performance task responses, either through conferencing or through written/digital comments. Using student feedback, I would work to create similarly structured performance items in future assessments, especially if the students indicated that they could best express their understanding in this manner.

Individual student conferences would be conducted to determine student misunderstandings about other assessment items, such as short answer items, higher-order thinking items, and essay items. Again, specific feedback would be given to each student so they could see where there was a gap between their understanding and the expectations. During individual conferencing, I would meet with student 22 to determine whether the lack of accuracy on the assessment was due to lack of proficiency, item error, or student effort. If lack of proficiency is determined to be the most likely cause of student 22 receiving a score of 0%, then I would meet with other parties, such as school counselors and instructional coaches, to begin the process of putting supports in place for that student to receive classroom accommodations. Ultimately, more assessments would be given later to all students, assessing the same learning objectives, to determine the validity of the data.