Lesson-3.1
Lesson Summary
The aim of assessment is to determine whether Arts students have achieved the intended learning goals; the same information can also be used to gauge the efficacy of programs and instructional strategies in helping all students perform to the best of their abilities. To obtain relevant and accurate evidence of student performance, assessment should draw on a variety of strategies used throughout the teaching process.
Learning Objectives
a. Identify the different tools for assessing Arts learning at the elementary level.
b. Create an assessment tool for any topic in Arts.
Discussion
Good assessment in art requires much the same things as assessment in any content
area: time to be implemented thoughtfully, professional development for the teachers using
and administering the assessments, and alignment with district, state, or national standards
in the arts. Done well, it supports and develops both teacher instruction and student
learning.
Traditional assessment tools
The most widely used traditional assessment tools are multiple-choice tests, true/false
tests, short answers, and essays.
A. True/false tests
True/false items ask students to decide which of two possible answers is correct. Because
they are easy to score, true/false assessments are easy to administer. Guessing alone, however,
gives a 50 percent chance of success. Especially when a test item is false, it is hard to tell
whether the student actually knows the right answer. One possible solution is to ask the
student to explain why the item is incorrect or to rewrite the statement correctly; however,
this makes scoring less straightforward (Simonson et al., 2000).
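The 50 percent guessing effect can be checked with a short calculation. The sketch below simulates a hypothetical 20-item true/false test taken by students who guess every item; the test length and number of simulated students are illustrative assumptions, not figures from the cited sources.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

# Hypothetical test: 20 true/false items, 10,000 students guessing at random.
n_items, n_students = 20, 10_000

# Each guess is correct with probability 0.5; count correct answers per student.
avg = sum(
    sum(random.random() < 0.5 for _ in range(n_items))
    for _ in range(n_students)
) / n_students

# The average proportion correct comes out close to 0.5,
# i.e. guessing alone earns roughly half marks.
print(round(avg / n_items, 2))
```

This is why a false item tells the teacher so little on its own: half of the students who never studied will still mark it correctly.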
B. Multiple-choice tests
Multiple-choice tests are commonly utilized by teachers, schools, and assessment
organizations for the following reasons (Bailey, 1998, p. 130):
They are fast, easy, and economical to score. In fact, they are machine scorable.
TEACHER EDUCATION DEPARTMENT
VISAYAS STATE UNIVERSITY-ALANGALANG
Binongtoan,Alangalang, Leyte, 6517, PHILIPPINES
Tel. Number : (053) 525-0140
Email: alangalang@vsu.edu.ph
Website: www.vsu.edu.ph
1. They can be scored objectively, which can make the evaluation seem fairer and/or more
accurate than subjective assessments.
2. They "look like" assessments and may therefore seem appropriate by convention.
3. They reduce the probability of learners guessing the correct answer, compared with
true/false items.
Simonson et al. (2000) addressed the limitations of multiple-choice testing. They argued
that, depending on the level of cognitive effort required, items can be difficult and time-consuming
to develop. In other words, multiple-choice items work well for assessing tasks that
require a low level of cognitive effort, such as recalling previously memorized information,
whereas items that require students to use higher-level thinking skills, such as analyzing and
synthesizing, are much harder to write.
Similarly, Hughes (in Bailey, 1998) criticizes multiple-choice tests on the following grounds:
1. The technique tests only recognition skills.
2. Guessing can have a considerable though unknowable effect on test scores.
3. The format severely restricts what can be measured.
4. It is very difficult to write successful items.
5. Backwash may be harmful.
6. Cheating may be facilitated.
C. Essays
Essays are effective assessment tools because the questions are versatile enough to test
higher-level cognitive abilities. They are not very practical, however, because essays are
difficult and time-consuming to score, and subjectivity can be an issue in grading. Creating
a rubric can help in grading essays (Simonson et al., 2000). A rubric can be defined as
“a criteria-rating scale, which gives the teachers a tool that allows them to track student
performance” (Abrenica, online document). Instructors can create, adapt, or adopt rubrics
depending on their instructional needs; the templates available on the web can help them
adjust generic rubrics to their own instruction (Simonson et al., 2000).
D. Short-answer tests
In short-answer tests, “items are written either as a direct question requiring the learner fill
in a word or phrase or as statements in which a space has been left blank for a brief written
answer” (Simonson et al., 2000, p. 270). The questions therefore need to be precise: items
that are open to interpretation allow learners to fill in the blanks with almost any
information (Simonson et al., 2000).
Performance-Based Assessment
This measures students' ability to apply the skills and knowledge learned from a unit or
units of study. Typically, the task challenges students to use their higher-order thinking skills to
create a product or complete a process (Chun, 2010).
A performance task typically specifies the following elements:
• Setting
• Role
• Audience
• Time frame
• Product
Scoring rubrics
In the Higher Education Report, S.M. Brookhart describes scoring rubrics as “descriptive
scoring schemes that are developed by teachers or other evaluators to guide the analysis of the
products or processes of students’ efforts.”
Types of scoring rubrics
1. Analytic scoring rubric
Analytic rubrics break down the finished output or goal into observable elements and
components. In other words, when the student completes a project or task, you use the
analytic scoring rubric to assess each aspect of the project separately. Analytic rubrics
typically use numbers to measure quality. Consider the example below.
The rubric might break the evaluation into three components: foreground, middle ground,
and background; atmospheric perspective; and overlapping and size variation. A number is
assigned to each component: (1) Needs improvement, (2) Developing, (3) Proficient,
(4) Excellent. The rubric also explains exactly what each of those numbers means. So a
student might receive a score like this:
Student Output: Landscape painting
Foreground, middle, background (3) – The work shows proficient understanding of foreground,
middle ground, and background.
Overlapping and size variation (2) – The work shows limited understanding of overlapping and
size variation.
With an analytic scoring rubric, the student and teacher can see more clearly what areas
need work and what areas are mastered.
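The arithmetic behind an analytic rubric can be sketched in a few lines of Python. The criterion names and the 1–4 scale below come from the landscape-painting example above; the function name, the score of 4 for atmospheric perspective, and the equal weighting of criteria are illustrative assumptions, not part of any standard rubric tool.

```python
# Labels for the 1-4 quality scale used in the example rubric.
LEVELS = {1: "Needs improvement", 2: "Developing", 3: "Proficient", 4: "Excellent"}

def score_analytic(scores):
    """Given per-criterion scores, return a labeled report,
    the total score, and the maximum possible score.
    Criteria are equally weighted (an assumption of this sketch)."""
    report = {criterion: f"{s} - {LEVELS[s]}" for criterion, s in scores.items()}
    total = sum(scores.values())
    maximum = 4 * len(scores)  # every criterion tops out at 4 (Excellent)
    return report, total, maximum

# The landscape-painting example from the text
# (the atmospheric-perspective score is hypothetical):
report, total, maximum = score_analytic({
    "Foreground, middle, background": 3,
    "Atmospheric perspective": 4,
    "Overlapping and size variation": 2,
})
for criterion, line in report.items():
    print(f"{criterion}: {line}")
print(f"Total: {total}/{maximum}")
```

Because each criterion is scored separately before the total is taken, the per-criterion lines show exactly where the student is proficient and where work is still needed, which is the diagnostic advantage of the analytic format described above.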
Formative assessment
Formative assessment tools identify misconceptions, struggles, and learning gaps along
the way and indicate how to close those gaps. Formative assessment is an effective means of
shaping learning, and it can even bolster students’ ability to take ownership of their learning
when they understand that the goal is to improve learning, not to assign final marks
(Trumbull and Lash, 2013). It can include students assessing themselves, their peers, or even
the instructor, through writing, quizzes, conversation, and more. In short, formative
assessment occurs throughout a class or course and seeks to improve student achievement of
learning objectives through approaches that can support specific student needs (Theal and
Franklin, 2010, p. 151).
Summative assessment
Summative assessment evaluates students' understanding, knowledge, skills, or progress at
the end of an instructional period, such as a unit, course, or program. Summative assessments
are almost always formally graded and often heavily weighted. Summative assessment can be
used to a significant degree in combination with formative assessment, and teachers can
explore a number of ways to incorporate both approaches.