
PROFESSIONAL EDUCATION | ASSESSMENT OF LEARNING | PROF. CARMILO F. FLORES
ASSESSMENT OF LEARNING

LET 2023 NEW CURRICULUM
1. Validating theoretical knowledge in the actual assessment of learning.
1.1. Demonstrate understanding of principles in constructing traditional and alternative/authentic forms of high-quality assessment
1.2. Apply knowledge and skills in the development and use of assessment tools for formative and summative purposes
1.3. Apply rules in test construction and use of authentic assessment tools for product and process assessment
1.4. Demonstrate skills in interpreting assessment results to improve learning
1.5. Comprehend and apply basic concepts of statistics in educational assessment and evaluation
1.6. Demonstrate knowledge of providing timely, accurate and constructive feedback to learners and parents

MEASUREMENT
An educational process that determines the attributes or characteristics of an individual and expresses them quantitatively. It is the quantification of what students learned through the use of tests, questionnaires, rating scales, checklists, and other devices. MEASUREMENT answers the question: how much does a student learn or know?

EVALUATION
An educational process that appraises the qualities of an individual and expresses them qualitatively. It is a process of making judgments, assigning value, or deciding on the worth of a student's performance. EVALUATION answers the question: how good, adequate, or desirable is it?

ASSESSMENT
The full range of information gathered and synthesized by teachers about their students and their classrooms, gathered through observation, verbal exchange, written reports, or outputs. ASSESSMENT looks into how much change has occurred in the student's acquisition of a skill, knowledge, or value before and after a given learning experience.

PURPOSES OF M-E-A
● Appraisal of the school, curriculum, instructional materials, physical plant, and equipment
● Appraisal of the teacher
● Appraisal of the school child

FUNCTIONS OF M-E-A
● Improvement of student learning
● Identification of students' strengths and weaknesses
● Assessment of the effectiveness of a particular teaching strategy
● Appraisal of the effectiveness of the curriculum
● Assessment and improvement of teaching effectiveness
● Communication with and involvement of parents in their children's learning

METHODS OF COLLECTING ASSESSMENT DATA
1. Paper-and-pen
⮚ Supply Type – requires the student to produce or construct an answer to the question
⮚ Selection Type – requires the student to choose the correct answer from a list of options
2. Observation
Involves watching the students as they perform certain learning tasks such as reading and speaking.

TYPES OF EVALUATION
1. DIAGNOSTIC EVALUATION
Undertaken before instruction in order to assess students' prior knowledge of a particular topic or lesson. Done to determine the strengths and weaknesses of students as bases for remedial instruction.
2. FORMATIVE EVALUATION
Administered during the instructional process to provide feedback to students and teachers on how well the former are learning the lesson being taught. Frequently done to determine who have reached mastery of the lesson.
3. SUMMATIVE EVALUATION
Undertaken to determine student achievement for grading purposes. Usually done at the end of a unit, and it summarizes the student's accomplishments.

APPROACHES TO EVALUATION
CRITERION-REFERENCED MEASURE (CRM)
● A student's performance is compared against a predetermined or agreed-upon standard.
● Designed to measure students' performance with respect to some particular criterion or standard; it is used to evaluate performance against a performance objective.
NORM-REFERENCED MEASURE (NRM)
● A student's performance is compared with the performance of other students.
● Designed to measure the ability of one student compared to the abilities of other students in the same class.

TYPES OF TESTS AND THEIR USES (Manarang, 1983; Louisell & Descamps, 1992)
1. Mode of Response
● Oral
● Written
● Performance
2. Ease of Quantification of Response
● Objective – with a definite/exact answer
● Subjective – divergent answers
3. Mode of Administration
● Individual – one student at a time
● Group – simultaneous
4. Test Constructor
● Standardized – prepared by an expert or specialist; follows uniform procedures
● Teacher-Made – prepared by the classroom teacher, with no established norm for scoring and interpretation
5. Mode of Interpreting Results
● Norm-Referenced – comparing performances of students
● Criterion-Referenced – comparing an individual performance with a specific goal
6. Nature of Answer
● Personality – emotion, social adjustment, dominance and submission, value orientation, disposition, emotional stability, frustration level, degree of introversion or extroversion
● Intelligence – mental ability (I.Q.)
● Aptitude – predicting the likelihood of success in a learning area
● Achievement – to determine what a student has learned from formal instruction
● Accomplishment – to determine what a student has learned from a broader area
● Socio-metric (Preference) – discovering a learner's likes and dislikes; social acceptance; social relationships
● Trade – to measure an individual's skill or competence in an occupation or vocation
● Speed – to determine ability and accuracy within a time limit
● Diagnostic – to identify specific strengths and weaknesses in past and present learning
● Formative – to improve teaching and learning while it is going on

● Summative – given at the end of instruction to determine students' learning and assign grades

TEST CONSTRUCTION: TEACHER-MADE TESTS
Why Assess?
● To determine entry knowledge and skills
● To check the status of learners' learning
● To determine if targets are met

STEPS IN WRITING TEST ITEMS
1. Identification of instructional objectives and learning outcomes.
2. Listing the topics to be covered in the test.
3. Preparation of the Table of Specification (TOS) (see the sketch after this list).
4. Selection of the appropriate type/s of test.
5. Writing the test items.
6. Sequencing the test items.
7. Writing the directions or instructions.
8. Preparation of the answer sheet (if necessary) and scoring key.
9. Administering the test.
10. Analyzing the test results.
11. Interpreting the test results.
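The reviewer only names the TOS as a step; a common convention (assumed here, not prescribed by the reviewer) is to allocate test items in proportion to the instructional time spent on each topic. The sketch below follows that convention; the topic names, hours, and the 50-item total are purely illustrative.

```python
# Hypothetical sketch: allocating items in a Table of Specification (TOS)
# in proportion to instructional hours. Topics, hours, and the 50-item
# total are illustrative assumptions.

def allocate_items(hours_per_topic, total_items):
    """Return the number of test items per topic, proportional to hours taught."""
    total_hours = sum(hours_per_topic.values())
    return {
        topic: round(hours / total_hours * total_items)
        for topic, hours in hours_per_topic.items()
    }

if __name__ == "__main__":
    hours = {"Measurement and Evaluation": 6, "Test Construction": 10,
             "Statistics": 8, "Authentic Assessment": 6}
    print(allocate_items(hours, total_items=50))
    # {'Measurement and Evaluation': 10, 'Test Construction': 17,
    #  'Statistics': 13, 'Authentic Assessment': 10}
```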
GENERAL GUIDELINES IN WRITING TEST ITEMS
● Avoid wording that is ambiguous and confusing.
● Use appropriate vocabulary and sentence structure.
● Keep questions short and to the point.
● Avoid using negative and double-negative statements.
● Avoid using abbreviations/acronyms, especially if not used/presented in class.
● There should be clear instructions.
● There should be a specified number of points.
● There should be no patterns provided.
● There should be proper mechanical make-up.
● Do not provide clues/hints to the answer.
● Use vocabulary suited to the maturity of the students.
● Use language that even the poorest readers will understand.
● Items should not be directly lifted from a book/reference.

TYPES OF TEACHER-MADE TEST
1. Objective
● Multiple Choice ● Analogy
● Matching Type ● Rearrangement
● Alternative Response ● Identification
● Completion/Augmentation ● Labeling
2. Subjective (Essay)
● Extended
● Restricted

MULTIPLE CHOICE
Stem – the question or problem in each item; can be presented in two ways:
● Incomplete statement – all the options end with a period, or only the last option ends with a period.
● Direct question – the options do not end with a period, but the stem ends with a question mark.
Options – the alternatives from which the student selects the correct answer; there is only one correct/best answer among the options, and the less appropriate ones are foils or distracters (the maximum number of options is 5 and the minimum is 4).

ADVANTAGES
● great versatility in measuring objectives – from the level of rote memorization to the most complex level
● the teacher can cover a substantial amount of course material in a relatively short time
● scoring is objective
● teachers can construct options that require students to discriminate among them – the options vary in degree of correctness
● effects of guessing are largely reduced since there are more options
● items are more amenable to item analysis

DISADVANTAGES
● more time-consuming in terms of looking for options that are plausible
● more than one defensible correct answer

RULES
● the essence of the problem should be in the stem; all options should measure the same objective
● when the incomplete statement format is used, the options should come at the end of the statement
● there should be coherence in stems and options
● there should be consistency in the length/presentation of choices
● avoid repetition of words in the options
● the choices should be arranged ascendingly/descendingly
● the choices should be arranged in vertical/columnar order
● stems and options should be stated positively whenever possible
● avoid negative statements or double-negative statements in the stem
● options should be plausible and homogeneous
● items should have a defensible correct or best option
● vary the placement of correct options (to avoid a pattern)
● avoid overlapping options
● options for the complex type must be clear
● make sure there is only one correct/best answer to an item
● the stem and options should be on a single page
● avoid using "none of the above"
● use the "none of the above" option only if there is an absolute right answer
● avoid using "all of the above"
⮚ It is a poor distracter since it has very little discriminating power to identify knowledgeable from non-knowledgeable students.
● do not have a combination of "all of the above" and "none of the above" in the options
● use four or five options
● there should be uniformity in the number of choices for all the items
● there should be no articles a/an at the end of the stem
● the stem should be clear and grammatically correct and should contain elements common to each option (multiple-choice items obey Standard English rules of punctuation and grammar; a question requires a question mark)

TYPES OF ESSAY
1. EXTENDED RESPONSE QUESTIONS
● Leave students free to determine the content and to organize the format of their answer
● Opinionated or open-ended answers are solicited from students
2. RESTRICTED RESPONSE QUESTIONS
● Limit both the content and the format of the students' answers
● Certain parameters are used in the questions/problems

ADVANTAGES OF ESSAY
● No guessing; assesses factual information
● Allows divergent thinkers to demonstrate higher-order thinking skills (HOTS)
● Reduces the lead time required to produce
● Less work to administer for a smaller number of students
● Can be rich in diagnostic information

DISADVANTAGES OF ESSAY
● Subjectivity in scoring
● Even different times of day make a difference
● The first paper to be read/checked often sets the standard
● Time-consuming in checking
● Can result in student rambling, confusion, or inability to find a focus

HOW TO WRITE ESSAY QUESTIONS
● Define the task clearly to the student

● When testing for content, make each item relatively short and increase the number of items
● Do NOT provide a choice of questions
● Devise the answer key as you write the question
● Give students the criteria for evaluating the answers
● Present material that calls for higher-order thinking skills

To be effective, essay questions need to…
● Be related to classroom and/or homework learning
● Be clearly articulated
● Be unambiguous
● Cover larger segments of material, rather than have a very limited scope
● Provide sufficient time for the quality of answers expected
● Require incorporation of factual knowledge
● Require students to provide reasoning for their answers
● Include clear directions as to length and structure

INCREASING OBJECTIVITY OF ESSAY SCORING
● Score blind
● Read one question at a time
● Guard against halo effects
● Have a policy on irrelevant answers and errors

TEST INTERPRETATION
CENTRAL TENDENCY
MEAN
● Most common measure of central tendency
● Best for making predictions
● The best measure of central tendency if the distribution is normal
● The arithmetic average, computed simply by adding together all scores and dividing by the number of scores
● It uses information from every single score

MEAN (Ungrouped)
X̄ = ΣX / n (the sum of all scores divided by the number of scores)
Example: If X = {3, 5, 10, 4, 3}, then
X̄ = (3 + 5 + 10 + 4 + 3) / 5 = 25 / 5 = 5

MEDIAN
● Divides a distribution of scores exactly in half
● The middle-most value
● Better than the mode because only one score can be the median, and the median will usually be around where most scores fall
● If data are perfectly normal, the mode is the median
● The median is computed when data are on an ordinal scale or when they are highly skewed
Finding the Median
● First, rank-order the values of X from low to high or vice versa
● Count the number of observations and add 1
● Divide by 2 to get the position of the middle score

MODE
● The most common observation in a group of scores
● Distributions can be unimodal, bimodal, or multimodal
● If the data are categorical (measured on the nominal scale), then only the mode can be calculated
● The most frequently occurring score

The Shape of Distributions
● With perfectly bell-shaped distributions, the mean, median, and mode are identical.
● With positively skewed data, the mode is lowest, followed by the median and then the mean.
● With negatively skewed data, the mean is lowest, followed by the median and then the mode.

Advantages and disadvantages of each measure:
Mean (sum of all values / number of values)
  Advantages: best-known average; exactly calculable; makes use of all the data; useful for statistical analysis.
  Disadvantages: affected by extreme values; can be absurd for discrete data (e.g., family size = 4.5 persons); cannot be obtained graphically.
Median (middle value)
  Advantages: not influenced by extreme values; obtainable even if the data distribution is unknown (e.g., grouped/aggregate data); unaffected by irregular class width; unaffected by open-ended classes.
  Disadvantages: needs interpolation for grouped/aggregate data (cumulative frequency curve); may not be characteristic of the group when (1) the items are only few or (2) the distribution is irregular; very limited statistical use.
Mode (most frequent value)
  Advantages: unaffected by extreme values; easy to obtain from a histogram; determinable from only the values near the modal class.
  Disadvantages: cannot be determined exactly in grouped data; very limited statistical use.
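Before moving on to dispersion, here is a minimal computational check of the three measures using the reviewer's own sample X = {3, 5, 10, 4, 3}; Python's statistics module is assumed as the tool.

```python
# The three measures of central tendency for the sample scores above.
import statistics

scores = [3, 5, 10, 4, 3]

mean = sum(scores) / len(scores)     # (3 + 5 + 10 + 4 + 3) / 5 = 5.0
median = statistics.median(scores)   # sorted: 3, 3, 4, 5, 10 -> middle value 4
mode = statistics.mode(scores)       # most frequent value -> 3

print(mean, median, mode)            # 5.0 4 3
# Shape check: mode (3) < median (4) < mean (5), the ordering described
# above for positively skewed data.
```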
MEASURE OF VARIABILITY/DISPERSION
● A numerical index that shows the extent to which the scores of a group scatter/disperse/spread below and above a central point in a distribution.
1. RANGE – the difference between the highest and the lowest values in a dataset. The range is simple to compute and is useful when you wish to evaluate the whole of a data set. It is useful for showing the spread within a dataset.
2. INTER-QUARTILE RANGE – indicates the extent to which the central 50% of values within the data set are dispersed. It is based upon, and related to, the median. The inter-quartile range provides a clearer picture of the overall data set than the range does.
3. STANDARD DEVIATION – summarizes the amount by which every value within a data set varies from the mean. It indicates how tightly the values in the data set are bunched around the mean value. The SD is the most widely used measure of dispersion; it takes into account EVERY VALUE in the dataset. It is usually presented in conjunction with the mean.
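The same sample can be used to sketch the three dispersion measures. Quartile (and therefore inter-quartile range) conventions differ slightly across textbooks and software; the default behavior of statistics.quantiles is assumed here, and pstdev treats the five scores as a whole population rather than a sample.

```python
# Range, inter-quartile range, and standard deviation for the same sample.
import statistics

scores = [3, 5, 10, 4, 3]

value_range = max(scores) - min(scores)        # 10 - 3 = 7

q1, _, q3 = statistics.quantiles(scores, n=4)  # quartile cut points
iqr = q3 - q1                                  # spread of the middle 50%

sd = statistics.pstdev(scores)                 # population standard deviation

print(value_range, iqr, round(sd, 2))
```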
Measures of Position (or Location or Relative Standing)
▪ Are used to locate the relative position of a data value in a data set
▪ Can be used to compare data values from different data sets
▪ Can be used to compare data values within the same data set

▪ Include z- (standard) scores, percentiles, quartiles, deciles, and stanine scores
a) Quartiles – divide the distribution into four parts
b) Deciles – divide the distribution into ten parts
c) Percentiles – divide the distribution into 100 parts
d) Stanine Scores – standard scores that tell the location of a raw score in a specific segment of a normal distribution which is divided into 9 segments, numbered from a low of 1 through a high of 9. Scores falling within the boundaries of these segments are assigned one of these 9 numbers (standard nine).
Stanines 1, 2, and 3 reflect BELOW AVERAGE performance.
Stanines 4, 5, and 6 reflect AVERAGE performance.
Stanines 7, 8, and 9 reflect ABOVE AVERAGE performance.
e) Standard or z-scores
- Can be used to compare data values from different data sets by "converting" raw data to a standardized scale
- Calculation involves the mean and standard deviation of the data set
- Represent the number of standard deviations that a data value is from the mean for a specific distribution
- Can be positive, negative or zero
f) T-scores – tell the location of a score in a normal distribution with a mean of 50 and a standard deviation of 10; a raw score is first standardized (the z statistic) and then rescaled to that distribution.
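To tie these position measures together, the sketch below converts one raw score into a z-score, a T-score, and a stanine. The class mean, standard deviation, and raw score are assumed values, and the stanine is approximated from the z-score using the conventional half-standard-deviation bands, which the reviewer does not spell out.

```python
# Hypothetical conversion of one raw score into the standard scores above.
# The mean, SD, and raw score are illustrative values only.

def z_score(raw, mean, sd):
    """Number of standard deviations the raw score lies from the mean."""
    return (raw - mean) / sd

def t_score(z):
    """Rescale a z-score to a distribution with mean 50 and SD 10."""
    return 50 + 10 * z

def stanine(z):
    """Approximate stanine from a z-score: bands 2-8 are 0.5 SD wide,
    stanines 1 and 9 catch the extremes."""
    cuts = [-1.75, -1.25, -0.75, -0.25, 0.25, 0.75, 1.25, 1.75]
    return 1 + sum(z > c for c in cuts)

raw, class_mean, class_sd = 82, 75, 8        # assumed values
z = z_score(raw, class_mean, class_sd)       # (82 - 75) / 8 = 0.875
print(round(z, 3), t_score(z), stanine(z))   # 0.875 58.75 7 (ABOVE AVERAGE)
```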
Sociometry is a quantitative method for measuring social relationships. The sociometric technique shows the interpersonal relationships among the members of a group.

PERSONALITY ASSESSMENTS
– used to determine one's emotional adjustment, interpersonal relations, motivation, interest, and attitudes/feelings toward self, others, and events or activities.

Psychometrics – the branch of psychology that deals with the design, administration, and interpretation of quantitative tests for the measurement of psychological variables such as intelligence and personality traits.

Likert Scale – a psychometric scale commonly involved in research that employs questionnaires. It is the most widely used approach to scaling responses in survey research, such that the term (or, more accurately, the Likert-type scale) is often used interchangeably with rating scale. Respondents specify their level of agreement or disagreement on a symmetric agree-disagree scale for a series of statements; the range thus captures the intensity of their feelings for a given item. The format of a typical five-level Likert item, for example, could be:
o Strongly disagree
o Disagree
o Neither agree nor disagree
o Agree
o Strongly agree
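Scoring a Likert-type item usually means coding the response categories as consecutive integers and then summing or averaging them; the sketch below assumes the common 1-5 coding and made-up responses. Nothing about this coding is prescribed by the reviewer.

```python
# Hypothetical scoring of one five-level Likert item across respondents,
# coding Strongly disagree = 1 up to Strongly agree = 5.
SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]

coded = [SCALE[r] for r in responses]    # [4, 5, 3, 4]
item_mean = sum(coded) / len(coded)      # 4.0 -> the group leans toward "Agree"
print(coded, item_mean)
```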
Semantic Differential Scale – a type of rating scale designed to measure the connotative meaning of objects, events, and concepts. The connotations are used to derive the attitude towards the given object, event or concept. The respondent is asked to choose where his or her position lies on a scale between two bipolar adjectives (for example: "Adequate-Inadequate", "Good-Evil" or "Valuable-Worthless"). Semantic differentials can be used to measure opinions, attitudes and values on a psychometrically controlled scale.

Thurstone Scale – a means of measuring attitudes (originally developed to measure attitudes towards religion). It is made up of statements about a particular issue, and each statement has a numerical value indicating how favorable or unfavorable it is judged to be. People check each of the statements with which they agree, and a mean score is computed, indicating their attitude.

SKEWNESS
1. Positive skew
2. Normal distribution
3. Negative skew

KURTOSIS
1. Platykurtic
2. Mesokurtic
3. Leptokurtic

ASSESSMENT OF LEARNING 2
Characteristics of 21st Century Assessments
1. Responsive
2. Flexible
3. Integrated
4. Informative
5. Multiple Methods
6. Communicated
7. Technically Sound
8. Systemic

Student learning outcomes are statements of the knowledge, skills and abilities individual students should possess and can demonstrate upon completion of a learning experience or sequence of learning experiences.

Characteristics of Good Student Learning Outcomes (SLO)
1. Good student learning outcomes (SLO) are centered on the students, on what the learners are capable of doing, instead of on the teaching technique.
2. Student learning outcomes are based on the issuances of government regulatory agencies, such as CHED's Policies, Standards and Guidelines on teacher education and DepEd's K to 12 Enhanced Basic Education law in the Philippines.
3. Good student learning outcomes are very well understood by both students and faculty.
4. Good learning outcomes include a spectrum of thinking skills, from the simple to the higher-order application of knowledge and skills.
5. Good learning outcomes are measurable. Students' competencies should be expressed as transitive verbs and/or action words which are demonstrable and observable at various levels.

Three Types of Learning
Learning can be achieved in different forms. To cater to these different forms of learning, Benjamin Bloom and a committee of colleagues in 1956 identified three domains of educational activities, namely: cognitive, affective, and psychomotor.

Cognitive = Knowledge
Psychomotor = Skills
Affective = Attitudes

BLOOM'S TAXONOMY of Educational Objectives
1. COGNITIVE DOMAIN – calls for outcomes of mental activity such as memorizing, reading, problem solving, analyzing, synthesizing, and drawing conclusions.
2. AFFECTIVE DOMAIN – refers to a person's awareness and internalization of objects and stimulation; it focuses on emotions.
3. PSYCHOMOTOR DOMAIN – focuses on the physical and kinesthetic skills of the learners, characterized by progressive levels of behavior from observation to mastery of physical skills.

Bloom's Cognitive Taxonomy
1. Knowledge – recognizes students' ability to use memorization and recall facts.
2. Comprehension – involves students' ability to read subject matter, extrapolate and interpret important information, and put other ideas into their own words.
3. Application – students take a new concept and apply it to another situation.
4. Analysis – students have the ability to take new information, break it down into parts, and differentiate among them.
5. Synthesis – students are able to take various types of information to form a whole, creating a pattern where one did not previously exist.
6. Evaluation – involves students' ability to look at someone else's ideas or principles and see the worth of the work and the value of the conclusion.

Bloom's Revised Taxonomy (Anderson and Krathwohl)
The revision restates the cognitive levels as Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating.

Psychomotor (Skills)
⮚ In the early seventies, E. Simpson, R.H. Dave, and A.J. Harrow recommended categories for the Psychomotor Domain, which included physical coordination, movement, and the use of the motor-skill body parts. Development of these skills requires constant practice in accuracy and speed.
⮚ Simpson contributed 7 categories,
⮚ Dave 5 categories, and
⮚ Harrow 6 categories.
⮚ They have been re-organized and simplified into 4 categories or levels.

AFFECTIVE DOMAIN
⮚ In 1964, David R. Krathwohl, together with his colleagues, extended Bloom's Taxonomy of Educational Objectives by publishing the second taxonomy of objectives, this time giving emphasis to the affective domain.
⮚ The affective domain refers to the way in which we deal with situations emotionally, such as feelings, appreciation, enthusiasm, motivation, values, and attitude.
⮚ The taxonomy is ordered into 5 levels as the person progresses towards internalization, in which the attitude or feeling consistently guides or controls a person's behavior.

AUTHENTIC ASSESSMENTS
Authentic assessments attempt to demonstrate what a student actually learns in class rather than the student's ability to do well on traditional tests and quizzes. Many have claimed this type of assessment to be an excellent means of evaluating a student's knowledge of subject matter.

CHARACTERISTICS OF AUTHENTIC ASSESSMENTS
1. Authentic Assessment starts with clear, definite criteria of performance made known to the students.
2. Authentic Assessment is criterion-referenced rather than norm-referenced, and so it identifies strengths and weaknesses but does not compare students nor rank their levels of performance.
3. Authentic Assessment requires students to make their own answers to questions rather than select from given options, as in multiple-choice items, and requires them to use a range of higher-order thinking skills (HOTS).
4. Authentic Assessment often emphasizes performance, and therefore students are required to demonstrate their knowledge, skills, or competencies in appropriate situations.
5. Authentic Assessment changes the role of students from passive test takers into active and involved participants in assessment activities that emphasize what they are capable of doing.

AUTHENTIC ASSESSMENT TOOLS
▪ Authentic assessment makes use of three modes of assessment:
1. Observations, which include data and information that the teacher collects from daily work with students.
2. Performance samples, which are tangible results that demonstrate student achievement.
3. Tests and measures of the student's actual performance at a specific place and time.

I. Observation-Based Assessment Tools
▪ To make observation-based assessment efficient and impartial, Diane Hart (1994) suggested the following guidelines:
1. Observe not only one but all the students.
2. Observation must be as frequent and as regular as possible.
3. Observations must be recorded in writing.
4. Reliability of observation records is enhanced if multiple observations are gathered and synthesized.

A developmental checklist is an observation tool which requires the teacher to describe the traits or learning behaviors being assessed. When used regularly during the school year, developmental checklists give a moving picture of the student's progress towards the desired competencies.
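As a small illustration of the "moving picture" idea, a developmental checklist can be kept as a simple record of which target behaviors have been observed and when. The behaviors and dates below are made-up examples, not items taken from the reviewer.

```python
# Hypothetical developmental checklist: each target behavior is marked
# with the dates on which it was observed during the school year.
from datetime import date

checklist = {
    "Reads a short paragraph aloud fluently": [date(2024, 6, 20), date(2024, 8, 15)],
    "Writes a complete sentence independently": [date(2024, 7, 4)],
    "Works cooperatively in a small group": [],   # not yet observed
}

for behavior, sightings in checklist.items():
    if sightings:
        print(f"{behavior}: observed {len(sightings)} time(s), last on {sightings[-1]}")
    else:
        print(f"{behavior}: not yet observed")
```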

An interview sheet is another observation tool, also called the conference recording form. Interview sheets consist of a list of questions the teacher intends to ask and space for recording the student's answers.

PERFORMANCE-BASED ASSESSMENT
⮚ the process of gathering information about students' learning through actual demonstration of essential and observable skills (process) and the creation of products.
Open to many possible answers and judged using multiple criteria or standards.

REASONS FOR USING PERFORMANCE-BASED ASSESSMENT
1. Dissatisfaction with the limited information obtained from selected-response tests.
2. Influence of cognitive psychology, which demands not only the learning of declarative but also of procedural knowledge.
3. Appropriate for experiential, discovery-based, integrated, and problem-based learning approaches.

SEVEN CRITERIA IN SELECTING A GOOD PERFORMANCE ASSESSMENT TASK
1. Authenticity – the task is similar to what students might encounter in the real world
2. Feasibility – realistically implementable (costs, space, time, equipment)
3. Generalizability – likelihood that the task will generalize to comparable tasks
4. Fairness – the task is fair to all regardless of social status or gender
5. Teachability – allows students to master the skill that they should be proficient in
6. Multi-foci – measures multiple instructional outcomes
7. Scorability – can be reliably and accurately evaluated

II. Performance Assessment Tools
▪ Student achievements at a specific place and time are actual student performances that deserve to be assessed.
▪ One of the most frequently used measurement instruments is the checklist.
▪ A performance checklist consists of a list of behaviors that make up a certain type of performance (e.g., using a microscope, preparing a letter, solving a mathematics problem, etc.).

Process-Oriented Performance-Based Assessment
1. Formulate a process-oriented learning competency.
2. Design a process-oriented learning activity.
3. Create a scoring rubric to evaluate the process of the activities and target the learning competencies.

Competencies are defined as groups or clusters of skills and abilities needed for a particular task. Examples of process-oriented learning competencies are:
1. Create a brochure in order to spread awareness about the COVID-19 virus and its disease.
2. Deliver a persuasive speech before a class.

When designing a task, see to it that what you are trying to target are the competencies that you set at the beginning of the lesson. Task design should relate to the learning competencies that will be evaluated and developed for the learners. This is to see to it that each learner becomes an active member of the group and develops accountability and responsibility through the given task. This will also ensure proper monitoring of the learners' progress.

A rubric is a versatile and flexible instrument that can support assessment for all purposes and can be used with many different methods.

BENEFITS OF RUBRICS
1. Rubrics can be used for both formative and summative assessment.
2. Other kinds of instruments can be developed from rubrics. A checklist can be expanded into a rubric, and the highest level of a rubric can often be made into a checklist. Rubrics can also be modified into scoring guides to assign points or grades to final products and performances.
3. Students can use rubrics throughout a project or unit for self- and peer-assessment.
4. Rubrics can make instruction more effective. Rubrics set out the learning expectations for a unit and can be used to plan instruction in 21st-century skills.

HOLISTIC RUBRIC
⮚ Provides an overall impression of a student's work
⮚ Yields a single score for a product or performance
⮚ Is well-suited to judging simple products or performances
⮚ Does not provide a detailed analysis of strengths and weaknesses

ANALYTIC RUBRIC
⮚ Divides the product or performance into distinct traits and judges each separately
⮚ Is better suited to judging complex performances involving several dimensions
⮚ Provides more specific information or feedback
⮚ Helps students better understand what quality of work is expected
⮚ Is more time-consuming to learn and apply

Important Elements of a Rubric
1. Competency to be tested
2. Task
3. Evaluative Criteria and their Indicators
4. Performance Levels
5. Qualitative and Quantitative descriptions of each performance level

STEPS IN WRITING RUBRICS
⮚ Set the Scale
Use your professional judgment to assess student learning on a scale of 1-3, 1-4, 1-5, or 1-10 that is appropriate for evaluating the performance/process/product.
⮚ Define the Ratings
Add appropriate descriptors to each number on the scale that you have identified.
4 = Advanced;
3 = Proficient;
2 = Basic;
1 = Beginning
⮚ Identify Basic Descriptions
Add simple descriptions for each number on the scale.
4 – Advanced ability to __________;
3 – Proficient ability to ___________;
2 – Basic ability to _____________;
1 – No ability to ________________.
⮚ Describe What Performance Will Look Like at Each Level
4 – The student is able to (description of what advanced performance would look like).
3 – The student is able to (description of what proficient performance would look like) but not yet able to (description of advanced performance).
2 – The student is able to (description of what basic performance would look like) but not yet able to (description of proficient performance).
1 – The student is unable to (description of desired performance).
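To make the analytic rubric concrete, the sketch below rates a product on several criteria using the 1-4 scale defined above and averages the ratings into an overall level. The criteria and the scores are illustrative assumptions, not prescriptions from the reviewer.

```python
# Hypothetical analytic rubric: each criterion is rated 1-4 and the ratings
# are averaged into an overall mark. Criteria and scores are illustrative.
RUBRIC_LEVELS = {4: "Advanced", 3: "Proficient", 2: "Basic", 1: "Beginning"}

def score_product(ratings):
    """ratings: dict of criterion -> level (1-4); returns (average, label)."""
    average = sum(ratings.values()) / len(ratings)
    return average, RUBRIC_LEVELS[round(average)]

persuasive_speech = {                 # criteria for the sample competency above
    "Content and organization": 4,
    "Delivery and eye contact": 3,
    "Use of evidence": 3,
    "Language conventions": 2,
}
print(score_product(persuasive_speech))   # (3.0, 'Proficient')
```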
PRODUCT-ORIENTED ASSESSMENT
The design of the task in this context depends on what the teacher desires to observe as outputs of the student.
Complexity – within the range of ability of the students.

Appeal – interesting enough so that students are encouraged to pursue the task to completion.
Creativity – leads the students into exploring the various possible ways of presenting the final outcome.

ASSESSMENT in the AFFECTIVE DOMAIN
The affective domain is the most nebulous and the hardest to evaluate among Bloom's three domains. Traditional assessment procedures, for instance, concentrate on the cognitive aspects of learning and typically focus on the development of tasks and instruments for measuring cognitive learning.

In the AFFECTIVE DOMAIN, consider the following concepts:
Attitudes
⮚ are defined as a mental predisposition to act that is expressed by evaluating a particular entity with some degree of favor or disfavor. Individuals generally have attitudes that focus on objects, people, or institutions. Attitudes are also attached to mental categories.
Motivation
⮚ A reason or set of reasons for engaging in a particular behavior, especially human behavior as studied in psychology and neuropsychology. According to Geen (1995), motivation refers to the initiation, direction, intensity, and persistence of human behavior. Motivation is the purposeful commitment to learn.

The NEED Theory (Abraham Maslow's Hierarchy of Human Needs)
One of the theories that explain human motivation.
● Human beings have wants and desires which influence their behavior; only unsatisfied needs can influence behavior, satisfied needs cannot.
● Needs are arranged in order of importance, from the basic to the complex.
● The person advances to the next level of needs only after the lower need is at least minimally satisfied.
● The further the progress up the hierarchy, the more individuality and psychological health a person will show.

The Two-Factor Theory (Frederick Herzberg)
Herzberg stated that certain factors in the workplace result in job satisfaction, while others do not but, if absent, lead to dissatisfaction. He distinguished between:
● Motivators (challenging work, recognition, responsibility), which give positive satisfaction
● Hygiene factors (status, job security, salary, and fringe benefits), which do not motivate if present but, if absent, will result in demotivation.

The ERG Theory (Existence, Relatedness, and Growth) – Clayton Alderfer

Self-efficacy
● An impression that one is capable of performing in a certain manner or attaining certain goals. It is a belief that one has the capabilities to execute the courses of action required to manage prospective situations.

Assessment TOOLS in the AFFECTIVE DOMAIN
Three feasible methods of assessing affective traits and dispositions:
1. teacher observation
2. student self-report
3. peer ratings

Three considerations in assessing the affective domain:
1. Emotions and feelings change quickly, most especially for young children and during early adolescence.
2. Use approaches as varied as possible in measuring the same affective trait.
3. Decide what type of data or results are needed: individual or group data.

Teacher Observation
This is often used when the teacher wants to observe approach behaviors (positive) and avoidance behaviors (negative). There are two types of this kind of observation:
1. Unstructured observation – normally open-ended; no checklist or rating scale is used, and everything that is observed is simply recorded.
2. Structured observation – a checklist or rating form is prepared beforehand, since it will be used to record the observations.

Student Self-Report
Student interview – gives teachers an opportunity for direct involvement with the students, wherein teachers can probe and respond for better understanding.
Surveys and questionnaires – there are two types: the constructed-response, straightforward approach, which asks the students about their affect by having them respond to a simple statement or question; and the selected-response, in which students choose from options, which assures anonymity.

Peer Ratings
The least common method of assessing the affective domain, because students may not take it seriously. However, teachers can accurately observe what is being assessed in peer ratings since teachers also engage with the students.

Checklists
– contain criteria that focus on the intended outcome or target. They help students organize the tasks assigned to them into logically sequenced steps that will lead to successful completion of the task.

Rating Scale
– helps students understand the learning targets/outcomes and focuses students' attention on performance. It gives feedback to students on their strengths and weaknesses with respect to the targets against which they are measured.

Likert Scale
– respondents specify their level of agreement or disagreement on a symmetric agree-disagree scale for a series of statements; the range thus captures the intensity of their feelings for a given item. The format of a typical five-level Likert item, for example, could be:
Strongly disagree
Disagree
Neither agree nor disagree
Agree
Strongly agree

Semantic Differential Scale
– a type of rating scale designed to measure the connotative meaning of objects, events, and concepts. The connotations are used to derive the attitude towards the given object, event, or concept. The respondent is asked to choose where his or her position lies on a scale between two bipolar adjectives (for example: "Adequate-Inadequate", "Good-Evil" or "Valuable-Worthless"). Semantic differentials can be used to measure opinions, attitudes, and values on a psychometrically controlled scale.

Sentence Completion
– captures whatever comes to mind from each student.

PORTFOLIO ASSESSMENT
1. CONTENT PRINCIPLE – the portfolio should reflect the subject matter that is important for the students to learn.
2. LEARNING PRINCIPLE – the portfolio should enable students to become active learners.
3. EQUITY PRINCIPLE – the portfolio should allow students to demonstrate their personal learning styles and multiple intelligences.

Types of PORTFOLIOS
1. Showcase Portfolio – a selection of best works. The student chooses the work, accomplishments are profiled, and an individual profile emerges.
2. Documentation Portfolio – like a scrapbook of information and examples. Includes observations, tests, checklists, and rating scales.
3. Process Portfolio – demonstrates all facets or phases of the learning process.

Stages in Implementing PORTFOLIO Assessment
Stage 1 – Identifying teaching goals to assess
Stage 2 – Introducing the idea of portfolio assessment to the class
Stage 3 – Specification of portfolio content
Stage 4 – Giving clear and detailed guidelines for portfolio presentation
Stage 5 – Informing key school officials, parents, and other stakeholders
Stage 6 – Development of the portfolio

GRADING and REPORTING SYSTEM
Grading is the process of judging the quality of a student's performance. The grade, in turn, is the symbol used to represent the achievement or progress of a student.
According to Erickson and Strommer (1999, cited by Gabuyo & Dy, 2013), grading and reporting systems are used to:
⮚ Communicate the achievement of the students
⮚ Provide students information to improve their self-evaluation
⮚ Provide incentives for students to learn
⮚ Select or group students for certain educational paths or programs
⮚ Evaluate the effectiveness of the program
⮚ Inform the teacher about what students have and have not learned
⮚ Motivate and encourage good work

Functions of Grading and Reporting
1. Enhancing students' learning
o enhancing students' motivation
o indicating where teaching might be modified
2. Reports to parents/guardians
o informing parents and guardians of students' progress
3. Administrative and guidance uses
o helping to decide promotion, graduation, honors, and athletic eligibility
o reporting achievements to other schools or to employers
o providing input for realistic educational, vocational, and personal counseling

Methods of Computing Final Grade
There are two conventional methods of computing the final grades:
1. the averaging method and
2. the cumulative method.
The computation of the final grade will depend on the grading policy of the school, college, or university.

CLASSROOM ASSESSMENT IN K TO 12
The DepEd policy on classroom assessment for the K to 12 Basic Education Program stipulated that there are three components of summative assessment, namely: Written Works, Performance Tasks, and Quarterly Assessment.
Written Works – This component ensures that students are able to express learned skills and concepts in written form. This may include essays, written reports, long quizzes, and other written outputs.
Performance Tasks – This component allows the learners to show what they know and are able to do in diverse ways. Learners may create innovative products or do performance-based tasks. Note that some written outputs may be considered performance tasks.
Quarterly Assessment – This component measures learning at the end of the quarter. This may be in the form of objective tests, performance-based assessments, or a combination thereof.
The grading system, according to this policy, is standards-based and competency-based. For Kindergarten, checklists, anecdotal records, and portfolios are used instead of numerical grades, based on the Kindergarten curriculum guide. On the other hand, Grade 1 to 12 learners are graded on three components every quarter: Written Works, Performance Tasks, and Quarterly Assessment. The weights of these components vary depending on the subject and grade level of the learner. All grades are based on the weighted raw scores of the learner's summative assessments. The minimum grade needed to pass a specific learning area is 60 (percentage score), which is then transmuted to 75 in the report card. The lowest mark that can appear on the report card for Quarterly Grades and Final Grades is 60.
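As an illustration of how the three components combine, the sketch below computes a weighted quarterly grade. The 30/50/20 weights and the raw scores are assumed values (the policy lets the weights vary by subject and grade level), and the DepEd transmutation table is not reproduced here; the sketch only checks the pass mark stated above, a weighted score of 60, which corresponds to 75 on the report card.

```python
# Hypothetical quarterly grade from the three summative components.
# Weights and raw scores are illustrative assumptions only.

def percentage_score(raw, highest_possible):
    return raw / highest_possible * 100

def initial_grade(components, weights):
    """components: dict of name -> (raw score, highest possible score)."""
    return sum(percentage_score(*components[k]) * weights[k] for k in weights)

weights = {"written": 0.30, "performance": 0.50, "quarterly": 0.20}  # assumed
components = {
    "written": (68, 80),        # Written Works
    "performance": (115, 120),  # Performance Tasks
    "quarterly": (38, 50),      # Quarterly Assessment
}

grade = initial_grade(components, weights)
print(round(grade, 2), "PASSED" if grade >= 60 else "FAILED")  # 88.62 PASSED
```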
NORM-REFERENCED and CRITERION-REFERENCED GRADING
There are two common types of grading systems used at different educational levels in the Philippines: the norm-referenced grading system and the criterion-referenced grading system.
In the norm-referenced grading system, a student's performance is evaluated relative to the performance of the other students within the group. When grades are compared to those of other students (where you rank), it is called norm-referencing. In such a system, the grade depends on what group you are in, not just on your own performance. In addition, the typical grade may be shifted down or up, depending on the group's ability.
ADVANTAGES:
1. It is easy to use.
2. It works well for courses with retention policies, since it limits to only a few the students who advance to the next level of the course.
3. It is useful if the focus is on the individual achievement of the students.
4. It does not encourage cooperation among the students.
5. The teacher easily identifies the learning criteria – the percentage of students who received the highest grade or the lowest grade.
DISADVANTAGES:
1. The performance of a student is determined not only by his achievement but also by the achievement of the other students.
2. It promotes competition among the students rather than cooperation.
3. Not all students can pass the given subject or course.

The criterion-referenced grading system gives grades that reflect absolute performance, or performance compared to specified standards. The student must get a grade higher than or equal to a given standard to pass a given test.
ADVANTAGES:
1. The performance of the students will not be affected by the performance of the whole class.
2. All students may pass the subject or course when they meet the standard set by the teacher.
DISADVANTAGES:

1. It is difficult to set a reasonable standard if it is not stated in the grading policies of the institution.
2. All students may not pass the subject or course when they do not meet the standard set by the teacher or the institution.
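The difference between the two systems can be seen by grading the same set of scores both ways: once against a fixed standard (criterion-referenced) and once against the class's own ranking (norm-referenced, here a simple rank-based curve). The cutoff of 75, the sample scores, and the top-quarter/bottom-quarter rule are illustrative assumptions only.

```python
# Illustrative contrast between criterion-referenced and norm-referenced
# grading; the cutoff, scores, and curve rule are made-up assumptions.

scores = {"Ana": 88, "Ben": 74, "Carla": 91, "Dan": 79, "Ella": 67}

# Criterion-referenced: everyone who reaches the fixed standard passes,
# regardless of how classmates perform.
CUTOFF = 75
criterion = {name: ("PASS" if s >= CUTOFF else "FAIL") for name, s in scores.items()}

# Norm-referenced: standing depends on rank within the group; here the
# top quarter of the class gets the highest mark and the bottom quarter
# the lowest, whatever the absolute scores are.
ranked = sorted(scores, key=scores.get, reverse=True)
quarter = max(1, len(ranked) // 4)
norm = {name: ("A" if i < quarter else "D" if i >= len(ranked) - quarter else "B/C")
        for i, name in enumerate(ranked)}

print(criterion)  # Ben and Ella fall below the 75 standard
print(norm)       # Carla tops the curve; Ella lands at the bottom
```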
Guidelines for Effective and Fair Grading
1. Discuss the grading procedures with the students at the very start of instruction.
2. Make clear to the students that their grades will be purely based on achievement.
3. Explain how other elements like effort or personal-social behaviors will be reported.
4. Relate the grading procedures to the intended learning outcomes or goals/objectives.
5. Get hold of valid evidence, such as test results, report presentations, projects, and other assessment results, as bases for computing and assigning grades.
6. Take precautions to prevent cheating on tests and other assessment measures.
7. Return all tests and other assessment results as soon as possible.
8. Assign a weight to the various types of achievement included in the grade.
9. Tardiness, weak effort, or misbehavior should not be charged against the achievement grade of students.
10. Be judicious/fair and avoid bias.
11. Grades are black and white; as a rule, do not change grades.
12. Keep pupils informed of their class standing or performance.

Conducting a Parent-Teacher Conference
❑ Make plans for the conference. Set the goals and objectives of the conference ahead of time.
❑ Begin the conference in a positive manner. Starting the conference by making a positive statement about the student sets the tone for the meeting.
❑ Present the student's strong points before describing the areas needing improvement. It is helpful to present examples of the student's work when discussing the student's performance.
❑ Encourage parents to participate and share information.
❑ Plan a course of action cooperatively.
❑ End the conference with a positive comment.
❑ Use good human relations skills during the conference.

STATISTICS and COMPUTER: Tools for Analyzing Data
Descriptive statistics summarize and organize the characteristics of a data set. A data set is a collection of responses or observations from a sample or an entire population.
There are 3 main types of descriptive statistics:
● The distribution concerns the frequency of each value.
● The central tendency concerns the averages of the values.
● The variability or dispersion concerns how spread out the values are.
Inferential statistics allow you to make predictions ("inferences") from that data. With inferential statistics, you take data from samples and generalize about a population.
The most common methodologies of inferential statistics are:
a. hypothesis tests
b. confidence intervals
c. regression analysis

Standard Score or z-score
▪ Can be used to compare data values from different data sets by "converting" raw data to a standardized scale
▪ Calculation involves the mean and standard deviation of the data set
▪ Represents the number of standard deviations that a data value is from the mean for a specific distribution
▪ Is obtained by subtracting the mean from the given data value and dividing the result by the standard deviation
▪ Can be positive, negative, or zero

Stanine Scores
▪ Standard scores that tell the location of a raw score in a specific segment of a normal distribution which is divided into 9 segments, numbered from a low of 1 through a high of 9
▪ Scores falling within the boundaries of these segments are assigned one of these 9 numbers (standard nine)
▪ Stanines 1, 2, and 3 reflect BELOW AVERAGE performance.
▪ Stanines 4, 5, and 6 reflect AVERAGE performance.
▪ Stanines 7, 8, and 9 reflect ABOVE AVERAGE performance.
