Unit 5

Stage 3-Part II

Monitoring
&
Assessment
1. Monitoring
What is it?
 Monitoring is a periodic assessment & adjustment during the try-out period.

 It is like formative evaluation.

 It determines how the curriculum is working; the monitoring report then becomes the basis for deciding which aspects should be retained, improved, or modified.

 Monitoring may also provide the basis for a decision to end or terminate the program.

 Usually, monitoring is done by the module writers, curriculum experts, or outside agencies.

 It is an important aspect of curriculum design.


 Monitoring is a process of gathering information for evaluating the effectiveness of the curriculum & ensuring that the intended curriculum, the implemented curriculum, & students’ learning achievements are aligned.

 This process typically focuses on such issues as relevance, consistency, practicality, effectiveness, scaling-up and sustainability, as well as whether learners are achieving the expected learning outcomes.

 It measures the extent to which the curriculum is commensurate with the diverse needs of all learners.
Importance

Monitoring …

Provides constant feedback on the extent to which the curricula are achieving their goals.

Identifies potential problems at an early stage and proposes possible solutions.

Monitors the accessibility of the curriculum to all sectors of the target population.

Monitors the efficiency with which the different components of the curriculum are being implemented and suggests improvements.

Evaluates the extent to which the curriculum is able to achieve its general objectives.
10 Steps for a Successful Systemic Process

1. Clarify purpose & identify roles / responsibilities.
2. Identify acceptable evidence of success.
3. Determine process for gathering data.
4. Identify instruments for data gathering & recording.
5. Train monitors.
6. Prepare staff.
7. Conduct the monitoring.
8. Analyze data & share results.
9. Determine plan of action.
10. Recognize that monitoring gives a schoolwide picture through many
small snapshots.
2. Assessment
What is it?
Assessment is the systematic collection, review and use of
information about educational programs to improve student learning.
Assessment focuses on what students know, what they are able to do,
and what values they have when they graduate.

Types of Assessment
Placement Test/Assessment
Observation of Learning
Short-Term Achievement Tests/Assessment
Diagnostic Tests/Assessment
Achievement Tests/Assessment
Proficiency Tests/Assessment
Placement Test/Assessment

 Used to place a student into an appropriate level of a language curriculum.

 Tests include language items & four skills.


 The results have to be available quickly.
 Purpose: to ensure that the course is neither too easy nor too difficult for the learner.
 Common tests include:
 Vocabulary tests, cloze tests, & structured interviews;
 Proficiency tests & diagnostic tests can act in the role of placement tests.
Observation of Learning

Monitoring learners' progress in a course can occur at the level of the learning activity.

 Focus on TASKS/ACTIVITIES, not learners

 Use of observations, surveys, checklists.

Purpose: decide whether the activity is valid (necessary, desired, etc.) to encourage learning.
Short-Term Achievement Tests/Assessment
 Learners are monitored to see their progress at regular intervals during
the course.

 These include weekly tests (quizzes), self-assessment records, etc.

Good short-term achievement tests should …

 provide a clear record of progress that is easily interpreted;
 be in a form that motivates learners to keep working towards the course goals;
 not occupy too much class time;
 be a regular, expected part of class activity.
Diagnostic Tests/Assessment
 Used to identify learners' strengths and weaknesses;
 Can be limited to a single skill or cover overall skills
 Can be paper-, oral-, or computer-based

Purposes: to find the gaps and weaknesses and provide a remedy for them; diagnostic assessment is also part of needs analysis, determining what goes into a course.

Diagnostic information can be obtained by:

 Interpreting results of placement, achievement and proficiency tests

 Analyzing language use (four skills, conversational activity, interviews) and self-assessment with checklists or scales.
Achievement Tests/Assessment

 are based on what the learners have studied and show their
progress through the course.

 Measures learners' achievement, usually at the end of a course (and perhaps through one or two tests during the course), as well as the effectiveness of the course.

o Includes mid-term exams, final exams


o Provides valuable washback on teaching and learning.

Characteristics of good achievement tests:

 are based on material taught in the course;
 learners usually know what material will be covered;
 are criterion-referenced: a criterion is set indicating whether learners have achieved enough to be given a pass for the course.
Proficiency Tests/Assessment

 Used to test a learner’s overall language ability

 The content of a proficiency test, therefore, is not based on the content or objectives of any particular language course
 May be done before or at the end of a course
 Used to enter or exit the course or program (TOEFL or IELTS)

NOTE: Proficiency tests are not usually made by teachers for a particular course, but are made by some outside organization or person who is interested in comparing learners who have studied in different courses.
Reliability

Reliability in testing refers to the consistency of results under similar conditions. A reliable test should yield consistent scores for the same individual when taken multiple times.

Several factors contribute to test reliability:

 Consistency in test conditions, such as time constraints and presentation of instructions.
 Consistent and clear marking procedures.
 A large number of assessment points, such as many questions or items in
a test.

An unreliable test cannot be considered valid, emphasizing the importance of reliability in ensuring that a test accurately measures what it is intended to measure.
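
As an illustrative sketch only (not part of the original slides), reliability is often reported as a coefficient: a test-retest correlation shows whether the same learners get consistent scores on two sittings, and Cronbach's alpha reflects how well a large number of assessment points (items) hang together. The Python below uses invented scores purely to show the idea.

import statistics

def pearson(x, y):
    # Correlation between two sets of scores from the same learners;
    # values near 1.0 indicate consistent (reliable) results.
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(item_scores):
    # item_scores: one list of learner scores per test item.
    # More items (assessment points) generally push alpha higher.
    k = len(item_scores)
    item_variance_sum = sum(statistics.pvariance(item) for item in item_scores)
    totals = [sum(per_learner) for per_learner in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_variance_sum / statistics.pvariance(totals))

# Hypothetical data: five learners sit the same test twice.
first_sitting = [62, 75, 80, 55, 90]
second_sitting = [60, 78, 79, 58, 88]
print(round(pearson(first_sitting, second_sitting), 2))  # ~0.98, i.e. consistent scores
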
Validity

Validity in testing refers to the degree to which a test measures what it is intended to measure (e.g., a listening test should actually measure listening).
Two practical ways to assess the validity of a test are through face
validity and content validity.
Face validity involves determining if the test appears to measure
what it claims to measure.
Content validity is assessed by analysing the test content in comparison to what it is supposed to measure.
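
As a rough, hypothetical illustration (not from the slides), a content-validity check can be sketched as mapping test items to the course objectives they are supposed to cover; the objective names and items below are invented.

# Hypothetical sketch: compare test content against what the course is meant to cover.
course_objectives = {"listening", "reading", "vocabulary", "grammar"}

test_items = [
    {"id": 1, "objective": "listening"},
    {"id": 2, "objective": "vocabulary"},
    {"id": 3, "objective": "vocabulary"},
    {"id": 4, "objective": "grammar"},
]

covered = {item["objective"] for item in test_items}
missing = course_objectives - covered

print(f"Objectives covered: {len(covered) / len(course_objectives):.0%}")  # 75%
print(f"Not tested: {sorted(missing)}")  # ['reading']
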
Practicality

 Refers to the logistical, down-to-earth, administrative issues involved in making, giving, & scoring an assessment instrument.

These include “costs, the amount of time it takes to construct & to administer, the number of people needed to administer and mark the test, ease of scoring, & ease of interpreting/reporting the results”.

Tests can be made more practical by:

 having reusable test papers, being carefully formatted for easy marking, not being too long, & using objectively scored items such as true/false or multiple choice.
Authenticity

Refers to the degree of correspondence of the characteristics of a given language test task to the features of a target language task.

AN AUTHENTIC TEST...

 contains language that is as natural as possible.

 has items that are contextualized rather than isolated.

 includes meaningful, relevant, interesting topics.

 provides some thematic organization to items, such as through a story line or episode.

 offers tasks that replicate real-world tasks.


Summary of the Steps

Decide what kinds of assessment are needed & when they are
needed.

Write the tests.

Check the reliability, validity, practicality, and AUTHENTICITY of the tests.
Conclusion

 Assessment…

 Is a major source of information for the evaluation of a course & thus its gradual improvement.

 Also contributes significantly to the teacher’s & learners’ sense of achievement in a course & thus is important for motivation.

 Curriculum design should include the planning of a well thought-out assessment program of various kinds.
