
UNIVERSITY OF NORTHEASTERN PHILIPPINES

SCHOOL OF GRADUATE STUDIES


IRIGA CITY

EDUCATION 203: TEST CONSTRUCTION AND EVALUATION OF CURRICULUM


MIDTERM EXAM
NAME: GROUP 8 Permit No.: ____
COURSE: MAED, MAJOR IN ADMINISTRATION AND SUPERVISION Date: April 27, 2024
I. Essay
1. What is measurement? Differentiate it from evaluation.

Measurement
• is the process of assigning numbers or quantities to objects, events, or attributes according to particular rules or criteria. Data are collected to quantify or characterize a specific attribute or variable. In education, measurement is used to evaluate and quantify many aspects of student learning, such as knowledge, skills, abilities, and attitudes. It frequently employs standardized exams, assessments, or other measurement tools to collect data and issue scores or ratings.
• is the process of assigning quantities to objects, events, or attributes using particular rules or criteria.
• is the process of determining the characteristics of an educational process, program, or curriculum against an accepted standard or applied criteria in order to compare learning performance.

while

Evaluation
• is a broader process that entails making judgments or assessments about the quality, value, or efficacy of something. It extends beyond simple measurement to include the interpretation and analysis of data gathered through measurement in order to reach informed conclusions or decisions. In education, evaluation is used to determine the overall efficacy of educational programs, instructional practices, and policies. It entails weighing several sources of data, including measurements, to identify strengths, shortcomings, and areas for improvement.
• is the process of interpreting information and making judgments about what has been learned.
• uses methods and measures to judge student learning and understanding of the material for purposes of grading and reporting.

2. Why is evaluation considered an integral part of the teaching-learning process?


Evaluation
• is an important component of the teaching-learning process because it offers feedback to instructors and students, assesses learning outcomes, ensures accountability and quality, motivates students, supports evidence-based decision making, and promotes continual development in education. It is critical for enhancing instruction, monitoring student success, and making informed decisions to improve educational quality.
• It helps in forming judgments about the value, educational status, or achievement of students.
• It allows teachers to determine whether students have learned the intended material.
• It helps teachers and learners improve teaching and learning.
• It makes way for more systematic changes in the lesson plan and facilitates the overall development of the students along with the growth of the teacher. If there is a glitch in the lesson plan, evaluation points it out.
3. How does a criterion-referenced test differ from a norm-referenced test?
Norm-referenced tests aim to sort and rank students, often for competitive purposes. Criterion-referenced tests, however, are more concerned with whether a student has achieved specific learning goals. In summary:

Criterion-Referenced Test
• Focuses on absolute standards and determines whether a student has met specific criteria.
• Does not rely on a reference group; evaluates student performance against specific criteria.
• Often used to measure mastery of specific skills or knowledge.
• Reports performance in terms of meeting specific standards or criteria.

Norm-Referenced Test
• Focuses on relative performance and compares a student's performance to that of a larger group.
• Compares a student's performance to the performance of a norm or reference group.
• Commonly used for comparing and ranking students, such as in college admissions or standardized assessments.
• Reports performance in percentiles or other relative measures.

4. What are the steps in constructing teacher-made tests?


General Steps in the Preparation of a Teacher-Made Test
• Prepare a table of specifications.
• Define clearly the instructional objectives of the course.
• Analyze the relative importance of these objectives.
• Construct the test in line with its purpose.

Step-by-Step Teacher-Made Test Construction
(1) Define the learning objectives.
(2) Determine the test format, such as multiple-choice, true or false, etc.
(3) Create a pool of test items that measure the identified learning objectives.
(4) Establish the item difficulty and discrimination indices.
(5) Organize the test items.
(6) Set the test length.
(7) Review and revise the test for accuracy, clarity, and alignment.
(8) Finalize the test.
(9) Administer the test and analyze the results.

5. Discuss how to conduct item analysis.


(1) Administer the test to a group of students and collect their responses.
(2) Calculate the difficulty level of each item by determining the proportion or
percentage of students who answered the item correctly.
(3) Calculate the discrimination power of each item by comparing the performance of
the top-scoring group with the bottom-scoring group.
(4) For multiple-choice items, calculate the point-biserial correlation, which measures
the correlation between item responses (correct/incorrect) and the total test score
(see the sketch after this list).
(5) For dichotomous items (true/false, yes/no), calculate the Discrimination Index (DI),
which is the difference between the proportion of high scorers who answered the
item correctly and the proportion of low scorers who answered the item correctly.
(6) For rating scale or Likert-type items, calculate the Item-Total Correlation, which
measures the correlation between item responses and the total test score.
(7) Review and interpret the results, identifying problematic items with extreme
difficulty levels or low discrimination values.
(8) Consider the content, wording, and clarity of problematic items and make decisions
on revising or removing them from the test.
(9) Assess the reliability of the test items using measures such as Cronbach's alpha or
Split-Half reliability.
(10) Revise or eliminate problematic items and consider developing new items to
replace them, ensuring the revised test maintains a balance of difficulty levels and
discriminates well between high and low performers.
(11) Conduct a pilot test with the revised items if necessary.
(12) Repeat the item analysis process with the revised items to ensure their quality
and effectiveness.
(13) Use the item analysis results to inform instructional decisions, improve the test,
and enhance the validity and reliability of the assessment.
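
As an illustration of steps (2) and (4), here is a minimal Python sketch with hypothetical data (ten students' 0/1 responses to one item, paired with their total scores); the point-biserial formula used is the standard one, r_pb = (M_correct - M_total) / SD_total x sqrt(p/q):

    import statistics

    # Hypothetical data: 1 = correct, 0 = incorrect on one item,
    # paired with each student's total test score.
    item = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    totals = [48, 45, 30, 42, 28, 47, 44, 25, 40, 46]

    # Step 2: difficulty level = proportion who answered correctly.
    p = sum(item) / len(item)

    # Step 4: point-biserial correlation between the item and the total score.
    mean_correct = statistics.mean(t for t, i in zip(totals, item) if i == 1)
    mean_total = statistics.mean(totals)
    sd_total = statistics.pstdev(totals)
    r_pb = (mean_correct - mean_total) / sd_total * (p / (1 - p)) ** 0.5

    print(f"Difficulty: {p:.2f}, Point biserial: {r_pb:.2f}")

A positive point-biserial means students who got the item right also tended to score well overall, which is what step (7) looks for when flagging problematic items.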

6. What is the importance of TOS?

The table of specifications (TOS) is an important tool in the assessment and evaluation
process. It serves as a blueprint that outlines the content areas or topics to be assessed
and the corresponding cognitive levels or learning objectives to be measured. Here is a
brief discussion of the importance of the table of specifications:
• The TOS ensures that the evaluation reflects the intended learning objectives or outcomes. By stating precisely the content areas and cognitive levels to be tested, you can be confident that the assessment reflects what students are expected to know and be able to do.
• The TOS helps to ensure that the evaluation includes a representative sample of content from across the curriculum. It gives a thorough and complete overview of the topics or content areas that should be covered in the assessment, avoiding overemphasis or neglect of specific areas.
• The TOS contributes to a balanced evaluation by ensuring that all content areas and cognitive levels are well represented. It prevents undue emphasis on a single topic or cognitive level while offering a balanced and thorough assessment of students' knowledge and skills.
• The TOS guides test development by setting the number of items or points assigned to each content area and cognitive level. It helps to ensure that test items are distributed fairly and reflect the relative significance of the various topics or cognitive levels.
• The TOS helps to ensure the validity and reliability of the assessment. Aligning the assessment with the desired learning objectives and ensuring content coverage improves the validity of inferences drawn from the assessment findings. It also contributes to the assessment's reliability by guaranteeing consistency and fairness in both content and cognitive-level representation.
• The TOS can also inform instructional decisions by flagging areas that require more concentration or further instruction. It assists teachers in identifying the content areas or cognitive levels that students may be struggling with, allowing them to tailor their teaching tactics accordingly.

7. What are the difficulty index and the discrimination index?

Difficulty Index
• Measures the level of difficulty or easiness of a test item.
• Indicates the proportion or percentage of students who answered the item correctly.
• Higher difficulty index values indicate easier items; lower values indicate more challenging ones (see the interpretation scale in Part III).
• Calculated by dividing the number of students who answered the item correctly by the total number of students (multiplied by 100 to express it as a percentage). For example, if 45 of 60 students answer an item correctly, the difficulty index is 45/60 = 0.75.
• Helps identify the level of mastery required to answer the item correctly.

Discrimination Index
• Measures the extent to which a test item effectively discriminates between high-performing and low-performing students.
• Indicates the item's ability to differentiate between students of different abilities.
• Positive discrimination index values suggest effective discrimination, while negative values indicate ineffective discrimination.
• Calculated by comparing the performance of the top-scoring group with that of the bottom-scoring group on the item.
• Helps identify items that effectively differentiate between students with higher and lower levels of mastery.

8. Discuss the steps in the U-L method of item analysis.


(1) Administer the test to a group of students and collect their responses.
(2) Divide the students into two groups: the upper group (top-scoring students) and the
lower group (bottom-scoring students). The percentage of students in each group
can vary, but a common approach is to use the top and bottom 27%.
(3) Calculate the difficulty index for each item by determining the proportion or
percentage of students in the upper and lower groups who answered the item
correctly.
(4) Calculate the discrimination index for each item by subtracting the difficulty index of
the lower group from the difficulty index of the upper group (a sketch of steps 2-4
follows this list).
(5) Review the difficulty and discrimination indices to identify problematic items. Items
with extreme difficulty levels or low discrimination values may need to be revised or
removed from the test.
(6) Use the item analysis results to make decisions about item retention, revision, or
elimination based on the established criteria for acceptable difficulty and
discrimination indices.
(7) Consider additional factors such as content validity, item clarity, and item relevance
when making decisions about item retention or revision.
(8) Revise or eliminate problematic items and consider developing new items to replace
them, ensuring that the revised test maintains a balance of difficulty levels and
discriminates well between high and low performers.
(9) Conduct a pilot test with the revised items if necessary.
(10) Repeat the item analysis process with the revised items to ensure their quality
and effectiveness.
(11) Use the item analysis results to inform instructional decisions, improve the test,
and enhance the validity and reliability of the assessment.
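
A minimal Python sketch of steps (2) through (4); the scores, responses, and variable names are hypothetical:

    # Hypothetical data: (total score, 1/0 response to the item) per student.
    students = [(50, 1), (48, 1), (47, 1), (45, 0), (40, 1), (35, 0),
                (30, 1), (28, 0), (25, 0), (20, 0), (18, 0), (15, 0)]

    # Step 2: sort by total score and take the top and bottom 27%.
    students.sort(key=lambda s: s[0], reverse=True)
    n = round(0.27 * len(students))
    upper = [resp for _, resp in students[:n]]
    lower = [resp for _, resp in students[-n:]]

    # Step 3: proportion answering correctly in each group; the item's
    # difficulty index is the average of the two proportions.
    p_upper = sum(upper) / n
    p_lower = sum(lower) / n
    difficulty = (p_upper + p_lower) / 2

    # Step 4: discrimination index = upper proportion minus lower proportion.
    discrimination = p_upper - p_lower

    print(f"Difficulty: {difficulty:.2f}, Discrimination: {discrimination:.2f}")

An item that everyone in the upper group gets right and everyone in the lower group gets wrong has a discrimination index of 1.0; identical performance in both groups gives 0.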

9. What is validity? How is validity established?

Validity
• is the degree to which a test or evaluation accurately assesses what it is designed to measure. It is a critical component of assessment quality, ensuring that inferences drawn from test scores or assessment outcomes are relevant and appropriate.
• refers to the outcomes of an assessment and whether the evidence known about the assessment supports the way in which the results are used. Test users need to be sure that the particular assessment they are using is appropriate for the purpose they have identified. They may find it useful to review evidence in the accompanying teacher's guide or the technical guide.

It can be established:
• by combining evidence from different sources and methods to support the interpretation and application of test results. This involves thorough item development, expert reviews, statistical analyses, and empirical research to ensure that the test accurately measures the intended construct or content domain. Establishing validity is an ongoing process that requires continuous monitoring and improvement to maintain the quality and appropriateness of the assessment.

10. What is Reliability? How can it be established?


Reliability
• is the consistency, stability, and dependability of test scores or assessment findings. It is a crucial part of assessment quality, ensuring that test scores are consistent and trustworthy. Reliability is determined by how consistently the test produces findings over time, across different test administrations, or among different raters.
• refers to the steadiness, constancy, and trustworthiness of test scores or assessment results. It plays a vital role in the quality of assessments, guaranteeing that scores are stable and dependable. Reliability is assessed by examining how consistently the test produces results over time, across various test administrations, or when scored by different individuals.

It can be established:
• through careful planning, data collection, and statistical analysis. Multiple data points, such as scores from different test administrations or ratings from different raters, are necessary to assess the consistency of the test scores or assessment results. By employing appropriate reliability measures and conducting reliability analyses, researchers and test developers can determine the extent to which the test or assessment produces consistent and dependable results.
• through the use of suitable reliability metrics and rigorous analyses, with which researchers and developers can gauge how consistently and reliably the test or assessment generates its outcomes.
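
One common reliability measure, Cronbach's alpha (mentioned in the item-analysis steps above), can be computed from an item-score matrix. A minimal Python sketch with a hypothetical matrix of 0/1 item scores:

    import statistics

    # Hypothetical matrix: rows are students, columns are items (1 = correct).
    scores = [
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0],
    ]

    k = len(scores[0])  # number of items
    item_vars = [statistics.pvariance([row[j] for row in scores]) for j in range(k)]
    total_var = statistics.pvariance([sum(row) for row in scores])

    # Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
    print(f"Cronbach's alpha: {alpha:.2f}")

Values closer to 1 indicate more internally consistent scores; in practice, much larger samples than this toy matrix are needed.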
II. Complete this Table of Specifications.

Contents | Budget of Work/Time Allotment | %   | No. of Items | Cog. | Aff. | Psycho. | Total | Item Placement | Remarks
I        | 7                             | 28  | 14           | 5    | 4    | 5       | 14    | 1-14           | EASY
II       | 6                             | 24  | 12           | 4    | 4    | 4       | 12    | 15-26          | EASY
III      | 4                             | 16  | 8            | 3    | 2    | 3       | 8     | 27-34          | AVERAGE
IV       | 5                             | 20  | 10           | 4    | 4    | 2       | 10    | 35-44          | AVERAGE
V        | 3                             | 12  | 6            | 2    | 2    | 2       | 6     | 45-50          | DIFFICULT
TOTAL    | 25                            | 100 | 50           | 18   | 16   | 16      | 50    |                |

(Cog., Aff., and Psycho. give the number of items in the cognitive, affective, and psychomotor domains.)
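
The table's arithmetic can be checked with a short Python sketch: each content area's percentage is its time allotment over the 25-hour total, and its number of items is that share of the 50-item test:

    # (content area, time allotment) pairs from the table above.
    areas = {"I": 7, "II": 6, "III": 4, "IV": 5, "V": 3}
    total_time = sum(areas.values())  # 25
    test_length = 50

    for name, time in areas.items():
        pct = time / total_time * 100            # e.g. 7/25 -> 28%
        items = round(pct / 100 * test_length)   # e.g. 28% of 50 -> 14 items
        print(f"Content {name}: {pct:.0f}% -> {items} items")

This reproduces the % and No. of Items columns (28/24/16/20/12 percent and 14/12/8/10/6 items).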

III. Find the difficulty index and the discrimination index of these test items.

No. of pupils tested: 60; 27% of 60 = 0.27 x 60 = 16.2, rounded to 16 pupils per group.

Item Number | Upper 27% | Lower 27% | Difficulty Index | Discrimination Index | Remarks (Rejected / Retained / Revised)
1           | 13        | 6         | 0.59             | 0.44                 | Retained
2           | 16        | 10        | 0.81             | 0.38                 | Revised
3           | 15        | 7         | 0.69             | 0.50                 | Retained
4           | 14        | 5         | 0.59             | 0.56                 | Retained
5           | 10        | 10        | 0.62             | 0.00                 | Rejected
6           | 12        | 6         | 0.56             | 0.38                 | Retained
7           | 9         | 12        | 0.65             | -0.19                | Rejected
8           | 11        | 5         | 0.50             | 0.38                 | Retained
9           | 6         | 3         | 0.28             | 0.19                 | Revised
10          | 8         | 4         | 0.38             | 0.25                 | Revised
11          | 5         | 2         | 0.21             | 0.19                 | Revised

Difficulty Index
0.00-0.20 – Very Difficult
0.21-0.80 – Average
0.81-1.00 – Very Easy

Index of Discrimination
Below 0.20 – poor item
0.20-0.29 – marginal item
0.30-0.39 – good item
0.40 and up – very good item
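
These values follow from the formulas used in the U-L method above: with 16 pupils per group, the difficulty index is (upper + lower) / 32 and the discrimination index is (upper - lower) / 16. A short Python check that reproduces the table (to rounding):

    # (item, upper 27% correct, lower 27% correct) from the table above.
    items = [(1, 13, 6), (2, 16, 10), (3, 15, 7), (4, 14, 5), (5, 10, 10),
             (6, 12, 6), (7, 9, 12), (8, 11, 5), (9, 6, 3), (10, 8, 4), (11, 5, 2)]
    group = 16  # 27% of 60 pupils, rounded

    for num, upper, lower in items:
        difficulty = (upper + lower) / (2 * group)  # e.g. item 1: 19/32 = 0.59
        discrimination = (upper - lower) / group    # e.g. item 1: 7/16 = 0.44
        print(f"Item {num}: DI = {difficulty:.2f}, D = {discrimination:.2f}")

Item 5 (discrimination 0) and item 7 (negative discrimination) fall below the "poor" cut-off above, which is why both are rejected.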

Submitted by:

GROUP 8
No. Name Permit No. Position
1 Aguinillo, Melody S. 0498 Member
2 Aragdon, Cristine Joy B. 0953 Member
3 Aragdon, Mary Grace B. Member
4 Belmonte, Darwin R. 1218 Member
5 Belmonte, Minenie P. 1219 Member
6 Buenaobra, Leonywin F. 1626 Member
7 Bueza, Marjon C. Member
8 Camacho, Christina Member
9 Castor, Analyn I. 1220 Member
10 Concepcion, Marisa B. Member
11 Cortez, Nelley Jane D. 1137 Member
12 Dollero, April B. 1419 Group Leader
13 Naveza, Kareen P. 0778 Member
14 Nuñez, Elaiza I. Secretary
15 Ortega, Raquel B. 1876 Member
16 Parol, Porferia M. 1627 Member
17 Porton, Kelvin Leo E. 1644 Member
18 Villano, Annie Claire P. Member
