
MZUZU UNIVERSITY

PREPARED BY CHANCE KITTAH


EDUF 2302

EDUCATIONAL TESTING, MEASUREMENT AND EVALUATION

TOPIC 1: BASIC CONCEPTS

1. Test: -Is an instrument or systematic procedure for measuring a sample of behavior by
posing a set of uniform questions.
OR
-Is a systematic method of gathering data for the purpose of making intra- or inter-
individual comparisons.
OR
-Is a means of measuring the knowledge, skills, feelings, intelligence or aptitude
of an individual.
2. Testing: - Is a technique for obtaining information. This can be done through
administering a test, project, performance assessment etc.

CLASSIFICATION OF TESTS
-Tests can be classified in the following ways:
1. Content
-According to the content, tests are classified into three types, namely:
(i) Achievement or cognitive test
-This is a test that has a well-defined content domain where questions come from.
Examples include all tests administered at the university and by Maneb.
(ii) Aptitude
-This is a test that aims at measuring people’s potential to do certain things.
(iii) Affective
- This is a test that aims at measuring people’s likes and dislikes.
NB: Aptitude and affective tests are similar in that they do not have a well-defined
content domain.
2. Who made it?
- According to who made it (test), tests are classified into two types, namely:
(i) Teacher made/ classroom test
- This is a test which is developed, administered, scored and interpreted by the teacher who
taught that particular subject.
(ii) Standardized test
- Is a test that has fixed process for developing, administering, scoring and
interpreting the scores.
– Examples include: MANEB, PAEC and SAT examinations.
3. How marking is done.
-Here we have the following:
(i) Objective test
- Is a test that, when scored by the same individual on different occasions, yields the same
marks. E.g. multiple choice questions, true or false questions, short answer questions and
matching tests.

(ii) Subjective test


- This is a test that, when scored by the same individual on different occasions, yields
different marks.

4. Score interpretation
- There are two ways of interpreting scores. These are:
a. Norm-referenced interpretation
b. Criterion-referenced interpretation
Note: - When one uses norm-referenced interpretation as the method to interpret a
test, the test becomes a "norm-referenced test".
- When one uses the criterion-referenced method to interpret a test, the test becomes a
"criterion-referenced test".
– This means that under score interpretation, there are two types, namely:
(i) Norm-referenced test
- This is a test in which the scores of an individual are compared with the
performance of others.
(ii) Criterion-referenced test
- In this test, the scores of an individual are not compared with those of others;
rather, the scores are compared to set standards, for example: 'the child is able to read
vowels'.

Functions of test
-Tests are administered for a number of purposes, these include the following:
i) Tests can be used for administrative purposes.
E.g. - Promotion
- certification
- Selection
-Placement; putting learners into their respective categories
ii) Instructional/classroom functions of a test.
E.g. –A test encourages the clarification of meaningful course objectives.
-A test provides feedback to both the teacher and the learner.
-A test motivates students to learn.
-A test facilitates learning.
-A test increases re-learning and overlearning among the learners.
iii) Tests can also be used for guidance and counseling functions
iv) Tests can be used to conduct research.

MEASUREMENT

Definitions
(i) Is a process of quantifying the degree or extent to which something or an object possesses
a given trait/variable/characteristic.
(ii) Is a systematic process of assigning numbers or numerals to observations according
to a defined rule or procedure, e.g. allocating marks to an essay.
What do we measure?
- We measure different things such as height, knowledge, ability, weight, voltage, length,
temperature e.t.c.

Scales of Measurement
Basically, there are four scales of measurement, these are:
(i) Nominal scale
(ii) Ordinal scale
(iii) Interval scale
(iv) Ratio scale

(i) Nominal scale


-It is the scale that names objects
- It classifies objects into different categories
- There is no logical order in the classified objects.
- The classified objects are mutually exclusive, meaning that an object cannot belong
to more than one category.
-Examples of objects that can be classified include: gender, colour, religious
affiliation e.t.c.
(ii) Ordinal scale
- It has all the characteristics in (i); in addition, there is order in the
classified objects.
–Objects can belong only to one position in that order, e.g. A, B, C, D…, implying that A is
better than B, B is better than C, and so on.
(iii) Interval scale
-It has all the characteristics in (ii); the difference is that the scale has equal
units, e.g. temperature (the difference between 28°C and 25°C is equal to the difference
between 18°C and 15°C). It also adds that 'zero' on this scale still means 'something'
(0°C is not an absence of temperature).
- It is also referred to as an "equal unit scale"
(iv) Ratio scale
- It has all the characteristics of the first three; it only adds that on this scale
'zero' means 'nothing' (a true zero), e.g. a 0 kg bag of maize contains nothing. Mass is
measured using a ratio scale.
Using these scales, different variables can be measured.
Variables
These are things that change or vary.
Types of variables
i) Quantitative variables: Continuous and discrete
ii) Qualitative

EVALUATION
Definitions
(i) Is a systematic process of collecting, analyzing, and interpreting data in order to make
decisions.
(ii) Is a systematic procedure of determining whether and to what extent objectives are being and
have been achieved
(iii) Is a process of making a value judgment about the worth or merit of something that has
been evaluated.
iv) Is a science of providing information for decision making. It is to make a valid judgment after
measuring, assessing and testing.

Steps followed when carrying out an evaluation


Step 1: Determine the data you want to collect, i.e. qualitative or quantitative data.
Step 2: Determine who will give you the data.
Step 3: Sample the people who will give you the data.
Step 4: Think of how to get the data.
Step 5: Develop the instruments to be used to collect the data. The instruments for
collecting data must be reliable and valid. Test the instruments before using them.
Step 6: Data collection.
Step 7: Analysing the collected data.
Step 8: Interpreting the data.
Step 9: Making decisions.

Types of evaluation
According to people who conduct evaluation, there are:
(i) Internal: done by people who are involved in the project.
(ii) External: done by the people who are not involved in the project.

Advantages of internal evaluation
 Areas of emphasis cannot be left out.
 It can tell you about your mistakes.

Disadvantages of internal evaluation
 It can be biased.
 You can leave out the weak parts of your work.

Advantages of external evaluation
 It exposes the weaknesses of someone since it is not biased.

Disadvantages of external evaluation
 Evaluators can be corrupted to avoid revealing bad things in the project.

What do we evaluate?
(i) Student performance (ii) Curriculum (iii) Textbooks (iv) Teaching strategies (v) The school
(vi) Special projects and programs

TYPES OF EVALUATION
-There are five types of evaluation, namely:
(i) Preliminary evaluation/Needs assessment/Baseline survey
-It is done at the beginning of the project. In a classroom situation it is known as a diagnostic
evaluation; where you find out what the learners already know and also their needs. You also
find out the strengths and weaknesses of the learners.
-Baseline evaluation is carried out before something happens by looking or finding out the needs
of the people (society).
(ii) Mid-term evaluation: Here you compare what is being implemented with the stated
project's goals or objectives. In other words, you are monitoring performance. It is also known
as formative evaluation.
(iii) Formative evaluation: This is an on-going evaluation carried out in order to find out if
learners are following. It is also known as continuous assessment.
(iv) Diagnostic evaluation: This is a type of evaluation that acts as a follow-up to formative
evaluation.
(v) Summative evaluation: This is the evaluation that occurs at the end of a lesson, program or
an activity.

Assignment: With reference to the types of evaluation, discuss the functions of evaluation.
Pages: 5
Line spacing: Double
Font Type: Times New Roman
Font size: 12
Due Date: 14th December, 2018.

ETHICAL ISSUES IN ASSESSMENT/EDUCATION


What are ethics?
-Ethics is the systematic study of why certain things are good or bad. Similarly, in assessment
there are certain behaviours which are right (good) or wrong (bad).
-It is moral correctness that guides actions of professionals and their interrelationship with clients
and other professionals.
-It is like the code of conduct.

Examples of good ethics (good morals) in assessment


-Exposing students to standard conditions when writing test.
-Marking the test objectively.
What is malpractice?
-Is when someone does what is not supposed to be done, or it can be defined as a breach of
professional conduct.
–Payne also defines malpractice as 'unethical and inappropriate practice that occurs within the
assessment context and affects the reliability and validity of assessment'.

Examples of malpractice in education


1. Cheating during examinations
2. Adding more time than required when the students are writing the exams
3. Adding more marks to the student who does not deserve it.
4. The marking being influenced by the teacher’s prior knowledge of the student (this can
be due to hatred between the teacher and learner).
5. Raising the scores or performance levels on a specific assessment instrument without
simultaneously increasing the student's achievement level, e.g. raising a score of 43%
without any corresponding gain in learning.
6. Any practice involving the reproduction of actual assessment materials through any
medium for use in preparing the students for an assessment or exam.
7. Any practice, any preparation activity that includes questions, tasks, graphs, charts,
passages, or other materials included in the assessment and/or materials that are
paraphrases or highly similar in content to those in actual use.
8. Preparation for the assessment that focuses particularly on the assessment instrument or a
parallel form of the instrument, including its format, rather than on the objectives being
assessed (in this malpractice the teacher coaches learners on how to answer the questions).
9. Any modification of procedures for administering or scoring the assessment that results
in non-standard and/or delimiting conditions for one or more students (in this
malpractice teachers modify the procedures to suit a few learners instead of being fair to
all).
10. Any practice that allows people with insufficient and inappropriate knowledge, skills to
administer or score the test on behalf of the teacher.
11. Any administration or scoring practice that produces results contaminated by factors not
relevant to the purpose of the assessment (giving marks based on handwriting or on
pre-conceived ideas, i.e. subjectivity).
12. Any practice of excluding one or more students from an assessment because they are seen as
likely failures who would lower the overall performance, i.e. increase the number of failures.
13. Any practice such as providing students either immediately preceding or during
administration of the assessment with definitions of words.
14. Any practice such as gestures, facial expressions, body language, comments or any other
action that guides the responses during examinations.
15. Any practice such as erasing, darkening, re-writing or in any other way correcting or
altering students' responses to assessment tasks either during or following the administration
of the assessment.
Question 1: Should examination information be kept confidential before, during and after
the administration of the assessment?
Answer: Yes
Question 2: Do you think examinees have the right to access some information relating to
upcoming examination?
Answer: No
Examples of abuses that follow when examination results are kept confidential
- Professionals vary in qualifications, and they have sometimes misinterpreted and abused
the results.
– Decisions about educational placement made solely on such results have been found to be
biased or misinterpreted.
-Nitko also says that errors in scoring occurred which could not be detected because the test
results were never scrutinized by the examinees.
-Although some tests were publicly declared to measure life skills and abilities,
examinees were unable to check their content to decide whether their preparation was
adequate or whether they should seek further training.
-It has also been discovered that test results have been used unprofessionally.

Professional integrity and testing


-Integrity means the quality of being honest and upright in character.
-Honesty and uprightness are necessary because they will ensure that professionals (teachers)
will seek to uphold the highest good of the examinees.

How to achieve professional integrity as teachers


-By holding paramount the safety, health and welfare of all students being assessed.
-We must be knowledgeable about and behave in compliance with the laws in the conduct of
professional activities.
-We must maintain and improve our professional competence in education assessment.
-Provide assessment services in areas of our competence and experience.
-Strive to improve the assessment literacy of the public by promoting sound assessment results.
-Perform all professional responsibilities with honesty, integrity and due care and fairness.
Cheating
Why do students cheat?
-Usually students cheat because of the following reasons:
i) Lack of adequate preparation. Most students involved in cheating do not study.
ii) Failure to attend classes fully (absenteeism).
iii) Lack of self-confidence (inferiority complex).
iv) Poor understanding of the content.
v) Fear of failure.
vi) Desire for a better grade (they don't want to go home with a bare pass).
vii) Pressure from others to succeed, e.g. parents, friends and guardians.
viii) Low self-esteem.
ix) High competition for things and places.
x) An unfair test - the nature of the test.
xi) Sometimes students feel obliged to help others to succeed, so they do.
xii) Poor study skills - being too playful.
xiii) They think that they can cheat and not get caught - students think that they are too
clever for anybody to catch them.
xiv) To maintain good relationships with fellow students.

Forms of cheating
-The forms of cheating include the following:
i) The use of crib sheets (Likasa).
ii) Using the giraffe style (peeping).
iii) Tattooing.
iv) Exchanging information.
v) Writing the test on behalf of another.
vi) Using a cellphone.
vii) Getting hold of the exam information before the paper is written.
viii) Asking deliberate questions of invigilators.

How to curb cheating?


–Physical checking by the invigilators.
–Improving the seating plan.
–Use of IDs. This would help to prevent those who would write on behalf of others.
–Adequate preparation for the exam.
–Knowing the skills for taking the test.

Principles of testing/assessment
Assessment Principles

i) Assessment should be valid. Validity ensures that assessment tasks and associated
criteria effectively measure student attainment of the intended learning outcomes at the
appropriate level.
ii) Staff development policy and strategy should include assessment:-All those
involved in assessing students must be competent to undertake their roles and
responsibilities
iii) Assessment should be reliable and consistent: -There is a need for assessment to be
reliable, and this requires clear and consistent processes for the setting, marking,
grading and moderation of assignments.
iv) Assessment should be inclusive and equitable: -As far as is possible without
compromising academic standards, inclusive and equitable assessment should ensure
that tasks and procedures do not disadvantage any group or individual.
v) Information about assessment should be explicit, accessible and transparent: -
Clear, accurate, consistent and timely information on assessment tasks and procedures
should be made available to students, staff and external examiners.
vi) The amount of assessed work should be manageable: -The scheduling of the
assignments and the amount of assessed work required should provide a reliable and
valid profile of achievement without overloading staff and students.
vii) Formative and summative assessment should be included in each program: -
Formative and summative assessment should be incorporated into programs to ensure
that the purposes of assessment are adequately addressed. Many programs may also
wish to include diagnostic assessment.
viii) Timely feedback that promotes learning and facilitates improvement should
be an integral part of the assessment process: -Students are entitled to feedback on
submitted formative assessment tasks and on summative tasks, where appropriate. The
nature, extent and timing of feedback for each assessment task should be made clear
to students in advance.

THE ROLE OF INSTRUCTIONAL OBJECTIVE IN ASSESSMENT

OBJECTIVES: These are the intended learning outcomes. They are things learners are expected
to achieve at the end of an instruction or teaching.

Characteristics of objectives

1. Specific
2. Measurable
3. Attainable
4. Realistic
5. Time bound

In short the above characteristics can be abbreviated as SMART

Classification
Objectives are classified into three major types. These are:
i) Cognitive
ii) Affective
iii) Psychomotor

Levels of Cognitive Domain


Knowledge
-State
-Mention
-List
Comprehension
-Draw
-Illustrate
-Convert
Application
-Use
-Solve
-Change
Analysis
-Outline
-Distinguish
-Identify
Synthesis
-Classify
-Summarize
-Create
Evaluation
-Compare
-Justify
-Assess

Levels of Affective Domain


Receiving
-Listening
-Attend
-Select
Responding
–Answer
-Comply
-Applaud
Valuing
-Debate
-Argue
-Participate
Organization
-Discuss
-Organize
-Define
Characterization
-Value
-Change
-Resolve
-Question

Levels of Psychomotor Domain


Perception
-Choose
-Select
-Identify
Set
-Display
-Move
-React
Guided Response
-Measure
-Construct
-Assemble
Mechanism
-Sketch
-Work
-Measure
Complex Overt Response
Adaptation
-Measure
-Construct
-Assemble

THE ROLES OF OBJECTIVES IN THE TEACHING PROCESS


i) They help the teacher to come up with good questions during test construction.
ii) They help the teacher to use appropriate teaching and learning resources.
iii) They prevent the teacher from digressing from the topic.

ROLES OF OBJECTIVES IN ASSESSMENT


- They act as a basis for assessment (asking questions).

INSTRUCTIONAL OBJECTIVES
i) They provide direction for the instructional process.
ii) They convey instructional intention to other people like students, parents and
education personnel.
iii) They provide a basis for assessing student learning because they describe the
performance to be measured.

***Read about ethical issues in assessment (the do’s and don’ts)

THE PROCESS OF TEST CONSTRUCTION


Before you set a test, the objectives should be highly considered. The process involves the
following steps:

Step 1: Determine the purpose of the test.


–Consider the use of the test you intend to set (some of the purpose of test include; placement,
certification of learners, research, determining the achievement of learners, promoting learners to
other classes, selecting learners to pursue certain academic programs).
–This will therefore help the examiner to formulate questions that suit the level of difficulty.
Step 2: Identify and classify the constructs (behaviors) you would like to measure.
-A construct is a hypothetical trait or attribute that is to be measured.
-Construct validity refers to the degree or extent to which a test measures the construct it
claims to measure, e.g. an aptitude test measuring intelligence.
-A construct is a trait that the learners are believed to possess and that the examiner sets
out to measure.
-This step will help the examiner to include relevant questions.
-Some of the behaviours that can be measured include: reasoning skills, numerical skills,
intelligence, trustworthiness, honesty, communication skills, etc.
Step 3: The development of a test specification table or blue print (Planning)
- This is a two-way chart that shows, on one hand, the content that has been covered and, on
the other, the success criteria (levels of the cognitive domain) covered. E.g.

Content      K    C    A    A    S    E    Total
Topic 1      2    2    1    ...  ...  ...  ...
...
Total                                      20

(K = Knowledge, C = Comprehension, A = Application, A = Analysis, S = Synthesis,
E = Evaluation)

Imagine you have taught five topics in a term, draw the table of specification. How many
questions are you going to ask on each topic?
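As a rough illustration of this planning step, the sketch below allocates a total number of questions across topics in proportion to the emphasis each received. The topic names, weights and total are hypothetical assumptions for illustration, not from these notes:

```python
# A minimal sketch of allocating questions in a table of specification.
# Topic weights (share of teaching time) and the total are hypothetical.
topics = {"Topic 1": 0.30, "Topic 2": 0.25, "Topic 3": 0.20,
          "Topic 4": 0.15, "Topic 5": 0.10}
total_questions = 20

for topic, weight in topics.items():
    # Allocate questions in proportion to the emphasis given to each topic.
    print(f"{topic}: {round(total_questions * weight)} questions")
```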

Advantages of Table of Specification


The following are the advantages of specification table:
i) It will enable the teacher to determine whether the test was simple or difficult depending on
the number of questions falling on each category of cognitive domain.
ii) It will help the examiner to allocate or estimate the appropriate time for the exams or test.
iii) It will help the examiner to know the amount of resources for taking a particular exam
iv) It will help the examiner to allocate the marks appropriately based on the level of
difficulty.
v) Based on the table, the examiner will be able to make sure that all the topics have been
included in the test.
vi) It will also help the examiner to see whether his/her questions or test is leaning on one part or
level of cognitive domain (higher order or low order).
Step 4: Determining the format or type of questions
- Here an examiner chooses the right type of tests he or she is going to use i.e. Multiple Choice
Questions, True/False Questions, Matching Tests, Short Answer Tests.
Step 5: Actual construction of questions following the specification table
-This is the time when the examiner formulates questions based on the specification table.
Step 6: Revising the questions to make sure that there are no errors.
-The examiner scrutinizes all the questions to iron out any mistake.
Step 7: Assembling the paper
-The examiner arranges the questions in order, guided by the levels of the cognitive domain
(simple questions should come first).
-Multiple choice, true/false and matching questions should each have their own section.
-Each section must have its own instructions.
-Make sure that the questions are clear.
-Make sure that you have included the time allowed.
-Make sure that you have included the date for writing the test.
-Make sure that you have indicated the end of the questions.
Step 8: Moderation of the paper
-Aspects to look at when carrying out moderation of the paper:
a) Checking the clarity of the questions
b) Ensuring that marks have been allocated for each question
c) Checking the total mark for the paper
d) Checking if the time has been indicated
e) Checking the topics covered
f) Looking at the level of difficulty of the questions
Step 9: If it is a standardized test, then after the moderation, conduct pilot testing.
-The test is tried at a certain school to find out if it is a good test.
-If you have developed a questionnaire, also carry out pilot testing.

ITEM ANALYSIS

-Analysis is done on each item in order to improve the effectiveness of the item. If you have
ten questions, you are required to conduct the analysis on all of them.

Kinds of item analysis

-There are three kinds of item analysis, namely:


i) Item difficulty: The symbol for this analysis is P.
ii) Item discrimination: The symbol for this analysis is D.
iii) Item distractor analysis
Item difficulty/Level of difficulty
-This is the degree or extent to which an item is either very difficult or very easy for a
given group of examinees.
-You can determine this by looking at how many students answered the question correctly out of
the total. The proportion correct of the item (the number of candidates who got the question
correct divided by the total) gives us the difficulty level of the question.
-It is also called the P-Value, and it is expressed as a decimal, e.g.:
60 students in the class
50 students passed the item
10 failed
P-Value = 50/60
        = 5/6
        ≈ 0.83 (about 83% answered the item correctly)
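A minimal sketch of this calculation in Python, using the figures above (60 candidates, 50 of whom answered the item correctly):

```python
# Item difficulty (P-value): the proportion of candidates answering correctly.
candidates = 60
correct = 50

p_value = correct / candidates
print(round(p_value, 2))   # 0.83, i.e. about 83% got the item right
```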

How can we determine the difficulty level of an item?


-In the case of item difficulty, if 100 students sat for the exam and only 5 students answered
an item correctly while 95 students failed it, the item was difficult, and the opposite is true.
This can be demonstrated through the following scale:

0.1---------------------------0.2-----------------------------0.8-----------------------------0.9
        difficult item                 recommendable item                  very easy item
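The scale can be read as a simple classification rule. The sketch below assumes the cut-offs shown above (below 0.2 too difficult, above 0.8 too easy, in between recommendable); the function name is ours:

```python
# Classify an item by its P-value using the cut-offs on the scale above.
def classify_item(p: float) -> str:
    if p < 0.2:
        return "difficult item"
    if p > 0.8:
        return "very easy item"
    return "recommendable item"

print(classify_item(5 / 100))   # "difficult item": only 5 of 100 passed
print(classify_item(50 / 60))   # "very easy item": P is about 0.83
```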

Item discrimination
-It involves differentiating between the more knowledgeable students (high achievers) and the
less knowledgeable ones (the less achievers).
-If in the item high achievers are achieving and the less achievers are failing, then the item is
discriminating well.
-But if the less achievers are achieving and the high achievers are failing then the item is
discriminating negatively.
How is item discrimination done?
-This is done through the following steps:
i) Rank the marked scripts from the highest to the lowest i.e. 95 to 12.
ii) Get a proportion of the group of high scoring (25 scripts of high score) and a proportion of the
group of low scoring (25 scripts of low score)
iii) Find P-high and P-low (the item difficulty of the item computed within the 25-high and the
25-low groups). The analysis should be done on the same question for both groups.
iv) The item discrimination is found by using the following formula:

Item Discrimination (D) = P(high)-P(low)


Assume that 5 students out of 25- high got a certain question correctly and 10 students out of the
25-low also did well on the same question, therefore the item discrimination will be as follows:

Item Discrimination (D) = 5/25 - 10/25
                        = -5/25
                        = -1/5
                        = -0.2
Note: It is possible for item discrimination to carry a negative (-) or a positive (+) sign.
-When D has a negative sign, the item is questionable because it is discriminating
negatively; the item should be revised or removed from the test.
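A minimal sketch of the formula D = P(high) - P(low) applied to the worked example above:

```python
# Item discrimination: compare the proportion correct in the high-scoring
# group with the proportion correct in the low-scoring group.
high_correct, low_correct, group_size = 5, 10, 25

d = high_correct / group_size - low_correct / group_size
print(round(d, 2))   # -0.2: negative, so the item discriminates negatively
```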

Distractor analysis
-It is done on Multiple Choice Questions
-It involves finding out if an option was able to play its role of distracting the examinees.
-In Multiple Choice Questions, you can have a statement or question with alternatives:
A
B
C
D
-One of the alternatives is the correct answer, e.g. B. This is referred to as the Key.
-The rest (A, C, D) are referred to as Distractors, Foils or Decoys.
-In a good Multiple Choice Question, all the alternatives must look like answers to the
question (they must be plausible).
-If a good number of students choose A as an answer while others choose B, C or D, then it is a
good Multiple Choice Question.
-If a distractor (option) was not chosen as an answer by any student, it has ceased to be a
distractor.
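A minimal sketch of a distractor analysis in Python. The response list is hypothetical, with B as the key; option D is chosen by nobody, so it is flagged:

```python
# Count how many examinees chose each option; a distractor chosen by
# nobody is not plausible and has ceased to function as a distractor.
from collections import Counter

responses = ["B", "A", "B", "C", "B", "B", "A", "B", "C", "B"]  # hypothetical
counts = Counter(responses)

for option in "ABCD":
    chosen = counts.get(option, 0)
    note = " <- not functioning as a distractor" if chosen == 0 else ""
    print(f"{option}: {chosen}{note}")
```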

ORGANISATION OF DATA OR TEST SCORES


-Organization of data means presenting test scores in an organized manner. For example, you can
have the following disorganized scores:

93, 56, 77, 95, 66, 12, 95, 45, 56, 66, 66, 23, 45, 34, 77, 77, 66, 88, 72, 10
-The above scores can be organized through:

i) Ranking i.e. arranging the scores from the highest to the lowest.
95, 95, 93, 88, 77, 77, 77, 72, 66, 66, 66, 66, 56, 56, 45, 45, 34, 23, 12, 10
ii) Frequency Distribution Table

X TALLY F

95 // 2

93 / 1

88 / 1

77 /// 3

72 / 1

66 //// 4

56 // 2

45 // 2

34 / 1

23 / 1

12 / 1
10 / 1

iii) Cumulative Frequency Distribution Table
-The table is as follows:

X TALLY F CF

95 // 2 20

93 / 1 18

88 / 1 17

77 /// 3 16

72 / 1 13

66 //// 4 12

56 // 2 8

45 // 2 6

34 / 1 4

23 / 1 3

12 / 1 2

10 / 1 1

iv) Relative Frequency Table

This is shown as follows:


X TALLY F CF RF

95 // 2 20 0.1

93 / 1 18 0.05

88 / 1 17 0.05

77 /// 3 16 0.15

72 / 1 13 0.05

66 //// 4 12 0.2

56 // 2 8 0.1

45 // 2 6 0.1

34 / 1 4 0.05

23 / 1 3 0.05

12 / 1 2 0.05

10 / 1 1 0.05

Note: RF=f/n where f is the ‘frequency’ and n is the ‘total number of scores’
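All three tables above can be rebuilt from the raw scores. A minimal sketch in Python:

```python
# Rebuild the F, CF and RF columns from the twenty raw scores above.
from collections import Counter

scores = [93, 56, 77, 95, 66, 12, 95, 45, 56, 66,
          66, 23, 45, 34, 77, 77, 66, 88, 72, 10]
n = len(scores)
freq = Counter(scores)

cf = 0
rows = []
for x in sorted(freq):            # accumulate CF from the lowest score up
    cf += freq[x]
    rows.append((x, freq[x], cf, freq[x] / n))

print("X    F   CF   RF")
for x, f, c, rf in reversed(rows):   # highest score first, as in the tables
    print(f"{x:<4} {f:<3} {c:<4} {rf}")
```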

Summarizing test data


How to summarize test data?
-There are three ways of representing data, these include:
i) reporting measures of central tendency.
ii) Reporting measures of variation.
iii) Reporting measures of relationships.
1) Measures of Central Tendency
-In this method the teacher is required to report the mode, median and mean.

Mode
-This is the score that has the highest frequency. If two non-adjacent scores occur with equal
and highest frequency, the distribution is called bimodal, and if there are many modes it is
called multimodal.
-If two adjacent scores occur with equal and highest frequency, the mode is found by taking the
average of the two scores.
Median
-Is the score that is at the middle of the distribution when the scores have been arranged in
hierarchical order. If two scores share the middle position, the real median is the average of
the two scores. If you have a large number of scores, e.g. 250, you have to use a median
locator. The formula for the median locator is (n+1)/2 (the number of scores plus 1, then
divided by 2).
-The information given above is for ungrouped data.
Terms or concepts that mean the same as median
-If you have scores 1-100, the median will be the 50th percentile, abbreviated as P50.
–If you divide the 100 into quarters, you will have 4 quarters. This implies that we will have
the 1st quartile (P25), the 2nd quartile (P50), the 3rd quartile (P75) and the 4th quartile
(P100).
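A minimal sketch of these measures for the twenty ranked scores used earlier, via Python's statistics module:

```python
# Mode, median and median locator for ungrouped data.
import statistics

scores = [95, 95, 93, 88, 77, 77, 77, 72, 66, 66,
          66, 66, 56, 56, 45, 45, 34, 23, 12, 10]

print(statistics.multimode(scores))   # [66]: the score with highest frequency
print(statistics.median(scores))      # 66.0: middle of the ranked scores
print((len(scores) + 1) / 2)          # 10.5: the median locator (n+1)/2
print(statistics.mean(scores))        # 60.95: the mean
```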
How to find median for grouped data
-First of all, let us look at how we group ungrouped data:

*Determine the range of the scores, i.e. the highest minus the lowest.
*Then divide the range by the number of class intervals you wish to have, i.e. if you want to
have 10 class intervals, divide the range by 10. If the number you get from the division is a
decimal, round it to a whole number. This gives the interval width.
*The interval width should always be an odd number, so if the number you get after division is
even, you are free to go down or up, i.e. if the width you got is 6, you can choose either
5 or 7.
*The lowest class interval should always contain the lowest score.
*The lower limit of each interval should be a multiple of the width.
*From there you can build the class intervals beginning with the lowest score, e.g. 5-9, 10-14,
15-19, etc.
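A minimal sketch of these grouping rules in Python; the score list and the target of roughly 10 intervals are hypothetical assumptions:

```python
# Build class intervals following the rules above: range / number of
# intervals, rounded to an odd width, lower limits at multiples of the
# width, and the lowest interval containing the lowest score.
scores = [12, 59, 27, 41, 35, 18, 22, 48, 33, 55]   # hypothetical scores

width = round((max(scores) - min(scores)) / 10)     # aim for ~10 intervals
if width % 2 == 0:
    width += 1                                      # keep the width odd

low = (min(scores) // width) * width                # multiple of the width
while low <= max(scores):
    print(f"{low}-{low + width - 1}")               # e.g. 10-14, 15-19, ...
    low += width
```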

Example: Use the following data to find P50, P75 and P25:

x       F       cf

50-54 1 27

45-49 4 26

40-44 2 22

35-39 7 20

30-34 3 13

25-29 5 10

20-24 1 5

15-19 2 4

10-14 2 2

P50 = L + ((.50N - CF) / F) × W

Where P50 means the median expressed as a percentile; L is the lower real limit of the class
interval that contains the median; .50N is the median locator, which is the same as 50/100
multiplied by N (i.e. ½N); CF is the cumulative frequency below the class interval that
contains the median; F is the frequency of the class interval that contains the median; and W
is the interval width.

Solution

P50 = 34.5 + ((13.5 - 13) × 5) / 7
    ≈ 34.86
P25 = L + ((.25N - CF) / F) × W
Therefore P25 = 24.5 + ((6.75 - 5) × 5) / 5
             = 26.25
P75 = 39.5 + ((20.25 - 20) × 5) / 2
    ≈ 40.13
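A minimal sketch of this interpolation formula as a Python function (the function name is ours), checked against the three worked answers:

```python
# Percentile for grouped data: P = L + ((p*N - CF) / F) * W.
def grouped_percentile(p, L, N, CF, F, W):
    """p: percentile as a decimal; L: lower real limit of the interval
    containing the percentile; CF: cumulative frequency below that
    interval; F: its frequency; W: the interval width."""
    return L + ((p * N - CF) / F) * W

print(grouped_percentile(0.50, 34.5, 27, 13, 7, 5))   # ~34.86 (P50)
print(grouped_percentile(0.25, 24.5, 27, 5, 5, 5))    # 26.25  (P25)
print(grouped_percentile(0.75, 39.5, 27, 20, 2, 5))   # ~40.13 (P75)
```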

2) Measures of spread or variation


-Take note that on a graph the mode is identified by looking at the peak of the graph.
–When reporting, do not report only measures of central tendency but also measures of
spread, in order to arrive at the right conclusion.
-The measures of spread include the following:
a. Range: Inclusive range, exclusive range, interquartile range and semi-quartile range.
b. Mean deviation
c. Variance
d. Standard deviation

Types of range
-There are four types of range and these are:

a. Exclusive range: -This is the difference between the highest score and the lowest score.
E.g. 80 - 22 = 58
b. Inclusive range: -Is the difference between the upper real limit of the highest score and
the lower real limit of the lowest score in a distribution. E.g. 80.5 - 21.5 = 59
c. Interquartile range: -This is the range found between the quartiles, e.g. P0, P25, P50,
P75, P100. In this case, the interquartile range = P75 - P25
d. Semi-interquartile range: This is half of the interquartile range, i.e. (P75 - P25)/2.
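A minimal sketch of these measures of spread in Python, using the twenty ranked scores from earlier. Note that statistics.quantiles uses its own interpolation, so its quartiles may differ slightly from values obtained with the grouped-data percentile formula:

```python
# Ranges, interquartile ranges, mean deviation, variance and standard deviation.
import statistics

scores = [95, 95, 93, 88, 77, 77, 77, 72, 66, 66,
          66, 66, 56, 56, 45, 45, 34, 23, 12, 10]

exclusive_range = max(scores) - min(scores)                  # 95 - 10 = 85
inclusive_range = (max(scores) + 0.5) - (min(scores) - 0.5)  # 95.5 - 9.5 = 86

q1, q2, q3 = statistics.quantiles(scores, n=4)   # P25, P50, P75
iqr = q3 - q1                                    # interquartile range
semi_iqr = iqr / 2                               # semi-interquartile range

mean = statistics.mean(scores)
mean_deviation = sum(abs(x - mean) for x in scores) / len(scores)
variance = statistics.pvariance(scores)          # population variance
sd = statistics.pstdev(scores)                   # population standard deviation

print(exclusive_range, inclusive_range, iqr, semi_iqr,
      round(mean_deviation, 2), round(variance, 2), round(sd, 2))
```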
