PROGRAM DEVELOPMENT AND EVALUATION

1. Why conduct program evaluation?


Evaluation is a process that critically examines a program. It involves collecting and analyzing
information about a program's activities, characteristics, and outcomes. Its purpose is to make
judgments about a program, to improve its effectiveness, and/or to inform programming
decisions. Evaluation data can be used to improve program services.

2. Differentiate summative evaluation from formative evaluation.


Formative evaluation – Focuses on ongoing development.
Formative evaluation occurs during the implementation of a program and focuses on assessing
discrepancies between the implementation plan and its execution. It is also used before program
design or implementation: it generates data on the need for the program and develops the
baseline for subsequent monitoring. It also identifies areas for improvement and can give
insights into what the program's priorities should be. This helps determine areas of concern and
focus, and increases awareness of the program among the target population prior to launch.
Typical formative questions include:
- Are placement tests accurately placing students?
- Is the methodology used by teachers effective?
- Is the pace of the material adequate?
Summative evaluation – Determines the effectiveness of a program.
Summative evaluation is conducted after the program's completion or at the end of a program
cycle. It generates data on how well the project delivered benefits to the target population. It is
used to justify the project, show what it has achieved, and lobby for project continuation and
expansion. Typical summative questions include:
- Did the program achieve its goals?
- What did students learn?
- Were placement tests adequate?
3. List down four (4) types of evaluation.
Summative Evaluation, Formative Evaluation, Planning Evaluation, Process Evaluation, Outcome
Evaluation, Impact Evaluation, Economic Evaluation
4. Distinguish research from evaluation?
Research is about being empirical. It involves factual description without judgments about
quality.
Evaluation is about drawing evaluative conclusions about quality, merit, or worth.
The purpose of evaluation is essentially to improve the existing program for the target
population, while research is intended to prove a theory or hypothesis.
5. List down five (5) attributes of program evaluation and provide a one (1) sentence description
of each attribute.
Ethically Conducted – Evaluation should be conducted ethically, and the evaluators must
recognize the participants' entitlement to privacy; data are kept confidential.
Leads to Continuous Learning and Continuous Improvement – An evaluation must be such that all
the stakeholders are able to interpret the evaluation criteria and its findings.
Uses Participatory Methods – It ensures that the evaluation focuses on locally relevant questions
that meet the needs of project planners and beneficiaries.
Affordable/Appropriate in Terms of Budget – It should be efficient in the use of resources and
be intelligently designed and executed.
Timely Carried Out – It should be time-bound.
Never Used for Fixing Blame or Finding Fault – It should be conducted ethically, with no
intention of harming someone or supporting someone unethically.
6. Explain Kirkpatrick's Four Levels of Outcomes.
Level 1: Reaction
You want people to feel that training is valuable. Measuring how engaged they were, how
actively they contributed, and how they reacted to the training helps you to understand how
well they received it. It also enables you to make improvements to future programs, by
identifying important topics that might have been missing.
Level 2: Learning
Level 2 focuses on measuring what your trainees have and haven't learned. In the New World
version of the tool, Level 2 also measures what they think they'll be able to do differently as a
result, how confident they are that they can do it, and how motivated they are to make changes.
This demonstrates how training has developed their skills, attitudes and knowledge, as well as
their confidence and commitment.
Level 3: Behavior
This level helps you to understand how well people apply their training. It can also reveal where
people might need help. But behavior can only change when conditions are favorable.
Imagine that you're assessing your team members after a training session. You can see little
change, and you conclude that they learned nothing, and that the training was ineffective.
It's possible, however, that they actually learned a lot, but that the organizational or team
culture obstructs behavioral change. Perhaps existing processes mean that there's little scope to
apply new thinking, for example.
As a result, your people don't feel confident in applying new knowledge, or see few
opportunities to do so. Or, they may not have had enough time to put it into practice.
Be sure to develop processes that encourage, reinforce and reward positive changes in behavior.
The New World Kirkpatrick Model calls these processes "required drivers." If a team member
uses a new skill effectively, highlight this and praise him or her for it.
Level 4: Results
At this level, you analyze the final results of your training. This includes outcomes that you or
your organization have decided are good for business and good for your team members, and
which demonstrate a good return on investment (ROI). (Some adapted versions of the model
actually have a Level 5, dedicated to working out ROI.)
Level 4 will likely be the most costly and time-consuming. Your biggest challenge will be to
identify which outcomes, benefits, or final results are most closely linked to the training, and to
come up with an effective way to measure these outcomes in the long term.
7. Identify one (1) evaluation model and discuss thoroughly the nature of the model and the
steps on how it will be employed in evaluating an educational program. (As a springboard
for your discussion, specify a program.)
Donald Kirkpatrick, former Professor Emeritus at the University of Wisconsin, first published his
model in 1959. He updated it in 1975, and again in 1993, when he published his best-known
work, "Evaluating Training Programs."
Each successive level of the model represents a more precise measure of the effectiveness of a
training program. It was developed further by Donald and his son, James; and then by James and
his wife, Wendy Kayser Kirkpatrick.
In 2016, James and Wendy revised and clarified the original theory, and introduced the "New
World Kirkpatrick Model" in their book, "Four Levels of Training Evaluation." One of the main
additions is an emphasis on the importance of making training relevant to people's everyday
jobs.
The four levels are Reaction, Learning, Behavior, and Results; each level, and how to apply it, is
examined in greater detail under question 6 above.
8. Illustrate, using a Logic Model, how an evaluation plan for a K-12 program would be carried out.

9. Describe the step-by-step process of validating an evaluation instrument.


Generally speaking, the first step in validating a survey is to establish face validity. There are two
important steps in this process. The first is to have experts, or people who understand your topic,
read through your questionnaire. They should evaluate whether the questions effectively
capture the topic under investigation.

The second step is to pilot test the survey on a subset of your intended population.
Recommendations on sample size for pilot testing vary; some academics staunchly support
rules of thumb such as 20 participants per question.

After collecting pilot data, enter the responses into a spreadsheet and clean the data. Here is an
important tip: have one person read the values while another enters the data, since having one
person both read and enter the data is highly prone to error. After entering the data, you will
want to reverse-code negatively phrased questions. When used sparingly, negatively phrased
questions can be very useful for checking whether participants filled out your survey in a
reckless fashion.
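
As a rough illustration, reverse coding can be done in one step with pandas; this sketch assumes
a 5-point Likert scale, and the data and item names (q3 and q7 as the negatively phrased items)
are hypothetical:

```python
# A minimal sketch of reverse-coding negatively phrased Likert items.
import pandas as pd

SCALE_MAX = 5  # top of the assumed 5-point Likert scale

# Hypothetical pilot responses; in practice, load your cleaned spreadsheet.
df = pd.DataFrame({
    "q1": [4, 5, 3],
    "q3": [2, 1, 4],  # negatively phrased
    "q7": [1, 2, 5],  # negatively phrased
})

negative_items = ["q3", "q7"]
# On a 1..5 scale, a response r becomes (5 + 1) - r, so 1 <-> 5 and 2 <-> 4.
df[negative_items] = (SCALE_MAX + 1) - df[negative_items]
print(df)
```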

Check the internal consistency of questions loading onto the same factors. This step basically
checks the correlation between questions loading onto the same factor. It is a measure of
reliability in that it checks whether the responses are consistent.
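
Internal consistency is commonly summarized with Cronbach's alpha. The sketch below
computes it directly from its definition; the pilot responses and item names are hypothetical:

```python
# A minimal sketch of Cronbach's alpha for items loading on one factor.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot responses for three items on the same factor.
factor_items = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2],
    "q2": [4, 4, 3, 5, 2],
    "q3": [5, 5, 2, 4, 1],
})
# A value of roughly 0.7 or above is often treated as acceptable.
print(f"alpha = {cronbach_alpha(factor_items):.2f}")
```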

The final step is revising the survey based on information gleaned from the principal component
analysis (PCA) and the internal-consistency analysis (CA). Consider that even though a question
does not adequately load onto a factor, you might retain it because it is important; you can
always analyze it separately. If the question is not important, you can remove it from the survey.
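
To make the PCA step concrete, here is a minimal sketch of inspecting component loadings; the
data, the item names, and the choice of two components are all hypothetical, and the items are
standardized first so the loadings can be read as item-component correlations:

```python
# A minimal sketch of inspecting PCA loadings on hypothetical pilot data.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

responses = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2, 5],
    "q2": [4, 4, 3, 5, 2, 4],
    "q3": [1, 2, 5, 2, 4, 1],
    "q4": [2, 1, 4, 1, 5, 2],
})

X = StandardScaler().fit_transform(responses)  # standardize each item
pca = PCA(n_components=2).fit(X)

# Loadings: component weights scaled by the standard deviation each
# component explains, readable as item-component correlations.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(pd.DataFrame(loadings, index=responses.columns,
                   columns=["component1", "component2"]))
```

Items that load weakly on every component are the candidates for revision, separate analysis, or
removal described above.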
10. Give at least three (3) different kinds of evaluation instruments and name the appropriate
validation type, as to whether it will need content validation, criterion-related validation, or
construct-related validation.
Questionnaires
Survey
Interviews
Observation
Testing

11. With reference to a procedure called factor analysis, explain the function of each of the
following:
Exploratory or confirmatory factor analysis – a complex statistical tool used to discover the
structure underlying a set of item responses. It plays a crucial role in scale development and
revision, and in theory generation and development.
Iterations – the repeated estimation cycles in which communality estimates are recalculated
until the factor solution converges.
Eigenvalues – these are calculated and used in deciding how many factors to extract in the
overall factor analysis.
Cut-off value of eigenvalues – the threshold below which factors are discarded; a common
choice is an eigenvalue of 1 (the Kaiser criterion).
Scree plot – a rough plot of the eigenvalues in descending order. It enables you to quickly note
the relative size of each eigenvalue, and many authors recommend it as a method of
determining how many factors to retain (see the sketch below).
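
As a concrete sketch of the last three items, the code below (using randomly generated stand-in
responses) computes the eigenvalues of the item correlation matrix, applies the common
eigenvalue-greater-than-one cutoff, and draws a scree plot:

```python
# A minimal sketch: eigenvalues, the eigenvalue > 1 cutoff, and a scree plot.
# The responses here are random stand-ins, not real survey data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(100, 8)).astype(float)  # 100 respondents, 8 items

corr = np.corrcoef(responses, rowvar=False)   # 8 x 8 item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, largest first

# Kaiser criterion: retain factors whose eigenvalue exceeds 1.
n_retain = int((eigenvalues > 1).sum())
print(f"Factors retained under the eigenvalue > 1 cutoff: {n_retain}")

# Scree plot: eigenvalues in descending order; look for the "elbow".
plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.axhline(1.0, linestyle="--")  # the cut-off value of eigenvalues
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()
```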
