
INSTRUCTIONAL DESIGN

Kirkpatrick's Evaluation Model


Ever wonder about Kirkpatrick’s evaluation model? Unsure of what it is or how to use
it? Let me give you a brief introduction!

Learn About Kirkpatrick's Evaluation Model

Does the term evaluation scare you? Don’t let it! Performing evaluations shouldn’t
carry a negative association; instead, use the results of the evaluation to drive your
organization to success! Let’s take a deeper look at the levels of Kirkpatrick’s
evaluation model.

Evaluation is critical to organizational success and is essential if an organization wants
to continue to adapt and grow (Hale & Astolfi, 2015, p. 6). Kirkpatrick developed
four levels of evaluation to determine the overall effectiveness of trainings/programs,
or lack thereof (Reio, Rocco, Smith & Chang, 2017, p. 36). Evaluation can provide key
data that allows you to make specific improvements, benefiting not only the
organization but the employees as well.

Level 1: Reaction

Level one of Kirkpatrick’s evaluation model, reaction, focuses on the
learners/participants' reactions or feelings to the event.

The reactions of learners/participants are often overlooked and can hold valuable
insight into overall feelings toward trainings/programs. Level one of Kirkpatrick’s
model can aid in the identification of positive or negative feelings. Identifying positive
reactions to trainings/programs can be useful! This information will assist in retaining
learners’/participants’ attendance. Detecting negative reactions or emotions is equally
essential. Don’t be afraid of negative feedback; you have the ability to act on it!
Negative reactions may prevent learners/participants from completing the
trainings/programs. The identification of both positive and negative reactions can aid
in organizational support through the modification of trainings/programs (Reio et al.,
2017, p. 36).

Level 2: Learning

Level two of Kirkpatrick’s evaluation model pertains to content evaluation with regard
to knowledge gained by learners/participants as a result of the training/program (Reio
et al., 2017, p. 36).

This level, according to Kirkpatrick and Kayser (2016), “assess[es] the degree to which
participants acquire the intended knowledge, skills, attitude, confidence, and
commitment based on their participation in the training” (p. 42). Achievement tests or
performance assessments are used to measure learning at this level of evaluation, and
surveys measure self-reported behavior changes (Hale & Astolfi, 2015, p. 21). The
evaluation of learning is vital, as learning can lead to changes in behavior (Kirkpatrick
& Kirkpatrick, 2006, p. 50).

Level 3: Behavior

Level three, according to Jones, Fraser, and Randall (2018), is identified as “the most
important of the four levels because training alone will not generate changes in
practice or outcomes” (p. 498). Level three evaluates behavior, or to what degree
learners/participants are able to transfer content to job-specific context (Jones et al.,
2018, p. 498).

This level is crucial in evaluating learning transfer and the impact that improved
performance will have on the organization. The goal of training is to improve
performance, and implementing Kirkpatrick’s levels of evaluation will provide you with
the quantitative data you need to determine whether that improvement occurred.

Level 4: Results

The highest level of Kirkpatrick’s evaluation model is considered level four: results.
Results indicate the extent to which targeted outcomes have been met (Jones et al.,
2018, p. 499).

At this level of evaluation, organizations are hoping to quantitatively measure change
(Reio et al., 2017, p. 37). Overall, after any type of program or training
implementation, the organization is looking for a benefit. Training should lead to
results: performance improvement and gains in knowledge. Kirkpatrick’s evaluation
model can determine that success and provide valuable data for modifying training to
increase results!

Can’t Get Enough Of Evaluations?

If you can’t get enough of evaluations, you are in the right career! As an Instructional
Designer, evaluations are critical and are completed at multiple stages in the process!

Successful Instructional Design can be determined through proper evaluation and
can be thought of as an iterative process happening at almost every stage of the
design. Formative evaluations are part of Instructional Design, evaluate
effectiveness throughout the process, and are divided into needs assessment and
implementation evaluation (Hale & Astolfi, 2015, p. 6). Feedback from the formative
evaluation guides the design based on the results. The needs assessment, typically
occurring at the beginning of the design process, determines if the training or
program is necessary and identifies the need, goals, objectives, and vision of the
organization (Hale & Astolfi, 2015, p. 6). As Instructional Designers, we must
remember that training isn’t always the solution. Instructional Design isn’t and
shouldn’t be a one-size-fits-all approach. Solutions and strategies are developed
through careful analysis. Implementation evaluation determines if a training/program
is being implemented as intended and can occur at a specific point in time or
frequently throughout the implementation (Hale & Astolfi, 2015, p. 6). Summative
evaluations, usually occurring after implementation, aid in determining if
organizational goals were achieved, thus leading to improvements or changes in
design where needed (Hale & Astolfi, 2015, p. 6).

Program evaluation is relevant to the field of Instructional Design, essentially
determining the success of the training/program. Research through program
evaluation will provide tangible evidence of a successful design and ROI, delivering
valuable information to stakeholders and decision-makers (Hale & Astolfi, 2015, p. 9).

Instructional Designers are responsible for successful designs, application of
knowledge in relevant contexts, and ROI for stakeholders. Program evaluation for
Instructional Design can be thought of as a measure of accountability for designers,
ensuring learners acquire the intended skills and knowledge needed.

As an Instructional Designer myself, I find the possibilities evaluations create exciting!
Think of evaluations not only as a way to identify success but also as a way to identify
areas that can be improved!

The Kirkpatrick Model For Dummies
Kirkpatrick’s Learning Evaluation Model is a framework for evaluating the impact of your
training.

Why Evaluate Training Impact? The Kirkpatrick Model Can Help You

In short, so that you can improve the learner experience and achieve the goals of
your training. Good Instructional Design never really comes to an end; your content
can always be refined and improved. If you simply finish a project and move on to the
next one, you will never learn; if there are problems with the content you’re
producing, they will persist through the content you produce in the future. As
designers and developers, it is our duty to be learning ourselves and to be continually
improving what we create. We owe it to our clients and to the people learning from
our courses.

As we go through these levels, ask yourself where you would place your current
evaluation of your training. And what could you do to move it on to the next level?

The Levels Of Kirkpatrick

Level 1: Reaction

This is on the level of ‘which face best represents how you feel about this’. It’s about
how participants respond to your training: did they enjoy it? Did they think it was
valuable? Did they feel good about the instructor, the venue, or the design? It tells
you nothing about whether your course actually fulfilled its objective and taught
something (the later levels do that), but it does give you an idea of how your learning
was received, and how you could improve the User Experience.

How do I move up to this level?

If you aren’t currently evaluating your training at all, this is the place to start. Why not
implement a post-course questionnaire, asking participants how they found the
course? Gather the responses and see if there are any recurring themes. Do people
struggle with the navigation? Find the voiceover irritating? Do you need to rethink
accessibility? There’s a potential wealth of information here, and a questionnaire is a
simple thing to set up.
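
To make this concrete, here is a minimal sketch (in Python) of how you might tally such a questionnaire. The question names and the 1-to-5 rating scale are illustrative assumptions, not something the Kirkpatrick model prescribes:

```python
from statistics import mean

# Hypothetical post-course responses on a 1-5 scale (5 = very satisfied).
responses = [
    {"navigation": 2, "voiceover": 3, "accessibility": 4, "overall": 3},
    {"navigation": 1, "voiceover": 4, "accessibility": 5, "overall": 4},
    {"navigation": 2, "voiceover": 2, "accessibility": 4, "overall": 3},
]

# Average each question across participants to surface recurring themes.
for question in responses[0]:
    avg = mean(r[question] for r in responses)
    flag = "  <- recurring theme, worth a closer look" if avg < 3.0 else ""
    print(f"{question}: {avg:.1f}{flag}")
```

Even a summary this simple shows where to dig deeper; here, the low navigation score would be the first thing to investigate.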
Level 2: Learning

The point of learning is—unsurprisingly—that people learn! Did the participants
actually learn from your material? How much has their knowledge increased?

How do I move up to this level?

A good way of assessing this is with 2 quizzes—one at the beginning and one at the
end of the course. Ask questions about the same topics, and see if learners are
answering more questions correctly after their training. If they are, it would suggest
that they did learn; if not, then something about your learning material is clearly not
doing its job.

This can be a helpful method of evaluation, as it can give you specific information. If
all your learners are getting questions about a specific topic wrong, it might be time
to look again at how you’re teaching that topic. What about it is unclear? How could
you better present it so your learners take the knowledge on board?
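
As a sketch of what that per-topic check could look like (the topic names and score fractions below are made-up placeholders, and the 10-point gain threshold is an arbitrary choice):

```python
# Hypothetical average scores (fraction of questions correct) per topic.
pre_quiz  = {"fire safety": 0.45, "evacuation": 0.60, "reporting": 0.50}
post_quiz = {"fire safety": 0.90, "evacuation": 0.65, "reporting": 0.85}

for topic, before in pre_quiz.items():
    gain = post_quiz[topic] - before
    note = "  <- revisit how this topic is taught" if gain < 0.10 else ""
    print(f"{topic}: {before:.0%} -> {post_quiz[topic]:.0%} (gain {gain:+.0%}){note}")
```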

Another way of assessing this is, again, with a post-course questionnaire. But in
addition to the basic questions of level 1, you could ask the learners to tell you what
they learned on this course. In some cases, this might even be more insightful than a
quiz. Asking people to articulate something in their own words shows how much they
truly understand about it.

One final suggestion: you could send out a follow-up quiz, say a week after the
training. Quizzing learners straight after they’ve taken the course, when it’s all fresh in
their minds, doesn’t tell you what they’ll remember over the long term. What do they
recall a week later? A month later? And if the key points aren’t being remembered,
how can you improve your learning to ensure that they are? Should you be providing
refresher training or job aids in addition to your training course? Would it help to
build a mobile app that sends daily tips and reminders to your learners after the
course?

Level 3: Behavior

We’ve all been on those compliance courses where we learn about the correct
procedure for doing something, and then go back into the office the next day and
continue doing things exactly how we used to. The issue here is not lack of
knowledge; we know the correct procedure. It’s that the knowledge is not being
applied. Reaching Kirkpatrick’s level 3 means asking, are participants using what they
learned?
How do I move up to this level?

This is usually something you have to assess a little while after the course has been
taken; you need to leave time for the new behaviors to settle in.

The best way of gaining this kind of insight is through 360° feedback.

360° feedback comes from the participant themselves, their colleagues, and their
superiors. Asking the participant and everyone around them if their behavior has
changed after taking the learning will give you a 360° view of the situation! If your
training has had the desired effect, it will be noticeable to everyone involved.

Sometimes, the feedback will say that no changes have occurred. In those instances,
it’s important to ask why people think this is the case. Behavior can only change if the
conditions for it are favorable. Will the boss actually let your participant apply their
new knowledge? Is there a tool or a system that has not been put in place? Does the
learner have any desire or incentive to apply the learning? And what can be done to
remedy these situations?
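
If you collect those ratings in a structured form, even a tiny summary by rater group can show where the stories diverge. A minimal sketch, assuming a hypothetical 1-to-5 "behavior has changed" rating from each source:

```python
from collections import defaultdict
from statistics import mean

# (rater group, rating) pairs gathered from 360° feedback forms.
feedback = [("self", 4), ("peer", 3), ("peer", 2),
            ("supervisor", 2), ("supervisor", 3)]

by_group = defaultdict(list)
for group, rating in feedback:
    by_group[group].append(rating)

for group, ratings in by_group.items():
    print(f"{group}: {mean(ratings):.1f}")

# Self-ratings well above supervisor ratings suggest the participant feels
# they have changed, but the change is not yet visible on the job.
```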

Level 4: Results

Kirkpatrick’s final level of evaluation looks at whether training positively impacted the
organization.

This relies on goals being set before the development of the training. What changes
were managers looking for? How is success defined? Otherwise, you won’t know what
results you’re hoping to see.

How do I move up to this level?

The way you evaluate this will be determined by the results you’re looking to see.
Typically, it will involve analyzing data. If it’s improvement in ROI you’re aiming for,
you need to be assessing financial statements. If it’s a lessening of health and safety
incidents in the office, you need the data on how many incidents there have been.
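
For instance, a before-and-after comparison of a single metric can be as simple as the sketch below; the monthly incident counts are invented for illustration:

```python
# Health and safety incidents per month, before and after the training.
incidents_before = [9, 11, 8, 10, 12, 9]
incidents_after  = [7, 6, 8, 5, 6, 4]

avg_before = sum(incidents_before) / len(incidents_before)
avg_after  = sum(incidents_after) / len(incidents_after)
change = (avg_after - avg_before) / avg_before
print(f"Monthly incidents: {avg_before:.1f} -> {avg_after:.1f} ({change:+.0%})")
```

A drop like this doesn't prove the training caused it, of course; other factors in the same period may have changed too, which is exactly why the earlier levels matter.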

When evaluating the impact of your training, it’s important to know about all 4 of
these levels. For example, if the behavior hasn’t changed, you need to look at the
previous levels to understand why—did people actually learn what they needed to?
And if not, is that because the design was so confusing and unhelpful that they
mentally checked out? Plan your training evaluations to cover all 4 levels; that way,
you’ll have the whole perspective on how effective your training is, and how it can be
improved.
Determining The ROI Of eLearning - Using
Kirkpatrick’s Model Of Training Evaluation
The measurement of the effectiveness of online training is a hot topic right now. In this
article, I outline how you can use Kirkpatrick’s model of training evaluation to measure
training effectiveness, its impact, and the ROI of eLearning.

How To Determine The ROI Of eLearning

The measurement of the ROI of eLearning needs an integrated approach that should
begin during the Training Needs Analysis (TNA) phase and build up successively to
the determination of its impact on the business.

How Do You Begin The Exercise To Measure The Effectiveness Of Online Training?

At EI Design, we use the approach shown here. Essentially, we focus on each stage
from TNA to ROI calculation as the right action at each stage will impact the ROI of
eLearning positively.

1. Training Needs Analysis (TNA)
This is the baseline and will eventually be used to measure the desired gain for the
learners (acquisition of a new skill or fixing of a gap) and whether this gain resulted
in the required impact the business had sought. Capturing the needs of the learners
and having clarity on how you will measure their progress (that is, the expected
gain) is a key component of the exercise.
2. Determining the right training format (online, blended, or ILT)
The next step is identifying the format of training that aligns best with the TNA
(sometimes, training may not be the answer, and you need to identify supporting
measures like coaching or mentoring). The selection of the right format is crucial in
encouraging the learners to pursue it and also in ensuring that they connect with it,
complete it, and apply the learning on the job.
3. Identifying the learning strategy
The selection of the right learning strategy format is vital in engaging the learners.
As we know, only an effective learning strategy can create a sticky learning
experience. However, it is equally important to note that having formal training
alone may not be enough. We also need to provide room for application of
knowledge and practice or nudges to mastery. Hence, the learning strategy must
have a combination of formal training and performance support intervention to
successfully manage your mandate.
4. Determining the gain by the learners
An effective assessment strategy is the main approach to determining this. However,
it is important to ensure that the measurement of this gain also factors in validation
of how the acquired learning is applied, and is not limited to validating knowledge
acquisition.
5. Assessing the gain for business
We can look back at the parameters identified during the TNA stage and assess
the required gain that has occurred and if the business saw the required impact.
6. ROI determination is the final outcome of this exercise
Once we monetize the gain and compare it with the costs, we arrive at the ROI of
your eLearning. Often, this outcome may require you to go back to the TNA and
reassess or tweak the way forward.
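
As a minimal sketch of that final calculation (the monetized gain and cost figures below are placeholders; the real numbers come out of your TNA baseline and Level 4 assessment):

```python
monetized_gain = 150_000  # e.g., value of time saved or errors avoided
training_cost  = 100_000  # development, delivery, and learner time

roi_percent = (monetized_gain - training_cost) / training_cost * 100
print(f"ROI of eLearning: {roi_percent:.0f}%")  # 50% in this example
```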
How these aspects are interconnected and form part of a continuous cycle is captured
in the diagram here.

ROI determination methodology: One of the popular models used for ROI
determination is Kirkpatrick’s model of training evaluation. I will outline what it
entails and how it can be used to determine training effectiveness and the ROI of
eLearning.

What Is Kirkpatrick’s Model Of Training Evaluation?

Kirkpatrick’s model of training evaluation is one of the popular models used to
evaluate the effectiveness of training. It was created in 1959 and has undergone
revisions in 1975 and 1994. The fact that it has survived for over half a century affirms
the value it continues to offer.
It features the following four levels:

Level 1: Reaction

Level 2: Learning

Level 3: Behavior

Level 4: Impact

Initially, the model was viewed as a pyramid with each level building up from the
previous one.

Increasingly, the same 4 levels are viewed as a "chain of evidence," and I feel this
reflects a more relevant connection between the levels.
How Can You Practically Use The Kirkpatrick Model Of Training Evaluation To
Determine The ROI?

To use the Kirkpatrick model of training evaluation, we need to identify two aspects
at each level, namely:

- What are we measuring?
- How will we use this outcome (to improve training effectiveness and increase its
impact)?

Level 1: Reaction

Objective: At this level, the focus is to determine the learner’s reaction to the
training. Today, we have wide-ranging options through Learner Analytics to identify if
the learners liked the training, if they found it useful, and if they would be able to
apply the learning.

From an evaluation perspective, this feedback enables L&D teams to assess if they
are on track or if any further changes are required.

Level 2: Learning

Objective: At this level, the focus is to determine what was learned or gained (this
should be attributable directly to the training).

From an evaluation perspective, this feedback enables L&D teams to assess if they
met their mandate (captured during TNA) that could include:

- Knowledge gain.
- Acquisition of a new skill.
- Further proficiency gain on an existing skill.
- Behavioral change.
The pointers from this stage of evaluation would indicate:

- The need for further training.
- The need to supplement formal training with other measures that could include
performance support interventions or mentoring/coaching.

Level 3: Behavior

Objective: At this level, the focus is to determine if the learner behavior changed
(again, this should be attributable directly to the training).
From an evaluation perspective, this feedback enables L&D teams to assess if there
was a demonstrable change in the learner’s behavior.

Often, this can be tricky: learners may have successfully cleared the assessment, yet
show no demonstrable change.

This may need re-assessment to determine why this is not happening.

- Sometimes, it could be because learners have no opportunity to demonstrate what
they learned, and often, it may point to the need for reinforcement.
- There may be a need to offer refresher programs over an extended period of time
until the required gain is observed.

Level 4: Impact

Objective: At this level, the focus is to determine if the business saw the gain and if
the required impact was created on account of the training.

From an evaluation perspective, this feedback enables L&D teams to review if the
expected impact identified during the TNA phase indeed happened.

How Can You Use Kirkpatrick’s Model Of Training Evaluation To Measure ROI?

ROI determination (or Level 5): This is an add-on to the initial model (which has 4
levels) and is referred to as the Kirkpatrick-Phillips Evaluation Model of training.

In simple terms, you will have a positive ROI of eLearning if the demonstrable gain
from the training exceeds the cost you incurred to create and deliver the training.

I hope this article gives you practical insights into how you can enhance the impact of
each stage from TNA to the evaluation of its impact and see a positive ROI of
eLearning.
The Kirkpatrick Model Of Training
Evaluation And Learning Analytics
A thorough evaluation of your training program is a prerequisite for getting started with
learning analytics. Mastering the Kirkpatrick Model can be your best plan for it. Read these
5 simple steps to get started.

Implementing Learning Analytics With The Kirkpatrick Model

Corporate training isn’t just about curiosity and learning; it needs to be utilitarian and
impactful. With businesses pouring millions of dollars into training every year, it’s no
wonder they need real, quantified business results and Return On Investment
(ROI). This is where learning analytics enters the L&D arena.

When it comes to implementing learning analytics, the Kirkpatrick Model gives a
simple yet powerful way to measure the effectiveness of training. Ever since its
inception, it has been widely used by L&D professionals. Here is an overview of the 4
levels of the model.

Understanding The 4 Levels Of The Kirkpatrick Model For Training Evaluation

1. Reaction

It is the assessment of the initial reaction of learners to the course: reactions to its
relevance, the training methods used, and the delivery. It is NOT a measurement of learning.

2. Learning

This level measures whether learners acquired the knowledge and skills taught in the
course. Methods for assessing learning may be knowledge or performance-based.

3. Behavior

It measures the extent to which the acquired knowledge and skills are transferred to
the job.

4. Results

The value of learning depends on the quantifiable impact it has on the organization.
Results measure this impact.
Pushing A Boulder Uphill: The Challenge For Evaluation

Ask any L&D professional about training evaluation and they will most certainly talk
about having the Kirkpatrick Model in place. But when you look carefully, typically
only the first 2 levels are implemented. Even with its ubiquitous presence for many
years now, why do most organizations still struggle with mastering a comprehensive
evaluation program? These are a few of the most significant challenges.

There is a lot of “friction” when it comes to implementing the evaluation.
Stakeholders are often more concerned about finishing up a training program and
getting employees back on the job as soon as possible (and less concerned about
detailed evaluation). Moreover, there is generally a level of trust that L&D will be
effective, and many stakeholders only want to know if employees passed or failed.

In general, Levels 1 and 2 are integrated with the training and under the direct
control of the L&D function. Levels 3 and 4 call for a much higher level of
involvement by stakeholders and the organization, thus making the process more
challenging. Even with Levels 1 and 2, the appropriateness of the
evaluation/assessment questions and extent to which the information gathered is
used varies widely.

Many Level 2 evaluations “drift away” from the learning objectives and do not
appropriately measure the achievement of those objectives; for example, using a
multiple-choice question to measure whether a participant can log in to a software
program. Often, these evaluations are based on a passing threshold, and more
meaningful and useful learning analytics are NOT enabled.

There often is a lack of available evaluation expertise (such as evaluation strategy,
design, methodology, analysis, and reporting) plus limited resources, time, and
budget.

Getting Ahead Of The Odds: The Solution For Evaluation

There are 5 simple steps to make things clear.

1. Have The Talk: The First Step To Effective Evaluation

The first step to building successful training evaluations and implementing learning
analytics is to have a comprehensive and clear conversation with all stakeholders.
Educate them on how implementing learning analytics properly can enhance the
impact of training, which in turn can translate to an increase in ROI. The goal is to
collaborate and establish a strategy that outlines what learning analytics will be
implemented.

2. Your Data Is Only As Good As The Questions You Ask!

After convincing all stakeholders about the value of learning analytics, devise a plan
on how to go about implementing it. It is necessary to have all stakeholders
participate and agree on:

- How each level of the Kirkpatrick Model is going to be addressed; specifically,
what questions should be asked and what decisions need to be made
- How the evaluation strategy and learning analytics should unfold
- From where the information will be gathered (sources)
- How the information will be gathered (methods), including how the desired
learning analytics will be enabled
- Requirements for analyzing data
- Requirements for how the findings and recommendations will be reported

3. Align Your Evaluation Instruments With The Model

Each level of the Kirkpatrick Model evaluates a particular stage in the learners’
journey. Each level has its own objective and role in the evaluation process, and
needs proper instruments in place to be successful. Create templates for each
instrument that can be adapted to each training solution.

Reaction
Role: Feedback about the learners’ experience
Sources/Methods: Feedback surveys, smiley sheets, point rating scales

Learning
Role: Assess the difference between pre- and post-training knowledge and skills
Sources/Methods: Pre- and post-assessments, focus groups

Behavior
Role: Information on “success” on the job
Sources/Methods: Mentoring, feedback from supervisors before and after training,
peer comments

Results
Role: Most persuasive information for management
Sources/Methods: Review of finances

4. ‘Capture, Analyze, Report’ To Leverage The Benefits Of Your LMS

Learning Management Systems (LMSs) are meant to be the data tracking and
reporting hub for eLearning courses. While most LMSs do provide these facilities,
managers rarely pay attention to any data other than completion and passing rates.

For example, if you have a training solution that aims to teach a 10-step procedure
and your only focus is knowing if and when learners have completed the course, it
doesn’t do much to help determine the quality and effectiveness of training. You
need to leverage maximum benefits out of your LMS and enable deeper learning
analytics.

You can adopt a Learning Record Store (LRS) within your LMS, which enables
nuanced tracking and recording of learning activities—online and offline. Having an
xAPI-enabled LMS takes assessing learning experiences a step further.
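
For a feel of what that richer tracking looks like, here is a hypothetical xAPI statement, shown as a Python dict; the learner, course ID, and score are invented for illustration:

```python
statement = {
    "actor": {"name": "A. Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {"id": "https://example.com/courses/safety-procedure-101"},
    "result": {"score": {"scaled": 0.85}, "success": True},
}

# Because each activity can emit its own statement, an LRS can record far
# more than course completion: which steps a learner practiced, how they
# scored on each, and even offline or on-the-job activities.
```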

5. Analytics Serve No Purpose Without Implementation

Reports are colorful, but they are just useless pie charts and graphs if you set them
aside after a glance. Implementing ways to collect evaluation metrics doesn’t make
sense if they are not going to be put to use. Update stakeholders on the reports and
come up with strategies to implement changes based on them. It’s really about
having good data to inform decisions. You should also periodically update your
evaluation strategy and continuously improve your evaluation process.

The most important part of evaluation or analytics implementation is having
resources, time, and budget in place. To be honest, you will probably meet a lot of
challenges at every phase of the evaluation and implementation process. But rest
assured, successfully implementing learning analytics will transform your employee
training program for good. So, why not go the extra mile if it ends up being the key
to improving your business impact? For insights on the practical aspects of learning
analytics, including a collaborative 5-step process for implementation, register for the
live webinar, or discover practical ideas to get started by downloading the eBook
Leveraging Learning Analytics To Maximize Training Effectiveness - Practical Insights
And Ideas.
