Read The Articles!
Does the term evaluation scare you? Don’t let it! Performing evaluations shouldn’t
have a negative association; instead, use the results of the evaluation to drive your
organization to success! Let’s take a deeper look at the levels of Kirkpatrick’s
evaluation model.
Level 1: Reaction
The reactions of learners/participants are often overlooked, yet they can hold valuable
insight into overall feelings toward trainings/programs. Level one of Kirkpatrick’s
model can aid in the identification of positive or negative feelings. Identifying positive
reactions to trainings/programs is useful: this information will assist in retaining
learners’/participants’ attendance. Detecting negative reactions or emotions is equally
essential. Don’t be afraid of negative feedback; you have the ability to act on it!
Negative reactions may prevent learners/participants from completing the
trainings/programs. The identification of both positive and negative reactions can aid
in organizational support through the modification of trainings/programs (Reio et al.,
2017, p. 36).
Level 2: Learning
This level, according to Kirkpatrick and Kayser (2016), “assess[es] the degree to which
participants acquire the intended knowledge, skills, attitude, confidence, and
commitment based on their participation in the training” (p. 42). Achievement tests or
performance assessments are used to measure learning, and surveys measure self-
reported behavior changes at this level of evaluation (Hale & Astolfi, 2015, p. 21). The
evaluation of learning is vital, as learning can lead to changes in behavior (Kirkpatrick
& Kirkpatrick, 2006, p. 50).
Level 3: Behavior
Level three, according to Jones, Fraser, and Randall (2018), is identified as “the most
important of the four levels because training alone will not generate changes in
practice or outcomes” (p. 498). Level three evaluates behavior, or to what degree
learners/participants are able to transfer content to job-specific context (Jones et al.,
2018, p. 498).
This level is crucial in evaluating learning transfer and the impact that improved
performance will have on the organization. The goal of training is to improve
performance, and implementing Kirkpatrick’s levels of evaluation will provide you with
the quantitative data you need to determine whether that improvement occurred.
Level 4: Results
The highest level of Kirkpatrick’s evaluation model is considered level four: results.
Results indicate the extent to which targeted outcomes have been met (Jones et al.,
2018, p. 499).
If you can’t get enough of evaluations, you are in the right career! For an Instructional
Designer, evaluations are critical and are completed at multiple stages of the process!
Why Evaluate Training Impact? The Kirkpatrick Model Can Help You
In short, so that you can improve the learner experience and achieve the goals of
your training. Good Instructional Design never really comes to an end; your content
can always be refined and improved. If you simply finish a project and move on to the
next one, you will never learn; if there are problems with the content you’re
producing, they will persist through the content you produce in the future. As
designers and developers, it is our duty to be learning ourselves and to be continually
improving what we create. We owe it to our clients and to the people learning from
our courses.
As we go through these levels, ask yourself where you would place your current
evaluation of your training. And what could you do to move it on to the next level?
Level 1: Reaction
This is on the level of ‘which face best represents how you feel about this’. It’s about
how participants respond to your training; did they enjoy it? Did they think it was
valuable? Did they feel good about the instructor, the venue, or the design? It tells
you nothing about whether your course actually fulfilled its objective and taught
something; the later levels do that. But it does give you an idea of how your learning
was received, and how you could improve the user experience.
If you aren’t currently evaluating your training at all, this is the place to start. Why not
implement a post-course questionnaire, asking participants how they found the
course? Gather the responses and see if there are any recurring themes. Do people
struggle with the navigation? Find the voiceover irritating? Do you need to rethink
accessibility? There’s a potential wealth of information here, and a questionnaire is a
simple thing to set up.
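As a rough illustration of surfacing recurring themes, here is a minimal Python sketch that tallies how often hypothetical theme keywords appear in free-text questionnaire responses. The theme names and keywords are invented for the example; a real analysis would use keywords tuned to your own course and audience:

```python
from collections import Counter

# Hypothetical keyword buckets; adjust to the themes you expect to see.
THEMES = {
    "navigation": ["navigate", "navigation", "menu", "lost"],
    "voiceover": ["voice", "narration", "audio"],
    "accessibility": ["accessib", "captions", "contrast", "screen reader"],
}

def tally_themes(responses):
    """Count how many responses mention each theme at least once."""
    counts = Counter()
    for text in responses:
        lower = text.lower()
        for theme, keywords in THEMES.items():
            if any(k in lower for k in keywords):
                counts[theme] += 1
    return counts

responses = [
    "I kept getting lost in the menu structure.",
    "The narration was hard to hear over the music.",
    "Couldn't navigate back to the quiz.",
]
print(tally_themes(responses))  # Counter({'navigation': 2, 'voiceover': 1})
```

Even a simple tally like this makes it obvious when two-thirds of your respondents are struggling with the same thing.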
Level 2: Learning
A good way of assessing this is with 2 quizzes—one at the beginning and one at the
end of the course. Ask questions about the same topics, and see if learners are
answering more questions correctly after their training. If they are, it would suggest
that they did learn; if not, then something about your learning material is clearly not
doing its job.
This can be a helpful method of evaluation, as it can give you specific information. If
all your learners are getting questions about a specific topic wrong, it might be time
to look again at how you’re teaching that topic. What about it is unclear? How could
you better present it so your learners take the knowledge on board?
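The pre/post quiz comparison described above can be sketched in a few lines. The scores here are invented for illustration; the point is simply to compute the average gain per topic so that weak topics stand out:

```python
# Hypothetical scores: per learner, fraction correct on each topic,
# before and after training.
pre  = {"fire safety": [0.4, 0.5, 0.3], "first aid": [0.7, 0.8, 0.6]}
post = {"fire safety": [0.9, 0.8, 0.7], "first aid": [0.7, 0.75, 0.65]}

def topic_gains(pre, post):
    """Average post-training score minus average pre-training score, per topic."""
    gains = {}
    for topic in pre:
        avg_pre = sum(pre[topic]) / len(pre[topic])
        avg_post = sum(post[topic]) / len(post[topic])
        gains[topic] = round(avg_post - avg_pre, 2)
    return gains

print(topic_gains(pre, post))
# A small or negative gain flags a topic whose teaching needs a rethink.
```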
Another way of assessing this is, again, with a post-course questionnaire. But in
addition to the basic questions of level 1, you could ask the learners to tell you what
they learned on this course. In some cases, this might even be more insightful than a
quiz. Asking people to articulate something in their own words shows how much they
truly understand about it.
One final suggestion: you could send out a follow-up quiz, say a week after the
training. Quizzing them straight after they’ve taken the course, when it’s all fresh in
their minds, doesn’t tell you what they’ll remember over the long term. What do they
recall a week later? A month later? And if the key points aren’t being remembered,
how can you improve your learning to ensure that they are? Should you be providing
refresher training or job aids in addition to your training course? Would it help to
build a mobile app that sends daily tips and reminders to your learners after the
course?
Level 3: Behavior
We’ve all been on those compliance courses where we learn about the correct
procedure for doing something, and then go back into the office the next day and
continue doing things exactly how we used to. The issue here is not lack of
knowledge; we know the correct procedure. It’s that the knowledge is not being
applied. Reaching Kirkpatrick’s level 3 means asking: are participants using what they
learned?
How do I move up to this level?
This is usually something you have to assess a little while after the course has been
taken; you need to leave time for the new behaviors to settle in.
The best way of gaining this kind of insight is through 360º feedback.
360º feedback comes from the participant themselves, their colleagues, and their
superiors. Asking the participant and everyone around them if their behavior has
changed after taking the learning will give you a 360º view of the situation! If your
training has had the desired effect, it will be noticeable to everyone involved.
Sometimes, the feedback will say that no changes have occurred. In those instances,
it’s important to ask why people think this is the case. Behavior can only change if the
conditions for it are favorable. Will the boss actually let your participant apply their
new knowledge? Is there a tool or a system that has not been put in place? Does the
learner have any desire or incentive to apply the learning? And what can be done to
remedy these situations?
Level 4: Results
Kirkpatrick’s final level of evaluation looks at whether training positively impacted the
organization.
This relies on goals being set before the development of the training. What changes
were managers looking for? How is success defined? Otherwise, you won’t know what
results you’re hoping to see.
The way you evaluate this will be determined by the results you’re looking to see.
Typically, it will involve analyzing data. If it’s improvement in ROI you’re aiming for,
you need to be assessing financial statements. If it’s a lessening of health and safety
incidents in the office, you need the data on how many incidents there have been.
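As a toy illustration of that kind of analysis, assuming hypothetical monthly incident counts before and after a training rollout, you might compare the averages like this:

```python
# Hypothetical monthly health-and-safety incident counts,
# six months before and six months after the training rollout.
before = [14, 12, 15, 11, 13, 14]
after = [9, 8, 10, 7, 8, 9]

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)
change_pct = (avg_after - avg_before) / avg_before * 100

# Prints the average incidents per month before and after,
# and the percentage change between them.
print(f"Average incidents/month: {avg_before:.1f} -> {avg_after:.1f} "
      f"({change_pct:+.0f}%)")
```

A real evaluation would of course need more months of data and some care about other factors (seasonality, staffing changes) before crediting the training with the improvement.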
When evaluating the impact of your training, it’s important to know about all 4 of
these levels. For example, if the behavior hasn’t changed, you need to look at the
previous levels to understand why—did people actually learn what they needed to?
And if not, is that because the design was so confusing and unhelpful that they
mentally checked out? Plan your training evaluations to cover all 4 levels; that way,
you’ll have the whole perspective on how effective your training is, and how it can be
improved.
Determining The ROI Of eLearning - Using
Kirkpatrick’s Model Of Training Evaluation
The measurement of the effectiveness of online training is a hot topic right now. In this
article, I outline how you can use the Kirkpatrick model of training evaluation to measure
training effectiveness, its impact, and the ROI of eLearning.
How Do You Begin The Exercise To Measure The Effectiveness Of Online Training?
At EI Design, we use the approach shown here. Essentially, we focus on each stage
from Training Needs Analysis (TNA) to ROI calculation, as the right action at each
stage will positively impact the ROI of eLearning.
Level 1: Reaction
Level 2: Learning
Level 3: Behavior
Level 4: Impact
Initially, the model was viewed as a pyramid with each level building up from the
previous one.
Increasingly, the same 4 levels are viewed as a "chain of evidence," and I feel this
reflects a more relevant connection between the levels.
How Can You Practically Use The Kirkpatrick Model Of Training Evaluation To
Determine The ROI?
Level 1: Reaction
Objective: At this level, the focus is to determine the learner’s reaction to the
training. Today, we have wide-ranging options through Learner Analytics to identify if
the learners liked the training, if they found it useful, and if they would be able to
apply the learning.
From an evaluation perspective, this feedback enables L&D teams to assess if they
are on track or if any further changes are required.
Level 2: Learning
Objective: At this level, the focus is to determine what was learned or gained (this
should be attributable directly to the training).
From an evaluation perspective, this feedback enables L&D teams to assess if they
met their mandate (captured during TNA) that could include:
Knowledge gain.
Acquisition of a new skill.
Further proficiency gain on an existing skill.
Behavioral change.
Level 3: Behavior
Objective: At this level, the focus is to determine if the learner’s behavior changed
(again, this should be attributable directly to the training).
From an evaluation perspective, this feedback enables L&D teams to assess if there
was a demonstrable change in the learner’s behavior.
Often, this can be tricky: learners may have successfully cleared the assessment, yet
there is no demonstrable change on the job.
Level 4: Impact
Objective: At this level, the focus is to determine if the business saw the gain and if
the required impact was created on account of the training.
From an evaluation perspective, this feedback enables L&D teams to review if the
expected impact identified during the TNA phase indeed happened.
How Can You Use Kirkpatrick’s Model Of Training Evaluation To Measure ROI?
ROI determination (or Level 5): This is an add-on to the initial model (that has 4
levels) and is referred to as the Kirkpatrick-Phillips Evaluation Model of training.
In simple terms, you will have a positive ROI of eLearning if the demonstrable gain
from the training exceeds the cost you incurred to create and deliver the training.
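That sentence translates directly into the standard ROI formula used in the Kirkpatrick-Phillips model: ROI (%) = (benefit - cost) / cost x 100. Here is a quick sketch with invented figures:

```python
def training_roi(benefit, cost):
    """Phillips-style ROI: net benefit as a percentage of cost."""
    return (benefit - cost) / cost * 100

# Hypothetical figures: the training cost $40,000 to create and deliver,
# and the measured gain (e.g., productivity improvement) is $100,000.
roi = training_roi(benefit=100_000, cost=40_000)
print(f"ROI: {roi:.0f}%")  # 150% -> positive ROI
```

The hard part, of course, is not the arithmetic but attributing a credible dollar value to the "benefit" side, which is exactly what Levels 1 through 4 are meant to build the evidence for.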
I hope this article gives you practical insights into how you can enhance the impact of
each stage from TNA to the evaluation of its impact and see a positive ROI of
eLearning.
The Kirkpatrick Model Of Training
Evaluation And Learning Analytics
A thorough evaluation of your training program is a prerequisite for getting started with
learning analytics, and mastering the Kirkpatrick Model can be your best plan for it. Read
these 5 simple steps to get started.
Corporate training isn’t just about curiosity and learning; it needs to be utilitarian and
impactful. With businesses pouring millions of dollars every year into training, it’s no
wonder they need real and quantified business results and Return On Investment
(ROI). This is where learning analytics enters the L&D arena.
1. Reaction
It is the assessment of the initial reaction of learners to the course: reactions to its
relevance, the training methods used, and the delivery. It is NOT a measurement of learning.
2. Learning
This level measures whether learners acquired the knowledge and skills taught in the
course. Methods for assessing learning may be knowledge or performance-based.
3. Behavior
It measures the extent to which the acquired knowledge and skills are transferred to
the job.
4. Results
The value of learning depends on the quantifiable impact it has on the organization.
Results measure this impact.
Pushing A Boulder Uphill: The Challenge For Evaluation
Ask any L&D professional about training evaluation and they will most certainly talk
about having the Kirkpatrick Model in place. But when you look carefully, typically
only the first 2 levels are implemented. Even with its ubiquitous presence for many
years now, why do most organizations still struggle with mastering a comprehensive
evaluation program? These are a few of the most significant challenges.
In general, Levels 1 and 2 are integrated with the training and under the direct
control of the L&D function. Levels 3 and 4 call for a much higher level of
involvement by stakeholders and the organization, thus making the process more
challenging. Even with Levels 1 and 2, the appropriateness of the
evaluation/assessment questions and extent to which the information gathered is
used varies widely.
Many Level 2 evaluations “drift away” from the learning objectives and do not
appropriately measure the achievement of those objectives. A common example is using a
multiple-choice question to measure whether a participant can log in to a software program.
Often, these evaluations are based on a simple passing threshold, and more meaningful and
useful learning analytics are NOT enabled.
The first step to building successful training evaluations and implementing learning
analytics is to have a comprehensive and clear conversation with all stakeholders.
Educate them on how implementing learning analytics properly can enhance the
impact of training, which in turn can translate to an increase in ROI. The goal is to
collaborate and establish a strategy that outlines what learning analytics will be
implemented.
After convincing all stakeholders of the value of learning analytics, devise a plan
for how to go about implementing it. It is necessary to have all stakeholders
participate in and agree on that plan.
Each level of the Kirkpatrick Model evaluates a particular stage in the learners’
journey. Each level has its own objective, role in the evaluation process, and needs
proper instruments in place to be successful. Create templates for each instrument
that can be adapted to each training solution.
Learning Management Systems (LMSs) are meant to be the data tracking and
reporting hub for eLearning courses. While most LMSs do provide these facilities,
managers do not really pay attention to most data other than completion and
passing rates.
For example, if you have a training solution that aims to teach a 10-step procedure
and your only focus is knowing if and when learners have completed the course, it
doesn’t do much to help determine the quality and effectiveness of training. You
need to leverage maximum benefits out of your LMS and enable deeper learning
analytics.
You can adopt a Learning Record Store (LRS) within your LMS, which enables even
more nuanced tracking and recording of learning activities, both online and offline.
Having an xAPI-enabled LMS takes assessing learning experiences a step further.
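For context, an xAPI statement records a learning activity as an "actor, verb, object" triple in JSON. Here is a minimal example built in Python; the learner, course URL, and score are placeholders invented for illustration (the verb IRI is a standard ADL verb):

```python
import json

# A minimal xAPI statement: "actor verb object", expressed as JSON.
# The learner's email and the activity URL below are placeholders.
statement = {
    "actor": {
        "name": "Jane Learner",
        "mbox": "mailto:jane.learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/fire-safety",
        "definition": {"name": {"en-US": "Fire Safety Refresher"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

print(json.dumps(statement, indent=2))
```

Because statements like this can describe any activity, not just course completions, an LRS can capture offline workshops, on-the-job practice, and simulations alongside your eLearning data.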
Reports are colorful, but they are just useless pie charts and graphs if you set them
aside after a glance. Implementing ways to collect evaluation metrics doesn’t make
sense if they are not going to be put to use. You can update stakeholders about the
reports and come up with strategies to implement changes based on them. It’s really about
having good data to inform decisions. You should also periodically update your
evaluation strategy and continuously improve your evaluation process.