
I.J.E.M.S., VOL. 3(1) 2012: 77-84 ISSN 2229-600X

EVALUATION OF TRAINING IN ORGANIZATIONS: A PROPOSAL FOR AN INTEGRATED MODEL

1 Vijay Kumar P., 2 Narayana M.S., 3 Vidya Sagar M.
1 Director, School of Management, JNTUK.
2 Professor & Director, PG Courses, PNC & KR College of PG Courses, Narasaraopet, Guntur – 522601, A.P., India
3 Assistant Professor, MBA Department, Mittapalli College of Engineering, Tummalapalem, NH-5, Guntur – 522233

ABSTRACT
Purpose – Training is a key strategy for human resources development and for achieving organisational objectives. Organisations and public authorities invest large amounts of resources in training, but rarely have the data to show the results of that investment. Only a few organizations evaluate training in depth, owing to the difficulty involved and the lack of valid instruments and viable models. The purpose of this paper is to present an evaluation model, successfully applied in practice, that integrates all training dimensions and effects. The model analyses the satisfaction, learning, pedagogical aspects, transfer, impact and profitability of training.

KEYWORDS: Training and Development, Pedagogical System

INTRODUCTION

Training and development has a positive impact on the individual, the organization and the nation (Smith, 1992). Human resource evaluation is defined as the "systematic collection of descriptive and judgmental information necessary to make effective training decisions related to the selection, adoption, value, and modification of various instructional activities" (DeSimone et al., 2003). This definition makes several important points:

First, when conducting an evaluation, both descriptive and judgmental information may be collected, and both are needed in human resource development (HRD) evaluation. Some of the judgments are made by those involved in the program, and others by those not involved in it.

Second, evaluation also involves the systematic collection of information according to a predetermined plan or method, to ensure that the information is appropriate and useful.

Finally, evaluation is conducted to help managers, employees, and HRD professionals make informed decisions about particular programs and methods. For example, if part of a program is ineffective, it may need to be changed or discarded; if a certain program proves valuable, it may be replicated in other parts of the organization. Evaluation begins with a clear identification of the purpose or results expected from the training programs. By focusing on purpose and results, evaluators are guided to the reasons the training program was developed and to the changes and improvements in learner performance that should result from training. Training programs are expected to be based on important organizational goals and improvement efforts; however, that connection must directly guide training efforts if training results are to be linked to organizational measures (Burrow and Berardinelli, 2003).

Evaluation can serve a number of purposes within the organization. According to Phillips (1983), evaluation can help to do the following:
 Determine whether a program is accomplishing its objectives.
 Identify the strengths and weaknesses of HRD programs.
 Determine the cost-benefit ratio of an HRD program.
 Decide who should participate in future HRD programs.
 Identify which participants benefited the most or least from the program.
 Reinforce major points to be made to the participants.
 Gather data to assist in marketing future programs.
 Determine if the program was appropriate.
 Establish a database to assist management in making decisions.

This paper aims to offer a training evaluation model, thereby helping the training professional to design and implement rigorous and coherent training evaluation processes that enable the entire training function to be optimized.

THE CONCEPT OF EVALUATION
Training evaluation involves scrutinizing the program both before and after the program is completed. Figure 1 emphasizes that training evaluation should be considered by managers and trainers before training has actually occurred. The evaluation process should begin with determining training needs. Needs assessment helps identify what


knowledge, skills, behaviour, or other learned capabilities are needed. Once the learned capabilities are identified, the next step in the process is to identify specific, measurable training objectives to guide the program. The more specific and measurable these objectives are, the easier it is to identify relevant outcomes for the evaluation. Analysis of the work environment to determine transfer of training is also useful for determining how training content will be used on the job. Based on the learning objectives and the analysis of transfer of training, outcome measures are designed to assess the extent to which learning and transfer have occurred.

Once the outcomes are identified, the next step is to determine an evaluation strategy. Factors such as expertise, how quickly the information is needed, change potential, and the organizational culture should be considered in choosing a design. Planning and executing the evaluation involves previewing the program (formative evaluation) as well as collecting training outcomes according to the evaluation design. The results of the evaluation are used to modify, market, or gain additional support for the program. The final step is the examination of each aspect of the evaluation process, starting with the development of outcome measures.

Figure 1. The Evaluation Process (according to Grove and Ostroff, 1991)
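To make concrete what specific, measurable objectives buy the evaluator, the link between an objective and its outcome measure can be sketched in a few lines of Python. The objective names, measures and target values below are invented for illustration and are not taken from the paper:

```python
# Sketch: each training objective carries a measurable criterion, so
# checking the matching outcome measure becomes mechanical.
# All names and thresholds here are hypothetical.
objectives = {
    "operate_crm_module": {"measure": "post_test_score", "target": 80},
    "handle_complaints": {"measure": "role_play_rating", "target": 4},
}

# Outcome data gathered after training (also invented).
outcomes = {"post_test_score": 85, "role_play_rating": 3}

for name, spec in objectives.items():
    met = outcomes[spec["measure"]] >= spec["target"]
    print(f"{name}: {'met' if met else 'not met'}")
```

A vaguely worded objective ("understand the CRM") offers no such target, which is exactly why the text stresses measurability.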

HOW TO PREPARE A PLAN FOR EVALUATING TRAINING
There are several models of training evaluation that organise the process, provide guidelines for the content and outline the phases of its implementation. Some of them are based on contributions from the classic master of evaluation, Donald Kirkpatrick, whose model consists of four levels of evaluation.

First Level: Reactions
The first level is the reaction level, in which the reactions of the trainees are understood to mean the way in which they perceive and subjectively evaluate the relevance and quality of the training. It attempts to answer questions regarding the participants' perceptions: Did they like it? Was the material relevant to their work? This type of evaluation is often called a "smiley sheet." According to Kirkpatrick, every program should at least be evaluated at this level to provide for the improvement of a training program. At this level, evaluation measures the satisfaction of the people who followed the training. In conjunction with that, positive reactions are of critical importance in creating sufficient learning motivation. In this sense, the participants' reactions have important consequences for learning (level two). Although a positive reaction does not guarantee learning, a negative reaction almost certainly reduces its possibility.

Second Level: Learning
Learning can be described as the extent to which the attitudes of the participants change, their knowledge increases or their skills are broadened as a consequence of the training. At this second level, evaluation is intended to measure the progress made in terms of knowledge, skills or attitudes. In other words, evaluation tests the participants to see whether new skills have been acquired. At this point, evaluation can relate to the method used to transfer the knowledge, skills and attitudes. To assess the amount of learning that has occurred due to a training program, level two evaluations often use tests conducted before training (pretest) and after training (post-test). Assessing at this level moves the evaluation beyond learner satisfaction and attempts to assess the extent to which students have advanced in skills, knowledge, or attitude. Measurement at this level is more difficult and


laborious than at level one. Methods range from formal and informal testing to team assessment and self-assessment. If possible, participants take the test or assessment before training (pretest) and after training (post-test) to determine the amount of learning that has occurred.

Third Level: Behavior
A third evaluation level is that of changes in job behavior or performance. This involves studying the change in job behavior which takes place as a result of the training. Evaluating at this level attempts to answer the question: Are the newly acquired skills, knowledge, or attitudes being used in the everyday environment of the learner? At this point, evaluation sees whether tasks are performed differently before and after the training. In order for positive reactions and learning effects actually to lead to changed job behavior, the transfer of acquired skills to the work situation must especially be ensured. The quality of this transfer is strongly dependent on the support the participant receives after the training, especially from his immediate supervisor or coach (Kirkpatrick, 1998). A study by Bergenhenegouwen (1997) explains the low effectiveness of training courses by problems in this area: immediate bosses who have a discouraging effect, who do not themselves set a satisfactory example, or who provide insufficient supervision. For many trainers this level represents the truest assessment of a program's effectiveness. However, measuring at this level is difficult, as it is often impossible to predict when the change in behavior will occur; this requires important decisions in terms of when to evaluate, how often to evaluate, and how to evaluate.

Fourth Level: Results
Level four evaluation attempts to assess training in terms of organizational results. At this point, evaluation checks how the results are evaluated at the end of the training initiatives. An evaluation of the results therefore measures the progress made at organizational level. Frequently thought of as the bottom line, this level measures the success of the program in terms that managers and executives can understand: increased production, improved quality, decreased costs, reduced frequency of accidents, increased sales, and even higher profits or return on investment (Level 5: ROI). From a business and organizational perspective, this is the overall reason for a training program, yet level four results are not typically addressed. Results are difficult to determine in financial terms and hard to link directly with training.

According to Kirkpatrick (1998), the subject of evaluation, or the level at which evaluation takes place, depends on the phase during which the evaluation takes place. In Kirkpatrick's four-level model, each successive evaluation level is built on information provided by the lower level. Assessing training needs often entails using the four-level model developed by Donald Kirkpatrick (1994). According to this model, evaluation should always begin with level one, and then, as time and budget allow, should move sequentially through levels two, three, and four. Information from each prior level serves as a base for the next level's evaluation. Thus, each successive level represents a more precise measure of the effectiveness of the training program, but at the same time requires a more rigorous and time-consuming analysis.
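The pretest/post-test comparison used at level two amounts to a simple gain calculation. A minimal sketch in Python, with invented participant names and scores:

```python
# Level-two sketch: learning gain measured as post-test minus pretest.
# Participant names and scores are invented for the example.
pretest = {"ann": 55, "ben": 60, "carla": 70}
posttest = {"ann": 80, "ben": 75, "carla": 85}

gains = {p: posttest[p] - pretest[p] for p in pretest}
mean_gain = sum(gains.values()) / len(gains)

print(gains)                 # improvement per participant
print(round(mean_gain, 1))   # average gain for the group
```

In practice one would also want the pretest scores themselves, since the same post-test score means something different for a novice and for someone who already knew the material.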

Table 1: Human resource training evaluation models/frameworks (DeSimone et al., 2003)

Model/Framework – Training Evaluation Criteria
1. Kirkpatrick (1994) – Four levels: Reaction, Learning, Job Behaviour, and Results
2. CIPP (Galvin, 1993) – Four levels: Context, Input, Process, and Product
3. CIRO (Warr et al., 1970) – Context, Input, Reaction, and Outcome
4. Brinkerhoff (1987) – Six stages: Goal Setting, Program Design, Program Implementation, Immediate Outcomes, Intermediate or Usage Outcomes, and Impacts and Worth
5. Systems approach (Bushnell, 1990) – Four sets of activities: Inputs, Process, Outputs, and Outcomes
6. Kraiger, Ford and Salas (1993) – A classification scheme that specifies three categories of learning outcomes (cognitive, skill-based, affective) suggested by the literature and proposes evaluation measures appropriate for each category of outcomes
7. Kaufman and Keller (1994) – Five levels: Enabling and Reaction, Acquisition, Application, Organizational Outputs, and Societal Outcomes
8. Holton (1996) – Identifies five categories of variables and the relationships among them: Secondary Influences, Motivation Elements, Environmental Elements, Outcomes, Ability/Enabling Elements
9. Phillips (1996) – Five levels: Reaction and Planned Action, Learning, Applied Learning on the Job, Business Results, Return on Investment


LEVELS OF TRAINING EVALUATION

Level 1: Participant satisfaction
The first level of evaluation is to ascertain the participants' opinions on the training received and their level of satisfaction in this regard. The aspects that usually make up this level of the evaluation, and on which the participants' opinions are sought, are as follows:
 the appropriateness of the training regarding their needs and expectations;
 achievement of the goals set by the training;
 the quality of content: its suitability, level, depth, interest, and the ratio between theory and practice;
 the quality of the methods and techniques used: suitability, variety, enjoyment;
 the quality of pedagogical resources: documents, audiovisual materials, projection equipment;
 the trainer: his/her knowledge and skills at a pedagogical level, communication, steering of the groups;
 the group climate and level of participation;
 the quality of other resources that come into play, such as classrooms and spaces, services (e.g. coffee, lunch), timetables, information received;
 the scope for applying what has been learned to the workplace; and their suggestions and proposals for improvement.

Virtually all organisations evaluate this level, usually through a questionnaire that participants complete just after the training has ended. However, the satisfaction of participants can also be assessed during training, with the intention of introducing improvements during the process as a result of participants' opinions. One can also use other evaluation instruments, both during and at the end of training, such as:
 informal or spontaneous evaluation, through group questioning or by questioning some of the participants individually about their satisfaction regarding the above-mentioned aspects;
 collective assessment, applying group techniques that organise the participants in small discussion groups to think about one or more of the items outlined above. The assessment can be conducted by the trainer or by a person from the training department, and allows the gathering of consensus views on the level of satisfaction with training;
 participant observation by the trainer, which can lead to the development of a report that is delivered to the training department;
 interviews conducted by the training department with some participants selected at random and/or with the trainer.

The evaluation at this level has several limitations that should be discussed. Firstly, one should emphasise the great sensitivity of the results to the climate created during training; thus, one can have a very high level of satisfaction with inadequate training activities that are nevertheless led by a trainer with great social and communication skills, and vice versa.

Secondly, this level of evaluation provides the participant's view of the training, but does not report on the actual learning by the participant or on the application of such learning in the workplace, and even less on the impact that all this will have on the organisation. Therefore, this level of assessment should be followed by the next levels as, by itself, it provides useful but insufficient information on the results of training.

Thus, organisations that only assess participant satisfaction, of which unfortunately there are still many, do not in fact evaluate training but merely reflect the opinion of its most immediate clients. The usefulness of this will depend on how the information collected is put to use and how it is linked with the results from the other evaluation levels.

Level 2: Learning achieved by the participant
The second level of evaluation focuses on identifying what participants have learnt by the end of the training. Evaluation at this level presupposes the existence of operational and measurable training objectives, which act as an evaluation reference: in other words, as a norm against which to value the learning achieved. But one must also bear in mind that training can generate unexpected learning, which as a result is not reflected in the proposed goals. The evaluation system should be designed to allow this unforeseen learning to be collected, as it is sometimes of great value to the organisation and the individuals. The evaluation of learning takes place primarily in three stages:
(1) At the beginning of training, in order to determine the entrance level of the participants. This diagnostic evaluation, if done in advance, allows the entire design of training activities to be tailored to meet the real needs of participants, thereby increasing the effectiveness of the training.
(2) During training, in order to detect the pace of learning of participants and introduce improvements to help them reach the expected level of learning.
(3) At the end of training, in order to assess the results achieved by the participants, namely the learning achieved thanks to the training.

The evaluation of learning is essential as it detects the immediate results of training and allows for further evaluation regarding the transfer of training to the workplace, which is what really interests the organisation. In fact, if we do not know what participants have learned, we cannot expect them to transfer anything to their workplace.

Level 3: Pedagogical appropriateness
This level is focused on determining the level of internal coherence of the training process from a pedagogical point


of view. In other words, it investigates the pedagogical appropriateness of both the design and the delivery of training, in order to achieve the training objectives most effectively and efficiently. This evaluation level is specific to the model under consideration here and provides a clear pedagogical orientation, differentiating it from other evaluation models. Thus, the elements that are evaluated at this level are those that relate to the design and implementation of the training and its suitability for the target group. They are as follows:
 Training objectives – their relevance is analysed according to the need or needs expected to be met, their suitability to the level of the target group, their relevance, and the quality of their design and writing.
 Content – its relevance is determined in relation to the objectives, along with the appropriateness of its selection, its level of precision and structuring, and the balance between theoretical and practical content.
 Methodology – its relevance is determined in relation to the objectives and content selected, the relevance of the methods and techniques prioritised, the presence and usefulness of practical methods, and the quality of application of the methodology.
 Human resources – the teaching skills of trainers are evaluated, both in terms of knowledge and practical experience and in terms of pedagogical skill and group management.
 Material and functional resources – their appropriateness, relevance, and spatial quality are analysed, as well as furniture, pedagogical resources, timetables, and other material aspects related to training.

From the range of instruments that can be used to evaluate this level, we have selected those used most frequently and those that provide the most significant findings regarding the pedagogical coherence of training. They are as follows:
 Participants' questionnaire – the training department can develop a questionnaire to gather the participants' views on the pedagogical coherence of the elements mentioned above, which can be administered at the end of the training. Rather than developing a specific questionnaire, several items regarding this level of evaluation may be introduced into the satisfaction questionnaire aimed at the participants. Nevertheless, it should be noted that the information obtained through these items only provides the participants' opinions on pedagogical appropriateness, and must therefore be compared with results obtained through the use of other instruments.
 Trainer interview – the training department conducts an interview with the trainer to ascertain the pedagogical appropriateness of the design and the delivery of the training. The interview collects information on all the elements outlined above, and therefore takes place at several points: at the beginning of training to oversee and adapt the design, during training to monitor its implementation, and after training to assess the adequacy of the process undertaken. The interview provides very useful information and helps detect imbalances in order to improve the training.
 Observation – observation is conducted during the delivery of the training, and may be one of two types depending on the agent carrying it out. On the one hand, the trainer can conduct participant observation of the development of training activities as well as of his/her own performance. The information obtained can be drawn up in a report to be discussed in the final interview with the training department mentioned earlier. On the other hand, the training department can conduct a systematic observation of the development of training, using a recording system (for example, a checklist or video) and then subsequently analyse the information gathered. This type of observation, given its cost and difficulty, is usually reserved for those training activities that, for specific reasons, require a thorough assessment of their pedagogical appropriateness.
 Self-evaluation – the trainer conducts a self-evaluation of the development of training and of his/her performance, which is reflected in a semi-structured document that is subsequently analysed by the training department.

These would be the main options for evaluating the pedagogical appropriateness of training. When drawing up its evaluation plan, each organisation should select the evaluation items, agents, timing and tools, depending on its needs and actual possibilities. This level of evaluation provides very useful information for the training department; it guarantees the adaptation of the training design to meet the needs of the organisation; and it allows for the introduction of improvements during the training process and optimises subsequent applications.

Level 4: Transfer
This level is focused on detecting changes that take place in the workplace as a result of training. At Level 2 the learning achieved by the participants is identified, but what really matters to the organisation is not the learning itself, but rather the transfer of learning to the workplace, that is, how it translates into changes in the working behaviour of people. Thus, evaluating transfer means detecting whether the skills acquired through training are applied in the workplace and whether this is sustained over time.

Even though transfer is what all training activities should pursue, achieving this goal is not always guaranteed and is sometimes not easy. There are several models that analyse transfer factors. Here we focus on those factors that depend on the training department and determine the possibility of evaluating the results. Training should be geared towards the transfer of the learning that it generates,


and this should be reflected in the design as well as in the implementation and monitoring of training. Thus, training must begin with a detailed knowledge of the organisation's needs and must be established within operational objectives. These objectives will allow a subsequent evaluation of the changes experienced in people's working behaviour: if we do not know the situation at the start, or the objectives, we cannot objectively determine the changes that have occurred. Furthermore, training needs to be implemented following a methodology that facilitates and enables transfer; in other words, a methodology which is practical, implementable, close to the reality of the job, and which includes strategies to guide and ensure subsequent transfer. Finally, training has to look at mechanisms for monitoring and maintaining transfer, mechanisms that should run parallel to the evaluation. In this way, the orientation of the training design towards transfer is the first requirement for the achievement of transfer of training. But this also requires the active involvement of other key agents in addition to the trainer, such as the participant and his/her superiors and colleagues. These play a crucial role in facilitating transfer as well as in its evaluation, ensuring the application of lessons learned, eliminating potential barriers and collecting information that will make the evaluation possible.

The evaluation of transfer thus involves several persons who all play a crucial role in its execution:
 Trainers and training specialists design the evaluation system and drive and oversee its implementation. For this reason they should obtain the cooperation of other agents and negotiate their level of involvement in the evaluation of transfer.
 The participants also play an important role, through self-evaluating their transfer and assessing the potential barriers in their environment.
 The participant's supervisor or line manager is a key player, as he/she knows the daily performance of co-workers in detail and can assess whether changes have been achieved through the training.
 The participants' colleagues, and even customers, can act as important agents at this stage of evaluation.

Level 5: Impact
The impact of training is understood to mean the effect of certain training activities on an organisation, in terms of responding to the needs of training, solving problems and contributing to the strategic objectives that the organisation has identified. Thus, the impact consists of changes due to learning attained through training and of how the transfer of this learning into the workplace affects the department or area of the trained person as well as the organisation as a whole. The impact of training is thus conceived as the effects that training generates in the organisation as a result of the use of the skills that participants have acquired through training. There are two types of effects:
(1) qualitative, or not translatable into economic terms; and
(2) quantitative, and translatable into monetary value.

The impact assessment focuses on identifying the results and benefits that training brings to the organisation. We understand benefit to mean the increase in levels of usefulness or welfare associated with the increasing quantity of training acquired. The calculation of the benefits concentrates on measuring the effects of training by establishing impact indicators. An impact indicator is a unit of measurement used to identify the concrete and tangible effects of training in the organisation (qualitative and quantitative). These indicators make it possible to identify and monitor developments and to measure the actual impact that the training has generated in the organisation during a period of time.

Impact indicators may be expressed in various terms: as quantities (numbers of purchases or numbers of products), as indices (of quality or of satisfaction), as periods (of delivery or of service provision) and as effects (materials used, human resources involved, etc.).

There are two types of indicators:
(1) economic, or hard indicators; and
(2) qualitative, or soft indicators.

Their characteristics are substantially different, if not conflicting. Hard indicators are:
 easy to measure and quantify;
 easy to translate into monetary value;
 objective;
 common in corporate data;
 highly credible to management; and
 barely present in training.

Level 6: Profitability
The translation of training impact into economic terms enables a profitability index to be obtained, expressed by the return in monetary benefits generated by the investment made in training. Two procedures are followed for this purpose:
(1) calculation of the costs involved; and
(2) calculation of the profitability.

Calculating costs
Cost calculation is the first step towards undertaking a training impact assessment, and focuses on identifying the costs involved in the training processes carried out by an organisation. There are different types and classifications of costs. Those most commonly used in the field of training for organisations are as follows:
 direct costs – trainers, materials, spaces, per diems, etc.;
 indirect costs – management, design, administration, communication, additional materials, participants' salaries, etc.; and
 overheads – general services of the organisation, such as utilities, cleaning, depreciation, etc.

All these costs are generally classified into fixed and variable costs, a process that is very useful when preparing


the training budget, and also useful when calculating the and resources at hand, the time required and the
overall costs of various training activities. This calculation approximate costs involved. Ultimately, the evaluation
makes it possible to obtain the total costs and therefore the investment made in training, amounts to be used subsequently to calculate profitability. The calculation of costs is the simplest of the calculations involved in evaluating profitability, as it merely involves collecting the data available in the organisation – usually found in the budgets and economic information relating to training – and adding it together in the required categories.

CALCULATING PROFITABILITY
Once the impact in terms of benefits (evaluation Level 5) and the cost of training have been obtained, profitability can be determined. Two procedures may be highlighted here:
(1) the cost-benefit analysis; and
(2) return on investment.
Both aim to obtain a profitability figure and are therefore based on the costs and benefits involved in the training. The cost-benefit analysis seeks the net benefit of the training, for which purpose it compares the costs with the benefits using the following formula:

Net benefit = total benefits – total costs

However, the return on investment, explained at length by Phillips (1994), calculates profitability by indicating the net profit gained on the investment made, in other words, by looking for a profitability index. The formula applied is as follows:

ROI (%) = (net benefits / costs) × 100

As can be seen, both methods for calculating profitability are based on a comparison of costs and benefits, and although they follow different processes they aim to identify the profitability derived from the training activities conducted. This is a purely economic calculation, and therefore it leaves aside the qualitative impact, the importance of which was discussed above. Therefore, these results should be added to the non-economic results obtained from the benefits calculation. Nevertheless, the calculation of profitability alone is enormously helpful in making decisions about the levels of investment in training and provides data that are highly valued by the managing bodies of organisations.
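The two profitability calculations described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original article; the monetary figures are hypothetical.

```python
# Sketch of the two profitability calculations: the cost-benefit
# analysis (net benefit) and the Phillips-style return on investment.
# Figures are hypothetical, for illustration only.

def net_benefit(total_benefits: float, total_costs: float) -> float:
    """Cost-benefit analysis: total benefits minus total costs."""
    return total_benefits - total_costs

def roi_percent(total_benefits: float, total_costs: float) -> float:
    """Return on investment: net benefits over costs, as a percentage."""
    return net_benefit(total_benefits, total_costs) / total_costs * 100

# Example: a training programme costing 40,000 that yields
# 100,000 in measured benefits.
print(net_benefit(100_000, 40_000))   # 60000
print(roi_percent(100_000, 40_000))   # 150.0
```

Note that both functions take the same two inputs; ROI simply expresses the net benefit relative to the investment, which is why it is useful for comparing programmes of different sizes.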
Strategies to improve the evaluation of training in organisations
 Evaluation means comparing results with a previously defined reference. This reference is the situation that is expected to be achieved due to the training and is often expressed in the form of objectives. Therefore, the objectives must be well defined, observable and measurable.
 The evaluation plan should be based on a detailed analysis of the existing material and functional possibilities for its implementation. Thus, the available information should be considered, as well as the tools; the plan must be feasible and realistic.
 The evaluation plan must be accepted by everyone involved in the evaluation, from participants to managers. If the agents do not accept the proposed plan it will be a failure – participation in evaluation is a guarantee of success. Therefore, it is better to design a simple evaluation plan agreed upon by all than a complex plan that does not receive support from the organisation.
 Ensure that training is the main cause of the results obtained, and isolate the possible effects of other factors in the organisation. It is advisable not to be over-confident if good results are obtained, and to analyse the real contribution of training to that success.
 Do not attempt to evaluate everything, but aim systematically to evaluate training that is strategic for the organisation or that plays an important role.
 Distribute the evaluation results to the training clients and establish mechanisms for their use, in order to optimise the training and enhance its contribution towards obtaining the organisation’s goals. The distribution and use of the results is what really gives meaning and value to the evaluation of training, enhancing the active involvement of all concerned.

CONCLUSION
The evaluation of any training programme has certain aims to fulfil. These are concerned with determining the change in organisational behaviour and the change needed in the organisational structure. Hence the evaluation of any training programme must inform us whether the programme has been able to deliver its goals and objectives in terms of the costs incurred and the benefits achieved. The analysis of the information is the concluding part of any evaluation programme. The analysed data should be summarised and then compared with the data of other training programmes of a similar nature. On the basis of these comparisons, problems and strengths should be identified, which will help the trainer in future training programmes.

References
1. Barzucchetti, S. and Claude, J.F. (1995), Evaluation de la formation et performance de l’entreprise, Ed. Liaisons, Rueil-Malmaison.

2. ESADE (2005), Informe Cranfield ESADE. Gestión estratégica de Recursos Humanos, ESADE, ESADE.

3. Eurostat (2009), “Continuing vocational training survey (CVTS2)”, Population and Social Conditions 3/2000/E/No 17, available at: http://circa.europa.eu/Public/irc/dsis/edtcs/library?l=/public/continuing_vocational/eu_manual_pdf/_EN_1.0_&a=d (accessed 23 October 2009).

4. FTFE-D’ALEPH (2006), “La evaluación de la iniciativa de acciones de formación en las empresas. Iniciativa 2006”, internal document, FTFE, Madrid.

5. Holton, E.F. (1996), “The flawed four-level evaluation model”, Human Resource Development Quarterly, Vol. 7, pp. 5-21.

6. Kirkpatrick, D.L. (1998), Evaluating Training Programs, Berrett-Koehler, San Francisco, CA.

7. Kontoghiorghes, C. (2004), “Reconceptualizing the learning transfer conceptual framework: empirical validation of a new systemic model”, International Journal of Training and Development, Vol. 8, pp. 210-21.

8. Baldwin, T.T. and Ford, J.K. (1988), “Transfer of training: a review and directions for future research”, Personnel Psychology, Vol. 41, pp. 63-105.

9. Bernardin, H.J. and Russell, J.E.A. (1993), Human Resource Management: An Experiential Approach, McGraw-Hill, New York, NY.

10. Burrow, J. and Berardinelli, P. (2003), “Systematic performance improvement – refining the space between learning and results”, Journal of Workplace Learning, Vol. 15 No. 1, pp. 6-13.

11. Chmielewski, T.L. and Phillips, J.J. (2002), “Measuring return-on-investment in government: issues and procedures”, Public Personnel Management, Vol. 31 No. 2, pp. 225-237.