
EVALUATION OF THE EFFECTIVENESS OF USING
COMPUTER-BASED TRAINING SIMULATIONS TO
DEVELOP MANAGERIAL COMPETENCY

A thesis submitted in partial fulfilment for the degree
of Doctor of Business Administration

by

John Kenworthy

Henley Management College/Brunel University

September 2005
Abstract
Computer-based simulations and games are powerful tools to support learning
environments (Swanson and Holton, 1999) and Gartner research suggests that simulations
may be e-learning’s ‘killer application’ (Lundy et al., 2002). The multi-billion dollar
business and management training industry and management education are beginning to
turn more attention to using simulations and games but there are doubts about even the
most fundamental claims of the efficacy of simulations (Feinstein and Cannon, 2002).
This study tests a model comparing a training programme delivered through three different experiential activities, a simulation, a business game and case studies, using Kirkpatrick's (1959/60) familiar and ubiquitous (Russ-Eft and Preskill, 2001) four levels as a guiding model for evaluation. In particular, the study focuses attention on the development of managerial competencies and the differences in demonstrated competency before and after (May, 1993) a strategic management training programme (Baker et al., 1997).
The literature on management learning provides insights into the multi-disciplinary
nature of the research, highlighting the many factors considered to influence and shape the
way people learn and transfer their learning to the workplace. The literature includes
consideration of enjoyment of the learning event (Prensky, 2000, Schank, 1997) and
motivation (Holton, 1996), learning style (Kolb, 1976) and personality type (Patz, 1990,
1992), the potential effect of team working (Higgs, 1999) and personal background such
as age, gender, cultural heritage and prior academic achievement (Sternberg, 1997).
Managerial competency models are discussed and compared to establish the most fitting model for the location of the research and to provide a measure of change in behaviour and individual competencies; these are linked to organisational competence and performance, providing the links across the four levels of evaluation.
An appropriate research methodology is discussed, and the chosen quasi-experimental design fits both the scientific research tradition's demands for robust methodology and the pragmatic demands of research conducted in the business training world (Easterby-Smith et al., 1991). Operationalisation of suitable constructs and processes provides the empirical evidence repeatedly called for in previous studies.
Data was collected from 266 participants working for private companies in
Malaysia and Singapore. The results detected significant differences between both the simulation and game groups and the case study groups in reaction, learning, learning transfer and business impact and, as implied by Kirkpatrick (1959/60), significant correlations between the different levels are observed, but the strength of the relationships falls well short of sufficiently explaining the results. Both programmes including the simulation or the game show higher levels of learning, of change in demonstrated managerial competencies and of business performance than the case study based programme groups, providing strong evidence that simulations and games are effective tools to employ within
an experiential learning intervention. The continued importance of human tutors or
facilitators to provide useful feedback and debriefing from the activities is strongly
indicated.
From the literature, factors that may influence learning are considered, but the results failed to detect any effect of learning style or personality type. However, some influence of participant age was detected: younger managers, as suggested by Aldrich (2002), enjoyed the computer-based activities significantly more than older managers. The impact of team working, and enjoyment of team working within the activities, is discussed, suggesting that the composition of the team (Belbin et al., 1976, 1981), the task and the context, whether competitive or collaborative, may affect which competencies are developed. As suggested by previous research in this field (Wolfe and Guth, 1975, Keys and Wolfe, 1990, Brenenstuhl and Catalanello, 1979, Gopinath and Sawyer, 1999, Hannafin, 1992, Hannafin et al., 1996), the results provide empirical evidence that computer-based simulations and games are significantly more useful for learning and competency development than case studies.

Contents
CHAPTER 1 PURPOSE OF STUDY 10
CHAPTER 2 LITERATURE REVIEW 13
2.1 Overview 13
2.2 Management learning 15
2.2.1 Schools of thought in management learning 15
2.2.2 Evaluation models and taxonomies 24
2.2.3 Management learning and evaluation summary 38
2.3 Learning transfer 41
2.3.1 Managerial competency models 42
2.3.2 Learning transfer and competency frameworks summary 52
2.4 Business results – linking individual competency models to organisation outcomes 53
2.4.1 Business results summary 57
2.5 Personality and personal background 59
2.5.1 Learning styles 60
2.5.2 Factors that shape and influence learning 61
2.5.3 Personality and personal background summary 67
2.6 Previous research in simulations and games 69
2.6.1 Support for simulations and games 70
2.6.2 Criticisms of simulation and game research 71
2.6.3 Evaluation of simulations 72
2.6.4 Evaluation of simulations for learning outcomes 74
2.6.5 Previous research summary 76
2.7 Literature review summary 76
CHAPTER 3 RESEARCH QUESTIONS AND HYPOTHESES 78
CHAPTER 4 METHODOLOGY 82
4.1 Key choices in methodology 82
4.1.1 Independence of the researcher 83
4.1.2 Sample size 83
4.1.3 Theory testing or generation 84
4.1.4 Experimental or fieldwork design 84
4.1.5 Universality 85
4.1.6 Verification or falsification 86
4.1.7 Summary key choices 86
4.1.8 Scientific method – ideal but inherently complex 87
4.2 Research model 87
4.2.1 Validity, reliability and generalisability 88
4.2.2 Issues with experimental design 89
4.2.3 Learning evaluation design 90
4.2.4 Quasi-experimental design 90
4.2.5 Summary research model 93
CHAPTER 5 CONSTRUCTS AND PROGRAMMES INVESTIGATED 94
5.1 Operationalisation of constructs 94
5.1.1 Personality type 94
5.1.2 Preferred learning style 95
5.1.3 Position in the organisation 95
5.1.4 Cultural heritage 96
5.1.5 Managerial competencies 96
5.1.6 Boss's performance rating 97
5.1.7 Reaction to the programme 98
5.1.8 Learning 98
5.1.9 Motivation to learn and transfer learning and transfer climate 98
5.2 Programmes investigated 99
5.3 Evidence collection 102
5.3.1 Procedures 102
5.4 Data analysis strategy 106
5.4.1 Statistical procedures to be used in the research 106
CHAPTER 6 RESULTS AND ANALYSIS 109
6.1 Effectiveness of simulations and games 111
6.1.1 Correlation on Kirkpatrick’s levels of evaluation 117
6.2 Effect of learning styles 128
6.3 Effect of teams and groups 134
6.4 Effect of demographics 138
CHAPTER 7 DISCUSSION 149
7.1 Analysis summary 149
7.2 Limitations 154
7.3 Directions for further research 156
7.4 Practitioner guide 158
CHAPTER 8 CONCLUSIONS 160
8.1 Summary of key findings and conclusions 160
8.2 Personal learning reflection 162
BIBLIOGRAPHY 166
Appendix 1 - Strategy Programme Overview 183
Appendix 2 – Online Version of Kolb’s LSI III 184
Appendix 3 – Performance Rating Scale 188
Appendix 4 – Simulation and Game Overviews 189
Appendix 5 – MCQ Competency Descriptors 191
Appendix 6 – Learning Scale 193
Appendix 7 – Example Report 194
Appendix 8 – Reaction Form 197

List of Tables
Table 1. Fourteen schools of thought about learning: Essence and use - Adapted from
Burgoyne (2002) .................................................................................................................15
Table 2. Five key schools of learning - adapted from Houldsworth (2004) after Burgoyne
(2002) ..................................................................................................................................17
Table 3. Four learning modes - Kolb (1976) .....................................................................22
Table 4. Four general purposes of evaluation (Easterby-Smith, 1994) ..............................25
Table 5. Russ-Eft & Preskill (2001) - Reasons to evaluate ................................................25
Table 6. Kirkpatrick's (1959/60, 1994) four levels of evaluation .......................................30
Table 7. Four stages of performance intervention (Brinkerhoff, 1987)..............................31
Table 8. Six stages of evaluation (Brinkerhoff, 1987)........................................................32
Table 9. Reasons that training is not evaluated with financial analyses (Mosier, 1990) ....39
Table 10. JCS competencies (Dulewicz and Herbert, 1992) ..............................................46
Table 11. Top Ten competencies in a KBE - Singapore HRD perspective (Lee Mei Ching
et al., 2002) .........................................................................................................................48
Table 12. A study of the attributes of managerial effectiveness in Singapore - competency
model - (Kenworthy and Wong, 2003) ...............................................................................49
Table 13. Managerial competencies (Spencer and Spencer, 1993) ....................................50
Table 14. Comparing MCQ (Spencer and Spencer, 1993), JCS (Dulewicz and Herbert,
1992), Lee et al. (2002), Kenworthy and Wong, (2003).....................................................51
Table 15. Kolb's four different learning styles (Johnson and Stratton, 1978) ....................60
Table 16. Possible learning outcomes for simulations (adapted from Anderson and
Lawton, 1997) .....................................................................................................................75
Table 17. Summary research questions and hypotheses .....................................................81
Table 18. Key choices of research design (Easterby-Smith et al, 1991) ............................83
Table 19. MBTI dimensions (Myers and Myers 1980) ......................................................94
Table 20. Cronbach alpha reliability analysis on MCQ......................................................97
Table 21. Programme outcomes..........................................................................................99
Table 22. Linking MCQ to training programme...............................................................101
Table 23. Participant breakdown statistics in each group.................................................110
Table 24. Differences MCQ pre to post test t-test summary ............................................112
Table 25. Discriminant analysis Sim and Game group against Case Study MCQ ...........113
Table 26. Summary t test reaction ....................................................................................114
Table 27. ANOVA Reaction data by activity type ...........................................................115
Table 28. Summary t test learning ....................................................................................115

Table 29. Summary t tests MCQ differences....................................................................116
Table 30. Summary t test performance change.................................................................117
Table 31. Learning correlation with participant reaction..................................................118
Table 32. Correlation reaction to learning ........................................................................118
Table 33. Factor analysis reaction data.............................................................................119
Table 34. MCQ difference correlation with learning........................................................121
Table 35. MCQ difference correlation with activity reaction...........................................122
Table 36. Multiple Regression MCQ Differences, all groups and each Activity type .....123
Table 37. MCQ difference correlation with performance change ....................................124
Table 38. Correlation boss's performance rating change with boss's rating of change in
MCQ .................................................................................................................................125
Table 39. Correlation participant reaction to activity and performance change...............125
Table 40. Multiple Regression - dependent boss performance rating...............................126
Table 41. Summary ANOVA table Kolb LSI preference on reaction, learning and learning
transfer between groups ....................................................................................................129
Table 42. Correlation teamwork with MCQ and learning ................................................136
Table 43. Age and activity type ANOVA.........................................................................141
Table 44. Summary Research Questions and Hypotheses and Findings ..........................149

List of Figures
Figure 1. Kirkpatrick (1994) Four levels of evaluation and literature review overview ....14
Figure 2. Circles of learning - adapted after Mabey, Topham & Roland Kaye (1998),
Binsted (1988) and, Houldsworth (1994, 2004) .................................................................18
Figure 3. Kolb's Experiential Learning Cycle (1976).........................................................22
Figure 4. Models and 'schools of thought' in evaluation (Easterby-Smith, 1986) ..............27
Figure 5. 'Chain of consequences' for a training event (Hamblin, 1974)...........................35
Figure 6. Evaluation of outcomes and link to decision making (adapted after Burgoyne
and Singh, 1977) .................................................................................................................36
Figure 7. Use of evaluation style matrix (Easterby-Smith 1994) .......................................40
Figure 8. Hierarchical model of competence (Baker et al., 1997) ......................................54
Figure 9. Model of professional competence (Cheetham and Chivers, 1996)....................56
Figure 10. Individual variables of competency, competence and performance and
organisation core competence (adapted from Young, 2002) ..............................................57
Figure 11. Combining LSI and MBTI dimensions (Kolb et al., 2000)...............................62
Figure 12. Three faces of simulation evaluation (adapted from Anderson, Cannon, Malik,
and Thavikulwat, 1998) ......................................................................................................74
Figure 13. Research model .................................................................................................88
Figure 14. Research design .................................................................................................90
Figure 15. Mean Differences in Competency ...................................................................111
Figure 16. Correlation enjoyment and usefulness reaction on activity to learning...........120
Figure 17. Converging and other LSI ANOVA MCQ difference ....................................130
Figure 18. Converging and other MBTI LSI ANOVA MCQ difference .........................131
Figure 19. MBTI Learning style and enjoyment/usefulness of activity ...........................133
Figure 20. ANOVA MCQ and learning by activity type..................................................135
Figure 21. Enjoyment of Simulation or Game by age group ............................................139
Figure 22. Usefulness of Simulation or Game by age group ............................................139
Figure 23. Enjoyment of Case Study by age group ..........................................................140
Figure 24. Age and activity type ANOVA charts.............................................................142
Figure 25. Gender and achievement orientation ...............................................................143
Figure 26. Pre MCQ managers and senior managers........................................................144
Figure 27. Summary ANOVA MCQ differences on prior academic attainment..............145
Figure 28. ANOVA Learning difference Asian and Western heritage.............................146
Figure 29. ANOVA Simulation and Game Group - learning and developing others -
Asian-Western difference .................................................................................................147
Figure 30. ANOVA Cultural heritage and position, pre and post test MCQ....................148

Acknowledgements
I would like to thank the following people for making the completion of this draft thesis
possible for me:
Malcolm Higgs, for being the most encouraging, if sometimes cryptic, supervisor, always
willing to take the time to share his honest academic advice and occasional steers with me.

Vic Dulewicz, for his supporting supervision and steering me to fill the gaps and for
laying the groundwork that a lot of this thesis is based on.

David Price, who first allowed me to believe that I might be able to do this, encouraged
me and took a risk in supporting my studies.

Keith Gay for patient, kind guidance in my development through stage 1.

Liz Houldsworth for kindly reading an early draft and her feedback and direction.

ABSEL members, and in particular Jerry Gosen, Joe Wolfe and Andy Feinstein for their
guidance and great insights into this research.

My clients who were kind enough to support this research and in particular, Kamal, Thiru,
Azman, Mei-Woo, Sureish, David, Eng-Kiat and Chris.

My father, Wesley Kenworthy, for his encouragement and enthusiasm for my personal
development and for his undying love and giving me the space to be who I want to be.

Annie, for her love, patience throughout the many lost weekends and her support both
personal and professional that has given me the opportunity and space to do this.

Chapter 1 Purpose of Study
The use of computer-based simulations and games has received more attention recently, both for their increasingly sophisticated design and for their promotion of participant interest (Mitchell, 2004). However, one of the major problems, according to Hays and
Singer (1989), is how to evaluate the training effectiveness of simulations (Feinstein and
Cannon, 2002). Although for more than 40 years, researchers have lauded the benefits of
simulation (Wolfe and Crookall, 1998), very few of these claims are supported with
substantial research (Miles et al., 1986, Butler et al., 1988).
Many authors attribute the lack of progress in simulation evaluation to poorly
designed studies and the difficulties inherent in creating an acceptable methodology of
evaluation. Hence, this study will provide a benchmark and model to evaluate the
effectiveness of the use of computer-based simulations in management development. The
research compares the use of two different types of simulation and a case study approach
in a quasi-experimental design assessing participant enjoyment, learning and behaviour
change in the workplace following a development programme intervention.
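To make the design concrete, the following is a minimal sketch, using synthetic data and illustrative variable names (none of them the study's own), of the pre/post tests on which such a quasi-experimental comparison typically rests: a paired t-test for within-group change, and an independent t-test on gain scores for the between-group question.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre- and post-programme competency scores for a
# simulation group and a case-study group (40 participants each).
sim_pre = rng.normal(3.2, 0.5, 40)
sim_post = sim_pre + rng.normal(0.4, 0.3, 40)    # assumed larger gain
case_pre = rng.normal(3.2, 0.5, 40)
case_post = case_pre + rng.normal(0.1, 0.3, 40)  # assumed smaller gain

# Within-group change: paired t-test of post against pre scores.
t_sim, p_sim = stats.ttest_rel(sim_post, sim_pre)
t_case, p_case = stats.ttest_rel(case_post, case_pre)

# Between-group question: did one group gain more than the other?
t_gain, p_gain = stats.ttest_ind(sim_post - sim_pre, case_post - case_pre)

print(f"simulation change:  t={t_sim:.2f}, p={p_sim:.4f}")
print(f"case-study change:  t={t_case:.2f}, p={p_case:.4f}")
print(f"difference in gain: t={t_gain:.2f}, p={p_gain:.4f}")
```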
This research is designed to add to our understanding of experiential pedagogies
and the evaluation of development interventions and hence the value of using specific
delivery methods in management education and development to the business. It is this area
in particular that is noted by many authors as lacking (Barnett, 1984, Anderson and
Lawton, 1997b, Teach and Govahi, 1988, Bedingham, 1997, Bee and Bee, 1994, Gopher
et al., 1994, Phillips, 1998, Schank, 2002, Wolfe, 1990).
In particular, this research considers the development of managerial competencies
using computer-based business training simulations, as demonstrated learning transfer
following the ubiquitous Kirkpatrick levels of training evaluation. The research includes
consideration of each individual’s preferred learning style (Kolb, 1984, Honey and
Mumford, 1982) to understand if this affects participant enjoyment, learning and
behaviour change from a particular method of development intervention. The research will
also consider each participant's age and level of formal qualification to assess if there is a
trend, as suggested by Aldrich (2002, 2005), that younger managers prefer and benefit
more from computer-based, immersive training methods.
It is important to clarify here, what the author means by simulations, as these come
in many guises and the term is used ubiquitously in the field of education and management
learning.

Kemmis et al. (1977) and MacDonald et al. (1977) usefully categorise computer
assisted learning, describing four educational paradigms.
1. Instructional: Directed or programmed learning. The drill and practice of content
acquisition (after Tolman, 1932, Skinner, 1950).
2. Revelatory: Discovery learning. Student exploration within parameters (after
Bruner, 1973, Ausubel, 1978).
3. Conjectural: Social or constructivist learning. Modelling that allows full
manipulation and testing of ideas and hypotheses (after Vygotsky, 1978, Kolb,
1984).
4. Emancipatory: Computer as a labour saving device. This cuts across the three
paradigms above and deals with the degree to which student labour is authentic
rather than inauthentic (Kemmis et al., 1977).
Drawing also on Barr and Tagg (1995), management training simulations and
games are classic examples of the Revelatory paradigm. MacDonald et al. (1977) specify that, within a simulation, the learner may only tinker with the parameters, not the central working of the system.
Simulations and games are increasingly being introduced into educational
programmes and business training (Thompson et al., 2002, Sugrue and Kim, 2004, Lane,
1995), and for over 40 years, researchers have praised the benefits of simulations (Keys
and Wolfe, 1990), but very few of these claims are supported with substantial research
regarding the learning benefits of the technique (Brenenstuhl and Catalanello, 1977,
Hannafin et al., 1996, Gopinath and Sawyer, 1999).
An article in The Wall Street Journal (Totty, 2005, Page R6), states that companies
in the U.S. “spend about $60 billion a year on training their employees, but there's a good
chance much of that is wasted”. Totty goes on to cite several examples of major US
corporations using computer gaming in training employees effectively. According to
Gartner Research, simulations may be the ‘killer application’ for e-learning (Lundy et al.,
2002) and, in 2005, Gartner estimated that annual spending on training worldwide was over $100 billion, with e-learning content accounting for only a little of that. When it comes to
potential growth in the e-learning content market, Gartner’s Lundy suggests that much of
the expected growth for e-learning will be driven by simulations (Boehle, 2005). Trade
journals are replete with articles extolling the benefits of simulation-based learning and how it brings e-learning to a new level, but there is doubt in the minds of organisation executives about using simulations. The idea of playing games at work, let alone with interactive cartoon-like characters, does not sit well with many, and branching-video and game-style simulations are likely to encounter resistance from companies that question whether
business soft skills can be taught on a computer. Summers (2004), drawing from a
number of sources, estimates the size of the worldwide market for business simulations
between $623 and $712 million. Estimating the number of business simulations in use by
companies and academia is particularly difficult as there is little consistency in definition
of business simulations, though Faria and Wellington (2004) report a thorough analysis of
the academic market showing more than 52% of respondents had used a business game
with expectations that use would increase. The ASTD State of the Industry Reports
(Thompson et al., 2002, Sugrue and Kim, 2004) support the figures and show that these
are all very positive trends for simulations, but realistically, the current use of business
simulations represents a tiny proportion of overall spending on corporate training, perhaps no more than 1%.
Part of the problem is that nobody has shown definitively that simulation training
works in the business world. In gauging the impact of e-learning initiatives on sales,
customer satisfaction, or overall company performance, training departments do not
isolate simulation from other forms of online content, such as workbooks and lectures.
Compounding a dearth of empirical evidence is a mindset at many companies that
dismisses video minidramas, flash animation, and virtual characters as manifestations of
pop culture, unsuitable for serious business instruction (Davies, 2003).
However, the basic idea of simulation - the more realistic the computer experience,
the more engaged the mind becomes, accelerating learning and retention - remains
compelling. What the business world needs is compelling evidence that simulations and
games are useful and effective – the purpose of this study is to go some way towards
providing such evidence. This will be achieved through a review of the multi-disciplinary
literature on management learning and competency development and how it may be
evaluated, the factors that are thought to shape the way in which people learn and transfer
their learning to the workplace, the appropriate measures for the business world in the
form of demonstrated managerial competency and effectiveness and lastly a review on
previous research in this field. An appropriate research methodology is discussed that will
provide the empirical evidence demanded by business and academics and a suitable quasi-
experimental design model is detailed. The thesis continues to outline the processes for
evidence collection and the strategy for analysing the data. The results of the study answer
the research questions and hypotheses relating to the effectiveness of business simulations
in management learning put forward from the literature and the thesis concludes with the
key findings and a discussion of the implications for practitioners.
Chapter 2 Literature Review

2.1 Overview
There is an increasing drive amongst professional training organisations (for
example, the American Society for Training and Development, ASTD) for better
evaluation of training and development intervention (Thompson et al., 2002, Sugrue and
Kim, 2004). The basic principle driving this is for training to demonstrate its worth to
organisations – whether this be its attributable Return on Investment (ROI) or its value in
improving performance of individuals (such as gains in productivity or reduced accidents)
or of the organisation (such as more efficient use of resources or demonstrable
improvement in quality). Essentially, training and development costs time and money and
needs to be shown to be worthwhile. The trend to evaluate the business impact of training
and development programmes continues as increasing numbers of organisations
worldwide undertake these evaluations (Phillips, 1999). Senior managers increasingly
want to see the economic contributions - including ROI - that training and development
programmes bring to their organisations. Interest in measurement and evaluation of
training is spreading globally (Phillips, 1997), increasingly so as organisations implement e-learning technologies to deliver or support training in place of more traditional classroom-based instruction programmes (Mantyla and Woods, 2001, Schank,
2002).
Most organisations recognise the need and value of evaluating performance
interventions though few undertake anything but basic training intervention evaluations
(Russell, 1999). There is an increasing trend, particularly in the U.S.A., to measure the
ROI for training and development (Phillips, 1998) whilst Reingold (1999) notes that the
average U.S. company spent some $10 million on internal and external executive
development in 1998 and that spending on U.S. corporate training and education for
managers rose to $16.5 billion. Gartner research in 2005 puts the total global spend on
training at $100 billion (Boehle, 2005). In the meantime, there is a transformation in the
ways in which training and development is being delivered through e-learning, the Internet
and computer-based simulations. Aldrich (2002) suggests that the next generation of
learners, those aged 30 and below, having grown up with computer games, now expect to
be engaged on multiple levels simultaneously by fast-feedback, graphical, high-simulation, immersive, user-centric learning environments. Yet, there is little empirical

research (Burns et al., 1990) that supports the notion of training effectiveness. Although
people recognise the need to evaluate effectiveness, few do so (Phillips, 1998).
Kirkpatrick’s (1959/60, 1994) four-level model is the most familiar and frequently used in business training for evaluating training effectiveness and, although dated, the model has still not been superseded. The literature review broadly follows these four levels. It starts with a review of the study of management learning and how it is evaluated, using Kirkpatrick’s and other evaluation models. Reflecting the multi-disciplinary approach of the research study, four further main sections follow: managerial competency models; holistic business competence models, linking individual competency to organisational outcomes; personality and personal background, which cuts across all levels; and lastly a review of previous research in this particular field (Figure 1):

Figure 1. Kirkpatrick (1994) Four levels of evaluation and literature review overview
[Diagram: the four levels (Reaction, Learning, Behaviour change, Business Impact) mapped to the review topics: evaluation models and taxonomies; management learning; behaviour change and competency models; holistic models of competence and business impact; with personality and background, and previous research in simulations and games, cutting across all levels.]
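As a minimal sketch only, with wholly illustrative data and field names, the structure below shows how measures at the four levels can be held per participant and how adjacent levels can be correlated, as later chapters do when testing the links between the levels.

```python
from dataclasses import dataclass
import numpy as np
from scipy import stats

@dataclass
class Evaluation:
    reaction: float   # Level 1: reaction to the programme
    learning: float   # Level 2: measured learning
    behaviour: float  # Level 3: change in demonstrated competency
    results: float    # Level 4: business impact / performance rating

rng = np.random.default_rng(1)
records = [Evaluation(*rng.uniform(1.0, 5.0, 4)) for _ in range(50)]

# Correlate adjacent levels, e.g. reaction (level 1) with learning (level 2).
r, p = stats.pearsonr([e.reaction for e in records],
                      [e.learning for e in records])
print(f"reaction-learning: r={r:.2f}, p={p:.3f}")
```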

Commencing with the literature on management learning positions this research study and allows the researcher to bring the multi-disciplinary threads together through the evaluation of management development. This highlights pertinent and important aspects of related disciplines, the factors that are considered to shape the way people learn, and how a researcher might operationalise useful constructs to better understand and observe the effectiveness of one training intervention over others, so achieving the purpose of the study.
Appropriate examples from research studies in the simulation and gaming literature

illustrate connections from the broader fields of management learning, personality,
background and teamwork, and this review’s final section deals explicitly with prior
research in the use of simulations and games to develop managers. From this final section,
the researcher can recognise the limitations and issues faced by others and heed the calls
for further research to ensure that this study is a significant contribution to our
understanding and knowledge.

2.2 Management learning


Learning theories abound, each showing subtle and dramatic differences; combined with the vast variety of methods in teaching, training and learning facilitation (Huczynski, 2001), there is little doubt that the analysis of management learning remains a challenge. Perhaps due to the variety of models and approaches available, there seems to be no single, universally accepted model of what constitutes management, let alone management learning, nor of how to analyse it (Ivancevich and Matteson, 1996).
Winterton and Winterton (1999) attempt to distil the meaning of management
development through contrast of the most prevalent theories and concepts. The resulting
confusion of what management development is, or is not, highlights the complexity of the
field and suggests that an attempt to address the whole of management development, or
management learning, may add to the confusion rather than provide a pragmatic model to
fit most situations. As such, the aim of this section of the literature review is to provide a
map of the theoretical field and focus on the (relatively) small part of the field as it can be
applied to this research.
Firstly, this section will outline the main schools of thought in management
learning and associated approaches to management development, then review evaluation
of management development and the dominant approaches.

2.2.1 Schools of thought in management learning


Describing fourteen schools of learning, Burgoyne (2002) provides a map of the
alternative theoretical perspectives on management learning. Table 1 below shows
these fourteen different theories in management learning and the different essence of how
learning may be facilitated, what type of learning is likely to take place and a simple
example of a learning situation that fits within the theory:

Table 1. Fourteen schools of thought about learning: Essence and use - Adapted from Burgoyne (2002)
1. Conditioning and the connectionist approach. View of self: Mechanical. Essence: clarity about desired behaviours, repetitive practice. Learning of: simple or repetitive skills. Example: dog training.
2. The trait modification view. View of self: Specification. Essence: profile of knowledge and behaviours, identifying those capable of being influenced and designing the programme accordingly. Learning of: complex knowledge and behaviours. Example: competency development programme.
3. The information transfer approach. View of self: Recorder. Essence: effective communication of information in an organised fashion. Learning of: knowledge and procedures. Example: legal profession.
4. The cognitive school. View of self: Knowing. Essence: help learners hone their own mental models. Learning of: personalised learning in complexity. Example: research skills.
5. The systems theory approach. View of self: Discovery. Essence: learning is contextual, adapting to the environment and motivated by survival (success). Learning of: contextual or situational learning. Example: management simulations.
6. The humanistic and existential approach. View of self: Essential. Essence: recognise individual complexity, wholeness and own mental models. Learning of: individual learning of complex real-life existence. Example: NLP training.
7. Social learning theory. View of self: Identity. Essence: creation of the process by which learners develop identity in doing or being, in their own eyes and others' in the social environment. Learning of: developing identity. Example: leadership development.
8. Psychodynamics and related approaches. View of self: Mystical. Essence: realigning the dynamic balance of the unconscious and conscious mind. Learning of: learning through interactions between conscious and unconscious. Example: psychotherapy training and use.
9. Post-modernism. View of self: Decentred and fragmented. Essence: help learners become comfortable with natural multiple identities, directions and lack of clarity. Learning of: complexity and acceptance of self. Example: life coaching.
10. Situated learning theory. View of self: Communal. Essence: collective and informal learning in situations. Learning of: learning that takes place in the situation. Example: apprenticeship.
11. Post-structuralism. View of self: 'Vacant'. Essence: learners are not fixed beings; learning can happen in many ways. Learning of: myriad ways of learning. Example: self-motivated learning.
12. Activity theory. View of self: Contextualised. Essence: activity or task based learning in context with personal and available resources. Learning of: particular tasks or activities. Example: pilot training.
13. Actor network theory. View of self: Co-evolving. Essence: learners are part of an integrated system-wide context including tools and materials. Learning of: learning that evolves as the whole system. Example: all encompassing.
14. Critical realism. View of self: Hermeneutic. Essence: help learners become action researchers, continuously learning and adapting. Learning of: reflection in context and adapting to new situations. Example: action research.

Since the aim of this research is to evaluate the learning and behaviour change in individuals (their competency as a manager) as a result of a computer-based simulation training event, considering Burgoyne's fourteen schools, this research falls less than neatly into two particular schools of thought, namely the trait modification view and the systems theory approach. Elements of other schools of thought partially fit, for example actor network theory and the information transfer approach, suggesting that the complexity of learning and of any particular intervention is unlikely to fit neatly into one school of thought. In comparing a management simulation, Strategy CoPilot® (Imparta, 2003), a management game, Strategy at the Edge (CELSIM, 2003), and case study programmes, this research may be considered to be comparing the trait modification, systems theory and information transfer schools of thought respectively. Though, as Burgoyne suggests, there are many different ways the arbitrary distinction between one theory and another may be drawn, the framework begins to provide a useful way to consider what is being developed in an intervention, and how we might consider the learning entities in order to most suitably develop the person and/or their capabilities and/or knowledge.
Houldsworth (2004) uses Burgoyne's schools of thought but usefully distils them, suggesting five key schools of management learning within organisations today, recognised as the longest established: conditioning theories; cognitive theories; experiential approaches; information transfer approaches; and the trait modification approach. Table 2 shows these five schools, their basis and learning principles, together with examples of common usage:
Table 2. Five key schools of learning - adapted from Houldsworth (2004) after Burgoyne (2002)
1. Conditioning theories. Basis: stimulus/response view of learning (Burgoyne, 2002). Learning principles: reinforcement, practice and feedback. Example common use: rote learning (times tables, verbs); early school learning.
2. Cognitive theories. Basis: the learner builds complex maps, assimilating or accommodating knowledge (Burgoyne and Stewart, 1977). Learning principles: holistic learning, personal and subjective; practical problem solving. Example common use: 'the school of hard knocks'; learning on the job.
3. Experiential approaches. Basis: learning as a natural process of growth (Knowles et al., 1998). Learning principles: sharing experience, practical problem solving around real-life issues (Revans, 1971, 1980). Example common use: syndicate method, action learning, group work and formalised on-the-job learning.
4. Information transfer. Basis: the product of learning as stored information. Learning principles: transmission, organisation, storage and retrieval (Burgoyne and Stewart, 1977). Example common use: subject and tutor focus, lengthy reading lists; MBA programmes and much e-learning.
5. Trait modification. Basis: learning causes change in a profile of characteristics. Learning principles: knowledge, skills and attributes as traits (Burgoyne, 2002) are learned through a cycle of learning (Kolb, 1976, Honey and Mumford, 1992). Example common use: competency development training; blended learning including role play or simulations.

The distinctions between the approaches build upon each other, taking elements
that are understood to be most useful and incorporating these with new dimensions of
understanding about human beings and learning.

Symons (1996) provides a simpler distinction whilst seeking to provide an
underpinning understanding of the key theories and issues for those designing and
delivering management development programmes. He develops characteristic types of
programmes in business management, Type A and Type B.
Type A programmes are more traditional in nature, characterised by a teacher-centric and analytical approach, similar to Houldsworth's (2004) later grouping of the conditioning, cognitive and information transfer approaches. Type B programmes, however, are learner centred and experiential in nature, corresponding to the experiential and trait modification approaches grouped above.
This classification may be too simplistic though Symons (1996) rightly suggests
that there is a steady move within the management development community towards more
experiential, or Type B programmes, reflecting a move away from Taylor’s (Scientific
Management) division between “thinkers” and “doers” and the need for organisations to
tap individuals at all levels. This move to more experiential approaches is borne out in the
ASTD State of the Industry reports (Thompson et al., 2002, Sugrue and Kim, 2004).
These schools of thought on management learning and programme types provide a
framework from which to work but do not provide one that greatly assists the positioning
of this research. Earlier work by researchers in the field of computer-assisted learning
(Binsted, 1988, Topham, 1990, Houldsworth, 1994) condenses learning approaches into
three circles of learning. Building on the work of Mabey et al. (1998), the author has
added the trait modification approach – within which the simulation and management
game of this research as part of a programme would then be placed (Figure 2).

Figure 2. Circles of learning - adapted after Mabey, Topham & Roland Kaye (1998), Binsted (1988) and Houldsworth (1994, 2004)
[Diagram: circles of learning plotted against learner autonomy over process (horizontal axis, low to high) and learner autonomy over content (vertical axis, low to high). Conditioning, including information transfer approaches (basic data, information and response skills), sits at low autonomy; the cognitive circle (situation-specific skills) and the experiential circle (personal growth) sit at progressively higher autonomy; the added trait modification arc overlaps the experiential circle at the highest autonomy over content.]

Figure 2 above shows the trait modification approach placed where the learner has greater autonomy over the content of learning, as one would anticipate that learners would know their trait profile (Burgoyne, 2002) and the desired profile, and hence what content they need. The trait modification approach includes elements from the cognitive,
conditioning and experiential schools, with greater emphasis from the latter, and these are
reviewed in the following paragraphs to contextualise how the development programmes
under consideration in this research fit.

Conditioning theories
These are based on a stimulus/response view of learning. Practice, feedback and
reinforcement are the main learning principles and considerable empirical research has
been undertaken (Burgoyne, 2002). Although frequently criticised as being mechanistic in
nature, there are some areas of training where these principles are appropriate, where a Pavlovian response to a situation may be the most beneficial, for example in fire safety or emergency evacuation. It may not, however, be an appropriate approach to management development (Houldsworth, 2004).

Information transfer
Under this proposed model, information transfer approaches would be a sub-set of
conditioning approaches having their roots in Greek monologue with a curriculum defined
by research and tradition (Symons, 1996). Here, the teachers are subject experts
transmitting information and the product of learning as stored information is tested by
examination (Houldsworth, 2004). This type of learning is often regarded as ‘classroom learning’, though increasingly this approach has driven the migration of much learning content onto technology-based delivery systems, commonly referred to as ‘e-learning’; this is probably the reason that e-learning is the subject of so much criticism in spite of the promise of anytime, anywhere accessibility.
This type of learning has little to do with the individual self or the development of
managers unlike the more experiential models or action learning models where managers
take responsibility for their own learning. Symons (1996) however, suggests that transfer
of information is a necessary precursor to other models which makes intuitive sense in that
in order to use information or knowledge, it is necessary to have acquired such
beforehand.

Cognitive theories
In reaction to the inability of the behaviourism of the conditioning theories to
adequately account for much of human activity, cognitive theories arose from the concern
that the stimulus/response link was not simple or straightforward (Winn and Snyder,
1996). Here, the learner builds up complex cognitive maps of the world, which can be
assimilated or modified or accommodated according to the appropriateness of fit and this
is discovered through a process of experimentation. The more traditional origins of
cognitive psychology attach great importance to the use of psychometrics and the pivotal
position of learning-styles theory (Burgoyne and Reynolds, 1997b). Unlike information
transfer, cognition is holistic in approach in the Gestalt tradition (Symons, 1996) and
knowledge here is considered personal and subjective where learners not only accumulate
facts and data, but also the representation of patterns and relationships among and between
them (Burgoyne and Stewart, 1977).
Cognitive learning theories include three sub-categories of learning outcomes:
Declarative, Procedural and Strategic Knowledge.

Declarative knowledge
The learner is typically required to reproduce or recognise some item of
information. For example, White (1984) demonstrated that students who played a
computer game focusing on Newtonian principles were able to more accurately answer
questions on motion and force than those who did not play the game.

Procedural knowledge
This requires a demonstration of the ability to apply knowledge, general rules, or
skills to a specific case. For example, Whitehall and McDonald (1993) found that students
using a variable payoff electronics game achieved higher scores on electronics
troubleshooting tasks than students who received standard drill and practice.

Strategic knowledge
This requires the application of learned principles to different contexts or deriving
new principles for general or novel situations – referred to by others as constructivist
learning (see Dede, 1997). Wood and Stewart (1987) for example, found that the use of a
computer game to improve practical reasoning skills of students led to improvements in
critical thinking.

Experiential approaches
Such approaches are most often selected for management development (Sugrue
and Kim, 2004, Thompson et al., 2002). Here, knowledge derives from concrete
experience and each individual has the freedom of choice and the capacity to initiate
action rather than just respond to circumstances (Symons, 1996). It is seen as a natural
learning process of growth rather than a teacher led activity. Shared experiences within
groups or syndicates are strongly emphasised over content.
There are three main ideas associated with experiential approaches relevant to this
study: andragogy; action learning and the learning cycle (and associated learning styles).

Andragogy
Knowles (1970, 1996, Knowles et al., 1998) popularised the notion that adults are
considered to learn differently from children. The pedagogical model, designed for
teaching children, places the teacher as responsible for all decision-making regarding
learning content, delivery, timing and evaluation. The andragogical model focuses on the
education of adults and is based on the precepts that adults need to know why they should
learn something, that adults maintain responsibility for their own decisions and enter
educational activity with a greater volume and more varied experiences than children, that
adults have a readiness to learn those things that they need to know in order to cope
effectively with real-life situations, and adults are more responsive to internal motivators
than external motivators. Burgoyne and Reynolds (1997a) consider that management
learning theory has been strongly influenced by this humanistic ethic of adult education.

Action learning
Revans’ (1980, 1983) model of action learning shows learning reduced to a
combination of programmed knowledge and questioning insight and links a cognitive
philosophy with an experiential method. The basis of action learning uses the manager’s
experience of the past and uncertainties of the future, refining the mental map together
with a group of ‘fellows’ in a ‘set’ through tackling a particular practical problem.
The expected outcomes of successful action learning are a change in behaviour
following an exchange of ideas with others leading to a personal action plan adapted to the
environment. Inherent within this is the belief that this will be for the better (McLaughlin
and Thorpe, 1993).

Learning cycle
A popular model of experiential learning is Kolb’s (1984) learning cycle. Here,
learning is seen as a four-step process that Kolb identifies as: 1) watching, 2) thinking, 3)
feeling (emotion), and 4) doing (muscle).
Kolb defined learning as “… the process whereby knowledge is created through
the transformation of experience.” (Kolb, 1984, p41). He drew from three key sources: Dewey (1938), who emphasised that learning needed to be grounded in experience; Lewin (1951), who stressed the importance of being active in the learning; and Piaget, who considered that intelligence was the result of a person’s interaction with the environment.
Learning, in Kolb’s model shown in Figure 3 below, involves four stages: the immediate concrete experience of an event results in observations that are assimilated into theory, from which the individual deduces new implications for future action.

Figure 3. Kolb's Experiential Learning Cycle (1976)
[Diagram: a four-stage cycle running from Concrete Experience, through Observation & Reflection (Reflective Observation), to the Formation of abstract concepts & generalisations (Abstract Conceptualisation), to Testing implications of concepts in new situations (Active Experimentation), and back to Concrete Experience.]

The assumption underlying Kolb’s (1976) model is that in order to be effective, the
learner needs four different kinds of abilities or basic learning modes as shown in Table 3:

Table 3. Four learning modes - Kolb (1976)
1. Concrete Experience (CE) – able to involve fully, openly, and without bias in new experiences.
2. Reflective Observation (RO) – able to reflect on and observe these experiences from many perspectives.
3. Abstract Conceptualisation (AC) – able to create concepts that integrate observation into logically sound theories.
4. Active Experimentation (AE) – able to use these theories to make decisions and solve problems.
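As a minimal sketch of the standard LSI scoring convention (not the published instrument itself, which uses normed cut points rather than the zero cut-off assumed here), the four mode scores combine into two bipolar dimensions, AC-CE and AE-RO, whose signs place a learner in one of four style quadrants:

```python
def kolb_style(ce: float, ro: float, ac: float, ae: float) -> str:
    """Classify a learner from the four LSI mode scores.

    Positive AC-CE indicates an abstract preference; positive AE-RO
    indicates an active preference. Zero cut-offs are a simplification.
    """
    abstract = ac - ce   # abstract conceptualisation vs concrete experience
    active = ae - ro     # active experimentation vs reflective observation
    if abstract > 0:
        return "Converger" if active > 0 else "Assimilator"
    return "Accommodator" if active > 0 else "Diverger"

# Example: abstract and active preferences classify as a Converger.
print(kolb_style(ce=20, ro=22, ac=30, ae=31))  # -> Converger
```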
This portrays the ideal learning cycle where the learner goes through each aspect –
experiencing, reflecting, thinking and acting – in a recursive process responding to the
learning situation, context and learning content. The learning cycle has intuitive appeal
and is commonly used as a basis of experiential learning, however, situational learning
theorists (Lave, 1998, Fox, 1998) propose a more radical model rejecting the cognitive
structure and suggest that learning is more social in nature and that individual’s do not
necessarily internalise experience as learning does not have to be explicit and declarative.
Simplistically, it may be that it is possible to learn from experience without knowing what
has been learned – something that Prensky (2000) suggests is a benefit of using games for
learning.

Trait modification
Trait modification approaches to learning have a basis in the concept of the learner
being describable as a broad set of traits – knowledge, skills and attitudes (Burgoyne,
2002) – and that learning is a change in the profile of these characteristics. These
approaches assume, to a greater or lesser extent, the characteristics of the conditioning,
cognitive and experiential approaches. Kraiger et al. (1993) synthesised the work of Gagne
(1984), Anderson (1982) and others proposing very similar broad-based categories of
learning outcomes: skill-based, cognitive and affective outcomes.
Cognitive or knowledge outcomes are reflected in the cognitive theories section
above, while skill-based learning outcomes address technical or motor skills. A number of
game-based instructional programmes have been used for practice and drill of technical
skills – for example, Gopher et al. (1994) investigated the effectiveness of the use of an
aviation computer game by military trainees on subsequent test flights.
Affective learning outcomes refer to attitudes – including feelings of confidence,
self-efficacy, preferences and dispositions. Some research has shown that
games can influence attitudes, for example see Wiebe and Martin (1994) and Thomas et
al. (1997).
The trait modification approach is closely associated with psychometric traditions,
such as Myers Briggs Type Indicators (MBTI) and the learning process is most often
associated with the idea of the learning cycle and learning styles.
Competency-based approaches to management development share the
characteristics of the trait modification approach. Dubois and Rothwell (2004) suggest that
competency-based approaches should be used when the resources and time are available

and the training has a significantly high strategic impact for a large population whose
performance should be exemplary rather than merely successful.

Management Learning Summary


This research will compare the learning and learning transfer from a strategic
management programme using three different activities, a management simulation, a
management game and case studies. The programmes are designed to be experiential; Symons (1996) suggests that in such a Type B programme, learning will only take place when the environment encourages risk-taking and provides feedback sensitively, quickly and unambiguously.
Whilst this research is concerned with the learning that is manifest by a training
intervention, it is also concerned with the transfer of such learning to the workplace by
means of behavioural change in the business, i.e. participants learn some things about
strategy and management and use this learning in the workplace to the benefit of the
business – often assumed and rarely measured (Thompson et al., 2002). This places this
research firmly in the trait modification approach to management learning. The only
difference between the programmes is the use of particular experiential delivery methods,
namely a management simulation, a management game and case studies. The case method
is the most used approach outside the traditional lecture/instruction format (Burgoyne and
Mumford, 2002) and is used here as an accepted standard comparison base, or control group
(Wolfe and Guth, 1975, Jennings, 2000). The literature review now continues to consider
how we might evaluate the development intervention and why such evaluation is
important.

2.2.2 Evaluation models and taxonomies

Why evaluate training and development, and why it doesn’t always happen.
As has been seen above, organisations place an increasing requirement on training and development to demonstrate its worth, particularly in terms of its impact on the business and the often associated but rarely measured Return on Investment (Phillips, 1991).
Training professionals and academics have emphasised other, non-monetary aspects of the
value of training and in particular, why there is a need to evaluate training and
development. Easterby-Smith (1994) notes four general purposes of evaluation (Table 4).

Table 4. Four general purposes of evaluation (Easterby-Smith, 1994)
1. Proving the worth and impact of training: Designed to demonstrate
conclusively that something has happened as a result of training or
development activities.
2. Improving: A formative process to explicitly discover improvements
to a training programme.
3. Learning: Where the evaluation itself is, or becomes, an integral
part of the learning on a training programme.
4. Controlling: Quality aspects in the broadest sense, both in terms of
quality of content and delivery to established standards.

More recent literature uses different terminology and offers some variations along
similar themes. Russell (1999), for example, does not include an explicit ‘learning’
purpose, but suggests that the evaluation of performance interventions produces benefits to
determine if performance gaps still exist and if the performance intervention achieved its
goals. Russell goes on to note that evaluation can assess the value of performance
intervention against organisational goals and [evaluation] helps to identify necessary
changes or improvements to the intervention in the future.
Russ-Eft and Preskill (2001) review evaluation across the entire organisation, not
just in the area of training and development, and suggest six distinct reasons to evaluate.
Applying the comparatively simple reasons from Easterby-Smith (1994) to Russ-Eft and
Preskill’s (2001) six reasons show strong similarities, suggesting that more is not
necessarily better, as can be seen in Table 5 below:

Table 5. Russ-Eft & Preskill (2001) - Reasons to evaluate
• Ensure quality (Controlling)
• Contributes to increased organisation members’ knowledge (Learning)
• Helps prioritise resources (Improving)
• Helps plan and deliver organisational improvements (Improving)
• Helps organisation members be accountable (Controlling)
• Findings can help convince others of the need or effectiveness of various organisational initiatives (Proving)
Words in brackets are the author’s adaptation after Easterby-Smith (1994)

The complexity and breadth of management training and development and its cost,
both in terms of time and effort as well as financial, has led to a continuing concern with
evaluation (Burgoyne and Reynolds, 1997a). In organisations, the concern is more about
proving and controlling whilst academia and professional consultants are often as
preoccupied, if not more so, with learning and improving. The problem of evaluation is

multi-faceted and whilst there are useful models and processes to address these, issues
remain, not least because evaluation requires time and effort.
Simulation and game based performance interventions need to be evaluated for the
same reasons outlined above, but pose particular additional problems with evaluation.
Burns et al. (1990) consider the multi-fold problem of evaluating experiential pedagogies, stating that there is firstly a need to compare their efficacy to ‘traditional’ approaches, and secondly a need to compare alternative experiential pedagogies competing to achieve the same learning. Not surprisingly, they note a paucity of solid empirical
evidence regarding the relative effectiveness of experiential techniques. Other authors
(e.g. Pierfy, 1977) note two particular problems with respect to evaluating simulations or
experiential techniques: the first being the conceptual problems pertaining to definitions,
domain boundaries and the theoretical basis which underpin and frame pedagogical
research. The second fundamental problem is that there remain significant methodological
difficulties including experimental design, constraints within the organisations and
institutions, time considerations and ethical questions associated with any comparative
study.
In an attempt to address these issues, it is useful to understand the models and schools of thought of evaluation, which provide a suitable framework with which to conceptualise the problem and find potential solutions.

Schools of thought in evaluation


Easterby-Smith (1994) groups major approaches to evaluation, classifying them on
two dimensions, the scientific-constructivist dimension, and the research-pragmatic
dimension.
The scientific-constructivist dimension represents the distinct paradigms, often
incompatible ways of seeing and understanding the world (Filstead, 1979). The scientific
end favours the use of quantitative techniques involving the attempt to operationalise all
variables in measurable terms. This contrasts greatly with the constructivist end, which
emphasises the collection of different views of various stakeholders before data collection
begins. The constructivist process continues with reviewing largely, but not necessarily
exclusively, qualitative data.
The research-pragmatic dimension represents the contrasting styles of how
evaluation is conducted. Easterby-Smith (1980) describes the two extremes as Evaluation
and evaluation (author’s emboldening) – representing the research and pragmatic styles
respectively.

The research style rightly stresses the importance of rigorous procedures: the direction and emphasis of the evaluation study should be guided by theoretical considerations, and these are aimed at producing enduring generalisations and knowledge about the learning and developmental processes involved.
The pragmatic style, in contrast, emphasises the reduction of data collection and
other time-consuming aspects of the evaluation study to the minimum possible. Easterby-Smith et al. (1991) point out that when a researcher is dependent on the cooperation and time of managers and other informants, those informants may well have other, more important priorities than the evaluator’s study. This usually means that, however good the intention, the researcher is cornered into a less than rigorous process, balancing the desire for solid, defensible evidence against the practical difficulty of persuading respondents to answer all questions when they have little personal incentive to do so.
Combining the dimensions into a useful matrix, Easterby-Smith uses arrows to
show the influence of one ‘school’ on the development of the linked ‘school’ shown in
Figure 4, below:

[Figure 4. Models and 'schools of thought' in evaluation (Easterby-Smith, 1986): a matrix formed by the scientific–constructivist and research–pragmatic dimensions. Experimental research occupies the scientific/research quadrant, illuminative evaluation the constructivist/research quadrant, the systems model the scientific/pragmatic quadrant and interventionalist evaluation the constructivist/pragmatic quadrant, with goal-free evaluation positioned between the two constructivist schools.]

In order to understand the approaches to evaluation that may be considered, the review uses Easterby-Smith’s matrix to highlight the key models and taxonomies that may be attributed to each ‘school’ of thought, though the division is, in practice, arbitrary as most evaluations contain elements of each point of view (Easterby-Smith, 1994). This provides a useful framework to divide the frequently overlapping considerations within evaluation. We start with the earliest, and seemingly still most utilised, school of thought, especially in business training, before reviewing those that have developed since and offer some significant suggestions to consider.

Experimental Research ‘school’


Experimental research has its roots in traditional social science research
methodology and Easterby-Smith (1994) cites Kirkpatrick (1959/60) as the best known
representative of this ‘school’ - (see also Campbell and Stanley, 1966, Campbell et al.,
1970, Hesseling, 1966). The emphasis in experimental research is:
1. Determining the effects of training and other forms of management development.
2. Demonstrating that any observed changes in behaviour or state can be attributed to
the training, or treatment, that was provided.
This is strongly associated with the ROI approach of Phillips (1997) where
practitioners seek to demonstrate the monetary business impact in addition to
Kirkpatrick’s levels of evaluation.
On the more academic side, there is an emphasis on theoretical considerations, preordinate designs, quantitative measurement and comparing the effects of different treatments. In a training evaluation study, this would involve at least two groups being
evaluated before and after the training intervention. One group receives training, whilst the
other group does not, or receives some previously known but different treatment. The
evaluation would measure the differences between the groups in specific, quantifiable
aspects related to their work.
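By way of illustration, the sketch below shows the kind of quantitative comparison such a design implies. It is a minimal sketch only: all scores are invented, and the scipy library is assumed to be available. Gain scores for a trained group and a control group are compared with an independent-samples t-test.

```python
# A minimal sketch of the two-group, pre/post design described above.
# All scores are hypothetical; scipy is assumed to be installed.
from scipy import stats

# Pre- and post-intervention scores on some quantifiable, work-related
# measure for a trained group and an untrained (control) group.
trained_pre = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]
trained_post = [3.9, 3.5, 4.1, 3.6, 3.4, 4.0]
control_pre = [3.0, 3.2, 2.9, 3.1, 3.3, 2.8]
control_post = [3.1, 3.3, 3.0, 3.0, 3.4, 2.9]

# Gain scores capture each individual's change between the two measurements.
trained_gain = [post - pre for pre, post in zip(trained_pre, trained_post)]
control_gain = [post - pre for pre, post in zip(control_pre, control_post)]

# The t-test asks whether the trained group's gains exceed the control
# group's by more than chance alone would explain.
t_stat, p_value = stats.ttest_ind(trained_gain, control_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Testing the gain scores, rather than the post-test alone, helps attribute any observed change to the treatment rather than to pre-existing differences between the groups.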
Russ-Eft and Preskill (2001) suggest three evaluation models of this ‘school’, in
particular, are frequently used in management training: Kirkpatrick, Brinkerhoff and
Holton. They do not include Bloom’s taxonomy, which tends to be associated more with
education than with training; however, according to Burns et al. (1990), Bloom’s taxonomy of educational objectives (Bloom, 1956) is particularly relevant to the assessment of learning from business simulations, as the scheme offers guidance in the pursuit of internal
validity and provides a structure for the measurement of learning across studies helpful to
external validity concerns. Hence, we first consider Bloom’s taxonomy before the other
three models:

Bloom’s taxonomy of educational objectives: ideal but not practical

Bloom’s taxonomy identifies a hierarchy of outcomes in order to plan learning experiences, taking into account three domains of behaviour: cognitive, affective and psychomotor. These range from basic knowledge to higher-level critical thinking skills. At
the bottom of the hierarchy, the objective is one of acquiring knowledge, which involves
remembering facts, terms, and concepts. The progressive levels above are those of
comprehension (explaining the material), application (using the concept to solve a
problem), analysis (breaking the material down into component parts), synthesis
(producing something new), and finally, evaluation (making a judgement based on a pre-
established set of criteria) (Gopinath and Sawyer, 1999).
Based on Bloom’s taxonomy and rigorous research standards, Gosen and Washbush (2004) stress the importance of understanding why such high research standards have been relatively rare, and propose three reasons why Bloom’s taxonomy would be ideal, but is not practical, for assessing the effectiveness of computer-based simulations:
1. Careful, rigorous research dedicated to developing a valid instrument and
reflective of thought-out learning objectives is extremely time-consuming.
2. The criterion variable being used, which is learning, is elusive. To Gosen and Washbush’s knowledge, every attempt to concretise this variable has failed.
3. Few researchers have called for validity studies to prove the value of lectures,
cases, examinations, discussions, term papers, or any of the other typical class
methods. “Why should we hold simulations to higher standards?” (Gosen and
Washbush, 2004, pp 284-285)
Gosen and Washbush’s (2004) research subjects are predominantly in educational environments, and Bloom’s taxonomy may be ideal within such settings, where learners can be compelled to take all tests in a controlled environment; within organisations, however, participants have more pressing matters than answering the researcher’s tests (Easterby-Smith et al., 1991). A review of the management education
literature shows a growing awareness of the taxonomy’s potential usefulness, particularly
with college and university level educators where it is often used for curriculum design
and student assessment (Athanassiou et al., 2003). Given the difficulties in assessing higher levels of learning, and the realities of the training world, Sackett and Mullen (1993) conclude that while one should generally take advantage of the opportunity to use a design that neatly controls for various threats, such as the rigorous use of Bloom’s taxonomy, situations will often arise when the best one can do is to implement some form of quasi-experimental design and attempt a mix of good science with good practice. Hence, we review more commonly used models designed for the training world rather than the educational one.
Kirkpatrick Four Levels

Kirkpatrick created the most familiar taxonomy of a four-step approach to evaluation (Kirkpatrick, 1959/60) – now referred to as a model of four levels of evaluation
(Kirkpatrick, 1994). It is one of the most widely accepted and implemented models used to
evaluate training interventions (Russ-Eft and Preskill, 2001).
Table 6 below shows Kirkpatrick’s four levels of measurement:

Table 6. Kirkpatrick's (1959/60, 1994) four levels of evaluation


1. Reaction to the intervention
2. Learning attributed to the intervention
3. Application of behaviour changes on the job
4. Business results realised by the organisation

This simple model is well-recognised and Russ-Eft and Preskill (2001) note that
the ubiquity of Kirkpatrick’s model stems from its simplicity and understandability –
having reviewed 57 journal articles in the training, performance, and psychology literature
that discussed training evaluation models, they found that 44 (77%) of these included
Kirkpatrick’s model. They go on to note that only in recent years, from 1996 onwards, have several alternative models been developed.
Kirkpatrick (1998b), in an article written in 1977, considered how evaluation at his four levels provides evidence or proof of training effectiveness, declaring that such proof requires an experimental design using a control group to eliminate possible factors affecting the measurement of outcomes from a training programme. Without such a design, [the model] can only provide evidence of training effectiveness, but not proof (Kirkpatrick, 1998a).
Kirkpatrick’s model is the basis of Phillips’ (1997) ROI model – often referred to as the fifth level and strongly endorsed as a preferred approach by the American Society for Training and Development (ASTD) and its sister organisation, the ROI Network (www.astd.org). Others suggest that it would be possible and desirable to go beyond business impact and ROI and consider societal impact (Watkins et al., 1998).
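At this fifth level, Phillips’ calculation expresses net programme benefits as a percentage of programme costs; the worked figures below are purely illustrative:

\[
\text{ROI}\,(\%) = \frac{\text{net programme benefits}}{\text{programme costs}} \times 100 = \frac{\text{programme benefits} - \text{programme costs}}{\text{programme costs}} \times 100
\]

For illustration, a programme costing $50,000 that can be credited with $80,000 of benefits yields an ROI of (80,000 − 50,000) / 50,000 × 100 = 60%.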
Whilst widely used, Kirkpatrick’s model is not without criticism. Alliger and Janak
(1989) and Holton (1996) discuss the flaws of the model in detail – in essence, their
critique is that the model only has limited use in education because it lacks explanatory

power. The model is useful in addressing broadly what happens, but not why it has
happened.
A useful question to ask about using the Kirkpatrick (or Phillips) model is: for whom is the evaluation being conducted? At the first level, Reaction – usually called ‘happy sheets’ – it is often the trainers who are most interested, to discover how much they were liked, or the client (the persons accountable for the training investment), who frequently uses this as the only measure of training effectiveness (Thompson et al., 2002, Sugrue and Kim, 2004). This does not mean that reaction to training is unimportant; however, if its purpose is to demonstrate effectiveness, it has questionable validity. Level
2, learning is clearly of interest to the trainees and trainers and may be of interest to others
as well. Behaviour change on the job is likely to be of interest to the trainee’s managers –
and to trainers as well, particularly if they are interested in understanding the impact of
training. Some argue, though, that learning and behaviour change only occur after failure (Schank, 1997), which is rarely an enjoyable experience in itself, and hence the link from reaction through learning to behaviour change may not always be in place. On a personal note, I recall vividly not enjoying Latin class in school at all, yet to this day I can recite any number of verb conjugations that I have never used in real life.
Kirkpatrick’s model assumes that the levels represent a causal chain such that
positive reactions lead to greater learning, which in turn produces greater transfer and
hence to more positive business impact or results. Kirkpatrick is, however, vague about the actual nature of the linkages; his writings do imply that a simple causal relationship exists between the levels of evaluation (Holton, 1996).
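The chain assumption is, at least in part, empirically checkable: if the levels form a causal chain, scores at adjacent levels should correlate. The sketch below is illustrative only – the per-participant scores are hypothetical and numpy is assumed to be available – and it underlines that correlation alone cannot supply the explanatory power Holton finds missing.

```python
# A minimal sketch of probing Kirkpatrick's causal-chain assumption.
# Scores are hypothetical: one value per participant at each level.
import numpy as np

reaction = np.array([4.2, 3.8, 4.5, 3.1, 4.0, 3.6, 4.4, 3.9])   # level 1
learning = np.array([72, 65, 80, 58, 70, 61, 78, 69])            # level 2
behaviour = np.array([3.5, 3.0, 3.9, 2.8, 3.2, 3.1, 3.8, 3.3])   # level 3

# Pearson correlations between adjacent levels in the supposed chain.
pairs = {
    "reaction -> learning": (reaction, learning),
    "learning -> behaviour": (learning, behaviour),
}
for name, (a, b) in pairs.items():
    r = np.corrcoef(a, b)[0, 1]
    print(f"{name}: r = {r:.2f}")
# Even strong correlations are only consistent with, not proof of,
# a causal link between the levels.
```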
Other authors suggest that Kirkpatrick’s model is incomplete for other reasons.
One example is Brinkerhoff whose model we shall consider next.

Brinkerhoff – adding prior evaluation stages

Brinkerhoff’s (1987, 1988) model has its roots in evaluating training and HRD
interventions. Brinkerhoff’s cyclical model consists of six evaluation stages (Table 8), grouped into the following four stages of performance intervention (Table 7):

Table 7. Four stages of performance intervention (Brinkerhoff, 1987)

• Performance analysis
• Design
• Implementation
• Evaluation

Brinkerhoff’s model addresses the need for evaluation throughout the entire human
performance intervention process (Table 8) and insists that the evaluation should be
determined by the purpose for conducting the evaluation, i.e. that the purpose should
reflect the audience’s concerns.

Table 8. Six stages of evaluation (Brinkerhoff, 1987)


1. Goal setting – identify business results and performance
needs and determine if the problem is worth addressing
2. Program design – evaluation of all types of interventions
that may be appropriate
3. Program implementation – evaluates the implementation
and addresses the success of the implementation
4. Immediate outcomes – focuses on learning that takes
place during the intervention
5. Intermediate outcomes – focuses on the after-effects of
the intervention, some time following the intervention
6. Impacts and worth – how the intervention has impacted
the organisation, the desired business results and
whether it has addressed the original performance need
or gap

Whilst using different terminology to ensure clarity and distinction between steps in the evaluation process, Brinkerhoff’s model essentially adds to Kirkpatrick’s four levels two prior stages that evaluate whether the performance intervention is necessary and
appropriate. The process suggested by Brinkerhoff explicitly considers other stakeholders
in the evaluation – a valuable addition to the implication in Kirkpatrick’s work – providing
greater understanding for the purpose of confirmation and for continuously improving
development programmes and performance.

Holton’s HRD evaluation research and measurement model – theory-driven and less pragmatic

Holton (1996) criticised Kirkpatrick’s model, arguing that it is a taxonomy and not a fully developed theoretical model. He agrees with, and identifies, three outcomes of training (learning, individual performance and organisational results), but suggests that these are affected by primary and secondary influences. Learning is affected by trainees’ reactions, their cognitive ability, and their motivation to learn. The outcome of individual performance is influenced by motivation to transfer learning, the programme’s design and the conditions for training transfer. Organisational results are

affected by the expectations for return on investment, the organisation’s goals, and
external events and factors. Holton’s model bears great similarity to Kirkpatrick’s and highlights an important factor not present there – the motivation of a trainee to transfer their learning to the workplace.
Holton’s model may be more testable than others, in that it is the only one that
identifies specific variables that affect the impact of training through identification of
various objects, relationships, influencing factors, predictions and hypotheses. However,
as pointed out by Russ-Eft and Preskill (2001), it is related to a theory-driven approach to
evaluation and less pragmatic than Kirkpatrick’s.
Kirkpatrick’s model remains ubiquitous within the business HRD community in spite of its potential flaws; its simplicity and intuitiveness perhaps at least encourage evaluation practice. On a more academic level, however, there are other issues to consider with an experimental approach.

Issues with experimental approach


There are a number of reasons why the experimental approach to training
evaluation may not work as well as it might promise, particularly with management
training where sample sizes are limited. Easterby-Smith (1994) discusses four main reasons: sample size, control groups, causality and time & effort.

Sample size

Statistical techniques, an essential component of the experimental approach, require large samples to detect statistical significance when evaluating management development. This becomes particularly problematic when group sizes are typically less than ten and rarely greater than thirty.
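The point can be made concrete with a standard power calculation. The sketch below – assuming the statsmodels library is available – computes the sample size needed per group to detect small, medium and large effects with a two-sample t-test at conventional thresholds:

```python
# A minimal sketch of why small cohorts defeat the experimental approach:
# required participants per group for a two-sample t-test at alpha = 0.05
# and 80% power, across conventional effect sizes (Cohen's d).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.2, 0.5, 0.8):  # small, medium, large
    n = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
    print(f"d = {effect_size}: about {n:.0f} participants per group")
```

Even a ‘large’ effect requires roughly 26 participants per group; a typical management development cohort of under ten can therefore reliably detect only very large effects.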

Control groups

Achieving genuine control groups in management development is especially difficult. One study by Easterby-Smith and Ashton (1975) is pertinent here: the group selected to receive training were ‘closer’ to their bosses than those who were not selected – the result, counter-intuitively, was that the non-selected group outperformed the selected group, perhaps in a deliberate effort to upset the selected group, or perhaps because the training intervention simply provided the non-selected group with the time and opportunity to become ‘closer’ to their bosses and prove their worth.

Causality

The intention of experimental design is primarily to demonstrate causality between the training intervention and any subsequent outcomes. However, it is often hard to isolate
the intervention from other influences on the manager’s behaviour – the study cited above
provides a possible example of this. It may be possible to reduce the clutter of other
influences for training interventions that have a clear, specific and identifiable skills focus.
Interventions of a more complex nature may be subject to myriad external influences
between evaluations.

Time & Effort

Kirkpatrick (1998b) identifies that, in proving the effectiveness of training, the post-test evaluation must be undertaken long enough after the intervention for the participant to have had an opportunity to change behaviour or for results to be realised.
The additional effort of evaluating training is regularly cited as being the main
reason for not undertaking it (Phillips, 1991, 1998). It adds to the burden of the trainer or
HRD team and the evaluation results are not the foremost priority of the organisation
(Russ-Eft and Preskill, 2001).

Illuminative evaluation
The illuminative evaluation ‘school’ takes issue with the comparative and
quantitative aspects of the experimental approach. Instead, this view tends to concentrate
on the views of different people in a more qualitative way. The aim of illuminative
evaluation is to discover not how a training intervention performs on standard measures,
but what factors and issues are important to the participants in that particular situation
(Parlett and Hamilton, 1987). However, formal illuminative evaluation has been noted to
be more costly than anticipated (Jenkins et al., 1981), and has been rejected by sponsors
(Easterby-Smith, 1994).

Systems model
The systems model ‘school’ of evaluation falls into the tradition of the behavioural objectives approach (Russ-Eft and Preskill, 2001) and is similarly scientific to the experimental ‘school’, though more pragmatic. The systems model has three main features: a starting point in objectives, an emphasis on identifying the outcomes of training, and a stress on providing feedback about these outcomes to those involved in providing training inputs (Easterby-Smith, 1994).

Evaluation here assesses the total value of training in social as well as financial terms, though Hamblin (1974) suggests that evaluating in social as well as financial terms is over-ambitious. Also widely referenced, Hamblin (1974) devised a five-level model (Figure 5) similar to Kirkpatrick’s, adding a fifth level that measures ‘ultimate value variables’ of human good.

[Figure 5. 'Chain of consequences' for a training event (Hamblin, 1974): training event → reactions effects → learning effects → job behaviour effects → organisation effects → ultimate value effects.]

An important feature of Hamblin’s work is the emphasis on measurement of outcomes from training at different levels. It assumes that any training event will, or can,
lead to a chain of consequence. Hamblin suggests that it would be unwise to conclude
from an observed change at one of the higher levels of effect, that this was due to a
particular training intervention, unless one has followed the chain of causality through the
intervening levels of effect. Should a change in job behaviour, for example, be observed, the constructivist take on this would be to ask the individuals for their own views of why they were now behaving in a different way, and then compare this interpretation with the views of one or two close colleagues.
The stress on feedback to trainers and decision makers in the training process is an
important feature of the systems model ‘school’. Warr et al. (1970) take a very pragmatic

view of evaluation, suggesting that it should help the trainer in making decisions about a particular programme as it is happening – reflecting in action to continuously improve the process (Schon, 1983). Rackham (1973) builds on earlier work
(Warr et al., 1970) making a further distinction between assisting decisions that can be
made about current programmes and feedback that can contribute to decisions about future
programmes. Rackham notes that the process of feedback from one programme to the next
resulted in clear improvements when the programmes were non-participative in nature –
but in programmes involving a lot of participation, there was no apparent improvement
after feedback to improve future programmes.
Feedback, as an important aspect of evaluation, is further developed by Burgoyne
and Singh (1977) who distinguish between evaluation as feedback and feedback adding to
the body of knowledge. The former they saw as perishable data of momentary value, contributing directly to decision-making; the latter as generating permanent and enduring
knowledge about education and training processes.
Burgoyne and Singh relate evaluative feedback to a range of decisions about
training in the broadest sense. Figure 6 shows an adaptation of their model with examples
of decisions at each level. This not only highlights the critical importance of the evaluation
and feedback process, but also the level of importance of each decision to the training and
development process.

[Figure 6. Evaluation of outcomes and link to decision making (adapted after Burgoyne and Singh, 1977). Decision levels, with example decisions at each: Policy (role of the training institution; provision of funding and resources); Strategy (optimisation of resources; organisation of the institution); Programme (longer or shorter programme? more or less structured? internal or external speakers?); Method (employ a lecture, case study or simulation to introduce a topic?); Intra-method (straight delivery or lively debate?). Evaluation research on the learning and behavioural/organisational outcomes of a programme, course or event feeds back to these decision levels and to the body of knowledge about training and education.]

The systems model has been widely accepted, especially in the UK, but there are a number of problems and limitations with it as an approach to evaluation – concerning feedback, the emphasis on outcomes, and the establishment of objectives.
Easterby-Smith (1994) suggests that feedback, as data provided from the evaluation of a past event, can only contribute marginally to decisions about the future because it is tied to the legacy of that past event. Thus, feedback can highlight incremental improvements based on a previous design, but cannot indicate when radical change is needed.
The emphasis on outcomes provides a good and logical basis for evaluation but it
represents a mechanistic view of learning. In the extreme, this suggests that learning
consists of facts and knowledge being placed in people’s heads and that this becomes
internalised and gradually incorporated into behavioural responses. Indeed, this criticism
is often levelled at many forms of e-Learning (Schank, 2002).
MacDonald-Ross (1973) questions whether there is any value in specifying formal objectives for training at all since, among other things, this might place unnecessary restrictions on the learning that could be achieved. This leads to the next major evaluation
‘school’: goal-free evaluation.

Goal-free evaluation
A radical view that the evaluator should take no notice of the formal objectives of
a programme during its investigation was proposed by Scriven (1972). Goal-free
evaluation leans more to the constructivist method where the evaluator should avoid
discussing, or even seeing, published objectives of the programme, and through a
discovery process with participants, establish what happened and what was learned. Goal-
free evaluation avoids the risk of narrowly defining the scope of enquiry and missing
unanticipated learning from a programme (Patton, 1990). This may not help the trainer
improve the programme towards the intended objectives and is likely to be time-
consuming as an evaluation process; however, identifying learning that is incidental or
extra can be of immense value.

Interventionalist evaluation
Contrasting responsive, or illuminative, evaluation with the preordinate approach
of experimental research, Stake (1980) suggests that the latter requires the design to be
clearly specified and determined before the evaluation begins – it makes use of objective
measures evaluating these against criteria determined by programme staff. Responsive
evaluation, on the other hand, is concerned with programme activities rather than
intentions – taking into account different value perspectives. Additionally, Stake (1980)
stresses the importance of responding to audience requirements for information (unlike
goal-free evaluators who distance themselves from principal stakeholders).
Guba and Lincoln (1989) take this method further in what they call responsive
constructivist evaluation. They recommend starting with the identification of stakeholders
and their concerns, arranging for these concerns to be exchanged and debated before
collection of further data.
Patton (1978) stresses the importance of identifying the motives of key decision makers and their information needs, recognising that some stakeholders have more influence than others (which Guba and Lincoln (1989) argue should not be the case), but goes further by concentrating on the uses of the subsequent information.
Interventionalist evaluation, including its guises of responsive and utilisation-focussed evaluation (Easterby-Smith, 1994), is in danger of being so flexible in its approach that it may produce results that are weak and inconclusive as the whims and circumstances of each stakeholder change. Impartiality and credibility may also be reduced as the evaluator becomes too close to the programme and stakeholders.

2.2.3 Management learning and evaluation summary


Many researchers and organisations measuring the effects of training have looked
at one or more of the outcomes identified by Kirkpatrick (1959/60, 1994): reaction,
learning, behaviour, and results. It is reasonable to assume that enjoyment is a precursor to
learning, and that if a trainee enjoys the training, they are likely to learn (Russ-Eft and
Preskill, 2001). However, in a meta-analytic study combining the results of several other studies, Alliger et al. (1997) found no support for this assumption.
Kirkpatrick’s second level, Learning, is the most commonly used measurement
after trainee reaction to assess the impact of training. Studies investigating the relationship
between learning and work behaviour have shown mixed results and offer little concrete
evidence to support the notion that increased learning from a training programme relates
to performance in the organisation (Collins, 2004).
Transfer of Learning is the application of knowledge, skills and attitudes
acquired during training to the work setting. Some research in this area has focused on
comparison of alternative conditions for training transfer. A typical design of such research compares groups who receive different training methods and/or a control group that receives no training. Frameworks to investigate learning transfer have been developed by a number of researchers, with some commonality in three phases: a pre-training phase, including needs analysis; a learning phase, the delivery of the training; and a post-
course phase – management of the work environment to promote transfer (see Huczynski
and Lewis, 1980). Using the transfer factors highlighted in such studies, Lim and Johnson (2002) identified that the opportunity to use [the learning] on the job was the main reason for high transfer; conversely, lack of opportunity to apply it on the job was the main reason for low transfer. The research into learning transfer indicates that some form of post-training transfer strategy facilitates transfer (Russ-Eft and Preskill, 2001).
Results of training – in terms of business results, financial results and return on investment – are much discussed in the popular literature, most of it offering anecdotal evidence or conjecture about the necessity of evaluating training’s return on investment and methods that trainers might use to implement such an evaluation. Solid research on this topic is not, however, so voluminous. Mosier (1990) proposes a number of capital
budgeting techniques that could be applied to evaluating training whilst recognising that
there are common reasons why such financial analyses are rarely conducted or reported
(Table 9):

Table 9. Reasons that training is not evaluated with financial analyses (Mosier, 1990)

• It is difficult to establish a monetary value for many aspects of training
• No usable cost-benefit tool is readily available
• The time lag between training and results is problematic
• Training managers and trainers are not familiar with
financial models

Phillips (1991, 1998) used Kirkpatrick’s four levels and created a useful framework for measuring ROI; however, its use requires considerable time and effort, it is typically applied only to the more expensive training interventions, and the results are most often not released into the public domain by the organisations conducting them.
Reviewing the history and development of training evaluation research shows that
there are many variables that ultimately affect how trainees learn and transfer that learning
to the workplace. Russ-Eft and Preskill (2001) suggest that a comprehensive evaluation of learning, performance and change would include the representation of a considerable number of variables. Such an approach, whilst laudable in terms of academic research, requires organisations to recognise the trade-off between the cost and duration of the evaluation process and the quality of the information it generates (Warr et al., 1970).

A considerable amount of research has been carried out with a great variety of
focal theories, usually looking for consistent relationships between educational methods
and learning outcomes. The underlying theory appears to be taken from behaviourist psychology – where the view is that what the subject (patient) does (behaviour or response) is a consequence of what has been done to him (treatment or stimulus) (Hamblin, 1974).
There are no fixed rules about the style of evaluation to be used for a particular purpose; however, Easterby-Smith (1994) suggests that studies aimed at fulfilling the
purpose of proving will tend to be located towards the research end of the dimension, and
studies aimed at improving, will tend towards the pragmatic end. On the methodological
dimension, there may be more concern with proving at the scientific end, and learning at
the constructivist end (Figure 7).

[Figure 7. Use of evaluation style matrix (adapted from Easterby-Smith, 1994): the matrix of Figure 4, with 'proving' located towards the research and scientific ends, 'improving' towards the pragmatic end, and 'learning' towards the constructivist end.]

Management development which is intended to increase the effectiveness of managers can be evaluated at a number of levels and using any individual or combination of models. Kirkpatrick’s (1959/60) framework remains the most widely used: most organisations carry out evaluation at the reaction level, some measure learning, but few attempt to assess changes in behaviour or the impact on the business. This research study aims to demonstrate how effective the use of management simulations or games is in developing managerial competency in comparison to the use of case studies – as such, the evaluation model of the training programme leans towards the proving ends of each of Easterby-Smith’s dimensions in the above matrix, placing this study firmly within experimental research. We now need to review and understand in more detail the aspects of transfer of learning and how it impacts the business.
2.3 Learning transfer
The literature about learning transfer suggests multiple definitions though there is
general agreement that transfer involves the application, generalisability and maintenance
of new knowledge and skills (Ford and Weissbein, 1997). A number of writers identify
specific factors that affect transfer particularly relating it to work environment factors
(Awoniyi et al., 2002, Cromwell and Kolb, 2002, Lim and Johnson, 2002) and the
associated transfer climate supporting or inhibiting transfer (Holton et al., 2001). Ruona et
al. (2002) argue that the transfer climate is only one of a network of factors that affect
transfer of learning to the workplace, other factors include the training design, personal
characteristics, the opportunity to use the training, and motivational influences.
A notable issue with many of the studies cited above is the opportunity for trainees to utilise the learning in the workplace – this is understandable in regard to highly specific technical or task-specific skills, but less so for general management skills. In this
research, exactly what is to be measured as part of the evaluation is a particularly
problematic area. Aspects of behaviour or reaction that are relatively easy to measure are
usually trivial. Measuring change in behaviour, for example, may require the observations
to be reduced to simpler or more marginal characteristics. Alternatively, these
characteristics may be assessed by the individual’s subordinates – however, the purist
would no doubt claim that such holistic judgments are of dubious validity. The problem is
that the general requirement for quantitative methods tends to produce a trivialisation in
the focus of the evaluation. According to Bedingham (1997, p89), ultimately “the only
criteria that make sense are those which are related to on-the-job behaviour change”.
Bedingham (1997) advocates the use of 360º questionnaires that objectively measure competency sets and skills applicable to most organisations, functions or disciplines, with the results of feedback gathered immediately prior to the event made available to trainees during their training – allowing individuals to see easily how they actually do something, and the relevance of the training, so that they can start transferring the learned skills immediately on return to the workplace. The downside of providing feedback on pre-test measurements during the training is that this may immediately skew the results, as will be discussed in the methodology section later.
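To illustrate the mechanics only – this is not Bedingham’s instrument, and the competency names and ratings below are hypothetical – the sketch shows how a 360º competency score can be aggregated across raters and compared before and after training:

```python
# A minimal sketch of scoring a 360-degree competency questionnaire.
# Competency names and ratings are hypothetical; each competency is
# rated on a 1-5 scale by several colleagues (boss, peers, subordinates).
from statistics import mean

pre_ratings = {
    "team leadership": [3, 4, 3, 3],
    "communication": [4, 3, 4, 4],
}
post_ratings = {
    "team leadership": [4, 4, 4, 3],
    "communication": [4, 4, 4, 5],
}

# Transfer is inferred from the change in the mean rating per competency.
for competency in pre_ratings:
    before = mean(pre_ratings[competency])
    after = mean(post_ratings[competency])
    print(f"{competency}: {before:.2f} -> {after:.2f} "
          f"(change {after - before:+.2f})")
```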
The inhibitors to learning transfer, particularly within the locale of this research,
are well shown in research by Tsang (1997), who set out to discover what and how
Singaporean companies learn from foreign direct investments (FDIs) in China and from conducting joint ventures with Chinese companies. His research design involved a sample of 19 Singaporean companies with business experience in China; he then carried out approximately 80 interviews with Singaporean and Chinese managers working for these companies. Tsang also examined meeting records and reports. On the basis of his data, he concluded that Singaporean companies rarely learn much from their business links in China, although there is considerable evidence of technological and managerial systems transfer to the Chinese partners. From the evidence, Tsang inferred a number of reasons:
• Singaporeans felt that their systems were superior to those in China and
therefore would learn little from their Chinese partners
• There was little transfer of learning back to the parent company because no
institutional structures were set up for this purpose
The interesting conundrum posed here is that whilst Singaporeans seek greater
exposure to management abroad, once back in the home organisation, they may not put
newly developed competencies and capabilities to effective use for reasons suggested by
Holton et al. (2000) and Russ-Eft and Preskill (2001).
It is a popular assumption that the purpose of management development is to provide the means and the route to improving performance and to creating or maintaining competitive advantage, yet there remains little robust empirical evidence to support this.
According to Mumford (1994) it is necessary to define effective management behaviour
and an increasingly common way of doing so is by understanding and defining what
managers do, how they behave and what they should do to be successful (Robotham and
Jubb, 1996). Managerial competencies and the movement associated with their definition
and understanding provides a way of defining effective management. Schroder (1989), for
example, suggests that competences are simply personal effectiveness skills, whilst others
consider competences as being linked to personality and therefore, within the context of
the intended research, potentially impact on the understanding of managerial performance
and effectiveness.
The next section reviews managerial competency models and frameworks building
on the literature above identifying this research to be within the trait modification
approach to management learning - where competency models have become a common
way to identify and plan development programmes.

2.3.1 Managerial competency models


There is a growing level of interest and focus on managerial competences and
managerial performance with a wealth of literature (see Boyatzis, 1982, Finn, 1993,
Sarawano, 1993, Spencer and Spencer, 1993, Higgs, 1999, Young, 2002), with
considerable debate and confusion over the definition of competences and competencies

(Finn, 1993). The author will follow the distinction made by Young (2002); in the meantime, the terms used are those used in the works cited.
Many researchers have attempted to identify and isolate the competencies or
characteristics or dimensions of superior performers in the practice of management.
McClelland is often cited as the father of the modern competency movement. In 1973, he challenged the then orthodoxy that academic aptitude and knowledge content tests could predict performance or success in life, arguing that they were biased against minorities and women (Young, 2002). Identified through patterns of behaviour, competencies are characteristics of people that differentiate performance in a specific role or job (McClelland, 1973, Kelner, 2001). Competencies distinguish well between roles at the same level in different functions, and also between roles at different levels (even in the same function), often by the number of competencies required to define the role. A competency model for a middle manager is usually defined within ten to twelve competencies, of which two are relatively unique to a given role. Kelner (2001) cites a 1996 unpublished paper in which the late David McClelland performed a meta-analysis of executives assessed on competencies, discovering that only eight competencies could consistently predict performance in any executive with 80 per cent accuracy.

Competence and Competency


The concept of competence remains one of the most diffuse terms in the organisational and occupational literature (Nordhaug and Gronhaug, 1994). Exactly what does an author mean when using any of these terms?
The concept of individual competence is widely used in human resource
management (Boyatzis, 1982, Schroder, 1989, Burgoyne, 1993). This refers to a set of
skills that an individual must possess in order to be capable of satisfactorily performing a
specified job. Although the concept is well developed, there is continuing debate about its
precise meaning.
Others take a job-based view of competence which, according to Robotham and Jubb (1996), can be applied to any type of business: the competence-based system rests on identifying a list of key activities (McAuley, 1994) and behaviours identified through observing managers in the course of doing their jobs.
A useful view is to take competence to mean a skill and the standard of performance reached, whilst competency refers to the behaviour by which it is achieved (Rowe, 1995). That is, competence describes what people do, and competency describes how people do it.

Rowe (1995, p16) further distinguishes the attributes an individual exhibits as “morally based” behaviours – important drivers of behaviour but especially difficult to measure – and “intellectually based” behaviours as capabilities or competencies. Capabilities are distinguished in that they refer to developmental behaviours – i.e. they are graded to note development areas for improving how people undertake particular tasks.
Young (2002) develops a similar theme and builds on Sarawano’s (1993)
model, linking competency and competence to performance and identifies competency as
a personal characteristic (motives, traits, image/role and knowledge) and how the
individual behaves (skill). Competence is what a manager is required to do – the job
activities (functions, tasks). These in turn lead to performance of the individual [manager].
Jacobs (1989) considers a distinction between hard and soft competences. Soft
competences refer to such items as creativity and sensitivity, and comprise more of the
personal qualities that lie behind behaviour. These items are viewed as being conceptually
different from hard competences, such as the ability to be well organised. Jacobs’ distinction fits neatly into Young’s model, with hard competences referring to identifiable behaviours, and soft competences to the personal characteristics of the individual.
Further distinctions relate to the usefulness of measuring competenc[i]es. Cockerill
et al. (1995) define threshold and high-performance competences. Threshold competences
are units of behaviour which are used by job holders, but which are not considered to be
associated with superior performance. They can be thought of as defining the minimum
requirements of a job. High performance competences, in contrast, are behaviours that are
associated with individuals who perform their jobs at a superior level.
In the UK, the Constable and McCormick Report (1987) suggested that the skill
base within UK organisations could no longer keep pace with the then developing
business climate. In response, the Management Charter Initiative sought to create a
standard model where competence is recognised in the form of job-specific outcomes.
Thus, competence is judged on performance of an individual in a specific job role. The
competences required in each job role are defined through means of a functional analysis –
a top-down process resulting in four levels of description:
• Key purpose
• Key role
• Units of competence
• Elements of competence

Elements are broken down into performance criteria, which describe the
characteristics of competent performance, and range statements, which specify the range
of situations or contexts in which the competence should be displayed.
The MCI model now includes personal competence, missing in the original, addressing some of the criticisms levelled at the MCI standards, though the model still tends to ignore personal behaviours which may underpin some performance characteristics, particularly in the area of management, where recent work has indicated the importance of behavioural characteristics such as self-confidence, sensitivity, proactivity and stamina.
The US approach to management competence, on the other hand, has focused
heavily on behaviours. Boyatzis (1982) identifies a number of behaviours useful for
specifying behavioural competence. Schroder (1989) also offers insights into the personal
competencies which contribute to effective professional performance.
Personal competencies and their identifying behaviours form the backbone of
many company-specific competency frameworks and are used extensively in assessment
centres for selection purposes. This is because behavioural (or personal) competence may
be a better predictor of capability – i.e. the potential to perform in future posts – than
functional competence – which attests to competence in current post. The main weakness
of the personal competence approach, according to Cheetham and Chivers (1996), is that it
doesn’t define or assure effective performance within the job role in terms of the outcomes
achieved.
In his seminal work “The Reflective Practitioner”, Schon (1983) attempts to define
the nature of professional practice. He challenges the orthodoxy of technical rationality –
the belief that professionals solve problems by simply applying specialist or scientific
knowledge. Instead, Schon offers a new epistemology of professional practice of
‘knowing-in-action’ – a form of acquired tacit knowledge – and ‘reflection’ – the ability to
learn through and within practice. Schon argues that reflection (both reflection in action
and reflection about action) is vital to the process professionals go through in reframing
and resolving day-to-day problems that are not answered by the simple application of
scientific or technical principles.
Schon (1983) does not offer a comprehensive model of professional competence,
rather he argues that the primary competence of any professional is the ability to reflect –
this being key to acquiring all other competencies in the cycle of continuous improvement.
There are criticisms of competency-based approaches to management and these
tend to argue that managerial tasks are very special in nature, making it impossible to
capture and define the required competences or competencies (Wille, 1989). Other writers
argue that management skills and competences are too complex and varied to define
(Hirsh, 1989, Canning, 1990) and it is an exercise in futility to try and capture them in a
mechanistic, reductionist way (Collin, 1989). Burgoyne (1988) suggests that the
competence-based approach places too much emphasis on the individual and neglects the
importance of organisational development in making management development effective.
It has also been argued that generic lists of managerial competences cannot be applied
across the diversity of organisations (Burgoyne, 1989, Canning, 1990).
Considering the wealth of often contradictory literature, are there useful
frameworks that will enable the measurement of behaviour as a means of assessing the
effectiveness of learning transfer? The next section reviews different competency
frameworks with this intended purpose.

Competency frameworks
One framework used frequently in the UK was developed by Dulewicz and
Herbert (1992), who empirically identified twelve independent dimensions of managerial
performance – ‘supra-competencies’ – based on research of assessment centres (Dulewicz
and Fletcher, 1982, Fletcher and Dulewicz, 1984) and developed the Job Competency
Survey questionnaire (JCS). Dulewicz and Herbert (1992) correlated the responses on the
importance and performance of each competency in the JCS and cross-validated with the
Occupational Personality Questionnaire (OPQ). The twelve competencies, outlined in
(Table 10) below, are grouped into four broad clusters:

Table 10. JCS competencies (Dulewicz and Herbert, 1992)


Intellectual
• Strategic perspective
• Analysis and judgement
• Planning and organising

Interpersonal
• Managing staff
• Persuasiveness
• Assertiveness
• Interpersonal sensitivity
• Oral communication

Adaptability
• Adaptability and resilience

Results-orientation
• Energy and initiative
• Achievement motivation
• Business sense

The JCS provides a useful instrument, having been derived from Dulewicz’s assessment centre work and the literature, and having already been used to test the responses of
hundreds of managers. However, some authors (Sarawano, 1993, Gay, 1995, Birchall et
al., 1996, Chong, 1997) have sought to check its suitability in an international context. As
this research is being carried out in South East Asia, the author reviewed research
pertaining to competency frameworks and their use outside the UK and US to identify if
other frameworks might be more appropriate.

Is the JCS suitable in the location of the research?


Gay (1995) reviews generic competency models from McBer (Spencer and
Spencer, 1993), Dulewicz and Herbert (1992), Coulson-Thomas (1992), Barham and
Oates (1991), and Barham and Wills (1992) – with the intention of seeking differences between the relative importance that practising international managers give to competences and the rankings previously established with respect to national managers.
Gay’s (1995) research design used the JCS model as a basis, supplemented with specific competences for international managers derived from his literature review, including consideration of the influence of cultural differences on the conduct of international business. Building on the work of Hutton (1988), Hofstede (1980), Elashmawi and Harris (1993) and Adler (1986), Gay (1995) develops three additional categories to supplement the JCS: Global Awareness, Cross-border Cultural Awareness and Foreign Language Skills. However, the resulting instrument would have contained some fifty competences, which Gay reduced to a more workable twenty that a chosen sample of international managers regarded as being the most relevant to successful performance in the international context.
Chong’s working paper (1997) builds upon a follow-up study by Dulewicz and
Herbert (1996) comparing the managerial competences and performance of UK managers
with Singaporean public sector managers. He notes similarities in ten factors and distinct
competences peculiar to each nationality.
Sarawano’s (1993) research focused on the effect of cultural differences on
managerial style and compared competences exhibited by UK managers (from the
Dulewicz and Herbert 1992 study) and those exhibited by Malaysian managers. Sarawano
used Cattell’s 16PF and a modified JCS to facilitate comparison with the UK data. He
puts forward several hypotheses to predict differences expected between the two groups,
largely based on Hofstede’s (1980) work on cultural differences.
Each of these studies suggests that a generic managerial competency framework
could be used to assess both expatriate managers and managers in their own country –

allowing for differences in culture stemming from national cultures, the organisation’s culture and the individual’s cultural heritage.

Is the JCS up-to-date?


It is more than twelve years since the JCS was created and the world of business has seen considerable change during that period, not least the mid-1990s drive towards a Knowledge-Based Economy across the world and, of particular interest to this research, in South East Asian countries.
The Singapore Training and Development Association (STADA) in conjunction
with the Nanyang Technological University (NTU) undertook a study in 2001 on
‘Managerial Competencies in a Knowledge Based Economy (KBE)’ (Lee Mei Ching et
al., 2002). This study sought the views of Human Resource Development (HRD)
professionals in Singapore on the implications of the KBE on human resource
development. Table 11 notes the top ten competencies in a KBE resulting from this study:

Table 11. Top Ten competencies in a KBE - Singapore HRD perspective (Lee Mei Ching et al., 2002)
1. Adaptability to changes
2. Ability to see the ‘big picture’
3. Communication skills
4. Visioning skills
5. Knowledge of own strengths and weaknesses
6. Creative thinking skills
7. Relationship building skills
8. Leadership skills
9. Consulting skills
10. Understanding of improvement in human performance

Kenworthy and Wong (2003) undertook a qualitative study with focus groups of
business leaders, senior managers and HRD practitioners based in Singapore to understand
what makes for an effective manager and to compare the findings with the frameworks
above – this to help establish if the existing frameworks are still current and applicable in
South East Asia. The study identifies those competencies considered to be base or
threshold and those behaviours that differentiate effective managers. Mapping the
constructs across shows close similarities, though there does appear to be a newer explicit
emphasis on creativity and passion (Table 12):

Table 12. A study of the attributes of managerial effectiveness in Singapore - competency model -
(Kenworthy and Wong, 2003)

Key: B = base/threshold competency; D = differentiator.

Achievement / Results Oriented
• Achieves the business results (goals, sales targets, financial targets) (B)
• Completes tasks or targets on time (B)
• Sets own and others’ goals or targets (D)

Accountability and Responsibility
• Shows strong commitment to organisation (B)
• Prepared to make tough decisions and see them through (B)
• Helps to grow the business and grow with the business (intellectually and behaviourally) (B)
• Able to stand on own feet to make decisions (B)
• Takes accountability and responsibility for decisions made (D)

Executive Maturity
• Positive thinker (B)
• Eye for detail (B)
• Resourceful (B)
• Analytical (B)
• Attuned to own potential and limitations (B)
• Prepared to learn from mistakes and open to continuous self-learning (B)
• Unafraid of criticism or feedback (B)

Cultural Sensitivity
• Aware of ‘face’ or ‘guan xi’ issues in dealing with people from different races (B)
• Works around cultural barriers (B)
• Adopts and adapts the way business is conducted (B)
• Respects different mindsets (B)
• Enjoys mixed cultural environments and working in a ‘melting-pot’ (D)
• Works well with foreigners and locals (D)

Building and Managing Relationships
• Invests time in building both formal and informal relationships (B)
• Good at identifying others’ strengths and weaknesses (B)
• Earns trust (B)
• Develops and uses skills in influencing, networking and customer satisfaction (D)
• Gets along upwards, downwards and laterally (D)

Communication
• Able to package and present message (B)
• Actively listens to others (B)
• Shows strong interpersonal skills and always ready to understand others’ views (B)

Team Leadership
• Recognises, understands and uses team dynamics, social structure, culture, strengths and weaknesses (B)
• Works and relates well with people around, aligns with the company culture and builds consensus (B)
• Fits in and wants to belong and contribute (B)
• Open communication style (B)
• Supports and knows where to gain support (B)
• Teaches or coaches others (D)
• Motivates others and inspires team members (D)

Measuring managerial effectiveness?
The JCS focuses strongly on competencies as predictors of superior performance, measured by advancement in the organisation or success. In this research, however, the author is interested in measuring learning transfer as a result of a training programme, and whilst the JCS is recognised as a good indicator, it is more future-oriented than historical.
Historically, there has been little agreement on what exactly constitutes managerial effectiveness or excellence (Koch and Cebula, 1994). Robotham and Jubb (1996) debate whether it is possible to identify elements of effective performance from observing practicing managers, attributing the difficulty to the precise nature of management being, to a degree, contingent on the context or environment in which it is being practiced. Thus, behaviours that are considered effective in one sector may be less appropriate in another.
Traditionally, the views surrounding the issue of managerial effectiveness have
tended to be largely based on the assumptions about what managers do, and what they
should do to be successful (Robotham and Jubb, 1996). Such assumptions are challenged
(Luthans et al., 1985) in that rather than relying on an evaluation of managers’
performance that is based on the activities traditionally prescribed for managerial success,
a focus on the activities managers actually perform has emerged.
A useful set of scaled competencies – competencies that have sets of behaviour
ordered into levels of sophistication or complexity – was developed by Spencer and
Spencer (1993). The competencies found to be most critical for effective managers are
shown in (Table 13):

Table 13. Managerial competencies (Spencer and Spencer, 1993)

• Achievement orientation
• Developing others
• Directiveness
• Impact and influence
• Interpersonal understanding
• Organisational awareness
• Team leadership

This set of characteristics, or individual competencies, that a manager brings to the job needs to match the job well, or additional effort may be necessary to carry out the job, or the manager may not be able to use certain managerial styles effectively. These in turn
are affected by the organisational climate and the actual requirements of the job.
Managerial effectiveness is the combination of these four critical factors:
organisational climate, managerial styles, job requirements and the individual
competencies that a manager brings to the job. Reddin (1970) points out that managerial effectiveness is not a quality but a statement about the relationship between a manager’s behaviour and some task situation.
If the Spencer and Spencer (1993) framework of competencies is able to predict performance in any executive (Kelner, 2001), and these competencies are ‘trainable’, then any training programme designed to develop managerial effectiveness in any role can be evaluated by assessing the changes in the participant’s behaviour that demonstrate the competency.
Spencer and Spencer’s (1993) framework was used to develop the Hay/McBer Managerial Competency Questionnaire (MCQ) (McBer, 1997) and Table 14 below shows a mapping across each of the frameworks discussed in detail above.

Table 14. Comparing MCQ (Spencer and Spencer, 1993), JCS (Dulewicz and Herbert, 1992),
Lee et al. (2002), Kenworthy and Wong (2003)
MCQ | JCS | STADA/NTU | Kenworthy & Wong
Achievement Orientation | Achievement Motivation; Proactive | Adaptability to changes | Achievement / Results Oriented
Developing Others | – | Understanding of improvement in human performance | Team Leadership
Directiveness | Assertiveness & Decisiveness; Independent | – | Accountability & Responsibility
Impact and Influence | Persuasiveness | Communication skills; Consulting skills; Creative thinking skills | Communication; Adaptability & Flexibility
Interpersonal Understanding | Interpersonal Sensitivity; Adaptability & Resilience | Relationship building skills | Cultural Sensitivity; Communication
Organisational Awareness | Strategic Perspective | Ability to see the “big picture”; Knowledge of own strengths and weaknesses | Executive Maturity
Team Leadership | Professionalism & Judgement | Leadership skills; Visioning skills | Team Leadership; Passion

Mapping the competencies across the frameworks was done using the competency descriptors that, in each case, describe the behaviour demonstrated in each
competency. The MCQ maps well onto the JCS and brings in an important element of
Developing Others – shown in the two later studies. Since the competencies are scaled, it
also provides a means of measuring change in behaviour readily, and as such provides a
suitable basis to measure managerial effectiveness (competence in doing the job of
management). Appendix 5 shows the MCQ Competency descriptors.

2.3.2 Learning transfer and competency frameworks summary


The effectiveness of training programmes is not solely contingent upon either the
learning content or the quality of delivery methods (Naquin and Holton, 2003) and the
transfer of learning to the workplace may be inhibited by the lack of an appropriate
transfer climate (Huczynski and Lewis, 1980) and/or the motivation of participants to use
the knowledge and skills gained during the training (Ruona et al., 2002). It makes intuitive sense that motivation to learn would be an important precondition of learning, and that motivation to transfer learning to the workplace would be an important precondition of effective transfer (Naquin and Holton, 2003); however, this debate does not contribute to assessing behaviour change. To assess the effectiveness of a management development programme, it is necessary to define effective management behaviour (Mumford, 1994, Winterton and Winterton, 1999), and the literature on management competencies provides useful frameworks describing the behaviour, skills and application of appropriate knowledge associated with effective management.
This part of the review has considered a number of commonly used competency frameworks and assessed their suitability both for the time of the research and the location in which the study takes place, concluding that the Hay/McBer framework, based on Spencer and Spencer (1993), and its associated instrument, the MCQ (Hay/McBer, 1997), provide the author with a means of measuring competencies before and after the training event to demonstrate any change in behaviour. Using a generic competency model has the benefit of not being job- or situation-specific, thus reducing the most prevalent inhibitor of transfer of a learned skill to the workplace, that of no opportunity to do so (Awoniyi et al., 2002, Cromwell and Kolb, 2002, Lim and Johnson, 2002).
The next section will review how individual competencies link to the performance
of the organisation and Kirkpatrick’s highest level of evaluation – business results.

2.4 Business results – linking individual competency models to
organisation outcomes
Some writers have identified competencies that are considered to be generic and overarching across all occupations. Reynolds and Snell (1988) identify ‘meta-qualities’ – creativity, mental agility and balanced learning skill – that they believe reinforce other qualities. Hall (1986) uses the term ‘meta-skills’ for skills in acquiring other skills. Linstead (1991) and Nordhaug and Gronhaug (1994) use the term ‘meta-competencies’ to describe similar characteristics. The concept of meta-competence falls short of providing a holistic, workable model, but it does suggest that there are certain key competencies that overarch a whole range of others.
There is, however, some doubt about the practicability of breaking down the entity of management into its constituent behaviours (Burgoyne, 1989). This suggests that the practice of management should perhaps be considered only from a holistic viewpoint.
Baker et al. (1997) link the various types of competence by first establishing a
hierarchy of congruence as a backbone to the model. In broad terms, they describe the
congruence of an entity to be the degree of match or fit between some external driver to
the entity and the response of that entity to the driver. This method enables them to take
into consideration the idea that management, as an entity, and the individuals who perform
the function do so within a particular environment. Measurement of congruence, or goodness of fit, has been attempted in studies of operations (Cleveland et al., 1989,
Vickery, 1991). Baker et al.’s hierarchy is shown in Figure 8 below, with four levels of
congruence: 1) Organisation level, 2) Core business process level, 3) Sub-process within
core process level, and 4) Individuals level.

Figure 8. Hierarchical model of competence (Baker et al., 1997)

At the organisation level, there is congruence when a firm adopts a strategy that is consistent with the competitive priorities derived from the firm’s business environment. The strategy, in turn, determines the operational priorities of the firm; following Platts and Gregory (1990), Baker et al. (1997), using their own terminology, consider these operational priorities to drive the core processes of the firm. These, in turn, can be broken down into a number of sub-processes, and congruence is needed between the sub-processes and the core processes. At the individual level, the skills and knowledge should also match the priorities driven by the sub-processes.

This hierarchical model follows a traditional approach in which structure follows strategy (Vickery, 1991, Cleveland et al., 1989, Kim and Arnold, 1992). Others hold that competences are a part of the structure of the firm and should influence strategy making; Bhattacharaya and Gibbons (1996) point out that Prahalad and Hamel (1990) and Stalk et al. (1992) take this approach.
The hierarchical model was tested by analysing case studies of seventeen UK manufacturing plants that won Best Factory Awards during the period 1993-95 (Cranfield) and established benchmarks. Baker et al. (1997) found some direct cause-effect links between enabling competences at the sub-process level and competitive performance (at the core process level). However, they also found many ‘best practices’, such as employee empowerment and team working, which were harder to link to specific competitive competences.
This model provides an insightful way to break down the complex issue of how individual performance influences the competitive competences of the firm. Baker et al.’s research is limited to the manufacturing sector, where core processes are often easier to identify and define, with a clear delineation of individual effort, technology and product. It is also established on the basis that structure follows strategy, whereas most firms will already have a structure and will be adapting their strategies continuously as the external environment changes.
Cheetham and Chivers (1996) describe a model of competence that draws together
the apparently disparate views of competence - the ‘outcomes’ approach and the
‘reflective practitioner’ (Schon, 1983, Schon, 1987) approach.
Their focus was to determine how professionals maintain and develop their professionalism. In drawing together their model, they consider the key influences of different approaches and writers. The core components of the model are knowledge/cognitive competence, functional competence, personal or behavioural competence and values/ethical competence, with overarching meta-competencies including communication, self-development, creativity, analysis and problem-solving. Reflection in and about action (Schon, 1983) surrounds the model, thereby bringing the outcomes and reflective practitioner approaches together in one model, shown in Figure 9 below:

Figure 9. Model of professional competence (Cheetham and Chivers, 1996)

Cheetham and Chivers’ model of professional competence is useful in bringing the concept of individual competence to bear on the competence of the organisation in a non-manufacturing context, but it still falls short of providing a useful model to link an individual’s behaviour with the business results of an organisation across industries – a generic model, if you will.
Young (2002) does so neatly by extending his individual model to the organisational perspective, adopting the concept of core competence as articulated by Prahalad and Hamel (1990) and further developed by Stalk et al. (1992) and Tampoe (1994), suggesting that the collection of individual competences within the organisation creates the organisational core competence (Figure 10):

[Figure: at the individual level, personal characteristics (motive, trait, image/role, knowledge) underpin competency; behaviour (skill) demonstrates competence; and job activities (functions, tasks) constitute performance. At the organisation level, core competence drives organisation activities (functions, tasks) and performance.]
Figure 10. Individual variables of competency, competence and performance and organisation core competence (adapted from Young, 2002)

This model provides a way to understand how developing competency (personal characteristics and behaviours) at the individual level enables an individual to demonstrate competence (the functions and tasks of the job), which in turn cascades through a hierarchy of the organisation (core competence and other activities supporting the organisation) to deliver business results.

2.4.1 Business results summary


Hierarchical models of competence within an organisation show how an individual’s competencies link to the competence, and hence the performance, of the organisation. Young’s (2002) model clarifies the linkages neatly and simply; this research then focuses on how the hierarchy is managed and on the competencies that give a manager the capability to determine strategy, design core and sub-processes (for many, the value chain or value system) and develop the individual competencies (skills and knowledge) to better deliver the enabling competences of the sub-processes. A manager who has this capability may be considered an effective manager. It is for this reason that the training programme chosen for this research is about strategy development – providing all participants with the knowledge and opportunity to develop the skills in developing strategy. Neither the simulation nor the game was designed specifically to develop the managerial competencies under investigation. Strategy as a subject was also suitable for this research because the simulation and game existed and were directly comparable with a case study based programme.
Given the time-frame for business results to be noticeable and measurable, this research does not measure business results directly but asks participants’ bosses to rate the participants’ performance impact, enabling the author to address the research question that rated performance impact will improve following the training.

2.5 Personality and personal background
This section of the literature review explores possible factors that are often considered to influence how an individual learns and transfers such learning into behaviours that have a positive impact in the workplace. Aspects of an individual’s personality and their influence on learning and competencies have been briefly covered in the sections on management learning and managerial competency frameworks above; this section considers how an individual’s background, such as their age, gender, previous academic attainment, position in the organisation and cultural heritage (Sternberg, 1997), may affect their preference for a particular learning environment and, in turn, their learning and learning transfer. The author also considers the effect of groups of individuals working together as teams within the learning environment to foster teamwork and leadership in the workplace. First, the author examines in some detail the literature on personality and, in particular, the two most frequently considered aspects of personality: learning style and personality type.
Young’s (2002) model linking competency, competence and performance considers personal characteristics to be a fundamental part of competency; this follows other researchers, notably Boyatzis (1982) and McClelland (1973). The purpose of this section of the literature review is to determine what factors may need to be considered when evaluating managers’ behaviour, what influence we might expect an individual’s personality and background to have on their behaviour, and whether personality measures should be included because they may affect performance.
Boerlijst and Meijboom (1989) argue that human behaviour is continually subject to changes in human attributes and attributions. They discuss the differences between long- and short-term locus of concern, suggesting that people adapt a task and identity for self in the long term, whilst they will concentrate on performance of a task and fulfilment of self needs in the short term.
Evidence suggests that individual personality factors have a great effect on
performance (Byrne and Wolfe, 1974, Brenenstuhl and Catalanello, 1977, Henke, 2001).
When attempting to evaluate the effectiveness of a training intervention, we should
consider the potential for an individual’s personality to affect outcomes.
In management development there is a strong influence from humanistic
psychology, and from this a growing practice of student-centred learning (Burgoyne and
Reynolds, 1997b). In particular, the influence of experiential learning theory in
management learning and how individuals learn, or their learning style, is a central consideration in the design and potential success of learning interventions in achieving the
desired outcomes.

2.5.1 Learning styles


As suggested by Sadler-Smith (1996), in addition to factors such as organisational and environmental context, the characteristics of the learner – in particular, their learning styles – should be taken into account when designing, developing and facilitating learning experiences. According to Loo (2002), learning style is intimately related to cognitive style, the learner’s personality, temperament and motivations, and refers to the customary approach in which a learner perceives and processes information, as initially proposed by Kolb (1976).
A close examination of Kolb’s (1976) four stage model of learning (Table 3 on
page 22 above) reveals that learning requires abilities that are polar opposites or
dimensions (action and reflection, concreteness and abstraction) and that the learner must
continuously choose which set of learning abilities to bring to bear in any specific learning
situation. Kolb (1976) argues that most people develop a learning style in which some of
the learning abilities are emphasised over others and proposes that the combination of
dominant abilities along the two dimensions defines an individual’s learning style. Table
15 below, shows Kolb’s four different learning styles highlighting strengths of the style
and a common example.

Table 15. Kolb's four different learning styles (Johnson and Stratton, 1978)
Learning Style | Combination of abilities | Characteristic strengths of style – example
Converging | AC+AE | Practical application of ideas – engineers
Diverging | CE+RO | Imaginative ability – managers with humanities or liberal arts background
Assimilating | AC+RO | Creation of theoretical models – researcher
Accommodating | CE+AE | Doing things – marketing or sales

It is argued that there may be no generalisable learning process, or that an individual is particularly inclined to a preferred learning style (Symons, 1996), and Kolb’s learning styles are not without critics. Beard and Wilson (2001) report that Kolb’s theory is extremely influential and rarely seen as problematic in the field of management education. Located as it is in the cognitive psychology tradition, Kolb’s theory overlooks or mechanically explains the social, historical and cultural aspects of self, thinking and action (Reynolds, 1997a). Holman et al. (1997) suggest that the idea that a manager
reflects like a scientist in isolation of events does not take sufficient account of the social
interactions of a person that are important in the development of the self, thought and
learning. They also question the sequential progress through the cycle – suggesting that it
is rather, practical argumentation with oneself and others that forms the basis of learning.
Reynolds (1997a) presented a critique of learning styles and the readiness of the HRD profession to use instruments such as Kolb’s Learning Style Inventory (LSI) (1984) and the Learning Styles Questionnaire (LSQ) (Honey and Mumford, 1982). Although recognising the intuitive appeal of such instruments, Reynolds questions the validity of the theoretical and empirical evidence. Sadler-Smith (2001), in a reply to Reynolds’ critique, suggests that whilst the labelling of learning styles may be justified, extending the argument to cognitive styles is not, and would close off a promising avenue of research and may deny some individuals access to greater self-awareness and, potentially, development.
Honey and Mumford (1992) criticise Kolb’s LSI as not being meaningful for the managers with whom they dealt, noting that the LSI was based on research with students rather than managers. They developed instead the Learning Styles Questionnaire (LSQ), yielding four separate scores purporting to measure a learning style equivalent to each of the four stages in Kolb’s learning cycle. Honey and Mumford’s LSQ exhibits better validity and reliability than the LSI, though Duff (2000), reviewing studies on the reliability of the LSQ, finds that it may not be a suitable alternative for education researchers.
Learning from experience is often considered to produce tacit knowledge that
affects performance in real-world settings (Nonaka and Takeuchi, 1995, Argyris, 1999).
For example, in a study exploring whether the acquisition of tacit knowledge is influenced by managers’ learning styles, Armstrong and Mahmud (2004) found that Kolb’s Converging learners had a significantly higher level of measured tacit knowledge than Divergers. Such studies suggest that an individual’s learning style may influence how well and how much they learn from a particular style of management development intervention and, in turn, how much and how well they can transfer that learning to the workplace in the form of changed behaviour.

2.5.2 Factors that shape and influence learning


The importance of individual differences in ability to learn and transfer learning
has long been a theme in educational psychology with a growing emphasis on how
learners seek to learn and transform information to create and construct knowledge in their own mind (Pintrich et al., 1986). Yet the literature on training effectiveness has paid little
attention to the potentially dynamic interaction of individual differences with different
segments of training and the ultimate success in terms of learning or transfer (Herold et
al., 2002). Those factors of individual difference most frequently associated with experiential learning theory are discussed by Kolb et al. (2000), who note five particular levels of behaviour that shape and influence learning styles: personality types, early education, professional career, job role and adaptive competencies. Sternberg
(1997) also argues that national culture may be one of several variables (others including
gender, age, schooling and occupation) that are likely to affect the development of an
individual’s thinking styles.

Personality types and motivation


In particular, Kolb et al. (2000) link the Myers-Briggs Type Indicator (MBTI) with learning style, suggesting that the Accommodating learning style corresponds to the Extroverted Sensing type, and the Converging style to the Extroverted Thinking type. Figure 11 maps the MBTI types onto the LSI dimensions.

[Figure: Kolb’s learning cycle – concrete experience, reflective observation, formation of abstract concepts and generalisations (abstract conceptualisation), and testing implications of concepts in new situations (active experimentation) – overlaid with the MBTI dimensions Feeling–Thinking and Extroversion–Introversion.]

Figure 11. Combining LSI and MBTI dimensions (Kolb et al., 2000)

The apparently close linkage to experiential learning theory and learning styles
means that the Myers-Briggs personality type has been frequently used in effectiveness
studies. For example, Patz (1990, 1992) undertook a study to assess the influence of MBTI
dominant personality on the performance of students in Total Enterprise simulations, finding that Intuitive (N) and Thinking (T) dominance among team members was
positively related to performance. However, Patz made no attempt to assess the
complicating influence of group dynamics on his results. Anderson and Lawton (1993)
eliminated this complication by having each individual operate his or her own Total
Enterprise simulation company. They also related simulation performance to the student’s relative MBTI preferences rather than dominant personality type and found no significant differences between personality types at any stage of the simulation’s operation; the direction of the differences found was contrary to Patz’s findings. Gosenpud and Washbush (1992) tested Patz’s hypothesis concerning dominant personality types more directly and found inconclusive evidence.
In addition to the potential for an individual’s personality type to be an important
influence on preferred learning style, personal characteristics may influence their
motivation to learn and their actual performance (Naquin and Holton, 2003). Intuitively,
motivation to learn seems to be an important precondition of learning, and the educational
psychology literature discusses many different concepts of what motivation is and where it
stems from. Well-known conceptions of motivation range from single-motive conceptions, such as Freud’s notion of libido (Gage and Berliner, 1998), to hierarchical conceptions, such as Maslow’s hierarchy of needs (1954). Despite many studies in education, motivation to learn as a variable in training has been largely neglected in training-related research (Clark et al., 1993), and even fewer studies have examined motivation to transfer learning (Holton et al., 2000).
There is a proliferation of conceptual frameworks, but little agreement on the practical issues of how personality and learning style are assessed, and few models have a sound conceptual or empirical base (Sadler-Smith, 2001). There does, however, appear to be agreement that an individual’s personality and motivation are important factors affecting learning and transfer, with most emphasis – within the experiential learning and trait modification approaches – on personality type and learning style, and the implicit suggestion that aligning the design of learning with an individual’s preferred learning style will be more motivating and lead to greater performance.

Early education and age


According to Kolb et al. (2000), early educational experiences shape an
individual’s learning styles through encouraging specific sets of learning skills. They go
on to suggest links between particular undergraduate majors with preferred learning styles.

Of particular interest in this research study is the individual’s age at the time of the
training event. Whilst simulations have been around for over 40 years, few people had the
opportunity in their education to experience them. The power and flexibility of modern
computer technology and its increasing accessibility and affordability means that
computers and the Internet now form a significant part of the education curriculum –
either learning about technology or learning with its assistance. The simulation and management game are examples of comparatively new ways of experiential learning using modern computer technology, and both Prensky (2000) and Aldrich (2002) suggest that younger individuals will prefer such learning methods more than older individuals do. According to
Hannafin et al. (1996), the computer represents a unique learner-centred opportunity;
learners are able to control a wide variety of instructional variables including the subject
matter content, the context of instructional situation, amount of practice undertaken and
the amount of advice to be provided during the instruction. In effect, computer based
learning systems can encourage learners to build relationships among learning concepts
through exploration, experimentation and manipulation much more easily than in a
traditional classroom environment.
This suggests that younger learners may be more accepting of and comfortable
with computer based simulations or games than older learners, in part because computer
technology has been more prolific in education in recent years.

Academic attainment, gender and organisation position differences


The influence of prior educational experience in developing a greater acceptance
of technology supported learning has been discussed above, but the influence of other
individual differences on an individual’s willingness to embrace learning technology is
often overlooked (Hoskins and Van Hooff, 2005). Gender and age are frequently considered factors, in part possibly because the data is easy to obtain. Duff (2004) cites many examples of previous research indicating that age, gender (e.g. Hayes and Richardson, 1995) and prior academic achievement (e.g. Eskew and Faley, 1988) have direct effects on individuals’ approaches to learning and their performance and progression. An individual with a higher level of academic attainment may demonstrate greater ability and understanding of the concepts than an individual with lower academic attainment, and prior attainment is a strong predictor of performance and progression (Duff, 2004).
Studies of the use of learning technology and gender have shown mixed results: earlier studies suggest that males are more comfortable with and active in the use of learning technology (Chmielewski, 1998, Sussman and Tyson, 2000), while more recent studies suggest that this is not the case and that there is essentially no difference between males and females (Hoskins and Van Hooff, 2005). This may be the simple effect of time and increasing ease of accessibility, and suggests that an individual’s gender may affect their preference for using learning technology at an individual level as a result of their personal experience, but overall it is unlikely to be a significant factor.
An individual’s position or grade within an organisation has long been associated
with the demonstration of managerial competencies (Dulewicz and Fletcher, 1982) and it
is reasonable to assume that an individual who is more senior in the same organisation, in
a meritocracy, will demonstrate higher levels of managerial competencies than an
individual in a lower position. It makes intuitive sense that an individual with a higher level of competency before a training intervention has less opportunity to improve; however, there are some suggestions (Anton, 1992) that seniority in a learning-by-doing environment encourages the greater adaptive competencies that Kolb et al. (2000) suggest are the most immediate pressure shaping learning. Thus, through greater experience, a more senior manager adapts the skills required to complete a particular task effectively, achieving congruence between personal skills and the demands of the task (Kolb, 1984, Ridley et al., 1995, 1996). Bertsche et al. (1980, 1991) suggest that computer-based simulations are important tools to support learning and an ideal way of leveraging the experience of senior managers, and this may be explained by the adaptive competencies brought to bear on the learning situation.

Cultural heritage
Several authors argue that national culture may be related to differences in
cognitive style and that such differences may be important in a management learning
situation, especially in an increasingly internationalised environment (Abramson et al.,
1996, Allinson and Hayes, 2000). Hofstede’s (1980) cultural dimensions are often used to
explain differences in outcomes and behaviour from training (Hwang, 1987, De Vos,
1985). The four dimensions Hofstede operationalised, based upon research among IBM
employees in various countries throughout the world, were power distance, collectivism-
individualism, uncertainty avoidance and masculinity-femininity. Power distance refers to
the extent to which members of a national culture group are willing to accept the uneven
distribution of power. Collectivism is the tendency for people to belong to groups and look
after each other. Individualism is the predisposition to look after oneself and immediate
family only. Uncertainty avoidance is the extent to which people feel threatened in
ambiguous situations. Masculinity-femininity refers to the distribution of roles between genders. Masculinity is associated with competitiveness and assertiveness, while
femininity is associated with modesty and a caring disposition.
We might anticipate that national culture would affect an individual’s thinking style, though Kirton (1994) reports a study by Thompson (1980) showing little difference in scores on the Kirton Adaption-Innovation Inventory between English-speaking managers in Singapore and Malaysia and their UK counterparts. However, in a comparative study of cognitive styles, Savvas et al. (2001) found differences between cultural groups and some cross-cultural differences at post-graduate level between UK and Hong Kong groups. It may be that, at the time of Thompson’s research, both Malaysia and Singapore had relatively recently gained independence from Britain and the dominant educational system remained English, while Hong Kong, although under British rule for longer, is physically and economically much closer to Asia, and China in particular. Potentially, these studies suggest that national cultural differences may shape individuals’ learning styles and preferences, but such influence may be defined not by something as simple as race or citizenship but by the influence of the environment at the time.
In this research, we will consider the possible influence that a participant’s cultural
heritage has on their reaction, learning and transfer. In particular, the concept of “face” is
well established in differences of ways of doing business (Ho, 1976) and saving face by
making judgements in private rather than public (Keys and Wolfe, 1990, p316) often
makes for quieter training rooms in the east than those in the west. Computer-based
simulations and games may provide a useful medium to offset this proclivity attributed to
Asians (Hofstede, 1980), allowing participants to make their decisions within the privacy
of the human-computer interface.
The literature review on factors that shape learning has highlighted potential
influences that may be considered through the comparative evaluation of training
methodologies as is the intention of this research study. Another important factor to
consider in relation to the way people learn and transfer that learning is the effect of team
working and group dynamics.

Teams, group dynamics and performance


Management involves group problem solving and leveraging cross-functional
expertise, and simulations provide management teams with enhanced means of learning
from experience and from each other, as the “group must work together for optimal
learning to occur” (Keys and Wolfe, 1990, p316).

A great deal of research in group dynamics and more recently, teams, suggests a
range of potential benefits arising from team working in an organisational context. Higgs
(1999) notes the benefits highlighted in the literature as:
• Performance: suggesting that teams are more productive than individuals or
competing groups,
• Satisfaction: that team members are more satisfied when working in a team, and
• Pooling information: that teams pool information and discuss to improve the
quality of decisions.
Belbin’s seminal work (Belbin et al., 1976, Belbin, 1981) on effective teams shows that the mix of differing ‘types’ within a team has a significant impact on team performance. Based on a long-term study, Belbin presented evidence of the
performance of different groups of managerial teams on a computerised management
game included in the Henley General Management course. This work was developed in an
experimental setting yet its findings have been widely upheld within organisations and
supported by a range of follow-up studies demonstrating the organisational applicability of
the findings. Margerison and McCann (1985) in similarly structured research have
demonstrated similar results of comparative performance on differing mixes of team roles.
In a study of some 100 managers, Dulewicz (1995) found evidence of clear
linkages between Belbin’s team roles and the supra-competencies developed by Dulewicz
and Fletcher (1982, Fletcher and Dulewicz, 1984). These competencies were also linked to
the 16PF and OPQ instruments and both were shown to be related to the Belbin Team
Roles.
In developing the competencies of the team and team members, Bal (1995) argues that effective development depends on the team members developing trust and collaboration, without which the team working environment is less conducive to the exchange of ideas and learning. Furthermore, he argues (p147) that teams working in close proximity engender a feeling of belonging, encouraging freer debate and discussion. Such positive feelings are likely to be seen in an individual’s reaction to team working within the training programme and to be reflected in greater performance in both learning and learning transfer – particularly in those competencies closely associated with working with others and teams.

2.5.3 Personality and personal background summary


Acquiring competencies and knowledge and transferring them to the workplace is
a complex process with many potential factors affecting each individual’s learning from a particular training and application of the learning afterwards. The literature on personality
and personal background provides some insight into the multitude of variables that can be
considered in management learning. The question then is which factors are most relevant
or considered to be the most influential with regard to improving effectiveness and then, in
relation to the intended study, which of those factors can to an extent be controlled and/or
measured.
A number of studies, particularly those in the field of simulation- or game-based learning, consider that an individual’s learning style and/or personality type greatly influence enjoyment and the (usually perceived) usefulness of the learning. The review has considered the literature on learning style and personality type, discussing the strong support from those closely associated with experiential learning theory, and those who critique the simplicity and, in particular, the measurement instruments commonly used (Mainemelis et al., 2002). The evidence from studies of performance in simulations suggests that individual personality factors, as measured by the MBTI instrument, are less reliable measures for predicting performance (Schneier and Beatty, 1977, Gosenpud and Washbush, 1992, Anderson and Lawton, 1993) and that group or team dynamics have a greater effect on performance in a simulation (Kickul, 2001, Dulewicz and Herbert, 1992, Dulewicz, 1995, Belbin, 1981). This suggests that when attempting to evaluate the effectiveness of a training intervention to develop managerial competencies, we should consider the potential of personality and team dynamics to affect the outcomes, particularly when measuring changes in behaviour.
Individuals’ motivation to learn and motivation to transfer are considered by many to be of particular importance; these may be closely associated with an individual’s personality, but they are also concerned with the learning environment and methods employed (Russ-Eft, 2002), as well as the potential effect of working in groups or teams (Higgs, 1999). There does not seem to be a definition of motivation that works in a generalisable way, though attempts to measure the transfer climate show potential for predicting the effectiveness of training (Holton et al., 2000, Holton et al., 2001).
The section has also reviewed other factors considered to shape learning and
influence the effectiveness of learning and transfer including some of the key factors
identified by Sternberg (1997), gender, age, schooling and occupation, as well as the
possible influence of national culture. In respect of the latter, the review has considered
Hofstede’s (1980, 1991) work in particular in this area.

The final section of the literature review considers previous research in the field of study, drawing together the work of other researchers and noting the calls for further research and the limitations of studies previously undertaken.

2.6 Previous research in simulations and games


Although researchers have lauded the benefits of simulations for more than 40 years (Keys and Wolfe, 1990), very few of these claims are supported by substantial research regarding the learning benefits of the technique (Brenenstuhl and Catalanello, 1977, Keys and Wolfe, 1990, Hannafin et al., 1996, Gopinath and Sawyer, 1999). A large amount of the business gaming literature has dealt with how the method fared against traditional methods for delivering course material (Keys and Wolfe, 1990).
For example, the studies by Kaufman (1976), McKenney (1962, 1963), Raia (1966),
Wolfe and Guth (1975), and Kenworthy and Wong (2005) found superior results for
game-based groups versus case groups either in course grades, performance on concepts,
examinations, or goal-setting exercises. Yet a review of 68 studies that compared the
instructional effectiveness of simulations with other instructional methods by Randel et al.
(1992) revealed that a majority (56%) of them found no difference between simulations
and other pedagogical methods, 32% found that simulations led to better student
performance, and only 5% favoured traditional instruction.
There are also a number of empirical studies that have examined the effects of
game-based instructional programmes on learning. For example, both Whitehall and
McDonald (1993) and Ricci et al. (1996), found that instruction incorporating game
features led to improved learning. In a recent review, Druckman (1995) concludes that
games seem to be effective in enhancing motivation and increasing student interest in the
subject matter, yet the extent to which this translates into more effective learning is less
clear.
Business gaming’s largest set of literature has dealt with factors that concern the
nature of the simulation itself, the particular aptitudes and abilities of those playing the
business game, and the game administration by the instructor (Keys and Wolfe, 1990).
Certo (1976), Keys (1977), and McKenney (1967) highlight the importance of instructor guidance during crucial stages in the use of a simulation or game, and of the debriefing stage to ensure some degree of closure and summary insights gained from the experience.
Garris et al. (2002) agree and found that the role of the instructor in debriefing learners is a
critical component in the use of instructional games, as are other learner support strategies.

Education literature has further suggested that the use of simulation has different
effects on different students due to variations in individual methods of processing
information and subsequent learning (Snyder and Vaughan, 1996). As highlighted by
Feinstein et al. (2002), computer simulation is not an educational panacea. Simulation
design must be rooted in research on teaching and learning tools, and resources provided
must support the learner’s efforts (Salomon, 1993).

2.6.1 Support for simulations and games


Hoberman and Mallick (1992) and Geber (1994) suggest an impressive number of
benefits of training using simulations including: 1) Improved transfer of learning to the
work venue; 2) Well-suited for teaching participants how to respond to change; 3)
Relatively risk-free environment in which to try new behaviours; 4) Higher participant
involvement and motivation; 5) Ability to manipulate several variables at once; and 6)
Potential for immediate feedback.
Researchers have identified benefits that are unique to simulation techniques: improved ability to teach teamwork (Keys et al., 1994); a unique contribution to long-term strategy making (Gopinath and Sawyer, 1999); demonstrating the complexities of dynamic business systems (Romme, 2003); and a positive relationship between business game experiences and outcomes such as income and organisational position (Wolfe and Roberts, 1993). It has also been proposed that simulations seem well suited to promoting what Argyris and Schon (1978) have termed ‘double-loop learning’: a learning climate that supports valid information, free and informed choice, and internal commitment; and strategic knowledge that requires applying learned principles to different contexts or deriving new principles from general or novel situations (Garris et al., 2002).
Simulations have become widely accepted pedagogical techniques, in part because
participants are more actively involved in the learning process and receive immediate
feedback on the results of their actions (Brenenstuhl and Catalanello, 1979). This supports
Senge’s (1990) view that human beings learn best through first-hand experience,
particularly when feedback from actions is rapid and unambiguous. Garris et al. (2002) support this view and propose that the game cycle is iterative, in that game play involves repeated judgement-behaviour-feedback loops; user reactions to these lead to greater persistence or intensity of effort because the play is enjoyable, interesting and builds confidence. Support for greater learning through simulations is one aspect; Swanson and Holton (1999) also argue that learning activities that recreate work situations, such as simulation-enhanced learning activities, foster better transfer of learning.

Henke (2001) emphasised that a central characteristic of games is that they are fun and a source of enjoyment. Schank (1997) and Prensky (2000) agree, and stress that the best way to break through resistance and apathy to learning is with an opening that is immediately involving and fun.

2.6.2 Criticisms of simulation and game research


In spite of the extensive literature, many of the claims and counterclaims for the
teaching power of business games and simulations rest on anecdotal evidence or
inadequate or poorly implemented research. These research defects, according to Keys and
Wolfe (1990), have clouded the business gaming literature and hampered the creation of a
cumulative stream of research.
Most studies have been conducted using college students as subjects for the experiments (Chang, 2003, Cook, 1999, Kim et al., 2002), yet Babb et al. (1966) stressed nearly 40 years ago that there are striking differences in behaviour and game results between students and experienced managers. They concluded that these differences in behaviour raise serious questions as to the extent to which results from student behaviour in management games can be generalised to behaviour patterns in the business world.
The inability to make supportable claims about the efficacy of simulations can be
traced to poorly designed studies, the lack of a generally accepted research taxonomy, and
no well-defined constructs with which to assess learning outcomes (Feinstein and Cannon,
2001, Gosenpud, 1990). Salas and Cannon-Bowers (2001) highlight the somewhat misleading conclusion that simulation (in and of itself) leads to learning; unfortunately, most of the evaluations rely on trainee reaction data and not on performance or learning
data. There are also such a variety of stimuli (e.g. teacher attitudes, student values,
teacher-student relationship) in the complex environment of a game that it is difficult to
determine the exact stimuli to which learners are responding (Keys, 1977).
Gosen and Washbush (2004) point out that although it seems appropriate to undertake research assessing the value of simulations, the majority of early studies focused on performance in a simulation. However, research on the relationship between learning and performance suggests that the two variables do not co-vary, that performance is not a proxy for learning, and that it is inappropriate to assess simulations using performance as a measure of learning (Washbush and Gosen, 2001, Wellington and Faria, 1992). There is evidence to suggest that simulations are effective, but the studies showing these results do not meet the highest standards of research design and measurement, and any conclusion about them must be tentative (Gosen and Washbush, 2004).

Gredler (1996) notes, as did Pierfy (1977), that a major design weakness of most studies evaluating simulation-based training is that the simulation is compared to regular classroom instruction, although the instructional goals for each can differ. Similarly, many studies show measurement problems in the nature of the post-tests used.
Rose (1995) and Whitelock et al. (1996) are amongst many (McKenna, 1996, Clark and Craig, 1992, Reeves, 1993) who criticise the evaluation of learning effectiveness using interactive technologies (games, multimedia, simulations and virtual reality) and ask for more research to quantitatively evaluate the real benefits of these forms of learning and teaching. Since the early days of simulation and gaming as a teaching method, there have been calls for hard evidence to support the teaching effectiveness of simulations (Hays and Singer, 1989). Feinstein and Cannon (2002) suggest that in spite of the extensive literature, it remains difficult to support even the most fundamental claims for the efficacy of games as a teaching pedagogy; there is little hard evidence that simulations produce superior learning to other methodologies. They trace the reasons to the selection of dependent variables and the lack of rigour with which investigations have been conducted.

2.6.3 Evaluation of simulations


The criticisms above show that it is important to develop a coherent framework for pursuing the evaluation problem. Three prominent constructs appear in the literature: fidelity, verification and validation. Validation is further split into two: validation of the simulation programme as a method of teaching, and validation that the simulation programme, as a training intervention, produces (or helps to produce) learning and transfer of learning (Hays and Singer, 1989). Yet fidelity and verification are easier to evaluate (though not necessarily to measure objectively) and often distract evaluators from the trickier issue of validation.

Simulation fidelity
Fidelity is the level of realism that a simulation presents to a learner; that is, how similar the training situation is to the operational situation, in order to train most efficiently (Hays and Singer, 1989). Fidelity focuses on the equipment that is used to simulate a particular learning environment.
In technologically sophisticated simulations that use virtual reality, for example, the construct of fidelity has an additional dimension: presence. Presence is more than the suspension of disbelief whereby users ignore ‘unreality’ (Bailey and Witmer, 1994, Witmer and Singer, 1994, Dede, 1997, Salzman et al., 1999); it is the degree to which an individual believes that they are immersed within the virtual world of the simulation (Witmer and Singer, 1994).
The degree of fidelity or presence in a learning environment is a difficult element to measure (Feinstein and Cannon, 2001, 2002), but much research in the 1960s and 70s studied the relationship between fidelity and its effects on training and education. These studies, according to Feinstein and Cannon (2002), found that a higher level of fidelity does not translate into more effective training or enhanced learning (see Salzman et al., 1999, Stanney et al., 1998, Gagne, 1984, Alessi, 1988). In fact, lower levels of fidelity, combined with an effective Human Computer Interface (HCI) or navigational simplicity and a significant degree of presence, may assist trainees in acquiring knowledge or skills within the simulation (Pegden et al., 1995).

Simulation verification
Verification is the process of assessing that a model is operating as intended. During this process, simulation developers test and debug the software through a series of tests under different conditions, verifying that the model works under those conditions. Often, developers are distracted by this process, producing what appear to be brilliant models that work ‘correctly’ but with no appreciation of the educational effects and hence their validity. It is a crucial step, but it can be a trap (Feinstein and Cannon, 2002).

Simulation validation
Building on the work of Feinstein and Cannon (2002) and others (Cannon and Burns, 1999, Teach and Govahi, 1988, Teach, 1989, Miles and Randolph, 1985, Anderson, 1982, Anderson and Lawton, 1997a), there are three main questions beyond the general issues of evaluation noted above:
1. Are we measuring the ‘right thing’? – Validity in the constructs
2. Does the simulation provide the opportunity to learn the ‘right thing’? – Verification of the simulation and the appropriate fidelity for the content and audience
3. Does the method of using a simulation deliver the learning? – Validity as a method of development.
Figure 12 shows these three faces of simulation validation:

Figure 12. Three faces of simulation evaluation (adapted from Anderson, Cannon, Malik, and Thavikulwat, 1998)

2.6.4 Evaluation of simulations for learning outcomes


An analysis of the simulation literature identifies the learning outcomes that instructors adopt as they strive to educate business students. These learning outcomes have been advanced as targeting the skills and knowledge needed by practicing managers (Gosenpud, 1990). Simulation researchers have speculated that the method is an effective pedagogy for achieving many of these outcomes (Table 16).
It is clear from the table below that the vast majority of evaluations have relied on learners’ perceptions of their learning outcomes. Objective measurements (for any learning intervention) are more difficult; however, there appears to be a need to bring in more objective measures to help understand whether simulations are an effective method for people to learn business management skills and to transfer that learning to the workplace.

Table 16. Possible learning outcomes for simulations (adapted from Anderson and Lawton, 1997) (P = assessed by perception-based measures; O = assessed by objective measures)
Facts and concepts of the business discipline
Increase the student’s knowledge of basic principles and concepts of the discipline
Interpersonal skills
Improve the student’s ability to…
Participate effectively in group problem solving P
Motivate coworkers P
Provide meaningful feedback to coworkers P
Resolve conflicts P
Communicate clearly with coworkers P
Develop people P
Lead P
Form coalitions P
Develop consensus P
Delegate responsibility P
Supervise P
Manage People P
Work as a member of a team P
Work in a group environment P
Appraise performance P
Increase the student’s knowledge of human behaviour in a group setting P
General analytical, critical thinking, problem-solving, or decision-making skills
Improve the student’s ability to…
Identify problems P
Frame problems P
Structure unstructured problems P
Analyse problems P O
Use data to make better decisions P
Distinguish relevant from irrelevant data P
Interpret data P
Implement ideas and plans P
Make decisions using incomplete information P O
Solve problems P
Solve problems creatively P
Solve problems systematically P
Make good decisions P O
The interrelationships among things
Improve the student’s ability to…
Integrate material from various functional areas of business P
See the ‘big picture’ P
Increase the student's understanding of the complex interrelationships in a business organisation P
Increase the student's understanding of why organisational subsystems must be integrated for organisational effectiveness P
Business specific knowledge and skills
Improve the student’s ability to…
Assess the situation quickly P
Plan effectively P
Plan business operations P
Schedule and coordinate P
Prioritize tasks P
Forecast O
Use spreadsheet for decision-making P
Increase the student’s understanding of…
The decision process P

2.6.5 Previous research summary
The review of previous research in simulations helps to position this research
clearly: it is concerned with validating (Feinstein and Cannon, 2002) the use of games
and simulations as a method of delivery, and we might expect, following Wolfe and Guth
(1975) and others, that both games and simulations would show greater learning than using
case studies. We might also anticipate, following Hoberman and Mallick (1992) and
Geber (1994), that games and simulations would show a greater transfer of learning into
the workplace. However, most previous studies have relied on perceived benefits,
and the literature frequently calls for [more] empirical research into the effectiveness of
games and simulations.
The concept of fidelity, and the associated sense of presence, is an important aspect to
consider in using simulations or games in training. The evidence from studies suggests
differences regarding the enjoyment and perceived usefulness of the experiential
activity within the programmes. Fidelity is thought to be an important factor in user
enjoyment, but greater fidelity does not necessarily cause greater enjoyment or learning
(Feinstein and Cannon, 2001), nor does the converse hold; it seems that the level of
fidelity or realism is a fine balance that creates sufficient presence for the user
(Salzman et al., 1999, Stanney et al., 1998).

2.7 Literature review summary


The need for research in this broad field is driven by the substantial spending by
organisations, and by academic institutions, worldwide on technology that supports
management development. The interest in, and use of, computer-based simulations and games
is growing significantly as a means of accelerating the learning process, driven by the
belief that such methods are more enjoyable for users, generate greater learning and hence
are more effective than more traditional methods of training and teaching. Yet, as we have
seen, there is little empirical evidence to support the notion.
The management learning literature section has discussed the emphasis on what
people learn and how such learning is transferred to the workplace, and the different
schools of management learning that help us understand the influences of different
theorists and practices, and what that means we, as trainers and educators, assume about
the way people learn and behave. An important aspect of management learning is how we
evaluate the effectiveness of development, and the various schools of thought consider
what can, and should, be measured. In the realm of business training, Kirkpatrick's four
levels of evaluation remain the most well known and used; in spite of the possible
shortcomings of the model, its very simplicity has probably ensured its continued ubiquity.
Learning itself is a complex and widely debated construct, and as researchers seek
to transform it into an observable phenomenon there is an equally, if not greater,
complexity of factors that may or may not affect the way an individual learns and applies
or transfers learning to another situation. The educational and cognitive psychology
literature shows many competing and often contradictory theories to explain the process of
learning and how it is affected by myriad influences at any given time, though there is a
seeming consensus that some factors have a greater effect on learning and transfer, notably
learning style (or cognitive style), motivation and some key defining elements in a person's
background, such as their national culture, gender, age and schooling, and their adaptive
competencies. Some factors are easy to identify, whilst others are complex in their own
right and may not be observable; however, as human beings we experience them ourselves
even if we are unable to explain them.
Finally, the literature review considered previous research in this field of study,
what other researchers have found, and the particular complexity of evaluation when some
elements of computer-based simulations and games are difficult to identify precisely –
fidelity in particular is believed to be an important factor in learning from simulations,
yet it is a construct we have difficulty in observing.
Computer-based simulations and games are regarded by many as an important tool
in the educator's and trainer's armoury of methods. For some organisations, they represent
a critical and only alternative to on-the-job learning (such as airline flight simulators)
whilst, for others, they represent a new and more effective way to develop knowledge and
skills. The former is a belief that anyone wishing to board an aircraft is happy to accept
unequivocally; the latter has yet to be demonstrated. The purpose of this research is, at
least, to make a start on that by answering the research questions and hypotheses in the
following chapter.

Chapter 3 Research Questions and Hypotheses
The author's research interest is to establish whether computer-based simulations and
games used in management development programmes are as effective as more traditional,
case-study, methods. Following Schumann et al. (2001), to overcome some of the issues
surrounding evaluation, particularly of experiential learning events such as simulation-based
training, effectiveness is assessed using a holistic framework based on Kirkpatrick's four
levels, which provides a means of making a useful generalised assessment.
In particular, this research investigates learning and learning transfer –
Kirkpatrick's second and third levels of evaluation. The literature on management
learning, evaluation and competencies helps develop the first research question: do the
development interventions show a positive change in learning and management behaviour?
The overriding question of this research is:
RQ1. Are computer-based business training simulations and games an effective way
to develop management learning and learning transfer?
The hypotheses underlying this question require that the training in all groups
shows that participants have learned something and demonstrate a positive behaviour
change representing the transfer of learning to the workplace. Continuing to use
Kirkpatrick’s four levels of evaluation and the evidence from previous research (Wolfe
and Guth, 1975, Wolfe, 1985, Swanson and Holton, 1999), this is broken down into four
hypotheses:
H1.1. The simulation and game groups will show higher ratings in participant
reaction than the case study groups
H1.2. The simulation and game groups will show greater learning than the case
study groups
H1.3. The simulation and game groups will show greater change of demonstrated
managerial competency (learning transfer) than the case study groups.
H1.4. The simulation and game groups will show higher boss-rated
performance change than the case study groups.
We also anticipate that there will be a degree of positive correlation between each
of the four levels of evaluation, following the implication in Kirkpatrick (1959/60, 1974):
H1.5. Participants' reaction will correlate with learning
H1.6. Participant learning will correlate with change in managerial competency
H1.7. Participant change in managerial competency will correlate with bosses'
rating of performance impact.
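By way of illustration only, the following minimal sketch (in Python, with hypothetical
column names standing in for the operationalised measures described later; this is not the
study's actual analysis code) shows how such level-to-level correlations might be computed:

    # Minimal sketch of testing H1.5-H1.7; file and column names are hypothetical.
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("participants.csv")  # assumed layout: one row per participant

    pairs = [("reaction", "learning"),              # H1.5
             ("learning", "mcq_change"),            # H1.6
             ("mcq_change", "boss_rating_change")]  # H1.7
    for x, y in pairs:
        r, p = pearsonr(df[x], df[y])
        print(f"{x} vs {y}: r = {r:.3f}, p = {p:.4f}")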
This research includes participant assessment of both learning style and
personality type for the purposes of comparison, and to evaluate whether learning style
preference is reflected in the enjoyment, perceived usefulness, learning and behaviour
change of individuals on the programmes. The research question developing from this,
following Kolb, is that we would expect Converging learners to enjoy the simulation or
game activities, and to find them more useful, than learners with other preferences.
Kolb's learning style model suggests that Converging learners (those combining the
learning steps of Abstract Conceptualisation and Active Experimentation) will prefer
formal learning situations where they can experiment with simulations (Kolb, 1999).
RQ2. Do participants with different learning style preferences show differences in
reaction, learning or learning transfer?
Tentative hypotheses are developed from this:
H2.1. Converging learners will prefer, and show greater improvement in
managerial competency from, a computer-based simulation or game intervention
than individuals with other learning style preferences.
Following Kolb et al. (2000) and mapping MBTI types to the learning cycle, we
might expect that an individual's personality type and their preferred learning style will
show commonality. In particular, this research is interested in the converging learning
style preference that Kolb (1999) suggests would also show a preference for using
simulations in formal learning situations:
H2.2. An individual with MBTI type EN will prefer a converging learning style
on the LSI
Following Patz's (1990, 1992) studies, which found differences between personality
types in learning and in perceived usefulness for application in the workplace:
H2.3. An individual with MBTI type EN will prefer, and show greater
improvement in managerial competency from, a computer-based simulation or
game intervention than individuals with other dominant personality types.

After Higgs (1999) and Keys and Wolfe (1990), teamwork is considered an
important factor in participant enjoyment, learning and learning transfer. Higgs's (1999)
suggestion that teams are more productive than individuals or competing groups develops
into a research question that greater learning may take place in the simulation or case
study teams than in the competing groups in the game. Similarly, from satisfaction and
the pooling of information, members who enjoy the team working and find it useful may be
expected to learn more. The researcher would also anticipate, after Bal (1995), that the
close team-working environment and exchange of ideas engenders greater feelings of
belonging to the team, and that this would be associated with enjoyment and perceived
usefulness of the team and subsequent performance.
RQ3. Does participant rating of their enjoyment and perceived usefulness of team
work reflect differences in learning or learning transfer?
H3.1. Participant rating of enjoyment and perceived usefulness of teamwork will
positively correlate with learning and change in managerial competency.

This research will also consider the influence of personal characteristics including
age, gender, cultural heritage and organisational position after Sternberg (1997):
RQ4. How do age, gender, seniority, prior education and cultural background
influence the results?
Following Aldrich (2002), we may anticipate that younger managers will show a greater
preference for computer-based simulations and games than older managers:
H4.1. Younger managers will enjoy the simulation or game more than older
managers.
H4.2. Male and female participants will show no distinguishable differences in
competencies before or after the training intervention
H4.3. Male and female participants will show no significant difference in change
of managerial competency
Following Spencer and Spencer (1993) and Dulewicz (1992) we might anticipate that
senior managers will show a greater level of managerial competency than more junior
managers before the training.
H4.4. Senior managers show a greater level of managerial competency before the
training.
Following logic and the literature on management learning, we might anticipate
that:
H4.5. Participants with higher prior academic achievement demonstrate higher
levels of managerial competency

Following Hofstede (1980) and Sarawano (1993), this research also considers how cultural
heritage affects performance in the simulation and learning. Following Savvas et al.
(2001), this is phrased as a null hypothesis:
H4.6. There will be no difference between participants from a different cultural
heritage at the four levels of evaluation.
A summary of the research questions and hypotheses is presented in Table 17
below:

Table 17. Summary research questions and hypotheses


RQ1. Are computer-based business training simulations and games an effective way to develop management learning and learning transfer?
    H1.1. The simulation and game groups will show higher ratings in participant reaction than the case study groups
    H1.2. The simulation and game groups will show greater learning than the case study groups
    H1.3. The simulation and game groups will show greater change of demonstrated managerial competency (learning transfer) than the case study groups
    H1.4. The simulation and game groups will show higher boss-rated performance change than the case study groups
    H1.5. Participants' reaction will correlate with learning
    H1.6. Participant learning will correlate with change in managerial competency
    H1.7. Participant change in managerial competency will correlate with bosses' rating of performance impact
RQ2. Do participants with different learning style preferences show differences in reaction, learning or learning transfer?
    H2.1. Converging learners will prefer, and show greater improvement in managerial competency from, a computer-based simulation or game intervention than individuals with other learning style preferences
    H2.2. An individual with MBTI type EN will prefer a converging learning style on the LSI
    H2.3. An individual with MBTI type EN will prefer, and show greater improvement in managerial competency from, a computer-based simulation or game intervention than individuals with other dominant personality types
RQ3. Does participant rating of their enjoyment and perceived usefulness of team work reflect differences in learning or learning transfer?
    H3.1. Participant rating of enjoyment and perceived usefulness of teamwork will positively correlate with learning and change in managerial competency
RQ4. How do age, gender, seniority, prior education and cultural background influence the results?
    H4.1. Younger managers will enjoy the simulation or game more than older managers
    H4.2. Male and female participants will show no distinguishable differences in competencies before or after the training intervention
    H4.3. Male and female participants will show no significant difference in change of managerial competency
    H4.4. Senior managers show a greater level of managerial competency before the training
    H4.5. Participants with higher prior academic achievement demonstrate higher levels of managerial competency
    H4.6. There will be no difference between participants from a different cultural heritage at the four levels of evaluation

Chapter 4 Methodology

4.1 Key choices in methodology


Management research is a rich and diffuse field of study, and the variety of
frameworks available to underpin the conduct of research makes the selection of an
approach potentially difficult. The literature review above includes discussion of the
approaches to the evaluation of management learning, suggesting that a positivistic,
empirico-rational stance derived from the disciplines of sociology, education and
psychology is appropriate, as the majority of books on research methods would indicate
(Easterby-Smith et al., 1991). In this section, this researcher develops a greater sense of
reflexivity in the research, firstly identifying a number of different epistemological and
ontological stances that can be taken, and then critiquing the essentially positivistic
approach identified in the evaluation of management learning literature.
The research questions and hypotheses developed from the literature in the
previous chapter indicate that, whilst essentially derived from the positivistic tradition of
management research, the researcher recognises that the ontological stance is far from a
single truth. The concepts and constructs are widely debated within the literature and there
are many choices of methodological approach, ranging from the traditions of positivism to
social constructionism (Easterby-Smith et al., 1991). Hence, the 'truth' may be complex,
different researchers have widely differing views, and the researcher is faced with a
choice of simplifying a way of knowing a complex truth, or using a complex way of
knowing a simple truth (Worral, 2004). The former, positivistic, stance means the
researcher accepts the complexity of differing views and subjects but seeks to synthesise
these into unified perspectives, while the latter, phenomenological, approach accepts
complexity and allows the researcher to borrow or adapt the protocols of other established
research disciplines. The research may also be positioned as a complex truth ontologically
and require complex ways of knowing epistemologically – here the researcher adopts a
more social constructionist approach that accepts the multiple realities of the differing
views of researchers, exploring on many levels with established methods and protocols or
creating new methods to make some sense of the complexities. Some may argue that this
latter stance is also the domain of the postmodern which, paraphrasing Burr (2003, p204),
means the researcher rejects "grand narratives in theory" and replaces "a search for truth
with a celebration of the multiplicity of (equally valid) truths". Whilst this may be a great
intellectual exercise, it may not provide the business training community with the evidence
that they and other researchers in this field call for.
The literature review on evaluation of management learning (see paragraph 2.2.2,
Evaluation models and taxonomies, on page 24 above) considers the methodological
approaches to the intended study, and the research aims, questions and hypotheses suggest
that a positivistic approach would be suitable. However, there are considerable
practical difficulties in firmly rooting the research within a positivistic epistemology.
Easterby-Smith et al. (1991, p43) provide a useful checklist of key choices in research
design, which will be used to show the assessment against each criterion (Table 18).

Table 18. Key choices of research design (Easterby-Smith et al, 1991)


Researcher is independent vs Researcher is involved
Large samples vs Small samples
Testing theories vs Generating theories
Experimental design vs Fieldwork methods
Universal theory vs Local theory
Verification vs Falsification

4.1.1 Independence of the researcher


In this study, the researcher is unable to be completely detached from the
phenomena being observed, being both closely involved in the training events and in the
measurement of each individual; further, the training events are intended to change the
organisation through changing the individuals participating, and their involvement in the
research process itself suggests an Action Research tradition. The researcher
facilitates all programmes, and the organisations sponsoring participants do so on a fully
commercial basis – the researcher is dependent, for business sustainability, upon positive
outcomes for all regardless of the organisation's choice of activity method, which goes
some way to eliminating the effects of researcher bias identified by Argyris (1980).

4.1.2 Sample size


The positivistic stance suggests that the research study should span a reasonable
sample of organisations and, on a practical level, cultures in South East Asia
(principally Singapore and Malaysia) – crossing cultures is inevitable in such multi-
cultural societies – and different organisations, as this researcher believes, on
anecdotal evidence, that organisations of different ownership, size, location of operations
and location of parent organisation show large differences in the way they operate,
reflecting the organisational culture impact highlighted by Trompenaars and
Hampden-Turner (1993). The sample size to carry this out would be at least 100
participants on the simulation-based programme and a further 100 on the game-based
programme, from at least six different organisations, with a control group of 50
participants on a similarly designed training programme which utilises case studies rather
than a simulation or game. This sample size should be sufficient for discriminant analysis
on the main independent variable (activity type), being in excess of 20 observations in
each programme run, and allow, where appropriate, factor analysis with groups of at least
50 observations (Hair et al., 1998).
A case study approach, according to Yin (1989), may have an advantage over an
experimental approach in this respect: it allows the researcher to ask 'how' and 'why'
questions about the training intervention and the subsequent transfer to the workplace
– i.e. a contemporary set of events over which the researcher may not have control. It may
also allow greater flexibility to understand the complexity of the 'truth' in a more
constructionist way than the above positivistic approach. A case study approach with
smaller numbers is likely to be practically easier to organise than the larger,
cross-organisation sample required in a positivistic study; however, the case study
approach may not be as generalisable – which will be discussed later.

4.1.3 Theory testing or generation


The theories surrounding experiential learning and trait modification are well
researched and, in a positivist paradigm, this study would attempt to test these rather than
generate new theories. The wealth of literature, particularly in respect of experiential
learning theory and competency development, whilst replete with debate and conflicting
views, has the robust tradition of education and psychology research, and a researcher can
choose a particular standpoint and method from which to work.
A more constructionist and qualitative approach may highlight more complexity in
the understanding of how individuals learn and what it is about simulations or games that
helps or hinders that process, and this may allow the researcher to develop new theories.
For this researcher, this would mean a leaning towards the approach of Strauss (1987)
rather than Glaser (1992), as the researcher is already familiar with the prior research in
this field.

4.1.4 Experimental or fieldwork design


The literature points towards a positivistic paradigm, but the practical difficulties
anticipated with a pure experimental design (i.e. preventing external influences after the
training and before the post-test) suggest a quasi-experimental design (Easterby-Smith et
al., 1991), of which the most common method is the pre-test/post-test comparison design.
To anticipate the potential flaws of using a related control group (Easterby-Smith and
Ashton, 1975), this design would use an unrelated control group made up of participants
with similar backgrounds and from similar organisations. While this may not be truly
random, the choices are made by the participating organisations and not the researcher,
and whilst not identical, it recognises the practicalities of undertaking such research in the
business world.
In particular, a case study approach might be considered as an alternative that does
not attempt to compare the use of simulation or game methods with others – a comparison
which, as Pierfy (1977) and Gredler (1996) have pointed out, is the major design weakness
of many previous studies. However, those comparisons were made with traditional
classroom instruction that may have had entirely different objectives – which is not the
situation with the comparison suggested above.

4.1.5 Universality
There is a concern that the research design assumes the universal applicability of a
generic managerial competency model. Researchers have attempted to validate or build
models appropriate to the assessment of managers in the location of this study, Singapore
and Malaysia (Sarawano, 1993, Chong, 1997, Kenworthy and Wong, 2003); these show
some variation in the emphasis placed on particular competencies, with particular traits
more highly valued than in UK and US studies. However, the suggested model is robust
and provides a basis for comparison between similar groups, all of whom are in a similar
external environment. The practical considerations of the research mean that the outcomes
and focus will be on local rather than universal knowledge, reflecting a critical
management research view (Easterby-Smith et al., 1991, Reynolds, 1997b).
Adopting a more universal approach would mean increasing the breadth of the
study to include other organisations in other countries across the world and, in a
positivistic paradigm, attempting to control for the myriad influences of local economic
and environmental impacts for fair comparison. A case study approach would place the
applicability into a more localised context and be considerably easier for the researcher to
take environmental and economic impacts into consideration. The majority of research
studies in this field have been conducted in the US and, to a lesser extent, Europe – hence
adding localised knowledge through a more rigorous and empirical study may allow
researchers to extend the applicability of the findings.

4.1.6 Verification or falsification
Easterby-Smith et al. (1991) recommend a strategy of falsification rather than
seeking evidence that supports the currently held views of the world. There is
some difficulty with this approach as the currently held views are the subject of much
debate, in particular the contrast between the positivistic and constructionist paradigms
highlighted by Reynolds (1997) in his critique of learning styles. The evidence from
previous research in this field is inconclusive: some, for example Wolfe and Guth (1975),
found gaming to be superior to cases, while others suggest that it remains difficult to
support the efficacy of games as a teaching pedagogy (Feinstein and Cannon, 2002).
However, the general leaning in the existing research is that simulations and games have
been evaluated as more enjoyable and as showing greater learning and learning transfer.
As such, this researcher is inclined to seek evidence that supports this current view.

4.1.7 Summary key choices


The methodological discussion above demonstrates that there is no clear and
absolute choice. The research is being carried out in the real world and in a complex,
multi-disciplinary field of study. Ontologically, the standpoint must be that there is not
one universal truth but that there may be multiple truths. Epistemologically, a positivistic
paradigm is certainly not the only option, and many argue that a phenomenon as complex
as management learning demands a critical evaluation leaning towards a more qualitative
approach. However, this researcher is keen to synthesise the complexities into unified
perspectives such that the research may be applied in the real world and partly fulfil the
repeated calls for empirical evidence. In addition, this researcher needs to consider the
willingness of clients to participate in this research (and pay for the privilege) and to
provide them with results that are meaningful and useful. As such, the approach is
maintained within a positivistic paradigm, accepting the complexity of truths and a
scientific method. A case study approach is suitable for consideration and would allow the
researcher to look in depth at one, or maybe a few, organisations and discover how
simulations or games support the management development activities in those
organisations. However, the researcher is more interested in the phenomena at the
individual level, hoping to develop a greater understanding of how simulations and games
may support learning, and of differences between individuals rather than organisations;
this lends itself more to an experimental approach, though there remain complexities to
consider.
4.1.8 Scientific method – ideal but inherently complex
Experiential learning research lacks rigorously designed studies (Gosenpud, 1990,
Easterby-Smith, 1994): there are relatively few studies attempting to assess the learning,
and transfer of learning, effectiveness of experiential learning interventions, and those
that exist lack sufficient rigour. The ideal research design – including random selection of
treatment and control groups, full pre-testing, standardised appropriate post-testing and
capturing all sources of learning that occur in an experiential learning environment for
measurement purposes – is probably impossible to implement in experiential learning
(Feinstein and Cannon, 2002).
Additionally, modern business training simulations are inherently complex both in
terms of learning content and fidelity (Gosenpud, 1990), and the intended outcomes are
vague, since the focus is usually on very complex, abstract phenomena (Schumann et al.,
2001).
To overcome some of the issues surrounding the evaluation of experiential learning
events, Schumann et al. (2001) suggest a framework for assessing the effectiveness of
simulations utilising a holistic approach across all four of Kirkpatrick's levels of
evaluation to provide a means of making a generalised assessment. This concurs with the
literature on evaluation of management learning across different approaches (Feinstein
and Cannon, 2002).

4.2 Research model


While scientific method would suggest that the purest form of test of the
experiential learning model would be one that isolates a single learning cycle, Gibbs
(1988) suggests that this may be neither possible nor desirable, as all experiences (and
therefore the interpretation of those experiences) are influenced by the sum of the
preceding experiences. Easterby-Smith (1994) suggests that the classic design of
experimental research to assess the effectiveness of a particular training intervention
would require two groups, one group to be trained (given the treatment) and a
comparable group not to be trained (given no treatment). Individuals within the
experiment would be assigned randomly to each group and both groups measured
immediately before and after the training. The difference between the groups could then
be attributed to the training received. In any evaluation of experiential learning, the
existing portfolio provides the foundation upon which any test must be based (Morse,
2001).

This research design is based on the ‘before and after’ experimental design
methodology commonly used in education and the social sciences (May, 1993). The test
assumes that the background of each participant remains constant during the cycle and
implicitly accepts the existing portfolio of knowledge, experience, motives, traits and
values. Therefore, a pre- and post-test approach seems most appropriate.
Figure 13 below shows an overview of the research model adopted:

[Figure: overview of the research model. Pre-test measures: learning style (LSI III),
personality and background (gender, position, age, race), competencies (MCQ 180) and
boss performance rating. Training event: simulation, game or case study. Post-test
measures across the four levels: Level 1 – Reaction (enjoyment, usefulness for learning);
Level 2 – Learning (presentation to senior managers); Level 3 – Transfer (MCQ 180
difference); Level 4 – Results (boss performance rating difference). The model examines
the affect of learning style, personality and background on the results, differences
between the activity groups, and correlations between the levels.]
Figure 13. Research model

4.2.1 Validity, reliability and generalisability


The research design is based in the positivistic paradigm yet recognises the
pragmatic realities of undertaking the research and, as such, lends itself more to the
relativist viewpoint. This section summarises the relativist viewpoint based on Easterby-
Smith et al.'s (2002, p53) summary table of validity, reliability and generalisability:

Validity
The research design requires at least 50 appropriate participants undertaking a
simulation-based training programme, 50 on a game-based programme and 50 on a case
study based programme (acting as a control group), from at least six different organisations
in Singapore and/or Malaysia, the organisations representing a cross-section of those
typically found in these countries (local and foreign MNCs and SMBs). This is considered
to be a sufficient number of perspectives to include for validity for statistical purposes
(Remenyi et al., 1998).
The Managerial Competency assessment used for pre- and post-testing will be on a
180º basis, requiring participants to self-assess and to nominate at least two third parties to
assess the participant independently (Higgs and Rowland, 2001). A full 360º process
would be desirable, but the time required of non-involved parties is considered to be too
demanding (Wimer, 2002).

Reliability
The literature and on-going research (particularly in the US) will provide
comparable observations to assess the reliability of the outcomes from the research. The
statistical methods used will be rigorous and transparent.

Generalisability
The research will seek to observe patterns which may be applied to a particular
group of the population and may be specific to a particular identifiable set of individuals –
though the initial intention will be to present evidence to confirm or contradict the
hypotheses presented above.

The chosen research model allows the researcher to take an appropriate
epistemological and ontological standpoint, yet there are issues which need to be addressed
or understood as potential limitations on the research.

4.2.2 Issues with experimental design


Easterby-Smith (1994) continues with warnings against experimental design,
suggesting that there are innumerable problems in achieving matching control groups, and
cites several studies where difficulties arose in interpreting results because either the
control group was not truly random (Easterby-Smith and Ashton, 1975), the accepted
criterion was open to debate (Bligh, 1971), or the experiment may have been
methodologically flawed (Partridge and Scully, 1979). There are, however, dangers in
more qualitative methods too, as in a study by Argyris (1980), who found that, despite best
efforts to assess the delivery methods of faculty, the behaviour of faculty in practice was
contrary to their espoused theories and values.

4.2.3 Learning evaluation design
Anderson and Lawton (1997b) suggest that there are two models to choose from
for assessing the effectiveness of a simulation: a pre- and post-test design to
measure the learning (using an objective measure), or an after-only test design using a
random control group. They advocate the latter approach but recognise that whilst it
may highlight the difference between the different pedagogies used, it does not measure
learning at an individual level. Since we are likely to be affecting the outcomes anyway by
becoming involved (action research), and since, ethically, it is difficult to justify
deliberately giving (even if randomly) a treatment that one believes is inferior (researcher
bias) – such methodological approaches are unethical (Remenyi et al., 1998) – the design
includes a measure of learning that is post-test only.

4.2.4 Quasi-experimental design

As discussed above, the difficulties in assigning individuals to random groups
mean that a true experimental design is not feasible (Easterby-Smith et al., 1991) and is
precluded (Ross and Morrison, 2003). As such, this research will be a quasi-experimental
design. Pre-testing of each individual presents the opportunity to qualify the similarities of
each of the groups and provides a benchmark for the post-test to establish
change in individuals' behaviour at the workplace according to a 180º assessment. The
programmes are scheduled to run within the same time frame in an attempt to minimise
differences between groups caused by major economic environmental changes (Figure
14).

[Figure: research design timeline. The game group, simulation group and case study group
each complete the pre-test, their training course and the post-test within the same time
frame.]
Figure 14. Research design
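
To illustrate the comparison this design implies, the minimal sketch below computes
pre-test/post-test change scores and compares the three activity groups with a one-way
ANOVA (one possible analysis, written in Python with hypothetical file and column names;
it is not the study's actual analysis code):

    # Minimal sketch of the pre-test/post-test group comparison; data layout is hypothetical.
    import pandas as pd
    from scipy.stats import f_oneway

    df = pd.read_csv("mcq_scores.csv")             # assumed: one row per participant
    df["change"] = df["mcq_post"] - df["mcq_pre"]  # change in demonstrated competency

    # Split change scores by activity group: simulation, game or case study
    groups = [g["change"].values for _, g in df.groupby("activity")]
    f_stat, p_value = f_oneway(*groups)
    print(f"One-way ANOVA on MCQ change: F = {f_stat:.2f}, p = {p_value:.4f}")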

Internal validity – threats to the research

There are threats to the internal validity of the research, and these are considered
using Campbell and Stanley's (1966) classification of history, maturation, testing,
instrumentation, statistical regression, selection, experimental mortality and diffusion of
treatments. This provides a comprehensive view of potential and real threats to be
considered and mitigated against where possible. Each factor is defined for clarity, and
how it may affect the research is discussed:

History
Events occurring other than the training event may influence the results. As the
intervention occurs over time, participants will be exposed to other influences, and learning
or change may take place. This is considered inevitable, as the research cannot be
conducted under laboratory conditions, but such events should be random and equitable
across all groups. Major events occurring during the period, discovered through observation
or discussions with individuals, will be noted and assessed if the impact does not affect all
groups equally. For example, a rapid devaluation of currency in one country, as occurred
in 1997 in South East Asia, may have a greater impact on one or two groups than on
others, as their currency of trade is adversely affected.

Maturation
This factor considers the physical or psychological changes taking place within the
subjects during the experimental period. This would represent a greater concern for young
subjects, such as school children, who change rapidly in a relatively short space of time.
All subjects in this research are adults and the period over which they are measured is of
three months' duration; hence, the researcher does not anticipate that subjects will show
great maturation during this time-frame.

Testing
This considers that exposure to a pre-test, or an intervening assessment, influences
performance in the post-test. This is anticipated to influence subjects' behaviour
change in the workplace; because of this, all participants in all groups will take the
same pre-assessments and receive feedback only after the post-test, in an attempt to nullify
the argument that the process becomes a self-fulfilling hypothesis (Burgoyne and Cooper,
1975).

Instrumentation
This refers to the inconsistent use of testing instruments or conditions. To alleviate
this potential problem, all instruments used are fixed, on-line assessments using
standard web browser technology, providing equitable and directly comparable results.
Subjects are asked to complete the instruments under normal work conditions within a
particular time-frame.

Statistical regression
This considers that subjects scoring very high, or very low, on the pre-test naturally
tend to score closer to the mean (regress) on the post-test (Ross and Morrison, 2003).
Scores may also be affected as subjects are made more self-aware and become more
open to learning from others – the Johari window (Marsick and Watkins, 1990). Not
revealing the pre-test scores until after the post-test, as suggested under 'Testing' above,
should alleviate this potential problem.

Selection
This refers to there being a systematic difference between the treatment groups
under comparison. In this study, the subjects of the research are chosen by the client
organisation; as such, personal details such as prior educational qualification
attainment, age, position in the organisation, gender and race will enable the researcher to
show the similarities and differences apparent between individual subjects and between
the groups.

Experimental mortality
This refers to the loss of subjects during the treatment period and potential
measurement bias as a result. Such is not within the control of the researcher and subjects
lost before the post-test will not form part of the analysed data.

Diffusion of treatments
This considers that a particular treatment given to one group influences the behaviour
of the comparison group. In this study, unlike Easterby-Smith and Ashton's (1975)
problem, the individuals (or their organisations) have chosen which programme they will
undertake without being aware of the experiment beforehand, and comparison is not being
made with a group who have deliberately not chosen or been chosen.

4.2.5 Summary research model
In this chapter, the researcher has reflected on the ontological and epistemological
stances that may be taken with this research study, concluding that, while different
standpoints and methods have potential advantages, the keen interest of this researcher is
at the individual level, and hence an experimental approach is proposed. The choice of
approach aligns with the literature on evaluation of management learning in the literature
review, and with the clients participating in the research, being both familiar to them,
in the guise of Kirkpatrick's four levels of evaluation, and useful to them in terms of the
results emanating from the research of their programme.
The chapter continued with the identification of important issues with regard to the
proposed approach, highlighting the threats to internal validity and how these would
be mitigated or accepted as beyond the researcher's control and declared as limitations of
the study.
The next chapter will consider how, using this model, the constructs to be measured
will be operationalised, and will detail the programmes to be investigated.

Chapter 5 Constructs and programmes investigated
This research, in the positivist paradigm, requires that the constructs to be measured
are operationalised. As this research study intends to evaluate the effectiveness of the
training programmes across Kirkpatrick's four levels of evaluation to provide an holistic
assessment, there are a number of different constructs within the research model:
Personality Type; Preferred Learning Style; Position in the organisation; Cultural
Heritage; Managerial Competencies; Performance Rating; Enjoyment of training
programme; Perceived usefulness of training programme; and Learning. Other constructs
identified in the literature as important factors, but eliminated after the two pilot studies,
include motivation to learn, motivation to transfer learning, and the transfer climate.
Some of these constructs are clear and easily identified within the literature as
being frequently used in positivist management learning research; others are subject to
greater debate, and greater attention is paid to these in this section.

5.1 Operationalisation of constructs

5.1.1 Personality type


The MBTI instrument is commonly used to provide an assessment of an
individual’s preferences for processing information and decision-making. The MBTI
instrument measures an individual’s preference on four dichotomous scales (Table 19):

Table 19. MBTI dimensions (Myers and Myers 1980)

Extroversion (E) --------- versus -------- (I) Introversion
Sensing (S) --------- versus -------- (N) Intuition
Thinking (T) --------- versus -------- (F) Feeling
Judging (J) --------- versus -------- (P) Perceiving

These four dimensions translate into 16 basic personality types, such as ESFP or
INTJ. For each personality type, a dominant and auxiliary personality pattern for
information processing and decision-making can be identified: sensing, intuition, thinking
or feeling (Myers and Myers, 1980). In examining internal consistencies based on
Alpha coefficients, Myers and McCaulley (1989) report coefficients above 0.7 for the four
scales and conclude that test-retest reliabilities are consistent over time. There are some
questions over the validity of the MBTI instrument; however, there is evidence indicating
a degree of validity for the scales, and construct validity when compared with a number of
other instruments (Myers and McCaulley, 1989).
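The arithmetic behind the sixteen types is simply the product of the four dichotomies
(2 to the power 4 = 16), as this short illustrative Python sketch shows:

    # Four dichotomous scales combine into 2**4 = 16 MBTI types.
    from itertools import product

    dichotomies = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]
    types = ["".join(combo) for combo in product(*dichotomies)]
    print(len(types))  # 16
    print(types[:4])   # ['ESTJ', 'ESTP', 'ESFJ', 'ESFP']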
Other instruments were considered; after Higgs (1996), Cattell's 16PF was
considered in particular, as it may have provided the possibility of analysing the effect of
team roles on any differences found. However, the researcher is less familiar with the
administration of that instrument and with providing feedback to subjects, but is trained
and licensed in the use of the MBTI.

5.1.2 Preferred learning style


Two particular instruments are frequently used within management learning
research and have been discussed above: Kolb's Learning Style Inventory (1999) and
Honey and Mumford's (1983) Learning Style Questionnaire. Both instruments have been
the subject of critiques in terms of the validity of their constructs and of the instruments
themselves. Kolb (1981) reports test-retest reliability studies with coefficients ranging
from a low 0.33 to 0.73 and argues that the LSI dimensions are not fixed traits but that
responses are variable and situation dependent. Duff (2000) reports Alpha coefficients for
the LSQ ranging from 0.52 to 0.71, indicating modest internal consistency reliability, and
concludes that the findings of his study indicate the LSQ may not be a suitable alternative
to the LSI for education researchers. However, in spite of the arguments about the
instruments' respective reliability and validity, there is also considerable support for their
use, and the final choice became a practical, financial consideration for the researcher: the
Kolb LSI version III was considerably less expensive, given the intended number of
respondents, than the Honey and Mumford LSQ.

5.1.3 Position in the organisation


The research model intends that a number of different organisations are included in
the study to provide a breadth of different operations and greater generalisability of the
results. Unfortunately, especially in the locations of study, Singapore and Malaysia, job
titles and management grades are a less than clear way to identify levels within different
organisations – for example, an 'executive' is most often a lower-level employee, whilst
in predominantly western organisations an 'executive' is often at the top of the
organisation. The training programmes are deliberately targeted at middle and senior
managers in the client organisations and, following discussions with clients, the researcher
chose a simple distinction between managers and senior managers.
Hardy (1996) provides a useful definition that was agreed with client
organisations: A manager is any person who has responsibility for people, money or
resources. A senior manager is any person who has responsibility for other managers.

5.1.4 Cultural heritage


It is clear from research into the effect of national culture, and from anthropological
studies, that the influence of culture on an individual is not as straightforward as it may
have been when Hofstede undertook his seminal study with IBM in the late 1970s.
However, it is considered that national culture may have an effect that shapes the way in
which individuals learn, their comfort level and, potentially, their enjoyment of different
training methods. As this study is being conducted in Singapore and Malaysia, there are
four readily identifiable cultures in both countries, defined by race: Chinese, Malay,
Indian and Others. However, the effect of national culture may not be dominated by a
person's race but by where they reside and/or the influence of their upbringing or
education. Client organisations indicated a keen interest in being able to compare the
ethnic groups, whilst this researcher is more concerned with comparing those with a
predominantly Asian upbringing and those with a western upbringing – irrespective of
race; hence the use of the construct 'cultural heritage'. To satisfy both the clients' desire
to compare ethnically and this researcher's desire to explore the broader east–west
concept, the construct for 'Others' includes a person of any race whose home base is not
in Asia and who was predominantly brought up and educated in the west – as such, this
also includes the majority of Eurasians in the sample.
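As a concrete illustration of this coding rule (the function and field names below are
hypothetical, not part of the study's instruments), the construct might be operationalised
as:

    # Illustrative coding of 'cultural heritage' as described above; field names are hypothetical.
    def cultural_heritage(race: str, home_in_asia: bool, western_upbringing: bool) -> str:
        # Any participant based outside Asia and predominantly brought up and
        # educated in the west is coded 'Others', irrespective of race.
        if not home_in_asia and western_upbringing:
            return "Others"
        return race  # otherwise coded by race: 'Chinese', 'Malay' or 'Indian'

    print(cultural_heritage("Chinese", home_in_asia=True, western_upbringing=False))   # Chinese
    print(cultural_heritage("Eurasian", home_in_asia=False, western_upbringing=True))  # Others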

5.1.5 Managerial competencies


The literature review above includes an extensive discussion of the competency
frameworks considered for this research, and concluded that the Hay McBer MCQ
(McBer, 1997) instrument should be chosen for its suitability in assessing levels of
competency in general management. The competencies in the instrument are taken from
the Hay Group competency dictionary and based on the work of Spencer and Spencer
(1993). Every core competency in the dictionary reliably differentiates performance in a
variety of organisations. Psyfactor undertook a study using the MCQ and conducted
analysis to ensure the reliability of the instrument, finding over 80% reliability except for
team leadership, which was tempered by distance management (Psyfactor, 2005).
Hay Group were unable to provide technical data for the MCQ, and personal
communication with the principal author of the instrument (Kelner, 2005) led this
researcher to test the reliability of the instrument.
The method chosen, the Cronbach alpha coefficient, is equivalent to the mean of
all the possible split-half reliability coefficients. The normally accepted range for
reliability coefficients lies between .60 and .80. Coefficients above .80 suggest that the
scale items have similar wording or measure virtually the same behaviours – as we would
expect in this case, due to the scaled level of the competencies under scrutiny. Coefficients
below .60 suggest items that are excessively heterogeneous or ambiguous.
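For reference, the standard definition of the coefficient (the usual textbook form, not
quoted from the instrument's documentation) for a scale of k items with item variances
\sigma_i^2 and total-score variance \sigma_t^2 is:

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{t}^{2}}\right)

Here k = 7, the MCQ competency factors, and the total score is the sum of the seven
factor scores.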
The results of the reliability analysis conducted on the seven competency factors of
the MCQ from all 266 participants are shown in Table 20 below:

Table 20. Cronbach alpha reliability analysis on MCQ

                    Item values                 If this item is omitted
Variable    Mean       Std. Dev.    Total Mean   Total Std.Dev.   Coef Alpha   Corr Total   R2 Other Items
AO          15.66278   2.853559     96.68835     14.41401         0.8769       0.7678       0.7307
DO          17.11917   3.485665     95.23196     13.62304         0.8642       0.8563       0.7768
DI          15.10376   2.824753     97.24737     14.73271         0.8899       0.6492       0.7916
II          15.95564   2.760583     96.39548     14.20925         0.8645       0.8864       0.8605
IU          16.40714   2.262072     95.94398     15.34614         0.8996       0.5537       0.5212
OA          17.22932   2.686803     95.12180     15.19556         0.9045       0.5014       0.7330
TL          14.87331   3.981002     97.47782     13.44179         0.8797       0.7712       0.7652
Total       112.3511   16.70521                                   0.8988

Cronbach's Alpha 0.898784      Standardised Cronbach's Alpha 0.899645

The Alpha coefficients range from .86 to .90, with an overall standardised Cronbach
Alpha of .899 – for a widely used instrument, this shows acceptable reliability (Carmines
and Zeller, 1990).

5.1.6 Bosses' performance rating


In order to measure the business impact of training, it is now common to undertake
a Return on Investment approach, such as the model developed by Phillips (1997).
Typically, such an analysis is undertaken some 4 to 6 months after the training intervention.
Following discussions with the client organisations willing to participate in the study, this
aspect was not included in the research for two reasons: firstly, clients considered the data
to be confidential and did not want such data disseminated; and secondly, the timescale to
obtain results was prohibitive for this research. Instead, each participant's direct boss
would rate the participant's business performance on a 5-point scale similar to those
already used within the client organisation (Appendix 3), before and after the intervention.

5.1.7 Reaction to the programme


Reaction evaluation is the most commonly used in business training, often forming
the only evaluation undertaken. Feedback from participants is most often sought using a
rating on a five-point Likert scale. For the purposes of this research, participants' reaction
to the training programme is assessed immediately following the event, asking for a
rating of their enjoyment and perceived usefulness of each method used in the
programme delivery, i.e. the activity (simulation, game or case study), the theory sessions,
the debrief and feedback sessions, and the teamwork during all sessions. Additional
reaction data on the facilities, organisation and similar items were collected for the client
organisation but do not form part of this research. Appendix 8 shows the common reaction
feedback form.

5.1.8 Learning
Participant learning is measured on a post-test-only basis (Anderson and Lawton,
1997b, Psyfactor, 2005). Two pilot studies were carried out in developing the final
research model to establish suitable measures, in particular for learning from training.
Participants were not keen on, or comfortable with, a formalised test and stated a preference
for being assessed on the presentation at the end of the programme. Learning is thus
assessed by participants' bosses at the final presentation on a five-point scale, which may
be found in Appendix 6. The scale was developed from case analysis measures (O'Rourke,
2003, Schneider, 2001) to make it easy for bosses to assess performance, with six directive
questions providing a single score. Participants are required, as part of their final
presentation, to demonstrate what they have learned through the programme by applying
their learned understanding of strategic analysis to a real business problem previously
identified by the client organisation and allocated to participants. This measure of learning
suits the practical nature of the training programme rather than a knowledge test, which
may be more suitable in an educational environment.

5.1.9 Motivation to learn and transfer learning and transfer climate


The literature on management learning identifies an individual's motivation to
learn, their motivation to transfer learning, and the transfer climate within the organisation
as factors in the effectiveness of management learning. Holton (1996) in particular
criticised Kirkpatrick's four-level model and considers motivation and transfer climate to
be as important, if not more so, than the learning content or pedagogy adopted.
This researcher considered assessing participants' motivation to learn and to
transfer learning using Holton et al.'s (2000) Learning Transfer Systems Inventory (LTSI),
a sixty-eight item instrument with sixteen factors regarding an individual's attitude to
training and motivation to implement the training. In two pilot studies prior to the
finalisation of the research model, participant feedback on the use of the LTSI suggested
that the inclusion of this questionnaire, as well as all the others, was too time-consuming
and that the phrasing was less than ideal for the audience.
It seemed that participants were willing to complete questionnaires that would
provide them with useful, actionable feedback, such as the MCQ, but that assessing
motivation may be overly demanding in a business setting.

5.2 Programmes investigated


This research study is designed to investigate three specific blended delivery
methods of the same programme.
The training programme is designed to develop strategic management capability,
with emphasis on strategy formulation and, even more so, on strategy implementation, as
this has been identified in the literature as an important component of the knowledge and
skills a manager needs to be effective (Rausch et al., 2001). The programme includes
standard MBA strategy foundations module content. Importantly, content elements are led
by the same tutors in each programme and taught using examples. The programmes differ
in the activity undertaken at each session: the simulation groups use the computer-based
cooperative management simulation, Strategy CoPilot, in small groups of three; the game
groups use the computer-based competitive management game, Strategy at the Edge, in
teams of 6 to 8; and the case study groups use case studies. Table 21 shows the programme
outcomes and Appendix 1 shows an overview of the programme and the activities
associated with each session for each of the three groups:
Table 21. Programme outcomes
Strategy Programme Outcomes
• Identify and prioritise critical strategic issues
• Generate and evaluate creative ideas for new strategic directions
• Build the assets, relationships and capabilities required to sustain
superior returns
• Plan an achievable implementation strategy
• Align organisation strategy with stakeholder needs
• Present new strategic plans to senior management

Each of the programmes is highly pragmatic in nature and focuses on the
application of the learning to the real business. After the activity sessions, the tutor leads
the debrief and links the learning to the organisation – the debrief sessions are naturally
different in every programme run, as the learning that takes place within the activities and
the issues raised are closely linked to the participants' interaction and their own business
issues. Each programme culminates in a presentation of strategic recommendations to
senior managers of the sponsoring organisation.
The training programme was not designed directly to develop the managerial
competencies but to develop strategic management capability amongst the participants –
which one would anticipate would include the development of competencies for effective
management.
The theoretical content, interaction in the workshop and the activities undertaken
were mapped to the MCQ competency framework identified. This process considered the
key tasks and requirements in different parts of the programme and linked these to the
competency definitions as being implicitly or explicitly part of the programme. For
example, the programmes contained a theoretical workshop element on the topic 'Shared
Situational Awareness'. This topic, based on strategic military training, considers how the
interaction of command instruction, shared experience and shared situational awareness
allows different units or people to work together more effectively. It explicitly includes a
focus on developing others through sharing knowledge and experience in particular
contexts, with the intention of developing others' knowledge and awareness; this maps
well to the competency 'Developing Others', as does the requirement for participants
to share experience with each other in groups or teams. Implicit mapping to the
competency framework includes, for example, 'Achievement Orientation', which is
identified as a concern for working well or for surpassing a standard of excellence.
Implicitly this maps to the game activity because of the intention for the team to win in
competition with others; in the simulation, to the drive to demonstrate knowledge and
skills in order to gain favourable feedback from the system; and in the case study
activities, to the desire to show understanding, skills and knowledge. The mapping
was discussed with fellow trainers and three client representatives and agreed as a fair
representation of how and where the competencies would be developed as part of the
training programme. It was also agreed and understood that this is not to suggest that the
competencies would not be demonstrated or developed in other aspects of the programme,
but to ensure that each competency is mapped to some part of the training programme
such that there is a reasonable expectation that it would be developed.
Table 22 below summarises the mapping of each competency to the content,
workshop activity or simulation, game or case study, either as an explicit component or
implicitly:

Table 22. Linking MCQ to training programme

Competency area               Implicit/   Key programme elements
                              Explicit
Achievement orientation       I           Game – compete against others to win
                                          Sim – demonstrate knowledge and skills
                                          CS – desire to show understanding and knowledge
Developing others             E           Content – shared situational awareness
                                          Sharing expertise with team/group
Directiveness                 I           Team/group discussion on analysis and decisions
Impact and influence          E           Content – communicating strategy
                                          Team/group discussions
                                          Final presentations
Interpersonal understanding   I           Working with others
Organisational awareness      E           Debriefing and application to business
                                          Final presentations
Team Leadership               E           Content – Belbin team roles
                                          Rotating roles in activities
                                          Working in teams/groups

5.3 Evidence collection
Following the insights from the literature on evaluation of management
development, this research measures participants at three levels of Kirkpatrick's
model: Reaction, Learning and Transfer. The fourth level, Results, is not measured
directly, partly because some client organisations were not willing to agree to publication
and partly because of the time necessary for business results to be realised. Results is thus
assessed by means of a proxy: each participant's boss's rating of their performance before
and after the training event.

5.3.1 Procedures
The same procedures were adopted across the programmes; Figure 13 (page 88)
above provides an overview of the procedures adopted.

Pre-test
Instruments used were reproduced in an online, web-based format for easier
distribution and completion. Each included a brief description of the instrument and the
appropriate background information.
Participants complete a self-assessment of their learning styles using an online
version of the Kolb LSI III (Kolb, 1999), which may also be found in Appendix 2, and
complete a self-assessment MBTI profile.
Participants undertake an online 180º Managerial Competency Assessment based
on the Hay/McBer Managerial Competency Questionnaire (MCQ) instrument (McBer,
1997), nominating at least two third-party assessors each (boss and staff or peer). Whilst a
360° assessment may provide greater objectivity, the administrative difficulties and the
potential for greater data mortality (Wimer, 2002) are considered too high. A 180°
assessment is more manageable and should provide good correlation of assessment
with performance (Beehr et al., 2001). The mean MCQ rating will be used
predominantly, to provide an objective measure (Higgs and Rowland, 2001).

Training programme
The training programme was designed for managers to enhance their strategic
analysis and thinking skills. All participants were provided with a general programme
overview and appropriate workshop materials. The general outcomes are shown above in
Table 21 and Appendix 1 shows the programmes and the activities.
The emphasis in all programmes was on gaining practical knowledge and skills
applied to each group's own business, culminating in a presentation to senior managers on
strategic recommendations and a plan of action for implementing the business
transformation.
The following sections provide an overview of each of the activities used as an
experiential learning activity and simulation and game overviews may be found in
Appendix 4.

Management Simulation
The management simulation used was Imparta’s Strategy CoPilot®, a highly
sophisticated, animation-based computer simulation that combines interactive tutorials,
application exercises, built-in intelligent coaching, and tailored feedback to provide an
accelerated experience of applying key strategy tools within a realistic and engaging
business setting.
Working in a richly simulated bottle-making company environment, participants
worked in groups of three and began by defining their objectives and identifying the
biggest issues and opportunities facing the business. In subsequent phases, teams refined
the value delivered to customers, sought value-capturing positions, and pushed back
against the competitive forces acting on the company. Teams also explored different ways
of building competitive advantage, taking into account the dynamic nature of most
strategic resources and exploring creative ways of building capability. Finally, teams had
to plan a strategy and recommend a series of consolidated moves.

Management Game
The management game used was Celsim’s Strategy at the Edge™ - a total
enterprise management game based on the airline industry. Data and algorithms for this
game are based on real market data and projections for economic growth, inflation,
interest rates and passenger numbers supplied by British Airways, Airbus Industrie,
Singapore Airlines, Malaysian Airlines System and the Economist Intelligence Unit.
Participants worked in teams of 6 to 8 to establish a new company and launch a new
business into the global market. To assist participants with their evaluation of decisions
about the business, they had access to a computer loaded with Microsoft Excel and a full
working template business model to explore scenarios and decisions.
Participants were provided with an initial (virtual) funding base and evaluated
product and service launches into the market, considering the market dynamics, the cost
of deployment and operations, and assumptions on customer and competitor action and
reaction. Each team's strategic direction was encapsulated in the number and configuration
of aircraft, hub and routes, together with additional services and pricing to reflect its
marketing strategy, from budget, no-frills airline to first-class only. Through strategic and
financial analysis, teams made a separate set of decisions for each round (year of
operation) of the company. Once decisions were entered from all teams, the game
computed market share and profitability based on a market algorithm. Teams competed
to achieve their own objectives and had to be profitable to win the game.

Case Study
The case studies used were Boo.com (Kunnath and Sedick, 2001), Cooley's
Distillery (O'Gorman, 1997) and NTT DoCoMo (Times, 1998). These were chosen from
case studies used in the Henley MBA Strategic Direction module as appropriate vehicles
for practising the use of strategic models and their application to real life, and they
allowed for the consideration and application of the same concepts as the different phases
or rounds of the simulation or game respectively.
Working in non-competing teams of 6 to 8, participants would analyse the case
study and discuss and agree courses of action based on tutor instruction of the objectives
in each session. No computer models were provided but participants were free to use any
technology to assist their analysis and presentations.
The lead tutor for the case studies is an approved Henley MBA, INSEAD and
Open University tutor in strategic management.

Post-test

Reaction
Participants' reaction to the training programme was assessed immediately
following the event by asking for ratings on a five-point Likert scale of their
enjoyment and perceived usefulness of each method used in the programme delivery, i.e.
the activity (simulation, game or case study), the theory sessions, the debrief and feedback
sessions, and the teamwork during all sessions. Additional reaction data on the facilities,
organisation and similar items were collected for the client organisation but do not form
part of this research. Appendix 8 shows the common reaction feedback form.

Learning
Participant learning is measured on a post-test only basis (Anderson and Lawton,
1997b, Psyfactor, 2005). Participants were not keen or comfortable with a formalised test
and stated a preference for being assessed on the presentation at the end of the programme.
Participants were required, as part of their final presentation, to demonstrate what they
have learned through the programme by applying their learned understanding of strategic
analysis to a real business problem previously identified by the client organisation and
allocated to participants. The presentations were assessed on a five-point scale (Appendix
6) by participants’ bosses.

Learning transfer
Participants undertake the same 180º online MCQ as in the pre-test, eight to ten
weeks after the training programme. This provides a direct comparison with the pre-test on
their demonstrated managerial competencies in a timeframe that is sufficient for behaviour
change to be demonstrated in a range of everyday situations and considered short enough
for other intervening events that may affect any change to be recalled easily and noted
(e.g. another training intervention, a new project, a new team).

Business Results
Participants’ bosses were asked to rate the performance of each individual using
the same rating scale used in the pre-test. This allows a direct comparison with the pre-test
and was conducted at the same time as the boss’s completion of the post-test MCQ. This
rating is used as a proxy for business results which would otherwise require considerably
more time and effort to be measured for each individual (Phillips, 1991, 1998).

Feedback on results
Feedback to participants on their performance rating and competencies assessment
forms an important, if not critical, aspect of the evaluation process. However, in order to
help ensure that the research is evaluating the training programme and methodology of
delivery, and not a self-fulfilling hypothesis (Burgoyne and Cooper, 1975) because
individuals become more aware of their competency gaps, each participant received a full
report of their assessments and individual feedback on the results after the post-test to
facilitate personal development understanding and future planning. An anonymous
example feedback report may be found in Appendix 7.

5.4 Data analysis strategy
In planning this research study, choices have been made about the research design
(quasi-experimental), the setting (the real-world environment), and the measures of
the constructs to be investigated. This section outlines the intended strategies for data
analysis to answer the research questions and hypotheses identified as appropriate from
the literature. Several techniques will be employed, and the principal data analysis
software chosen is NCSS (Number Cruncher Statistical System – www.ncss.com). This
software was chosen, as suggested in Remenyi et al. (1998), as an alternative to SPSS and
SAS: it offers an excellent range of analytical procedures and publication-quality graphics,
and it has two distinct advantages over both packages – ease of use through a
comprehensive help/tutorial system, and a substantial cost saving at less than half the cost
of the alternatives. However, like other software packages, the reports are very
comprehensive, and the author chose to re-format output using Microsoft Excel to ease
the creation of summary tables and some charts.
Ross and Morrison (2003) provide a useful table of commonly used techniques in
research in educational technologies, and this was used as the starting point for the
statistical techniques employed in the analysis of the data. The next section indicates the
statistical tests that will be undertaken with the data:

5.4.1 Statistical procedures to be used in the research


t test – Independent samples
Types of data: Independent variable = nominal, dependent = one interval-ratio measure
Features: Tests the difference between two treatment group means – can test for causal
effects
Example: Does the simulation treatment group surpass the game or case study group?

t test – Dependent samples
Types of data: Independent variable = nominal (repeated measure), dependent = one
interval-ratio measure
Features: Tests the difference between treatment means for a given group – can test for
causal effects
Example: Will participants change their behaviour in demonstrating particular
competencies from pre-test to post-test following the treatment?

Analysis of variance (ANOVA)
Types of data: Independent variable = nominal, dependent variable = one interval-ratio
measure
Features: Tests the differences between 3 or more treatment means. If the ANOVA is
significant, follow-up comparisons of means are performed – can test for causal effects
Example: Will there be differences in learning among three groups that learn from
simulation, game and case study?

Multivariate analysis of variance (MANOVA)
Types of data: Independent variable = nominal, dependent = two or more interval-ratio
measures
Features: Tests the difference between 2 or more treatment group means on 2 or more
learning measures. Controls the Type I error rate across the measures. If the MANOVA is
significant, an ANOVA on each individual measure is performed – can test for causal
effects
Example: Will there be differences among three different learning methods on problem
solving and knowledge learning?

Analysis of covariance (ANCOVA) or multivariate analysis of covariance (MANCOVA)
Types of data: Independent variable = nominal, dependent variable = one or more
interval-ratio measures. Covariate = one or more measures
Features: Replicates ANOVA or MANOVA but employs an additional variable to control
for treatment group differences in aptitude and/or to reduce error variance in the
dependent variable(s) – can test for causal effects
Example: Will there be differences in concept learning among learner-control,
program-control and advisement strategies, with differences in prior knowledge
controlled?

Pearson r
Types of data: Two ordinal or interval-ratio measures
Features: Tests the relationship between two variables
Example: Is age related to test performance?

Multiple linear regression
Types of data: Independent variables = two or more ordinal or interval-ratio measures,
dependent = one ordinal or interval-ratio measure
Features: Tests the relationship between a set of predictor (independent) variables and an
outcome variable. Shows the relative contribution of each predictor in accounting for
variability in the outcome variable.
Example: How well do experience, age, gender, and educational qualification predict
demonstration of managerial competencies?

Discriminant analysis
Types of data: Nominal variable (groups) and 2 or more ordinal or interval-ratio variables
Features: Tests the relationship between a set of predictor variables and subjects'
membership in particular groups.
Example: Do students with different learning style preferences or MBTI types differ with
regard to ability, age and enjoyment of sessions?

Chi-square test of independence
Types of data: Two nominal variables
Features: Tests the relationship between two nominal variables
Example: Is there a relationship between gender and attitudes towards the instruction?
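By way of illustration only (this is not the NCSS workflow used in the study), the short Python/SciPy sketch below runs two of the listed procedures – an independent-samples t test and a chi-square test of independence – on invented data; all values and group labels are hypothetical.

```python
import numpy as np
from scipy import stats

# Independent-samples t test: do two treatment groups differ on a rating?
sim_scores = np.array([4.7, 4.2, 4.9, 4.5, 4.8])   # hypothetical simulation group
case_scores = np.array([3.1, 3.6, 2.9, 3.4, 3.2])  # hypothetical case study group
t_stat, p_value = stats.ttest_ind(sim_scores, case_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Chi-square test of independence: gender vs. attitude, as a 2x2 table of counts
observed = np.array([[30, 10],    # hypothetical: male favourable / unfavourable
                     [20, 15]])   # hypothetical: female favourable / unfavourable
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```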

The data analysis considers the impact of multiple variates and variables on the
results. Following the advice of Hair et al. (1998), the analysis of data will commence
with an examination of the data to observe any basic relationships that may be apparent.
This will be undertaken through a graphical examination of the data, an evaluation of the
process for missing data, identification of outliers and of how they may distort
relationships by their uniqueness, and then application of the analytical methods, shown
above, appropriate to the data.
The data analysis process is iterative in nature and thus allows the researcher to
further investigate interesting findings through other techniques such as factor analysis or
data filtering that may helpfully explain phenomena that may not have been anticipated.

Chapter 6 Results and analysis
Data was gathered from a total of 13 separate groups of participants across three
distinct programmes. Specifically, six groups participated in a programme that utilised the
management game, four groups participated in a programme that utilised the management
simulation and three groups used case studies. Programmes were organised for private
companies in Malaysia and Singapore by their respective Human Resource Development
departments.
Participants on the programmes were either volunteers or nominated by their
managers to attend, and were targeted as high potentials in their respective companies for
strategic thinking development. The researcher had no influence on the choice of
participants.
All participants were pre-determined to be computer-literate at time of programme
registration – and this information served to establish that the participants were capable of
operating a computer and should be relatively comfortable in using a computer-based
simulation or game when appropriate.
A total of 266 participants completed all assessments and the training programme;
a further 35 participants on the programmes did not complete one or more of the
assessments (28 of them the MCQ post-test) or had left the organisation before the
post-test was administered. Data was analysed using NCSS (www.ncss.com) and
Microsoft Excel. Table 23 shows a breakdown of the percentage of personal
characteristics across each group.

Table 23. Participant breakdown statistics in each group

                   Sim     Game    Case    All Groups
# of Groups        4       6       3       13
Avg pax/group      20.5    21.2    19.0    20.5
Male               69%     70%     62%     68%
Female             31%     30%     38%     32%
Chinese            49%     49%     57%     51%
Western*           12%     24%     21%     19%
Indian             16%     11%     6%      11%
Malay              23%     16%     16%     18%
<30                12%     16%     27%     17%
30-35              36%     32%     45%     36%
36-40              29%     33%     19%     29%
>40                22%     19%     9%      18%
Manager            18%     24%     74%     33%
Senior Manager     82%     76%     26%     67%
Undergrad          15%     12%     23%     15%
Graduate           45%     48%     57%     48%
Post Grad          40%     40%     20%     36%

*Western includes Caucasians and other races with main upbringing in the west.

The principal aim of this section is to provide empirical analysis of the data
collected to answer the research questions posed following the literature review, using
appropriate techniques as outlined in the data analysis strategy above.
The interpretation of the results uses the convention that a significance level of
0.05 and below is statistically significant (Hair et al., 1998), following the dominant
thinking within the field of educational psychology. Whilst in business research it may be
acceptable to choose a less restrictive Type I error, the calls for empirical research in this
field demand robust design and analysis, and a level of 0.05 is judged appropriate by this
researcher. This means that less than 5% of the variation is likely to relate to chance rather
than rater unreliability. Such a level represents a challenging demand and provides a
relatively rigorous basis for evaluating change and difference (Higgs and Rowland, 2001).
When significance is shown at the 5% level, the researcher in many instances undertakes
a further test at the 1% level, providing greater assurance that the test in question is
rigorous; on occasion, when significance is apparent close to but above 5%, the researcher
undertakes the same routine at the 10% level. This provides less rigour but may indicate
that something is worth further investigation.

6.1 Effectiveness of simulations and games
RQ1 Are computer-based business training simulations and games an effective way
to develop management learning and learning transfer?
The difference between mean pre and post-test scores for each of the groups was
reviewed; the results may be found in Figure 15 below. All groups show higher mean
levels of demonstrated managerial competency in the post-test MCQ.

[Chart: mean differences in competency change (pre to post-test MCQ rating) by group]

                    Achievement  Developing  Directive-  Impact &   Interpersonal  Organisational  Team
                    orientation  Others      ness        Influence  Understanding  Awareness       Leadership
Simulation (n=86)   5.1395       4.1395      4.9070      5.5349     5.2791         4.1628          5.4186
Game (n=125)        5.7200       4.7360      6.3840      5.6800     4.1440         4.6400          5.7760
Case Study (n=55)   4.8127       3.8709      5.3345      5.0291     4.4436         3.3909          3.3891

Figure 15. Mean Differences in Competency

If we accept that case studies are an effective way to develop managerial
competency, then both the game and the simulation are also effective, based on the mean
differences shown. This indicates that there is a difference worth investigating, and as the
pre and post-test MCQ scores are measured on the same individuals, a more powerful test,
a paired t test, is suitable. The paired t test takes the difference between the pre and
post-test MCQ ratings for each individual and calculates the mean and standard deviation
of those differences.
The paired t test was run for all groups and showed a significant difference for all
competencies. As the particular interest of this research question is the difference for the
simulation and game groups, the paired t test was re-run filtering the data to include only
those subjects on the simulation or game supported programmes. The results of this
analysis are summarised in Table 24 below, which shows a paired t test summary of the
differences of average MCQ ratings between pre and post-test for the 211 subjects in the
game and simulation groups. All clearly show a significant difference, suggesting that
computer-based business training simulations and games result in significant positive
change in work-related behaviours:

Table 24. Differences MCQ pre to post-test t-test summary

Difference Pre to Post Test    Mean     SD      SE      95% CI    95% CI    T-Value   Sig
                                                        Lower     Upper
Achievement orientation        -5.483   3.613   0.249   -5.971    -4.996    -22.048   0.000*
Developing Others              -4.493   4.068   0.280   -5.042    -3.944    -16.044   0.000*
Directiveness                  -5.782   3.994   0.275   -6.321    -5.243    -21.029   0.000*
Impact & Influence             -5.621   3.948   0.272   -6.154    -5.088    -20.679   0.000*
Interpersonal Understanding    -4.607   3.501   0.241   -5.079    -4.134    -19.112   0.000*
Organisational Awareness       -4.445   3.289   0.226   -4.889    -4.002    -19.632   0.000*
Team Leadership                -5.630   4.434   0.305   -6.229    -5.032    -18.444   0.000*

* Significant at .05 (one-sided paired t test; negative differences indicate post-test > pre-test)
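As an illustrative aside (not part of the study's NCSS analysis), a paired t test of this kind can be sketched in Python with SciPy; the pre/post ratings below are invented, and the one-sided p-value mirrors the directional expectation that post-test ratings exceed pre-test ratings.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post MCQ ratings for the same five subjects (invented values)
pre = np.array([52.0, 48.5, 55.0, 60.0, 47.0])
post = np.array([57.5, 53.0, 61.0, 64.5, 52.0])

t_stat, p_two_sided = stats.ttest_rel(pre, post)   # paired t test on the differences
# Directional (one-sided) p-value for the alternative "post > pre"
p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
print(f"t = {t_stat:.2f}, one-sided p = {p_one_sided:.4f}")
```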

In order to compare with the case study groups, we are able to use a two-group
discriminant analysis, combining the simulation and game groups and setting them against
the case study groups. The results of the analysis, seen in Table 25, show superior
performance improvement in the MCQ factors of 'Directiveness' and 'Team Leadership'
for the combined simulation and game group. Other factors, whilst showing a greater
difference than the case study group, are not significant. Directiveness and Team
Leadership alone are able to reduce classification error by 71.4%:

Table 25. Discriminant analysis Sim and Game group against Case Study MCQ

Discriminant Analysis – Sim and Game group against Case Study
                              Removed                     Alone                      R-Squared
Variable                      Lambda   F-Value  F-Prob    Lambda   F-Value  F-Prob   Other X's
Difference Directiveness      0.987    3.46     0.064     0.998    0.56     0.456    0.108
Difference Team Leadership    0.966    9.27     0.003     0.956    12.29    0.001    0.092

Linear Discriminant Functions
Variable                      Sim or Game   Case Study
Constant                      -23.436       -14.161
Difference Directiveness      -0.006        0.004
Difference Team Leadership    0.259         0.125

Classification Count Table for Sim and Game Group
Actual          Predicted Sim or Game   Predicted Case Study   Total
Sim or Game     207                     4                      211
Case Study      34                      21                     55
Total           241                     25                     266

Reduction in classification error due to X's = 71.4%

The data suggests that simulations and games are an effective way to develop
managerial competency; they may also be superior to the use of case studies in such a
training programme.
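For readers who wish to see the mechanics, the sketch below performs a two-group linear discriminant analysis in Python with scikit-learn on invented change scores for the two discriminating variables; it illustrates the technique only and does not reproduce the NCSS output in Table 25.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical per-participant MCQ change scores on the two discriminating
# variables (Directiveness, Team Leadership); labels: 1 = sim/game, 0 = case study
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([6.0, 5.7], 4.0, (20, 2)),   # invented sim/game rows
               rng.normal([5.3, 3.4], 3.5, (10, 2))])  # invented case study rows
y = np.array([1] * 20 + [0] * 10)

lda = LinearDiscriminantAnalysis().fit(X, y)
predicted = lda.predict(X)

# Classification count table, analogous in spirit to Table 25
for actual in (1, 0):
    counts = [int(np.sum((y == actual) & (predicted == p))) for p in (1, 0)]
    print(f"actual {actual}: predicted as [1, 0] -> {counts}")
```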

The analysis above, in answering the research question, suggests that stating the
hypotheses in the positive, following the literature and previous studies, is appropriate.

H1.1 The simulation and game groups will show higher ratings in participant
reaction than the case study groups.
It is appropriate to use a two-sample t test of independent samples, comparing each
group with the others, to establish whether the differences in means seen in the descriptive
statistics of the data are significant.
Table 26 below summarises t tests of reaction data across the three groups. Both
the simulation and game groups show significant differences compared with the case
study group on enjoyment and usefulness of the activity at the 1% level. Similarly, the
enjoyment and usefulness of teamwork were rated significantly more highly than in the
case study group, at the 1% level.

Table 26. Summary t test reaction

Summary t test    Mean                        Standard Deviation        Significance (** = .01, * = .05, g = .10)
                  Sim      Game     Case      Sim     Game    Case      Sim-Game   Sim-Case   Game-Case
                  (n=86)   (n=125)  (n=55)
Enjoy Activity    4.657    4.545    2.858     0.565   0.582   0.727     0.165      0.000**    0.000**
Enjoy Teamwork    3.973    4.059    3.469     0.427   0.682   0.914     0.301      0.000**    0.000**
Enjoy Debrief     4.507    4.156    4.275     0.687   0.601   0.934     0.000**    0.091g     0.309
Enjoy Theory      3.944    3.996    4.220     0.391   0.674   0.867     0.521      0.011*     0.062g
Useful Activity   4.742    4.518    3.206     0.450   0.618   0.904     0.004**    0.000**    0.000**
Useful Teamwork   4.081    4.112    3.416     0.404   0.629   0.893     0.691      0.000**    0.000**
Useful Debrief    3.611    4.356    4.256     0.786   0.595   0.988     0.000**    0.000**    0.404
Useful Theory     3.558    4.120    4.093     0.669   0.742   0.980     0.000**    0.000**    0.838

The results show support for the hypothesis. Previous research studies in this field,
and also Henke (2001), Schank (1997) and Prensky (2000) amongst many others, suggest
that games and simulations are, and should be by design, more fun than more traditional
pedagogies; the data concurs. As Dixon (1990) emphasised, enjoyment as a by-product of
the learning process may enhance effectiveness, which may
We will return to the aspect of teamwork in RQ3 below.
There are obvious and notable differences, and further analysis is appropriate to
understand whether there is an interaction between the groups by activity type and each
of the reaction variables. Shown in Table 27, this analysis uses analysis of variance
(ANOVA) of the reaction data. The results show that there is a significant difference
between the three groups and, together with the t test results above, support
hypothesis H1.1 in that participant reaction to the activity from the simulation and game
groups is rated significantly higher than in the case study groups.

Table 27. ANOVA Reaction data by activity type

Activity Type
Summary ANOVA F-Ratio Sig Power
Enjoy Activity 177.13 0.000* 1.000
Enjoy Teamwork 15.36 0.000* 0.999
Enjoy Debrief 6.28 0.002* 0.894
Enjoy Theory 3.30 0.038* 0.623
Useful Activity 107.09 0.000* 1.000
Useful Teamwork 25.32 0.000* 0.999
Useful Debrief 26.51 0.000* 1.000
Useful Theory 14.86 0.000* 0.999
* Significant 0.05
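A minimal sketch of such a one-way ANOVA in Python with SciPy, on invented reaction ratings for the three delivery methods (illustrative only):

```python
from scipy import stats

# Hypothetical 'Enjoy Activity' ratings for the three delivery methods
sim = [5, 5, 4, 5, 4, 5]
game = [4, 5, 5, 4, 5, 4]
case = [3, 2, 3, 3, 2, 4]

f_ratio, p_value = stats.f_oneway(sim, game, case)  # one-way ANOVA across groups
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
```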

However, some reaction variables are contrary to the hypothesis – where the case
study groups’ reaction was higher than the simulation or game groups. These are in the
enjoyment and perceived usefulness of the theory sessions, and the usefulness of the
feedback/debrief sessions.

H1.2 The simulation and game groups will show greater learning than the case study
groups
The same procedures adopted above for the reaction variable were used for
participant learning across the groups.
Table 28 below shows a summary of the t-tests between different groups on
assessed learning at the end of the training programme. This shows a significant difference
between the simulation and game groups and the case study group.

Table 28. Summary t test learning

Summary t test   Mean                        Standard Deviation        Significance (* = .05)
                 Sim      Game     Case      Sim     Game    Case      Sim-Game   Sim-Case   Game-Case
                 (n=86)   (n=125)  (n=55)
Learning         4.233    4.192    3.291     0.425   0.534   0.975     0.557      0.000*     0.000*

The data supports the hypothesis, and with it the view of Brenenstuhl and
Catalanello (1979) that greater involvement in and enjoyment of the learning activity
will lead to greater learning, and also the findings of Wolfe and Guth's (1975)
comparative study, which found that business games showed superior learning to case
studies.

H1.3 The simulation and game groups will show greater change of demonstrated
managerial competency (learning transfer) than the case study groups.
As has been seen in Figure 15 above, there are differences between groups in the
change of managerial competency. Again, the same statistical techniques were used as for
the reaction and learning variables to compare the mean difference in managerial
competency. Table 29 below shows a t-test summary of the differences of average MCQ
ratings between pre and post-test:

Table 29. Summary t tests MCQ differences

Summary t test            Mean difference             Standard Deviation        Significance (** = .01, * = .05)
                          Sim     Game     Case       Sim     Game    Case      Sim-Game   Sim-Case   Game-Case
                          (n=86)  (n=125)  (n=55)
Achievement orientation   5.134   5.720    4.813      3.344   3.782   3.682     0.252      0.587      0.137
Developing Others         4.140   4.736    3.871      3.351   4.492   3.682     0.296      0.656      0.211
Directiveness             4.907   6.384    5.335      3.603   4.150   3.818     0.008**    0.503      0.111
Impact & Influence        5.535   5.680    5.029      3.289   4.356   3.835     0.794      0.406      0.340
Interpersonal
Understanding             5.279   4.144    4.444      3.847   3.177   3.414     0.020*     0.191      0.569
Organisational Awareness  4.163   4.640    3.391      3.254   3.313   3.246     0.302      0.171      0.020*
Team Leadership           5.419   5.776    3.389      5.306   3.735   3.271     0.566      0.012*     0.000**

There are two comparisons showing a significant difference at the 1% level:
between the simulation group and the game group in Directiveness; and between the game
group and the case study group in Team Leadership. Three more are significant at the 5%
level: simulation to game group in Interpersonal Understanding; simulation to case study
group in Team Leadership; and game to case study group in Organisational Awareness.
The difference in Directiveness may be explained by the size of the teams during
the training, the simulation group of three compared to the game and case study teams of 6
to 8 participants. Larger teams may require individuals to be more directive to ensure that
their argument is considered. In addition, the way the simulation was used was
cooperative in nature, and this might suggest a higher level of Interpersonal Understanding
being developed, which is shown in the data.
The difference in Organisational Awareness is more difficult to explain, other than
that the game uses real data and forecasts of future environments, whilst case studies, by
their very nature, are historical. Organisational Awareness as a competency is
future-oriented, including the "[ability to identify] and to predict how new events or
situations will affect individuals and groups within the organisation" (Hay/McBer, 1997,
p12).
Again, we will return to Team leadership in RQ3.

H1.4 The simulation and game groups will show higher boss-rated performance
change than the case study groups.
Table 30 below shows significant differences, using t tests, between the different
groups; both the simulation and game groups show a higher performance improvement
than the case study group. The hypothesis that there is a difference between the groups is
thus accepted.

Table 30. Summary t test performance change

Summary t test   Mean                        Standard Deviation        Significance (** = .01)
                 Sim      Game     Case      Sim     Game    Case      Sim-Game   Sim-Case   Game-Case
                 (n=86)   (n=125)  (n=55)
Performance
improvement      0.837    0.528    0.436     0.717   0.589   0.601     0.000**    0.001**    0.000**

Of particular interest is that the simulation group's performance improvement is
significantly greater than that of the game group, while the pre-test performance ratings
for each group (3.314, 3.504 and 3.491 for the simulation, game and case study groups
respectively) were not significantly different. The conjecture from the competency
improvement seen above is that this may relate to the greater improvement in
Interpersonal Understanding for participants in the simulation group. This makes intuitive
sense, as higher levels of demonstrated Interpersonal Understanding involve understanding
the underlying issues of other people (including the boss), which may be appreciated and
reflected in perceived greater performance. Correlation of the variables does not, however,
show a significant relationship between Interpersonal Understanding and the boss's
performance rating.

6.1.1 Correlation on Kirkpatrick’s levels of evaluation


H1.5 Participants' reaction will correlate with learning
After Kirkpatrick (1959/60, 1974) we might assume that higher ratings for reaction
would correlate with higher levels of learning. Correlation routines allow the researcher to
see whether a relationship exists between variables, how strong that relationship may be
and whether it is significant. Table 31 below shows each of the reaction variables
correlating significantly with learning from the programme. Enjoyment of the theory
sessions and perceived usefulness of the debrief sessions are significant at the 5% level;
all others are significant at the 1% level.

Table 31. Learning correlation with participant reaction

Correlation of participant reaction with learning
               Enjoyment                              Usefulness
               Activity  Teamwork  Debrief  Theory    Activity  Teamwork  Debrief  Theory
Correlation    0.505     0.356     0.232    0.129     0.557     0.337     0.149    0.175
Significance   0.000**   0.000**   0.000**  0.035*    0.000**   0.000**   0.014*   0.004**

Significance ** .01, * .05

Individually, aspects of participant enjoyment and usefulness are significantly
correlated with learning. Canonical correlation, the multivariate extension of correlation
analysis, allows the researcher to determine the overall correlation – in this case of the
reaction variables with learning – by finding a weighted average of the reaction scores,
shown in Table 32 below, to establish the overall effect of the reaction variables on
learning. The results of the canonical correlation show that whilst the linear relationship
is not strong (R² = 0.39), it is significant at the 1% level, and the data suggest that
enjoyment and usefulness of the activity and of teamwork have the strongest correlation
of all reaction data.

Table 32. Correlation reaction to learning

Correlation Section (with Learning)
Enjoy Activity    0.505
Enjoy Teamwork    0.356
Enjoy Debrief     0.232
Enjoy Theory      0.129
Useful Activity   0.557
Useful Teamwork   0.337
Useful Debrief    0.150
Useful Theory     0.175
Learning          1.000

Canonical Correlations Section
Variate   Canonical     R-        F-Value   Num   Den   Sig        Wilks'
Number    Correlation   Squared             DF    DF    (** .01)   Lambda
1         0.626         0.392     20.74     8     257   0.000**    0.608
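The canonical correlation itself can be sketched as follows in Python with scikit-learn; with a single dependent variable (learning), the first canonical correlation reduces to the multiple correlation of an ordinary regression. All data below are invented.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical data: 266 participants x 8 reaction variables, one learning score
rng = np.random.default_rng(1)
X = rng.normal(4.0, 0.7, (266, 8))                  # invented reaction ratings
y = 0.5 * X[:, [0]] + rng.normal(0, 0.5, (266, 1))  # invented learning scores

cca = CCA(n_components=1)
x_scores, y_scores = cca.fit_transform(X, y)        # first canonical variates
canonical_r = np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]
print(f"canonical R = {canonical_r:.3f}, R^2 = {canonical_r ** 2:.3f}")
```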

This result is consistent with previous studies showing a significant though
relatively weak correlation between reaction and immediate learning (Warr et al., 1999),
although these results show a stronger link than those studies (R² of .07). This led the
researcher to use factor analysis in a confirmatory way to help define the underlying
structure in the data matrix. This addresses the problem of analysing the structure of the
interrelationships among the large number of variables by defining a set of common
underlying dimensions or factors. The results from the canonical correlation suggest,
a priori, that two factors should be extracted. Factor analysis using varimax rotation
simplifies the columns of the factor matrix such that the maximum possible simplification
is reached (Hair et al., 1998). Applied to the reaction variables, Table 33 suggests also
that reaction to the activity and teamwork will show the strongest correlation with
learning.

Table 33. Factor analysis reaction data

Communalities after Varimax Rotation
Variables         Factor 1   Factor 2   Communality
Enjoy Activity    0.603      0.009      0.613
Enjoy Teamwork    0.366      0.168      0.534
Enjoy Feedback    0.133      0.146      0.279
Enjoy Theory      0.017      0.422      0.439
Useful Activity   0.717      0.012      0.729
Useful Teamwork   0.319      0.151      0.470
Useful Feedback   0.000      0.496      0.497
Useful Theory     0.004      0.748      0.752

(The accompanying NCSS bar chart of communalities repeats these values graphically.)
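A sketch of such a two-factor, varimax-rotated factor analysis in Python with scikit-learn (the rotation argument assumes scikit-learn 0.24 or later); the reaction matrix below is invented, so the loadings will not match Table 33.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical 266 x 8 matrix of reaction ratings (columns = the eight
# enjoyment/usefulness variables); all values invented
rng = np.random.default_rng(2)
R = rng.normal(4.0, 0.7, (266, 8))

# Two factors, varimax-rotated
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(R)
loadings = fa.components_.T                   # 8 variables x 2 factors
communalities = (loadings ** 2).sum(axis=1)   # variance of each variable
                                              # explained by the two factors
print(np.round(communalities, 3))
```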

The factor loadings seen in the results for enjoyment and usefulness of the activity
and of teamwork show practical significance: the former, above 0.50, are considered
practically significant; the latter (teamwork) are considered to meet the minimum level
(Hair et al., 1998). To assess the statistical significance of these indications, the researcher
created a variate of the average reaction to the activity and teamwork; using canonical
correlation, the results show an R² of 0.324, significant at the 1% level. A variate of
reaction to the activity alone against learning shows an R² of 0.321, significant at the 1%
level (Figure 16).

[Scatter plot: learning against average enjoyment and usefulness of the activity, with
fitted line, R² = 0.3213]

Variate   Canonical     R-        F-Value   Sig        Wilks'
Number    Correlation   Squared             (** .01)   Lambda
1         0.567         0.321     125       0.000**    0.679

Figure 16. Correlation enjoyment and usefulness reaction on activity to learning

The results indicate that participant enjoyment and perceived usefulness of the
activity undertaken can explain 32% of the increase in learning, supporting the notion that
greater enjoyment of experiential activities is an important precursor to learning, as
suggested by Russ-Eft and Preskill (2001) amongst others. However, as the results above
indicated, while the enjoyment and usefulness of the activity show the strongest
significant correlation with learning – with the communalities of these variables in the
factor analysis above exceeding the general guideline of 0.50 (Hair et al., 1998) – this
still falls short of an acceptable level of explanation. The indications are that other factors
may be in play, and the researcher will investigate this in more detail with multiple linear
regression after considering the correlations for learning and learning transfer, and for
learning transfer and performance. Other elements that may influence learning, and
potentially the results seen above, will be investigated in RQ2.

H1.6 Participant learning will correlate with change in managerial competency


The aim here is to assess the potential effect of any factor correlating with the
demonstrated change in behaviour. In viewing the raw data, it is reasonable to assume that
an individual with a high pre-test MCQ is likely to show a smaller increase, and vice
versa. Hence, the researcher chose to 'partial out' the influence of pre-test MCQ scores
using linear regression, removing the effect of prior competency differences. Table 34
below shows a summary correlation matrix of learning against each of the MCQ factors
and against the sum total of MCQ difference.

Table 34. MCQ difference correlation with learning

Correlation with Learning – pre-test scores 'partialled out'
               AO       DO      DI      II      IU      OA      TL       Sum MCQ
Correlation    0.197    0.061   0.153   0.075   0.040   0.121   0.210    0.328
Significance   0.001**  0.332   0.014*  0.229   0.521   0.051   0.001**  0.000**
Significance ** .01, * .05
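The 'partialling out' step can be sketched as follows in Python with SciPy: the pre-test score is regressed out of the change score, and the residuals are then correlated with learning. The data and the partial_out helper are illustrative inventions, not the study's code.

```python
import numpy as np
from scipy import stats

def partial_out(y, covariate):
    """Residualise y on one covariate via least squares, removing its
    linear influence (the 'partialling out' described above)."""
    slope, intercept, *_ = stats.linregress(covariate, y)
    return y - (intercept + slope * covariate)

# Invented data: MCQ change tends to be smaller when the pre-test score is high
rng = np.random.default_rng(3)
pre_mcq = rng.normal(50, 8, 100)
mcq_change = 10 - 0.1 * pre_mcq + rng.normal(0, 3, 100)
learning = rng.normal(4, 0.6, 100)

residual_change = partial_out(mcq_change, pre_mcq)
r, p = stats.pearsonr(residual_change, learning)
print(f"r = {r:.3f}, p = {p:.4f}")
```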

The results show that Achievement Orientation and Team Leadership and the sum
of MCQ differences correlate significantly at the 1% level with learning, whilst
Directiveness correlates significantly at the 5% level. Organisational Awareness is
significant at 10%.
Since the training programme is holistic in nature and designed to enhance
managerial effectiveness as a whole, the literature on managerial competency frameworks
suggests that an effective manager has a balance of strengths across all the competencies
and thus the sum of MCQ differences represents a useful variate to measure overall
effectiveness. The results show that learning from the training programme is significantly
related to overall (sum) change in behaviour of demonstrated managerial competency and
may be interpreted to be 32% of the factors causing overall change in behaviour.
Following H1.5, we might expect that participant reaction to the activity would
also correlate with the change in behaviour. Table 35 shows a similar significance for
Achievement Orientation, Team Leadership and the sum of all MCQ factors; Directiveness
is significant only at the 10% level.

Table 35. MCQ difference correlation with activity reaction

Correlation with activity reaction – pre-test scores partialled out
               AO       DO      DI      II      IU       OA      TL       Sum MCQ
Correlation    0.167    0.058   0.109   0.019   -0.049   0.077   0.297    0.263
Significance   0.007**  0.353   0.081   0.765   0.434    0.218   0.000**  0.000**
Significance ** .01

From this, the results suggest that participant enjoyment and perceived usefulness
of the activity are a precursor to learning, and also a precursor to learning transfer.
To help understand the factors that appear to have the greatest influence on the
development of managerial competencies, multiple linear regression was undertaken with
the sum of all MCQ differences as the dependent variable and participant reaction and
learning as predictors. The objective was to establish whether there is a general linear
model that can be used to explain the data. Using a model of hierarchical forward with
switching, the calculations ultimately establish those variables that explain most of the
difference, the relationship and the significance of the relationship. After several
iterations, the full model providing the most useful observation included a maximum of
5 subset terms, which when reached terminated the algorithm. The routine was run for all
groups and then, through data filtering, for each of the activity type groups in turn; the
results are shown in Table 36.

Table 36. Multiple Regression MCQ Differences, all groups and each Activity type

Linear Regression – Full Model up to 5 subsets – Hierarchical Forward with Switching
Columns: Total R² with this I.V. added to those above; R² increase when this I.V. is added;
R² decrease when this I.V. is removed; R² when this I.V. is fit alone; Partial R² adjusted
for all other I.V.'s; Significance (* .05)

R-Squared Section – all groups
Independent Variable               Total R²  Increase  Decrease  Alone   Partial  Sig
Enjoy Feedback                     0.004     0.004     0.001     0.004   0.001    0.548
Enjoy Theory                       0.010     0.006     0.013     0.010   0.014    0.058
Learning                           0.070     0.060     0.021     0.048   0.022    0.016*
Enjoy Feedback * Enjoy Theory      0.078     0.009     0.003     0.012   0.003    0.366
Enjoy Theory * Learning            0.088     0.010     0.010     0.005   0.011    0.088

R-Squared Section – Simulation groups
Enjoy Feedback                     0.006     0.006     0.152     0.006   0.176    0.000*
Enjoy Theory                       0.089     0.083     0.130     0.078   0.154    0.000*
Enjoy Activity                     0.170     0.081     0.044     0.066   0.058    0.029*
Useful Feedback                    0.246     0.076     0.114     0.025   0.138    0.001*
Useful Team                        0.290     0.044     0.044     0.005   0.058    0.029*

R-Squared Section – Game groups
Learning                           0.060     0.061     0.095     0.061   0.107    0.410
Useful Theory                      0.127     0.067     0.005     0.020   0.006    0.005*
Useful Team                        0.146     0.018     0.055     0.044   0.065    0.005*
Learning * Useful Team             0.164     0.019     0.054     0.000   0.064    0.010*
Useful Theory * Useful Team        0.210     0.045     0.045     0.040   0.054    0.000*

R-Squared Section – Case Study groups
Enjoy Feedback                     0.005     0.005     0.028     0.005   0.040    0.161
Useful Feedback                    0.005     0.000     0.039     0.002   0.054    0.101
Useful Theory                      0.100     0.095     0.113     0.062   0.142    0.006*
Useful Team                        0.258     0.158     0.094     0.167   0.122    0.012*
Enjoy Feedback * Useful Feedback   0.319     0.062     0.062     0.000   0.083    0.040*
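NCSS's hierarchical forward-with-switching search has no direct one-line equivalent in common open-source libraries; the sketch below approximates it with plain forward selection (no switching step) in Python with scikit-learn, on invented predictors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

# Invented data: 8 reaction variables plus learning as candidate predictors
rng = np.random.default_rng(4)
X = rng.normal(4, 0.7, (266, 9))
y = 0.8 * X[:, 8] + rng.normal(0, 1, 266)   # hypothetical sum-of-MCQ change

# Plain forward selection of up to 5 predictors (no 'switching' step)
selector = SequentialFeatureSelector(LinearRegression(),
                                     n_features_to_select=5,
                                     direction="forward").fit(X, y)
chosen = np.flatnonzero(selector.get_support())
model = LinearRegression().fit(X[:, chosen], y)
print("selected columns:", chosen,
      " R^2 =", round(model.score(X[:, chosen], y), 3))
```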

The multiple regression for all groups shows that learning has the strongest
significant relationship to the increase in managerial competencies. When the results are
analysed for each activity type group on its own, they show interesting differences. For
the simulation groups, the strongest influence is participant enjoyment of the activity –
something already indicated above. However, the game and case study groups do not
feature this reaction factor: the game groups show usefulness of teamwork and learning as
having the strongest relationship, while for the case study groups it is usefulness of the
theory sessions, usefulness of teamwork and the enjoyment and usefulness of the
feedback and debrief sessions.
Considering all of these results, it is clear that it is not only the type of experiential
activity and how much participants enjoy it or find it useful that matters: a combination of
different aspects of the programme – the theory sessions, the feedback/debrief sessions
and working in teams – are important factors in learning, in the development of the
competencies and in transfer to the workplace. The results provide strong support for the
use of a blended approach to management development.

H1.7 Participant change in managerial competency will correlate with the boss's
rating of performance impact.
Using the same techniques as described above, Table 37 shows each of the MCQ
competencies' correlation with the change in the boss's performance rating. Achievement
Orientation, Team Leadership and the sum of MCQ change show significance at the 1%
level, able to represent some 47% of the differences observed.

Table 37. MCQ difference correlation with performance change

Correlation – MCQ behaviour change and boss performance rating change
(prior MCQ and prior performance rating 'partialled out')
               AO       DO      DI      II      IU      OA      TL       Sum MCQ
Correlation    0.553    0.084   0.007   0.048   0.048   0.039   0.474    0.474
Significance   0.000**  0.176   0.916   0.439   0.445   0.532   0.000**  0.000**
Significance ** .01

It appears from these results that the boss's rating of an individual's business
impact reflects the demonstration of managerial competency, with a particular emphasis
on Achievement Orientation and Team Leadership. As the performance rating is the
boss's perception, it is reasonable to assume that the change in the boss's performance
rating would reflect their own rating of the MCQ and the difference from pre to post-test.
Table 38 shows this correlation. Organisational Awareness is significant at the 5% level;
however, the correlation is negative (-0.137, some 14%), suggesting that higher levels of
Organisational Awareness may actually have a negative impact on performance.
Achievement Orientation is positively significant at the 1% level, as is overall MCQ
change, able to represent some 21% of the variance observed.

Table 38. Correlation boss's performance rating change with boss's rating of change in MCQ

Change in performance rating and change in MCQ – correlation
                               AO       DO      DI       II       IU      OA       TL      Sum MCQ
Change in Performance Rating   0.337    0.077   -0.112   -0.025   0.053   -0.137   -0.029  0.218
Significance                   0.000**  0.221   0.072    0.687    0.397   0.026*   0.645   0.000**
Significance ** .01, * .05

It makes intuitive sense that Achievement Orientation, being a competency
concerned with achieving results and competitiveness, has a significant impact on business
results; this concurs with Young's (2002) model, discussed above in the literature review,
linking individual managerial competency to organisational competence and performance.
Achievement Orientation development is implicit in the training programme, as
discussed in the Operationalisation chapter, and is considered to be developed in
particular in the experiential activity. It follows from these results that if participants
enjoy and find the activity useful, they will ultimately receive a higher rating for their
performance at work after the training event. Table 39 shows this correlation as
significant at the 1% level:

Table 39. Correlation participant reaction to activity and performance change

Correlation of participant reaction to the activity and performance rating change
                            Enjoy Activity   Useful Activity
Performance Rating change   0.273            0.236
Significance                0.000**          0.000**
Significance ** .01

The results from the analysis of correlation across each of Kirkpatrick's four levels
of evaluation suggest that there is a significant relationship; in particular, enjoyment and
usefulness reaction to the activity is positively correlated with learning, transfer and
results. Other factors are certainly involved and these will be considered in more detail in
the remaining sections of this chapter. However, this being a possible and significant
explanation of the results, the researcher uses multiple linear regression to analyse the
relationship between a single dependent or criterion variable and several independent or
predictor variables, with the objective of confirming the results indicated above.
The multiple regression undertaken here sought to establish whether there is a
general linear model that can be used to explain the data. The dependent variable is the
difference in the boss's performance rating, and the analysis included all reaction
variables, the learning variable and the MCQ difference variables. Using a model of
hierarchical forward with switching, the calculations ultimately establish those variables
that explain most of the difference, the relationship and the significance of the
relationship. After several iterations, the full model providing the most useful observation
included a maximum of 3 subset terms, which when reached terminated the algorithm.
The results are shown in Table 40.

Table 40. Multiple Regression - dependent boss performance rating

Analysis of Variance – Dependent = Difference Boss Performance
Independent Variable   R²      Mean Square   F-Value   Sig (* .05)   Power at 5%
Model                  0.078   12.443        7.423     0.000*        0.985
Enjoy Theory           0.009   4.207         2.510     0.114         0.352
Enjoy Activity         0.069   32.714        19.517    0.000*        0.993
Usefulness Feedback    0.002   0.908         0.542     0.462         0.114

The best-fit model included the reaction factors of enjoyment of the theory,
enjoyment of the activity and usefulness of the feedback. The only significant relationship
found was with enjoyment of the activity, able to explain less than 7% of the difference.
When we consider the effectiveness of a training intervention holistically across all four
levels of Kirkpatrick's model, the results suggest that enjoyment of the activity is an
important factor in the business results ultimately realised and, as has already been seen
above, is also important to the learning gained and to learning transfer. Clearly, however,
it explains far from everything, and there are other, more substantive factors that affect
the effectiveness of the training programme.
The data analysis provides significant evidence that the simulation and game
groups' participants rated a more favourable reaction to the training programme,
particularly the enjoyment and perceived usefulness of the activity, and this in turn shows
a significant correlation with learning, learning transfer and performance impact –
something that the supporters of simulations and gaming have long claimed in the
literature. However, the correlations suggest that other factors have an influence, and the
analyses move on to consider learning styles and personality in RQ2, the effect of
teamwork in RQ3 and personal background in RQ4.

6.2 Effect of learning styles
The literature on learning styles and how an individual's preference may shape
learning suggests that an individual with a convergent learning style – the combination of
Active Experimentation and Abstract Conceptualisation – would, according to Kolb
(1976), have a characteristic strength in the application of ideas and enjoy formal learning
situations that include the use of simulations. Whilst there is considerable debate about
how learning style can be and is measured (Mainemelis et al., 2002) and whether learning
style is the same as cognitive style (Sadler-Smith, 2001, Reynolds, 1997a), the concept of
learning style is well understood in the business training community and offers some
benefit to an individual's self-awareness that may help them in their learning. The
researcher is interested to establish in this analysis whether, as Kolb (1997) suggests,
there is a link between learning style preference and participants' reaction to the different
aspects of the training programme, and to the activity in particular.
RQ2 Do participants with different learning style preferences show differences in
reaction, learning or learning transfer?
The summary results of ANOVA of the participants' Kolb LSI preference for
each variable in reaction, learning and learning transfer between the groups are shown in
Table 41 below.
Participants' reactions to the usefulness of the activity, teamwork and debrief show
significant differences at the 5% and 1% levels respectively, whilst in learning transfer,
Achievement Orientation and Developing Others are significant at the 5% level and
Interpersonal Understanding at the 1% level. There are differences worth investigating,
particularly to understand whether, as Kolb et al. (2000) suggest, those with a converging
learning preference will prefer the simulation or game and show greater improvement in
learning and learning transfer than those with other LSI preferences.

Table 41. Summary ANOVA table Kolb LSI preference on reaction, learning and learning
transfer between groups

Summary ANOVA (Kolb LSI preference)   F-Ratio   Significance   Power
Enjoy Activity                        1.29      0.280          0.342
Enjoy Teamwork                        0.78      0.506          0.217
Enjoy Debrief                         0.78      0.508          0.216
Enjoy Theory                          1.21      0.306          0.323
Useful Activity                       3.01      0.031*         0.706
Useful Teamwork                       4.90      0.003**        0.906
Useful Debrief                        4.20      0.006**        0.854
Useful Theory                         0.62      0.603          0.179
Achievement orientation               3.35      0.020*         0.756
Developing Others                     3.05      0.029*         0.713
Directiveness                         0.14      0.936          0.076
Impact & Influence                    1.34      0.261          0.356
Interpersonal Understanding           9.39      0.000**        0.997
Organisational Awareness              1.67      0.173          0.436
Team Leadership                       2.06      0.106          0.524
Performance Improvement               1.27      0.286          0.337
Learning                              1.54      0.204          0.404

Significance ** .01, * .05

H2.1 Converging learners prefer and show greater improvement in managerial


competency from a computer-based simulation or game intervention than individuals
with other learning style preferences.
The data subjects were recombined according to their preferred learning style, grouping all the Diverging, Accommodating and Assimilating participants into one group named 'Other LSI Preferences' and leaving those with a Converging style in their own group (a sketch of this recoding follows below). The chart and combined table in Figure 17 provide a summary of ANOVA of Convergent and Other LSI Preferences for the simulation and game groups only and the mean difference of MCQ competencies. The competency factors Achievement Orientation and Developing Others are significant at the 5% level, supporting the hypothesis. However, Interpersonal Understanding is significant at the 1% level counter to the hypothesis, with the Other LSI group showing the greater improvement. The remaining competency factors show marginal, non-significant differences. Given these results, the hypothesis is not supported.
[Chart: mean MCQ difference by competency, Converging vs Other LSI preferences]

Summary Means & Effects
Competency                    Mean Converging  Mean Other LSI  Mean Square  F-Ratio  Sig      Power (α=0.05)
Achievement orientation       6.387            5.107           71.700       5.61     0.0187*  0.655
Developing Others             5.532            4.060           94.847       5.87     0.016*   0.674
Directiveness                 5.790            5.778           0.006        0.00     0.984    0.050
Impact & Influence            5.532            5.658           0.689        0.04     0.834    0.055
Interpersonal Understanding   2.887            5.322           259.604      23.44    0.000**  0.997
Organisational Awareness      4.581            4.389           1.604        0.15     0.701    0.067
Team Leadership               5.661            5.617           0.008        0.00     0.948    0.050
Significance ** .01, * .05

Figure 17. Converging and other LSI ANOVA MCQ difference

The LSI has been subject to much criticism, as discussed, partly because it is a self-assessment and partly because it may not be a suitable instrument for working managers (Honey and Mumford, 1982). This may explain why the results of the analysis do not support the hypothesis; alternatively, critics of learning style as a concept may consider these results supporting evidence for their position. Either way, the results do not appear to provide much insight into the observed differences in learning, transfer or results. The more robust MBTI instrument may provide better insight.

H2.2 An individual with MBTI type EN will prefer a converging learning style
on the LSI
The analysis here requires simple comparison between the results of the LSI and
MBTI for each individual. The results of this simple comparative analysis show that 78
individuals within the groups show MBTI type EN, of these, only 26 show an LSI
preference for the converging learning style. If the MBTI instrument has greater construct
validity than the LSI III instrument, it may be more accurate in reflecting an individual’s
learning style. This may explain why the comparative ANOVA in H2.1 does not show the
same results.
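
The comparison underlying H2.2 is a simple cross-tabulation of MBTI type against LSI preference. As a sketch, with hypothetical file and column names, and assuming MBTI types are stored as four-letter codes such as 'ENTJ':

```python
# Minimal sketch: cross-tabulate MBTI EN types against the Kolb LSI
# converging preference (H2.2). Column names are hypothetical.
import pandas as pd

df = pd.read_csv("participants.csv")  # hypothetical data file
df["mbti_en"] = (df["mbti_type"].str[0] == "E") & (df["mbti_type"].str[1] == "N")
df["lsi_converging"] = df["lsi_style"] == "Converging"
print(pd.crosstab(df["mbti_en"], df["lsi_converging"]))
```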
H2.3 An individual with MBTI type EN will prefer and show greater improvement in managerial competency from a computer-based simulation or game intervention than individuals with other dominant personality types.
Using the MBTI indication that type EN would prefer a converging learning style shows similar differences to those shown above for the LSI preferences; however, none of the differences are significant. The combined chart and table in Figure 18 below provide a summary overview of this analysis:

[Chart: mean MCQ difference by competency, MBTI 'converging' (EN) types vs other types]

Summary Means & Effects
Competency                    Mean Converging  Mean Other LSI  Mean Square  F-Ratio  Sig    Power (α=0.05)
Achievement orientation       6.572            5.412           2.857        0.22     0.641  0.075
Developing Others             5.000            4.301           20.569       1.24     0.266  0.199
Directiveness                 5.552            5.869           4.241        0.26     0.607  0.081
Impact & Influence            5.345            5.725           6.094        0.39     0.533  0.095
Interpersonal Understanding   3.983            4.843           31.133       2.56     0.111  0.357
Organisational Awareness      4.690            4.353           4.768        0.44     0.508  0.101
Team Leadership               5.776            5.575           1.694        0.09     0.769  0.059

Figure 18. Converging and other MBTI LSI ANOVA MCQ difference

Reviewing the results of ANOVA of all reaction, learning and MCQ data by activity type and Kolb LSI (or MBTI) learning style preference shows a significant difference in every factor, often as a combination of preference and activity type. However, the results are mixed, showing converging and other preferences apparently in positions counter to the literature. This may be because all the programmes are blended and designed, as a whole, to appeal to, and be useful to, all learning preferences. Future research might compare a group who only undertake the simulation, with no tutor input or interference; however, this researcher's personal experience suggests that such a group would not gain the intended learning or behaviour change, and this option was ruled out as unethical during the research design (Remenyi et al., 1998).
An interesting finding regarding converging MBTI preference relative to other preferences was evident in the enjoyment and usefulness of the activity. Participants with other learning preferences rated their enjoyment and usefulness of the activity significantly higher than converging learners, and this difference was significant only in the simulation group. Figure 19 below shows a summary of the ANOVA and charts for the simulation group.

Analysis of Variance Table (all groups)
Source            Mean Square  F-Ratio  Sig (* .05 level)  Power (α=0.05)
Enjoy Activity    4.892        5.76     0.017*             0.667
Useful Activity   2.970        4.04     0.046*             0.517

Simulation group only, ANOVA by MBTI converging and other learning styles
Source            Mean Square  F-Ratio  Sig (* .05 level)  Power (α=0.05)
Enjoy Activity    1.389        4.53     0.036*             0.558
Useful Activity   2.133        11.89    0.001*             0.926

[Charts: means of enjoyment (ENJSIM) and usefulness (USEFULSM) of the activity, Converging vs Other LSI, simulation group only]

Figure 19. MBTI Learning style and enjoyment/usefulness of activity

Further analysis showed that Assimilators and Accommodators rated their enjoyment of the simulation higher than Convergers and Divergers, and all three other preferences rated the usefulness of the simulation higher than Convergers, significant at the 5% level.

From this, we might surmise that the management simulation appeals to other learning style preferences, contrary to the literature. This may be explained by the high fidelity of the simulation inducing high levels of presence (Salzman et al., 1999, Stanney et al., 1998), such that participants treat the activities within the simulation as concrete experience and reflective observation of events; certainly the simulation's designers intended this to be the case. The game, on the other hand, has less fidelity, is more obviously a game and not reality, and appealed almost equally to all participants.
For the purposes of this research, learning style does not appear to have a
significant influence on learning or learning transfer in such blended training programmes
but the design of the simulation or game in terms of its fidelity may be viewed as
increasingly important (see Mowbray et al., 2003, for more on evaluating fidelity). The
analysis has highlighted a number of potentially interesting avenues for further research
but any conclusion drawn from these results should be considered suspect and in need of
both verification and validation. We will now consider the effect of teams and teamwork
on reaction, learning and transfer of learning.

6.3 Effect of teams and groups


The literature on how individual performance is affected by membership of teams or groups suggests that teams produce superior performance to individuals (Higgs, 1999). This researcher is interested to observe whether this is reflected in participant reaction to working in a team during the training: if an individual enjoys working in his or her team, they should show greater performance in learning and transfer.
RQ3 Does participant rating of their enjoyment and perceived usefulness of team
work reflect differences in learning or learning transfer?
We have already seen in RQ1 that the participant's reaction to the experiential activity correlates with learning and learning transfer, and noted in the same section that the participant's reaction to the teamwork element of the training programme was the next most influential factor. This question relates more directly to the literature suggesting that greater learning may take place in the simulation or case study teams than in the competing teams in the game (Higgs, 1999).
The combined chart and table in Figure 20 below show the differences in MCQ and learning by activity type.

[Chart: means of MCQ difference and learning by activity type (Simulation, Game, Case study); thicker lines highlight factors significant at the 1% level and Interpersonal Understanding (IU), whose profile runs counter to all others]

ANOVA by Activity type
Factor     Mean Square  F-Ratio  Prob Level  Power (α=0.05)
AO         18.40        1.40     0.248       0.299
DO         17.50        1.10     0.335       0.242
DI         59.94        3.92     0.021*      0.702
II         8.18         0.53     0.589       0.136
IU         33.40        2.80     0.063       0.547
OA         30.06        2.79     0.063       0.546
TL         112.83       6.31     0.002**     0.895
Learning   18.41        47.43    0.000**     1.000
Significance ** .01, * .05

Figure 20. ANOVA MCQ and learning by activity type

The data are counter to Higgs' (1999) suggestion, excepting Interpersonal Understanding, which is not significant. Learning, Directiveness and Team Leadership all show a significant difference, and all show the game groups faring better than the simulation or case study groups.
Interpersonal Understanding development in the simulation group may be because the teams of three discuss their decisions both with each other and, effectively, with the simulation, which is co-operative in design and requires participants to consider the reactions of the simulation characters (players). The case study teams typically work towards consensus in decision-making, and hence this may be a better medium than a competitive game to develop such skills.
The game, being competitive (and sometimes extremely competitive!) may provide
a better medium for participants to develop their Team Leadership and Directiveness
skills.
H3.1 Participant rating of enjoyment and perceived usefulness of teamwork will
positively correlate with learning and change in managerial competency.
Table 42 below shows a summary correlation matrix of the reaction variables
enjoyment and perceived usefulness of teamwork with each of the MCQ competency
changes and learning.

Table 42. Correlation teamwork with MCQ and learning

Correlation of enjoyment and usefulness of teamwork with MCQ change and learning
                      AO      DO      DI      II      IU      OA     TL       Learning
Enjoy Teamwork        0.090   -0.062  -0.005  -0.029  -0.058  0.007  0.181    0.372
Sig                   0.148   0.322   0.943   0.640   0.355   0.915  0.003**  0.000**
Usefulness Teamwork   0.062   -0.052  0.029   0.084   -0.095  0.074  0.225    0.332
Sig                   0.320   0.405   0.641   0.176   0.129   0.234  0.000**  0.000**
Significance ** .01, * .05
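
As a sketch of how correlations such as those in Table 42 might be computed, with hypothetical file and column names:

```python
# Minimal sketch: Pearson correlations of the teamwork reaction ratings
# with each MCQ change score and with learning, as in Table 42.
# File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("participants.csv")  # hypothetical data file
outcomes = ["dif_ao", "dif_do", "dif_di", "dif_ii",
            "dif_iu", "dif_oa", "dif_tl", "learning_post"]
for reaction in ["enjoy_teamwork", "useful_teamwork"]:
    for col in outcomes:
        pair = df[[reaction, col]].dropna()
        r, p = stats.pearsonr(pair[reaction], pair[col])
        print(f"{reaction} vs {col}: r = {r:.3f}, p = {p:.3f}")
```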

Following Bal (1995), those who enjoy the teamwork and find it more useful may have a greater sense that they belong to a team, engendering freer debate and discussion leading to greater performance. However, only Team Leadership and Learning are significant at the 1% level, whilst the remaining competencies show no significant relationship. The hypothesis is only partially supported and, considering the evidence above, counter-intuitive regarding Interpersonal Understanding. We might expect someone who enjoys and finds teamwork useful to also want to understand others and, particularly in these inter-cultural groups and teams, to demonstrate greater inter-cultural sensitivity (Hofstede, 1980, 1991, Trompenaars and Hampden-Turner, 1993); such does not appear to be the situation here.
This supports the notion that it is not only the team's composition (Belbin, 1981, Belbin et al., 1976) but also the task and the context in which the task takes place that shapes performance. The results suggest that a different learning context, such as cooperation, collaboration or competition with others, may influence which behaviours or skills are most developed through the training (Higgs et al., 2003). Given the simplicity of the constructs measured, albeit ones frequently seen in training programme feedback forms, the evidence is somewhat sparse; the multitude of variables considered to affect a team's effectiveness, and how the mix of individuals in the team affects performance, suggests that many factors should be taken into consideration before passing judgment on the results seen here. However, it may help focus further research efforts to determine whether, as suggested here, the nature and context of the team task within a learning environment affect the development of particular competencies (Higgs et al., 2003). This should be considered carefully, as the number of team members varied within any particular programme group and differed between the programmes under study. As team size was neither pre-determined nor able to be enforced rigorously by the researcher, this may be a limitation on the results.

6.4 Effect of demographics
The final area of analysis considers the demographic data collected as part of the
study. The literature review highlights a number of ways in which an individual’s personal
background and current situation may shape learning.
RQ4 How do age, gender, seniority, prior education and cultural background
influence the results?
It can be seen from the above that the experiential activity and, to a significant but lesser extent, teamwork are precursors to learning and learning transfer, but they do not account for all learning or change in behaviour. Nor, from the literature, would we expect this to be the case. Other factors influence how an individual learns and, importantly, their choice to transfer such learning to the workplace. This research does not attempt to cover all possible variables that may influence an individual to learn and transfer learning, and most notably does not include a measure of motivation to learn or to transfer learning, for reasons seen above in the operationalisation of the research. However, from the literature and life experience, we might anticipate some commonly cited individual differences to be reflected in learning or behaviour change.

H4.1 Younger managers will enjoy the simulation or game more than older
managers.
Participants' ratings of their enjoyment of the simulation or game activity during the training programme show that younger managers (those under 30 years) enjoyed the activity significantly more than older managers (Figure 21). Similarly, younger participants rated the usefulness of the simulation or game activity significantly higher than older managers (Figure 22).

Enjoyment of simulation or game by age group
Age group   Mean    Standard Error
Under 30    4.879   0.105
30 to 35    4.521   0.007
36-40       4.609   0.007
Over 40     4.479   0.008

Mean Square  F-Ratio  Sig     Power (α=0.05)
1.103        3.44     0.018*  0.766
Significance * .05

[Chart: means of enjoyment (ENJSIM) by age group]

Figure 21. Enjoyment of Simulation or Game by age group

Usefulness of simulation or game by age group
Age group   Mean    Standard Error
Under 30    4.831   0.102
30 to 35    4.454   0.007
36-40       4.725   0.007
Over 40     4.531   0.008

Mean Square  F-Ratio  Sig      Power (α=0.05)
1.441        4.75     0.003**  0.896
Significance ** .01

[Chart: means of usefulness (USEFULSM) by age group]

Figure 22. Usefulness of Simulation or Game by age group

The anomaly in both ratings appears to lie with managers in the 30 to 35 age group, so by way of comparison the same ANOVA test was applied to the case study groups only. Usefulness of the case study does not show significant results, though Figure 23 indicates that the 30 to 35 age group enjoyed the case study activity more than other age groups, significant at the 10% level.

Enjoyment of case study by age group
Age group   Mean    Standard Error
Under 30    2.688   0.175
30 to 35    3.128   0.014
36-40       2.556   0.023
Over 40     2.600   0.031

Mean Square  F-Ratio  Sig    Power (α=0.05)
1.148        2.33     0.085  0.681
Significant at the 10% level

[Chart: means of enjoyment (ENJSIM) by age group, case study groups only]

Figure 23. Enjoyment of Case Study by age group

Such results may be counter-intuitive given the indications of Aldrich (2002, 2005); however, in the societies in which this research was conducted, and based only on anecdotal evidence and the author's personal experience, those in the 30 to 35 age group tend to prefer a very safe environment, enjoying training that is familiar in style and methodology to their own higher education. This will be investigated in part when we consider the influence of culture and prior academic achievement in the analyses for H4.5 and H4.6 below.
Further analysis of the differences in enjoyment and usefulness rated by participants, by age and activity type, was undertaken to understand why the results for all groups are counter-intuitive. Table 43 shows ANOVA of enjoyment, usefulness and learning by age and activity type, with significant differences evident for each.
Table 43. Age and activity type ANOVA

Analysis of Variance Table
Source Term          Mean Square  F-Ratio  Sig      Power (α=0.05)
Enjoyment Activity
  A: Age             0.493        1.39     0.245    0.366
  B: Activity Type   71.142       200.06   0.000**  1.000
  AB                 0.733        2.06     0.058    0.741
Usefulness Activity
  A: Age             0.431        1.09     0.354    0.293
  B: Activity Type   38.951       98.47    0.000**  1.000
  AB                 1.006        2.54     0.021*   0.839
Learning
  A: Age             0.165        0.44     0.722    0.139
  B: Activity Type   12.179       32.83    0.000**  1.000
  AB                 1.475        3.98     0.001**  0.970

Significance ** .01, * .05
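
Two-way ANOVAs of this kind, with an interaction term for age by activity type, could be sketched as follows; the formula interface is statsmodels', while the data file and column names are hypothetical.

```python
# Minimal sketch: two-way ANOVA (age group x activity type, with
# interaction) corresponding to Table 43. File and column names are
# hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("participants.csv")  # hypothetical data file
for outcome in ["enjoy_activity", "useful_activity", "learning_post"]:
    model = smf.ols(f"{outcome} ~ C(age_group) * C(activity_type)",
                    data=df).fit()
    print(outcome)
    print(anova_lm(model, typ=2))  # F-ratios and p-values for A, B, A:B
```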

Figure 24 shows the charts of the same ANOVA, providing a clear view that the simulation and game groups rated enjoyment, usefulness and learning higher than the case study group, and that the simulation was found enjoyable by both the younger managers and the over-40s. The over-40s show higher learning from the case study than their younger counterparts and found it more useful.

[Charts: means of enjoyment (ENJSIM), usefulness (USEFULSM) and post-test learning by age group for the simulation, game and case study groups]

Figure 24. Age and activity type ANOVA charts

A plausible reason for such a finding is that these older learners find the game less enjoyable than the simulation due to a possible preference for the 'softer' strategic-level decision-making attributes or the non-competitiveness of the simulation. This is consistent with Drew and Davidson's (1993) claims that these learners prefer practical and relevant training environments which draw on their experience in promoting strategic thinking and understanding of complex business systems. We turn now to the vexing question of gender difference.

H4.2 Male and female participants will show no distinguishable differences in
competencies before or after the training intervention
After Kenworthy and Wong (2005), we might anticipate differences in learning and learning transfer between male and female participants. The data show a significant difference in learning across all groups at the 10% level, and a significant difference at the 5% level in the case study and game groups, but not in the simulation group.
From this, and following the correlation seen between learning and behaviour change, we might anticipate that female managers from the case study and game groups will show a greater improvement in learning transfer than males, contrary to the next hypothesis:
H4.3 Male and female participants will show no significant difference in change of
managerial competency
In spite of the evidence above, pre and post test MCQ results show only one
significant (at 5%) difference between male and female participants – Achievement
Orientation (Figure 25).

Change in Achievement Orientation by gender
Gender   Mean    Standard Error
Male     5.044   0.268
Female   5.985   0.039

Mean Square  F-Ratio  Sig     Power (α=0.05)
51.161       3.92     0.049*  0.506
Significance * .05

[Chart: means of Achievement Orientation difference (DIFAVGAO) by gender]

Figure 25. Gender and achievement orientation
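
With only two groups, the ANOVA reported in Figure 25 is equivalent to a two-sample t-test on the change in Achievement Orientation. A sketch follows, with hypothetical file and column names; Welch's correction is used here, a variant that relaxes the equal-variance assumption.

```python
# Minimal sketch: compare the change in Achievement Orientation between
# male and female participants. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("participants.csv")  # hypothetical data file
male = df.loc[df["gender"] == "Male", "dif_ao"].dropna()
female = df.loc[df["gender"] == "Female", "dif_ao"].dropna()
t_stat, p_value = stats.ttest_ind(male, female, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```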

Considering that participants' bosses appear to rate performance highly when an individual demonstrates high Achievement Orientation, this change in behaviour may not be simply nice to have, but a critical differentiator for business results. So why should this be the case? Anecdotal evidence from discussions between this researcher and the participants suggests that the training programme provided them an opportunity, otherwise not available, to practise and demonstrate this competency, which might otherwise be frowned upon in these more patriarchal (Hofstede, 1980) societies. This would seem an appropriate topic for further research using a case study and more qualitative approach.

H4.4 Senior managers show a greater level of managerial competency before the
training.
It is reasonable to assume that managers in more senior organisational positions would demonstrate higher levels of managerial competency as measured by the MCQ. Figure 26 shows that this is not the case: managers show small, non-significant, higher levels of managerial competency before the training programme than senior managers.

[Chart: pre-test MCQ means for managers and senior managers across the seven competencies]

Competency                    Mean Square  F-Ratio  Sig    Power (α=.05)
Achievement orientation       14.954       1.84     0.176  0.272
Developing Others             32.152       2.66     0.104  0.369
Directiveness                 23.084       2.91     0.089  0.397
Impact & Influence            4.852        0.64     0.426  0.124
Interpersonal Understanding   5.607        1.10     0.296  0.180
Organisational Awareness      7.618        1.06     0.305  0.175
Team Leadership               34.616       2.19     0.140  0.314

Figure 26. Pre MCQ managers and senior managers

We will investigate this apparent anomaly further in H4.6.

H4.5 Participants with higher prior academic achievement demonstrate higher
levels of managerial competency
The summary chart and table in Figure 27 below show the post-test means across the MCQ competencies by prior academic attainment. The error bars show the difference from pre- to post-test MCQ:

[Chart: post-test MCQ means by prior academic attainment (Undergrad, Graduate, PostGrad); error bars show the pre- to post-test MCQ difference]

Mean Difference
                AO      DO      DI      II      IU      OA      TL
Undergrad       4.220   4.902   5.878   4.122   4.415   3.854   4.585
Graduate        5.649   4.278   5.476   5.578   4.957   3.988   5.600
PostGrad        5.417   4.250   5.896   5.979   4.125   4.708   4.833
Mean Square     32.169  7.041   5.713   50.346  19.639  17.651  24.372
F-Ratio         2.47    0.44    0.36    3.32    1.63    1.63    1.31
Sig             0.087   0.645   0.696   0.037*  0.198   0.199   0.270
Power (α=0.05)  0.493   0.121   0.108   0.626   0.343   0.342   0.283
Pre-test significant difference: 0.0025** (Interpersonal Understanding), 0.0134* (Organisational Awareness)
Post-test significant difference: 0.022* (Impact & Influence), 0.002** (Organisational Awareness)

Figure 27. Summary ANOVA MCQ differences on prior academic attainment

Across six of the seven competencies, graduates were rated lowest on the pre-test, with Organisational Awareness and Interpersonal Understanding showing a significant difference. Graduates were rated higher than both post-graduates and undergraduates in Directiveness, though not significantly.
Post-test ratings show increments across all groups and all competencies, with Impact & Influence and Organisational Awareness showing a significant difference.
The data do not support the hypothesis: prior academic attainment appears to explain neither differences in managerial competencies prior to the training programme nor the change in competency levels observed following it.
H4.6 There will be no difference between participants from a different cultural
heritage at the four levels of evaluation.
After Hofstede (1980, 1991), we might anticipate that participants of different cultural heritage would rate their enjoyment of each aspect of the training programme differently. The data showed that Western participants consistently rate their enjoyment lower than the others; grouping the Asian participants together and comparing the results, enjoyment of the activity and debrief sessions was rated significantly (5%) higher by Asian participants.
At the level of learning, Asian participants showed significantly greater learning than their Western counterparts (Figure 28):

Asian-Western Learning Difference
Mean Square  F-Ratio  Prob Level  Power (α=0.05)
4.727        9.3      0.003**     0.859
Significance ** .01

[Chart: means of post-test learning, Asian vs Western participants]

Figure 28. ANOVA Learning difference Asian and Western heritage

Significant differences are seen between participants based on their cultural
heritage in the simulation and game groups. This might be anticipated when taking the
concept of “face” into consideration (De Vos, 1985, Ho, 1976, Hwang, 1987) where both
the simulation and game provide a setting in which participants are making judgements in
a safe environment and thus offset the relative proclivity of Asians to dwell on face-saving
in the presence of an audience. At the learning transfer level, Asian participants show a
significant greater improvement in Developing Others (Figure 29).

Asian-Western, Simulation and Game groups

Developing Others
Mean Square  F-Ratio  Sig      Power (α=0.05)
118.921      7.41     0.007**  0.773
Significance ** .01

Learning
Mean Square  F-Ratio  Sig      Power (α=0.05)
1.951        8.34     0.004**  0.819
Significance ** .01

[Charts: means of Developing Others difference (DIFAVGDO) and post-test learning, Asian vs Western participants, simulation and game groups only]

Figure 29. ANOVA Simulation and Game Group - learning and developing others - Asian-Western difference
An interesting aspect uncovered during the analysis is that Asian managers show higher ratings than Western senior managers in five MCQ factors at pre-test and three at post-test. The Western senior managers are predominantly expatriates, and the data suggest that they do not demonstrate as high a level of managerial competency as their Asian, predominantly local, subordinates in Achievement Orientation, Developing Others and Interpersonal Understanding. The combined table and chart in Figure 30 below show this finding, which is somewhat surprising given that the reasoning frequently given for employing expatriate senior managers is a shortage of local talent. The data suggest that this may not be the situation and that bringing in expatriates could be causing local talent to be overlooked for promotion in spite of their demonstration of superior levels of managerial competency. It may also suggest that Organisational Awareness is a critical competency factor, for the participating organisations, when deciding whether to promote local managers or bring in expatriates.
                              Pre MCQ                                      Post MCQ
                              Asian    Western Senior  F-Ratio  Sig        Asian    Western Senior  F-Ratio  Sig
                              Manager  Manager                             Manager  Manager
Achievement orientation       16.33    15.10           7.36     0.007**    21.60    20.00           8.11     0.005**
Developing Others             17.67    17.45                               21.52    21.00           9.82     0.002**
Directiveness                 15.90    15.10           4.12     0.043*     20.57    21.06
Impact & Influence            16.19    16.13                               21.19    22.77
Interpersonal Understanding   16.57    16.87                               21.86    21.13
Organisational Awareness      16.62    18.06                               21.52    22.00
Team Leadership               16.00    15.48           3.95     0.047*     20.07    20.48
Significance ** .01, * .05

[Chart: pre- and post-test MCQ means for Western senior managers and Asian managers across the seven competencies]

Figure 30. ANOVA Cultural heritage and position, pre and post test MCQ

Chapter 7 Discussion
7.1 Analysis summary
Table 44 below presents a summary of the results for the research questions and
hypotheses:
Table 44. Summary Research Questions and Hypotheses and Findings

RQ1   Are computer-based business training simulations and games an effective way to develop management learning and learning transfer? Finding: Yes
H1.1  The simulation and game groups will show higher ratings in participant reaction than the case study groups. Finding: Accept at .01
H1.2  The simulation and game groups will show greater learning than the case study groups. Finding: Accept at .01
H1.3  The simulation and game groups will show greater change of demonstrated managerial competency than the case study groups. Finding: Reject
H1.4  The simulation and game groups will show higher bosses rated performance change between groups. Finding: Accept at .01
H1.5  Participants' reaction will correlate with learning. Finding: R2 0.39, Accept at .01
H1.6  Participant learning will correlate with change in managerial competency. Finding: R2 0.33, Accept at .01
H1.7  Participant change in managerial competency will correlate with bosses' rating of performance impact. Finding: R2 0.47, Accept at .01
RQ2   Do participants with different learning style preferences show differences in reaction, learning or learning transfer? Finding: Partially shown
H2.1  Converging learners prefer and show greater improvement in managerial competency from a computer-based simulation or game intervention than individuals with other learning style preferences. Finding: Reject
H2.2  An individual with MBTI type EN will prefer a converging learning style on the LSI. Finding: Reject
H2.3  An individual with MBTI type EN will prefer and show greater improvement in managerial competency from a computer-based simulation or game intervention than individuals with other dominant personality types. Finding: Reject
RQ3   Does participant rating of their enjoyment and perceived usefulness of team work reflect differences in learning or learning transfer? Finding: Partially shown
H3.1  Participant rating of enjoyment and perceived usefulness of teamwork will positively correlate with learning and change in managerial competency. Finding: Reject
RQ4   How do age, gender, seniority, prior education and cultural background influence the results?
H4.1  Younger managers will enjoy the simulation or game more than older managers. Finding: Partially accept
H4.2  Male and female participants will show no distinguishable differences in competencies before or after the training intervention. Finding: Reject at .10
H4.3  Male and female participants will show no significant difference in change of managerial competency. Finding: Accept, except AO
H4.4  Senior managers show a greater level of managerial competency before the training. Finding: Not supported
H4.5  Participants with higher prior academic achievement demonstrate higher levels of managerial competency. Finding: Not supported
H4.6  There will be no difference between participants from a different cultural heritage at the four levels of evaluation. Finding: Not supported

The simulation and game groups demonstrate participant reaction to the training programme, learning, learning transfer and business performance impact comparable, and often superior, to that demonstrated by participants in the case study group. It is clear from the data that both simulations and games can be said to be effective, and perhaps superior, methods of developing managerial competencies compared to the case study method, as found by Wolfe and Guth (1975). Thirty years on, this study adds important dimensions to theirs: firstly, it has been conducted in the business training world rather than an academic environment, and secondly, the research has considered not just the learning gained by participants but also the transfer of managerial competencies into the workplace and, to an extent, the business impact of the training.
The results show some support for the idea that each of Kirkpatrick's levels of evaluation is a precursor to the next higher level, with significant correlations from participant reaction to learning and from learning to transfer of learning, noting in particular that participant enjoyment and perceived usefulness of the experiential activity correlates most strongly, including with business performance impact. The results provide empirical support for the advocates of simulation and game-based learning: participants deem them more enjoyable and more useful than more traditional methods such as case studies or lectures and, partly through greater enjoyment, they lead to greater learning and to greater transfer of learning in terms of demonstrated behaviours in the workplace after the training event.
Whilst strong and significant correlations are seen, the use of one method over
another does not explain all the learning, transfer or business performance impact seen.
Other factors have been considered and in particular, the research has considered the
impact of personal learning preference and personality type with inconclusive results.
The results do not provide support for the proposition that computer-based simulations are more effective for converging learners than for any other learning style. The failure to detect a significant influence of learning styles may be due to issues relating to the use of the Kolb (1999) LSI. Although extensively used in training settings, the evidence for the construct validity and reliability of this measure has not been convincing (Towler and Dipboye, 2003), and this may be reflected in the results seen. The literature having forewarned of this possible issue, the researcher also included the well-known and widely used MBTI instrument (Hodgkinson and Sadler-Smith, 2003). The MBTI instrument is well validated (Myers and McCaulley, 1985) yet shows similarly inconclusive results to the LSI preferences. That the MBTI and LSI instruments did not concur with each other might have suggested that one or the other is more useful; however, the LSI showed the converging style enjoying the simulation or game more than other LSI preferences (not significantly), whilst the MBTI showed preferences other than converging enjoying it more (significantly). The use of computer-based simulations may have different effects on different learners, and these differences may extend to the preferred way of encountering and assimilating new information (Snyder and Vaughan, 1996). The inconclusive results may highlight the importance of fidelity in simulations and games (Alessi, 1988, Mowbray et al., 2003), the concept of presence (Salzman et al., 1999, Stanney et al., 1998) and/or the suitability of simulations and games, in a blended training programme, for participant enjoyment, learning and learning transfer regardless of indicated learning preference.
Other factors considered about participants' backgrounds have highlighted some surprising results, in particular that females show a greater improvement in Achievement Orientation than their male counterparts and that this competency, above others, seems to be the foremost driver of bosses' performance ratings. The research also shows that Western senior managers show lower levels of competency than Asian managers, excepting Organisational Awareness, suggesting that this competency may be considered more important when choosing who will fill senior management posts in the region.
The findings about participant age and enjoyment, perceived usefulness and learning within simulation and game-based training programmes have important implications for the design and delivery of such interventions. They suggest that the simulations used in the present study may be well-suited for older, senior managers, as learning is grounded in a meaningful experience of 'doing'. Garris et al. (2002) proposed that the key features of such advanced, dynamic simulations are that they represent real-world systems and contain rules and strategies that allow flexible and variable learning activity to evolve. Comparatively, in a game, what we are triggering is the 'competitive spirit' (i.e. playfulness, achievement, greed, and victory) (Prensky, 2000), and the weakness of games is their inability to provide the learner with a dynamic environment (Feinstein et al., 2002). This finding should, however, be interpreted carefully, as the results could be biased by the smaller numbers of participants in both the youngest and oldest age groups. The data show that younger managers find the competitive nature of the game more enjoyable, since they enter the workforce with much more competitive computer-gaming experience (Lundy, 2003, Prensky, 2000, Aldrich, 2005), while older, more senior managers find the simulation more enjoyable as it provides an ideal way of learning by leveraging their business experience (Bertsche et al., 1996).

In short, the results provide evidence that using a management simulation or game within a blended learning training programme produces greater participant enjoyment and perceived usefulness, greater learning and higher levels of demonstrated managerial competency than the same programme using case studies. It could be that
knowledge learned from simulations and games is more likely to be integrated into the
cognitive structure of learners because of the higher level of active participation, interest,
enjoyment or involvement (Randel et al., 1992). As highlighted by Feinstein et al. (2002),
these advanced computer-based simulations involve immersing learners in an environment
in which they actively participate in acquiring knowledge. By allowing participants to
“practice their skills of decision-making and skills of planning alternative strategies” and
evaluate the outcome of their decisions (Hyman, 1978, p155), such simulations provide
dynamic visual environments to see the results of manipulating variables that cannot be
duplicated in typical turn-based strategies for games (Feinstein et al., 2002).
There may also be further benefits for Asian participants over their Western
counterparts because there is no issue of losing face with a computer simulation or game,
and for female participants in Asian societies because the environment provides them an
opportunity to demonstrate traditionally more masculine traits of Achievement Orientation
and leadership.
The results offer support for the learning benefits of computer-based simulations and games and are consistent with the findings of recent research in cognitive science and situated learning, which indicate that simulations and games promote 'what-if?' reasoning and research (Millman, 2000). As emphasised by Romme (2003), business
simulations create opportunities to build substantial synergy between learning to think in
relevant theoretical frameworks and learning how to deal with the complexity of actual
settings. Similarly, Gopinath and Sawyer (1999) suggested that the goal in business
education of producing managers who can function effectively in complex business
environments requires learning experiences that lead to higher levels of learning, and
computer simulations and games can help achieve the desired higher-level conceptual
processes of relativism and comprehension through application, analysis, synthesis and
evaluation.
The significance of the effect of teamwork and debriefing has wider implications
for training and development. The importance attributed to feedback and teamwork
suggests that computer-based simulations supplement, but do not replace, immediate
involvement in real settings (Dede, 1996). Henke (2001) emphasises that the use of computer simulations, particularly those with modern graphical user interfaces, may have much to commend it, but the simulation is still only a supportive tool, albeit a very powerful one, and not an end in itself; it is not an educational panacea (Feinstein et al.,
2002). Similarly, Aldrich (2005) supports the notion that simulations as part of training
programmes support the learning process and learning transfer and Fripp (1993, p54)
concluded that “the best results are achieved when simulations are used in conjunction
with other learning methods” and that “no one learning method is able to provide all the
knowledge and skills required by managers.”
Computer-based simulations have been, and continue to be, valuable tools for many current and future challenges in management learning and development. To enhance learning from simulations, it is essential to look beyond an either/or approach to their use alongside traditional methods, in order to understand how and why computer simulations work or do not work. This study has highlighted the importance of several individual factors in the effectiveness of computer-based simulations, and demonstrated their usefulness for learning and learning transfer in developing managerial competency.
Although this study is limited and further research is recommended, a step has been taken
towards the development of a more comprehensive and empirical understanding of the
effectiveness of computer-based simulations in management learning and development. It
is not possible to make management training perfect, but over time, it is possible to reduce
our errors and gradually improve the quality of our professional efforts (Athanasou, 1998).

7.2 Limitations
Although this study has led to some important results, several limitations should be
noted: First, the relatively small size of the sample in this research limits the degree of
confidence we can have in drawing generalisable conclusions from the present results. The
research is limited in generalisability for two principal reasons: geographic location and sample size. Haseman et al. (2002) argue that a larger sample size is necessary to ensure
adequate statistical power even though the sample size achieves more than the minimum
appropriate for the statistical techniques employed (Hair et al., 1998). As outlined above,
this was beyond the resources of the present study and due to pragmatic constraints and
the real development needs of client organisations willing to participate in the study.
Easterby-Smith (1986, 1994) highlighted that it is often quite difficult in practice to
achieve samples of sufficient size. Moreover, the present results are necessarily
conditional, specific to the sample characteristics, computer-based simulations and games,
and the instructional tools used in the training programmes. Caution is advised in
projecting the results to other settings. As suggested by Chapman and Sorge (1999),
although computer-based simulations can be effective learning tools, all simulations are
not created equal, and it is the responsibility of individual instructors to ensure the quality
and efficacy of the one chosen to support learning.
Secondly, the instruments used were either entirely self-reporting or included self-
report measures, and data based on such requires caution in its interpretation. Albertson
(1995) suggests common problems with self-report measures, including: 1) ratings may
not correlate with training application; 2) data are sensitive to participant mood at the time
of completion; 3) ratings are sensitive to wording nuances; 4) surveys are often completed
quickly and without a great deal of thought; 5) the surveys are often given once only, and
thus, cannot assess training concept retention. Indeed, a possible issue with this research
could be that certain terminology in the LSI, for example “hunches”, “quiet and reserved”,
may have negative connotations in an Asian cultural context, which may bias self-scoring
and reduce the extent to which we can obtain significant relationships. Such issues have
been mitigated in regard to the measures for learning and learning transfer and business
impact; however, drawing conclusions relating self-reported measures to more objective measures still requires caution. Learning in this study was demonstrated through a practical exercise and assessed by participants' bosses; more formal measures of knowledge learned could be included, either by means of testing or, perhaps, through qualitative investigation to establish knowledge learned and used after the training. This was not
appropriate for the client organisations in this study, but may be included in studies
wishing to understand a more complete picture of the aspect of learning itself.
Thirdly, geographically, the research was undertaken in Singapore and Malaysia,
with a mixture of local and international client organisations and different industries.
Though similar studies in the USA and Europe have shown similar results (e.g. Wolfe and
Guth, 1975, Wolfe and Roberts, 1986, Wolfe and Roberts, 1993), many have previously
been criticised for relying on perceptions of participants only and not on empirical data
(Anderson and Lawton, 1997b). Inclusion of organisations from other countries would
enable greater generalisability. As has been outlined above, this was not within the
resources available to the researcher and, as the training programmes are run by a
commercial entity, the study is restricted to willing client organisations to participate.
Fourthly, the participants were chosen by the sponsoring client organisation in
each case based on their needs (and not those of the researcher), as such the groups are not
absolutely equal in terms of age, experience, gender etc and thus do not allow for an ideal
comparison. Further, the team sizes in each group varied according, in part, to the total number of participants in the group and the available facilities at the training location. As such, the diversity in each team may not be apparent, and the effects of teamwork on performance are more difficult to compare equitably.
The results also clearly show that factors other than those identified in this research
model are likely to be having an influence. This is not surprising given the complexity of
the human mind and the ongoing debate about what learning is, let alone how people
learn. However, the researcher considers that the omission of measures of motivation to learn and motivation to transfer learning may be an important limitation. A potentially suitable instrument, the LTSI (Holton et al., 2001, Holton et al., 2000), proved too much for participants on the pilot programmes; in retrospect, it may have been more useful than the Learning Style or Personality Type questionnaires.

7.3 Directions for further research
Additional research, replicated in other training settings with larger sample sizes, would strengthen the argument for the effectiveness of computer-based simulations and games across different learning styles and in different countries. A deeper understanding of the process of learning will also require longitudinal information about what happens as a result of the training, observing how participants change over time consequent to being exposed to the training programme.
Certainly, an optimal way would be to implement strict experimental designs,
obtain conclusive empirical evidence and replicate the results found in the present study.
Wolfe (1990) argued that maintaining rigour in simulation-based learning assessment is
important because there is controversy surrounding its learning value, and because non-
rigorous evaluation research has only added to the confusion surrounding the controversy;
but Wolfe also noted that many features of the ideal research design are probably
impossible to implement in management training, a view shared by Easterby-Smith
(1994). It is thus recommended that for any training evaluation to be meaningful, training
criteria must be psychometrically sound, valuable for decision-makers, and must be able to
be collected within typical organisational constraints (Tannenbaum and Woods, 1992). An
alternative approach for researchers would be to consider a case study approach and
include qualitative assessment to provide a richer source of data that may help identify
more clearly the factors that drive learning and learning transfer when using simulations or
games.
Future research should attempt to increase the validity and credibility of evaluation
data by using multiple instruments and data sources to assess the effectiveness of
computer-based simulations and games. In general, the more data sources used to evaluate
a training programme, the more complex is the picture of its effectiveness (Carnevale and
Schulz, 1990). It would, for example, be useful to also assess participant personality with a
robust instrument (such as the OPQ or 16PF) before the training to potentially explain the
impact of, for example, team roles on the team leadership measurement. It may also be
useful to understand participant motivation and personal values which are often
considered critical in transferring the learning to the workplace (Zalatan and Mayer, 1999,
Whitehall and McDonald, 1993, Garris et al., 2002, Tannenbaum et al., 1991).
This study has provided useful insights that could facilitate further research to
examine the impact of factors concerning the nature of the simulation itself (i.e. the
fidelity and veracity) on learning and learning transfer. For example, Hannafin et al. (1996) argue that few researchers have questioned the benefits of high-fidelity simulation design,
suggesting that more is not necessarily better, and that we should be aware of the cognitive
demands that computer-based simulations place on learners and thoughtfully apply
techniques that support, not interfere with, learner effort. While it is believed that
multimedia provides a tool and medium to accommodate different learning styles, it
remains a question as to whether this accommodation influences the retention, recall and
application of subject content (Snyder and Vaughan, 1996).
Finally, future consideration should be given to the comparative effectiveness of
various computer-based training (CBT) methods (e.g. CD-ROM vs. web-based,
synchronous vs. asynchronous delivery). According to Hannafin et al. (1996), the
computer represents a unique learner-centred opportunity, learners are able to control a
wide variety of instructional variables including the subject matter content, the context of
instructional situation, amount of practice undertaken and the amount of advice to be
provided during the instruction. In effect, computer based learning systems can encourage
learners to build relationships among learning concepts through exploration,
experimentation and manipulation much more easily than in a traditional classroom
environment. The relatively recent ease of access to the Internet has already fuelled the
proliferation of CBT to both industry and university (Henke, 2001, Schank, 2002, Aldrich,
Drew and Davidson (1993) proposed that in the near future new types of technology, such as artificial intelligence and virtual reality, would provide expanded possibilities for CBT. We do not appear to be in that near future yet, though examples of the implementation of such technologies abound in research laboratories (Witmer and Singer, 1994, Stanney et al., 1998, Salzman et al., 1999, Amstutz et al., 2004). As recommended by Feinstein (2004), rather than comparing CBT to traditional forms (i.e. lectures or case studies), future research should focus on evaluating the educational effectiveness of CBT on its own merits.

7.4 Practitioner guide
What does this mean for the practitioner? Inclusion of computer-based simulations or
management games is becoming increasingly common in both management education and
management training. The claims for such programmes include that they accelerate the learning
process and provide a realistic environment for managers to practice and hone their skills, often in
a holistic fashion in Total Enterprise Simulations or capstone programmes. The first problem for
practitioners is that the term 'simulation' includes a plethora of computer-based and non-computer-based activities. As such, this practitioner guide is focused on the use of computer-based 'business' or 'management' simulations of a non-competitive nature which have a reasonable degree of realism and utilise graphical interfaces and interactive characters that approximate artificial intelligence through the inclusion of complex decision trees and multiple paths to achieving an outcome. A business or management game adds the intention to compete with others (including computer players) using the same game at the same time, though not necessarily synchronously, and tends to be largely driven by numbers and market-like algorithms that simulate reality. A game may be less interactive in and of itself and usually has a clear objective to win. Simulations are particularly suitable for the younger generation, those who have grown up with and use computer games, and for the older generation, those born in the 1960s and before who are computer literate, provided the simulation has a degree of behavioural realism and requires a deeper analysis of the situation. Competitive games appear to be most suitable for the group of managers in between and, depending upon the sophistication of the interactivity, appeal to the younger computer game players as well.
The results of this research suggest that simulations and games have advantages over more traditional teaching or training methods, such as case studies, in developing practical knowledge and the skills of using it in the workplace, and that participants enjoy these activities more. However, it is the inclusion of such activities within a blended development programme, with human tutors (in particular for feedback and debriefing sessions) and with participants working together in teams, that is likely to benefit both the individual in the development of his or her competencies and the organisation in terms of the impact on the business. In more direct terminology, practitioners should not simply replace training with a simulation-based method, but consider how such technology can enhance training effectiveness and ensure that tutors or facilitators are on hand to help draw out the learning and apply it to real life.
From this research there are five key questions for the practitioner to answer based on the
objectives of the training for the participants and for the business:
Use a game or a simulation? Essentially the difference here is competition over deeper
analysis and decision-making. Games are likely to develop Directiveness over a simulation, and a
simulation is likely to develop Interpersonal Understanding over a game.

Team-based or individual? This research has not compared directly with individuals, but
team working is likely to benefit development of Team Leadership and Directiveness
competencies in particular and participants are likely to learn more.
Fidelity? This is a difficult construct to define and even more difficult to measure but it
refers to the visual, auditory (and in virtual reality, kinaesthetic) qualities of the representation in
the simulation or game. Greater fidelity is likely to lead to greater enjoyment of the activity, i.e. if
it looks, sounds and feels like a real situation but it is known not to be, then participants will enjoy
the activity more and find it more useful.
Realism? Other than how realistic the virtual environment is, the amount of realism in the
simulation or game is considered important. The game, in this research, has more realism in that it
is competitive, but importantly it is a simplified realism. If a simulation or game has near-complete realism, the number of variables influencing the participant makes the tasks near impossible to achieve, achieving them takes longer than in real life, and frustration is likely to crowd out learning. For an example, you might like to play The Sims 2.
Stand-alone or blended? Some simulations and games are designed for an individual to
use, and may be very effective in developing the learning objectives for which they were designed.
However, this research suggests that using simulations or games to enhance learning environments
rather than replace them, is likely to be more effective. The (human) tutor or facilitator is able to
help draw out the learning from the activity and help participants apply the learning to real-life
situations that are personalised and internalised. The artificial intelligence (AI) embedded in an increasing number of simulations and games has vastly improved and goes part way to doing this, but AI does not yet have the flexibility of creative thought.
Lastly, can simulations or games be used for anyone? The research here suggests that yes, anyone and everyone will benefit from using a simulation or game in an enhanced learning environment, some more so than others. This author's advice to anyone choosing a simulation or game to enhance a learning environment is to ask the designer or supplier: "what evidence do you have to support the claims of meeting the objectives?"

Chapter 8 Conclusions

8.1 Summary of key findings and conclusions


This research set out to assess the effectiveness of using simulations and games as
a means of developing managerial competency, heeding the call for rigorous empirical
research to understand effectiveness at each of Kirkpatrick’s four levels of evaluation. In
the scientific yet pragmatic tradition of quasi-experimental design, the aim has been
achieved. This research has considered the effectiveness across different learning styles
and the effect of participant background on the results as well as the importance of team
and group effects.
Participants found the simulation and game activities more enjoyable, and perceived them
to be more useful, than the activities using case studies. Enjoyment of learning activities and
perceived usefulness (Warr and Bunce, 1995, Warr et al., 1999) are often considered to be
motivational, and importantly intrinsically motivational, leading to greater performance
(Druckman, 1995, Hoberman and Mailick, 1992, Geber, 1994). The t-test and ANOVA results
show that there are significant differences in enjoyment and perceived usefulness between the
different programmes, similar to findings in other studies (e.g. Wolfe, 1990, Wolfe and Guth,
1975).
Demonstrating the potential link from enjoyment and perceived usefulness to the next
levels of evaluation, the results show that the enjoyment and usefulness of the activity correlate
significantly with learning, learning transfer and business impact in this study, i.e. across all four
levels of Kirkpatrick’s (1959/60) model of training evaluation, supporting the notion that higher
ratings of participant reaction do lead to greater learning, transfer and results. However, whilst
the correlations are significant, they are not strong, and other factors have been shown to have an
effect, including team work and participant age, gender and cultural heritage. Important
considerations of personal learning style preferences and personality type have shown
inconclusive results and cannot be deemed either to support the theories in the literature or to
explain the observed results.
Reaction and Learning level evaluation studies are relatively common in business
education and in the evaluation of business simulations. This study is more comprehensive: it
evaluated across all four levels of Kirkpatrick’s model and has clearly shown differences in the
change in demonstrated managerial competencies and in performance rating. The principal aim
of this research was to understand whether computer-based simulations or games are an effective
way to develop managerial competencies. By
comparing three strategic management training programmes using a simulation, a game or case
studies, it has been found that both simulations and games are effective in enhancing the
development of managerial competencies. This research has been conducted in the real business
training world; it shows that the results of previous studies in education are reflected in this
environment, and it goes further by providing empirical evidence that simulations and games are
effective in enhancing learning, transfer and business impact. The realities of conducting research
in the business environment have limited the scope of this research, and it is recommended that
future research should include other factors believed to shape learning, in particular motivation
to learn and motivation to transfer, in a more qualitative way.
This research is an important contribution to the ongoing debate on the efficacy of
business simulations and games as a pedagogy and a learning medium. Using a robust
methodology that recognises real-world issues, the research provides empirical evidence in
support of the use of business simulations and games to develop managers. By evaluating the
programmes across all four of Kirkpatrick’s levels, the study provides a holistic perspective with
strong indications that there are causal links between levels, but it also highlights that other
factors are important. In particular, the role of motivation to learn and to transfer learning is
highlighted as a potentially critical factor to consider.
There is no evidence in this study to support or contradict an effect of an individual’s
learning style preference or personality type on the demonstrated learning or behaviour change
linked to a particular intervention method, but there are aspects of an individual’s demographics
that are noteworthy. The evidence in this study suggests that female managers may benefit more
than their male counterparts from a simulation-based training programme; it is also noted that
this may be peculiar to the more patriarchal societies in which the research was conducted, but it
may be worth further investigation.
There is also evidence that Asian managers may have found the simulation-based
training more beneficial than their western counterparts in developing particular competencies.
The literature on the concept of ‘face’ provides some insight into why this may be and, in short,
suggests that computer-based simulation training may provide a genuinely safe environment in
which fear of ‘loss of face’ has no place, allowing people to experiment freely as they learn.
Several authors have criticised previous simulation research that compares
simulations with other, more traditional, methods because the objectives and purpose may
be different for each method. This is not the situation with this research: the comparison
with case study groups allows this researcher to compare against a standard, accepted base
that provides a useful benchmark. In doing so, this research shows that using business
simulations or games in training is at least as effective as using case studies, and almost
certainly more effective. Perhaps efforts to compare them with other methodologies can
now be redirected to better understand how to design and use them to greatest effect.

8.2 Personal learning reflection


When I started my DBA journey, I considered that my background, competencies
and continuous quest for personal development meant that I had the right blend of relevant
skills, practical experience, knowledge and motivation to bring together my twin passions:
the development of people and the use of technology to enhance learning. I chose to
undertake the DBA at Henley as a vehicle for pulling these strands together and
consolidating them by establishing the effectiveness of using simulations to develop
managers during training events.
Increasingly, my consultancy work has been driven by a move towards strategic
training and development, and price-sensitive services. This has required me to compete
with many other management trainers, business schools and management consultancies. I
have had to originate, design, develop, manage and deliver practical, value-adding and
easily understood people interventions that managers, senior managers and directors can
own, use and promote within their organisations. I hope to extend myself further, utilising
the DBA in parallel with my business role to obtain an international market niche within
the management development arena.
Discussions with fellow Henley DBA and other doctoral researchers made me
realise, sometimes late in the day, that I could have chosen a much easier route than the
one on which I embarked. Notwithstanding this, the changes in my life and career during
the period have often caused me to reconsider the route I was taking. On occasions, it
became evident that the hoops I had created to jump through were too stretching and that I
was unlikely to achieve the targets unless I could gain some control over my weakest
personal competency cluster, organising and planning. Strongly linked to this was
prioritisation: my DBA studies often took a back seat, usually to work commitments, and I
was unable to dedicate sufficient blocks of time to write. I discussed the problem with
friends and colleagues who were either still in the process of completing their own
doctorates or had successfully completed them.
After the first four months, I was finding it impossible to keep up the initial pace of
study; my work commitments increased, and I was finding it difficult to identify an
appropriate route and academic hook that would give my research structure and
achievability. I found myself reading seemingly vast amounts of literature just trying to
find out what I needed to do and how others had gone about it, but I wasn’t finding
anything in the specific area of my interest. Researchers who had written about
simulations either concentrated on the technology aspect or were discussing simulations
of a very different nature. I visited the Singapore Airlines flight simulator training centre
to see if I could connect the idea of learning to fly with learning to manage a business, but
it seemed to me similar yet too far removed, and it was tempting me further down a
technology route that I was determined was not going to be my focus. In the first year I
was becoming downbeat, and on one of my very infrequent visits to the UK and Henley
for a Research Techniques workshop I at last began to share my frustrations with my
mentor and fellow DBA cohort. The good news was that I was not alone in my
frustrations; in fact, a couple of my fellows made me feel that I had actually been making
positive progress. The bad news was that I still hadn’t found the hook I needed, until a
casual drink in the bar with the person who was eventually (after another beer) persuaded
to be my supervisor. The amazing thing to me was that he simply mentioned a name: ‘you
might take a look at John Burgoyne’s work on evaluation’. What evaluation had to do
with simulations was, it transpired, for me to find out, and reviewing this led me to
management learning, which in turn led to experiential learning theory and developing
competencies, and then to a new seeming dead-end.
I recall being warned in the opening programme of the DBA that anyone
attempting to research a new field of study should stop and rethink, because it wasn’t
going to be achievable. The new brick wall was an apparent complete lack of research on
the evaluation of simulations outside of the US Armed Forces, NASA and a couple of
interesting studies in Cambridge buried deep in the world of educational and cognitive
psychology. The USAF reports bore promise, but access to more than tantalising glimpses
was strictly controlled. So should I postpone the DBA, study psychology and come back
later, or was there another route? I was convinced that somebody, somewhere was
researching simulations, and then I met a fellow trainer for lunch. She knew my interest
and passed me a book, ‘The Guide to Business Gaming and Experiential Learning’,
published by the Association for Business Simulation and Experiential Learning. Not only
was someone doing research in this field, there was an Association of people doing it;
they just happened to be rather keen on referring to ‘Gaming’ rather than ‘Simulations’
and did not tend to get published in the more ‘respected’ academic journals. I joined the
association and received a CD-ROM complete with some 30 years of research papers. For
me, this was the researcher’s birthday and Christmas rolled into one.
It was another year before I went to the US ABSEL conference and met these
similarly interested academics and researchers, and I was able to make substantial
progress in understanding the pitfalls and issues that they had faced and that I would
contend with.
Following this, my Research Proposal was accepted and, forewarned that I really
did realise just how much I was undertaking, Malcolm Higgs formally agreed to be my
first supervisor after a half pint of Tiger; I’ve often wondered what he’d do for a pint. My
literature review and proposal had guided my efforts, and now all that remained was to
line up as many clients as possible to agree to the training and the associated research.
In a highly competitive market and within economies that were still officially in
recession, this proved to be traumatic both professionally and personally. I had to retain as
much academic rigour as possible, yet I was forced to compromise somewhat in order to
carry out the research in the business world, with all the associated pressures of constant
change and external forces impacting on the businesses. On a professional and economic
level, I was being forced to cut prices and potentially undermine the long-term
sustainability of my business. Some clients were only willing to participate in the research
if the training was free, but I felt that this compromised my ethics and they were not
included, nor are they still clients; something had to give.
Once I was fully into the fieldwork, the difficulties lay mostly in collecting post-test
data: in spite of regular automated emails asking people to complete their MCQ, or to get
their boss to do so, the vast majority needed a phone call and a personal plea for help.
Most of the participants were great in completing all the questionnaires themselves, but
getting their boss to do so was often exasperating as the time pressure to ensure
everything was done within a small window took hold. Nevertheless, the fieldwork was
completed within the time frame; a few data subjects had either moved or changed jobs,
but sufficient numbers had completed everything.
I wrote my second working paper before all the data had been collected and
submitted it for the ABSEL 2005 conference, to be held in Florida the following March. It
was accepted, and I received some excellent feedback from peers on how to improve it. I
had been unable to get over to the UK for the Research Colloquia at Henley (the timing
was always inconvenient) and I hadn’t really had feedback on what I was producing as a
result of my efforts, so the feedback from ABSEL members was welcomed. OK, that’s not
quite how I felt at the time, as it made me realise just how little I had really done. It was
only through pulling everything together that I noticed gaping holes in my original paper
and, therefore, in my proposal.
The last part, at this point, is the writing up of the thesis. Taking the feedback I had
received into account, I wrote what I thought was a pretty decent first full draft. Both
Malcolm and Vic were fabulously swift in their feedback, suggesting that I had the
main components in place and giving me great direction on what else I needed to do. Once
again, I discovered that there was a lot left to do; it had seemed initially that it would be
pretty straightforward but, thanks to their feedback, I found that actually there were still
holes in the literature, the method, the results, the conclusions, oh, and the introduction…
in short, everywhere.
Not to be disheartened, I have, I believe, plugged the gaps. I have come to realise
that when I write there is an awful lot that I assume and usually consider unnecessary to
include, and that when you take out the assumption and try to support it, sometimes you
know it is anecdotal and other times you have read it somewhere. Filing and organisation
come to the fore for those that have them; for me, it means a re-search through piles of
paper, books and hundreds of PDF files.
So, what have I learned that I would pass on to others? Like the research above,
motivation is key; sometimes you have to dig very deep inside and re-find the purpose.
Then you need friends: talk to peers and colleagues and share your research as much as
you can. Someone will provide a snippet of an idea, or a whole lot, and the way ahead
becomes clearer; it is highly likely that someone else, somewhere, has done something
very similar, and the Internet makes it much easier to find them. Lastly, you need support
from your partner (and family). Every weekend, many mornings and many long nights
are spent with your head in books, reading journals and gathering data (and you develop
CTS as you finish), and having someone who encourages, cajoles and totally supports
your study just makes it all possible.
Bibliography

ABRAMSON, N. R., KEATING, R. J. & LANE, H. W. (1996) Cross-national Cognitive
Process Differences: A Comparison of Canadian and Japanese Managers.
Management International Review, 36, 123-148.
ADLER, N. J. (1986) International Dimensions of Organisational Behaviour, Boston,
Mass., Kent.
ALBERTSON, D. S. (1995) Evaluating Experiential Training: Case Study and
Recommendations. Developments in Business Simulation & Experiential
Exercises, 22, 166-171.
ALDRICH, C. (2002) A Field Guide to Educational Simulations, Alexandria, VA.,
American Society for Training & Development.
ALDRICH, C. (2005) Learning by Doing: The Essential Guide to Simulations, Computer
Games, and Pedagogy in e-Learning and other Educational Experiences, San
Francisco, John Wiley & Sons, Inc.
ALESSI, S. M. (1988) Fidelity in the Design of Instructional Simulations. Journal of
Computer-Based Instruction, 15, 40-47.
ALLIGER, G. M. & JANAK, E. A. (1989) Kirkpatrick's Levels of Training Criteria:
Thirty Years Later. Personnel Psychology, 42, 331-342.
ALLIGER, G. M., TANNENBAUM, S., BENNETT, W., TRAVER, H. & SHOTLAND, A.
(1997) A Meta-Analysis of the Relations among Training Criteria. Personnel
Psychology, 50, 341-358.
ALLINSON, C. W. & HAYES, J. (2000) Cross-national Differences in Cognitive Style:
Implications for Management. International Journal of Human Resource
Management, 11, 161-170.
AMSTUTZ, P., HEDGES, R. & OTTO, K. (2004) Creating Interreality: The Virtual
Object System.
ANDERSON, J. R. (1982) Acquisition of Cognitive Skills. Psychological Review, 89,
369-406.
ANDERSON, P. H. & LAWTON, L. (1993) Dominant Personality Types and Total
Enterprise Simulation Performance: A Follow-up Study. Developments in Business
Simulation & Experiential Learning, 20, 1-3.
ANDERSON, P. H. & LAWTON, L. (1997a) Demonstrating the Learning Effectiveness
of Simulations: Where We are and Where We Need to Go. Developments in
Business Simulation & Experiential Exercises.
ANDERSON, P. H. & LAWTON, L. (1997b) Demonstrating the Learning Effectiveness
of Simulations: Where We are and Where We Need to Go. Developments in
Business Simulation & Experiential Exercises, 24, 68-73.
ANTON, J. J. (1992) Seniority and Efficiency. Scandinavian Journal of Economics, 94,
425-442.
ARGYRIS, C. (1980) Some Limitations of the Case Method: Experiences in a
Management Development Program. Academy of Management Review, 5, 291-
298.
ARGYRIS, C. (Ed.) (1999) Tacit Knowledge and Management, Mahwah, NJ, Erlbaum.
ARGYRIS, C. & SCHON, D. (1978) Organizational Learning: A Theory of Action
Perspective, Reading, MA, Addison-Wesley.
ARMSTRONG, S. J. & MAHMUD, A. (2004) The Influence of Learning Styles on the
Creation of Actionable Knowledge in Public Sector Managers. Academy of
Management Best Conference Paper 2004. New Orleans, USA, Academy of
Management.
ATHANASOU, J. A. (1998) A Framework for Evaluating the Effectiveness of
Technology-assisted Learning. Industrial and Commercial Training, 30, 96-103.
ATHANASSIOU, N., MCNETT, J. M. & HARVEY, C. (2003) Critical Thinking in the
Management Classroom: Bloom's Taxonomy as a Learning Tool. Journal of
Management Education, 27, 533-555.
AUSUBEL, D. (1978) In Defence of Advance Organizers: A Reply to the Critics. Review
of Educational Research, 48, 251-257.
AWONIYI, G. M., GRIEGO, O. V. & MORGAN, G. A. (2002) Person-environment Fit
and Transfer of Training. International Journal of Training and Development, 6.
BABB, E. M., LESLIE, M. A. & VAN SLYKE, M. D. (1966) The Potential of Business-
Gaming Methods in Research. The Journal of Business, 39, 465-472.
BAILEY, J. & WITMER, B. (1994) Proceedings of Human Factors & Ergonomics
Society.
BAKER, J. C., MAPES, J., NEW, C. C. & SZWEJCZEWSKI, M. (1997) A Hierarchical
Model of Business Competence. Integrated Manufacturing Systems, 8, 265-272.
BAL, S. (1995) The Interactive Manager, London, Kogan Page.
BARHAM, K. & OATES, D. (1991) The International Manager, London, London
Business Books.
BARHAM, K. & WILLS, S. (1992) Management Across Frontiers: Identifying the
Competences of Successful International Managers. Ashridge, Ashridge Research
Group and the Foundation for Management Education.
BARNETT, T. (1984) Evaluations of Simulations and Games: A Clarification.
Simulation/Games for Learning, 14, 37-44.
BARR, R. B. & TAGG, J. (1995) From Teaching to Learning: A New Paradigm for
Undergraduate Education. Change, 13-25.
BEARD, C. & WILSON, J. P. (2001) The Power of Experiential Learning: A Handbook
for Trainers and Educators, London, Kogan Page.
BEDINGHAM, K. (1997) Proving the Effectiveness of Training. Industrial and
Commercial Training, 29, 88-91.
BEE, R. & BEE, F. (1994) Training Needs Analysis and Evaluation, London, Institute of
Personnel and Development.
BEEHR, T. A., IVANITSKAYA, L., HANSEN, C. P., EROFEEV, D. &
GUDANOWSKI, D. M. (2001) Evaluation of 360 degree feedback ratings:
relationships with each other and with performance. Journal of Organizational
Behavior, 22, 775-789.
BELBIN, R. M. (1981) Management Teams, Why They Succeed or Fail, Oxford,
Heinemann.
BELBIN, R. M., ASTON, R. R. & MOTTRAM, R. D. (1976) Building Effective
Management Teams. Journal of General Management, 3, 23-29.
BERTSCHE, D., CRAWFORD, C. & MACADAM, S. E. (1996) Is Simulation Better
Than Experience? The McKinsey Quarterly, 1, 15-22.
BHATTACHARAYA, A. K. & GIBBONS, A. M. (1996) Strategy Formulation:
Focussing on Core Competences and Processes. Business Change and Re-
engineering, 3, 47-55.
BINSTED, D. (1988) The Key to the Use of Interactive Video for Management Education.
IN MATHIAS, H., RUSHBY, N. & BUDGETT, R. (Eds.) Aspects of Educational
Technology Vol XXI: Designing New Systems and Technologies for Learning.
BIRCHALL, D., TAN, J. H. & GAY, K. (1996) Competences for International
Management. Singapore Management Review, 18, 1-13.
BLIGH, D. (1971) What's the Use of Lectures? Harmondsworth, Penguin.
BLOOM, B. S. (1956) Taxonomy of Educational Objectives, New York, David McKay
Company.
BOEHLE, S. (2005) Simulations: The Next Generation of E-learning. trainingmag.com.
VNU.
BOERLIJST, G. & MEIJBOOM, G. (1989) Matching the Individual and the Organisation.
IN HERRIOT, P. (Ed.) Assessment and Selection in Organisations. London, John
Wiley and Sons.
BOYATZIS, R. E. (1982) The Competent Manager: A Model for Effective Performance,
New York, NY, John Wiley.
BRENENSTUHL, D. C. & CATALANELLO, R. F. (1977) An Analysis of the Impact
upon the Learning Effectiveness of Traditional Instruction, Simulation, Gaming
and Experiential Teaching Methodologies: An Experimental Design. Computer
Simulation and Learning Theory, 3, 463-473.
BRENENSTUHL, D. C. & CATALANELLO, R. F. (1979) The Impact of Three
Pedagogical Techniques on Learning. Journal of Experiential Learning and
Simulation, 1, 211-225.
BRINKERHOFF, R. O. (1987) Achieving Results from Training, San Francisco, Jossey-
Bass.
BRINKERHOFF, R. O. (1988) An Integrated Evaluation Model for HRD. Training &
Development, 42, 66-68.
BRUNER, J. (1973) Going Beyond the Information Given, New York, Norton.
BURGOYNE, J. (1988) Management Development for the Individual and the
Organisation. Personnel Management, 40-44.
BURGOYNE, J. (1989) Creating the Managerial Portfolio: Building on Competency
Approaches to Management Development. Management Education and
Development, 20, 56-61.
BURGOYNE, J. (1993) The Competence Movement: Issues, Stakeholders and Prospects.
Personnel Management, 22, 6-13.
BURGOYNE, J. (Ed.) (2002) Learning Theory and the Construction of Self: What kinds
of people do we create through the theories of learning that we apply to their
development? John Wiley & Sons Ltd.
BURGOYNE, J. & COOPER, C. L. (1975) Evaluation Methodology. Journal of
Occupational Psychology, 48, 53-62.
BURGOYNE, J. & MUMFORD, A. (2002) Learning from the Case Method. European
Case Clearing House.
BURGOYNE, J. & REYNOLDS, M. (1997a) Introduction. IN BURGOYNE, J. &
REYNOLDS, M. (Eds.) Management Learning; Integrating Perspectives in
Theory and Practice. London, Sage Publications Ltd.
BURGOYNE, J. & REYNOLDS, M. (Eds.) (1997b) Management Learning: Integrating
Perspectives in Theory and Practice, London, Sage Publications.
BURGOYNE, J. & SINGH, R. (1977) Evaluation of Training and Education: Micro and
Macro Perspectives. Journal of European Industrial Training, 1, 17-21.
BURGOYNE, J. & STEWART, R. (1977) Implicit Learning Theories as Determinants of
the Effects on Management Development. Personnel Review, 6, 5-14.
BURNS, A. V., GENTRY, J. W. & WOLFE, J. (Eds.) (1990) A Cornucopia of
Considerations in Evaluating Effectiveness of Experiential Pedagogies, London,
Kogan Page.
BURR, V. (2003) Social Constructionism, London, Routledge.
BUTLER, R. J., MARKULIS, P. M. & STRANG, D. R. (1988) Where Are We? An
Analysis of the Methods and Focus of the Research on Simulation Gaming.
Simulation & Games, 19, 3-26.

BYRNE, E. T. & WOLFE, D. E. (1974) The Design, Conduct and Evaluation of a
Computerized Management Game as a Form of Experiential Learning.
Simulations, Gaming and Experiential Learning Techniques, 1, 22-30.
CAMPBELL, D. T. & STANLEY, J. C. (1966) Experimental and Quasi-Experimental
Design for Research, Chicago, Rand-McNally.
CAMPBELL, J. P., DUNNETTE, M. D., LAWLER, E. E. & WEICK, K. E. (1970)
Managerial Behaviour, Performance and Effectiveness, Maidenhead, McGraw-
Hill.
CANNING, R. (1990) The Quest for Competence. Industrial and Commercial Training,
122, 12-16.
CANNON, H. M. & BURNS, A. V. (1999) A Framework for Assessing the Competencies
Reflected in Simulation Performance. Developments in Business Simulation &
Experiential Learning, 26.
CARMINES, E. G. & ZELLER, R. A. (1990) Reliability and Validity Assessment. Sage
University Paper.
CARNEVALE, A. P. & SCHULZ, E. R. (1990) Evaluation Framework, Design, and
Reports. Training and Development Journal, 44, 15-23.
CELSIM (2003) Strategy at the Edge. Singapore, Corporate Edge Ltd.
CERTO, S. C. (1976) The Experiential Exercise Situation: A Comment on Instructional
Role and Pedagogy Evaluation. Academy of Management Review, 1, 113-116.
CHANG, J. (2003) Strategic Management: An Evaluation of the Use of Three Learning
Methods in Hong Kong. Developments in Business Simulation and Experiential
Learning, 30, 146-151.
CHAPMAN, K. J. & SORGE, C. L. (1999) Can a Simulation Help Achieve Course
Objectives? An Exploratory Study Investigating Differences Among Instructional
Tools. Journal of Education for Business, 225-230.
CHEETHAM, G. & CHIVERS, G. (1996) Towards a Holistic Model of Professional
Competence. Journal of European Industrial Training, 20, 20-30.
CHMIELEWSKI, M. A. (1998) Computer Anxiety and Learner Characteristics: Their
Role in the Participation and Transfer of Internet Training. Dissertation Abstracts
International Section A: Humanities and Social Sciences, 59.
CHONG, E. (1997) A Comparative Study of the Managerial Competences and
Performance of British Managers and Singaporean Public Sector Managers.
Henley-on-Thames, Henley Management College.
CLARK, C. S., DOBBINS, G. H. & LADD, R. T. (1993) Exploratory Field Study of
Training Motivation. Group & Organization Management, 18, 292-307.
CLARK, R. & CRAIG, T. (1992) Research and Theory on Multi-Media Learning Effects.
IN GIARDINA, M. (Ed.) Interactive Learning Environments; Human Factors and
Technical Consideration on Design Issues. Berlin, Springer-Verlag.
CLEVELAND, G., SCHROEDER, R. G. & ANDERSON, J. C. (1989) A Theory of
Production Competence. Decision Sciences, 20, 655-68.
COCKERILL, T., HUNT, J. & SCHRODER, H. M. (1995) Managerial Competencies:
Fact or Fiction? Business Strategy Review, 6, 1-12.
COLLIN, A. (1989) Managers' Competence: Rhetoric, Reality and Research. Personnel
Review, 18, 20-25.
COLLINS, D. B. (2004) Performance-Level Evaluation Methods Used in Management
Development Studies from 1986-2000. Human Resource Development Quarterly,
15, 217.
CONSTABLE, C. & MCCORMICK, R. (1987) The Making of British Managers. A
Report for the BIM and CBI into Management Training, Education and
Development. London, British Institute of Management and Confederation of
British Industry.
COOK, J. E. (1999) Assessing Effectiveness of an Experiential Oriented Course Over
Time. Developments in Business Simulation and Experiential Learning, 26, 160-
164.
COULSON-THOMAS, C. (1992) Creating the Global Company, Maidenhead, McGraw-
Hill.
CROMWELL, S. E. & KOLB, J. A. (2002) The Effect of Organisational Support and
Peer Support on Transfer of Training. IN EGAN, T. & LYNHAM, S. A. (Eds.)
2002 Academy of Human Resource Development Annual Conference. Bowling
Green, OH, Academy of Human Resource Development.
DAVIES, P. (2003) Simulation: Bringing e-learning to a New Level. ComputerUser.com.
DE VOS, G. A. (1985) Dimensions of the self in Japanese culture. IN MARSELLA, G.,
DE VOS, G. A. & HSU, F. L. K. (Eds.) Culture and self: Asian and Western
perspectives. New York, Tavistock.
DEDE, C. (1996) Emerging Technologies in Distance Education for Business. Journal of
Education for Business, 71, 197-205.
DEDE, C. (1997) The Evolution of Constructivist Learning Environments. Educational
Technology, 52, 54-60.
DEWEY, J. (1938) Experience and Education, New York, MacMillan.
DIXON, N. (1990) Evaluation: A Tool for Improving HRD Quality, San Diego, CA.,
University Association.
DREW, S. A. W. & DAVIDSON, A. (1993) Simulation-based Leadership Development
and Team Learning. The Journal of Management Development, 12, 39-52.
DRUCKMAN, D. (1995) The Educational Effectiveness of Interactive Games. IN
CROOKALL, D. & ARAI, K. (Eds.) Simulation and Gaming Across Disciplines
and Cultures: ISAGA at a Watershed. Thousand Oaks, CA., Sage.
DUBOIS, D. & ROTHWELL, W. J. (2004) Competency-Based or a Traditional Approach
to Training? T&D, 58, 46-58.
DUFF, A. (2000) Learning Styles of UK Higher Education Students. Bristol Business
School Teaching and Research Review.
DUFF, A. (2004) Understanding Academic Performance and Progression of First-year
Accounting and Business Economics Undergraduates: The Role of Approaches to
Learning and Prior Academic Achievement. Accounting Education, 13, 409.
DULEWICZ, V. (1992) Assessment Centres as the Route to Competence. Personnel
Management.
DULEWICZ, V. (1995) A Validation of Belbin's Team Roles from 16PF and OPQ using
Bosses' Ratings of Competence. Occupational and Organisational Psychology, 68,
81-100.
DULEWICZ, V. & FLETCHER, C. A. (1982) The Relationship Between Previous
Experience, Intelligence and Background Characteristics of Participants and their
Performance in an Assessment Centre. Journal of Occupational Psychology, 55,
197-207.
DULEWICZ, V. & HERBERT, P. (1992) Personality, Competences, Leadership Style and
Managerial Effectiveness. Henley-on-Thames, Henley Management College.
DULEWICZ, V. & HERBERT, P. (1996) General Management Competencies and
Personality: A 7-year follow-up study. Henley-on-Thames, Henley Management
College.
EASTERBY-SMITH, M. (1980) The Evaluation of Management and Development: an
Overview. Personnel Review, 10, 28-36.
EASTERBY-SMITH, M. (1986) Models and 'Schools of Thought' in Evaluation.
EASTERBY-SMITH, M. (1994) Evaluating Management Development, Training and
Education, Aldershot, Gower.

EASTERBY-SMITH, M. & ASHTON, D. J. L. (1975) Using Repertory Grid Technique
to Evaluate Management Training. Personnel Review, 4, 15-21.
EASTERBY-SMITH, M., THORPE, R. & LOWE, A. (1991) Management Research: An
Introduction. London, Sage.
EASTERBY-SMITH, M., THORPE, R. & LOWE, A. (2002) Management Research: An
Introduction, London, Sage.
ELASHMAWI, F. & HARRIS, P. R. (1993) Multicultural Management: New Skills for
Global Success, Houston, Gulf Publishing.
ESKEW, R. K. & FALEY, R. H. (1988) Some Determinants of Student Performance in
the First College-Level Financial Accounting Course. The Accounting Review, 63,
137-148.
FARIA, A. J. & WELLINGTON, W. J. (2004) A Survey of Simulation Game Users,
Former-users and Never-users. Simulation & Gaming, 35, 178-207.
FEINSTEIN, A. H. (2004) A Model for Evaluating Online Instruction. Developments in
Business Simulation and Experiential Learning, 31, 32-39.
FEINSTEIN, A. H. & CANNON, H. M. (2001) Fidelity, verifiability, and validity of
simulation: constructs for evaluation. Working Paper. Detroit, Wayne State
University.
FEINSTEIN, A. H. & CANNON, H. M. (2002) Constructs of Simulation Evaluation.
Simulation and Gaming, 33, 425-440.
FEINSTEIN, A. H., MANN, S. & CORSUN, D. L. (2002) Charting the experiential
territory: clarifying definitions and uses of computer simulation, games, and role
play. The Journal of Management Development, 21, 732-744.
FILSTEAD, W. J. (1979) Qualitative Methods: A Needed Perspective in Evaluation
Research. IN COOK, T. D. & REICHARDT, C. S. (Eds.) Qualitative and
Quantitative Methods in Evaluation Research. Beverly Hills, Sage.
FINN, R. (1993) A synthesis of current research on management competencies. Henley
Working Paper Series. Henley-on-Thames, Henley Management College.
FLETCHER, C. A. & DULEWICZ, V. (1984) An Empirical Study of a U.K. based
Assessment Centre. Journal of Management Studies, 21, 83-97.
FORD, J. K. & WEISSBEIN, D. A. (1997) Transfer of Training: An Update Review and
Analysis. Performance Improvement Quarterly, 10, 22-41.
FOX, S. (1998) Situated Learning Theory Versus Traditional Cognitive Learning Theory:
Why Management Education Should Not Ignore Management Learning. Systems
Practice, 10, 727-748.
FRIPP, J. (1993) Learning Through Simulations: A Guide to the Design and Use of
Simulations in Business and Education, New York, McGraw-Hill.
GAGE, N. L. & BERLINER, D. C. (1998) Educational Psychology, Boston, Houghton
Mifflin Company.
GAGNE, R. M. (1984) Learning outcomes and their effects: Useful categories of human
performance. American Psychologist, 39, 377-385.
GARRIS, R., AHLERS, R. & DRISKELL, J. E. (2002) Games, motivation, and learning:
a research and practice model. Simulation & Gaming, 33, 441-467.
GAY, K. (1995) Competences for International Management. Henley Management
College, Brunel University.
GEBER, B. (1994) Let the games begin. Training, 31, 10-15.
GIBBS, G. (1988) Learning by Doing: a guide to teaching and learning methods, London,
Kogan Page.
GLASER, B. G. (1992) Basics of Grounded Theory Analysis: Emergence versus Forcing,
Mill Valley, CA, Sociology Press.
GOPHER, D., WEIL, M. & BAREKET, T. (1994) Transfer of skill from a computer game
trainer to flight. Human Factors, 36, 387-405.
GOPINATH, C. & SAWYER, J. E. (1999) Exploring the learning from an enterprise
simulation. Journal of Management Development, 18, 477-489.
GOSEN, J. & WASHBUSH, J. (2004) A review of scholarship on assessing experiential
learning effectiveness. Simulation & Gaming, 35, 270-293.
GOSENPUD, J. (Ed.) (1990) Evaluation of Experiential Learning, London, Kogan Page.
GOSENPUD, J. & WASHBUSH, J. (1992) The influence of Myers-Briggs type and group
dynamics factors on total enterprise performance. Developments in Business
Simulation & Experiential Learning, 19, 64-67.
GREDLER, M. E. (1996) Educational Games and Simulations: A Technology in search of
a (Research) Paradigm. IN JONASSEN, D. H. (Ed.) Handbook of Research for
Educational Communications and Technology. New York, Simon & Schuster
Macmillan.
GUBA, E. G. & LINCOLN, Y. S. (1989) Fourth Generation Evaluation, London, Sage.
HAIR, J. F., ANDERSON, R. E., TATHAM, R. L. & BLACK, W. C. (1998) Multivariate
Data Analysis, Upper Saddle River, NJ, Prentice Hall.
HALL, D. T. (1986) Career Development in Organisations, San Francisco, CA, Jossey-
Bass.
HAMBLIN, A. C. (1974) Evaluation and control of training, London, McGraw Hill.
HANNAFIN, M. J. (1992) Emerging technologies, ISD, and learning environments:
critical perspectives. Educational Technology Research and Development, 40, 49-
63.
HANNAFIN, M. J., HANNAFIN, K. M., HOOPER, S. R., RIEBER, L. P. & KINI, A. S.
(1996) Research on and research with emerging technologies. IN JONASSEN, D.
H. (Ed.) Handbook of Research for Educational Communications and Technology.
New York, Simon and Schuster.
HARDY, C. (1996) Understanding Power: Bringing about Strategic Change. British
Journal of Management, 7.
HASEMAN, W. D., NUIPOLATOGLU, V. & RAMAMURTHY, K. (2002) An Empirical
Investigation of the Influences of the Degree of Interactivity on User-outcomes in a
Multimedia Environment. Information Resources Management Journal, 15, 31-48.
HAY/MCBER (1997) MCQ Profile and Interpretive Notes, TRG Hay/McBer.
HAYES, K. & RICHARDSON, J. T. E. (1995) Gender, subject and context as
Determinants of Approaches to Studying in Higher Education. Studies in Higher
Education, 20, 19-31.
HAYS, R. T. & SINGER, M. J. (1989) Simulation fidelity in training systems design:
Bridging the gap between reality and training., New York, Springer-Verlag.
HENKE, H. (2001) Learning theory: applying Kolb's learning style inventory with
computer based training. Project Paper.
HEROLD, D. M., DAVIS, W., FEDOR, D. B. & PARSONS, C. K. (2002) Dispositional
Influences on Transfer of Learning in Multistage Training Programs. Personnel
Journal, 55, 851-869.
HESSELING, P. (1966) Strategy of Evaluation Research in the Field of Supervisory and
Management Training, Assen, Van Gorcum.
HIGGS, M. (1996) A Comparison of Myers Briggs Type Indicator Profiles and Belbin
Team Roles. Henley-on-Thames, Henley Management College.
HIGGS, M. (1999) Teams and Team Working: What do we know? (A Literature Review).
Henley Working Paper Series. Henley-on-Thames, Henley Management College.
HIGGS, M., PLEWNIA, U. & PLOCH, J. (2003) Influence of Team Composition on
Team Performance and Dependence on Task Complexity. Henley Working Paper
Series. Henley-on-Thames, Henley Management College.
HIGGS, M. & ROWLAND, D. (2001) Developing change leaders: Assessing the impact
of a development programme. Journal of Change Management, 2, 47-64.
HIRSH, W. (1989) Defining Management Skills. Brighton, University of Sussex, Institute
of Manpower Studies.
HO, D. Y. F. (1976) On the Concept of Face. American Journal of Sociology, 81, 867-
890.
HOBERMAN, S. & MAILICK, S. (1992) Experiential Management Development, New
York, Quorum.
HODGKINSON, G. P. & SADLER-SMITH, E. (2003) Complex or Unitary: A Critique
and Empirical Re-assessment of the Allinson-Hayes Cognitive Style Index.
Journal of Occupational and Organizational Psychology, 76, 243-269.
HOFSTEDE, G. (1980) Culture's consequences: international differences in work-related
values, Beverly Hills, Sage.
HOFSTEDE, G. (1991) Cultures and organisations: software of the mind, London,
McGraw-Hill.
HOLMAN, D., PAVLICA, K. & THORPE, R. (1997) Rethinking Kolb's theory of
experiential learning in management education: The Contribution of Social
Constructionism and Activity Theory. Management Learning, 28, 135-148.
HOLTON, E. F., III (1996) The Flawed Four-Level Evaluation Model. Human Resource
Development Quarterly, 7, 5-21.
HOLTON, E. F., III, BATES, R. & RUONA, W. E. A. (2000) Development and
Validation of a Generalized Learning Transfer Climate Questionnaire. Human
Resource Development Quarterly, 11, 333-360.
HOLTON, E. F., III, CHEN, H. & NAQUIN, S. S. (2001) An Examination of Learning
Transfer System Characteristics across Organisational Settings. IN ALIAGA, O.
(Ed.) Proceedings of the 2001 Academy of Human Resource Development
Conference. Baton Rouge, LA, Academy of Human Resource Development.
HONEY, P. & MUMFORD, A. (1982) The Manual of Learning Styles, Maidenhead, Peter
Honey.
HONEY, P. & MUMFORD, A. (1992) The Manual of Learning Styles, Maidenhead, Peter
Honey.
HOSKINS, S. L. & VAN HOOFF, J. C. (2005) Motivation and Ability: Which Students
Use Online Learning and What Influence Does it have on their Achievement?
British Journal of Educational Psychology, 36, 177-192.
HOULDSWORTH, E. (1994) Multimedia in Education: Theoretical Framework and
Methodological Choices. IN HIGGS, M. (Ed.) Henley Working Paper. Henley-on-
Thames, Henley Management College.
HOULDSWORTH, E. (2004) The Learning Manager. IN REES, D. & MCBAIN, R.
(Eds.) People Management: Challenges and Opportunities. Palgrave Macmillan.
HUCZYNSKI, A. A. (2001) Encyclopedia of Development Methods, Aldershot, Gower.
HUCZYNSKI, A. A. & LEWIS, J. W. (1980) An Empirical Study into the Learning
Transfer Process in Management Training. Journal of Management Studies, 17,
227-240.
HUTTON, J. (1988) The world of the international manager, Oxford, Phillip Allan.
HWANG, K. K. (1987) Face and Favor: The Chinese Power Game. American Journal of
Sociology, 92, 944-974.
HYMAN, R. T. (1978) Simulation Gaming for Values Education: The Prisoner's
Dilemma, New Brunswick, NJ, University Press of America.
IMPARTA (2003) Strategy CoPilot. London, Imparta Ltd.
IVANCEVICH, J. M. & MATTESON, M. T. (1996) Organisational Behaviour and
Management, London, Irwin.
JACOBS, R. (1989) Getting the measure of managerial competence. Personnel
Management, 21, 32-7.

JENKINS, D., SIMMONS, H. & WALKER, R. (1981) Thou Nature art my Goddess.
Naturalistic Enquiry in Educational Evaluation. Cambridge Journal of Education,
11, 169-89.
JENNINGS, D. (2000) Strategic Management: An Evaluation of the Use of Three
Learning Methods. ABSEL: Developments in Business Simulation & Experiential
Learning, 27, 20-27.
KAUFMAN, F. L. (1976) An empirical study of the usefulness of a computer-based
business game. Journal of Educational Data Processing, 13, 13-22.
KELNER, S. P. (2001) A Few Thoughts on Executive Competency Convergence. Center
for Quality of Management Journal, 10, 67-71.
KELNER, S. P. (2005) Email response to questions on technical data for MCQ. IN
KENWORTHY, J. M. (Ed.) Singapore.
KEMMIS, S., ATKIN, R. & WRIGHT, E. (1977) How Do Students Learn? Working
Papers on Computer Assisted Learning. Norwich, Centre for Applied Education in
Research, UEA.
KENWORTHY, J. M. & WONG, A. (2003) A Study of the Attributes of Managerial
Effectiveness in Singapore: Implications for a Competency model for Managers in
Singapore. Singapore, Corporate Edge White Paper.
KENWORTHY, J. M. & WONG, A. (2005) Developing Managerial Effectiveness:
Assessing and Comparing the Impact of Development Programmes using a
Management Simulation or a Management Game. IN LEDMAN, R. (Ed.) ABSEL
National Conference. Orlando, FL., ABSEL.
KEYS, J. B. (1977) The management of learning grid for management development.
Academy of Management Review, 2, 289-297.
KEYS, J. B., WELLS, R. A. & EDGE, A. G. (1994) The multinational management game:
a simuworld. The Journal of Management Development, 13, 26-37.
KEYS, J. B. & WOLFE, J. (1990) The role of management games and simulations in
education and research. Journal of Management, 16, 307-336.
KICKUL, G. (2001) Antecedents of work team performance in a business simulation:
personality and group interaction. Developments in Business Simulation &
Experiential Learning, 28, 128-136.
KIM, B., WILLIAMS, R. & DATTILO, J. (2002) Students' perception of interactive
learning modules. Journal of Research on Technology in Education, 34, 453-473.
KIM, J. S. & ARNOLD, P. (1992) Manufacturing competence and business performance:
a framework and empirical analysis. International Journal of Operations and
Production Management, 13, 4-25.
KIRKPATRICK, D. (1959/60) Techniques for evaluating training programs: Parts 1 to 4.
Journal of the American Society for Training and Development, November,
December, January and February.
KIRKPATRICK, D. L. (1994) Evaluating Training Programs: The Four Levels, San
Francisco, Berret-Koehler.
KIRKPATRICK, D. L. (1998a) Another Look at Evaluating Training Programs,
Alexandria, VA., American Society for Training and Development.
KIRKPATRICK, D. L. (1998b) Evaluating Training Programs: Evidence vs. Proof. IN
KIRKPATRICK, D. L. (Ed.) Another Look at Evaluating Training Programs.
Alexandria, VA, ASTD.
KIRTON, M. J. (1994) Adaptors and Innovators at Work. IN KIRTON, M. J. (Ed.)
Adaptors and Innovators: Styles of Creativity and Problem Solving. London,
Routledge.
KNOWLES, M. (1970) The Modern Practice of Adult Education, New York, Association
Press.
KNOWLES, M. (Ed.) (1996) Adult Learning, New York, McGraw-Hill.
KNOWLES, M. S., HOLTON, E. F. & SWANSON, R. A. (1998) The Adult Learner,
Houston TX, Butterworth Heinemann.
KOCH, J. V. & CEBULA, R. J. (1994) In Search of Excellent Management. Journal of
Management Studies, 31, 681-99.
KOLB, D. A. (1976) Management and the learning process. California Management
Review, 18, 21-31.
KOLB, D. A. (1981) Experiential learning theory and the learning style inventory: a reply
to Freedman and Stumpf. Academy of Management Review, 6, 289-296.
KOLB, D. A. (1984) Experiential Learning: Experience as a source of learning and
development, Englewood Cliffs, NJ, Prentice Hall.
KOLB, D. A. (1999) Learning Style Inventory, HayGroup.
KOLB, D. A., BOYATZIS, R. E. & MAINEMELIS, C. (2000) Experiential Learning
Theory: Previous Research and New Directions. IN STERNBERG, R. J. &
ZHANG, L. F. (Eds.) Perspectives on cognitive, learning, and thinking styles. NJ,
Lawrence Erlbaum.
KRAIGER, K., FORD, J. K. & SALAS, E. (1993) Application of cognitive, skill-based,
and affective theories of learning outcomes to new methods of training evaluation.
Journal of Applied Psychology, 78, 311-328.
KUNNATH, G. & SEDICK, R. (2001) Boo.com: The Path to Failure. ECCH - Graduate
School of Management, University of Western Australia.
LANE, D. C. (1995) On a Resurgence of Management Simulations and Games. The
Journal of the Operational Research Society, 46, 604-626.
LAVE, J. (1998) The Practice of Learning. IN CHAIKLIN, S. & LAVE, J. (Eds.)
Understanding Practice: Perspectives on Activity and Context. Cambridge,
Cambridge University Press.
LEE MEI CHING, W., CHIAN, K. Y. & TAN, Y. H. (2002) Managerial competencies in
a knowledge-based economy: Its implications for human resource development.
Singapore, Nanyang Technological University, STADA.
LEWIN, K. (1951) Field Theory in Social Sciences, New York, Harper and Row.
LIM, D. H. & JOHNSON, S. D. (2002) Trainee Perceptions of Factors that Influence
Learning Transfer. International Journal of Training and Development, 6, 36-48.
LINSTEAD, S. (1991) Developing management meta-competence: can distance learning
help? Journal of European Industrial Training, 6, 17-27.
LOO, R. (2002) A meta-analytic examination of Kolb's learning style preferences among
business majors. Journal of Education for Business, 77, 252-256.
LUNDY, J. (2003) E-learning simulation: putting knowledge to work. Gartner U.S.
Symposium/Itxpo.
LUNDY, J., LOGAN, D. & HARRIS, K. (2002) Simulation May be your E-Learning
'Killer Application'. Gartner.
LUTHANS, F., ROSENKRANTZ, S. A. & HENNESSEY, H. W. (1985) What do
successful managers really do? An observation study of managerial activities. The
Journal of Applied Behavioural Science, 21, 255-70.
MABEY, C., TOPHAM, P. J. & ROLAND KAYE, G. (1998) Computer-based
Courseware: a Comparative Review of the Learner's Experience. Accounting
Education, 7, 51-65.
MACDONALD-ROSS, M. (1973) Behavioural Objectives: A Critical Review.
Instructional Science, 2, 1-52.
MACDONALD, B., ATKIN, R., JENKINS, D. & KEMMIS, S. (1977) Computer Assisted
Learning: its Educational Potential. IN HOOPER, R. (Ed.) The National
Development Programme in Computer Assisted Learning. The Final Report of the
Director. London, Council for Educational Technology.
MAINEMELIS, C., BOYATZIS, R. E. & KOLB, D. A. (2002) Learning Styles and
Adaptive Flexibility: Testing Experiential Learning Theory. Management
Learning, 33, 5-33.
MANTYLA, K. & WOODS, J. A. (2001) Evaluating Program Success. IN MANTYLA,
K. & WOODS, J. A. (Eds.) The 2001/2002 ASTD Distance Learning Yearbook.
New York, McGraw-Hill.
MARGERISON, C. & MCCANN, D. (1985) Team Management Index and Description of
Team Roles, Bradford, MCB University Press.
MARSICK, V. J. & WATKINS, K. E. (1990) Informal and Incidental Learning in the
Workplace, London, Routledge & Kegan Paul.
MASLOW, A. H. (1954) Motivation and Personality, New York, Harper and Row.
MAY, T. (1993) Social Research: Issues, Methods and Process, Buckingham, Open
University Press.
MCAULEY, J. (1994) Exploring Issues in Culture and Competence. Human Relations, 47,
417-30.
MCBER (1997) Managerial Competency Questionnaire, London, TRG Hay/McBer.
MCCLELLAND, D. C. (1973) Testing for Competence Rather Than Intelligence.
American Psychologist, 28, 1-14.
MCKENNA, S. (1996) Evaluating IMM: Issues for Researchers. Charles Sturt
University.
MCKENNEY, J. L. (1962) An Evaluation of a Business Game in an MBA Curriculum.
Journal of Business, 35, 278-286.
MCKENNEY, J. L. (1963) An Evaluation of a Decision Simulation as a Learning
Environment. Management Technology, 3, 56-67.
MCKENNEY, J. L. (1967) Simulation Gaming for Management Development, Boston,
MA., The Division of Business, Harvard College.
MCLAUGHLIN, H. & THORPE, R. (1993) Action Learning - A Paradigm in Emergence:
the Problems Facing a Challenge to Traditional Management Education and
Development. British Journal of Management, 4, 19-27.
MILES, R. H. & RANDOLPH, W. A. (1985) The Organisation Game: A Simulation,
Glenview, Il, Scott, Foresman and Company.
MILES, W. G., BIGGS, W. D. & SCHUBERT, J. N. (1986) Students Perceptions of Skill
Acquisition Through Cases and a General Management Simulation: A
Comparison. Simulation & Games, 17, 7-24.
MILLMAN, I. J. (2000) Flight Simulators for Business. 7th Annual EDINEB Conference:
Educational Innovation in Economics and Business. Newport Beach, USA.
MITCHELL, R. C. (2004) Combining Cases and Computer Simulations in Strategic
Management Courses. Journal of Education for Business, 79, 198-204.
MORSE, K. (2001) Assessing the Efficacy of Experiential Learning in a Multicultural
Environment. Developments in Business Simulation & Experiential Learning, 28,
153-159.
MOSIER, N. R. (1990) Financial Analysis: The Methods and Their Application to
Employee Training. Human Resource Development Quarterly, 1, 45-63.
MOWBRAY, C. T., HOLTER, M. C., TEAGUE, G. B. & BYBEE, D. (2003) Fidelity
Criteria: Development, Measurement and Validation. American Journal of
Evaluation, 24, 315-340.
MUMFORD, A. (Ed.) (1994) Effectiveness in Management Development, Aldershot,
Gower.
MYERS, I. B. & MCCAULLEY, M. H. (1985) A Guide to the Development and Use of
the Myers-Briggs Type Indicator, Palo Alto, Consulting Psychologists Press.
MYERS, I. B. & MCCAULLEY, M. H. (1989) A Guide to the Development and Use of
the Myers-Briggs Type Indicator, Palo Alto, Consulting Psychologists Press.
MYERS, I. B. & MYERS, P. B. (1980) Gifts Differing, Palo Alto, Consulting
Psychologists Press.
NAQUIN, S. S. & HOLTON, E. F., III (2003) Motivation to Improve Work through
Learning in Human Resource Development. Human Resource Development
International, 6, 355-370.
NONAKA, I. & TAKEUCHI, H. (1995) The Knowledge-creating Company, Oxford,
Oxford University Press.
NORDHAUG, O. & GRONHAUG, K. (1994) Competencies as Resources in Firms. The
International Journal of Human Resource Management, 5, 89-106.
O'GORMAN, C. (1997) Cooleys Distillery plc: A New "Spirit" in the World Whiskey
Industry. Dublin, University College.
O'ROURKE, J. (2003) Management Communication: A Case Analysis Approach, Upper
Saddle River, NJ, Prentice Hall.
PARLETT, M. & HAMILTON, D. (1987) Evaluation as Illumination: A New Approach
to the Study of Innovatory Programmes. IN MURPHY, R. & TORRANCE, H.
(Eds.) Evaluating Education: Issues and Methods. Milton Keynes, Open
University Press.
PARTRIDGE, S. E. & SCULLY, D. (1979) Cases Versus Gaming. Management
Education and Development, 10, 172-180.
PATTON, M. Q. (1978) Utilization-Focussed Evaluation, Beverly Hills, Sage.
PATTON, M. Q. (1990) Qualitative Evaluation and Research Methods, London, Sage
Publications.
PATZ, A. L. (1990) Group Personality Composition and Total Enterprise Simulation
Performance. Developments in Business Simulation & Experiential Exercises, 17,
132-137.
PATZ, A. L. (1992) Personality Bias in Total Enterprise Simulations. Simulation &
Gaming, 23, 45-76.
PEGDEN, C. D., SHANNON, R. E. & SADOWSKI, R. P. (1995) Introduction to
Simulation using SIMAN, Hightstown, NJ, McGraw-Hill.
PHILLIPS, J. (1991) Handbook of Evaluation and Measurement Methods, London, Gulf
Publishing.
PHILLIPS, J. (Ed.) (1997) Measuring Return on Investment, Alexandria, VA., ASTD.
PHILLIPS, J. J. (Ed.) (1998) ROI: The Search for Best Practices, Alexandria VA,
American Society for Training & Development.
PHILLIPS, P. P. (1999) The ROI Process: Trends and Issues. IN PHILLIPS, J. (Ed.)
Measuring Return on Investment. Alexandria, VA., ASTD.
PIERFY, D. A. (1977) Comparative Simulation Game Research: Stumbling Blocks and
Steppingstones. Simulation and Gaming, 8, 255-68.
PINTRICH, P. R., CROSS, D. R., KOZMA, R. B. & MCKEACHIE, W. J. (1986)
Instructional Psychology. Annual Review of Psychology, 37, 611-651.
PLATTS, K. W. & GREGORY, M. J. (1990) Manufacturing Audit in the Process of
Strategy Formulation. International Journal of Operations and Production
Management, 10.
PRAHALAD, C. K. & HAMEL, G. (1990) The Core Competence of the Corporation.
Harvard Business Review, 63, 79-91.
PRENSKY, M. (2000) Digital Game Based Learning, McGraw Hill.
PSYFACTOR (2005) Management Development in Chemical Manufacturing. Psyfactor.
RACKHAM, N. (1973) Recent Thoughts on Evaluation. Industrial and Commercial
Training, 5, 454-61.
RAIA, A. P. (1966) A Study of the Educational Value of Management Games. Journal of
Business, 39, 339-352.

RANDEL, J. M., MORRIS, B. A., WETZEL, C. D. & WHITEHALL, B. V. (1992) The
Effectiveness of Games for Educational Purposes: A Review of Recent Research.
Simulation & Gaming, 23, 261-276.
RAUSCH, E., HALFHILL, S. M., SHERMAN, H. & WASHBUSH, J. B. (2001) Practical
Leadership in Management Education for Effective Strategies in a Rapidly
Changing World. The Journal of Management Development, 20, 245-257.
REDDIN, W. J. (1970) Managerial Effectiveness, London, McGraw Hill.
REEVES, T. (1993) Research Support for Interactive Multimedia: Existing Foundations
and New Directions. IN LATCHEM, C., WILLIAMSON, J. &
HENDERSONLANCETT, L. (Eds.) Interactive Multimedia. London, Kogan Page.
REINGOLD, J. (1999) Exec Ed: Learning to Lead. Business Week.
REMENYI, D., WILLIAMS, B., MONEY, A. & SWARTZ, E. (1998) Doing Research in
Business and Management, London, Sage Publishing.
REVANS, R. (1983) The ABC of Action Learning, Bromley, Chartwell-Bratt.
REVANS, R. W. (1971) Developing Effective Managers, London, Longman.
REVANS, R. W. (1980) Action Learning, London, Blond and Briggs.
REYNOLDS, M. (1997a) Learning Styles: A Critique. Management Learning, 28, 115-
134.
REYNOLDS, M. (1997b) Towards a Critical Management Pedagogy. IN BURGOYNE, J.
& REYNOLDS, M. (Eds.) Management Learning: Integrating perspectives in
theory and practice. London, Sage Publications Ltd.
REYNOLDS, M. & SNELL, R. (1988) Contribution to Development of Management
Competence, Sheffield, Manpower Services Commission.
RICCI, K., SALAS, E. & CANNON-BOWERS, J. A. (1996) Do Computer-based Games
Facilitate Knowledge Acquisition and Retention? Military Psychology, 8, 295-307.
RIDLEY, M., LASCHINGER, H. & GOLDENBERG, D. (1995) The Effect of a Senior
Preceptorship on the Adaptive Competencies of Community College Nursing
Students. Journal of Advanced Nursing, 22, 58-65.
ROBOTHAM, D. & JUBB, R. (1996) Competences: Measuring the Unmeasurable.
Management Development Review, 9, 25-29.
ROMME, A. G. L. (2003) Learning Outcomes of Microworlds for Management
Education. Management Learning, 34, 51-61.
ROSE, H. (1995) Assessing Learning in VR: Towards Developing a Paradigm. HITL.
ROSS, S. & MORRISON, G. R. (2003) Experimental Research Methods. IN JONASSEN,
D. H. (Ed.) Handbook of Research Methods in Educational Communications and
Technologies. Second ed., AECT.
ROWE, C. (1995) Clarifying the Use of Competence and Competency Models in
Recruitment, Selection and Staff Development. Industrial and Commercial
Training, 27, 12-17.
RUONA, W. E. A., LEIMBACH, M., HOLTON, E. F., III & BATES, R. (2002) The
Relationship between Learner Utility Reactions and Predicted Learning Transfer
among Trainees. International Journal of Training and Development, 6, 218-228.
RUSS-EFT, D. (2002) A Typology of Training Design and Work Environment Factors
Affecting Workplace Learning and Transfer. Human Resource Development
Review, 1, 45-65.
RUSS-EFT, D. & PRESKILL, H. (2001) Evaluation in Organizations: A Systematic
Approach to Enhancing Learning, Performance, and Change, Cambridge, MA.,
Perseus Publishing.
RUSSELL, S. (1999) Evaluating Performance Interventions. Info-line.
SACKETT, P. R. & MULLEN, E. J. (1993) Beyond Formal Experimental Design:
Towards an Expanded View of the Training Evaluation Process. Personnel
Psychology, 46, 613-627.
SADLER-SMITH, E. (1996) Learning Styles: A Holistic Approach. Journal of European
Industrial Training, 20, 29-39.
SADLER-SMITH, E. (2001) A Reply to Reynolds's Critique of Learning Style.
Management Learning, 32, 291-304.
SALAS, E. & CANNON-BOWERS, J. A. (2001) The Science of Training: A Decade of
Progress. Annual Review of Psychology, 52, 471-499.
SALOMON, G. (Ed.) (1993) On the Nature of Pedagogic Computer Tools: The Case of
the Writing Partner, Hillsdale, NJ., Erlbaum.
SALZMAN, M. C., DEDE, C., LOFTIN, R. B. & CHEN, J. (1999) A Model for Understanding
How Virtual Reality Aids Complex Conceptual Learning. Presence: Teleoperators
and Virtual Environments, 8, 293-316.
SARAWANO, R. (1993) Assessment of Managerial Competencies and Potential. Henley-
on-Thames, Henley Management College/Brunel University.
SAVVAS, M., EL-KOT, G. & SADLER-SMITH, E. (2001) Comparative Study of
Cognitive Styles in Egypt, Greece, Hong Kong and the UK. International Journal
of Training and Development, 5, 64-74.
SCHANK, R. (1997) Virtual Learning: A Revolutionary Approach to Building a Highly
Skilled Workforce, New York, McGraw-Hill.
SCHANK, R. (2002) Designing World-Class E-Learning, New York, McGraw-Hill.
SCHNEIDER, M. (2001) A new test for MBA wannabes? The GMAT is good at
evaluating analytical skills. But what about “common sense” or right-brain skills?
Enter the SIA. BusinessWeek Online.
SCHNEIER, C. E. & BEATTY, R. W. (1977) Predicting Participants' Performance and
Reactions in an Experiential Learning Setting: An Empirical Investigation. New
Horizons in Simulation Games and Experiential Learning, 4, 291-298.
SCHON, D. (1983) The Reflective Practitioner: How Professionals Think in Action,
London, Maurice Temple Smith.
SCHON, D. (1987) Educating the Reflective Practitioner, San Francisco, CA, Jossey-
Bass.
SCHRODER, H. M. (1989) Managerial Competence: The Key to Excellence, Iowa,
Kendall/Hunt.
SCHUMANN, P. L., ANDERSON, P. H., SCOTT, T. W. & LAWTON, L. (2001) A
Framework for Evaluating Simulations as Educational Tools. Developments in
Business Simulation & Experiential Learning, 28, 215-220.
SCRIVEN, M. (1972) Pros and Cons about Goal-Free Evaluation. Evaluation Comment,
3, 1-4.
SENGE, P. (1990) The Fifth Discipline, New York, Doubleday Currency.
SKINNER, B. F. (1950) Are Theories of Learning Necessary? Psychological Review, 57,
193-216.
SNYDER, L. T. & VAUGHAN, M. J. (1996) Multimedia & learning: Where's the
connection? Developments in Business Simulation & Experiential Exercises, 23,
179-180.
SPENCER, L. M. & SPENCER, S. (1993) Competence at Work: Models for Superior
Performance, New York, John Wiley & Sons.
STAKE, R. E. (1980) Responsive Evaluation. University of Illinois.
STALK, G., EVANS, P. & SHULMAN, L. E. (1992) Competing on Capabilities: The
New Rules of Corporate Strategy. Harvard Business Review, 70, 57-69.
STANNEY, K., MOURANT, R. & KENNEDY, R. (1998) Human Factors Issues in Virtual
Environments. Presence: Teleoperators and Virtual Environments, 7, 327-351.
STERNBERG, R. J. (1997) Thinking Styles, Cambridge, Cambridge University Press.
STRAUSS, A. L. (1987) Qualitative Analysis for Social Scientists, Cambridge, Cambridge
University Press.
SUMMERS, G. J. (2004) Today's Business Simulation Industry. Simulation & Gaming,
35, 208-241.
SUGRUE, B. & KIM, K. (2004) 2004 State of the Industry Report, ASTD.
SUSSMAN, N. M. & TYSON, D. H. (2000) Sex and Power: Gender Differences in
Computer-Mediated Interactions. Computers in Human Behavior, 16, 381-394.
SWANSON, R. A. & HOLTON, E. F., III (1999) Results: How to Assess Performance,
Learning, and Perceptions in Organizations, San Francisco, Berrett-Koehler
Publishers, Inc.
SYMONS, J. (1996) What Type of Learning? A Review of Learning Approaches in the
Context of the Design and Delivery of Academic Management Development
Programmes in the UK. Henley-on-Thames, Henley Management College.
TAMPOE, M. (1994) Exploiting the Core Competences of Your Organization. Long
Range Planning, 27, 66-77.
TANNENBAUM, S. I., MATHIEU, J. E., SALAS, E. & CANNON-BOWERS, J. A.
(1991) Meeting Trainees' Expectations: The Influence of Training Fulfillment on
the Development of Commitment, Self-efficacy, and Motivation. Journal of
Applied Psychology, 76, 759-769.
TANNENBAUM, S. I. & WOODS, S. B. (1992) Determining a Strategy for Evaluating
Training: Operating within Organizational Constraints. Human Resource Planning,
15, 63-81.
TEACH, R. D. & GOVAHI, G. (Eds.) (1988) The Role of Experiential Learning and
Simulation in Teaching Management Skills.
TEACH, R. D. (Ed.) (1989) Using Forecast Accuracy as a Measure of Success in
Business Simulations.
THOMAS, R., CAHILL, J. & SANTILLI, L. (1997) Using an Interactive Computer Game
to Increase Skill and Self-efficacy Regarding Safer Sex Negotiation: Field Test
Results. Health Education & Behavior, 24, 71-86.
THOMPSON, C., KOON, E., WOODWELL, W. H. J. & BEAUVAIS, J. (2002) Training
for the Next Economy: An ASTD State of the Industry Report on trends in
employer-provided training in the United States. Alexandria, VA., ASTD.
THOMPSON, D. (1980) Adaptors and Innovators: A Replication Study on Managers in
Singapore and Malaysia. Psychological Reports, 47, 383-7.
TIMES, FINANCIAL (1998) NTT DoCoMo: "Questions hang over record-breaking issue".
TOLMAN, E. C. (1932) Purposive Behaviour in Animals and Men, New York, Appleton-
Century-Crofts.
TOPHAM, P. J. (1990) Humanistic Computing in the Management Learning Field.
University of Lancaster.
TOTTY, M. (2005) Business Solutions: Better Training Through Gaming. The Wall Street
Journal. WSJ.com ed.
TOWLER, A. J. & DIPBOYE, R. L. (2003) Development of a Learning Style Orientation
Measure. Organizational Research Methods, 6, 216-235.
TROMPENAARS, F. & HAMPDEN-TURNER, C. (1993) Riding the Waves of Culture:
Understanding Cultural Diversity in Business, Nicholas Brealey Publishing.
TSANG, E. W. K. (1997) Learning from Joint Venturing Experience: the case for foreign
direct investment by Singapore companies in China. University of Cambridge.
VICKERY, S. K. (1991) Theory of Production Competence Revisited. Decision Sciences,
22, 635-43.
VYGOTSKY, L. S. (1978) Mind in Society, Cambridge, Mass, Harvard University Press.
WARR, P. B., ALLAN, C. & BIRDI, K. (1999) Predicting Three Levels of Training
Outcome. Journal of Occupational and Organizational Psychology, 72, 351-375.
WARR, P. B., BIRD, M. W. & RACKHAM, N. (1970) Evaluation of Management
Training, Aldershot, Gower.
WARR, P. B. & BUNCE, D. (1995) Trainee Characteristics and the Outcomes of Open
Learning. Personnel Psychology, 48, 347-375.
WASHBUSH, J. & GOSEN, J. (2001) Learning in Total Enterprise Simulations.
Simulation & Gaming, 32, 281-296.
WATKINS, R., LEIGH, D., FOSHAY, R. & KAUFMAN, R. (1998) Kirkpatrick Plus:
Evaluation and Continuous Improvement with a Community Focus. Educational
Technology Research and Development, 46, 90-96.
WELLINGTON, W. J. & FARIA, A. J. (1992) An Investigation of the Relationship
Between Team Cohesion, Player Attitude, and Performance Attitude on Player
Performance. Developments in Business Simulation & Experiential Exercises, 19,
184-189.
WHITE, B. (1984) Designing Computer Games to Help Physics Students Understand
Newton's Laws of Motion. Cognition and Instruction, 1, 69-108.
WHITEHALL, B. & MCDONALD, B. (1993) Improving Learning Persistence of Military
Personnel by Enhancing Motivation in a Technical Training Program. Simulation
& Gaming, 24, 294-313.
WHITELOCK, D., BRNA, P. & HOLLAND, S. (1996) Proceedings of the European
Conference on AI in Education. Lisbon, Portugal, Colibri Editions.
WIEBE, J. H. & MARTIN, N. J. (1994) The Impact of a Computer-based Adventure
Game on Achievement and Attitudes in Geography. Journal of Computing in
Childhood Education, 5, 61-71.
WILLE, E. (1989) Managerial Competencies and Management Development. Training
Officer, 25, 326-328.
WIMER, S. (2002) The Dark Side of 360-degree Feedback. Training & Development, 37-42.
WINN, W. & SNYDER, D. (1996) Cognitive Perspectives in Psychology. IN
JONASSEN, D. H. (Ed.) Handbook of Research for Educational Communications
and Technology. First ed., Simon and Schuster.
WINTERTON, J. & WINTERTON, R. (1999) Developing Managerial Competence,
London, Routledge.
WITMER, B. & SINGER, M. J. (1994) Measuring Immersion in Virtual Environments.
ARI Technical Report 1014.
WOLFE, J. (1985) The Teaching Effectiveness of Games in Collegiate Business Courses:
A 1973-1983 Update. Simulation & Games, 16, 251-288.
WOLFE, J. (Ed.) (1990) The Evaluation of Computer-based Business Games:
Methodology, Findings, and Future Needs, USA, Association for Business
Simulation and Experiential Learning (ABSEL).
WOLFE, J. & CROOKALL, D. (1998) Developing a Scientific Knowledge of
Simulation/Gaming. Simulation & Gaming, 29, 7-19.
WOLFE, J. & GUTH, G. (1975) The Case Approach Versus Gaming in the Teaching of
Business Policy: An Experimental Evaluation. Journal of Business, 48, 349-364.
WOLFE, J. & ROBERTS, C. R. (1986) The External Validity of a Business Management
Game: A Five-year Longitudinal Study. Simulation & Games, 17, 45-49.
WOLFE, J. & ROBERTS, C. R. (1993) A Further Study of the External Validity of
Business Games: Five year peer group indicators. Simulation & Gaming, 24, 21-
33.
WOOD, L. E. & STEWART, P. W. (1987) Improvement of Practical Reasoning Skills
with a Computer Game. Journal of Computer-Based Instruction, 14, 49-53.
WORRAL, L. (2004) A Perspective on Management Research. Wolverhampton,
University of Wolverhampton.
YIN, R. K. (1989) Case Study Research: Design and Methods, Newbury Park, CA., Sage.
YOUNG, M. (2002) Clarifying Competency and Competence. Henley Working Paper.
Henley, Henley Management College.
ZALATAN, K. A. & MAYER, D. F. (1999) Developing a Learning Culture: Assessing
changes in student performance and perception. Developments in Business
Simulation & Experiential Learning, 26.

Appendix 1 - Strategy Programme Overview

Each block below lists the topics covered, the common taught content, and the parallel activity in each of the three programme variants: the Strategy CoPilot simulation, the Strategy at the Edge game, and the case studies.

Overview (Mission; Objectives; Problem solving; Strategy; Tactics)
Content: Introduction to the programme; Mission and Objectives
• Simulation: Introduction to the simulation; Phase 0 – Problem Solving
• Game: Introduction to the game; principles of decision making and input
• Case study: Introductory case study – how to read and analyse cases

Internal Analysis (Finance; Value Chain; Industry attractiveness)
Content: Finance; Value chain analysis; Industry attractiveness; External forces (PEST)
• Simulation: Phase 1 – Diagnosis of current situation
• Game: Round 1 – Establish objectives and strategy to achieve mission
• Case study: NTT DoCoMo "Questions hang over record-breaking issue"
Debrief and application to business

External Analysis (PEST; Industry segmentation; Customer needs; SWOT)
Content: Identifying customer needs; Industry segmentation; Attractiveness and achievability of new positions
• Simulation: Phase 2 – Customer interviews and content exercise
• Game: Round 2 – Assess attractiveness and achievability of chosen strategy against competitors
• Case study: Cooleys Distillery: A New "Spirit" in the World Whiskey Industry
Debrief and application to business

Competitive Advantage (Competitive Positions; Attractiveness and Achievability; Ansoff matrix)
Content: Strengthening the value chain; Pushing back against the 5 forces; Using partnerships and alliances; Exploiting competitor weaknesses
• Simulation: Phase 3 – Competitive Advantage
• Game: Rounds 3 & 4 – Mergers, acquisitions and alliances allowed
• Case study: Boo.Com – The Path to Failure
Debrief and application to business

Options and Choice (M&A; Generic Strategies; SFA – Suitability, Feasibility, Acceptability; Scenarios)
Content: Selecting strategy; Strategic staircase
• Simulation: Phase 4 – Select and Refine Strategy
• Game: Round 5 – Consultancy round, in which each team consults another to evaluate decisions and effectiveness
• Case study: Cooleys and Boo.Com
Debrief and application to the business

Group presentation on Business Unit strategy

183
Appendix 2 – Online Version of Kolb’s LSI III
Instructions

The learning styles inventory describes the way you learn and how you deal with
ideas and day-to-day situations in your life. Below are 12 sentences, each with a choice
of four endings. Rank the endings for each sentence according to how well you think
each one fits with how you would go about learning something. Try to recall some
recent situations where you had to learn something new, perhaps in your job. Then,
using the bullets provided, give a "4" to the ending that best describes how you learn,
down to a "1" for the ending that seems least like the way you would learn. Be sure to
rank all the endings for each sentence unit. Please do not make ties. (A short sketch
showing how these ranking rules can be checked in an online form follows the items below.)

Please provide the following (* required):

First Name *

Last Name *

Company *

Email *

For each sentence, rank each of the four endings on the scale:
1 = Least like me, 2, 3, 4 = Most like me (use each rank exactly once per sentence).

1. When I Learn:
• I like dealing with my feelings
• I like to think about ideas
• I like to be doing things
• I like to watch and listen

2. I Learn Best When:
• I listen and watch carefully
• I rely on logical thinking
• I trust my hunches and feelings
• I work hard to get things done

3. When I am Learning:
• I tend to reason things out
• I am responsible about things
• I am quiet and reserved
• I have strong feelings and reactions

4. I Learn By:
• Feeling
• Doing
• Watching
• Thinking

5. When I Learn:
• I am open to new experiences
• I look at all sides of issues
• I like to analyze things, break them down into their parts
• I like to try things out

6. When I am Learning:
• I am an observing person
• I am an active person
• I am an intuitive person
• I am a logical person

7. I Learn Best From:
• Observation
• Personal relationships
• Rational theories
• A chance to try out and practice

8. When I Learn:
• I like to see results from my work
• I like theories and ideas
• I take my time before acting
• I feel personally involved in things

9. I Learn Best When:
• I rely on my observations
• I rely on my feelings
• I can try things out for myself
• I rely on my ideas

10. When I am Learning:
• I am a reserved person
• I am an accepting person
• I am a responsible person
• I am a rational person

11. When I Learn:
• I get involved
• I like to observe
• I evaluate things
• I like to be active

12. I Learn Best When:
• I analyze ideas
• I am receptive and open-minded
• I am careful
• I am practical
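
The following is a minimal sketch, in Python, of how an online form such as this one might enforce the ranking rules given in the instructions (all four endings ranked, the ranks 1 to 4 each used exactly once, no ties). It is an illustration only: the actual validation logic behind the online LSI is not documented here, and the ending labels in the example are shorthand.

    # Minimal validation sketch for the forced-ranking rules above; not the
    # instrument's actual code. Each item maps its four endings to ranks 1-4.
    from typing import Dict, List

    def validate_lsi(responses: List[Dict[str, int]]) -> List[str]:
        """Return a list of problems found; an empty list means the form is valid."""
        problems = []
        if len(responses) != 12:
            problems.append("Expected 12 ranked items, got %d" % len(responses))
        for i, item in enumerate(responses, start=1):
            # The four ranks must be exactly 1, 2, 3, 4 - no ties, no omissions
            if sorted(item.values()) != [1, 2, 3, 4]:
                problems.append("Item %d: use each of 1-4 exactly once (no ties)" % i)
        return problems

    # Example: a correctly ranked first item ("When I Learn: ...")
    item1 = {"feelings": 4, "thinking": 2, "doing": 3, "watching": 1}
    print(validate_lsi([item1]))  # reports that only 1 of the 12 items is present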

Appendix 3 – Performance Rating Scale
Generalised example of the boss’s Performance Rating Scale

Name:

1 – Fails to meet objectives
    • Demonstrates poor skills in working with others, or hits own targets at the
      expense of others

2 – Below target: meets objectives with detailed supervision
    • Demonstrates a limited level of knowledge and problem solving skills to meet
      objectives independently

3 – On target: meets the key objectives of the role
    • Demonstrates strong ability to deliver to targets and takes a proactive
      approach to problem solving

4 – Exceeds the demands of the role with minimum supervision or assistance
    • Shows high levels of initiative and problem-solving
    • Demonstrates thorough knowledge and ability to achieve the role requirements

5 – Outstanding: substantially exceeds demands of KPIs and the role in general
    • Demonstrates high levels of pro-activity, insight and creativity

Appendix 4 – Simulation and Game Overviews
STRATEGY CoPilot™

The simulation is divided into five phases, which follow the same conceptual
structure as the tutorial. In outline:
• Phases 0 and 1: An introduction to Acme Bottle and its
competitive context, based on the issue of whether the
parent company should accept a bid for the division. In
answering this question the user is also encouraged to
diagnose the current issues around Acme’s positioning
and sources of advantage. The user gathers data
through interviews with the management team and an
outside broker, and is guided in their problem solving
and communication efforts by the Mentor.

• Phase 2: In this phase the user develops ideas for new competitive positions for
Acme. They interview a range of customers to understand their needs, and then work
with the management team to turn this data into a needs-based segmentation.
Mapping competitors onto this segmentation identifies potential new positions for
Acme to serve. The programme then generates a long list of possible positions using
the creative idea generation techniques in the theory section, and the phase finishes
with an initial assessment of the relative attractiveness and achievability of a shortlist
of these options.

• Phase 3: The next phase revolves around investigating how Acme can develop or strengthen the
investigating how Acme can develop or strengthen the
capabilities it needs in order to serve the favoured
competitive positions. Again, users must ensure they
are asking the right questions to elicit valuable
insights, and they practice a range of powerful
techniques for generating ideas about competitive
advantage. On the basis of these findings, the user
may adjust their previous analyses of the
attractiveness of potential new positions. However, the
pressure is now building from the CEO of the parent
company. In this and the final phase, the user must
keep their team happy and involved, as well as ensuring thorough consideration of the
key issues.

• Phase 4: In the final phase the user selects and presents the chosen strategy for
Acme Bottle, and lays out the initial outline for the key steps to implementation. This
includes understanding the financial implications of the strategy, as well as identifying
internal barriers to implementation and the implications of uncertainty. As in real life,
there are political and emotional challenges as well as analytical issues to address.

STRATEGY at the Edge
The game is divided into five rounds, or years, which follow the same
conceptual structure as the tutorial. In outline:
• Round 1: Your team is the new management team of an international airline. With
capital raised by the founding shareholders, you need to establish your strategy and
objectives to provide the expected return on equity. You need to make fundamental
choices about your airline and how it will be run. Decisions in this round include: your
hub airport; the number and configuration of aircraft (you are limited to two aircraft
types in the first round, the Airbus 200 and 400 series, for short and long haul flights
respectively); the seating configuration in each aircraft; the in-flight catering; and your
seat pricing in each class of service you offer. The team gathers data by reviewing
market research information and using the simulation to test your assumptions. Your
tutor will guide you through any difficulties. At the end of this round, the simulation
calculates the market share of each team based on a market engine that accurately
predicts the effects of interactions (a simplified sketch of this kind of share calculation
follows the round descriptions below).

• Round 2: In this round, teams develop ideas for new competitive positions for their
airline. They review new market research to understand customer needs, and then
work with the simulation to turn this data into a needs-based segmentation. Mapping
competitors onto this segmentation helps identify potential new positions for the
airline. Using a number of different combinations of services, routes, pricing and
incentives (such as air miles), teams may use the simulation to establish likely effects
based on their assumptions.

• Round 3: The next round revolves around investigating how the team can develop or
strengthen the capabilities it needs in order to serve
the favoured competitive positions. Again, users must
ensure they are asking the right questions to elicit
valuable insights, and they practice a range of
powerful techniques for generating ideas about
competitive advantage. On the basis of these findings,
the user may adjust their previous analyses of the
attractiveness of potential new positions. However, the
pressure is now building from the founding
shareholders. In this and the final rounds, the team
needs to show profitability or face the consequences of potential merger or acquisition.

• Round 4: In the final round of the competitive game, teams may form alliances,
merge with each other or mount friendly (or even hostile) takeovers. Alternatively,
teams may elect to go it alone.

• Round 5: Teams rotate to another team and act as consultants, assessing the
selection of the chosen strategies and reviewing the implementation of the strategy.
This includes understanding the financial implications of the strategy, as well as
identifying internal barriers to implementation and the implications of uncertainty. As
in real life, there are political and emotional challenges as well as analytical issues to
address.
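
The market engine referred to in Round 1 is not specified in this appendix. As a purely illustrative sketch, business games of this kind often allocate demand with an attraction model, in which each airline's share is its attraction (rising with service quality, falling with price) normalised over all competitors. The attribute names, exponents and functional form below are assumptions for illustration, not the actual Strategy at the Edge engine.

    # Hypothetical attraction-share sketch; the real market engine, its inputs
    # and its weights are not described in this appendix.

    def market_shares(offers):
        """offers: one dict per airline with assumed keys 'price' and 'service'.
        Attraction rises with service quality and falls with price; each team's
        market share is its attraction divided by the total attraction."""
        attractions = [(o["service"] ** 1.5) / (o["price"] ** 2.0) for o in offers]
        total = sum(attractions)
        return [a / total for a in attractions]

    # Two competing teams: a premium carrier and a low-cost carrier
    teams = [{"price": 320.0, "service": 7.0},
             {"price": 250.0, "service": 5.0}]
    print(market_shares(teams))  # relative shares summing to 1.0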

Appendix 5 – MCQ Competency Descriptors
Achievement Orientation
Core: Does the person think about meeting and surpassing goals and taking calculated
risks for measured gains?
A concern for working well or for surpassing a standard of excellence. The standard may
be one’s own past performance (striving for improvement); an objective measure (results
orientation); outperforming others (competitiveness); challenging goals one has set; or
even what no one has ever done (innovation). Thus a unique accomplishment also
indicates achievement orientation.
Developing Others
Core: Does the person work to develop the long-term characteristics (not just skills) of
others?
Involves a genuine intent to foster the long-term learning or development of others,
with an appropriate level of need analysis and other thought or effort. Its focus is on the
developmental intent and effect rather than on a formal role of training.
Directiveness
Core: Does the person set firm standards of behaviour and hold people accountable for
them?
Implies the intent to make others comply with one’s wishes by appropriate and effective
use of personal power or the power of one’s position, with the long-term good of the
organisation in mind. It includes a theme or tone of “telling people what to do.” The tone
ranges from firm and directive to demanding or even to threatening.
Impact and Influence
Core: Does the person use deliberate influence strategies or tactics?
Implies an intention to persuade, convince, influence, or impress others in order to get
them to go along with or to support one’s own agenda. It is based on the desire to have a
specific impact or effect on others: the person has his or her own agenda – a specific
type of impression to make or a course of action that he or she wants others to adopt.
Interpersonal Understanding
Core: Is the person aware of what others are feeling and thinking, but not saying?
Implies wanting to understand other people. It is the ability to accurately hear and
understand the unspoken or partly expressed thoughts, feelings, and concerns of others.
It measures increasing complexity and depth of understanding of others and may include
cross-cultural sensitivity.
Organisational Awareness
Core: Is the person sensitive to the realities of organizational politics and structure?
The ability to learn and understand the power relationships in one’s organisation or in
other organisations (customers, suppliers. etc.). This includes the ability to identify the
real decision-makers and the individuals who can influence them, and to predict how
new events or situations will affect individuals and groups within the organisation.

Team Leadership
Core: The intention to take a role as leader of a team or other group. It implies a desire
to lead others. Team leadership is generally, but certainly not always, shown from a
position of formal authority. The “team” here should be understood broadly as any group
in which the person takes on a leadership role.

Appendix 6 – Learning Scale
Assessment of presentation in demonstrating learning

Please consider the following questions when assessing the learning of your staff at
their presentation:

• Have they demonstrated analysis of the business problem in question?
• Have they defined the problem?
• Have they defined the goals and objectives?
• Have they shown how facts were analysed and how the problem
developed?
• Have they considered realistic alternatives?
• Have they chosen the best solution based on the alternatives?

Please rate, in your opinion, the learning your staff member has demonstrated from
the training programme on the following scale:

Participant name

Demonstrated learning from the training programme


Not at all A little Somewhat A lot A great deal

Please add any comments for feedback to the participant:

Appendix 7 – Example Report

Managerial Effectiveness Competency Assessment

Personal Profile

Ann Onymous
Thursday July 14th 2004

Interpreting the Competency Frequency profile

The profile you and your third party nominees have completed illustrates the frequency with which you demonstrate managerial
behaviours. In other words, if you "Occasionally" demonstrate each of the different behaviours that compose Developing
Others, your frequency graph would be at 50% of the way up the graph.

When reviewing your profile you should consider four key questions:
1. Do I demonstrate each competency at least some of the time?
It is easier to develop an existing competency than to start one from nothing. If you score 25% or higher, it
means you have demonstrated the competency at least once or twice, and would find it easier to increase
your use of it.
2. How often are each of the competencies seen?
The higher the frequency, the more often you have demonstrated all these behaviours. If a competency is
scored 50% or higher, it has been seen multiple times; 75% or higher, frequently. (A short sketch
illustrating this banding follows this list.)
3. Which are strengths and which areas for growth?
All seven competencies are needed to be the most effective manager. To reach that point, all seven should be
at the 50% mark. For development, you should concentrate on those rated below 50%.
4. How do my third party nominees see me demonstrate these competencies?
Often, our own assessment is different from those we work with. Remember the question asked for each
competency is: "How often have you seen.... in the last six months". You may consider that you do
demonstrate something; others, however, may not see this - this suggests that you may need to clearly
demonstrate particular behaviours that you are comfortable with. Conversely, others may see behaviours that
you don't recognise.
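
The thresholds in questions 1 and 2 map directly onto the band labels used later in this report. The sketch below illustrates that mapping; it is not the MECA report generator, and the treatment of exact boundary values and of scores below 25% is an assumption.

    # Illustrative banding of an average frequency percentage, following the
    # thresholds described above (25%: seen once or twice; 50%: multiple times;
    # 75%: frequently). Boundary handling and the lowest label are assumed.

    def frequency_band(pct: float) -> str:
        if pct >= 75.0:
            return "Seen Frequently"
        if pct >= 50.0:
            return "Seen Multiple times"
        if pct >= 25.0:
            return "Seen Occasionally"
        return "Not Seen"  # below 25%; the report leaves such cells blank

    for pct in (88.0, 63.0, 38.0, 20.0):
        print("%.0f%% -> %s" % (pct, frequency_band(pct)))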

Consider your own scores with your third parties - what does this tell you and how might you go about changing this perception?

The consolidated level profile will help you identify particular behaviours that may guide your self-awareness and/or what you
might undertake to develop yourself in competency areas.

The following page describes each of the competencies measured within this assessment.

If you wish to discuss your profile in confidence, please log on to the Corporate Edge Mentor Forum - we will be pleased to
review the profile with you by email, phone or in person.

Thank you for taking the time to complete the MECA, we hope that you find it beneficial and a helpful guide for your further
personal development.

Corporate Edge Asia


www.ce-asia.com

Competency Level Profile
for Ann Onymous – Bachelors degree
38 year old Female, Chinese, Malaysian national, Manager

Achievement Orientation – A stable part of your repertoire: you are comfortable enough with these behaviours that people see them frequently.

Level  Behaviour                                          Average Frequency  Self   Boss   Peer   Staff  Friend
1      Wants to do job well (not scored)                  Not Scored
2      Creates own measures of excellence (not scored)    Not Scored
3      Improves performance                               30.0%              50%    63%    38%
4      Sets and works to meet challenging goals           37.5%              63%    75%    50%
5      Makes cost-benefit analyses                        37.5%              50%    63%    75%
6      Takes calculated entrepreneurial risks             27.5%              63%    25%    50%

[Bar chart: Achievement Orientation Level Profile – frequency percentage (0.0% to 100.0%) per behaviour]

Takes calculated entrepreneurial risks: Commits significant resources and/or time (in the face of uncertainty) to increase benefits (i.e., improve performance, reach a challenging goal, etc.).

Boss assessment higher than your own
Peer assessment higher than your own
Not assessed by Staff
Not assessed by Friend

Developing Others – A stable part of your repertoire: you are comfortable enough with these behaviours that people see them frequently.

Level  Behaviour                                               Average Frequency  Self   Boss   Peer   Staff  Friend
1      Expresses positive expectations of person (not scored)  Not Scored
2      Gives how-to directions                                 50.0%              100%   75%    75%
3      Gives reasons, other support                            37.5%              63%    63%    63%
4      Gives feedback to encourage                             45.0%              88%    75%    63%
5      Does longer-term Coaching or Training                   40.0%              75%    63%    63%

[Bar chart: Developing Others Level Profile – frequency percentage (0.0% to 100.0%) per behaviour]

Does longer-term coaching or training: Arranges appropriate and helpful assignments, formal training, or other experiences for the purpose of fostering a person's learning and development. Has people work out answers to problems themselves so they really know how, rather than simply giving the answer. This does not include formal training done simply to meet corporate requirements. May include identifying a training or developmental need and establishing new programmes or materials to meet it.

Boss assessment not as high as your own
Peer assessment higher than your own
Not assessed by Staff
Not assessed by Friend

Impact and Influence – A stable part of your repertoire: you are comfortable enough with these behaviours that people see them frequently.

Level  Behaviour                                                    Average Frequency  Self   Boss   Peer   Staff  Friend
1      States intention but takes no specific action (not scored)   Not Scored
2      Takes a single action to persuade                            22.5%              38%    63%    13%
3      Takes multiple actions to persuade                           35.0%              63%    50%    63%
4      Calculates the impact of one's actions or words              32.5%              88%    50%    25%
5      Uses indirect influence                                      37.5%              63%    50%    75%
6      Uses complex influence strategies (not scored)               Not Scored

[Bar chart: Impact and Influence Level Profile – frequency percentage (0.0% to 100.0%) per behaviour]

Uses indirect influence: Uses chains of indirect influence: "Get A to show B so B will tell C such-and-such" OR takes two steps to influence, with each step adapted to the specific audience. Uses experts or third parties to influence.

Boss assessment not as high as your own
Peer assessment higher than your own
Not assessed by Staff
Not assessed by Friend

Team Leadership – A stable part of your repertoire: you are comfortable enough with these behaviours that people see them frequently.

Level  Behaviour                                Average Frequency  Self   Boss   Peer   Staff  Friend
1      Manages meetings well (not scored)       Not Scored
2      Keeps people informed (not scored)       Not Scored
3      Promotes team effectiveness              30.0%              50%    63%    38%
4      Takes care of the group                  30.0%              88%    63%
5      Positions self as the leader             40.0%              88%    63%    50%
6      Communicates a compelling vision         42.5%              75%    75%    63%

[Bar chart: Team Leadership Level Profile – frequency percentage (0.0% to 100.0%) per behaviour]

Communicates a compelling vision: Has genuine charisma; communicates a compelling vision that generates excitement, and commitment to the group mission.

Boss assessment not as high as your own
Peer assessment same as own
Not assessed by Staff
Not assessed by Friend
Competency Frequency Profile
for Ann Onymous – Bachelors degree, post assessment
38 year old Female, Chinese, Malaysian national, Manager

[Bar chart: frequency percentage (0% to 100%) by rater – Self, Boss, Peer, Staff and Friend – for each of the four competencies]

         Achievement orientation   Developing others     Impact and Influence   Team leadership
         Used                      Used                  Used                   Used
Self     Seen Multiple times       Seen Frequently       Seen Multiple times    Seen Frequently
Self     Apparent Strength         Apparent Strength     Apparent Strength      Apparent Strength
Boss     Seen Multiple times       Seen Multiple times   Seen Multiple times    Seen Multiple times
Peer     Seen Multiple times       Seen Multiple times   Seen Occasionally      Seen Occasionally
Staff
Friend
Appendix 8 – Reaction Form
Name
(please use block
capitals)
Please rate your opinion about each session/tutor and please add comments. Your feedback will
help us to improve the programme.

Please rate each session during the programme on two scales:
• how much you enjoyed the session, and
• how useful you found the session for your development.

Session                                     I enjoyed this session          I found this session useful
                                            (Not at all 1 ... 5 Greatly)    (Not at all 1 ... 5 Very)

Strategy Co-Pilot Simulation                1   2   3   4   5               1   2   3   4   5
Strategy Co-Pilot Debrief and application   1   2   3   4   5               1   2   3   4   5
Theory sessions                             1   2   3   4   5               1   2   3   4   5
Team Work                                   1   2   3   4   5               1   2   3   4   5

Please rate the programme overall and each of the tutors

Programme overall
Did this programme:                                          Not at all  Some  OK  Well  Very well
Meet your expectations?                                          1        2    3    4       5
Meet stated objectives?                                          1        2    3    4       5
Reflect your organisational business issues?                     1        2    3    4       5
Please rate your overall satisfaction with this programme        1        2    3    4       5

Programme Content
Please state the extent to which:                            Not at all  Some  OK  Well  Very well
You have increased your understanding of strategy                1        2    3    4       5
The programme was relevant to your needs                         1        2    3    4       5
The programme was of practical value                             1        2    3    4       5
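
Responses gathered with this form can be summarised per session. The sketch below shows one possible aggregation (simple means of the two 1-5 ratings); the thesis does not prescribe this calculation, and the session names are taken from the form above while the ratings are invented for illustration.

    # Minimal sketch of summarising the 1-5 reaction ratings per session; the
    # aggregation approach is an assumption, not described in the appendix itself.
    from statistics import mean

    # Each response: session name -> (enjoyed, useful), both on the 1-5 scales above
    responses = [
        {"Strategy Co-Pilot Simulation": (5, 4), "Team Work": (4, 4)},
        {"Strategy Co-Pilot Simulation": (4, 5), "Team Work": (3, 4)},
    ]

    for session in sorted({name for r in responses for name in r}):
        enjoyed = mean(r[session][0] for r in responses if session in r)
        useful = mean(r[session][1] for r in responses if session in r)
        print("%s: enjoyed %.1f, useful %.1f" % (session, enjoyed, useful))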
