
Simulation & Gaming, 41(5), 743–766
© 2010 SAGE Publications
Reprints and permission: http://www.sagepub.com/journalsPermissions.nav
DOI: 10.1177/1046878109353468
http://sg.sagepub.com

Assimilation of Public Policy Concepts Through Role-Play:
Distinguishing Rational Design and Political Negotiation

Pieter W. G. Bots,1 F. Pieter Wagenaar,2 and Rolf Willemse3

Abstract
One important objective of introductory courses in public administration is to sensitize
students to the difference between two concepts: substantive rationality and politi-
cal rationality. Both types of rationality play an important role in policy processes. Yet,
although the difference is straightforward in theory, and is addressed and well-illustrated
in most standard textbooks on public administration, students seem to have difficulty
internalizing it. This article reports on our findings from a role-playing game designed
to make students experience the difference between policy making as a process of
rational design and policy making as a process of political negotiation. We conducted
an experiment involving a large group of students enrolled in a first year, one-semester
course, and a control group of students who enrolled in the same course 1 year later.
The former were tested four times (start of the course, immediately before and after
playing the game, and 3 months later) and the latter two times (at the start of the course
and at the exam) for their understanding of how policy making–as-rational-design and
policy making–as-political-negotiation differ on seven characteristics. Comparison of
test results obtained before and after the role-play indicates a positive learning effect
for some characteristics, and a negative learning effect for others.

Keywords
debt settlement game, education, experiential learning, public policy, rationality, role-
play, Wilcoxon matched-pairs signed-ranks test

1 Delft University of Technology, Delft, The Netherlands
2 Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
3 Rotterdam Audit Office, Rotterdam, The Netherlands

Corresponding Author:
Pieter W. G. Bots, Faculty of Technology, Policy and Management, Delft University of Technology, P.O. Box
5015, 2600 GA Delft, The Netherlands
Email: p.w.g.bots@tudelft.nl

The combination of public policy and gaming/simulation is hot, and for good reason:
Its potential for changing beliefs and influencing the decisions and behavior of people
is growing (Mayer, in press; Thorngate & Tavakoli, 2009). In this article, however, we
do not use games in the political arena, but—more traditionally—in the classroom. Will
students learn the conceptual distinction between two styles of policy making by role-
playing instead of attending a lecture?
To answer this question, we designed a role-playing game based on a real-world
policy issue that would let our students experience different styles of policy making.
We measured the students’ knowledge at various moments before and after the role-
playing exercise, and observed whether significant learning effects occurred. Our find-
ings indicate that role-playing is indeed conducive to concept learning, but also liable
to convey erroneous beliefs.
Readers may find our work interesting for three reasons. Educators who consider
using gaming/simulation in their classes may appreciate our study as a detailed exam-
ple. Game developers may be inspired by the plot of our role-playing exercise and the
design choices we made. Researchers who consider doing a similar type of study may
learn from our ideas (and our mistakes!) and discover the virtues of the Wilcoxon
matched-pairs signed-ranks test.

Why Role-Play to Learn About Public Policy Making?


The role-playing game we present here was designed to help undergraduate social sci-
ence majors develop a sense for the difference between two approaches to the policy
making process: policy making as a rational design process and policy making as a
political negotiation process. Traditionally, we taught the differences between the
two approaches using a rather conventional method: Students were asked to first read
the seminal article by Lindblom (1959), which succinctly describes policy making as
a “messy,” incremental process of political negotiation, and then confront this concept
with the concept of rational policy design based on causal analysis of a social problem
(Axelrod, 1976; Huff, 1990). It appeared that especially those students who were not planning to do a bachelor's in public administration (PA) gained little understanding from this activity, and that none of our students—those majoring in PA included—felt stimulated by the exercise.
Several political science and public policy colleagues (Chasek, 2005; Frederking,
2005; Lantis, 2004; McCarthy & Anderson, 2000; Shaw, 2004; Shellman & Turan,
2003) have used role-playing games as teaching aids. They expected that gaming/
simulation in the classroom would

• promote active participation in the learning process


• accommodate a variety of learning styles (in part through an alternative pre-
sentation of the material)
• be fun
• enhance retention

Sharing these expectations, we developed a role-playing exercise that lets the students experience the difference between the two approaches.
We hypothesized that making students enact a policy making process (taking one
of the two approaches) in a gaming/simulation setting, and then having them reflect on
the events during the game and the resulting policy would help them see the difference
between the concepts of “rational design” and “political negotiation.” There is little evi-
dence of role-playing games being effective for this type of learning—called “concept
learning” by Gagné (1965, 1985)—but we assumed that this is due to a lack of games
with this teaching objective (Dempsey, Rasmussen, & Lucassen, 1996).
Because reviews of educational game research (Bredemeier & Greenblat, 1981;
Van Sickle, 1986; Randel et al., 1992) show that gaming in education enhances moti-
vation, we trusted that our new teaching method would be more motivating than the
original. Hence, we designed our experiment to measure only the game’s effective-
ness as a “concept learning” device.

Teaching Objective
The textbook used in the introductory course in PA taught by the authors in Amsterdam
(Bovens, ’t Hart, & van Twist, 2001) characterizes the two approaches to policy making—
commonly contrasted in the PA literature—in the following manner:

• Approaching the policy-making process as a matter of rational design means trying to establish governance on scientific principles and viewing policy as the outcome of purposeful choice. The policy-making process proceeds through tidy, set stages: from agenda building, to policy design, to implementation, to evaluation and reconsideration.
• Seeing the policy-making process as a matter of political negotiation implies a much messier world view. Policy making is regarded as a process of constant bargaining and struggle, and political support is far more important to policy makers than scientific tenability.

Adherents of a rational design view of the policy-making process (MacRae & Whittington, 1997; Miser & Quade, 1988) view society in terms of causal chains that can be influenced. Policy design is a matter of constructing a causal model, based on scientific research, that makes clear which goals should be attained to what end, by what means, and in which manner.
Advocates of a political negotiation view (Allison, 1971; Kingdon, 1995; Lindblom,
1959) see policy making as a slow, incremental process in which lobbies and interest
groups play an important role. All of the multiple parties involved have their own goals,
often even their own type of rationality, and try to get their way in a constant game of
infighting, negotiation, coalition building, and so on. During each of the stages of the
process—which, as it happens, exist in the eyes of policy analysts only—this politick-
ing continues.

Table 1. Typical Differences Between Modes of Policy Making

Emphasis

Item Rational Design Political Negotiation

Knowledge and information High Low
Money Low High
Public interest High Low
Individual or group interests Low High
Problem content High Low
Support Low High
Feasibility Low High

Both approaches differ markedly in the importance and role assigned to knowledge
and information, money, private versus general interest, problem content (i.e., causes
of the problem, available policy instruments, and their effects) and the feasibility of,
and the support for, policy. Table 1 schematizes the differences in terms of the relative
importance of each item. For example, in rational policy design, there will be more
emphasis on gathering and using policy-relevant knowledge and information, while
political negotiators will be most keen on money, that is, the costs and benefits and their
distribution over stakeholders.

Setup of the Role-Playing Exercise


Following Brock and Cameron (1999)—who apparently had experiences similar to
ours while teaching political science to undergraduates—we used the Experiential
Learning Model of Kolb (1984) as a conceptual frame to rationalize our use of a role-
playing exercise. Our objective was to bring about experiential learning concerning
the differences in policy making–as-rational-design and policy making–as-political-
negotiation. The role-play was intended to create a “here-and-now” experience on which
the students were to reflect. Provided that the enacted policy-making process would be
sufficiently realistic, the reflective observations made by students during the debriefing
of the gaming/simulation would pertain to the characteristics of the two approaches to
policy making listed in Table 1. Through assimilation of these reflective observations—
partly during the debriefing and partly during the remainder of the course—students
should come to grasp the general concept of policy making–as-rational-design and
policy making–as-political-negotiation.
In our design, we had to decide whether to have all students experience both approaches
to policy making, or to divide them into two groups and have them jointly compare their
results. Time and resource constraints made us choose the second setup:

• Two groups of students solve the same policy problem in parallel: one group
through a rational analysis of causes, possible interventions and their effects;
the other group through political negotiation.

• These two groups then convene, and the two group leaders each present their
proposed solution.
• In the subsequent debriefing, differences in process and outcome are discussed.

This parallel setup required less time from students and staff. Moreover, it required
only one policy-making case. The sequential setup would have required two different
policy cases, because otherwise students would have been able, during their second
game session, to use case-specific information obtained in the first session.
We thought that in having the two groups confront each other with their solutions,
students would experience that neither approach is completely successful. We expected
that the “rational design” group would manage to solve the problem at hand, but only
“on the drawing board.” In their plan, they would pay no attention to the actors whose
cooperation was needed to carry this plan through, and thus their proposed solution would
be completely rational but politically unfeasible. Likewise, we expected that the “political
negotiation” group would manage to solve the problem, but that their solution would be
“negotiated nonsense,” that is, agreement on a plan that some of the parties involved
would profit from, but that would contribute little to solving the actual problem.
During the debriefing, we would let the student groups first reflect on the differences
in process and outcome without expressing any normative judgment. This, we hoped,
would stimulate what Kolb (1984) calls “abstract conceptualization” of the character-
istics of the policy-making approaches. We would then ask them to name disadvantages
of each other’s approach, and help them reach the conclusion that both approaches need
to be combined in any policy-making process; we wanted them to learn that “puzzling”
and “powering” are both indispensable for effective policy making.

Research Objective
While as educators we were considering the use of a role-playing exercise to enhance
our teaching, as researchers we wanted to know what the effectiveness of our efforts
would be. Being in a similar educator/researcher position in the field of political science,
McCarthy and Anderson (2000, p. 281) found that research examining the actual effec-
tiveness of active learning relative to traditional teaching formats is rare and has pro-
duced ambiguous results. Likewise, Chasek (2005) sees a need for quantifiable ways “to
measure the impact that simulations have on the students and whether or not they accom-
plish their objectives” (p. 18). Although several studies show that such quantitative
impact measurement is possible, Gosen and Washbush (2004) show the need for more
rigor in this respect. The recent study by Druckman and Ebner (2008) shows that such
rigorous experiments are feasible and effective. We therefore tried to set up the role-
playing exercise in a way that would allow us to measure in detail its contribution to our
teaching objective.
As explained in the previous section, the teaching objective was to make students
see the difference between two views on policy making: policy making as a process
of rational design and policy making as a process of political negotiation. The general
hypothesis tested in our experiment is that after the role-playing exercise, the students’
beliefs about these two views will be more consistent with those summarized in Table 1
than before the exercise. To test this hypothesis, a first group of students was tested for
their understanding of the different characteristics of policy making–as-rational-design
and policy making–as-political-negotiation at four moments during the semester:

1. at the start of the course


2. immediately before the game session halfway through the course
3. immediately after this game session
4. 3 months after the game session

A control group of students (who did not do the role-playing exercise) was tested
twice: at the beginning of the course and 9 weeks later at the final examination.

Outline of This Article


Having introduced our educational ambitions and expectations, the remainder of this
article is split into three parts. First, we describe the content of the role-playing game
(i.e., the policy issue and political context within which it has to be resolved) and how
it was played and organized. We then describe the experimental design in more detail
and present the results. We conclude the article with a discussion on the learning out-
comes of our game.

The DEBT SETTLEMENT GAME


In this section, we hope to summarize the role-playing exercise in such a way that the
reader can envision it. Those who are interested in playing it can contact us to obtain
the complete set of game materials.

Game Setting
The game is loosely based on a Dutch policy evaluation report on debt restructuring, Belofte maakt schuld (“promise is debt”), by the Rotterdam Audit Office (Rekenkamer Rotterdam, 2001). It is situated in the city of Rotterdam, an important economic center
with half a million inhabitants. Although business is thriving and the living conditions
of the average citizen are quite good, 30,000 households in Rotterdam have problem-
atic debts. A debt becomes “problematic” when it mounts so high that the debtor’s
income is no longer sufficient to settle it. Most problematic debtors in Rotterdam earn
minimum wages or even less, as they are either on welfare, or have low-paid jobs. The
average debt is about 10,000 euros. Creditors are usually the tax authorities, the welfare
office, the housing corporations, the energy companies, telecom service providers, and
mail order companies, although debts can also be caused by unpaid fines.
The rectangles in Figure 1 depict the typical phases in a debt settlement process.
After a first intake interview, the debt is reviewed by an employee of the municipal
finance company, which is a noncommercial bank. Some cases are beyond redemption,

[Figure 1. Debt settlement and causes of problematic debts: a diagram linking the phases of the debt settlement process (intake, review of debt case, clean slate, relapse, unsolvable) to common causes of problematic debts (inability to budget, temptations, divorce, loss of employment, psychosocial problems)]

but usually a refinancing scheme can be devised that will eventually allow the client to
pay the debt while leaving sufficient means for subsistence. In such a scheme, all the
client’s debts are transferred to the municipal finance company. The client then only
pays to this bank, but less than the amount owed to the original creditors, as the finance
company negotiates with these creditors for remission of part of the debt.
The ovals in Figure 1 depict common causes of problematic debts. Most overspend-
ing is caused by the debtor’s inability to keep a housekeeping book or to withstand
temptations. Debts can also develop suddenly as the result of, for instance, a divorce
or the loss of employment. Finally, debts can have psychosocial causes. As the diagram
shows, these factors can cause a relapse after the debtor’s problems have been solved.

Actor Role Descriptions


Five main actors are involved in the restructuring of debts:

1. The responsible alderman has a goal to rid Rotterdam of problematic debtors.
2. The municipal finance company can restructure problematic debts.
3. The department of economic affairs of Rotterdam can create subsidized jobs.
4. The welfare office provides welfare, can supply supplementary benefits,
and inspects those on welfare periodically to prevent fraud.
5. The social workers’ organization offers courses in household budgeting
skills and provides psychosocial aid.

The game is played in two different modes: the "rational design" mode and the "political negotiation" mode. In the "rational design" mode, players assume the roles of a senior policy analyst, who is to advise the alderman, and of four officials who have expertise in the areas of the respective stakeholders. Each player receives role-specific information:

1. The senior policy analyst has been hired by the alderman to design a policy
that will solve the present 30,000 problematic debts, and prevent another
20,000 from developing. The available budget is limited, but can be stretched
somewhat.
2. The municipal finance company has to give a credit of 2,000 euros on average
to all problematic debtors, of which 20% will never pay it off. The intake and
subsequent process of coaching debtors costs another 3,000 euros per client.
3. The Department for Economic Affairs estimates that it can create between
3,000 and 15,000 subsidized jobs for people who are presently unemployed.
Such jobs are known to prevent new problematic debts, and to help debtors
permanently solve their problem. The cost of finding employment (200 euros
per job) is to be paid by the welfare office, and the costs of subsidizing the
wages (15,000 euros per job) is paid by national government.
4. The welfare office knows that more supplementary benefits can prevent those on welfare from running into problematic debts. Every percentage point increase in supplementary benefits—costing 7,500 euros—will prevent 25 new debts. Introduction of a special discount card for minimum-wage earners will also help in preventing 4,000 people from running into debts, as will periodical inspection of earnings and spending. Inspection once a year means 625 fewer new debtors, while twice-yearly inspections increase this number to 1,250. Half of the necessary increase in supplementary benefits is to be funded by the welfare office itself; the other half will be provided by the national government. Administrative expenses for the special discount cards for minimum-wage earners will be 10 euros per card, but the total amount of discounts, 350 euros, will be paid by third parties. Periodical inspection of earnings and spending does not increase costs.
5. The social workers' organization knows that individual budgeting training can prevent 1,250 new problematic debt cases from arising. Budgeting training in schools can prevent five new cases per class. All former debtors need budgeting training as well, to prevent them from immediately running into debts again. In 5,000 cases, psychosocial help is also needed for people undergoing debt restructuring. Individual budgeting training costs 40 euros per person, training for school classes costs 50 euros, and psychosocial help costs 100 euros per client, but the latter will be paid by another organization: the Regional Institute for Mental Welfare.

In the “political negotiation” mode, the players assume the role of the alderman and
four stakeholder delegates. In these roles, they have decision-making authority and the
responsibility that comes with it, and they are expected to negotiate about the measures
to be taken. In addition to the substantive information provided in the “rational design”
mode, the players receive information about their (personal) stakes and means to influ-
ence the decision-making process:

1. The responsible alderman is bent on solving the problematic debtor problem in Rotterdam, because failing to do so entails political death. Fortunately,
the alderman has a budget of 75 million euros, and access to another 210
million through political contacts at the national government. Moreover,
the alderman is the politician in charge of the managers of the municipal
finance company, the welfare office and the department of economic
affairs, and has the authority to fire them when they fail to perform. The
social workers are not managed by the alderman directly, but their organi-
zation depends heavily on a subsidy authorized by the alderman.
2. By taking over their clients’ debts, the municipal finance company takes a
large financial risk. The manager of the finance company fears bankruptcy.
The company’s annual budget is 100 million euros, which will provide for
debt restructuring of 22,000 debtors. To restructure the 30,000 debtors the
alderman asks for, an additional 38 million euros is needed. Clearly,
the manager is caught between the devil and the deep blue sea: do what
the alderman wants and risk bankruptcy, or refuse and be fired. Because
many of the finance company’s clients are on welfare, the manager tries
to shift the problem by proposing that the welfare office perform inspec-
tions more regularly.
3. The manager of Rotterdam’s department of economic affairs needs the
money meant for finding employment and subsidizing wages for another
project: the construction of a waste disposal installation in the Rotterdam
port. Reckoning that the welfare office will be willing to pay as much as
400 euros for finding employment for each unemployed debtor, and trust-
ing that the alderman’s lobby with the national government will engender
30,000 euros in subsidy per job, the municipality can keep sufficient funds
for the waste disposal project.
4. The manager of the welfare office is asked for an increase in supplementary
benefits, the introduction of special discount cards, and periodical inspec-
tion of earnings and spending. However, the manager is not willing to take
any of these measures, especially not the extra inspections, as these will put
even more pressure on the already hard-pressed welfare officers. This pres-
sure could be relieved if 5 million euros were reallocated from the social
workers’ organization to the welfare office.
5. The manager of the social workers’ organization is prepared to organize
budgeting training and psychosocial help, but to keep a balanced budget,
1 million euros in extra subsidies is needed, and the welfare office will need
to pay between 60 and 75 euros for every client who is given budgeting
training.

Gaming Procedure
The role-playing exercise comprises four phases: a short introduction to the problem of
problematic debtors in Rotterdam, a briefing in which the students are told what is
expected from them during the exercise, two parallel games (one in the “rational
design” mode and one in the “political negotiation” mode), followed by a debriefing to
make sure that the intended effects of the game will really materialize. A group should
consist of 12 to 20 students to allow parallel games in two subgroups (of 6 to 10).
The game is organized according to a detailed step-by-step plan for each subgroup.
In the “political negotiation” subgroup, the instructor chooses one of the most asser-
tive students to play the role of alderman. The instructor then explains what the other
stakeholder roles are and assigns these, taking into account student preferences, by
handing out the cards with the role descriptions. The instructor makes sure that every-
body understands that these cards contain three types of information:

1. the effect(s) the use of a specific policy instrument will have on the debt situation
2. the costs involved when the policy instrument is used
3. the private goals a particular stakeholder strives for

The instructor then explains to the "political negotiation" subgroup that players have to keep the information on their cards secret from the others, and gives them time to read, meanwhile instructing the "rational design" subgroup. After the first subgroup has read the cards, the instructor explains that the subgroup will hold a meeting chaired by the alderman. During this meeting, the actors will together try to solve Rotterdam's problematic debtors' problem. During the game, the players are required to stick to their role cards; they should not invent information themselves. When they are through with their negotiations, they have to write down the policy they have agreed on to solve the problem of problematic debtors in the city of Rotterdam.
The briefing for the “rational design” subgroup is somewhat different. The group is
led by the student who plays the senior policy analyst. Their task is first to draw a cause–
effect diagram that links policy instruments to effects, and then to choose the extent to
which they should use these policy instruments in order to solve the problem. Players are
told to share information, not to withhold it, and the policy analyst is to start by making
an inventory of all available data. The “rational design” subgroup will end the game
10 minutes earlier than the “political negotiation” subgroup to use its debt resolution
policy as an input for a spreadsheet model (see Figure 2) that will calculate the expected
impact of the policy. When the output shows that the policy does not solve the problem,
the group can use the model for about 5 minutes to reach better results by trial and error.
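
To give a concrete impression of what such a model can look like, here is a minimal sketch in Python rather than a spreadsheet. The per-unit costs are taken from the role descriptions above, but the simple additive effect figures, the instrument names, and the compute_policy_impact function are our own illustrative assumptions; the actual model used in the game is more detailed.

    # Minimal sketch of a policy impact model in the spirit of the game's spreadsheet.
    # Per-unit costs follow the role descriptions; the per-unit effects are simplified,
    # since the role cards mostly give aggregate figures.

    # Each instrument: (cost per unit in euros, problematic debts solved or prevented per unit)
    INSTRUMENTS = {
        "debt_restructuring": (5_000, 1),           # 2,000 credit + 3,000 intake/coaching per debtor
        "subsidized_job": (200, 1),                 # 200 for finding employment; wage subsidy paid nationally
        "benefit_increase_pct_point": (7_500, 25),  # per percentage point of supplementary benefits
        "discount_card": (10, 1),                   # 10 euros administration per card
        "individual_budget_training": (40, 1),      # per (former or prospective) debtor
    }

    def compute_policy_impact(policy):
        """Return (total cost in euros, debts solved or prevented) for a chosen policy mix."""
        total_cost = 0
        total_effect = 0
        for instrument, units in policy.items():
            cost_per_unit, effect_per_unit = INSTRUMENTS[instrument]
            total_cost += cost_per_unit * units
            total_effect += effect_per_unit * units
        return total_cost, total_effect

    # Hypothetical "rational design" proposal: restructure all 30,000 current debts and
    # prevent new ones through jobs, benefits, and training.
    proposal = {
        "debt_restructuring": 30_000,
        "subsidized_job": 10_000,
        "benefit_increase_pct_point": 10,
        "individual_budget_training": 5_000,
    }
    cost, effect = compute_policy_impact(proposal)
    print(f"Estimated cost: {cost:,} euros; debts solved or prevented: {effect:,}")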

Debriefing
After the game is over, the instructor shows the rational group’s results and has the
political group put their results into the model. Then the students discuss the differ-
ences. The instructor makes sure that the following questions are addressed:

Figure 2. Spreadsheet model for policy impact assessment

1. Which differences catch the eye?


2. How can these be explained?
Note: at this point, the instructor makes sure that the seven items of interest (the role of knowledge and information, money, the public interest, individual or group interests, problem content, support, and feasibility) are all addressed.
3. Which of the two divisions of costs reached is the fairest?
4. What policy is the most realistic?
5. Which policy do you prefer?
6. What are the advantages and disadvantages of the two views on the policy-
making process? Can they exist separately?

Typical Gaming Performance


The actual playing time varied between 40 and 60 minutes, depending on the ease with
which the groups came to a policy. The students playing in the “rational design” mode
gathered in a corner of the classroom, where in general they quietly began drawing a
model and exchanging information. The speed with which this happened depended to
a large degree on who was chairing the “meeting.” In general, the expected knowledge
sharing and problem-solving behavior occurred. A few students apparently missed
the instruction to share the information they possessed, as they deliberately withheld
essential information. Some other students were so aware of the financial costs
of very effective instruments that they decided not to include these in the model—a
behavior we would rather expect in the “political negotiation” group.
Not surprisingly, the games played in the “political negotiation” mode were much
livelier. Here, too, the ease with which the game was played depended very much on
the personality of the student who played the role of the alderman. A few students
seemed unwilling to really engage in the political game. Some of these immediately
gave everything away. The “political negotiation” groups always ended up with a
solution that was supported by all players.
To debrief the students, the instructors had a checklist of items that had to be dis-
cussed. In all groups, all items were indeed covered, but the instructors took care not to put forward the black-and-white schema of Table 1. The purpose of the experiment was
to see whether students would gain this insight from the game, so a “drilling exercise”
as debriefing was to be avoided. The spreadsheet model was also shown and, judging
by the reactions of the students, it contributed much to the understanding of the rela-
tions included in the model. Whether the spreadsheet and the debriefing also con-
tributed to the understanding of the typical differences between both modes of policy
making is harder to assess on qualitative grounds.

Experimental Design
The general hypothesis tested in our experiment is that after having played the role-
playing game, the students have a better understanding of the difference between the
concepts of policy making–as-rational-design and policy making–as-political-negotiation.
Assessment of the students’ understanding at four different moments in time—(a) at the
beginning of the course, (b) just before the role-playing game, (c) immediately after the
debriefing, and (d) 3 months later—was meant to allow us to monitor the students’ learn-
ing over time.
We conducted the role-playing game in the tutorials of an introductory course in
public administration that is mandatory for all first-year bachelor’s students at the Fac-
ulty of Social Sciences of the Free University of Amsterdam. Because it is an introduc-
tory course in the first year, no specific prior knowledge of public administration theory
was required or expected. During the months of February and March in 2004, students
had seven tutorial sessions out of which they could miss two. The game was played in
the third session in February 2004. The participating students majored in various social-
scientific disciplines.
As the role-play was considered to be a functional part of the course, it was not
possible to intentionally create a control group of students who did not play the game.
So, to collect data on learning with a conventional teaching approach without role-
play, we assessed a group of students enrolled in the next year’s course (2005) on two

Table 2. Example of Questionnaire Questions Used

Policies that come about in an analytical way will least consider feasibility
Strongly disagree/disagree/neutral/agree/strongly agree Don’t know/no answer
Policies that come about in a political way will most consider feasibility
Strongly disagree/disagree/neutral/agree/strongly agree Don’t know/no answer

occasions: during the first meeting when students did not (yet) have prior knowledge
of the subject, and 9 weeks later, after they had studied the textbooks in preparation for
the examination.

Procedures
The students enrolled in the 2004 course were randomly placed in 12 different tutori-
als, each consisting of approximately 20 participants. These groups met each week for
2-hour tutorial sessions that were organized to complement the plenary lectures. Assess-
ments of the students’ understanding of the two policy-making approaches were made
during the first session, twice during the third session (immediately before and after the
role-playing exercise) and 2 months after the first exam (2 weeks after the resit exami-
nation) during a plenary lecture for another course. The 2005 control group was assessed
twice: at the end of the first plenary lecture and 9 weeks later on the day of the first exam.
Assessment questionnaire. The questionnaire started with general questions about the
students’ major field of study, whether they had followed any other course in higher
education before (and if so, on what subject and at what level) and whether they already
had studied some of the prescribed literature. To allow us to trace learning effects at
the individual level, students were asked to specify their student ID number. To test the
students’ knowledge on the difference between the “rational design” and “political
negotiation” approaches to policy making, we asked two closed-form questions about
the relative importance of the seven items listed in Table 1: knowledge and information,
money, the public interest, individual or group interests, problem content, support, and
feasibility. The seven question pairs were similar to the example shown in Table 2. In
this example, students should have circled “(strongly) agree” for both statements. To
mitigate the risk of acquiescent response bias, we phrased the statements in the ques-
tion pairs in such a way that the correct answer pairs constituted a random sequence of
agree/disagree permutations. We considered using more than two questions per item
to increase reliability, but this would have made the questionnaire much longer and
impractical in view of the time constraints (two class hours, i.e., 105 minutes for the
entire exercise). The same questionnaire was used for all assessments, except that on
the second questionnaire we also asked specifically whether the students had attended
the previous week’s plenary lecture on policy making.
Role-playing exercise. The role-playing exercises strictly followed the gaming proce-
dure described in the previous section, observing the following time schedule: 10 min-
utes for the ex ante assessment, 20 minutes for explaining the policy issue of problematic
Table 3. Overview of Completed Questionnaires in 2004

Hand-in Patterna No. of Studentsb

1 2 3 4 79
1 2 3 × 60
1 2 × 4 12
1 2 × × 8
1 × 3 4 6
1 × 3 × 6
1 × × 4 29
1 × × × 30
× 2 3 4 8
× 2 3 × 14
× 2 × 4 1
× 2 × × 3
× × 3 4 3
× × 3 × 1
× × × 4 0
× × × × 0
a. Digits refer to the questionnaire number (T1, . . ., T4), “×” indicates that the student did not complete
this questionnaire.
b. Data were obtained from 260 different students. T1 was completed by 230, T2 by 185, T3 by 177, and
T4 by 138 students.

debts and the setup of the game in two subgroups (rational design and political negotia-
tion), 45 minutes for role-play, 20 minutes for a plenary debriefing, and again 10 minutes
for the ex post assessment. The instructors did not give any feedback on the ques-
tionnaires, and the model answers (Table 1) were never presented. To keep students
from trying to seek out the correct answers on the questionnaire and learn them by heart
for the next assessment, the questionnaire explicitly stated that the student’s perfor-
mance would not affect their grade for the course. We realized that this might decrease
the students’ motivation to perform well, but considered it more important not to inter-
fere with the unprompted learning of students.
Data collection. In total, data was obtained from 260 students in 2004 and from 61
students in 2005. For the first group we would ideally have obtained four question-
naires (henceforth T1, . . ., T4) per student. However, because students were allowed
to be absent for two out of the seven tutorial sessions and because the role-playing
exercise was not mandatory, we obtained only 730 completed questionnaires (70%) in
2004. Table 3 gives a detailed account of how many students handed in which ques-
tionnaires. For the assessments of the control group we obtained two questionnaires
(henceforth C1 and C2) from all 61 students in the group.

Analysis Methods
Coding. The answers students marked on the 14 closed-form questions were coded
as correct (value = 1) when they (strongly) agreed with a statement that was consistent
with Table 1, or when they (strongly) disagreed with a statement that was contrary to
Table 1. Neutral answers and “don’t know/no answer” were coded as neutral (value = 0),
and all other answers were coded as wrong (value = -1). For each student, the sum of
the 14 scores was used as the overall performance score. This score has a potential
value range of -14 to 14, and is equal to the difference between the number of correct
answers and the number of wrong answers.
obtained with T1, . . ., T4 was stored in a single data set containing one row per student
(with blanks for those questionnaires a student did not complete). The data obtained
with C1 and C2 was stored likewise in a second data set.
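
As an illustration of this coding scheme, the sketch below maps one student's Likert answers onto {-1, 0, 1} item scores against an answer key and sums them into the overall performance score. Only the scoring rule is taken from the text; the function names and the example answer key are hypothetical.

    # Sketch of the scoring rule described above (not the authors' actual analysis script).
    AGREE = {"agree", "strongly agree"}
    NEUTRAL = {"neutral", "don't know", "no answer"}

    def score_answer(answer, correct_direction):
        """Return 1 (correct), 0 (neutral/no answer), or -1 (wrong) for a single question.

        correct_direction is "agree" when Table 1 implies the statement is true,
        and "disagree" when the statement contradicts Table 1.
        """
        answer = answer.lower()
        if answer in NEUTRAL:
            return 0
        agrees = answer in AGREE
        return 1 if agrees == (correct_direction == "agree") else -1

    def overall_score(answers, key):
        """Sum of the 14 item scores; the potential range is -14 to 14."""
        return sum(score_answer(a, k) for a, k in zip(answers, key))

    # Hypothetical answer key: correct directions alternate to counter acquiescence bias.
    key = ["agree", "disagree"] * 7
    answers = ["strongly agree", "neutral"] + ["agree"] * 12
    print(overall_score(answers, key))  # -> 1 for this example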
Checking for biases. We first verified whether the composition of the tutorial groups
was significantly affected by “no shows” for the game sessions. Comparison of the
data on students in the 12 tutorial groups that went through the role-playing exercise
revealed no systematic bias in gender, academic major, lecture attendance, textbook
study, or “questionnaire hand-in pattern” as in Table 3. For each questionnaire T1, . . .,
T4, nonparametric Wilcoxon two-sample tests were used to test for significant differ-
ences between the groups in their overall performance scores, but no such differences
were found.
Checking for consistency per item. The symmetry in Table 1 reflects that, according
to theory, the two approaches to policy making have opposite characteristics. When the
emphasis on an item (e.g., knowledge and information) is high in the “rational design”
approach, this emphasis is low in the “political negotiation” approach, and vice versa.
We expected that students would be (or become) aware of this symmetry, and that this
would be reflected by a strong (or increasing) correlation between the students’ scores
on the two questions related to the same item. We therefore calculated these correla-
tions for each questionnaire.
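
A minimal sketch of this consistency check, assuming the coded scores for one questionnaire are held in a pandas DataFrame with one column per variable (column names as in Table 5). The example data here is synthetic, and the article does not state which correlation coefficient was used, so Pearson correlation is assumed.

    # Sketch of the per-item consistency check: correlate the "rational design" and
    # "political negotiation" scores for the same item on one questionnaire.
    # The DataFrame layout and the random example data are assumptions.
    import numpy as np
    import pandas as pd

    ITEM_PAIRS = {
        "Knowledge and information": ("KIRD", "KIPN"),
        "Money": ("MRD", "MPN"),
        "Public interest": ("PIRD", "PIPN"),
        "Individual or group interest": ("IGIRD", "IGIPN"),
        "Problem content": ("PCRD", "PCPN"),
        "Support": ("SRD", "SPN"),
        "Feasibility": ("FRD", "FPN"),
    }

    # Synthetic stand-in for one questionnaire's coded scores (-1, 0, or 1 per variable).
    rng = np.random.default_rng(0)
    columns = [v for pair in ITEM_PAIRS.values() for v in pair]
    t3 = pd.DataFrame(rng.integers(-1, 2, size=(177, len(columns))), columns=columns)

    for item, (rd_var, pn_var) in ITEM_PAIRS.items():
        r = t3[rd_var].corr(t3[pn_var])  # Pearson correlation assumed
        print(f"{item}: r = {r:.2f}")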
Hypothesis testing. Our main hypothesis was that student performance would improve
after the role-playing exercise, or, in other words, that the students’ answers on ques-
tionnaire T3 would be better (more consistent with Table 1) than those on T1 and T2.
We also expected that after several months the students would still perform better than
before the role-playing game. As we did not obtain 260 completed questionnaires for
each assessment, we first used Wilcoxon two-sample tests to evaluate differences in
overall student performance between the four assessments in 2004, using all the data
obtained for each questionnaire. We used the same test to evaluate differences between
the 2004 group (with role-play) and the 2005 control group (no role-play). To evaluate
the differences in performance on each of the 14 closed-form questions, we performed
Wilcoxon matched-pairs signed-ranks tests, with each data pair pertaining to the same
individual.
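
Both kinds of tests can be illustrated with SciPy on synthetic data: a rank-sum test for the unpaired comparisons of overall scores between assessments or groups, and the Wilcoxon signed-ranks test for the matched-pairs comparisons of individual students' scores. The variable names and example data are assumptions; only the choice of tests follows the text.

    # Sketch of the two nonparametric tests used in the analysis, on synthetic data.
    import numpy as np
    from scipy.stats import ranksums, wilcoxon

    rng = np.random.default_rng(1)

    # Unpaired comparison, e.g. overall performance scores on T2 versus T3
    # (different sets of respondents may be present in each assessment).
    overall_t2 = rng.integers(-4, 11, size=185)
    overall_t3 = rng.integers(-8, 15, size=177)
    stat, p = ranksums(overall_t2, overall_t3)
    print(f"Wilcoxon two-sample (rank-sum) test: p = {p:.3f}")

    # Matched-pairs comparison, e.g. the same 161 students' scores on one variable
    # before (T2) and after (T3) the role-play; zero differences are dropped by default.
    before = rng.integers(-1, 2, size=161)
    after = rng.integers(-1, 2, size=161)
    stat, p = wilcoxon(before, after)
    print(f"Wilcoxon matched-pairs signed-ranks test: p = {p:.3f}")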

Results
Table 4 gives an overview of the students’ performance on the six questionnaires
(T1, . . ., T4 for the role-playing group, and C1 and C2 for the control group). In gen-
eral, we found the students’ performance disappointing. In the best case (questionnaire
T3, immediately after the game), the 177 students together gave 1,323 correct answers
Table 4. Overview of Student Performance, Based on All Collected Data

                                  Overall Performance Score      p Values Obtained With Wilcoxon Two-Sample Test

Questionnaire                n    Min    Max    Mean    SD       T1      T2      T3      T4      C1

Group 2004
T1: Beginning of course 230 -9 10 0.79 3.01
T2: Start of session 185 -4 10 1.35 3.13 .285
T3: After debriefing 177 -8 14 3.27 4.11 .000 .001
T4: 3 months later 138 -5 12 2.74 3.47 .000 .000 .316
Control group 2005
C1: Beginning of course 61 -6 12 0.97 3.32 .819 .651 .000 .001
C2: At final exam 61 -8 13 1.64 3.86 .081 .411 .012 .075 .232

(53%), 744 wrong answers (30%), and 411 neutral answers (17%), resulting in the
average overall performance score of 3.27 = (1323 - 744)/177. The low scores may be
explained by a lack of motivation to seriously answer the questions, but we think that
it is more likely that the students indeed have difficulty in developing a good sense for
the difference between the two approaches to policy making.
Students who attended the lecture on policy making and/or studied the textbook
before playing the game did not perform better. Likewise, there was no significant dif-
ference in performance between students who did well on the final exam for the course
and those who did poorly.
The general picture conveyed by the mean overall performance scores in Table 4
is a moderate (not significant, p ≤ .285) improvement during the first 2 weeks of the
course, followed by a much stronger (significant, p ≤ .001) improvement in overall
performance after the role-playing exercise in the third week. Three months later, the
learning effect seems to have dissipated somewhat, but this decrease is not significant
(p ≤ .316) and the students’ performance is still significantly higher than on the first two
assessments (p ≤ .000 for both differences).
Although the control group does show a slight improvement, its overall performance
score increasing from 0.97 at the start of the course (C1) to 1.64 at the examination
(C2), the observed difference is not statistically significant (p ≤ .232). Given the signifi-
cant increase for the role-playing group (p ≤ .001), this supports our hypothesis that the
game has a positive effect on student learning.
Obviously, the apparent success of the game as a teaching device deserved further
scrutiny. We first checked whether the students were consistent in their answers to ques-
tions pertaining to the same item. We expected that when students would assign high
importance to an item in the “rational design” approach to policy making, they would
assign low importance to that item in the “political negotiation” approach, and vice
versa, or assign neutral in both cases. However, Table 5 shows low correlation between
the variable pairs. Apparently, the students did not look for the symmetry that in Table 1
reflects the opposition between the two approaches.

Table 5. Correlation Between Performance Scores on Questions Pertaining to the Same Item

                                                 Correlation Between Variables Per Questionnaire

Item                             Variablesa      T1     T2     T3     T4     C1     C2

Knowledge and information KIRD, KIPN .06 .01 .05 .12 .24 .24
Money MRD, MPN .18 .08 .12 .21 .18 .12
Public interest PIRD, PIPN .25 .36 .50 .38 .24 .08
Individual or group interest IGIRD, IGIPN .12 .09 .19 .07 .00 .21
Problem content PCRD, PCPN .27 .07 .27 .27 .49 .53
Support SRD, SPN .09 .05 .41 .09 .30 .35
Feasibility FRD, FPN .30 .39 .46 .41 .28 .39
a.Variable names consist of the initials of the item (first column) and the approach (rational design or
political negotiation).

We then looked in detail at the change in test performance of the individual students
over time, going from T1 to T4. For each pair of questionnaires (Ti, Ti+1), we calculated, for each of the 14 variables and also for the total score, the mean and standard deviation of the difference (score on Ti+1 minus score on Ti) for the students who completed both question-
naires. As can be seen in Table 3, 159 students completed both T1 and T2, 161 students
completed both T2 and T3, 96 students completed both T3 and T4, and only 79 students
completed all four questionnaires. The statistical significance of the differences in perfor-
mance was evaluated using the nonparametric Wilcoxon matched-pairs signed-ranks test.
The results are presented in Table 6, the significant differences marked with asterisks.
The statistics in Table 6 show that the difference in performance is largest between T2
and T3. The comparison before/after the course (T4–T1) shows a significant difference
for 8 out of 14 variables (plus the total performance score), with two remarkable cases:
after playing the game, the negative mean differences for KIPN and SRD indicate that
students score significantly worse on the importance of knowledge and information in
the “political negotiation” approach and of support for the designed policy in the “ratio-
nal design” approach. The first effect is transient, the second effect persistent.
Figure 3 offers a graphic display of the average performance scores in Table 6. To
keep the diagram legible, the “rational design” variables and the “political negotiation”
variables have been plotted in separate graphs. The lines in Figure 3a show the develop-
ment in student performance going from T1 to T2 (effect of initial teaching and study),
from T2 to T3 (effect of the role play), from T3 to T4 (additional learning effect from
teaching and study after the role play), while the lines from C1 to C2 in Figure 3b show
the learning effect of the course as a whole for those students who did not participate in
the role-play. Significant changes (asterisks in Table 6) have been highlighted as dark
line segments.
The lack of correlation between variables that pertain to the same item, which we
noted earlier, is reflected in the differences in shape and the initial scores of correspond-
ing lines in the two graphs in Figure 3a.

Table 6. Differences in Test Performance, Based on Pairs of Individual Scores

            T2–T1 (n = 159)    T3–T2 (n = 161)    T4–T3 (n = 96)    T4–T1 (n = 79)    C2–C1 (n = 61)

Variable    Mean    SD         Mean    SD         Mean    SD        Mean    SD        Mean    SD

KIRD -0.04 0.56 0.06 0.61 -0.05 0.67 -0.09 0.54 0.10 0.81
KIPN 0.18* 0.82 -0.22* 0.99 0.41** 0.91 0.43** 0.90 0.40** 1.07
MRD -0.06 0.81 0.30** 0.96 -0.05 0.97 0.25* 0.93 0.30 1.09
MPN -0.06 0.84 0.09 0.98 -0.14 1.07 -0.20 1.00 0.20 1.18
PIRD 0.03 0.99 0.56** 0.93 -0.09 1.03 0.32** 0.91 0.23 1.04
PIPN 0.05 0.83 0.48** 0.96 -0.03 1.08 0.38** 0.91 0.03 1.25
IGIRD 0.01 0.96 -0.08 1.03 0.18 1.05 0.24* 1.11 -0.02 1.13
IGIPN 0.08 1.07 0.11 1.14 -0.31* 1.20 -0.24 1.21 -0.25 1.25
PCRD 0.17** 0.79 -0.02 0.75 -0.09 0.76 0.13 0.77 0.10 1.08
PCPN 0.11 0.87 0.04 1.06 0.23 1.10 0.53** 0.89 0.33* 1.04
SRD 0.01 0.82 -0.38** 0.99 -0.14 1.01 -0.49** 0.90 -0.18 1.15
SPN 0.00 0.99 0.19* 0.98 0.01 0.99 0.24* 1.03 -0.05 1.18
FRD -0.07 1.06 0.63** 1.13 -0.20 1.16 0.29* 1.03 -0.21 0.93
FPN -0.12 0.97 0.37** 1.04 -0.14 0.95 0.03 1.10 -0.31* 1.04
Overall 0.29 3.19 2.13** 4.48 -0.42 4.88 1.81 3.82 0.67 4.13
*p < .05 (for Wilcoxon matched-pairs signed-ranks test). **p < .01 (for Wilcoxon matched-pairs signed-
ranks test).

[Figure 3. Average student performance charted per item, in four panels: questions on "rational design" and on "political negotiation" for (a) the 2004 students participating in the role-playing exercise (T1 through T4) and (b) the 2005 students taught with the conventional approach (C1, C2). Lines are plotted per item (knowledge and information, money, public interest, individual or group interest, problem content, support, feasibility); dark line segments indicate statistically significant changes (p < 0.05).]
Note: The vertical axes (fraction of students with correct answer - fraction of students with wrong answer) have a natural scale range from -1 to 1.

Table 7. Comparison of Initial Performance of Role-Playing Group (T1) and Control Group (C1)

            T1 (n = 230)    T1A (n = 79)a    T1B (n = 151)b    C1 (n = 61)     p Valuesc

Variable    Mean   SD       Mean   SD        Mean   SD         Mean   SD       T1 ≠ C1   T1A ≠ C1   T1A ≠ T1B

KIRD 0.81 0.46 0.84 0.41 0.80 0.49 0.69 0.59 .269 .308 .890
KIPN -0.60 0.67 -0.51 0.71 -0.65 0.64 -0.33 0.79 .024 .222 .181
MRD 0.23 0.77 0.27 0.76 0.21 0.78 0.26 0.81 .688 .936 .606
MPN 0.59 0.65 0.62 0.61 0.58 0.68 -0.05 0.90 .000 .000 .853
PIRD -0.06 0.81 0.00 0.82 -0.09 0.80 -0.07 0.87 .941 .658 .438
PIPN -0.61 0.68 -0.58 0.71 -0.62 0.66 -0.03 0.95 .000 .002 .810
IGIRD 0.04 0.81 0.01 0.82 0.06 0.81 0.31 0.76 .031 .044 .700
IGIPN 0.20 0.89 0.20 0.88 0.20 0.90 0.16 0.92 .818 .852 .986
PCRD 0.56 0.68 0.53 0.68 0.57 0.69 0.25 0.81 .012 .056 .619
PCPN -0.45 0.73 -0.48 0.70 -0.44 0.74 -0.51 0.79 .423 .572 .800
SRD 0.43 0.70 0.43 0.71 0.43 0.70 0.15 0.81 .024 .056 .968
SPN -0.38 0.81 -0.37 0.82 -0.39 0.81 0.02 0.88 .003 .017 .852
FRD -0.16 0.80 -0.15 0.82 -0.16 0.80 -0.10 0.89 .723 .780 .970
FPN 0.19 0.82 0.32 0.79 0.12 0.82 0.21 0.88 .748 .592 .102
Overall 0.79 3.01 1.13 2.88 0.61 3.07 0.97 3.32 .819 .807 .382

a. T1A is the subset of T1 containing data only on those students who handed in all questionnaires (T1, . . ., T4).
b. T1B is the subset of T1 that is not contained in T1A.
c. Obtained with Wilcoxon two-sample test.

We compared the initial performance of the role-playing group (T1) and the control
group (C1) in detail. The data in Table 7 show significant differences for 7 of the 14
variables, but there is no apparent pattern—the mean scores for the control group C1
are sometimes higher and sometimes lower than those of the role-playing group T1. As
our comparison T4–T1 in Table 6 involved only the subgroup of 79 students who com-
pleted all four questionnaires, we also checked whether these students did any better on
the initial test. As can be seen in the last two columns of Table 7, the differences between
this subgroup of 79 students (T1A) and the other 151 respondents on the first question-
naire (T1B) are not statistically significant.

Discussion and Conclusion


Although we very much tried to design a properly controlled experiment—challenged
by the observation made by Gosen and Washbush (2004) that this tends to be a weakness
in experiential learning research—there are grounds to be cautious in our interpretation
of the results:

• Our measurement instrument was basic. For reasons we mentioned in the Experimental Design section (Procedures and Assessment Questionnaire subsections), the questions we posed to the students were mapped one-to-one to the seven aspects on which the "rational design" approach and the "political negotiation" approach to policy making differ.

• There was no strong incentive for students to give serious thought to the
questions, as the questionnaires were not a formal test that would affect their
grade for the course.
• We did not control for the testing effects. The Solomon four-group design
with repeated measures before and after as described by Druckman (2005,
pp. 61–62) would have allowed us to check whether filling out the second
questionnaire (just before the role-play) may have influenced the results on
the third questionnaire (immediately following the role-play).

Nonetheless, we still believe that it is justified to make the following observations.


Although the general performance of the students was disappointing, the role-
playing exercise does seem to have been effective as a teaching device. The scores
immediately after the game are significantly higher, while the low overall performance
scores and the pattern—consistent over all 12 tutorial groups—of more students giving
the wrong answer on the items “Knowledge and information in political negotiation”
and “Support in rational design” seem to rule out that the learning was caused by the
instructors feeding the right answers to the students. Moreover, except for these two
items, the findings for the control group confirm that a conventional teaching approach
is not as effective.
Unfortunately, we failed to ask students whether they had enacted the “rational
design” process or the “political negotiation” process. By consequence, we could not
verify our assumption that the learning effect would be produced by the confrontation
of experiences during the debriefing, irrespective of one’s role.
The detailed findings per item suggest that participating in the role-playing exer-
cise affects the students’ ideas about the “rational design” approach to policy making
in the following ways:

• Participation seems to confirm students in their initial belief that knowledge and information, and problem content (causes, policy instruments, and their effects) are important.
• Participation seems to increase the students’ understanding of the role of
money and public interest.
• Participation seems not to sway students in their opinion that individual inter-
ests play an important role when a policy is designed by apolitical experts.
This may be related to the observation we made in The Debt Settlement
Game section (Typical Gaming Performance subsection) that students in their
expert role did not choose costly instruments. Apparently, financial informa-
tion puts students on the wrong track and makes them behave as stakeholders,
more than as analysts.
• Participation seems to confuse students on the question whether a particular
approach will lead to a policy that is supported by all stakeholders. This
might be because seeing players clash and quarrel in the “political negotia-
tion” disconfirms the image of a supported policy, while the quiet, purposeful
activity in the "rational design" mode is misinterpreted as general support for the policy.
• Participation seems to make students more aware that a rationally designed
policy may not be politically feasible. We attribute this learning effect mostly
to the way the spreadsheet model confronts the students with who will have to
pay for the implementation of the policy.

The detailed findings per item suggest that participating in the role-playing exercise
affects the students’ ideas concerning the “political negotiation” approach to policy
making in the following ways:

• Participation seems to enhance the students’ view that knowledge and infor-
mation is very important also in a political context. We attribute this to the
fact that our questionnaire did not differentiate between different types and
sources of information. The (secret) information on the role cards used in the
“political negotiation” mode evidently played an important role in the game.
• Participation seems to make students realize that in a process of political policy
making, the public interest is secondary to individual or group interest.
• Participation seems to be only marginally effective in teaching the students
that political negotiation leads to a supported policy. Again, this may be due to
the clash and quarrel students experience in the game.
• Participation seems to increase the students’ awareness of the importance of
feasibility.

These interpretations are but tentative; many questions concerning the students’ learn-
ing process remain unanswered. One aspect to look into would be the initial beliefs of
students. The detailed comparison of the initial performance levels between the role-
playing group and the control group (see Table 7) suggests that initial beliefs can differ
from group to group.
Another aspect to consider is addressing the limitations of the exercise. The setup
we used relies on reflective observation and abstract conceptualization as the primary
learning mechanisms. Unlike the game used by Shellman and Turan (2003, p. 284),
ours does not support what Kolb (1984) calls “active experimentation,” as the “one
round” role-play does not allow students to engage in a second policy-making process.
Thus, Kolb’s experiential learning cycle is not closed, which impedes the action-to-
knowledge function of the game (Crookall & Thorngate, 2009).
Further research could focus on the relation between the setup of the role-playing
exercise, the preconceived images students have of policy making, and the learning that
takes place. Such research would involve more reflection on behavior, for example—
using the terminology of Argyris and Schön (1974)—by asking students to state their espoused theory of action first and, after the game, to reflect on their actual actions and the theory-in-use that guided them. This reflection would not only provide data for research
but also enhance the debriefing phase (Lederman, 1992; Pearson & Smith, 1986).
Such research may become feasible when the time pressures imposed by the educational context are less strong. After all, conducting research into the learning effects of role-playing games in the way we have done is a very time-consuming exercise. Judging by the reactions of students, the role-playing exercise is definitely more entertaining than our conventional teaching method. Needless to say, we had hoped for better learning effects, so as educators we feel slightly disappointed. Nevertheless, we see the experiment as successful, if only because it has raised more questions than it has answered. While the students may have learned only a little from the game, we as researchers have learned a lot.

Declaration of Conflicting Interests
The authors declared no conflicts of interest with respect to the authorship and/or publication
of this article.

Funding
The authors received no financial support for the research and/or authorship of this article.

References
Allison, G. T. (1971). Essence of decision: Explaining the Cuban Missile Crisis. Boston: Little,
Brown.
Argyris, C., & Schön, D. A. (1974). Theory in practice: Increasing professional effectiveness.
San Francisco: Jossey-Bass.
Axelrod, R. (Ed.). (1976). Structure of decision: The cognitive maps of political elites. Princeton,
NJ: Princeton University Press.
Bovens, M. A. P., ’t Hart, P., & van Twist, M. (2001). Openbaar bestuur: Beleid, organisatie en
politiek (Public administration: Policy, organization and politics) (6th ed.). Alphen aan den
Rijn, The Netherlands: Kluwer Academic.
Bredemeier, M. E., & Greenblat, C. S. (1981). The educational effectiveness of simulation games:
A synthesis of findings. Simulation & Gaming: An Interdisciplinary Journal, 12, 307-332.
Brock, K. L., & Cameron, B. J. (1999). Enlivening political science courses with Kolb’s learning
preference model. PS: Political Science and Politics, 32, 251-256.
Chasek, P. S. (2005). Power politics, diplomacy and role playing: Simulating the UN Security
Council’s response to terrorism. International Studies Perspectives, 6, 1-19.
Crookall, D., & Thorngate, W. (2009). Acting, knowing, learning, simulating, gaming. Simulation
& Gaming: An Interdisciplinary Journal, 40, 8-26.
Dempsey, J. V., Rasmussen, K., & Lucassen, B. (1996). The instructional gaming literature: Impli-
cations and 99 sources (Tech. Rep. 96-1). University of South Alabama, College of Education.
Druckman, D. (2005). Doing research: Methods of inquiry for conflict analysis. Thousand Oaks,
CA: Sage.
Druckman, D., & Ebner, N. (2008). Onstage or behind the scenes? Relative learning benefits
of simulation role-play and design. Simulation & Gaming: An Interdisciplinary Journal, 39,
465-497.
Frederking, B. (2005). Simulations and student learning. Journal of Political Science Education,
1, 385-393.
Gagné, R. M. (1965). The learning of concepts. School Review, 73, 187-196.
Gagné, R. M. (1985). The conditions of learning and theory of instruction (4th ed.). New York:
Holt, Rinehart & Winston.
Gosen, J., & Washbush, J. (2004). A review of scholarship on assessing experiential learning
effectiveness. Simulation & Gaming: An Interdisciplinary Journal, 35, 270-293.
Huff, A. S. (Ed.). (1990). Mapping strategic thought. New York: Wiley.
Kingdon, J. W. (1995). Agendas, alternatives and public policies (2nd ed.). New York:
HarperCollins.
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development.
Englewood Cliffs, NJ: Prentice Hall.
Lantis, J. S. (2004). Ethics and foreign policy: Structured debates for the international studies
classroom. International Studies Perspectives, 5, 117-133.
Lederman, L. C. (1992). Debriefing: Toward a systematic assessment of theory and practice.
Simulation & Gaming: An International Journal, 23, 145-159.
Lindblom, C. E. (1959). The science of muddling through. Public Administration Review, 19, 79-88.
MacRae, D., & Whittington, D. (1997). Expert advice for policy choice: Analysis and discourse.
Washington, DC: Georgetown University Press.
Mayer, I. S. (in press). The gaming of policy and the politics of gaming: A review. Simulation &
Gaming: An Interdisciplinary Journal.
McCarthy, J. P., & Anderson, L. (2000). Active learning techniques versus traditional teaching
styles: Two experiments from history and political science. Innovative Higher Education, 24,
279-294.
Miser, H. J., & Quade, E. S. (1988). Handbook of systems analysis: Craft issues and procedural
choices. Chichester, UK: Wiley.
Pearson, M., & Smith, D. (1986). Debriefing in experience-based learning. Simulation/Games
for Learning, 16(4), 155-172.
Randel, J. M., Morris, B. A., Wetzel, C. D., & Whitehill, B. V. (1992). The effectiveness of games
for educational purposes: A review of recent research. Simulation & Gaming: An Interdisci-
plinary Journal, 23, 261-276.
Rekenkamer Rotterdam (Rotterdam Audit Office). (2001). Belofte maakt schuld. De gemeentelijke
schuldhulpverlening (Promise is debt. The municipal debt settlement aid). Rotterdam, The Netherlands: Author.
Shaw, C. M. (2004). Using role-play scenarios in the IR classroom: An examination of exer-
cises on peacekeeping operations and foreign policy decision making. International Studies
Perspectives, 5, 1-22.
Shellman, S. M., & Turan, K. (2003). The Cyprus crisis: A multilateral bargaining simulation.
Simulation & Gaming: An Interdisciplinary Journal, 34, 281-291.
Thorngate, W., & Tavakoli, M. (2009). Simulation, rhetoric, and policy making. Simulation &
Gaming: An Interdisciplinary Journal, 40, 513-527.
Van Sickle, R. (1986). A quantitative review of research on instructional simulation gaming:
A twenty-year perspective. Theory and Research in Social Education, 14, 245-264.

Bios

Pieter Bots is an associate professor of policy analysis at Delft University of Technology. He
likes to apply gaming/simulation concepts in both research and teaching. His interests include the
analysis and design of policy and decision-making processes, actor-oriented methods, and tools
for policy analysis and decision support. His present research focuses on conceptual modeling
techniques (notably ontologies) to represent and compare theories and models concerning
multi-actor systems. Contact: p.w.g.bots@tudelft.nl.

Pieter Wagenaar is an assistant professor at the Vrije Universiteit Amsterdam, and coordinator
of the research project Under Construction, a cooperation between the Vrije Universiteit
Amsterdam and the Universiteit Leiden, which started in 2006. He has been using games and
simulations in his teaching for many years. He has published on a range of topics concerning the
informatization of public administration, the history of public administration, and now, finally,
gaming. Contact: fp.wagenaar@fsw.vu.nl.

Rolf Willemse is currently performance audit manager at the Audit Office of the city of Rotterdam.
He previously worked as an assistant professor in public administration at the Vrije
Universiteit Amsterdam. He has written several scholarly publications on local government,
decentralization, and Europeanization, and numerous professional publications on auditing and
public control. Contact: r.willemse@rekenkamer.rotterdam.nl.
