‘Policy analysis is a process of multidisciplinary inquiry aiming at the creation, critical assessment, and communication of policy-relevant knowledge … to solve practical problems … Its practitioners are free to choose among a range of scientific methods, qualitative as well as quantitative, and philosophies of science, so long as these yield reliable knowledge’ (Dunn, 2017: 2-3).
Dunn (2017: 4) describes policy analysis as pragmatic and eclectic. It involves synthesising policy-relevant (‘usable’) knowledge, and combining it with experience and ‘practical wisdom’, to help solve problems with analysis that people can trust.
This exercise is ‘descriptive’, to define problems, and ‘normative’, to decide how the world should be and how solutions get us there (as opposed to policy studies/research, which seeks primarily to explain what happens).

Dunn contrasts the ‘art and craft’ of policy analysts with other practices, including:
1. The idea of ‘best practice’ characterised by 5-step plans.
 In practice, analysis is influenced by: the cognitive shortcuts that analysts use to gather information; the role they perform in an organisation; the time constraints and incentive structures in organisations and political systems; the expectations and standards of their profession; and the need to work with teams consisting of many professions/disciplines (2017: 15-6).
 The cost (in time and resources) of using multiple research and analytical methods is high, and such work is highly constrained in political environments (2017: 17-8; compare with Lindblom).
2. The too-narrow idea of evidence-based policymaking
 The naïve attachment to ‘facts speak for themselves’ or ‘knowledge for its own sake’ undermines a researcher’s ability to adapt well to the evidence-demands of policymakers (2017: 68; compare with Why don’t policymakers listen to your evidence?).
Producing ‘policy-relevant knowledge’ requires us to ask five questions before (Qs 1-3) and after (Qs 4-5) policy intervention (2017: 5-7; 54-6):
1. What is the policy problem to be solved?
 For example, identify its severity, urgency, cause, and our ability to solve it.
 Don’t define the wrong problem, such as by oversimplifying or defining it with
insufficient knowledge.
 Key aspects of problems include ‘interdependency’ (each problem is inseparable from a host of others, and all problems may be greater than the sum of their parts), ‘subjectivity’ and ‘artificiality’ (people define problems), ‘instability’ (problems change rather than being solved), and ‘hierarchy’ (which level or type of government is responsible) (2017: 70; 75).
 Problems vary in terms of how many relevant policymakers are involved, how many solutions are on the agenda, the level of value conflict, and the unpredictability of outcomes (high levels suggest ‘wicked’ problems, and low levels ‘tame’) (2017: 75).
 ‘Problem-structuring methods’ are crucial, to: compare ways to define or interpret a problem, and guard against making too many assumptions about its nature and cause; produce models of cause-and-effect; and make a problem seem solvable, such as by placing boundaries on its coverage. These methods foster creativity, which is useful when issues seem new and ambiguous, or new solutions are in demand (2017: 54; 69; 77; 81-107).
 Problem definition draws on evidence, but is primarily the exercise of power to reduce
ambiguity through argumentation, such as when defining poverty as the fault of the
poor, the elite, the government, or social structures (2017: 79; see Stone).
2. What effect will each potential policy solution have?
 Many ‘forecasting’ methods can help provide ‘plausible’ predictions about the future effects of current/alternative policies (Chapter 4 contains a huge number of methods).
 ‘Creativity, insight, and the use of tacit knowledge’ may also be helpful (2017: 55).
 However, even the most effective expert- or theory-based methods to extrapolate from the past are flawed, and it is important to communicate levels of uncertainty (2017: 118-23; see Spiegelhalter).
3. Which solutions should we choose, and why?
 ‘Prescription’ methods help provide a consistent way to compare each potential
solution, in terms of its feasibility and predicted outcome, rather than decide too
quickly that one is superior (2017: 55; 190-2; 220-42).
 They help to combine (a) an estimate of each policy alternative’s outcome with (b) a
normative assessment.
 Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/individual solutions (2017: 6; 205; see Weimer & Vining, Meltzer & Schwartz, and Stone on the meaning of these values).
 For example, cost benefit analysis (CBA) is an established – but problematic – economics method based on finding one metric – such as a $ value – to predict and compare outcomes (2017: 209-17; compare Weimer & Vining, Meltzer & Schwartz, and Stone; a minimal worked comparison is sketched after this list).
 Cost effectiveness analysis uses a $ value for costs, but compares it with benefits measured in other units (such as outputs per $) (2017: 217-9).
 Although such methods help us combine information and values to compare choices, note the inescapable role of power to decide whose values (and which outcomes, affecting whom) matter (2017: 204).
4. What were the policy outcomes?
 ‘Monitoring’ methods help identify, for example: levels of compliance with regulations; whether resources and services reach ‘target groups’; whether money is spent correctly (such as on clearly defined ‘inputs’ such as public sector wages); and whether we can make a causal link between the policy inputs/activities/outputs and outcomes (2017: 56; 251-5).
 Monitoring is crucial because it is so difficult to predict policy success, and unintended
consequences are almost inevitable (2017: 250).
 However, the data gathered are usually no more than proxy indicators of outcomes. Further, the choice of indicators reflects what is available, ‘particular social values’, and ‘the political biases of analysts’ (2017: 262).
 The idea of ‘evidence based policy’ is linked strongly to the use of experiments and
systematic review to identify causality (2017: 273-6; compare with trial-and-error
learning in Gigerenzer, complexity theory, and Lindblom).
5. Did the policy solution work as intended? Did it improve policy outcomes?
 Although we frame policy interventions as ‘solutions’, few problems are ‘solved’.
Instead, try to measure the outcomes and the contribution of your solution, and note
that evaluations of success and ‘improvement’ are contested (2017: 57; 332-41).  
 Policy evaluation is not an objective process in which we can separate facts from values.
 Rather, values and beliefs are part of the criteria we use to gauge success (and
even their meaning is contested – 2017: 322-32).
 We can gather facts about the policy process, and the impacts of policy on people, but
this information has little meaning until we decide whose experiences matter.
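To make the contrast between the comparison methods under question 3 concrete, here is a minimal sketch using entirely hypothetical options, figures, and discount rate (not an example from Dunn): CBA reduces each option to a single net present value in $, while cost-effectiveness analysis keeps the benefit in its own units.

```python
# Illustrative sketch only: hypothetical options, figures, and discount rate.
# (a) Cost-benefit analysis: everything monetised and discounted to one $ figure.
# (b) Cost-effectiveness analysis: $ cost per unit of a non-monetary outcome.

def npv(cashflows, rate):
    """Net present value of (year, amount) cashflows at a given discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

RATE = 0.03  # the choice of discount rate is itself a value judgement

# Hypothetical option A: costly, with larger monetised benefits later.
a_costs = [(0, -100_000), (1, -20_000)]
a_benefits = [(1, 60_000), (2, 90_000)]
# Hypothetical option B: cheaper, with smaller benefits.
b_costs = [(0, -50_000)]
b_benefits = [(1, 30_000), (2, 40_000)]

# (a) CBA: one $ metric per option.
cba_a = npv(a_costs + a_benefits, RATE)
cba_b = npv(b_costs + b_benefits, RATE)

# (b) CEA: cost per household reached, leaving the benefit in its own units.
households_a, households_b = 1_200, 700
cea_a = -npv(a_costs, RATE) / households_a
cea_b = -npv(b_costs, RATE) / households_b

print(f"CBA (net present value): A = ${cba_a:,.0f}, B = ${cba_b:,.0f}")
print(f"CEA ($ per household):   A = ${cea_a:,.2f}, B = ${cea_b:,.2f}")
```

With these made-up numbers, option A ‘wins’ on net present value while option B ‘wins’ on cost per household: the two methods can rank the same options differently, which illustrates Dunn’s point that choosing the metric (and whose outcomes count) is itself an exercise of power.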
Overall, the idea of ‘ex ante’ (forecasting) policy analysis is a little misleading, since
policymaking is continuous, and evaluations of past choices inform current choices.

Policy analysis methods are ‘interdependent’, and ‘knowledge transformations’ describes the
impact of knowledge regarding one question on the other four (2017: 7-13; contrast
with Meltzer & Schwartz, Thissen & Walker).
Developing arguments and communicating effectively
Dunn (2017: 19-21; 348-54; 392) argues that ‘policy argumentation’ and the ‘communication of policy-relevant knowledge’ are central to policymaking (see Chapter 9 and Appendices 1-4 for advice on how to write briefs, memos, and executive summaries and prepare oral testimony).
He identifies seven elements of a ‘policy argument’ (2017: 19-21; 348-54), including:

 The claim itself, such as a description (size, cause) or evaluation (importance, urgency) of a problem, and prescription of a solution
 The things that support it (including reasoning, knowledge, authority)
 Incorporating the things that could undermine it (including any ‘qualifier’, the
communication of uncertainty about current knowledge, and counter-arguments).
The key stages of communication (2017: 392-7; 405; 432) include:

1. ‘Analysis’, focusing on ‘technical quality’ (of the information and methods used to
gather it), meeting client expectations, challenging the ‘status quo’, albeit while dealing
with ‘political and organizational constraints’ and suggesting something that can
actually be done.
2. ‘Documentation’, focusing on synthesising information from many sources, organising it
into a coherent argument, translating from jargon or a technical language, simplifying,
summarising, and producing user-friendly visuals.
3. ‘Utilization’, by making sure that (a) communications are tailored to the audience (its
size, existing knowledge of policy and methods, attitude to analysts, and openness to
challenge), and (b) the process is ‘interactive’ to help analysts and their audiences learn
from each other.
Policy analysis and policy theory: systems thinking, evidence based policymaking, and policy
cycles
Dunn (2017: 31-40) situates this discussion within a brief history of policy analysis, which
culminated in new ways to express old ambitions, such as to:

1. Use ‘systems thinking’, to understand the interdependence between many elements in complex policymaking systems (see also socio-technical and socio-ecological systems).
 Note the huge difference between (a) policy analysis discussions of ‘systems thinking’
built on the hope that if we can understand them we can direct them, and (b) policy
theory discussions that emphasise ‘emergence’ in the absence of central control (and
presence of multi-centric policymaking).
 Also note that Dunn (2017: 73) describes policy problems – rather than policymaking –
as complex systems. I’ll write another post (short, I promise) on the many different (and
confusing) ways to use the language of complexity.
2. Promote ‘evidence based policy’, as the new way to describe an old desire for
‘technocratic’ policymaking that accentuates scientific evidence and downplays politics
and values (see also 2017: 60-4).
In that context, see Dunn’s (2017: 47-52) discussion of comprehensive versus bounded rationality:
 Note the idea of ‘erotetic rationality’ in which people deal with their lack of knowledge
of a complex world by giving up on the idea of certainty (accepting their ‘ignorance’), in
favour of a continuous process of ‘questioning and answering’.
 This approach is a pragmatic response to the lack of order and predictability of policymaking systems, which limits the effectiveness of a rigid attachment to ‘rational’ 5-step policy analyses (compare with Meltzer & Schwartz).
Dunn (2017: 41-7) also provides an unusually useful discussion of the policy cycle. Rather than
seeing it as a mythical series of orderly stages, Dunn highlights:
1. Lasswell’s original discussion of policymaking functions (or functional requirements of
policy analysis, not actual stages to observe), including: ‘intelligence’ (gathering
knowledge), ‘promotion’ (persuasion and argumentation while defining problems),
‘prescription’, ‘invocation’ and ‘application’ (to use authority to make sure that policy is
made and carried out), and ‘appraisal’ (2017: 42-3).
2. The constant interaction between all notional ‘stages’ rather than a linear process:
attention to a policy problem fluctuates, actors propose and adopt solutions
continuously, actors are making policy (and feeding back on its success) as they
implement, evaluation (of policy success) is not a single-shot document, and previous
policies set the agenda for new policy (2017: 44-5).
