Policy and Society 29 (2010) 113–122
doi:10.1016/j.polsoc.2010.03.009

Does regulatory impact assessment lead to better policy?


Peter Carroll
University of Tasmania, Private Bag 86, Hobart, Tasmania 7001, Australia. E-mail address: Peter.Carroll@utas.edu.au

Abstract
Regulatory impact assessment systems are designed to bring evidence to bear on the development of new or modified regulations in order to improve their quality. Over the last 30 years they have become increasingly common in OECD countries, and regulatory proposals put forward by government agencies are now required to have a firm evidence base that clearly supports the new or modified regulation. In practice, however, the performance of impact assessment systems in all jurisdictions has been very limited, with proposed regulation continuing to offer little in the way of a rigorous and convincing evidence base. This paper explores the reasons for this poor performance, including the varying levels of ministerial and head of department/agency commitment, poor integration of impact assessment systems with existing policy development processes, variable capacity for rigorous, evidence-based policy in departments, and a lack of data on which evidence-based policy can be developed.
© 2010 Policy and Society Associates (APSS). Elsevier Ltd. All rights reserved.

1. Introduction

Regulatory impact assessment (RIA) systems are intended to improve the quality of new and modified regulation, in
the hope that regulation will achieve its goals with the minimum of adverse impacts, especially in relation to business and economic
performance. Over the last 30 years they have become increasingly common in OECD countries and, more recently,
the EU has adopted and required their use. One of the fundamental requirements of all of the impact assessment
systems in current use is that the regulatory proposals put forward by government departments and agencies have a
firm evidence base that clearly supports the new or modified regulation being put forward for adoption. However, in all
jurisdictions actual practice has shown that the performance of the impact assessment systems has been limited and
variable, with proposed regulation often continuing to offer little in the way of a rigorous and convincing evidence base
to support the proposals.
The aim of this paper is to examine the performance of RIA in relation to evidence-based policy and to offer
explanations as to why impact assessment systems have been of only limited success in embedding and encouraging
the development of evidence-based policy. It argues that this limited performance can be explained by a number of
factors, especially:

 Varying degrees of suspicion as to the motives behind the introduction of RIAs, notably a common feeling that they
were based on a less than convincing ideological commitment to market-based policy solutions.
 Administrative resistance to the ‘external’ imposition of resource-intensive RIA procedures.
 A varying but often inadequate capacity for rigorous, evidence-based policy in many departments and agencies,
as often indicated by regulatory proposals that: one, lack supporting data; two, lack appropriate data (especially
non-economic data and data related to social capital); three, reflect a lack of relevant technical expertise, especially
in relation to cost-benefit analysis; and four, persistently fail to learn from regulatory experience.
 A lack of persistent, high-level commitment to RIA from ministers and senior public servants.
 The lack, interestingly, of strong, evidence-based argument to support the case for the introduction and use of RIA,
especially systematic evidence as to their impact on the quality of proposals and, in turn, their socio-economic
outcomes.

The paper is divided into two major sections. The first briefly defines what is meant by an RIA, then assesses the
validity of the proposition that RIAs place an emphasis on the need for a strong evidence base to improve the quality
of new or modified regulation by examining the motivations of the designers of RIA systems. The second examines
the actual performance of RIA systems in relation to evidence, finding that it is distinctly limited, with proposed
regulation often continuing to offer little in the way of a rigorous and convincing evidence base to support the
proposal.

2. What is RIA and how does it relate to evidence-based policy?

This section has two aims. The first is to outline what is meant by RIA systems and their underlying motives. It is
argued that while the desire for an appropriate evidence base is clear, other motives are also apparent, motives that
suggest that the commitment to evidence-based policy is of questionable strength and restricted focus. The second aim
is to briefly outline the relationship between RIA and the evidence-based policy movement. It is argued that the
very existence of explicit government support for evidence-based policy, as propounded by the British
Government, for example, indicates either an ignorance of, or, more likely, a tacit recognition of, the limitations of
the RIA systems that have been introduced.

2.1. RIA systems and the underlying motives

RIA is a term used in two distinct senses:

 One, where it is used to describe a variety of related regulatory systems and associated processes employed by
governments to improve the quality of policy making and the resulting regulations or policy (see, for example,
Australian Government, 2007; DBERR, 2007; OECD, 2005; OMB, 2003; The Evaluation Partnership Limited,
2007).
 Two, where, less commonly, it is used to refer to government policy stances with regard to the use of RIA, their RIA
policy, in so far as it exists.

In the first sense RIA is conceived of as a tool for the improvement of policy. In particular, it is a term that is used to
describe impact assessment systems that conform, at least in theory, to the principles espoused by the OECD’s Council
Recommendation for improving the quality of government regulation (OECD, 1995, pp. 9–10). The precise role,
coverage and processes of any RIA varies but they typically include a mandated process for analysing proposed or
existing regulation with a number of key elements. In summary, the elements constitute a rational, prescriptive,
process-based model of policy making familiar to policy analysts, laying out the major tasks that are to be undertaken
at each stage of the regulation making process, as follows:

 The provision of a description of the problem or issues which give rise to the need for action and the broad goal of the
proposed regulation.
 A specification of the desired objective(s).
 A description of the options (regulatory and/or non-regulatory) that may constitute viable means for achieving the
desired objective(s).
 An assessment of the potential impact of the regulation, including its costs and benefits.
 A consultation statement.
 A recommended option, with an explanation of why it was selected and others were not; and
 A detailed strategy for the implementation and review of the preferred option.
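Taken together, these elements amount to a completeness checklist for a regulatory proposal. As a purely illustrative sketch (the structure and field names below are illustrative shorthand for the elements listed above, not any government's official template), the mandated elements might be represented and checked as follows:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class RegulatoryImpactStatement:
    """One field per mandated element; None marks an element not yet addressed."""
    problem_description: Optional[str] = None      # nature and magnitude of the problem
    objectives: Optional[str] = None               # the desired objective(s)
    options: Optional[str] = None                  # regulatory and/or non-regulatory alternatives
    impact_assessment: Optional[str] = None        # costs and benefits of each option
    consultation_statement: Optional[str] = None
    recommended_option: Optional[str] = None       # with reasons for selection and rejection
    implementation_and_review: Optional[str] = None

def missing_elements(ris: RegulatoryImpactStatement) -> list[str]:
    """Return the mandated elements this draft RIS has not yet completed."""
    return [f.name for f in fields(ris) if getattr(ris, f.name) is None]

draft = RegulatoryImpactStatement(problem_description="Rising compliance costs for small exporters")
print(missing_elements(draft))  # six of the seven elements are still outstanding
```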

In particular, all RIA systems note the need for appropriate evidence to support proposed regulations, with the
OECD’s 1995 recommendation, for example, indicating that
The problem to be solved should be precisely stated, giving clear evidence of its nature and magnitude . . .
Government intervention should be based on clear evidence that a problem exists and that government action is
justified (OECD, 2005, p. 14).
Similarly, the EU’s impact assessment system has as one of its major objectives that of
Facilitating a more systematic, coherent, analytical, open, and evidence-based approach to policy design (The
Evaluation Partnership Limited, 2007, p. 3).
The USA’s Office of Management and Budget sees impact assessment as
. . . a tool regulatory agencies use to anticipate and evaluate the likely consequences of rules. It provides a formal
way of organizing the evidence on the key effects – good and bad – of the various alternatives that should be
considered in developing regulations. The motivation is to (1) learn if the benefits of an action are likely to justify
the costs or (2) discover which of various possible alternatives would be the most cost-effective (Office of
Management and Budget, 2003, pp. 1–2).
The British Government sees it as
. . . a tool to enable the Government to weigh and present the relevant evidence on the positive and negative
effects of such interventions, including by reviewing the impact of policies after they have been implemented
(Department of Business, Enterprise and Regulatory Reform, 2007).
Hence, in the first sense of the term, RIA is clearly described by varying governments as a normative tool,
instrument or policy means designed, as part of a mandated set of linked processes, to improve policy by attempting to
ensure that it is backed by appropriate evidence.
The second sense of the term, where it is used to describe a government’s stance with regard to the use of RIA, or
RIA policy, is less often discussed, but is also of importance in regard to evidence-based policy. It is important because
it highlights a rather different set of motives, motives that make RIA’s insistence on the need for a strong evidence base
for policy proposals less convincing and somewhat ambiguous, including aims such as:

 Ensuring control in decision-making processes by specifying not only a set of rules as to the procedures to be
followed and the techniques to be used, but also the range and type of data to be considered as evidence within the
process. RIA decision makers, for example, are often advised to focus their analysis on situations of market failure,
with the usually implicit assumption that markets will generally operate in a socially desirable fashion (Office of
Management and Budget, 2003, p. 4). The danger here, of course, is that the presumption of market efficiency will
tend to lead decision makers to ignore or downplay evidence that suggests that a market is not working efficiently, as
seems to have been the case in the current financial crisis sparked by deficiencies in the sub-prime mortgage market.
 Enhancing a government’s credibility and legitimacy by adopting a peer-approved approach to policy development
and review, as with the acceptance and progressive adoption of RIA systems by OECD members, then the EU,
following the 1995 OECD Council decision. The desire for, and influence of, peer approval in this case was sufficient
to overcome the fact that, at least in 1995, there was little if any systematic evidence to indicate that RIA systems
were successful in improving policy performance. Rather, advocates of RIA systems relied upon claims as to the
logic, or rationale, of such systems to support the case for adoption: necessary claims, but not backed, in the
documents reviewed by this author, by a convincing evidence base. Since their introduction RIA systems have been
subject to varying degrees of evidence-based review and assessment, all of which have indicated a varying range of
deficiencies (see, for example, Carroll, Silver, Deighton-Smith, & Walker, 2008; National Audit Office, 2007;
Regulation Taskforce, 2006; The Evaluation Partnership Limited, 2007).

Hence, RIA systems do have as one aim the strengthening of the evidence base regarding the development and review of
policy. However, it is a strengthening within somewhat vague and limiting boundaries, with a strong presumption and
relatively narrow focus in favour of market based solutions to a wide range of problems, and varying rule-based
controls to achieve that end. Also, where control, credibility and legitimacy motives are evident, then the strength and
direction of the commitment to evidence-based policy by elected ministers should be questioned, for they might reveal
motives of a more contentious sort.

2.2. RIA and evidence-based policy

The term evidence is defined in its broadest sense to include everything that is used to determine or demonstrate the
truth of an assertion. Hence, evidence-based policy can be described in at least two ways. The first, somewhat
contentiously, is as a social movement representing the efforts of persons inside and outside formal decision-making
processes to improve policy outcomes by applying rational means and techniques (Jackson, 2007; Mintrom, 2007;
Packwood, 2002; Pawson, 2002). The second, closely related to the first, is as a formal policy stance adopted by
governments keen to stress the need for better quality, more effective policy (Wells, 2004).
RIA, as described above, is thus very clearly related to evidence-based policy in both the above senses. As with
evidence-based policy, it can be regarded as a social movement aimed at the improvement of policy outcomes,
especially in relation to the regulation of business. Indeed, it has been a rather successful one in that its proponents,
largely economists, have gained formal acceptance for the approach and its implementation within an increasing
number of national governments, including the USA, UK, Canada and Australia, an influential international
organisation, the OECD and, more recently, within a supranational organisation, the European Union (Radaelli, 2005).
It is also, as described above, a formal policy stance adopted, increasingly, by a number of governments.
Given, then, that the introduction of RIA systems predates the current, evidence-based policy movement and policy
stance by a number of decades, it is possible to view their relationship, paradoxically, as signaling both failure and
success for RIA and its proponents. It signals failure in that if RIA systems had been successful in achieving their aim
of more effective, evidence-based policy, then why would a ‘new’, evidence-based policy movement with very similar
aims and methods arise, and why would it be endorsed and adopted as policy by an increasing number of governments?
However, it could be regarded also as a signal of RIA’s success, if one regarded the evidence-based policy movement
as simply the extension of the underlying values and principles of RIA from departments concerned primarily with
business regulation to all areas of government, whether by conscious design or by a process of voluntary policy
diffusion.
The temptation of this author is to see the development of the evidence-based policy movement and its adoption as a
policy stance by a number of governments as ‘evidence’, of at least the partial failure of RIA – though he would be
keen to receive views to the contrary! In the next section that evidence is considered at greater length by means of a
comparison of a number of studies of the actual performance of RIA systems, with an emphasis on their evidence base.

3. The performance of RIA systems in relation to evidence

This section identifies several dimensions of the performance of RIA systems in relation to evidence. As far as the
author is aware there has been no systematic study of the use of evidence in RIA systems, other than in the context of
broader, whole of system reviews. Most such reviews of the performance of RIA systems do throw some light upon the
role of evidence, if only indirectly, and their findings are drawn upon in this paper as they provide a valuable,
comparative basis for the analysis (see, for example, Argy & Johnson, 2003; Auditor-General, 1989; Banks Report,
2006; Carroll et al., 2008; Head & McCoy, 1991; Office of Regulation Review, 1993; Renda, 2006). While the analysis
presented in this paper cannot be regarded as conclusive, the fact that different studies have revealed often similar
issues in relation to the use and quality of evidence in regulatory proposals developed within RIA is instructive.

3.1. The administrative politics of RIA and its impact on the development of evidence-based policy

RIA systems are rarely, if ever, introduced on the initiative of a line department, though they may be located in, and
organised from, such a department. Instead, they tend to spring from a core central agency or department, or key
advisory body, and are usually imposed on line departments and other agencies, often with varying degrees of
resistance that can persist for several years. Nor, for the most part, does their introduction signal the replacement of
existing policy-making systems. Rather, most often they are put in place as an addition to existing systems, either
operated in parallel with those systems, or given the status of a mere ‘add on’, a requirement to be satisfied late in the
policy-making process, involving the somewhat irritating completion of yet more forms, at the behest of a central
agency. While, over time and with increased pressure from core agencies, RIA systems tend to become more
integrated with ‘normal’, departmental processes, they are still usually seen as an externally imposed requirement
rather different from the norm.
In this context it is not surprising that RIA systems and their proponents, especially where the latter are based in an
external agency or other, core department, are often perceived in a somewhat negative light, to be resisted or, at the
least, to be met with a defensive, questioning posture. They represent an intrusion that carries with it the need for
change to established processes and routines, including those related to the gathering of evidence for policy and
regulation. One RIA, the Regulatory Impact Statement (RIS) system, for example, was introduced by the Australian
federal government in 1986 as a new policy-making process, coordinated by a new Business Regulation Review Unit
(BRRU) (Head & McCoy, 1991, p. 158). Within three years, an efficiency audit report by the Auditor-General
noted that the RIS system was not achieving its stated objective of comprehensively reviewing all targeted government
regulation, or advising government on all new regulatory proposals, in part simply because of the failure of some
departments to provide the required RIS (Auditor-General, 1989). As a later publication noted of this period,
‘ministers and regulatory departments/agencies routinely eschewed preparation of RISs’ (Argy & Johnson, 2003, p.
22). This type of rejection clearly involved the processes mandated for the RIS, including those specified for the
acquisition and analysis of evidence to support the regulations in question, as well as the other aspects of the RIS
regulation making process.
By the early 1990s, while it is difficult to be precise, the impact of the new Australian system, along with that of the
BRRU, seems to have been negligible. This lack of impact was also associated with some feeling that the RIS system
had an ideological, rather than a regulation improvement purpose, aimed largely at freeing markets from regulatory
control without convincing justification for such freeing up, whether by deregulation or regulatory reform (Head &
McCoy, 1991). Also, of course, RIS represented, at least in its earlier years, an increased workload for those involved
in policy making, an unwelcome burden for the public servants involved.
The Australian RIS was further reviewed in 1993 and again it was found that it was less successful than hoped, in
part because of a lack of understanding and enthusiasm in line departments and agencies, with the review
recommending

 A re-weighting of its work priorities to place greater emphasis on its educative and research role, with a more
focused and selective approach to its Cabinet role.
 The introduction of a range of measures to increase awareness and understanding of regulation review policies
within the bureaucracy.
 A range of measures to raise the public profile of the ORR (BRRU’s successor), and regulation review policy (Office
of Regulation Review, 1993, p. 272).

However, when RIS was reviewed for the third time some 13 years later, in 2006, many of the same issues were
found to be still at work, though formal compliance with the system had increased (Banks Report, 2006).
Hence, the implication of this ‘administrative resistance’ to any attempt to emphasise the need for evidence-based
policy, whether in the form of an RIA or not, is clear: unless it is welcomed by those whose place it will be to
implement such a policy, it is not likely to be successful, at least in the short and medium term. It does not necessarily
follow that evidence-based policy will not result, only that the introduction of RIA seems to have had little impact on
increasing and improving the use of evidence for policy.

3.2. The capacity of government to develop evidence-based policy within RIA

As Howlett notes, it is all very well to prescribe evidence-based policy but the ability of governments to achieve that
objective depends very much upon their actual and potential capacity (Howlett, 2008). His conclusion regarding the
broader sense of policy capacity is that
The weak policy capacity and short-term analytical focus found among most of the major actors involved in
even rich countries like Canada is very problematic in the context of dealing with the challenges of improving
policy-making through the adoption of evidence-based techniques for dealing with complex contemporary
policy challenges. Governments and non-governmental actors are being asked to design and implement
effective long-term policy measures without necessarily having the kinds of resources they require to
successfully avoid common policy failures. In such contexts, unfortunately, ‘failure may be the only option’
actually available to governments (Howlett, 2008, p. 18).
The Canadian experience to which Howlett refers applies equally to Australia and the UK in relation to their
RIA systems. In the UK, for example, a recent study by the National Audit Office, while pointing out the many examples of
good practice it had discovered in an analysis of RIAs prepared by government departments, clearly felt that departments'
capacity to produce good quality RIAs was limited by often insufficient resources, including the ability to identify and
analyse relevant evidence, recommending that
The RIA process is often crucial to good policy making and departments should ensure that the process is
properly resourced and that appropriate training is given (National Audit Office, 2007, p. 8).
Similarly, in the Australian context, a 2006 review of federal government regulation felt that the capacity of
departments and agencies in regard to RIA processes and content was, after 21 years of operation, still unsatisfactory,
recommending to the Government that
Government departments and agencies should ensure that their capacity to undertake good regulatory analysis,
including appropriate consultation on regulatory proposals, is adequately resourced (Regulation Taskforce,
2006, p. 158).
The lack of capacity was also noted by the European Commission in its drive first to build, and then to improve, the
capacity of EU agencies regarding the RIA process (Formez, 2004, p. 7).
Where inadequate capacity exists, the range, quality and appropriateness of evidence identified and utilised by
staff preparing RIAs is likely to be restricted. The areas of particular weakness noted in a variety of reviews vary
somewhat by department and by country, but all include the items noted below.

3.2.1. A failure to acknowledge a lack of data in RIA


It is not common for the author of an RIA submission to note its lack of relevant data and, hence, to draw attention to
the fact that its findings must be less than conclusive. Indeed, there is some evidence to suggest that the lack of
evidence, or limitations in the available evidence, is often glossed over. Even more worryingly, available but partial
data is sometimes presented in a consciously biased fashion, usually in favour of the recommended option, as noted by
Hertin, Pesch, and Jacob (2007, p. 11).
It is not difficult to penetrate the ‘gloss’ of written RIA submissions, with their lack of certain types of data
becoming very evident to the informed reader, a fact repeatedly noted in reviews of RIA. This is most obviously the
case in relation to quantitative data. Gary Banks, for example, the chair of the Australian Productivity Commission, a
body responsible for monitoring the performance of the federal government’s RIS system, has asserted that only 10%
of tabled Australian RIAs even considered compliance costs and only 20% made any attempt at quantifying them
(Banks, 2005, p. 10). Similarly, in a study of 10 RIAs, the UK’s National Audit Office found a number of serious
weaknesses in relation to quantitative data. Nine of the ten RIAs the Audit Office examined contained some form of
quantified estimate of costs, but with varying degrees of uncertainty about the data they used in their calculations. In
addition, this uncertainty was rarely pointed out in the written reports, which, for example, typically presented single
point estimates of costs and benefits, rather than ranges, giving an often undeserved appearance of precision and
accuracy to the estimates involved. Moreover, only one of the reports studied by the Audit Office both applied and
made available the results of sensitivity tests indicating what would happen if its key assumptions were modified,
despite the limitations of the data they drew upon. There was a particularly notable lack of data regarding expected
benefits and most of the RIAs, as the National Audit Office noted,
. . .did not include a quantified comparison of the expected costs and benefits, which could be used to
demonstrate that the benefits of the regulation justified the costs (National Audit Office, 2007, pp. 6–7).
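The Audit Office's two criticisms, single point estimates presented without ranges and key assumptions left untested, are straightforward to illustrate. Below is a minimal sketch (all figures are invented; the 'firms x hours x hourly rate' costing is a generic illustration, not the NAO's method) of how a compliance costing can report a range and a one-way sensitivity test instead of a lone point estimate:

```python
def annual_compliance_cost(firms: int, hours_per_firm: float, hourly_rate: float) -> float:
    """Total yearly cost of a rule: affected firms x hours of work x cost per hour."""
    return firms * hours_per_firm * hourly_rate

# A single point estimate, which carries an undeserved appearance of precision.
point = annual_compliance_cost(firms=5_000, hours_per_firm=12, hourly_rate=40.0)

# A range instead: propagate low and high assumptions rather than one guess.
low  = annual_compliance_cost(firms=4_000, hours_per_firm=8,  hourly_rate=35.0)
high = annual_compliance_cost(firms=6_000, hours_per_firm=16, hourly_rate=50.0)

# A one-way sensitivity test: vary one key assumption, holding the others fixed.
for hours in (8, 12, 16):
    cost = annual_compliance_cost(firms=5_000, hours_per_firm=hours, hourly_rate=40.0)
    print(f"hours_per_firm={hours:>2}: estimated cost = ${cost:,.0f}")

print(f"point estimate ${point:,.0f} vs range ${low:,.0f} to ${high:,.0f}")
```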

3.2.2. Lack of data regarding specific target groups or sectors


Accurately identifying target groups, and then obtaining relevant, accurate data regarding
their situation, are often major problems for those developing regulation. Interestingly, few, if any, of the countries and
reviews surveyed for this paper provided detailed advice to policy makers as to how to overcome these problems,
although all urged the importance of gaining such data. At best the relevant procedural guidelines stressed the need to
identify and consult with interested parties, sometimes listing the specific groups to be consulted, such as immigrants
or the disadvantaged, and often with particular emphasis given to the need to consult with small business (Regulation
Taskforce, 2006, p. 2).
The importance of consultation as a means of gaining the necessary target group data is increasingly often pointed
out by supervising agencies, though only the UK, Sweden, Germany and, from 2007, Australia seem to have
established minimum standards for consultation in relation to RIAs (Formez, 2004, p. 15). In Australia the change was
brought about following a survey undertaken by the Australian Public Service Commission. The survey found that
only 25% of regulatory agencies surveyed had engaged with the public when developing regulations, a surprisingly
low proportion (Banks, 2005). While it does not necessarily follow that limited or no consultation will result in poorer
quality regulatory proposals, poorer quality regulation is surely likely, given that potentially important data, or
evidence, is not available to policy makers. There is, of course, something of a dilemma in regard to increased
consultation, for, while it might lead to better quality evidence and, in turn, better regulation, it might also increase the
danger of regulatory ‘capture’ by business interests.

3.2.3. Lack of technical expertise, especially in relation to cost-benefit analysis


Where data is lacking, inappropriate or presented in a misleading fashion, it is not surprising that the resulting analysis
is limited, even where this is not made explicit. However, often this is compounded by a lack of technical expertise in
policy analysis, especially in relation to cost-benefit analysis (see, for example, Colebatch, 2005, 2006; Colebatch &
Radin, 2006; Dobuzinskis, Howlett, & Laycock, 2007; Jaffe & Stavins, 2007). In the Australian case this has been
identified as one of the key causes of poor quality regulatory proposals, especially of the costs and benefits of the
regulatory options identified in departmental impact assessments (Productivity Commission, 2005, p. 26). Similarly, it
has been regarded as a weakness in most other reviews of RIA, with report recommendations often stressing the need to
remedy the weakness with increased education, training and recruitment (Formez, 2004; National Audit Office, 2007;
Regulation Taskforce, 2006). A related concern about weaknesses in the calculation of administrative costs gave rise to the
‘Standard Cost Model’, developed for the Dutch Government and then taken up in varying degrees by several European
and OECD countries (Department of Finance and Deregulation, 2009; Formez, 2004, p. 42).
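The Standard Cost Model reduces administrative burden to a simple product: the price of complying with an information obligation once (an hourly tariff multiplied by the time taken), multiplied by the quantity of compliance events per year (the number of affected businesses multiplied by the frequency of the obligation). A minimal sketch of that arithmetic, with invented figures for illustration:

```python
def scm_burden(tariff: float, minutes: float, businesses: int, frequency: float) -> float:
    """Standard Cost Model: burden = price (tariff x time) x quantity (businesses x frequency)."""
    price = tariff * (minutes / 60.0)   # cost of performing the information obligation once
    quantity = businesses * frequency   # compliance events per year, economy-wide
    return price * quantity

# Illustrative only: a quarterly report taking 90 minutes at $50/hour across 10,000 firms.
print(scm_burden(tariff=50.0, minutes=90, businesses=10_000, frequency=4))  # 3,000,000.0
```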

3.2.4. The neglect of non-economic knowledge sources and types


As Adams has noted in the more general policy-making context, there is a distinct absence of non-expert knowledge
in public policy-making processes (Adams, 2004). This is particularly the case for the processes involved in mandated
RIA, whose focus has been upon, for the most part, regulations impacting on business and the economy. The emphasis
in RIA guidance documents produced for the assistance of public servants is very much upon the need for what
Adams describes as expert knowledge and management knowledge, especially upon the need to measure costs and
benefits using relatively sophisticated techniques drawn from the tool box of the economist and accountant (Adams,
2004, p. 32).
There is often also a distinct implication that the major source of such expert knowledge lies with public servants in
the departments concerned with developing the regulation in question, with the RIA seen as a primarily departmental
process based on their expertise. While public servants sometimes do have substantial expertise, this is by no means
always the case and, of course, managers in industry and other areas can have similar, even greater expertise, as
pointed out by Hertin et al. (2007, p. 12). Indeed, as Page and Jenkins note in regard to the UK policy bureaucracy,
The broad features of our characterization of UK policy bureaucracy are that policy officials at relatively junior
levels are given substantial responsibility for developing and maintaining policy and servicing other, formally
superior officials or bodies, often by offering technical advice and guidance. These people are not technical
specialists in the sense that they develop high levels of technical expertise in one subject or stay in the same job
for a long time. They are often left with apparently substantial discretion to develop policy because they often
receive vague instructions about how to do their jobs, are not closely supervised, and work in an environment
that is in most cases not overtly hierarchical (Page & Jenkins, 2005, p. 168).
The focus on relatively narrow sources of expert knowledge is perhaps most obvious in relation to the concept of
social capital, which is largely ignored in the context of RIA systems, with little or no mention in RIA guidance
documents as to its meaning or importance. Nor is there usually any reference to the type of data to be collected
regarding social capital, data that could be used to enable informed judgements as to the likely impact of proposed
regulations on social capital and, in turn, the outcomes of a loss or increase in social capital. This is despite the fact that
in one of the very few studies of the impact of economic analysis on RIA it was found that there was little evidence to
suggest that the use of economic analysis has been an overwhelming success:
There is no evidence it has had a significant general impact, the economic analysis supporting it is frequently
done poorly (if at all), and there is only anecdotal evidence to suggest that it made a difference (Hahn & Tetlock,
2007).

3.2.5. Evidence and the lack of learning from experience


While evidence is of value when incorporated in an appropriate fashion in the RIA process, it is of even greater
value if it results in effective learning by those involved in RIA processes before, during and after the development of
the RIA in question, so that when faced with a future need to amend a regulation those involved will at least know that
there is relevant information available regarding the regulation’s past performance and where to look for it. However,
in practice, the monitoring and evaluation performance of departments is often surprisingly limited. Jacobs, for
example, in a study of RIA in the UK, found that
There was a surprising low level of thinking in the area of M&E despite the textbook need to review policy and
then feed back results into a re-design or new policy. This was both lacking in terms of revisiting the estimates of
impact within the RIA to see if these turned out to be accurate (Jacobs, 2005, p. 22).
The recognition of this lack of effective monitoring and assessment resulted in a recent, increased emphasis on post-
implementation review in UK departments and agencies (Department of Business, Enterprise and Regulatory Reform,
2007a). The immediate purpose of such review is to identify whether the aims of the regulation are being achieved in
practice, with the information to be made available to policy makers and stakeholders. The assumption is that this will
enable more meaningful consultation with business and other groups as they will be better informed as to actual
outcomes and, in turn, that will lead to more appropriate amendments to regulations. Unfortunately, on examining the
details of the UK’s modified impact assessment system the author could not identify any significant changes in either
the Impact Assessment Guidance or Impact Assessment Toolkit documentation provided to departments in relation to
implementation (Department of Business, Enterprise and Regulatory Reform, 2007b, 2007c). Both documents have
been rephrased and are shorter, but indicate no significant change in terms of technique or approach.
It might be that in the UK context the increased emphasis on post-implementation review will take the form of
increased pressure on departments to improve their existing systems of post-implementation review, combined with a
more rigorous assessment of the descriptions provided by departments as to how their reviews will be undertaken.
However, implementation, monitoring and review plans were required under the previous RIA system and, again, did
not seem to result in more effective consultation and better quality regulation. As the National Audit Office noted in its
2007 report
There continues to be an unstructured and ad hoc approach to post-implementation review across all
departments. The Department of Health and the Department for Communities and Local Government have
begun to develop a more systematic approach to evaluating the impact of policy changes, but have not yet begun
a rolling programme of reviews. Our census of departments highlighted resource constraints and time pressure
as the main reasons why reviews were not more widespread (National Audit Office, 2007, p. 3).
It did note that many departments were seeking to improve their systems for post-implementation review, with
several indicating that pressure from external stakeholders was an incentive to complete reviews, so it is possible that
improved performance will result in the medium to longer term (National Audit Office, 2007, pp. 22–23). However, on
the basis of the above references, there seems little doubt that systematic review of regulatory performance is limited,
indicating, in turn, that this potentially very rich source of evidence for future RIA reports is disappointingly limited.

3.2.6. High-level commitment, evidence and RIAs


Given that high quality RIAs are resource intensive, operating within a complex political system subject to a myriad
of competing demands, perhaps the fundamental and inevitable difficulty their proponents face is that policy making is
an inherently political process, one deeply embedded in liberal-democratic systems of government. This is a difficulty,
perhaps an insurmountable one, as the wishes of the electorate do not always coincide with the requirements of a
rational, evidence-based decision-making process such as that at the core of prescribed RIA processes. Rather, politics
involves constantly changing activities and processes arising from the interaction of individual and group values,
motivations and actions, notably in situations of scarce human and physical resources. Both conflict and cooperation
can result and, while conflict is not an inevitable result of such interactions, it is a very frequent characteristic,
especially where individuals and groups compete, successfully and unsuccessfully, for the scarce resources they see as
necessary to achieve their goals, or where they compete in relation to values, or both. It often takes a series of
compromises between the competing parties before agreement can be reached as to the content of a new or modified
regulation.
When the compromise or set of compromises that often constitute a regulation, or the grounds for a regulation, are
subjected to rational analysis in terms of the more rigorous criteria utilised in a cost-benefit analysis, they are likely to
‘fail’ to meet the criteria or tests mandated under RIAs. But RIAs do not test for the merits of the political
compromises underlying the regulations in question. They are silent, in other words, as to whether a regulation meets
the criteria for political success, leaving that discussion and decision to elected ministers. Nor, of course, is there any
easy way of identifying the type of evidence that might indicate whether a regulation meets the criteria for political
success. Hence, it is not surprising that ministers, their minders and senior public servants have reservations about the
value of RIAs, reservations that are visible in their varying degrees of commitment to the process.
In the above light, it is not surprising that political support for RIA varies in extent and intensity over time. The
primary reason for the variation is not hard to find, occurring, in particular, where ministers responsible for
departments are faced, for example, with an RIA assessment that judges new or modified regulatory proposals
springing from their departments as inadequate. In such situations they face a quasi-conflict of interest: on the
one hand, they are often committed under the doctrine of collective cabinet responsibility to support their Cabinet’s formal
endorsement of rigorous impact assessment; on the other, when faced with a negative RIA evaluation, they are tempted
to push the proposal through for Cabinet discussion and agreement despite the negative review. Moreover, the staff of
ministerial offices and the heads of department and senior public servants are well aware of this situation. Whatever
their personal feelings on the matter, it takes a very brave person to inform a minister that a favoured regulatory
proposal should not be recommended and pursued, given an adverse RIA assessment.

4. Conclusion

As noted in the paper, in all of the jurisdictions examined the performance of RIA has been limited, with proposed
regulation often continuing to offer an inadequate and unconvincing evidence base to support the proposals. There are
several reasons for this poor performance:

 the often rigidly positivist approach to assessment on which they are based, with little recognition of its inherent
weaknesses;
 continuing suspicion as to the motives behind the introduction of RIAs. In turn, this is exacerbated by the lack of
strong, evidence-based arguments to support the case for RIA;
 continuing, if declining, administrative resistance to the ‘external’ imposition of resource-intensive RIA procedures
in addition to existing, traditional, department-based, policy-making procedures;
 a varying but often inadequate capacity for rigorous, evidence-based policy in departments and agencies, as
indicated by a lack of data, a lack of appropriate data, especially regarding non-economic data and social capital, a
lack of technical expertise and a persistent failure to learn from regulatory experience;
 a lack of persistent, high-level commitment to RIA from ministers and senior public servants, springing out of the
essentially political nature of government decision making and the inability of such systems to assess the value of
political compromises that underlie much regulation.

Hence, my answer to the question set for this paper ‘Does RIA lead to better policy?’, is no . . . and yes. ‘No’, in that
there are a number of insuperable obstacles, notably the limitations of the positivist approach that underlies RIAs,
combined with the need for compromise in political systems, compromises that cannot be costed by techniques such as
cost-benefit analysis. ‘Yes’, in that there are elements of any decision-making process that can be improved, with
existing RIA often highlighting a variety of weaknesses upon which remedial attention can be focused, if there is the
political will to do so.

References

Adams, D. (2004). Usable knowledge in public policy. Australian Journal of Public Administration, 63(1), 29–42.
Argy, S. & Johnson, M. (2003). Mechanisms for improving the quality of regulations: Australia in an international context. Productivity Commission
Staff Working Paper. ACT, July.
Auditor-General. (1989). Reports on audits to 31 December 1988. Canberra: Auditor-General’s Office, AGPS.
Australian Government. (2007). Best practice regulation handbook. Canberra, Australia.
Banks, G. (2005). Regulation-making in Australia: Is it broke? How do we fix it? Productivity Commission. from <http://www.pc.gov.au/speeches/
cs20050707/index.html> Retrieved 12.06.06.
Banks, G. (2006). Rethinking regulation. AGPS. from <http://www.regulationtaskforce.gov.au/index.html> Retrieved 22.11.08.
Carroll, P., Silver, H., Deighton-Smith, R., & Walker, C. (2008). Minding the gap—Appraising the promise and performance of regulatory reform in
Australia. Canberra: ANU EPress.
Colebatch, H. (2005). Policy analysis, policy practice and political science. Australian Journal of Public Administration, 64(3), 14–23.
Colebatch, H., & Radin, B. (2006). Mapping the work of policy. In H. K. Colebatch (Ed.), The work of policy: An international survey (pp. 217–226).
New York: Rowman and Littlefield.
Colebatch, H. K. (Ed.). (2006). The work of policy: An international survey. New York: Rowman and Littlefield.
Department of Business, Enterprise and Regulatory Reform. (2007a). New impact assessments will increase transparency and improve regulation.
London: BERR. from <http://bre.berr.gov.uk/regulation/news/2007/070402_ia.asp> Retrieved 6.01.08.
Department of Business, Enterprise and Regulatory Reform. (2007b). Impact assessment guidance. London: BERR. from <http://bre.berr.gov.uk/
regulation/ria/ia_guidance/> Retrieved 11.01.08.
Department of Business, Enterprise and Regulatory Reform. (2007c). Impact assessment toolkit. London: BERR. from <http://bre.berr.gov.uk/
regulation/ria/toolkit/> Retrieved 11.01.08.
Department of Finance and Deregulation. (2009). Business cost calculator. Canberra: AGPS. from <http://www.finance.gov.au/obpr/bcc/
index.html> Retrieved 12.01.09.
Dobuzinskis, L., Howlett, M., & Laycock, D. (Eds.). (2007). Policy analysis in Canada: The state of the art. Toronto: University of Toronto Press.
Formez. (2004). A comparative analysis of regulatory impact assessment in ten countries. A report prepared for the EU Directors of Better
Regulation Group, Dublin. At <http://www.betterregulation.ie> Retrieved 20.07.08.
Hahn, R. & Tetlock, P. (2007). Has economic analysis improved regulatory decisions? Working paper 07-08, April. AEI-Brookings Joint Center for
Regulatory Studies.
Head, B., & McCoy, E. (Eds.). (1991). Deregulation or better regulation? Melbourne: Macmillan.
Hertin, J., Pesch, U., & Jacob, K. (2007). The production and use of knowledge in regulatory impact assessment—An empirical analysis. ECPR
general conference.
Howlett, M. (2008). Enhanced policy analytical capacity as a prerequisite for effective evidence-based policy-making. Presentation for the workshop
on evidence-based policy, international research symposium on public management XII.
Jackson, P. (2007). Making sense of policy advice. Public Money & Management, 27(4), 257–264.
Jacobs, C. (2005). Improving the quality of regulatory impact assessments in the UK. Working paper series, paper no. 102. Centre on Regulation and
Competition.
Jaffe, J., & Stavins, R. (2007). On the value of formal assessment of uncertainty in regulatory analysis. Regulation and Governance, 1(2), 154–171.
Mintrom, M. (2007). The policy analysis movement. In L. Dobuzinskis, M. Howlett, & D. Laycock (Eds.), Policy analysis in Canada: The state of
the art. Toronto: University of Toronto Press.
National Audit Office. (2007). Evaluation of regulatory impact assessments 2006–07. London, from <http://www.nao.org.uk/publications/
nao_reports/06-07/0607606.pdf> Retrieved 4.01.08.
OECD. (2005). Regulatory impact analysis in OECD countries: Challenges for developing countries. Paris: OECD.
Office of Management and Budget. (2003). Circular A-4 regulatory analysis. Washington: Office of Management and Budget.
Office of Regulation Review. (1993). Regulation and its review, 1992–1993. Canberra: Industry Commission.
Packwood, A. (2002). Evidence-based policy: Rhetoric and reality. Social Policy & Society, 1(3), 267–272.
Page, E., & Jenkins, W. (2005). Policy bureaucracy: Governing with a cast of thousands. Oxford: Oxford University Press.
Pawson, R. (2002). Evidence-based policy: In search of a method? Evaluation, 8(2), 157–181.
Productivity Commission. (2005). Regulation and its review 2004–05. Annual report series. Canberra: Productivity Commission.
Radaelli, C. (2005). Diffusion without convergence: How political context shapes the adoption of regulatory impact assessment. Journal of European
Public Policy, 12(5), 924–943.
Regulation Taskforce. (2006). Rethinking regulation: Report of the taskforce on reducing regulatory burdens on business. Report to the Prime Minister
and the Treasurer, Canberra, from <http://www.regulationtaskforce.gov.au/__data/assets/pdf_file/0007/69721/regulationtaskforce.pdf> Retrieved
5.01.08.
Renda, A. (2006). Impact assessment in the EU. Brussels: Centre for European Policy Studies.
The Evaluation Partnership Limited. (2007). Evaluation of the Commission’s impact assessment system final report. Brussels: Secretariat General of
the European Commission.
Wells, P. (2004). New labour and evidence based policy making. A paper presented to the PERC research seminar.
