Chapter 50: The Assessment of Executed Policy Solutions

Stella Z. Theodoulou and C. Kofinis

Concerns over accountability, efficiency, and effectiveness in policymaking are intensifying at all levels. Officials, policy makers, community leaders, and citizens at large want to know which policies and programs are not working, and why. More often than not, policies and the programs they create fail to achieve their intended effects or have unintended outcomes. The purpose of evaluation is to determine whether an implemented program is doing what it is supposed to do; if the program or policy is not performing well, evaluation helps determine why.

Over the years evaluation has become more commonplace, although it is not new for governments to assess whether their programs are cost effective and are achieving desired benefits. For example, in the Roman Empire it was common to alter tax policies in response to fluctuations in revenues. This was an early form of policy evaluation. American policy makers became more concerned with judging the effects of policies in the 1960s with the advent of the War on Poverty programs. From this point, American policy makers paid closer attention to whether the different welfare and antipoverty programs were having the effect they were supposed to and whether tax dollars were being spent efficiently and effectively. From the late 1960s, requirements for program evaluation were written into almost all federal programs.¹ Subsequently, Congress established evaluation organizations, and funding support for such organizations has varied with each subsequent presidential administration. For example, the Reagan administration made deep budgetary cuts that affected the federal government's ability to conduct program evaluation.²
In this chapter we will identify the different types of and approaches to evaluation, discuss the process of evaluation, identify who carries out evaluation, and review the obstacles that face evaluators. The chapter's objective is to offer readers a brief, simple, and clear introduction to program or policy evaluation. It is not intended to point the reader to any one approach or type of evaluation over another. Rather, the reader should understand that the best evaluation is the one which meets the needs of the program or policy that is being evaluated. We begin by distinguishing program evaluation from other related forms of assessment activity.

From Stella Z. Theodoulou and Chris Kofinis, The Art of the Game: Understanding American Public Policy Making (Belmont: Wadsworth, 2001).

What Is Policy Evaluation?

Policy evaluation consists of reviewing an implemented policy to see if it is doing what was mandated. The consequences of such policies and programs are determined by describing their impacts, or by looking at whether they have succeeded or failed according to a set of established standards.³ Within the field of public policy a number of perspectives as to what evaluation exactly is can be found. The first perspective defines evaluation as the assessment of whether a set of activities implemented under a specific policy has achieved a given set of objectives.
Thus, overall policy effectiveness is assessed.⁴ A second perspective defines evaluation as any effort that renders a judgement about program quality.⁵ A third perspective defines evaluation as information gathering for the purposes of making decisions about the future of a program.⁶ A final perspective found in the literature views evaluation as the use of scientific methods to determine how successful implementation and its outcomes have been.⁷ The General Accounting Office (GAO) defines program evaluation as the provision of sound information about what programs are actually delivering, how they are managed, and the extent to which they are cost-effective.⁸ For our purposes none of these definitions is necessarily unacceptable. We believe, however, that policy evaluation is better defined as a process by which general judgments about quality, goal attainment, program effectiveness, impact, and costs can be determined.

What differentiates policy evaluation from other, informal types of assessment is, first, its focus on outcomes or consequences.⁹ Next, evaluation is done post-implementation; in other words, the program must have been implemented for a certain period of time. Third, the goals of the policy or program are provided to the evaluators. The main purpose of evaluation is to gather information about a particular program's performance so as to assist in the decision to continue, change, or terminate it.

The Usefulness of Evaluation

One way that programs or policies may be assessed in terms of their accountability is through formal evaluation. Thus, the real value of program or policy evaluation is that it allows accountability to be measured empirically. Conducting an evaluation provides policy makers with accurate information on key policy questions that arise from the implementation of any policy or program.
Such information is of course provided by an evaluation study within a given set of real-world constraints, such as time, budget, ethical considerations, and policy restrictions. The usefulness of conducting an evaluation study of a program or policy is that it tells policy makers whether the policy or program in question is achieving its stated goals and at what cost those goals are being achieved. The effects of a program or policy will also be ascertained, and if the evaluation is conducted correctly policy makers will be able to determine whether those effects are intended or unintended. Of course, policy makers want to know if programs are being administered and managed in the most efficient, accountable, and effective manner. An evaluation study can determine if this is true or not. Evaluation is also useful because it can eventually stimulate change. Finally, an evaluation can discover flaws in a program that policy designers were never aware of in the abstract.

TYPES OF POLICY EVALUATION

There are a variety of models or frameworks that fuse theoretical content with practical guidelines for conducting a program or policy evaluation. Most models arose in the 1960s and 1970s and were early attempts to conceptualize what evaluation was and how it should be conducted. Thus, they offer varying understandings as to the goals of an evaluation, the role of the evaluator, and the scope of an evaluation, as well as how it is organized and conducted. Subsequent practitioners have taken the models and adapted them to changing times, contexts, and needs. Often two or more models will be used in conjunction with each other. The result is that there are several different types of evaluation models that vary in complexity.¹⁰ There are, however, four types that are most commonly applied: Process Evaluation, Outcome Evaluation, Impact Evaluation, and Cost-Benefit Analysis.
PROCESS EVALUATION. This type focuses on the concrete concerns of program implementation. It assesses how a program or policy is being delivered to target populations or how it is being managed and run by administrators. A process evaluation should address the following:

• determine why a program or policy is performing at current levels
• identify any problems
• develop solutions to the problems
• improve program performance by recommending how solutions should be implemented and evaluated once carried out

With this type of evaluation the focus is not on whether the program is meeting specified goals; it is solely to develop recommendations to improve implementation procedures. This type of evaluation is best suited to the needs of program managers and has the objective of helping managers overcome barriers to achieving the goals of the program or policy being implemented.

OUTCOME EVALUATION. This type focuses on the degree to which a policy is achieving its intended objectives with regard to the target population. It is concerned with outputs and whether the policy is producing the intended results. This can lead to assessment of effectiveness, including cost. Outcome evaluation is not well suited to the needs of program-level managers because it does not provide operational guidelines on how to improve the implementation of the program. Rather, it is best suited to the needs of policy designers because it identifies whether there is consistency between policy outputs and program intent. An outcome evaluation must determine the following:

• legislative intent
• program goals
• program elements and indicators
• measures of indicators
• program outcomes and outcome valences (whether they are positive or negative)

IMPACT EVALUATION. This type focuses on whether a program is having an impact on the intended target population.
The major difference between an impact evaluation and an outcome evaluation is that the latter is solely concerned with whether the program or policy's goals and objectives are being achieved. In comparison, the impact evaluation is concerned with assessing whether the target population is being affected in any way by the introduction and implementation of the policy. There is also concern with the impact of the program on the original problem being addressed. The benefit of an impact evaluation is that it is suited to the needs of both program-level managers and policy designers, for it is important for both to ascertain whether target populations are appropriately receiving delivery of a program. A successful impact evaluation must help to identify the following:

• theoretical goals of the program/policy
• the actual goals
• the program or policy objectives
• program or policy results and whether they are intended, unintended, positive, or negative in effect

COST-BENEFIT ANALYSIS. This type of evaluation focuses on calculating the net balance of the benefits and costs of a program. Essentially, cost-benefit analysis is a method with which to evaluate and assess the effectiveness of a policy's costs, benefits, and outcomes. Evaluators identify and quantify both the negative costs and positive benefits in order to determine the net benefit. For many it is a controversial evaluation technique because of the difficulty of applying it to the public sector: it privileges quantitative information at the expense of qualitative concerns. For example, in certain policy issue areas it is easier to calculate the immediate real-dollar costs than the less tangible benefits. For certain types of programs, such as education or the environment, one could argue that the real benefits do not materialize for years or decades.
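This timing problem can be made concrete with a small discounted net-benefit calculation, the arithmetic at the core of cost-benefit analysis. The sketch below is purely illustrative: the program, the dollar figures, and the 5 percent discount rate are all invented for the example, and `net_present_value` is a hypothetical helper, not a standard function.

```python
# Hypothetical cost-benefit sketch: a program (say, an education initiative)
# whose costs are immediate but whose benefits accrue only years later.
# All figures are invented for illustration.

def net_present_value(costs, benefits, discount_rate):
    """Discount each year's (benefit - cost) back to present dollars."""
    return sum(
        (b - c) / (1 + discount_rate) ** year
        for year, (c, b) in enumerate(zip(costs, benefits))
    )

# Years 0-2: heavy spending, no measurable benefit yet.
# Years 5-9: benefits finally materialize.
costs    = [100_000, 100_000, 100_000, 0, 0, 0,      0,      0,      0,      0]
benefits = [0,       0,       0,       0, 0, 90_000, 90_000, 90_000, 90_000, 90_000]

# Judged over the first three years only, the program looks like a pure loss...
short_run = net_present_value(costs[:3], benefits[:3], 0.05)
# ...but over the full ten-year horizon the discounted benefits exceed the costs.
long_run = net_present_value(costs, benefits, 0.05)

print(f"3-year NPV:  {short_run:,.0f}")
print(f"10-year NPV: {long_run:,.0f}")
```

An evaluator who truncated the analysis at year three would recommend termination; extending the horizon (and choosing the discount rate) reverses the verdict, which is exactly why cost-benefit results are sensitive to such assumptions.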
Hence a cost-benefit analysis may judge a program to be inefficient in terms of monetary expenditures when it may in fact be effective in realizing its long-term goals and in delivering benefits that in the long term far exceed the dollar costs. There are simply some things that cannot easily be quantified. Although it cannot be used alone, cost-benefit analysis can color discussion of whether a program or policy is successful or not. It is most useful as a tool in conjunction with one of the other types of evaluation.

How Policy Is Evaluated

Evaluation of policy is fairly complex and includes initial activities that must be undertaken to ensure the success of the overall evaluation. Intrinsic to this success is the duty of the evaluator to communicate findings and conclusions to the client. Evaluation can be viewed as a three-stage sequence: planning, data gathering, and dissemination. Across these three stages a series of essential activities must take place (see Box 50.1).

Stages in the Evaluation Process

STAGE ONE - PLANNING. This stage consists of three steps. Step one is familiarity with the program, step two is deciding the focus of the evaluation, and step three is developing evaluation measures. In step one evaluators must become aware of and familiar with the actual program or policy being evaluated. This can be accomplished through the evaluator asking him or herself a series of questions. The first question attempts to clarify the goals and objectives of a program or policy. This is not always easy to do, since the legislative mandate for the policy may have ambiguously expressed goals, multiple goals, or conflicting goals. Next, the evaluator must determine the relationship of the program being evaluated to other similar programs. Third, the evaluator must identify the major stakeholders in, and the target populations of, the program. Stakeholders are individuals, agencies, or groups who hold stakes in the outcome of the evaluation.
Target populations are those whom the policy affects. Non-target groups should also be considered because they may potentially be affected by the policy. Finally, the evaluator must learn the ongoing and recent history of the program. Once all of these questions have been answered, the evaluator can move on to the next activity in the planning stage.

In step two of the planning stage evaluators must decide what they are actually assessing. Specifically, what is the focus of the evaluation? Is it the policy's impact, is it its outcomes, is it the costs and benefits, or is it the way the policy is being delivered? Once the focus is decided upon, the evaluator can conduct step three, which is the development of measures for that focus. Such measures should include estimating the cost of the policy, in both dollar and non-dollar terms.

BOX 50.1 Essential Activities in the Evaluation Process
• specification of theoretical relationships
• development of a research design
• measurement of program/policy goal attainment
• collection of data on attainment measures
• analysis and interpretation of data

Sylvia, Sylvia, and Gunn argue that we can ensure that what should be measured is being measured by adopting a ten-point checklist:¹¹

1. Is the program experimental or operational?
2. Who is the audience?
3. Are the measures and indicators appropriate for the needs of the audience?
4. Are you interested in outcome or impact?
5. What is the purpose of the evaluation?
6. Are we trying to build theoretical knowledge or to see if maximum service is provided?
7. How will the study affect funding of the program?
8. Can we realistically produce a valid design given our resources?
9. Are we measuring what we are supposed to?
10. What am I doing, how am I doing it, and who cares what I tell them?

STAGE TWO - DATA GATHERING. Two types of data must be collected by evaluators.
First, data must be collected that allow the program's overall configuration and structure to be better understood. Thus, information on how the program is delivered, to whom, and how many clients are served must be gathered. The second type of data gathering deals with the degree to which program goals and objectives are being achieved. The evaluator must also collect data on other effects, both intended and unintended, that can be attributed to the policy. How data are gathered will be determined by the evaluator's decision to apply quantitative or qualitative methods. Quantitative methods refer to a range of techniques that involve the use of statistics and statistical analysis for systematically gathering and analyzing information. Qualitative methods are aimed at understanding underlying behavior through comprehending how and why certain actions are taken by implementers, clients, and target populations. Box 50.2 highlights the differences between quantitative and qualitative methods of analysis.

BOX 50.2 Differences Between Qualitative and Quantitative Analysis
Quantitative: counting, measuring, confirming, determining, arguing, testing
Qualitative: exploring, observing, experiencing, finding what is real, exploring multiple realities

There is no essential agreement on which type of analysis to utilize. Often the best way to determine which methods to apply is to look at the program's size and scope, the intended audience for the evaluation, the program's goals, the evaluator's own skills, and the resources available to conduct the evaluation. Once the methods have been determined, the evaluator must develop the research design and confirm the instruments (data collection devices) that will be applied (see Box 50.3). Research designs can be seen as strategies that help the evaluator to improve the validity and reliability of the evaluation.
Designs must be rigorous so as to avoid validity or reliability threats, but must also be appropriately matched to the complexity and needs of the program. Once a research instrument is selected and data gathered, the evaluator may use a number of statistical techniques to analyze and interpret the data. Such techniques allow the evaluator to determine the potential associations or correlations of the variables under analysis.

BOX 50.3 Research Instruments
• subject knowledge tests
• attitude surveys
• samples
• personal interviews

STAGE THREE - DISSEMINATION. The final stage involves the dissemination of the findings of the evaluation to those who commissioned the evaluation, specifically the client. In some cases evaluation findings are also forwarded to stakeholders, target groups, or the public at large. The goal of any evaluation is to provide useful information. Usefulness depends upon a number of factors, including timeliness, accuracy, and completeness. All evaluation reports should state the assumptions, as well as the real indicators, that affect data interpretation. In sum, every evaluation will include the perceptions and assumptions that the evaluator derives from the assessment. Additionally, there should be alternative explanations for all observed outcomes, facts should be separated from opinion, and the findings should be clear and unambiguous. Finally, an evaluation should, when appropriate, include recommendations for the policy's continuation, change, or termination.

Another dimension critical to effective dissemination is the relationship between the evaluator and the client. Clients in many ways can influence an evaluation study's outcome by bringing pressure to bear. For example, a client may have already made up his or her mind about the program and may pressure the evaluator to produce a predetermined finding.
In response to such concerns, professional organizations have in recent years clarified the rights and responsibilities of evaluators by publishing standards and guiding principles for program evaluation practitioners.¹³ The standards are principles rather than rules that evaluators should adhere to. They simply highlight what are acceptable and unacceptable practices; thus, they are a benchmark for practitioners. Inevitably, evaluators must decide for themselves what practices are ethical and justifiable.

Who Evaluates?

The choice for any agency or group that wishes to be evaluated is who should conduct the evaluation. In many ways this is the most critical decision in the evaluation process. The choice is between internal and external evaluators. Neither choice is inherently better than the other. Who should be used as an evaluator depends upon the needs of the organization that is commissioning the evaluation study. Both types of evaluators have their strengths and weaknesses. Internal evaluators have the overall advantage of being familiar with the program, the organization, the actors, and the target population. This can save time in the planning stage of the study. However, it can also prove to be a disadvantage in that internal evaluators, because of their ties to the organization, might be "too close" to identify problems, to place blame, or to recommend major changes or termination. External evaluators are individuals who have no internal connection or ties to the organization being evaluated. They are perceived as "outsiders." External evaluators are often used when the evaluation is authorized by an entity other than the organization itself. For example, if the City of Los Angeles wanted to evaluate the Los Angeles Police Department, it would be an authorizing entity outside of the organization being evaluated.
In this case it is more than likely the city would use external evaluators on the assumption that external evaluators would provide objective information because they have no vested interest or agenda to fulfill. The major advantage of external evaluation is that it is perceived to be impartial because evaluators supposedly have no stake in the outcome of the evaluation. This is particularly useful when controversial programs or policies are being assessed. A further strength of utilizing external evaluators is that they are usually professional consultants who are trained in the requisite skills and methods of evaluation techniques. In the past this was undoubtedly true. Recently, however, many individuals working in the public sector have been educated in administration and management programs that train students in both policy analysis and program evaluation, and are themselves capable of conducting an evaluation.

The major disadvantage of opting for an external evaluation is cost, in both money and time. Some would also argue that it can prove costly in terms of organizational politics because of its potentially disruptive nature. A further disadvantage could be that external evaluators also have an agenda. For instance, they may wish to please the client in order to secure future jobs. This is potentially a dilemma; however, in theory professional ethics ensure that evaluators, although mindful of client needs, stay true to their impartiality. Another weakness of utilizing external evaluators is that they may face resistance from within the organization and from actors and other stakeholders who might have a vested interest in the outcome of the evaluation.

In conclusion it is interesting to consider two general laws formulated by James Q.
Wilson, which put into perspective concerns about the evaluation process.¹⁴ Wilson's first law is that all policy interventions in social problems produce the intended effect—if the research is carried out by those implementing the policy or by their friends. Wilson's second law is that no policy intervention in social problems produces the intended effects—if the research is carried out by independent third parties, especially those skeptical of the policy. Wilson's two laws help explain just how difficult the evaluation process is.

Obstacles and Problems in Evaluation and Utilization of Evaluation Research

There are several factors that pose serious problems during the evaluation of a policy.¹⁵ The first factor that clearly causes problems in any evaluation is ambiguity in the specification of the objectives and goals of a policy. It is common for objectives and goals to be unclear or equivocal, and this can cloud the assessment of whether the goals and objectives have been met. A second problem can occur when objectives have been stated but there is no clearly defined way to measure the success of the objective. A third problem is the presence of side effects from other policies that interact with the program being evaluated; in essence, the problem is how to weigh outside factors relative to the operation of the program being evaluated. A fourth problem is that the necessary data are often not available or, if available, are not in a suitable state for the purposes of the study. Fifth, the politics of the situation will often interfere with the evaluation process. For example, there may be resistance by administrators or other policy actors to an evaluation being conducted or to its findings. A sixth problem is determining if sufficient resources are being allocated to conduct the most appropriate type of evaluation. Finally, the need for validity can prove to be a problem for evaluators.

There are three broad categories of validity that evaluators must be concerned with: internal, external, and programmatic. Box 50.4 highlights these factors in achieving a valid design. If evaluators do not pay close attention to such factors in the formulation and conduct of an evaluation, the very findings of the evaluation will be invalidated. The obstacles to validity are numerous and range from elements within the environment to methodological errors by the evaluator.¹⁶

BOX 50.4 Validity Types
• Internal validity: does the program do what it intends? Requires correct identification and measurement of program effects.
• Programmatic validity: does the evaluation generate information that is useful to program officials? Requires designing an evaluation acceptable to all audiences.

Overall, obstacles to evaluation are important factors that can prevent successful evaluation and hinder the evaluator's recommendations being utilized by policy makers. Quite often, because of contextual factors, human factors, or technical factors, decision makers may be prevented from utilizing the results of the study. Contextual factors involve factors within the environment which will be affected in unacceptable ways if decision makers act on the recommendations of the evaluation. Technical factors refer to the problems caused by methodological considerations. Human factors are obstacles posed by the personality and psychological profile of the decision makers, evaluators, the client, and other internal actors. In reality, evaluation is fraught with problems and weaknesses.

Summary

In this chapter, we have discussed what policy evaluation is, how it is carried out, who does it, the problems that may be encountered, and the obstacles to the utilization of recommendations made by program or policy evaluation studies.
Over the past thirty years, policy evaluation has attracted considerable interest among policy makers at all levels in the public sector. It is important to remember that evaluation is essential, for it often tells policy makers what is working and what is not.

End Notes

1. R. Haveman, "Policy Evaluation Research After Twenty Years," Policy Studies Journal 16, no. 2 (Winter 1987), pp. 191-218.
2. M. E. Rushefsky, Public Policy in the United States (Belmont: Wadsworth, 1990), p. 16.
3. M. J. Dubnick and B. A. Bardes, Thinking About Public Policy (New York: Wiley, 1983), p. 203.
4. J. S. Wholey et al., Federal Evaluation Policy (Washington, D.C.: The Urban Institute, 1970), p. 15.
5. R. Haveman, "Policy Evaluation Research After Twenty Years," Policy Studies Journal 16, no. 2 (Winter 1987), pp. 191-218.
6. R. D. Bingham and C. L. Felbinger, Evaluation in Practice: A Methodological Approach (New York: Longman, 1989), p. 4.
7. Ibid., p. 3.
8. General Accounting Office, Federal Evaluation Issues (Washington, D.C.: GAO, 1989), p. 4.
9. F. G. Caro, ed., Readings in Evaluation Research, 2nd ed. (New York: Russell Sage Foundation, 1977), p. 6.
10. J. R. Sanders, The Program Evaluation Standards (Thousand Oaks: Sage, 1994), pp. 8-12.
11. R. D. Sylvia, K. M. Sylvia, and E. M. Gunn, Program Planning and Evaluation for the Public Manager, 2nd ed. (Prospect Heights: Waveland Press, 1997), pp. 171-174.
12. E. R. House, Evaluating with Validity (Thousand Oaks: Sage, 1980), pp. 20-33.
13. J. R. Sanders, The Program Evaluation Standards (Thousand Oaks: Sage, 1994), pp. 6-12.
14. J. Q. Wilson, "On Pettigrew and Armor," The Public Interest 30 (Winter 1973), pp. 132-134.
15. B. W. Hogwood and L. A. Gunn, Policy Analysis for the Real World (New York: Oxford University Press, 1984), pp. 220-227.
16. R. D. Sylvia, K. M. Sylvia, and E. M. Gunn, Program Planning and Evaluation for the Public Manager, 2nd ed. (Prospect Heights: Waveland Press, 1997), pp. 117-127.
