Setting The Scene: The State of Humanitarian Evaluations in Canada



François Audet, Canadian research institute on emergencies and aid (OCCAH), Université de Montréal
December 13, 2011

Objectives

What do we mean by evaluation of humanitarian interventions in Canada?

What kind of Canadian expertise do we have, need, and expect?

What are the challenges of evaluating the impact of humanitarian programs?

Methodology

Research and review of the scientific and institutional literature;


Interviews with 15 experts & Canadian humanitarian organizations;

What does the literature say?

No common definition (OECD, DARA, OCHA, Harvey, Beck, etc.). Evaluation of humanitarian action (EHA) is defined by ALNAP as:
"a systematic and impartial examination of humanitarian action intended to draw lessons to improve policy and practice and enhance accountability";

What does the literature say?


Basic assumptions about evaluation of humanitarian interventions

It needs to be planned as early as possible in the project process;
It should address the eight recommended evaluation criteria: relevance, efficiency, effectiveness, impact, sustainability, connectedness, coherence, and coverage;

There are different kinds of evaluation, depending on the objectives of the process:
M&E; real-time evaluation; lessons learned; internal vs. external; etc.;

What does the literature say?

Evidence of conflicts of interest between the evaluator, the evaluated, and the donor (House, 2004). Key findings & major concerns:
1) Evaluations rarely target genuinely problematic projects, or the real problems within a project;
2) They are often a technocratic exercise rather than an in-depth, objective assessment;
3) They are not systematized and are rarely used as a lessons-learned process (Pérouse de Montclos, 2011);

Some key results from the interviews

What do we mean by evaluation of humanitarian interventions in Canada?

Inconsistency: internal and/or external; objectives & methodologies varied;
Dilemma with objectivity: internal assessments are more frequent than external ones;
Donor-driven versus lessons-learned or capacity-building driven;
Output-driven versus process-driven;

What kind of Canadian expertise do we have, need and expect?

Limited internal expertise; we often go through other branches/federations;
It seems preferable to do the evaluation ourselves, as outside expertise is perceived as even weaker;
Very limited capacity in Canada; not enough resources are assigned;
Consensus on a very weak institutional culture of evaluation;
This context widens the existing gap between the marketing rhetoric and the field reality;

Key conclusions & challenges of evaluating the impact of humanitarian programs

First: the literature and the practitioners' discourse tend to agree;
High turnover of staff working in humanitarian action, which erodes organisational memory and works against a more systematic approach;
Reactive and rapid implementation of humanitarian action, which hampers planning and the identification of performance measures;
Limited funding available;

Key conclusions and challenges of evaluating the impact of humanitarian programs

Lack of objectivity: most assessments are done internally; there is a need to move toward a more impartial approach;
Canadian humanitarian organizations should encourage a cultural change:
Training & sharing lessons learned;
Exposing and debating the results;
Working with academics to foster debate, ensure objectivity, develop methodologies, etc.;

References
Pérouse de Montclos, M.A. (2011). L'aide humanitaire dans les pays en développement : qui évalue qui ? Mondes en développement, 2011/1, n°153, p. 111-120. DOI: 10.3917/med.153.0111
Harvey, P., Stoddard, A., Harmer, A. and Taylor, G. (2009). L'état du système humanitaire : évaluer les performances et les progrès. Étude pilote. London: ALNAP.

Beck, T. (2006). Evaluating humanitarian action using the OECD-DAC criteria: An ALNAP guide for humanitarian agencies. London, UK: Overseas Development Institute.
House, E. (2004). The role of the evaluator in a political world. Canadian Journal of Program Evaluation, 19(2), 1-16.

OECD (1999), Guidance for Evaluating Humanitarian Assistance in Complex Emergencies. Paris.
Hallam, A. (1998), Evaluating Humanitarian Assistance Programmes in Complex Emergencies. London, UK: ODI. Good Practice Review 7.

Harvey, P. (1997). PRA and Participatory Approaches in Emergency Situations: A Review of the Literature and ACTIONAID's Experience. London: ACTIONAID.
Dabelstein, N. (1996). Evaluating the International Humanitarian System. Disasters, 20(4). London: ODI.

Thank you

Observatoire canadien sur les crises et l'aide humanitaire, Université de Montréal, www.occah.org
