
Internal Oversight Service

Evaluation Section

IOS/EVS/PI/136 REV.
Original: English

Formative Evaluation of UNESCO’s Results-Reporting

Final Report

Author:
Kim Forss (external consultant)

In collaboration with:
Ranwa Safadi (BSP)
Caroline Siebold (BSP)
Jos Vaessen (IOS)

June 2014
Table of Contents
Table of Contents ..................................................................................................................................................................................... 2
Executive Summary ................................................................................................................................................................................ 3
Chapter 1. Introduction ......................................................................................................................................................................... 7
Background ........................................................................................................................................................................................... 7
Purpose ................................................................................................................................................................................................... 8
Methodology ......................................................................................................................................................................................... 9
Limitations .......................................................................................................................................................................................... 10
Chapter 2. Achievements and expectations ............................................................................................................................... 12
Achievements ..................................................................................................................................................................................... 12
Expectations of Member States .................................................................................................................................................. 13
Past decisions of UNESCO’s Governing Bodies .................................................................................................................. 13
Survey to UNESCO’s Member States ..................................................................................................................................... 13
Chapter 3. Challenges in results-reporting ................................................................................................................................ 16
Introduction ........................................................................................................................................................................................ 16
Definition and number of results .............................................................................................................................................. 16
Defining results ............................................................................................................................................................................. 16
Clarity of formulation and reporting on results .............................................................................................................. 18
The number of results ................................................................................................................................................................. 24
Key findings .................................................................................................................................................................................... 25
Causal linkages between activities, outputs and outcomes........................................................................................... 26
Key findings .................................................................................................................................................................................... 31
Data and evidence ............................................................................................................................................................................ 31
Reporting on activities and outputs ..................................................................................................................................... 32
Reporting on results (at outcome level) .............................................................................................................................. 33
The roles of self-reporting versus evaluation .................................................................................................................... 35
Key findings .................................................................................................................................................................................... 35
Efficiency of results-reporting .................................................................................................................................................... 36
Key findings .................................................................................................................................................................................... 38
Chapter 4. Going forward ................................................................................................................................................................... 39
Focusing the task .............................................................................................................................................................................. 39
Principles of reform......................................................................................................................................................................... 39
A model for results-reporting ..................................................................................................................................................... 40
Feasibility............................................................................................................................................................................................. 43
Annexes ...................................................................................................................................................................................................... 45
Annex 1. Terms of Reference of the results-reporting evaluation ............................................................................. 45
Annex 2. Key references and documents consulted ......................................................................................................... 55
Annex 3. Examples of theories of change in the UNESCO system ............................................................................... 58

Executive Summary
Background

The past biennium (2012-2013) marks the end of the biennial programming model as the
Organization now moves into a four-year programming cycle with the new 37 C/5 (2014-2017).
This transition poses new challenges but at the same time provides a critical
window of opportunity to reflect on the Organization’s results-based management practices and
to introduce improvements. A key element of such a reflection concerns the way the
Organization reports on the implementation of its programme and the achievement of its
results.

At the outset of this new quadrennial programming cycle, the following aspects lie at the heart of
a review of UNESCO’s results-reporting:
 While noting progress made in the Organization’s reporting over time, the Executive
Board in a number of decisions has repeatedly expressed the need for further
improvements in the format and content of reporting.
 Member States and donors increasingly expect UNESCO to provide evidence of the
outcomes (and impact)1 of its interventions. Several external reviews have highlighted
the need for strengthening the Organization’s ability to do so.
 With the transition to a four-year cycle in response to the quadrennial comprehensive
policy review of operational activities (QCPR) of the UN system and the UN’s ongoing
comprehensive reform efforts towards increased system-wide coherence, UN system-
wide harmonization has become ever more relevant.
 The Organization is moving to apply the principles of results-based budgeting for
expected results of the 37 C/5 (2014-2017).

To address these issues and to strengthen the results-reporting model towards the future as
UNESCO moves into a new cycle, the Internal Oversight Service (IOS) and Bureau for Strategic
Planning (BSP) have undertaken a joint formative evaluation of UNESCO’s results-reporting,2
principally concerning the Organization’s six-monthly (EX/4) and biennial (C/3) reporting. The
main purpose of the evaluation was to analyze the strengths and weaknesses of the current
results-reporting model used in the UNESCO system and, on the basis of the analysis, develop a
proposal for improved results-reporting. The evaluation was conducted in consultation with
Member States3.

Findings of the evaluation of UNESCO’s results-reporting

The expectations of Member States, the main users of the EX/4 and C/3 reports, have been
repeatedly expressed in decisions of the Executive Board. The findings of a short survey, which

1 Outputs concern products and services directly generated by the intervention (within the control of the
organization). Outcomes are institutional and behavioral changes in actors, which ultimately contribute to changes in
society (=impact). The latter refers to the positive or negative long-term effects produced by an intervention, directly
or indirectly, intended or unintended (see UNDG RBM Handbook, 2012: 7). Expected results in UNESCO, in principle,
should refer to the outcome level. Impact, i.e. societal change (at institutional and beneficiary levels), is very
difficult and costly to evaluate given the challenges of attribution.
2 Results-reporting includes the reporting of UNESCO’s activities, outputs and outcomes (see also previous footnote).
3 The evaluation was conducted by a team of IOS and BSP staff in collaboration with an external expert. The
evaluation essentially followed a top-down perspective, i.e. assessing the format and content of UNESCO’s statutory results
report (EX/4 and C/3) and subsequently the underlying information mechanisms feeding into the report. A set of
criteria for good results-reporting was applied as a framework for the assessment. The evaluation employed inter alia
a comparative perspective; interviews and desk study reviews were conducted with regard to UNESCO (Headquarters
and Field Offices) and four selected UN Organizations (UNDP, UNICEF, FAO, ILO). Finally, interaction with Member
States, the principal users of UNESCO’s results reports, was ensured through a series of informal meetings with
delegations, and a short survey on results-reporting sent out to all delegations to elicit their views.

had a satisfactory response rate (34 percent of Executive Board Members and 20 percent of all
Member States), allowed for a more detailed analysis of expectations. The findings of the survey
are broadly in line with previous Executive Board decisions but highlight the following two main
points: the need for (i) an analysis of strategic challenges in the implementation of the
Organization’s programme and (ii) more synthesized and aggregated results information to
present a comprehensive and balanced overview of UNESCO’s areas of work. Moreover, results-
reporting should be analytical, strategic, concise and forward-looking. Finally, there is an
expectation of a clearer distinction between output and outcome reporting.

The following achievements have been identified by the evaluation:


 UNESCO has long experience in implementing RBM principles informed by UN system
standards.
 The system and practices of self-reporting on results in UNESCO have been subject to
continuous updates and improvements.
 Training and guidance materials on self-reporting in SISTER4 have been produced.
 Staff awareness of and compliance with self-reporting requirements (in SISTER) have
improved.
There is a gap between where the Organization currently stands with results-reporting and the
expectations of Member States identified above. Overall, the evaluation identified three key areas of
challenge:

1. Reliability of data and evidence on results:


 Activity and output reporting are fundamentally different from reporting on expected
results (at outcome level). While the former can be relatively easily observed or
captured, the latter requires resources, time and explicit data collection at the level of
the target group(s).
 Recent EX/4 and C/3 reports include substantive reporting on implementation and
output delivery. However, there is scope for improving the synthetic and strategic
analysis of these dimensions including the analysis of challenges and achievement of
targets.
 Self-reporting on outcomes of UNESCO’s work is fragmented and weak. Overall, staff
members do not have the incentives, time, resources and data to present unbiased and
reliable data on expected results at outcome level. Consequently, this negatively affects
the scope for aggregate reporting on expected results in the EX/4 and C/3.

2. Efficiency of reporting:
 While UN system organizations face quite similar challenges across the system in terms
of results-based management and reporting, the frequency of statutory reporting to the
Governing Bodies is considerably higher in UNESCO than in other selected UN
Organizations. Moreover, the number of pages of the statutory reports (EX/4 and C/3) is
rather high compared to these same Organizations.
 Within the UNESCO results-reporting system, the self-reporting workload (due to the
frequency of reporting and the small unit of analysis of reporting) is too high.
 Taking into account the previous two elements, the overall conclusion is that the value
for money of self-reporting practices in the UNESCO system is rather low.

3. Linkages between planning, self-reporting and evaluation:


 Expected results are formulated on the basis of a political and participatory process,
which at times can be cumbersome and lead to an excessively high number of mandated
results and performance indicators.

4 System of Information on Strategies, Tasks and the Evaluation of Results: an IT-based management tool which
supports the Results-Based Management (RBM) approach as applied in UNESCO.

 Expected results in the C/5 (and EX/4 and C/3), as well as corresponding indicators, are
inconsistently formulated: they interchangeably refer to activities, outputs and
outcomes. The number of expected results that UNESCO reports on is very high. Other
UN Organizations tend to have considerably fewer and more consistently formulated
results.
 Evidence from audits and evaluations shows an inconsistent use of the results
terminology (output, outcome, impact) in UNESCO’s reporting (e.g. in SISTER and to
donors).
 Given the nature of UNESCO’s work, e.g. advocacy, policy advice and standard setting, it
is often quite difficult and costly to capture and explain how change is brought about as a
result of UNESCO’s interventions. This increases the need for a clear articulation of
causal assumptions linking activities to results. However, in most areas of UNESCO’s
work the causal logic underlying activities is not sufficiently clear. As a result, aspects
such as the causal analysis, the choice of indicators, or the scope and coverage of
monitoring and reporting, are often weak or incomplete.
 The limitations of self-reporting (especially on expected results) can only be partially
resolved through strengthening validation and self-assessment at higher levels. It also
requires reconsidering the role of evaluations as they do not systematically feed into the
self-reporting processes and consequently into the EX/4 and C/3.

Towards a new model for results-reporting

In order to respond to the multiple challenges identified in the evaluation, the future EX/4 and
C/3 should be based on the following principles of good results-reporting:
 A clear distinction between reporting on activities and output delivery and reporting on
expected results (at outcome level).
 A change in the frequency of reporting allowing for more rigorous analysis and
reporting.
 A recalibration of self-reported and evaluation information feeding into the results-
reporting.

A concrete proposal – which would fit these and other principles specified in the report, bring
UNESCO closer to good practices in other UN organizations, and support the successful
implementation of RBB – is that the current model of six-monthly reporting in the EX/4 and
biennial reporting in the C/3 be replaced by:
 An annual report on implementation of the programme (activities and output
delivery), presenting aggregate strategic analysis on implementation of activities and
output delivery, including indications on the extent to which programme delivery is on
track, cross-cutting challenges in implementation as well as proposals for corrective
action.
 A quadrennial report on results (achievement of outcomes)5, which would report on
the extent to which expected results have been achieved and how the organization is
making a difference for its intended beneficiaries.

Implications for the role of self-reporting and evaluation. The annual report on
implementation would be supported by the current self-reporting in SISTER, with some
simplifications at activity and project level and a strengthened process of self-assessment of
implementation at country and programme level. In principle, the human resources that would
be freed up with the reduction in frequency in reporting and a simplification of reporting at
lower levels of intervention could be used at higher levels (country and programme) to
strengthen programme implementation monitoring, analysis and reporting. The quadrennial

5 There are good examples of organization-wide reports on results with cycles of four years or longer (e.g. the GEF
Overall Performance Study, the UNDP Evaluation of the Strategic Plan).

report on results (achievement of outcomes) would be largely informed by evaluations carried
out by Sectors and by IOS and supported by a quadrennial self-assessment exercise.

Implications for the implementation of RBB and the basis for strategic decision-making to
allocate resources in areas where UNESCO is making a difference. The results report
(achievement of expected results at outcome level) would present a more comprehensive
evaluative analysis of the UNESCO programme (including the relevance, comparative
advantages and effectiveness of UNESCO’s work). The principal idea would be that through a
better coordination of evaluative activities and a review of evaluative evidence at the end of the
quadrennium, more reliable comparative data would be presented to the Governing Bodies to
support decision-making on strategic directions and the allocation of human and financial
resources of the Organization.

Chapter 1. Introduction
Background

Over the last ten years donors and the wider international development community have
increasingly focused their attention on the definition of clear objectives and targets for policy
interventions as well as on the issue of assessment of the extent to which they have been
achieved. As part of this trend, UN organizations have stepped up their efforts to introduce
results-based management (RBM) principles within their programming, monitoring and
reporting frameworks. In addition, UN organizations are under a variety of external and internal
pressures to position themselves more clearly within the UN system and vis-à-vis other
stakeholders. At UNESCO a number of initiatives have led to a strengthening of RBM functions
throughout the Organization. Yet despite some progress, significant challenges remain.

The past biennium (2012-2013) marks the end of the biennial programming model as the
Organization now moves into a four-year programming cycle with the new 37 C/5 (2014-2017).
This transition poses new challenges but at the same time provides a critical window
of opportunity to reflect on the Organization’s results-based management practices and to
introduce improvements. A key element of such a reflection concerns the way the Organization
reports on the implementation of its programme and the achievement of its results.

At the outset of this new quadrennial programming cycle, the following aspects lie at the heart of
a review of UNESCO’s results-reporting:
 While noting progress made in the Organization’s reporting over time, the Executive
Board in a number of decisions has repeatedly expressed the need for further
improvements in the format and content of reporting6.
 Member States and donors increasingly expect UNESCO to provide evidence of the
outcomes (and impact) of its interventions. Several external reviews have highlighted
the need for strengthening the Organization’s ability to do so.
 With the transition to a four-year cycle in response to the quadrennial comprehensive
policy review of operational activities (QCPR) of the UN system, UN system-wide
coherence and harmonization have become more relevant.
 The Organization is moving to apply the principles of results-based budgeting for
expected results of the 37 C/5 (2014-2017).

To address these issues and to strengthen the results-reporting model towards the future as
UNESCO moves into a new cycle, the Internal Oversight Service (IOS) and Bureau for Strategic
Planning (BSP) have undertaken a joint formative evaluation of UNESCO’s results-reporting,
principally the Organization’s six-monthly EX/47 and biennial C/38 reporting.

UNESCO reports on results in many different forms. The focus in this study lies on the statutory
reports to UNESCO’s Governing Bodies (Executive Board and General Conference) and thus to
Member States at the global level. The study does not include a review of other main sources of
results information, which are to be found in statutory reporting by the Secretariat to the
Governing Bodies of UNESCO’s subsidiary organs, such as the Governing Bodies of culture
Conventions (1972 Convention, 2005 Convention, etc.), of Category 1 Institutes (IBE, IIEP, UIL
etc.), or of international/intergovernmental Scientific Programmes (e.g. MAB, IOC, etc.), as well
as in final narrative reports and evaluations of extrabudgetary projects and programmes,

6 See Chapter 2.
7 Report by the Director-General on the execution of the programme adopted by the General Conference (six-
monthly).
8 Report of the Director-General on the activities of the Organization in [biennium] communicated to Member States
and the Executive Board in accordance with Article VI.3.b of the Constitution (biennial).

specific country reports, and in the evaluation reports emanating from IOS. A core element
of UNESCO’s RBM system is the biennial results report (C/3), which is based on six-monthly
progress reports (EX/4). In line with the decisions adopted by the Executive Board at its 184th,
186th and 189th sessions, the EX/4 document now consists of two reports: a printed document
distributed to the Executive Board and a more detailed comprehensive report on
implementation status which is available online.

Figure 1 presents an overview of the current cycle of reporting. As stipulated in resolution 36
C/105, UNESCO’s General Conference has decided to change the Organization’s two-year
programming (and reporting9) cycle to a four-year cycle. The resolution responds to the UN General
Assembly’s call on UN system organizations to align their programming cycles with the
quadrennial comprehensive policy review (QCPR) of UN operational activities. The new UNESCO
programme cycle will commence in 2014. This transition presents an opportunity for
introducing substantive improvements in results-reporting, with the aim of further
strengthening its quality and relevance.

Figure 1 Current reporting cycle on the Organization’s results10

[Figure: within a biennium, the biennial programme and budget (C/5) is followed by four six-monthly results reports (EX/4) and the biennial results report (C/3).]

Purpose

The main purpose of the evaluation is to analyse the strengths and weaknesses of the current
results-reporting model used in UNESCO and, on the basis of the analysis, develop a proposal for
improved results-reporting. The Terms of Reference are presented in Annex 1.

This report discusses the findings of the evaluation. In addition, the report includes a proposal
for a revised model for results-reporting. The evaluation’s outputs are expected to feed into the
work and decision-making practices of the Secretariat of UNESCO (Headquarters, Field Offices
and Category 1 Institutes) and UNESCO’s Executive Board and General Conference. Moreover,
the revised results-reporting will benefit Member States, donors and the public at large in terms
of improved transparency and accountability on the Organization’s achievements and strategic
priorities.

More particularly, it is expected that the findings of the report and the proposed new model will
feed into a forward-looking process, involving the Secretariat and Member States, of revising
UNESCO’s results-reporting model in line with the new cycle and responding to the challenges
identified above and in the body of the report.

9 The budget will be allocated on a two-year basis.


10 In practice, the last six-monthly EX/4 exercise and the (draft) C/3 reporting coincide.

Methodology

The evaluation was conducted by a team of IOS and BSP staff in collaboration with an external
expert. The evaluation employed a top-down perspective, i.e. assessing the format and content
of UNESCO’s statutory results report (EX/4 and C/3) and subsequently the underlying
information mechanisms feeding into the report.

A set of criteria for good results-reporting was applied as a framework for the assessment. There
is a solid body of research as well as applied management thinking11 discussing these issues. The
criteria applied in this exercise, based on the existing literature as well as experience, were the
following:
 Focus (simplifying UNESCO’s results framework and developing a more synthetic view
on UNESCO’s key achievements and comparative advantages)
 Clarity (showing the linkages between what UNESCO does, how this influences processes
of change, and subsequently contributes to achieving UNESCO’s expected results and
strategic objectives)
 Transparency (of programme and budget information, including regular programme and
extrabudgetary funds, following the principle of results-based budgeting)
 Evidence-based nature (improving the empirical evidence base of the results-reporting)
 Strategic direction (providing clear recommendations on improving value for money to
better inform decision-making processes on strategic priorities and the allocation of
resources)
 Harmonization (informed by common/good practices in the UN system)
 Value for money (taking into account the feasibility, cost and value added of reporting
mechanisms and processes)

These criteria were used in the assessment of UNESCO’s results-reporting system on the pages
that follow, and also in the discussion of a proposed new model at the end of this
report. Moreover, in order to generate credible evidence on the achievement of results, i.e. a
statement on how a set of activities and outputs of the Organization has made a difference, one
needs a good foundation. On the basis of the evaluation literature and the data collected and
analysed during this exercise, we identify three key building blocks for results-reporting:

a. Clear definition of results


b. Clear identification of the causal linkages between sets of activities and outputs and how
they are intended to contribute to changes in the behaviour of individuals, institutions
and eventually the achievement of objectives (i.e. through the use of intervention logics
or theories of change)
c. Clear and reliable data based on reliable processes for collecting, analyzing and
aggregating data.

These building blocks constitute the structure of our analysis of UNESCO’s results-reporting in
Chapter 3.

The evaluation employed a comparative perspective; interviews and desk study reviews were
conducted with regard to four selected UN Organizations (UNDP, UNICEF, FAO, ILO). The
following data collection and analysis tools were used in the evaluation:
- desk study of statutory results reports of UNESCO and four selected UN Organizations
(UNDP, UNICEF, ILO, FAO)

11 Managing for Development Results: Relevance, Responsiveness and Results Orientation (2012), Asian Development
Bank; Building Better Policies: The Nuts and Bolts of Monitoring and Evaluation Systems (2012), The World Bank;
Innovations in Monitoring and Evaluation Results (2013), UNDP. See also the list of references presented in Annex 2.

- desk study of other supporting documents such as RBM guidelines, reporting templates
and formats within the Organization and other UN Organizations, UN system documents,
academic and grey literature on RBM
- interviews (face to face / skype / telephone) with process owners of the central results-
reporting system in UNESCO and the four selected UN organizations
- interviews (face to face / skype / telephone) with programme staff and M&E focal points
in Bangkok and Nairobi (UNESCO and other UN Organizations)
- consultations with Member States through informal meetings
- a short survey sent to Member States

Limitations

Governing Bodies and management in multilateral organizations such as UNESCO express a keen
interest in, and provide guidance on, results-reporting and hence it is a field where practices
continuously evolve. The evaluation recognizes these changes, but is not providing a historical
description of results-reporting. A limited time perspective was chosen with a focus on the most
recent results reports. During the course of this evaluation the 194 EX/4 was being developed
and the new document shows some changes (innovations) in comparison to its predecessors.
The analysis in this evaluation mainly builds on the 192 EX/4 and a few preceding EX/4 reports.

As suggested by the ToR, the study provides a snapshot of the state of affairs concerning results-
reporting. It is not a comprehensive study of the reasons underlying the challenges and
weaknesses in results-reporting. References are made to the links between results-reporting
and the formulation of results, and also to staff incentives, capacities and other explanatory
factors. The literature on RBM and evaluation constitutes a good basis for understanding the
factors that shape current practices of results-reporting, but these were not systematically
explored in the empirical data collection.

The issue of harmonization with UN practices is primarily addressed from an organization-wide
perspective and focuses more on the view from headquarters and Governing Bodies than on the
country level. The country level is a key level of results-reporting, in particular on the
achievement of UN system common results within the framework of common country
programming frameworks such as the UNDAF or UNPAF. While taking note of some of the
complexities of reporting at country level and the associated complexities in aggregation, an in-
depth analysis of country-level results-reporting was not part of this exercise.

The evaluation did not in any way aim to develop an evaluative judgment of results-reporting in
any of the four selected UN Organizations. The four UN organizations (two Funds/Programmes, two Specialized Agencies) were selected on the assumption that they could present approaches and models of interest – not necessarily to be reproduced or imitated, but to offer new perspectives and generate ideas for development in a spirit of inter-agency cooperation. Organizations across the UN system share similar challenges in strengthening results-reporting, and there is a system-wide interest in exchanging information and knowledge, and in sharing good practices and solutions to these shared problems, including through inter-agency groups such as the UN Strategic Planning Network. It is also worth mentioning that while results-reporting needs are to a certain extent
specific to each individual UN organization, there are common system-wide pressures such as
the quest for UN coherence and harmonization, including as called for in the 2012 ‘quadrennial
comprehensive policy review of operational activities for development of the United Nations
system’ (QCPR) and ‘Delivering as One’. The UN 2012 Secretary-General’s five-year action
agenda heralded a “second generation of ‘Delivering as One’ which will focus on managing and
monitoring for results, ensuring increased accountability and improved outcomes”.

Results-reporting is intertwined with planning, monitoring and evaluation, but the focus of this analysis is mainly on results-reporting. A further differentiation can be made. While the processes of evaluation and self-reporting as (potential) information mechanisms to the EX/4 and C/3 are quite central to the analysis, the links to planning and monitoring processes would have deserved more attention. Again the same constraints of focus and resources apply.

The detailed analyses of results as reported in the EX/4 (and C/3) build on a random sample selected from the C/3 and EX/4. It is a small sample, but the evidence is quite unambiguous and is largely confirmed by other studies and evaluations that are quoted in the report. In most cases, a more superficial analysis of additional results (from other EX/4 and C/3 reports) was undertaken, which strengthened our confidence in the findings.

Chapter 2. Achievements and expectations
Achievements

On the basis of interviews with UNESCO staff and desk study, especially the Audit of UNESCO’s
Project and Activity Monitoring and the Diagnostic Study of Evaluations of UNESCO’s
Extrabudgetary Activities, the evaluation has identified the following achievements of UNESCO
in the field of results-reporting:
- UNESCO has a long experience of implementing RBM principles informed by UN
standards.
- Training and guidance materials on self-reporting in SISTER have been produced. The
key guidance materials are listed in the reference list of this evaluation.
- As a result of the previous two points, over time staff awareness of and compliance with
self-reporting requirements (in SISTER) have significantly improved. While there are
still fundamental challenges in UNESCO’s self-reporting system (see Chapter 3), there is
no doubt that there has been a steady improvement in compliance with reporting,
making the Organization’s SISTER system more and more useful as a programme
management information system.
- As a result of critical signals from UNESCO’s Executive Board (see next section) the
system and practices of self-reporting on results in UNESCO have been subject to
continuous updates and improvements. An example of an important improvement has
been the introduction of a synthetic summary assessment of key topics in the EX/4 (see
Table 1).
- Although the Executive Board has regularly highlighted challenges faced in results-
reporting, it has also expressed satisfaction and recognized improvements in a
number of key areas: the format and structure of the EX/4, including clearer linkages
between results achieved, the Main Lines of Action and expected results (190
EX/Decision 4, Part I B para. 11); and continued efforts to improve the structure of the
report, the analytical approach and the quality of information and evidence presented in
it (192 EX/Decision 4, Part I para. 5).

Table 1 Content analysis of executive summaries presented in recent EX/4 results reports

| Aspect in executive summary | 192 EX/4 | 191 EX/4 | 190 EX/4 | 189 EX/4 |
| Period covered by the report | 18 months (focus last 6 months) | 12 months | 6 months | — |
| Structure of the EX/4 report | Yes | Yes | Yes | — |
| General description of progress towards the achievement of the expected results (C/5) | Yes | Yes | Yes | — |
| Illustrations of progress in the Major Programmes | Yes (Education, NSc, Culture) | Yes (all except SHS) | Yes (Education) | — |
| Actions undertaken with respect to the challenging financial context | Yes (detailed) | Yes (detailed) | Yes | — |
| Illustrations of actions undertaken to assert UNESCO leadership (e.g. conferences, events, declarations…) | Yes | No | No | — |
| External challenges | Yes (global financial crisis) | No | No | — |
| Sources of funding for programme execution | Yes (pie chart) | Yes (pie chart) | Yes | — |
| Progress made in meeting the Roadmap targets | Yes | Yes | Yes | — |
| Organisation-wide expenditure rate | Yes | Yes | No | — |
| Expenditure rate in Regular Programme funds | Yes (+ rates for all Major Programmes) | Yes | Yes | — |
| Expenditure rate in extrabudgetary funds | No | Yes | Yes | — |
| Findings of the qualitative analysis of the different Major Programmes | No | Yes | Yes | — |
| Private-sector partnerships | No | No | Yes | — |

Note: the 189 EX/4 contained no executive summary.

Expectations of Member States
Past decisions of UNESCO’s Governing Bodies
UNESCO’s Executive Board has repeatedly expressed its view on the format and content of
UNESCO’s results-reporting. Within the framework of the evaluation the recent history of all
decisions by the Executive Board was reviewed (starting from the 184th session of the Executive
Board).

The main themes emerging from this analysis demonstrate Member States’ need for a) more
outcome-based reporting versus the current focus on output-based reporting; b) more concise
and analytical reporting supported by evidence of results achieved; and c) improvements in the
quality and reliability of the data collected. This call for improvement is driven in large part by
the Executive Board’s responsibility to provide an ongoing strategic assessment of the
performance of individual programmes, as called for in 34 C/Resolution 89.

Specific Executive Board decisions are paraphrased below to illustrate the evolution of its
thinking on improvements needed in results-based reporting:
 184 EX/Decision 4 highlights the need for more outcome-based programming and
reporting; for better monitoring systems and data collection tools that include data to
assess quality and usefulness; and for report format and content to include evidence-based
analyses of the results achieved;
 186 EX/Decision 4 refers to improvements in the overall assessment of key results by
providing a more concise and evidence-based analysis; notes that many expected results,
performance indicators, baselines and benchmarks do not support more qualitative
outcome-based reporting; and requests the implementation of a systematic and results-based
programming, management, monitoring and reporting approach;
 189 EX/Decision 4 reiterates the need for report content to include more evidence-
based analysis; to establish clearer linkages between the expected results of the C/5 and
the results achieved, making better use of empirical evidence provided in programme
evaluations; and
 192 EX/Decision 4 calls for strengthening the monitoring framework to improve the
quality of data collected and reporting.

Survey to UNESCO’s Member States


A survey was sent out to Member States. 20% (41) of Member States (including Associate
Members) submitted their responses. Half of these responding Member States are currently
members of the Executive Board, which amounts to a response rate of 34% among Executive
Board Members. The response rate, taking into account the short response period of the survey,
can be considered satisfactory.

The findings of the survey are broadly in line with previous Executive Board decisions but
highlight the following two main points: the need for (i) an analysis of strategic challenges in the
implementation of the Organization’s programme and (ii) more synthesized and aggregated
results information to present a comprehensive and balanced overview of UNESCO’s areas of
work. Moreover, results-reporting should be analytical, strategic, concise and forward-looking.
Finally, there is an expectation of a clearer distinction between output and outcome reporting.

Analysis of main findings


Member States were requested to assign a level of importance to seven key areas of the
currently used results’ reports (see Figure 2). According to the responding delegations, the most
important aspect is the analysis of the strategic challenges that are faced in implementing the
programme. The Member States also attach a lot of importance to synthesized and aggregated
results that present a comprehensive and balanced overview of a particular area of work. In
addition, information on regular programme and extrabudgetary resources mobilized to achieve
specific results scores high on expectations. The answer category on information which supports
governance and strategic decision-making also scores high but can be considered a cross-cutting
element that overlaps with the previous, more specific elements.

Figure 2 Importance of key areas of results’ reports

[Stacked-bar chart showing, for each of the following aspects, the share of respondents rating it “Most important”, “Important”, “Of limited importance”, “Not important” or “Do not know/No opinion”:
- Synthesized and aggregated results information to present a comprehensive and balanced overview of a particular area of work
- Detailed information on the achievement of expected results as defined in the Organization’s main strategic documents
- Information on regular programme and extrabudgetary resources mobilized to achieve specific results
- Analysis of the strategic challenges faced in implementing the programme
- Success stories highlighting achievements
- An overview of main events having taken place
- Information to support governance and strategic decision-making]

Apart from responding to the statements presented in Figure 2, respondents had three
opportunities to provide qualitative information spontaneously: as an addition to the assessment
of the importance of key areas, as suggestions for additional aspects to be incorporated in
UNESCO’s results’ reports, or as general comments at the end of the survey. Given the similar
nature of the responses provided to each of these questions, it was deemed appropriate to
conduct an integrated analysis of the qualitative information obtained throughout the survey.
Whenever a particular type of response was mentioned in any of the three questions, it was
included in the statistics; however, repeated mentions by the same respondent across the three
questions were counted only once, to avoid double-counting.

After reading through the responses, sets of categories/labels were developed. The most
frequently occurring topics are displayed in Figure 3.
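The tallying principle described above (count a topic once per respondent, however many of the three open-ended questions mention it) can be sketched as follows. This is a hedged illustration only: the respondent identifiers, topic labels and data are hypothetical, and the evaluation's actual coding was done by hand.

```python
from collections import Counter

# Hypothetical coded answers: for each respondent, the topic labels assigned
# to each of the three open-ended survey questions.
responses = {
    "MS-01": [{"more_concise"}, {"more_strategic"}, {"more_concise"}],
    "MS-02": [{"more_strategic", "report_on_failures"}, set(), set()],
    "MS-03": [set(), {"more_concise"}, set()],
}

counts = Counter()
for answers in responses.values():
    # A topic mentioned in any of the three questions counts once per
    # respondent, so repeats by the same respondent are not double-counted.
    topics = set().union(*answers)
    counts.update(topics)

n = len(responses)
for topic, c in counts.most_common():
    print(f"{topic}: {c / n:.0%} of respondents")
```

With this rule, "more_concise" is counted twice (MS-01 and MS-03), even though MS-01 mentioned it in two separate questions.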

Figure 3 ‘Spontaneous’ mention of priorities (percentage of respondents)

- More strategic and more analytical: 34%
- More concise: 29%
- Provide recommendations for future actions: 27%
- Report on progress towards the expected results: 27%
- Use of baselines/benchmarks/performance indicators: 24%
- Report on challenges and failures: 24%
- Integration of financial and operational aspects: 20%
- Distinction between outcomes and outputs: 20%

The aspects most frequently mentioned spontaneously in the survey relate to the need for
strategic and analytical rather than descriptive reporting. According to a substantial share of
respondents (34%), this shift in the nature of the reports would result in a more practical and
useful document that can be consulted during decision-making processes. Relatedly, a
considerable number of Member States (29%) urged more concise reporting, for instance
through the avoidance of repetition and excessive detail. Most of the ‘spontaneous’ responses of
Member States on issues concerning results-reporting were also reflected in the statements that
were presented to them in the survey. One finding stands out because it was not reflected in
these statements yet scored very high as a spontaneously mentioned issue: 20% of the
respondents indicated that it is necessary to make a (clearer) distinction between results at the
impact/outcome level and results at the output level.

Chapter 3. Challenges in results-reporting

Introduction

This chapter discusses UNESCO’s results-reporting, focusing on the statutory results reports to
the Governing Bodies, supplemented with a view on results analysed in the evaluation system.
One could argue based on existing literature and good practices of reporting elsewhere that
good results-reporting relies on three key building blocks12:

- Definition of results: clear terminology regarding processes of causal change (from what
an organization does and delivers to how this contributes to changes in society) and
consistent use of the terms in practice.
- Causal linkages between areas of work and results: making explicit how activities and
outputs are intended to contribute to changes in society.
- Data and evidence: the quality of the data supporting claims on results, including the
underlying mechanisms and processes for collecting and analyzing data.

We will discuss each of these three points in separate sections. Finally, we briefly touch upon the
issue of efficiency of reporting. The evidence presented in this chapter makes a very strong case
for changes in the system of results-reporting. Those changes follow from the analysis of
information in the system, from a comparative perspective of what other UN organizations do
with respect to results-reporting, and from existing research on results-based management
systems. The outlines of reforms may be discerned here, but the next chapter concludes by
identifying the principles of a new model for results-reporting.

Definition and number of results

The quality of reporting depends on how results have been formulated in the first place. Many of
the difficulties that are encountered later have their roots in how the concept of results is
defined, understood and used. We discuss three points in this section: (1) the way the term
‘result’ is defined in UNESCO and how it relates to other definitions used in the UN and
elsewhere; (2) the consistency of use of causal language regarding results in the formulation of
and reporting on results; and (3) the number of expected results, a level of organizational
objectives that plays a key role in UNESCO’s system of reporting and accountability.

Defining results
The concept ‘results’ is somewhat unfocused. The basic meaning is simple and well defined in
dictionaries13: ‘a thing that is caused or produced by something else, a consequence’. This is a
common-sense understanding that most people would agree on. But it is very general. Business
administration is the discipline with the most longstanding and rigorous attention to results,
and within that field the term is understood in many ways: as profits (before or after tax and
financial dispositions), market shares, turnover, innovations, etc. In public administration, it
is even more ambiguous. Whereas most public agencies have an annual statement where a
result in terms of profit or loss is the bottom line, few would see that figure as an adequate
reflection of results – result then understood as a meaningful indicator of what has been

12 The literature on RBM is extensive; the key principles used in this study are found in: Kusek, J. Z. and Rist, R. C.
(2004). Ten Steps to a Results-Based Monitoring and Evaluation System. Washington, DC: World Bank; Mayne, J.
(2004). Reporting on Outcomes: Setting Performance Expectations and Telling Performance Stories. Canadian Journal
of Program Evaluation, 19(1): 31-60; Otley, D. (1999). Performance management: a framework for management control
systems research. Management Accounting Research, 10: 363-382. A key reference is: Joint Inspection Unit/JIU
(2006). Results-based management in the United Nations in the context of the reform process (JIU/REP/2006/6), by Even
Fontaine Ortiz and Guangting Tang.
13 Collins Concise Dictionary of the English Language, 1980 and 2013.

produced by the agency. Given the divergent interpretations that make the concept rather
nebulous, it has to be defined according to the specific context in which it is applied in
order to be a useful instrument of management and governance. UNESCO defines and explains14
results as follows:

‘A result is the “raison d’être” of an activity, project or programme. A result can be defined as a
describable and measurable change in state due to a cause and effect relationship induced by that
activity, project or programme. Expected results are answers to problems identified and focus on
changes that activity, project or programme are expected to bring about. A result is achieved when
the outputs produced go beyond the purpose of the interventions. It is the last step of the
transformative process, where inputs (human, institutional and financial resources) are used to
undertake interventions which lead to outputs which contribute to a desired change of situation. The
result expresses how a specific situation is expected to be different from the current situation. It often
relates to the use of outputs by intended beneficiaries and is therefore usually not under full control
of an implementation team.’

As the text shows, the emphasis is on change relating to the purpose of interventions. A result is
not only about producing an output; it is about what happens in society and among beneficiaries
as a consequence of the intervention. The RBM guidelines15 distinguish between four terms that
all relate to what happens after an intervention: outputs are produced, results are achieved,
outcomes are achieved, and an impact is realized. This is illustrated in Figure 4.

Figure 4 Definition of results

The words appear in almost chronological order: first comes an output, then a result, then an
outcome and finally an impact. As clearly illustrated in the figure, a result sits in between outputs
and outcomes. Is this a good elaboration of how the Organization should understand ‘results’? The
internal logic of the word is clear and anchored in the common-sense definition of the term, so it
can be understood by any audience. However, there are two further aspects that need to be
considered. UNESCO is part of the UN family and, in light of the overarching system reform, it is
increasingly expected to work together with – and hence produce results together with – other
agencies in the system (specialised agencies, funds and programmes, and the bi- and multilateral
donors and partners). Organisations need a common understanding of ‘results’. The UNDG has

14 UNESCO (2011). Results-Based Programming, Management and Monitoring (RBM) approach as applied at UNESCO.
Guiding Principles. BSP/RBM/2008/1.REV.5. Paris, June 2011, p. 18.
15 UNESCO (2011). Results-Based Programming, Management and Monitoring (RBM) approach as applied at UNESCO.
Guiding Principles. BSP/RBM/2008/1.REV.5. Paris, June 2011, p. 17.

made an effort to harmonize the concepts in its handbook16 and most UN agencies use these
definitions. Furthermore, OECD/DAC uses the same terminology17, so most bilateral agencies and
most civil society organisations follow suit. The common core of the definitions is that
“results” is a general concept that covers outputs, outcomes and impact:

’A result is a describable or measurable change that is derived from a cause-and-effect
relationship. There are three types of such changes – outputs, outcomes and impact –
which can be set in motion by a development intervention’ (UNDG)

‘The output, outcome or impact (intended or unintended, positive and/or negative) of a
development intervention’ (OECD/DAC)

The three terms outputs, outcomes and impact are all results, that is, consequences of
interventions. They can be seen as results in the short, medium and long term. They are also
differentiated by the nature of their causal links: outputs are created within the control of the
Organization, outcomes depend on context, and impact even more so.

Hence, although there is today no common standard for reporting within the UN system (with
the exception of the country level UNDG approach), UNESCO’s terminology appears somewhat
different from that existing elsewhere. Even though exact definitions of results terminology may
somewhat differ across the UN, the trinity “outputs, outcomes, impacts” is the most widely used
(as reflected in the UNDG Handbook), and more specifically at country level, within the
framework of common country programming. Moreover, in monitoring and evaluation practices,
the distinction between outputs on the one hand, and outcomes and impacts on the other,
intuitively makes sense and is widely accepted: outputs as being readily observable and within
the control of the intervention; and outcomes and impacts as proxies for behavioural changes in
actors leading to other changes in society, which are outside the control of the intervention,
largely depend on context, and are more difficult to capture due to attribution challenges.
However, it would need to be decided whether UNESCO wishes to formally adopt “outcomes”, as
several other UN organizations do, to express a higher-level of results as well as to simplify its
results-reporting.

The fact that UNESCO maintains a different terminology can create confusion among staff as
they have to report to multiple stakeholders, including donors and implementation partners,
which can lead to inconsistencies in the use of causal language and can negatively affect the
quality of causal analysis.18

Clarity of formulation and reporting on results


The last paragraph of the previous section brings us to a related issue: the discussion on the
actual use of the term “results” and related causal terms in the UNESCO system. This is
important at the moment when results and related performance indicators are formulated and
when the Organization reports on the results.

This study focuses on results-reporting, but it is quite clear that the quality of results-reporting
depends on the quality of the formulation of results. The results reports (EX/4 and C/3) that
were reviewed in this study report on the programmes presented in the 35 C/5 and 36 C/519,
and both the programming and reporting documents illustrate the problems with defining
results, such as:

16 UNDG (2011) Results-Based Management Handbook: Harmonizing RBM concepts and approaches for improved
development results at country level. New York, UNDG
17 OECD (2010) Glossary of Key Terms in Evaluation and Results Based Management. Paris, OECD
18 See the Diagnostic Study of Evaluations of Extrabudgetary Activities for empirical evidence (IOS/EVS/PI/128).
19 35 C/5 Approved Programme and Budget 2010-2011. 36 C/5 Approved Programme and Budget 2012-2013. In
addition, the 37 C/5 (Draft Programme and Budget 2014-2017) was also reviewed.

 The term ‘expected result’ is the most important level of high-level objectives20 that the
Organization reports on, and corresponds to what in most other agencies would be
called outcome.
 The concrete formulation of an ‘expected result’ in practice differs. Sometimes it is
formulated as an outcome or even an impact, but sometimes it also takes the form of an
output, or even a description of activities (see Table 2).
 While ‘expected results’ is the term used for achievements at the level of the
Organization, the word results is also used in relation to specific projects and activities
at a lower level.
 In theory there should be a progression of achievements from one level to the next, so
that, for example, outcomes build on a synthesis of outputs. The problem observed in the
results report is that sometimes the higher level of results is a summary and an
aggregation of lower level results, rather than a synthesis of achievements.

These problems are visible in the project documents and in SISTER, and they are sufficiently
pervasive to complicate the systematic follow-up of results at a higher level.21 Given that
expected results are formulated at different levels in the causal chain, performance indicators
are correspondingly often not adequately defined either. For example, in the course of
this study we found indicators of outputs that were formulated as impacts, or indicators of
outcomes that were expressions of activities, etc.22

Tables 2 and 3 present an illustrative overview of the nature of the results that are reported on
in the C/3 and EX/4 documents. They illustrate the previously introduced argument of
UNESCO’s results being formulated at different levels (activity, output, outcome, impact).

The information in the tables builds on a random selection of results presented by the Sectors.
For Table 2, a table of random numbers was used to select two result statements from each
Sector in the 37 C/323. The first column presents the results statement, and the following
columns indicate the level of results formulation. The first column quotes the C/324 but does not
replicate the full statement, usually only the first sentence.
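The sampling step can be sketched as follows. This is an illustrative sketch only: the statement identifiers are hypothetical placeholders, and the evaluation drew its random numbers from www.random.org rather than from a programming library.

```python
import random

# Hypothetical result statements per Sector in the 37 C/3 (placeholder IDs).
statements_by_sector = {
    "Education": ["ED-1", "ED-2", "ED-3", "ED-4"],
    "Natural Sciences": ["SC-1", "SC-2", "SC-3"],
    "Social and Human Sciences": ["SHS-1", "SHS-2", "SHS-3"],
    "Culture": ["CLT-1", "CLT-2", "CLT-3", "CLT-4"],
    "Communication and Information": ["CI-1", "CI-2", "CI-3"],
}

random.seed(37)  # fixed seed so the draw is reproducible

# Draw two distinct statements from each Sector, as described above.
sample = {
    sector: random.sample(statements, k=2)
    for sector, statements in statements_by_sector.items()
}

# Ten statements in total: two from each of the five Sectors.
total = sum(len(drawn) for drawn in sample.values())
```

Each sampled statement is then coded by the level at which it is formulated (activity, output, outcome or impact).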

Table 2 Analysis of results information in the 37 C/3 (page and § numbers in brackets)

Key achievements reported | Activity | Output | Outcome | Impact

Revitalization of UNESCO’s global leadership role in education (p.11, §2) X X


Promotion of quality of education at all levels and throughout life (p.12, X X
§5)
Through MPII’s efforts to promote gender equality in science, in Ethiopia a X
greater number of girls opted for science studies at tertiary level following
a UNESCO assessment of gender equality and learning success in public
universities (p.15, §19)
Twenty-four Member States now have enhanced capacity to address energy X
policy and management following several regional training programmes for
energy specialists (p.16, §28)

20 UNESCO also defines global priorities, strategic objectives and main lines of action. As indicated, the expected
results level is the key level of reference for reporting.
21 The Audit of UNESCO’s Project and Activity Monitoring (IOS/AUD/2013/03) arrived at the same conclusion.
22 Also reported by donor assessments of UNESCO’s monitoring practices, as reported in the Audit of UNESCO’s
Project and Activity Monitoring (IOS/AUD/2013/03), pages 8 and 9.
23 www.random.org/integers
24 Results reported on in EX/4 and C/3 documents reflect the results presented in the C/5 to which they refer. The
37 C/3 reports on the 35 C/5.
The Sector reshaped its work on gender equality, notably through a X
reorientation of activities towards analysis of the challenges faced by
women and girls for the protection of their human rights in the context of
conflict and post-conflict.(p.19, §41)
With regard to global priority Africa, the importance of increased X
investment in youth development has been further emphasized through
awareness-raising about the UNESCO Strategy on African Youth (p.19,
§39)
Over the last two years, the role of culture in the achievement of X
international development goals has been increasingly acknowledged, in
particular at the 2010 Millennium Development Goals Summit (p.22, §59)
The programmes have also enhanced cooperation among partners X X
nationally and internationally, thus contributing to global partnership
(MDG8), while creating promising conditions for future work in this field
and in the context of “Delivering as One” (p.23, §61)
The international preservation of documentary heritage was reinforced X X
through 52 new inscriptions on the Memory of the World register and the
adoption of the Memory of the World Warsaw Declaration (p.26, §77)
Partnerships with the private sector were enhanced with the successful X
launch of the ICT Competency Framework for Teachers (p.26, §78)

As an example of how the analysis was performed, the following text in the 37 C/3 provides
information on the results regarding the first key Achievement under education (§3, page 11):

Cooperation with other United Nations agencies and multilateral organizations was enhanced.
For example, the Director-General organized two meetings of the Heads of the EFA convening
agencies to ensure that all partners work in a harmonized way. The agreement on a new EFA
coordination architecture was an outcome of discussions with these agencies, as well as with
the Member States. Effective cooperation was also achieved through the interagency group on
TVET, which brings together UNESCO, the International Labour Organization, the World Bank
and the Organization for Economic Co-operation and Development (OECD), among others.
Beyond this, bilateral discussions have been initiated with sister specialized agencies, such as
the Food and Agriculture Organization and World Health Organization, as well as with the
OECD, to identify synergies. Finally, the Sector has developed new partnerships, in particular
with the private sector (see examples under “Resources mobilized”).

The text describes what the Organization has done in terms of holding meetings. It presents
which meetings were held, why and with whom. It does not say what happened after the
meetings or whether any results were achieved. So, for example, the reader notes that the
Director-General organised meetings to ensure that EFA partners worked in a harmonized way,
but cannot assess whether or not these partners have actually done so as a result of these
meetings, nor in what way the (possible) “effective cooperation” has been achieved.

The analysis here builds on two statements on results from each Sector, thus ten in total. These
ten statements represent some 20% of the total information on results in the C/3, randomly
selected and, by a visual survey of the remaining statements on results, broadly representative.

The conclusions are:

 None of the statements refer to the impact level
 20% of the statements present results at the level of outcomes
 30% of the statements describe outputs
 60% describe activities and many refer to UNESCO’s own processes

(The percentages do not add up to 100% because a statement can be coded at more than one level.)

The analysis of the information in the 192 EX/425 follows in Table 3. The same method of
random selection was applied. The information provided in the EX/4 is more detailed. However,
in comparison with the C/3, information on results is not more extensive.26 The random sample
from the selected EX/4 report contains no information on impact.27 It refers to outcomes in one
case and to descriptions of outputs in three cases. The report describes activities for all ten
results statements.

For example, the following statement reported in the EX/4: ‘Culture Sector continued to implement
its capacity-building programmes at the national level through its field offices, with a particular
focus on Africa and while seeking to ensure a higher percentage of women participants, experts
and beneficiaries’ clearly says that the Organization has continued its capacity-building
programmes. Is that a result? UNESCO guidelines on results-reporting28 clearly spell out that a
result is something that happens after an activity has occurred and after an output has been
produced; hence this statement does not describe a result. Many similar examples could be
quoted from all Sectors.

Table 3 Analysis of results information in the 192 EX/4 (page and § numbers in brackets)

Key achievements reported | Activity | Output | Outcome | Impact
UNESCO has initiated a process of rethinking education in light of global X X
societal transformations under way and is working towards producing a
document by the beginning of 2014 (p.6, §5)
[…] increased cooperation with the OECD. A joint information session for X
UNESCO Delegations was held in Paris on 7 June 2013 with the objective of
sharing the past experiences of both organizations in conducting education
policy reviews (p.6, §7)
Science education at all levels, in particular in Africa, was promoted through X
the mobilization of a wider range of public and private partners including
IEEE, and ICASE as well as the Abdus Salam International Centre for
Theoretical Physics (ICTP). (p.9, §18)
engineering education was further promoted and led to engaging more young X X
men and women to pursue engineering careers. (p.9, §19)
With regard to youth, in line with the UNESCO Strategy on African Youth, the X
policy review processes were completed in Liberia and Burundi .… (p.12,
§34)
With regard to the social inclusion and the Management of Social X
Transformation (MOST) programme, the methodology on the assessment of
the level of inclusiveness of public policies is being developed and peer-
reviewed by the Scientific Advisory Committee (SAC) of the MOST
programme and other experts. (p.12, §36)
Culture Sector continued to implement its capacity-building programmes at X
the national level through its field offices, with a particular focus on Africa
and while seeking to ensure a higher percentage of women participants, and
beneficiaries. (p.15, §47)

25 This is one of the six-monthly results reports on the 36 C/5.


26 Although the EX/4 analysed refers to a different biennium we have checked that the conclusions also hold for EX/4-
C/3 comparisons within the same biennium.
27 Taking into account any imprecision in extrapolation, one can assume that the report contains no, or at most very little, information on impact.


28 Results-Based Management Guiding Principles (2008); RBM Monitoring and Reporting Guidelines,
BSP/RBM/2012/4 REV.1; Results-Based Programming, Management and Monitoring (RBM) approach as applied at
UNESCO: Guiding Principles, BSP/RBM/2008/1.REV.5, Paris, June 2011.

.. two important milestones were passed in UNESCO’s global advocacy to X X
ensure that culture is taken into account in the post-2015 development
agenda. On 17 May 2013, the International Congress “Culture: Key to
Sustainable Development” was held in Hangzhou with the support of the
Government of China and the Chinese private sector (p.15, §46)
An enabling environment for freedom of expression as a necessary X X
prerequisite for social transformation, democracy, economic development,
and dialogue for a culture of peace and non-violence was promoted through
the twentieth anniversary celebrations of World Press Freedom Day in San
Jose, Costa Rica (p.17, §55)
UNESCO has continued to support Member States in empowering citizens X
through universal access to knowledge and the preservation of information,
including documentary heritage. (p.17, §56)

The EX/4 and the C/3 are not the only places where results are reported; there are other
important streams of information,29 for example the results information in SISTER and the final
narrative reports to donors. These streams overlap: information in SISTER is used for the EX/4
and C/3 reports, SISTER contains more information and is partly accessible to Governing Bodies, and the
final narrative reports can also feed into SISTER.30 It has been concluded31 that the results
information in SISTER, and the monitoring of activities that feeds into it, is not without
problems. Part of the reason is that the data available in SISTER constitute an inconsistent
mixture of activities, outputs and results (at outcome level). The problems pinpointed
in Tables 2 and 3 are thus equally pervasive in SISTER, the main source of information for those
Tables.

With respect to final narrative reports32, the Diagnostic Study of Evaluations of Extrabudgetary
Activities analysed the quality of results-reporting. Figure 5 shows that for 86% of the final
narrative reports in the sample covered by the study (which is broadly representative of UNESCO
as a whole), there is an issue of clarity in reporting. To a large extent, this high percentage is due
to confusion between outputs and outcomes in the reporting. The percentage is much lower for
external evaluations. The difference can be explained by the fact that external evaluations are
usually conducted by professional evaluators and/or experts with experience in evaluation,
whereas final narrative reports are written by (UNESCO) programme staff, who may have
less experience with, and knowledge of, causal analysis. Moreover, the fact that UNESCO’s RBM
terminology differs from that used by other UN organizations and bilateral donors does
not make the task of producing clear and consistent causal language any easier.

29 These include information reported at country level.


30 External evaluations, another stream of information, are less common in the UNESCO system as they are generally
only conducted for projects above a certain budget threshold. They do not systematically feed into SISTER (see the
Diagnostic Study of UNESCO’s Extrabudgetary Evaluations).
31 Audit of UNESCO’s Project and Activity Monitoring, IOS/AUD/2013/03, p. 11 and 12.
32 Which partially correspond to or feed into information inserted into SISTER and therefore included in the EX/4 and

C/3 reporting.

Figure 5 Total number of (self-) evaluation (SE) and external evaluation (EX) reports by
type and by clarity of causal chain (n = 570)

        Clear causal chain
        YES        NO         TOTAL
SE      62         373        435
        14.3%**    85.7%**    100%
EX      82         53         135
        60.7%      39.3%      100%
** p < 0.05 (Chi-square)
Source: Diagnostic Study of Evaluations of Extrabudgetary Activities
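The significance flag in Figure 5 can be verified with simple arithmetic. The sketch below (illustrative only; plain Python, no statistics library assumed) computes the Pearson chi-square statistic for the 2x2 table; with one degree of freedom, the 5% critical value is 3.84.

```python
# Pearson chi-square test on the 2x2 table from Figure 5.
# Rows: SE (self-evaluations), EX (external evaluations);
# columns: clear causal chain YES / NO.
observed = [[62, 373],   # SE
            [82, 53]]    # EX

row_totals = [sum(row) for row in observed]          # [435, 135]
col_totals = [sum(col) for col in zip(*observed)]    # [144, 426]
n = sum(row_totals)                                  # 570

# Sum of (observed - expected)^2 / expected over the four cells,
# where expected = row_total * col_total / n.
chi_square = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(2)
)
print(round(chi_square, 1))  # ≈ 117.9, far above the 3.84 critical value (df = 1)
```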

The analysis in this section shows that the actual content of the results-reporting in the
EX/4 and the C/3 primarily consists of information on programme implementation and on the
delivery of outputs. Hence, the large volume of information pertains to results at the lowest level
(outputs), and very little of it concerns how the Organization contributes to change among intended
beneficiaries.

There are several reasons for the inconsistencies in the formulation of results and the use of causal
terminology in results-reporting. One is the difference between the terminology used by UNESCO and
that used by implementing partners and donors. From the perspective of the EX/4 and C/3,
another major reason why results are inconsistently formulated is that they are the
product of a political and participatory process. Among other things, because not all
representatives of Member States are trained in causal analysis, expected results tend to be
expressed as aggregates of activities, outputs and outcomes.33

33 Ideally, expected results should be formulated at the outcome level.

The number of results
The results-reporting in the EX/4 and C/3 is linked to the formulation of results in the C/5. In
theory, there is consistency between the number and formulation of results in the C/5 and in
the EX/4 and C/3. Table 4 shows the relatively sharp reduction in results between the 36 C/5 and
the 37 C/5: from 23 to 15 MLAs and from 75 to 46 expected results. This reflects an
ambition to focus the Organization on fewer MLAs and expected results, and it appears to have
been successful. As this study shows, it is a pattern of change that needs to be reinforced.

Table 4 36 C/5 results structure versus 37 C/5 results structure (excluding Category 1
Institutes, Global Priorities and Intersectoral Platforms)

               36 C/5               37 C/5
            MLA      ER          MLA      ER
ED            4      12            3      13
SC            7      26            5      12
SHS           3       6            3       8
CLT           6      22            2       7
CI            3       9            2       6
Total        23      75           15      46

The results reports do not only cover the five sectors with their MLAs and expected results, but also
the Category 1 institutes, the two global priorities, and the intersectoral platforms. In the 192 EX/4
these entities contributed 58 additional expected results, bringing the total to 75 + 58 = 133
expected results. The 192 EX/4 listed the following expected results:

 Education: 12 expected results
 Natural Sciences: 26 expected results + 13 expected results for Category 1 institutes and
intersectoral platforms
 Social and Human Sciences: 6 expected results
 Culture: 22 expected results
 Communication and Information: 9 expected results + 15 expected results for Category 1
institutes and intersectoral platforms
 Coordination and Monitoring of Action to Benefit Africa: 4 expected results + 3 from
Sectors
 Coordination and Monitoring of Action to Benefit Gender Equality: 5 expected results +
18 expected results from Sectors

The number of results had an effect on the volume of text needed to discuss them. The ‘span of
reporting’ is very wide and hence, it is difficult to shorten and focus the report. The absorptive
capacity of board members is put to the test by this massive flow of information, and it might
actually hamper strategic decision-making and real comprehension of results rather than
facilitate it - particularly as the information is not quite what either board members or
management might need for evidence-based decisions.

What does it look like in other UN Organizations? Taking the statutory results reports as the
starting point, the four Organizations selected for comparison here report on fewer key results
and, in doing so, use a hierarchical structure that links lower level results to higher
levels. As Table 5 shows, they refer to between 3 and 9 higher level results and then to a larger
number of lower level results. As some Organizations only have biennial reports while others
have annual reports, we present the expected volume of results-reporting over a two-year
period, based on the latest published results reports. It shows that the volume of information in
other UN Organizations is considerably lower. One of the reasons is the focus on fewer strategic
results, which allows for a shorter and at the same time better supported analysis of performance
and outcomes.

The Organizations can of course supplement the statutory reports with other results
information, and so does UNESCO. The other agencies also use links, annexes and other
devices, so there is more results information than the Table suggests. Nevertheless, they do
present short and focused information to Governing Bodies and stakeholders. One of the
reasons they can do so lies in the nature of their planning for results: a process with fewer
high level strategic results and articulated links between medium and lower level results and
strategic objectives. UNESCO has a hierarchy of results in the C/5, but the hierarchically
structured description in that report has not been carried forward to the results-reporting in the
EX/4 and the C/3.

Table 5 Comparative analysis of information content in statutory results reports

          Level and focus                            Estimated pages        Frequency
                                                     over 2-year cycle
UNESCO    Expected results: 46-73 (-128)             1050                   6-monthly EX/4 and biennial C/3
UNDP      Relate to MDGs, 9 Outcomes                 80-90                  Annual report plus quadrennial report
UNICEF    5 Focus Areas (45 results / indicators)    50                     Annual report
FAO       4 Strategic Objectives, 19 Outcomes        131                    Biennial report
ILO       3 Global Goals, 11 Strategic Objectives    151                    Biennial report
Note 1: The table refers to the Statutory Results reports only. The Governing Bodies may request other
reports. At the same time there are several channels for results information to be aggregated, for example
through synthesis evaluations.
Note 2: The most recent reports were used for each of the Organizations; UNESCO: 192 EX/4; UNDP:
Annual report of the Administrator on the strategic plan: performance and results for 2011; DP 2012/4;
UNICEF: Annual report of the Executive Director of UNICEF: progress and achievements against the
medium-term strategic plan, E/ICEF/2013/11; FAO: Programme Implementation Report 2011, C2013/8;
ILO: Report of the Director General, ILO Programme Implementation 2010-2011. International Labour
Conference, 101st Session, 2012.
Note 3: The number of pages under the current model would decrease in UNESCO as some of the EX/4
content is placed online separately from the main report.

The focus should not be only on the volume of reporting34 and the number of results. Another
major difference is the frequency of results-reporting. UNESCO is the only Organization among
these five with six-monthly comprehensive results-reporting to the Governing Bodies (by
means of a statutory report); the other Organizations report annually or biennially, and in the
case of UNDP the annual results report is complemented by a quadrennial report. In the Table
above this means that during a biennium the Governing Bodies of FAO and ILO receive one
statutory results report, the Governing Bodies of UNICEF and UNDP receive two, and the Governing
Bodies of UNESCO are presented with four such reports.35

Key findings
This section has shown that there is a series of interrelated problems concerning how results
are defined, how results are formulated at the planning stage, which results are reported on,
and the frequency of reporting.

34 As this report was being finalised, a draft of the 194 EX/4 became available. Annexes are now made available
online. It is not quite clear yet whether this reduces the number of pages or just leads to a shift in information, but two
things are clear: there is an effort to improve results information, and there is still a long way to go before it becomes
clear and concise.
35 Or five if one counts the C/3 as a separate document.

The key findings are the following:
- The definition of the concept of results in UNESCO differs from the UNDG definition (and
other definitions commonly used in the international community).
- Evidence from audits and evaluations shows inconsistent use of the term in UNESCO’s
reporting (e.g. in SISTER, to donors).
- Expected results in the C/5 (and EX/4 and C/3), as well as corresponding indicators, are
inconsistently formulated: they refer to (aggregate) activities, outputs and outcomes.
- Most of the results information concerns activities and outputs rather than higher-level
results (at outcome level).
- The number of expected results that UNESCO reports on is high. Other UN Organizations
tend to have considerably fewer and more consistently formulated results.
- The frequency of reporting to Governing Bodies is higher than in the other four selected
UN Organizations.

Solving the problems and bringing clarity and consistency to results-reporting requires that all
of them be addressed simultaneously. The formulation of results in clear and consistent
language, harmonised with UN agencies and other stakeholders, is a prerequisite for
efficient and reliable results-reporting.36

Causal linkages between activities, outputs and outcomes

The request for results information is more than a request for information on activities and
outputs. Good practice in results-reporting, as described in the literature, points to the need for causal
analysis.37 In short, good practice in results-reporting includes: (i) evidence in the form of
reliable data; (ii) an analysis of causal patterns (in particular, of how change is expected
to happen); and (iii) a critical, reflective analysis of potential side effects. These
elements of results-reporting are analysed in this section.

Table 6 presents one randomly selected expected result for each Sector and one from each of the
two global priorities, Africa and Gender Equality. The Table shows that the information reported
in the EX/4 contains descriptions of activities, which of course is a precondition for saying
something about results. In some cases this is followed by an account of why the activities were
undertaken, that is, a discussion of the relevance of, and justification for, the activity. Furthermore,
there are descriptions of outputs produced, such as policy reviews and policy
dialogues conducted, and capacity-building activities, conferences and meetings organised. A
reference to outcomes is found in only one case.

The text in the (selected) EX/438 does not provide information on the other building blocks of
results-reporting, such as analysing change processes, discussing causality, or pointing to side
effects. These findings are not surprising. The previously cited “Diagnostic Study of
Extrabudgetary Evaluations” arrived at similar conclusions, and if the information is not
contained in evaluations and reviews, it is unlikely to find its way into the aggregate results-
reporting.

36 See for example Mackay, K. 2007. How to Build M&E Systems to Support Better Government. Washington, DC:
World Bank. Robinson, M. 2007. “Performance Information Foundations.” In Performance Budgeting: Linking Funding
and Results, ed. M. Robinson. Washington, DC: IMF.
37 The literature on the quality of evaluation and other forms of results information is extensive, and there are several
guidelines and standards. The source of many standards is ‘The Programme Evaluation Standards’ (1994),
Sage Publications: London.
38 We verified that the situation is similar for other recent EX/4 reports.

Table 6 Analysis of information reported on results achievement in 192 EX/4, part B (page
and § in brackets)

Expected result, marked X against the columns: Description of activities | Purpose of activities | Outputs | Outcomes | Impact | Theory of change | Attribution or contribution | Side effects
1. Capacities in Member States strengthened to X X X
integrate a holistic vision of education for
sustainable development (ESD), including
climate change education and education for
disaster preparedness and risk reduction, into
educational policies, and development plans and
programmes (p.12-14)
2. Scientific knowledge base and adaptation X X
capacity of Member States in respect of water
hazards at regional and country levels
improved. (p. 35)
3. Understanding improved of the implications X X X X
of social inclusion for the promotion of a culture
of peace, integrating human rights and
democratic principles (p. 45)
4. Approaches to culture and development X X
clarified in order to guide and assist Member
States in devising inclusive development
policies (p.59 - 60)
5. More data on feature films and another X
culture topic are available in the UIS database
(p.74)
6. Intersectoral coordination (p. 76) X X
7. Gender equality issues incorporated in the X X
WWDR4 (p.101 – this comes from the Natural
Science Sector’s contribution to GE)

The narrative texts in the EX/4 highlight activities and outputs, and sometimes also explain
why activities were undertaken, that is, their relevance. In only one case did the text include
a statement of an outcome. That text is quoted in Example 1 below, which describes activities
and outputs along with the outcome information, underlined in the text.

Example 1: (Expected Result 3 in Table 6 above)


’Efforts focused on developing initiatives targeting youth as key actors in promoting democratic
interactions and social cohesion, especially through the Intersectoral Platform on the Culture of
Peace. The Culture of Peace programme contributed to strengthening the capacity of young
women and men through equipping them with knowledge, skills and information necessary to
cultivate a culture of peace, including social and technical competencies necessary to help mitigate
conflict and promote reconciliation. In the 11 countries (Liberia, Sierra Leone, Burundi, Tunisia,
Lebanon, Egypt, Nicaragua, Costa Rica, Guatemala, Honduras, Panama) covered by the
programme, youth participation and engagement were strengthened at the local and national
levels, especially in democratic and transition processes. The active involvement of all stakeholders
(including the civil society) in the project, the mainstreaming of culture of peace initiatives in
government plans and the mobilization of the United Nations system, all guarantee the
sustainability of the initiative beyond the funding period.’

The text describing the outcome above is not analytical. It states that something has changed
(e.g. youth participation and engagement strengthened) and hence that a result has been achieved.
However, it does not analyse the content of that change, nor does it articulate a theory of change
or any of the other standard analytical elements of a discussion of results.

The following example contains no language on outcomes; the text refers only to the activities
undertaken:

Example 2: (Expected result 3 in Table 6 above)


In the area of philosophy and human sciences, the second World Humanities Forum was organized
in Busan, Republic of Korea, from 1 to 3 November 2012. Celebration of World Philosophy Day (15
November 2012), was designed to ensure the contribution of philosophy to development of global
agendas on global environmental change by the choice of the theme “Future Generations”,
reflecting both the fifteenth anniversary of the adoption by UNESCO of the Declaration on the
Responsibilities of the Present Generations Towards Future Generations and the ethical and
philosophical implications of the June 2012 United Nations Conference on Sustainable
Development. At UNESCO Headquarters numerous activities were organized such as round tables,
philosophy cafés, workshops, devoted to innovative philosophical practices, master classes of
philosophy teaching for children, philosophy books fair, two art exhibitions and music concert’.

The text in Example 3 below, which relates to the expected result on Coordination and Monitoring
of Action to Benefit Africa, further illustrates the lack of analytical content:

Example 3: (Expected result 6 in Table 6 above)


Most of the results achieved, in terms of impact, result from the importance accorded to
intersectorality in implementing programmes for Africa. This intersectoral approach was
particularly encouraged by the drafting, analysis, evaluation and selection process concerning the
31 projects; all programme sectors, institutes and offices, in Africa and elsewhere, took part in this
process. […] The same entities continue to take part in the implementation of the 11 projects
selected, in intersectoral and interdisciplinary teams.

The text is interesting as it actually mentions the word “impact”. It implies that somewhere there
is information on impact, though that information is not provided in the results report that has
been reviewed here. Rather than describing results achieved through the implementation of the
’11 projects’, the text describes a working process, an intersectoral approach, in the
Organization.

Are these examples representative of the results information in recent EX/4 reports? They were
randomly selected, but they constitute only 7 of the 128 expected results, a minor share.
However, this report also looked at 10 key achievements from Part IA of the 192 EX/4, and the
patterns are consistent, as they are with the results information presented in Tables 2 and 3 above.
The common problem in the reporting is that there is little evidence to substantiate any claims to
outcomes, and there is no explanation of how activities and outputs lead to outcomes. Part of the
problem also relates to the number of results being presented: it is hard to imagine how one
could analyse causal patterns, present data and evidence, and reflect critically on 133 expected
results/outcomes39 within a single report.

The role of a Theory of Change (ToC) is to define the building blocks required to bring about a
given long-term goal.40 The ToC41 is at once a planning tool, a management tool and an evaluation
tool, and in the mainstream evaluation literature causal analysis in the social sciences is
considered a necessary ingredient when speaking of results (outcome achievement). A ToC
would not be complete without an articulation of the assumptions that stakeholders use to
explain the change process represented by the change framework.42 Assumptions explain both
the connections between outcomes and the expectations about how and why proposed
interventions are expected to bring them about.

39 In previous sections we highlighted that expected results are formulated at different levels in the causal chain.
Ideally they should be formulated at the outcome (or arguably the impact) level.
40 Mayne, J. (2001) Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly.
Canadian Journal of Program Evaluation 16(1): 1-24.
41 In the literature different terms can be found referring to broadly the same thing: theory of change, programme
theory, intervention theory. See for example, Morra Imas, L.G. and R.C. Rist, The Road to Results – Designing and
Conducting Effective Development Evaluations, World Bank, Washington D.C.

As the analysis in Table 6 shows, this evaluation did not find examples of outcomes substantiated
by a ToC explaining the links between UNESCO’s activities and outputs and the hypothesized
outcome.43

In the evaluation literature there is an extensive debate on the analysis of results at the outcome
(and impact) level and on causation.44 The causal analysis that could follow from a ToC is
virtually absent from the results reports. The question of how results were produced, and
whether they were caused by UNESCO alone, in conjunction with other agencies, or
independently of UNESCO, is not discussed in the EX/4 and C/3 reports analysed here. The
examples of outcomes highlighted above indicate that the question would be relevant. The
outcome quoted in Table 3, ‘engineering education was further promoted and led to engaging
more young men and women to pursue engineering careers’, is an example of a social change to
which many factors contribute: labour market developments, primary and secondary school
enrolment and retention, education and training programmes, teacher training, etc. Any
intervention by UNESCO would be affected by other changes in education and in society at large,
and an outcome such as the one presented in the EX/4 must be understood in that context,
through an argument around contribution and by articulating a ToC, neither of which is seen in
the EX/4 or (with some exceptions) in the underlying reporting in the system. To clarify, a
detailed ToC is not expected for each activity, output and outcome in the EX/4; rather, what is
needed is a ToC or causal framework that connects patterns of activities and outputs to the
higher level outcomes of interest (i.e. the expected results).

The self-reporting that feeds into SISTER is also weak on intervention logics and on
substantiating arguments with a ToC. The Diagnostic Study of UNESCO’s Extrabudgetary Evaluations shows
that such analysis of how results are created is largely absent. It would take a more thorough
organisational analysis to clarify the organisational constraints that hamper such analysis: a
combination of constraints in time, resources, incentives and capacities. The Diagnostic Study
also showed that external evaluations are better at using ToCs and intervention logics.

The Diagnostic Study covered the subject of ToC or intervention logic in basically two ways:
first, by defining and measuring the variable ‘clarity of the causal chain’ (see Figure 5); and
second, by analysing the use of an intervention logic or any type of causal framework in final
narrative reports. Around 43% of the sample of final narrative reports analysed in the study (n =
435) had some type of causal framework: a logical framework or a table specifying information
at different causal levels (activity, output, outcome). However, the majority of the reports with
such frameworks showed substantial shortcomings in the clarity and comprehensiveness of the
causal chain. Moreover, across the board there was little causal analysis that really focused
on the confluence of project activities and contextual factors in contributing to changes for
intended beneficiaries.

42 Patton, M. Q. (2001). Evaluation, Knowledge Management, Best Practices, and High Quality Lessons Learned. American
Journal of Evaluation, 22(3): 329-336.
43 Again this is a recurring pattern in EX/4 documents and also confirmed by the “Diagnostic Study of Evaluations of

Extrabudgetary Activities”.
44 See for example Schwartz, R. and J. Mayne (2005). Does Quality Matter? Who Cares about the Quality of Evaluative
Information? In Quality Matters: Seeking Confidence in Evaluation, Auditing and Performance Reporting. R. Schwartz
and J. Mayne, Eds. New Brunswick: Transaction Publishers.

UNESCO is not alone in having difficulties expressing intervention logics and theories of
change45 in planning, monitoring, evaluation and reporting. Organisation-wide reviews in the UN
system show that all organisations implementing RBM systems find it difficult to articulate
intervention logics. Good practices exist and lessons can be shared, but sharing lessons
requires a common language as well as close interaction and coordination.46

There are examples of theories of change in the UNESCO system. Annex 3, for example, shows the
intervention logic of the work of the International Institute for Educational Planning. Annex 3
also presents a draft theory of change for the 1972 Convention on Cultural and Natural World
Heritage. It shows how different types of UNESCO interventions are expected to contribute to
changes at different levels, eventually contributing to the preservation and sustainable use of
World Heritage. Annex 3 illustrates some of the complexity of the interventions, stakeholders and
levels of change involved, as well as the factors affecting processes of change. In order to assess
what works and why, and whether or not the work UNESCO carries out within the framework
of this convention is making a difference, it is useful to conceive of the convention as a multi-level
theory of change with nested theories of change at different levels (e.g. international, national,
programme), as shown in Figure 6.

Figure 6 Simplified theory of change – example of the 1972 convention

The Figure shows the different levels of interventions contributing to changes at each level.
From this one can deduce the following observations and conclusions:

- UNESCO’s work on conventions (as well as many other interventions) is multi-level and complex.

45 Mayne, J. (2007) Best Practices in Results-Based Management: A Review of Experience. A Report for the United
Nations Secretariat.
46 Mackay, K. (2006). Institutionalization of Monitoring and Evaluation Systems to Improve Public Sector Management.
EDC Working Paper Series No. 15. Washington, DC: Independent Evaluation Group, World Bank.

- UNESCO’s interventions (facilitating international dialogue, capacity-building, technical
assistance, etc.) focus on different levels with different stakeholders and target groups
contributing to changes at various levels (e.g. enhanced dialogue and cooperation on
World Heritage; enhanced national awareness on convention and translation into
national strategies/policies; enhanced capacities to translate elements of the convention
into concrete policy interventions; sustainable use and preservation as a result of World
Heritage status, etc.)
- In order to assess whether or not UNESCO’s work within the framework of the
convention is effective, and to what extent UNESCO is achieving its overall expected
results with regard to the preservation of World Heritage, one requires an overall
perspective taking into account the differences in the nature of UNESCO’s work, the
differences in the types of changes at various levels, and the differences in context-specific
factors affecting change.
- Without a framework (i.e. the theory of change) that facilitates such a perspective, a
comprehensive reflection on the overall effectiveness of the work on World Heritage is
very difficult. In addition, at the different levels, interventions should be guided by
specific assumptions on causal change, which can then be fitted into the overall theory of
change summarized in the Figure above (and more comprehensively represented in
Annex 3).
- Furthermore, the absence of explicit theories of change linking areas of UNESCO’s work
to outcomes (expected results) at both the aggregate and intervention-specific level
negatively affects the quality of self-reporting at activity level, self-assessment at a
higher programmatic level and eventually the capacity to develop a convincing and
credible overall performance story, capturing the overall effectiveness and progress
towards impact of UNESCO’s work in a particular area of work.

Key findings
This section took its starting point in the literature on results-reporting and evaluation, and in
what that literature identifies as desirable and as good practice. These principles have been
used to analyse UNESCO’s results-reporting, again with a focus on the statutory results reports,
but also drawing on information in SISTER, extra-budgetary project evaluations and other
evaluation findings. The conclusions are:

- Theories of change (or intervention logics) are key tools for making sense of
interventions and how they are expected to contribute to the achievement of results. UN
organizations across the system are struggling with the challenge of strengthening the
theories of change underlying interventions in the context of planning, monitoring,
evaluation and reporting.
- There are some indications of successful use of theories of change in the planning,
monitoring, evaluation and reporting of UNESCO’s work.
- In most areas of UNESCO’s work clearly articulated theories of change are absent. As a
result, aspects such as the causal analysis, the choice of indicators, or the scope and
coverage of monitoring and reporting, are often weak or incomplete.
- Even though it is possible to reconstruct a ToC ex post, it would be highly beneficial to
planning, monitoring, evaluation and reporting if a ToC were developed in the design phase
of an intervention.

Data and evidence

Reporting on results is not only a question of understanding change, but also of evidence. Taking
example 4 from Table 3: even if it is true that more young men and women pursue an
engineering career, the statement ought to be supported by evidence. Since it is presented as a
fact, it should be verifiable with data. The point is that a results report needs to present some
form of evidence, quantitative and/or qualitative; a statement about results needs to be
substantiated.

The results-reporting that reaches the Governing Bodies through the C/3 and the EX/4 reports
builds on a system of levels of self-reporting. The information entered into SISTER comes from
the programme officers and is validated internally by the managers. Table 7 describes the
division of labour and the validation of data.

Table 7 Division of responsibilities in self-reporting

- Responsible Officers at activity/project level: entry/updating in SISTER of reporting
information on progress towards achievement of the results of individual workplans
(regular programme and extrabudgetary).
- Responsible Officers at the level of MLA, IP, Chapter, and Category 1 Institutes and
Centres: entry/updating in SISTER of detailed reporting information on the achievement
of the Approved 36 C/5 expected results, including challenges and lessons learnt and
cost-effectiveness measures.
- All ADGs/Directors and EOs of Major Programmes, the UIS, as well as Direction and
Programme-related and Corporate Services: entry in SISTER of the consolidated
information on C/5 results and information on the two Global Priorities, Africa and
Gender Equality; validation by ADGs/Directors; review by BSP and subsequent
validation for the generation of the 194 EX/4 Part I (B).

Source: Memo BSP, 03 January 2014

Two questions can be asked with respect to the self-reporting process. First, does it work in
practice? Second, is it adequate for supporting the reporting of reliable data on activities,
outputs and outcomes? Regarding the first question, the evaluation found that the processes of
information entry and validation can differ quite substantially between Sectors. Consequently,
the reliability of data is likely to be uneven across the system. Rather than pursuing the first
question further, we decided to focus on the second, which is fundamental to the usefulness of
the current model.

Reporting on activities and outputs


The results reports provide plenty of evidence for activities and outputs. These appear well
described and are often detailed, covering for example the dates and locations of workshops and
seminars, the full titles of such events, participants in capacity-building events, etc. However, in
a results report to the Executive Board it may not be necessary to provide evidence for
individual activities and outputs; that could be the business of reporting at lower levels.

There is one caveat though: the aggregate reports account for activities that were actually
undertaken. The results information in the EX/4 can, for example, state the number of policy
reviews undertaken and in which countries. However, there is no comparison with work plans;
hence if three reviews were planned in the C/5 but only one was completed, this fact
(reflecting poorly on implementation) does not show in the results report. The gap between the
work plan and the delivery is visible in SISTER, but not brought forward to an aggregate level.
This is where the previously mentioned portfolio analysis of implementation can make a difference.

This raises the question of what type of analysis could be done based on the data at hand. It is
common to distinguish between substantive analysis, as in the capacity-building example quoted
above, and portfolio analysis, which examines patterns of activities and outputs and how they
perform against indicators. The results reports of other specialised agencies show some
interesting examples of portfolio analysis in which management and Governing Bodies are
presented with an overview and analysis of the performance of activities47. Such information is
lacking in the EX/4, yet it would serve the Governing Bodies and management well. Both the
FAO and ILO reports contain examples where the portfolio of the organisation is presented in
one chart, making it possible to see in which areas there is substantial progress and where it
might be necessary to focus attention because targets are not met.48

Reporting on results (at outcome level)49


Policy review is a frequent ingredient in results-reporting; most of the time the C/3 and the
EX/4 note that policy reviews have taken place, with reference to a quantitative indicator. In
only a few cases do the results reports point to an impact. However, there are issues with the
reliability of the evidence underlying such claims. For example, we did not come across any
examples where it was concluded that a policy review had had limited impact on a Member
State’s policy processes (which is likely to happen occasionally). The case in Box 1 illustrates the
problems in terms of data and evidence for such claims.

Box 1 Example of a sector policy review

The Malaysia Education Policy Review was initiated through a Memorandum of Understanding
(MOU) between the government of Malaysia and UNESCO in November 2010. The MOU required
UNESCO to “evaluate the aims, strategies and achievements of the Malaysian education system
in relation to its national and international contexts, its stated development goals and in
comparison to international trends and standards.” The education policy review analysed five
specific policy areas, while taking a sector wide perspective of education development.

The information in SISTER says that the policy review in Malaysia was completed, which
essentially records that an output (a policy review) has been produced. The text in the
EX/4 (Part B, page 6) claims a result in the form of an outcome, namely that the review
contributed to the country’s long-term strategic plan in the education sector.

’Technical assistance and capacity development activities related to Monitoring & Evaluation
(M&E), education planning and management were provided to Bolivia, Democratic Republic of
Congo, Egypt, Ethiopia, Iraq, Lebanon, Malaysia, Mauritania, Myanmar, Saudi Arabia, Sudan and
Timor-Leste. Some of the country highlights include: … UNESCO supported Malaysia in their
national education policy review, which contributed to the country’s blueprint for education.’

From the evaluation point of view it is interesting to know what the evidence is for such a claim
and how one would substantiate it with data. The policy review was conducted by a team of
experts over a period of a few weeks, and it followed the usual process for such reviews: a
preparatory mission, the establishment of an organisational structure for the review, the
conduct of the review and production of a report, and a validation workshop to present and
discuss the policy findings.

The abridged review summarises the observations and recommendations. The policy review
focuses on five policy areas, for example Teacher Development, Curriculum Development, and
Technical and Vocational Training. In each of these areas there are a number of observations
and recommendations that could play into policy development. In total, the abridged report
alone has some 10-15 key observations in each of the five policy areas, suggesting some 50-75
statements that may be used in further policy development by the government. Each of these
statements could either reflect an already well-known issue, or it could be a new and surprising
perspective. It could lead to immediate action or have an effect on existing initiatives, or it could
take much longer before the education system would respond.

47 See for example FAO Programme Implementation Report; C 2013/8 PIR 2010-11, page 13.
48 See ILO Programme Implementation Report 2010-2011, page 15.
49 All arguments that are developed with respect to outcomes are also valid for impact. Regarding the latter, the
attribution and aggregation challenges are even more complex than in the case of outcomes. Given the challenges that
UNESCO faces in outcome reporting, we chose to focus on this aspect. Once the system is better capable of capturing
outcomes, it would be useful to take up the discussion on impact again. For now, reporting on impact is too ambitious
(across the board).

The point is that in order to really know what the impact of the policy review was, it would be
necessary to peruse the new policies, trace the origins of the ideas contained in them, assess the
contribution of the review (which could be anything from 0 to 100%), and finally look beyond
policy formulation to implementation. Such an analysis can certainly be done, but it is a major
effort. In this case, UNESCO appears to have had feedback on the use of the review and can claim
an impact, but the claim is not really evidence-based. To say ‘contribution’ is not the same as
having data and evidence of contribution.

In all likelihood some of the observations and recommendations had a major influence on
shaping policy and others did not. Reading the document, it sometimes appears quite unlikely
that the government was not already aware of the issues.

As this example indicates, assessing and reporting on output delivery is significantly different
from assessing and reporting on outcomes and impact. In order to report on an output, one
would indicate whether it has happened or not, with some supporting data such as when and
where, the number of people reached, the nature and scope of a policy review, etc. Most of these
data would be within easy reach of the programme officer. Whether the reporting is accurate
could be relatively easily verified through audits, existing records, or by the next level of management.

When reporting on outcomes, one cannot simply observe or even infer outcomes without
additional data collection (among other things) at the level of target groups. It would require
spending time and resources on, for example, selecting a sample, gathering data from
beneficiaries/target groups (and possibly from non-target groups), analysing the data and
information collected, and searching for side effects or unintended effects. This is not
impossible, but it is something for which programme staff commonly do not have the time,
resources or expertise. There is thus a strong case for distinguishing between self-reporting on
implementation and outputs on the one hand, and the kind of data collection and analysis that
feeds into reporting on results at the outcome level on the other.

Nevertheless, the tables above show that there is some indication of outcomes in the system, and
when these are quoted in the EX/4 they might actually be better substantiated than is
immediately visible. There might, for example, be evidence from evaluations, either regular
programme evaluations or evaluations of extrabudgetary projects. Unfortunately, there is no
practice of referring to supporting evidence. Hence it is not immediately visible whether there
is evidence for a claim of an outcome or not, and no way of knowing whether those 20% of
outcome statements are actually supported by data, evidence, causal analysis in the form of a
ToC, or some other critical reflection. This and other evaluation exercises found that most
self-reporting is not (systematically) informed by evaluative evidence and therefore lacks the
necessary basis for substantiating claims on outcomes.

A related question is the available evidence base in external evaluations and final narrative
reports. In the course of the study we looked at SISTER and at examples of narrative reports, but
a much more comprehensive review was done in the Diagnostic Study of Evaluations of
Extrabudgetary Activities. In comparing self-reporting with external evaluations, it was found
that the majority (61%) of final narrative reports, compared to 12% of external evaluations, did
not include any discussion of the effects of activities, i.e. reporting was limited to
implementation and output delivery. Of those final narrative reports that did include
information on effects, the discussion was limited. By contrast, a significant proportion of
external evaluations not only referred to outcomes but also included a serious causal analysis
underpinning outcome statements.

The roles of self-reporting versus evaluation


From the analysis above we can conclude that evaluations are on average significantly stronger
on outcome analysis than final narrative reports, not least because of specifically designated
resources and time dedicated to data collection and analysis. However, this evaluative evidence
does not systematically feed into SISTER and subsequently the EX/4 and C/3 reports.

Self-reporting is challenging in any organization. Several studies of UNESCO have pointed to the
low levels of trust in the Organization:50 between the Governing Bodies and the Secretariat,
between headquarters and field offices, and between other parts of the Organization. In such a
system, self-reporting on results is more likely to be biased, and it is more likely that poor
results and implementation problems will be camouflaged. Self-reporting assumes not only that
the persons who report have the data at hand, but also that they possess the professional
confidence that allows them to report on negative results. It is a well-known fact that
organisations both succeed and fail, and the two are interrelated. It is also commonly said in
management studies that mistakes are ‘ok’, as long as one can learn from them. One remarkable
feature of UNESCO’s results-reporting is that there is little suggestion in the C/3 or the EX/4
that anything has gone wrong or might have had negative results. This suggests that there are
insufficient incentives and capacities to report on implementation challenges and failures.

This pinpoints one of the problems with self-reporting: to what extent can it be critically
reflective? There is always a bias; this is a well-known fact of evaluation research and points to
the limits of self-reporting. This potential bias applies in particular to the analysis of outcomes
and impact, though it is necessary to control for bias in implementation (activity and output)
reporting as well.

Key findings
This section discussed the following key findings:
- Self-reporting on the outcomes (expected results) of UNESCO’s work is fragmented and
weak. Consequently, this negatively affects the scope for (aggregate) reporting on
expected results in the EX/4 and C/3.
- Recent EX/4 and C/3 reports include substantive reporting on implementation and
output delivery. However, there is scope for improving the synthetic and strategic
analysis of these dimensions including the analysis of challenges and achievement of
targets.
- Activity and output reporting are fundamentally different from reporting on expected
results (at outcome level). While the former can be (rather) easily observed or captured,
the latter requires resources, time and explicit data collection at the level of the target
group(s).
- In the majority of cases staff members do not have the incentives, time, resources and
data to present unbiased and credible data on outcomes (expected results). This can only
be partially resolved through strengthening validation and self-assessment at higher
levels. It also requires reconsidering the role of evaluation.
- Evaluations are on average significantly stronger on outcome analysis than final
narrative reports, not least because of specifically designated resources and time
dedicated to data collection and analysis. However, this evaluative evidence does not
systematically feed into SISTER and subsequently the EX/4 and C/3 reports.

50 Report on the Independent External Evaluation of UNESCO. 185 EX/18 Add, pages 27-29.

Efficiency of results-reporting

When available information on, for example, the achievement of outcomes is not used in
results-reporting, this is an example of low efficiency in the system. This section analyses two
related features that in different ways bear on the question of efficiency and value for money.
Apart from the narrower consideration of whether the investment in results-reporting pays off,
there is also the strategic question of harmonization with other UN agencies. That question
relates strongly to efficiency, because a harmonized approach to results-reporting could help
UNESCO work more effectively with other agencies, jointly inquire into and document results at,
for example, country level, and inspire new ways of thinking about results-reporting.

UNESCO has invested considerable resources in results-reporting. The design and
implementation of SISTER, the six-monthly production of the EX/4, not to mention the processes
of coordination and consultation that go into planning for results, are costly. The main cost lies
in the human resources of the Organization: the days, weeks and months that go into
results-reporting, the self-reporting into SISTER and the actual production of the EX/4 and C/3
reports (as well as various other channels of reporting: donors, UNDAF, special briefings,
information meetings with Member States, etc.). There are also costs in terms of consultancies,
systems development, and some (though much too little) training. There is no estimate of the
total costs, and that is in itself a weakness. Given that results-reporting is a crucial feature of the
Organization, one should have some indication of the total cost (mostly in terms of human
resources) and whether the investment pays off.

This is, at its core, a question of efficiency: does the Organization use the resources spent on
results-reporting efficiently? On the basis of our interviews with staff, our document review and
our analysis of the system, that question can only be answered with a ‘No’. It is also quite clear from
the previously quoted internal and external reviews that there are significant shortcomings in
results-reporting. The Independent External Evaluation found that ‘UNESCO has not invested in
systems that identify impact: RBM and evaluations mostly focus on outputs, activities and rather
general ‘expected results’ at MLA level – risk missing the ‘big-picture’, i.e. major effects that do not
fit into any single MLA box. Results are rarely understood in context: assessments of impact are
especially weak at the country and sub-regional levels and tend to be snapshots rather than
assessments over time.’ Many of the same problems plus a few more are reported in this study.
Furthermore, as the history of the Executive Board discussions of results-reporting show, many
of the problems are well-known and have not been resolved. The survey presented in Chapter 2
indicates that there is a mismatch between the results information supplied through the
reporting system and the kind of information expected and demanded by Member States.

The overall efficiency of results-reporting is affected by the unit costs of showing results.
UNESCO has a large number of activities with quite low budgets, less than USD 25,00051. At the
same time, the format and requirements for results-reporting are more or less the same for
small activities as for larger ones52. The consequence is that it costs as much to show the results
of a USD 25,000 activity as of one that is much larger. The question is whether it is necessary to
develop a ‘complete’ results framework for small activities. A full-fledged analysis of results at
outcome level requires an intervention logic. This may not always be realistic for activities with
small budgets; it is too costly and time-consuming. This does not mean that such activities are
not relevant, simply that they are not susceptible to a comprehensive analysis of outcomes.
Rather than trying to report results on all activities, it could be useful to set priorities, for
example by aligning small activities to an intervention logic at a higher programme level: at the
activity level no results analysis would take place (as it would be very costly to do so), but at the
programme level one could build in provisions for time and resources to analyse the relevance
and effectiveness of different activities within the intervention logic. Such a selective approach
of fewer but better documented results analyses would make the reporting system more
efficient (and reliable).

51 The Audit of UNESCO’s Project and Activity Monitoring presents clear evidence for the fragmented nature of the
project portfolio and the resulting system inefficiencies in monitoring and evaluation.
52 The practical guide on UNESCO’s extrabudgetary activities provides an overview of guidelines for self-reporting of
extrabudgetary activities. Larger activities are subject to external evaluation requirements. The internal self-reporting
requirements remain more or less the same. There is also additional guidance and requirements for self-assessment
and reporting at country level and at the level of major programmatic areas of work.

Multiple reporting obligations also affect efficiency; while SISTER constitutes the core of the
system, there are other demands. At the country level UNESCO reports on the results of its
activities within the UNDAF53. Extrabudgetary projects have their own reporting requirements.
Taken together, the different requirements make for a rather fragmented results-reporting
system. While we did not analyse these other reporting streams in detail, it is clear that further
thought is needed on how to improve SISTER to generate generic reports which can be adapted
for different reporting purposes to different stakeholders.

Table 8 compares the content of results reports across Organizations. As mentioned in Table 5,
UNDP and UNICEF report to the Board every year, and the two specialised agencies, ILO and
FAO, report every two years. UNESCO alone submits Organization-wide results reports every six
months. That being said, this comparison covers the statutory results reports, and there are
other reports to the Governing Bodies as well. Table 8 highlights a difference in focus between
the organizations. It is possible to organise results-reporting around a few high-level goals and
relate lower-level results hierarchically to these; hence (to some extent) causal links and
(possibly even) theories of change can be visible even in reports at an aggregate level such as
that of a UN Organization.

Table 8 Comparative assessment of content in statutory results reports

Main content*:
- UNESCO: activities and outputs
- UNDP: global trends and outcomes
- UNICEF: global trends, outcomes and outputs
- ILO: global trends and outcomes
- FAO: global trends, outcomes, activities and outputs

Note 1: The most recent reports were used for each of the agencies; UNESCO: 192 EX/4; UNDP: Annual report of the
Administrator on the strategic plan: performance and results for 2011; DP 2012/4; UNICEF: Annual report of the
Executive Director of UNICEF: progress and achievements against the medium-term strategic plan, E/ICEF/2013/11;
FAO: Programme Implementation Report 2011, C2013/8; ILO: Report of the Director-General, ILO Programme
Implementation 2010-2011. International Labour Conference, 101st Session, 2012.

The structure of these reports, in combination with the frequency of reporting, quantity of
information presented and synthetic nature of the reporting, point to a more efficient use of
resources in results-reporting to the Governing Bodies. While UNESCO’s reports primarily show
activities and outputs, the practice among these four Organizations (some more than others) is
to analyse global trends within their mandates and to relate their achievements to these trends,
hence coming closer to an analysis of outcomes and impact54.

53 Compliance with UNDAF-reporting is very uneven.
54 Having said this, all four Organizations face similar challenges in results-reporting: the quality and extent of use of
theories of change or intervention logics; the nature of self-reporting mechanisms, the use of validation mechanisms and
the role of evaluative evidence versus self-reported evidence; the use of other sources of evidence; the tools used for
aggregating and synthesizing information from lower levels (e.g. project, country) to the level of the Organization, etc.

A results-reporting system which (a) builds on a significant investment of human resources and
(b) produces a large amount of information such as presented in the EX/4 and the C/3, but (c)
still fails to answer basic questions about results (at outcome level) of the Organization is
neither efficient nor effective.

Key findings
The analysis indicates that the efficiency of results-reporting is low. It is not possible to arrive at
a simple and easily comprehensible measure of efficiency, but a number of qualitative factors
point to low a low return on the considerable investments that go into results-reporting, namely:

- The frequency of statutory reporting to the Governing Bodies is considerably higher in
UNESCO than in other selected UN Organizations. Moreover, as indicated in a previous
section, the number of pages of the statutory reports (EX/4 and C/3) is rather high.
- Within the UNESCO system, the self-reporting workload (due to the frequency of
reporting and the small unit of analysis of reporting) is very high.
- The demand for information has not been met, and the type of information requested by
Governing Bodies and stakeholders is rarely produced in the system.

Taking into account the previous two elements and the challenges mentioned under the other
three aspects, the overall conclusion is that the value for money of self-reporting practices in the
UNESCO system is rather low.

Chapter 4. Going forward
Focusing the task

Results-reporting in UNESCO has developed over the years; systems have been introduced,
tested and further developed. The surrounding environment is changing, new demands are
emerging, critical observations from past studies have not been addressed, and expectations
from many stakeholders have increased. Looking forward, there are three key tasks for the
development of results-reporting:

 A clear distinction between reporting on activities and output delivery and reporting on
outcomes (expected results).
 A change in the frequency of reporting allowing for more rigorous analysis and
reporting.
 A recalibration of self-reported and evaluation information feeding into the results-
reporting.

Parallel to this focus, there are principles and lessons learned from RBM that should guide the
development of UNESCO’s efforts:

• Clear, analytical and synthetic reporting
• Clear distinction between implementation of the programme (activities and outputs) and
achievement of expected results (at outcome level)
• Consistent use of causal language in line with international good practices
• Strong link between planning and results-reporting
• Quality of reporting depends on the sources of evidence and quality of analysis (right
mix of self-reporting and evaluation) and has implications for frequency of reporting
• Self-reporting should be efficient: quality self-assessment at aggregate levels supported
by simple and clear reporting at lower (activity) levels
• Results-reporting should be comprehensive if it is to provide relevant and strategic input
to decision-making processes. In UNESCO’s case this implies that reporting should cover
the three pillars of the UNESCO system: the regular programme (RP), extrabudgetary
activities (EXB), and the extensive out-of-budget UNESCO networks
• Reporting should incorporate comparative reviews of programmatic areas of work to
inform strategic discussions on resource allocation and (dis)continuation of areas of
work

These tasks may seem overwhelming and the challenge is to find ways of reforming the results-
reporting system that: (1) build on the achievements that have been made, (2) facilitate and
harmonize work in the UN system to improve efficiency, and (3) reallocate resources from
practices that are costly and have little value added.

Principles of reform

According to the analysis in the previous chapter, there is a window of opportunity whereby it is
possible to (a) reduce the efforts that go into frequent (and often ineffective) results-reporting,
and (b) harmonize efforts in data collection and analysis of results, and streamline reporting
requirements with other UN agencies, and (c) strengthen the quality of analysis in shorter,
focused and timely results reports. In line with these overarching principles this report
concludes that:

 The frequency of results-reporting can be reduced. A new model for results-reporting
should build on two reports. First, an Annual Implementation Report (AIR), which
presents the Governing Bodies with information on activities and outputs. The focus of
this report should be a portfolio analysis of implementation, with programme
performance and financial performance analysed together. Second, a Quadrennial
Results Report (QRR), which presents the Governing Bodies with information on
outcomes. The QRR would primarily build on a review of existing evaluations55.
 Distinguish between different levels of results: an Annual Implementation Report can
build on the self-reporting information available in SISTER, as it relates only to activities
and output delivery, which can be relatively easily observed. There is a demand for
more strategic analyses of programme implementation. Much of that demand is best
met by continuous access to SISTER and other publications, such as Country Reports.
There is also a demand for results information on outputs, but with an analysis of
strengths and weaknesses and of performance against targets and work plans. Such
information can be made available in a condensed Annual Implementation Report.
Knowledge about outcomes requires explicit data collection and analysis, which is
undertaken within the framework of external evaluations. This information could be
supplemented with external information, monitoring of global trends, and other
information sources.
 Harmonize the results-based management terminology with other agencies to ensure a
clear understanding of concepts that can be applied consistently across the Organization
and in relation to external partners. The UNDG Handbook terminology quoted above
would serve the RBM system well: it is clear, consistent and used by others, and many
training opportunities are available that build on this terminology.
 Continue the efforts to reduce the number of results covered in reports and, with the
results hierarchies of other UN agencies as a benchmark, aim for a narrative structure
that contains no more than 10-12 high-level outcomes/expected results, with lower-level
outputs relating to each outcome. The QRR should consistently report on one level of
aggregate outcomes. Intervention logics (or Theories of Change) should be
reconstructed to link areas of work with these outcomes to underpin evaluative analysis
of outcomes. The Annual Implementation Report should strictly focus on synthetic
analysis of activities implemented and outputs delivered. The aim should be to reduce
the number of pages of reports to what is common among specialised agencies.

A model for results-reporting

The model outlined here is illustrated in the figure below, which starts with the key features of
the model – relating to the tasks above – and presents the structure of two results reports: an
Annual Implementation Report (AIR) focusing on activities and outputs, and a Quadrennial
Report on Results (QRR) at outcome level.

55It would not be recommendable to develop a full evaluative analysis of the results (at outcome level) of UNESCO
every two years. It takes time and resources to develop reliable and comparative evidence of programmes.

Figure 7 New model for results-reporting (main features)

Key features:
- reducing the frequency of reporting
- separating implementation and outcome reporting
- making better use of self-reported and evaluation information

Annual Implementation Report (AIR) (activities and outputs):
- More analytical implementation reporting
- Simplified and clearer reporting
- Simplified self-reporting on activities and projects

Quadrennial Report on Results (QRR)56 (outcomes and impact):
- More analytical reporting of outcomes
- Better substantiated reporting through external evaluations, self-evaluation and reviews
- Improved causal linkages between areas of work and results (outcomes)
- Option to use different streams of evidence
- Provides good basis for RBB decisions

Applying to both reports:
 Better harmonization with good practices in the UN system
 Frees up human resources to improve quality of reporting and address other challenges
 Provides structure for and makes better use of evaluation
 In time, reduce the number of outcomes to report on

Figure 8 below provides an illustration of what the new model would look like in practice for the
case of the QRR in combination with AIRs over a four-year cycle.

56There are good examples of organization-wide reports on results with cycles of four years or longer (e.g. the GEF
Overall Performance Study, the UNDP Evaluation of the Strategic Plan). It takes time and resources to develop reliable
and comparative evidence of programmes across an organization.

Figure 8 New model for results-reporting (structure)

The thrust of the change is to reduce the time spent on detailed and extensive reporting on
activities and outputs, and instead to devote resources to a portfolio analysis of activities and
outputs and to generating higher-quality results information on outcomes, that is, information
supported by data and evidence and building on a causal understanding of how UNESCO
contributes to change. Such changes will have many consequences in the Organization, which
underpin and demonstrate how the benefits occur:

 The AIR will be annual rather than six-monthly; in principle, the reduced frequency will
free up time which can be dedicated to improving the analytical content of
implementation reporting (aggregate statistics on implementation rates; comparisons
between plans and actual implementation; a traffic-light system of on/off track;
portfolio analysis of performance against targets).
 The quality of implementation reporting in the AIR will improve compared to current
practices, through simpler and more consistent self-reporting of activities and outputs,
and more time for analytical assessment and reporting of aggregate implementation and
related challenges.
 Activity-reporting is simplified (focus on activities and outputs). Activity and output data
can be used for different reporting channels.
 The annual report on implementation would be supported by the current self-reporting
in SISTER, with some simplifications at activity and project level and a strengthened
process of self-assessment of implementation at country and programme level.
 It takes time for the outcomes of UNESCO's work to manifest themselves and to be
captured through evaluative analysis. A quadrennial report on results would therefore be
the most logical choice. One could plan evaluations in such a way that they
comprehensively cover the system over a period of four years. This would not
necessarily require more resources, just more systematic planning of existing evaluative
exercises.
 The reduced frequency of both implementation and outcome reporting will free up time
for operational work and analytical assessment (at aggregate levels) of respectively
aggregate implementation and aggregate outcomes.
 The Quadrennial Report on Results (achievement of outcomes) would rely on
evaluations carried out by Sectors and by IOS which would have the (additional) purpose
of feeding into the report. The QRR would provide full coverage of the UNESCO
programme at the end of the four-year cycle. In addition, it would be supported by a
biennial or quadrennial self-assessment exercise.
 Through a better coordination of evaluative activities and a review of evaluative
evidence at the end of the biennium or quadrennium more reliable comparative data
would be presented to the Governing Bodies to support decision-making on strategic
directions and the allocation of human and financial resources of the Organization. The
comparative aspect of the evidence, in combination with the reliability of the evidence
supported by external evaluations, would generate RBB-compatible evidence57.

Feasibility

This evaluation does not encompass a full-fledged feasibility study of the suggested reforms in
results-reporting, but we can outline the arguments. The feasibility of the proposal can be
analysed in terms of whether it is desirable, whether it is affordable, and whether it is politically
feasible.

In terms of desirability, the previous chapters have outlined the criteria for good practices in
results-reporting. So, to what extent does the proposal address these criteria?

 Focus: Implementing the recommendations would sharpen the focus by introducing a
portfolio analysis of outputs and outcomes and reducing the number of outcomes to be
reported on.
 Clarity: Introducing a clear terminology on the different steps in causal change in line
with good practices and clearly reflecting differences in underlying data collection and
analysis needs (e.g. the difference between outputs and outcomes) will improve results-
reporting.
 Transparency: The distinction between levels of results allows audiences to see the data
and evidence that support reports on outputs and outcomes respectively.
 Evidence-based: The QRR on outcomes can and must be evidence-based and report on a
limited set of outcomes with supportive evidence from mainly evaluations. The use of
intervention logics and theories of change in the process from planning to evaluation and
results-reporting will improve the quality of the evidence underlying the analysis of
achievement of expected results.
 Strategic direction: Fewer, clearer and focused reports can support strategic decision-
making; the portfolio analysis in the Annual Implementation Report and the outcome
analysis in the Quadrennial Report on Results help to direct the attention of
management and Governing Bodies to strategic issues.
 Harmonization: The changes in terminology, frequency and focus in reporting enable
UNESCO to plan, implement and account for results using principles more similar to
those of other actors. It will provide a good basis for further improvements in reporting
at project and country level in UNESCO.

57To simplify, RBB depends on three pillars: 1) the selection of the unit of analysis for comparison (e.g. an area of
work under an expected result); 2) reliable comparative evidence on the outputs and outcomes (and other aspects) of
the area of work; 3) appropriate costing of each area of work. This evaluation provides the basis for addressing
aspects 1 and 2, the latter being the most challenging element in RBB.

There is no overall estimate of either costs or benefits of reporting, but a number of interview
respondents roughly estimated that they dedicate some 10-20% of their time to results-
reporting. Significantly reducing that workload by moving to an annual implementation report
would represent a considerable saving.

In a meeting during the course of this evaluation the question was raised whether the UNESCO
Constitution would allow for a new format in results-reporting. Article VI 3b of the Constitution
stipulates the requirements in terms of results-reporting:

‘The Director-General shall prepare and communicate to Member States and to the
Executive Board periodical reports on the activities of the Organization. The General
Conference shall determine the periods covered by these reports.’

Given the above, there does not seem to be anything that ties UNESCO to the present frequency
and level of content of the EX/4 and C/3 reports. If the Executive Board and the General
Conference wish to have another frequency and another level of content in the results-reporting,
they can instruct the Secretariat accordingly.

In this evaluation we have not empirically analyzed the governance of results-reporting, which is
a different subject. However, as the report has clearly shown, there are linkages between
planning and reporting and Governing Bodies play a role in both parts of the cycle. The results
that the Organization reports on are the same results the Organization and the Member States
have agreed upon. Given the inconsistencies in the formulated expected results in the C/5, it
would be recommendable to clarify the division of labor between the Secretariat and Member
States, which means among other things that Member States clearly indicate the strategic
priorities, overall objectives and strategic directions of UNESCO’s work. On the basis of these
instructions, the Secretariat would be responsible for the technical formulation of expected
results (at outcome level) which should be driven by causal analysis instead of stakeholder
consensus.

As a final thought, it should be kept in mind that results-reporting is much more than the EX/4
(C/3) and there are many other arenas for interaction between the Member States and the
Secretariat, supported by different sources of evidence (SISTER, evaluation reports, research,
reports to and interactions at meetings of Governing Bodies of Category 1 Institutes,
intergovernmental programmes, etc.). At the level of the Organization, it is important that an
open dialogue between the Secretariat and the Member States continues to be fostered; a
dialogue supported by evidence, which is subject to continuous improvement so as to facilitate
decision-making processes on the main strategic issues of the Organization.

Annexes
Annex 1. Terms of Reference of the results-reporting evaluation

Joint IOS-BSP Formative evaluation on strengthening UNESCO’s results-reporting

Background note

Acronyms

AUD Audit Section Internal Oversight Service UNESCO
AusAID Australian Agency for International Development
BSP Bureau of Strategic Planning UNESCO
C/3 UNESCO overall results report (General Conference document)
C/4 UNESCO medium-term strategy document (General Conference
document)
C/5 Biennial programme and budget (General Conference document)
DFID Department for International Development
EO Executive Office (in UNESCO Sectors)
EVS Evaluation Section Internal Oversight Service UNESCO
EX/4 UNESCO Results Report
IADGs Internationally Agreed Development Goals
IOS Internal Oversight Service UNESCO
MAR Multilateral Aid Review
M&E Monitoring and Evaluation
MLA Main Line of Action
QCPR Quadrennial Comprehensive Policy Review
RBB Results-Based Budgeting
RBM Results-Based Management
SISTER System of Information on Strategies, Tasks and the Evaluation of
Results
UN United Nations
UNCT United Nations Country Team
UNDAF United Nations Development Assistance Framework
UNDG United Nations Development Group
UNDP United Nations Development Programme
UNEP United Nations Environment Programme
UNESCO United Nations Educational, Scientific and Cultural Organization
UNGA United Nations General Assembly
UNICEF United Nations Children’s Fund
UNU United Nations University

Background and rationale

Over the last ten years donors and the wider international development community have
increasingly focused their attention on the definition of clear objectives and targets for policy
interventions as well as on the issue of assessment of the extent to which they have been
achieved. As part of this trend, UN organizations have stepped up their efforts to introduce
results-based management (RBM) principles within their programming, monitoring and
reporting frameworks. In addition, UN organizations are under a variety of external and internal
pressures to position themselves more clearly within the UN system and vis-à-vis other
stakeholders. At UNESCO a number of initiatives have led to a strengthening of RBM functions
throughout the organization. Yet despite progress made, challenges remain.

A core element of UNESCO’s RBM system is the biennial results report (C/3), which is based on
six-monthly progress reports (EX/4). Figure 1 presents a succinct overview of the current cycle
of reporting. As stipulated in resolution 36 C/105, UNESCO’s General Conference has decided
that UNESCO will change its two-year cycle to a four-year programming (and reporting) cycle58.
The resolution responds to the UN General Assembly’s call on UN system organizations to align
their programming cycles with the quadrennial comprehensive policy review (QCPR) of UN
operational activities. The new UNESCO programme cycle will commence in 2014. This
transition presents an opportunity space for introducing substantive improvements in the
nature of results-reporting, with the aim to further strengthen its quality and relevance.

Figure 1. Current reporting cycle on the Organization’s results59

[Diagram: over a biennium, the biennial programme and budget (C/5) is followed by four six-monthly results reports (EX/4) and the biennial results report (C/3).]

The rationale for improving UNESCO’s results-reporting is reflected in documents and decisions
from different stakeholder groups in the UNESCO system and beyond.

‘Internal demand’ (Executive Board, General Conference, Secretariat)


- A number of recent decisions by UNESCO’s Executive Board call for improvements in
results-reporting and underlying systems of data collection and analysis feeding into the
reporting. As a result, a number of exercises conducted by BSP, IOS and others have
indicated specific areas for improvement. See for example:
o The time-bound action plan for implementing IOS’ recommendations on RBM
(190 EX/INF.21).
o An evaluative study on the quality of final narrative reports and external
evaluations in the UNESCO system (IOS/EVS/PI/128).
o An audit of UNESCO’s activity/project monitoring (IOS/AUD/2013/03).
- The introduction of RBB principles in the new programming cycle will have implications
for the (evaluative) evidence60 needed to inform financial and human resource allocation
decisions.

58 With respect to performance and results the budget will be renewed on a two-year basis.
59 In practice, the last six-monthly EX/4 exercise and the (draft) C/3 reporting coincide.

- Ongoing work on the introduction of new transparency measures will have implications
for the reporting on UNESCO programmes and projects/activities.
- The Independent External Evaluation finalized in 2010 has identified structural
weaknesses in UNESCO’s RBM system, which resulted in a number of follow-up
recommendations (partially met), targeted at strengthening RBM practices in the
Organization.

‘External demand’ (UN system, donors)


- Increased demand on the part of Member States, UN system partners and donors for
clear and credible information on results achieved by the Organization. Key remaining
challenges identified in a recent bilateral donor assessment of UNESCO61 include
‘improved impact identification with systematic results-reporting and evaluation, and
further articulation of its theory of change’, as well as ‘greater attention to results in
programming decisions’.
- Collaboration between UN organizations through participation in UN common country
programming processes, in particular the UN Development Assistance Frameworks and
One Plans, and corresponding joint UN results-reporting and accountability at country
level (e.g. through UNDAF annual reports and evaluations) have become more
important. There is a need for further harmonization of UNESCO’s results-reporting
principles with UN system-wide good practices in countries. The Quadrennial
Comprehensive Policy Review (UNGA resolution A/RES/67/226), which constitutes a
common framework for UN activities at country-level, calls for strengthening of RBM
across the system through improved reporting on the achievement of UN results and
‘nationally owned’ development priorities, the harmonization of results-reporting across
UN organizations and adoption of a common RBM terminology.62

In response to the abovementioned demands for improvements in UNESCO’s results-reporting
and underlying mechanisms, IOS has included an evaluative exercise on RBM practices in its
2012-2013 Evaluation Plan. Taking into account ongoing efforts within the Secretariat to
improve various elements of the RBM system, IOS and BSP have decided to undertake a joint
formative evaluation of UNESCO’s results-reporting.

Purpose

The main purpose of the present formative evaluation is to analyze the strengths and
weaknesses of the current results-reporting model used in the UNESCO system, and on the basis
of the analysis develop a proposal for improved results-reporting that will be in line with
common/good practices in the UN system.

The formative evaluation will result in two outputs:


1. A concise report presenting findings and recommendations regarding UNESCO’s results-
reporting and underlying evaluation and monitoring mechanisms; and
2. A proposal for the improvement of future results-reporting, including possible
alternative designs of the EX/4-C/3 documents for the 37 C/5 quadrennium (2014-
2017).

60 The current results-reporting almost exclusively relies on monitoring information, which is a major
shortcoming in the current system, and is a clear indication that changes are needed in the way
monitoring and evaluation is set up in the UNESCO system.
61 The 2013 UK Multilateral Aid Review Update.
62 “the United Nations development system […] [should] promote the development of clear and robust
results frameworks that demonstrate complete results chains that establish expected results at the
output, outcome and impact levels” (UNGA resolution A/RES/67/226).

The improved EX/4-C/3 reporting should be guided by the following criteria:
• Focus (simplifying UNESCO’s results framework and developing a more synthetic view
on UNESCO’s key achievements and comparative advantages)
• Clarity (showing the linkages between what UNESCO does, how this influences processes
of change, and subsequently contributes to achieving UNESCO’s expected results and
strategic objectives)
• Transparency (of programme and budget information, including regular programme and
extrabudgetary funds, following the principle of results-based budgeting)
• Evidence-based nature (improving the empirical evidence base of the results-reporting)
• Strategic direction (providing clear recommendations on improving value for money to
better inform decision-making processes on strategic priorities and the allocation of
resources)
• Harmonization (informed by common/good practices in the UN system)

The evaluation’s outputs are expected to feed into the work and decision-making practices of the
Secretariat of UNESCO (HQ, Field Offices and Category 1 Institutes) and UNESCO’s Executive
Board and General Conference. Moreover, the revised results-reporting will benefit Member
States, donors and the public at large in terms of improved transparency and accountability on
the Organization’s achievements and strategic priorities.

Scope

The evaluation will focus on the following questions:


- What are the main strengths and weaknesses in UNESCO’s results-reporting (principally
the EX/4 and C/3 documents)?63
o What are the intended needs and uses of the results-reporting by different
groups of stakeholders (e.g. Secretariat, Governing Bodies, donors, general
public)?
o What are the main gaps/shortcomings in terms of the type of information
included in UNESCO results reports when compared to other UN
organizations/UN system-wide good practices?
o What are advantages and disadvantages regarding the way in which information
is reported in UNESCO results reports when compared to other UN
organizations/UN system-wide good practices?
o What are key strengths and weaknesses of information systems underlying
UNESCO’s results-reporting when compared to other UN organizations/UN
system-wide good practices?
o What are the key levels of analysis (e.g. activity/project, country, programme,
expected results) at which particular data (1) need to be collected, (2) need to be
analysed, synthesized and reported?
- What are the key elements of an improved64 results-reporting system in the medium
term?
o In terms of format and content of the results reports (C/3 and EX/4)?
o In terms of information inputs needed for the reports as well as joint UN results
reports?
o In terms of underlying information collection (monitoring and evaluation)
mechanisms?
- Taking into account the above, how can these elements be introduced in a cost-effective
manner?

63 Taking into account the purpose of the EX/4 and C/3 documents, which includes (inter alia) providing
credible information to Member States on the implementation of the C/5 Programme and Budget,
including challenges and potential solutions.
64 With ‘improved’ we mean taking into account the principles presented in the previous section.

The questions emphasize the need for a comparative perspective. Within a context of UN reform
and an evolution towards increasing collaboration among UN entities, it is important to include a
comparative analysis in this exercise, with a view to contributing to system-wide harmonization.

Appendix 1 presents a number of important challenges that need to be considered in the
analysis. These aspects have been identified in a preliminary desk study covering Executive
Board Decisions, IOS evaluations and external assessments of UNESCO (e.g. the DFID MAR
2011/2013 and the AusAID Multilateral Scorecard).

Methodology

The formative evaluation will include the following methodological elements (tasks):

1. Background desk study

The evaluation will comprise a desk study of the following types of documents:
- UNESCO EX/4 and C/3 reports, UNESCO’s annual reports
- Internal progress/results reports by Sectors/Field Offices/Category 1 Institutes (and other
UNESCO entities) feeding into the EX/4-C/3
- UN organization results reports (UNICEF, UNDP, UNEP plus a number of other organizations
(e.g. UNU)), results reports of other organizations (bilateral donors, multilateral
international organizations)
- RBM guidelines (including M&E guidelines), ongoing discussions and available documents
on the introduction of RBB principles, monitoring reports, evaluation reports, documents on
institutional reform (mainly UNESCO, to some extent UNICEF, UNDP and other UN
organizations)
- UN system documentation (e.g. UNGA QCPR resolutions, UNDG guidance materials (UNDAF
and Delivering as One/ standard operational procedures))
- Joint UN results reports (UNDAF annual reports, evaluations, etc.)
- Academic and grey literature on good practices in RBM

2. Field visits and semi-structured interviews with stakeholders

The formative evaluation will include short visits to UN organizations’ Headquarters and offices
in the field. Semi-structured interviews will be planned with UN organization programme staff,
M&E staff, staff from the office of the Resident Coordinator, and representatives in UNCTs.
Interviews will be conducted by phone/Skype or face to face. A purposive sample of interviews
will be determined in the inception report.

The following locations and organizations will be covered by the evaluation:


- New York (UNICEF, UNDP)
- Rome/Geneva (ILO and/or FAO)
- Two countries with UNESCO presence at country level (UNESCO offices, UNICEF/UNDP
offices, offices of other UN organizations, UN Resident Coordinator Office, UNCT meetings)

In addition, a number of semi-structured interviews will be planned with other stakeholders:


- At UNESCO HQ: UNESCO staff (BSP, EOs), members of the Senior Management Team
- Donors and representatives from Member States (i.e. members of Governing Bodies)

3. Comparative assessment of UN results-reporting and underlying information systems

This exercise is supported by the previous two elements. A template will be developed and used
for recording information, which will be the basis for a systematic comparison of results-
reporting (and underlying information systems) across (the selected) UN organizations. The
exercise will cover organization-wide results reports as well as (to the extent available) country-
level results-reporting. Examples of variables/elements that are to be included in the template
are the following: description of the organization’s results-reporting system, basic structure of
individual results reports, sources of information feeding into the results reports, units of
analysis for data collection/analysis/reporting (project/country/theme…), consistency of
reporting (across themes/sectors/areas of work), quality of attribution/contribution analysis,
level of precision of reporting (magnitude, scale, location), use of indicators and targets, quality
of alignment (between achievements and organization-specific objectives, IADGs, …), processes
of quality control, and (intended) uses of results-reporting. The final list of elements will be
determined during the inception phase.

The comparative assessment will generate two intermediate outputs:


- A mapping65 of the current results-reporting systems in each of the selected UN
organizations.
- A comparative ‘balanced scorecard’ overview of results-reporting systems in the selected UN
organizations.

4. Proposal for improving the EX/4-C/3 documents and corresponding recommendations
for changes in the underlying information (monitoring and evaluation) system

Roles and responsibilities

The formative evaluation will be jointly managed by IOS and BSP. The modality of work is a so-
called hybrid model: joint implementation by UNESCO staff (IOS, BSP) and an external
consultant.

Broadly the division of labor in data collection, analysis and reporting is as follows:

Task 1. Background desk study: Consultant
Task 2. Interviews and missions: IOS, BSP and consultant
Task 3. Comparative assessment of UN results-reporting: Consultant with inputs from IOS and BSP
Task 4. Proposal for improving EX/4-C/3 documents: Consultant with inputs from IOS and BSP; IOS and BSP develop final version

A steering committee will be established. The steering committee will provide feedback on the
ToR, the draft report and the proposal for improving the EX/4-C/3 documents. The steering
committee will be informed at regular intervals about the progress of the study. Senior
management from IOS and BSP will be members of the steering committee.

In addition, a working group with representatives from Member States will be set up to ensure
that the Member States’ perspective, as key users of UNESCO’s results-reporting, is adequately
taken into account.

65A graphical description of the results-reporting system (flows of different types of information at
different levels in the system).

Schedule and deliverables

The formative evaluation will result in two outputs:

1. A final report (of max. 30 pages excluding annexes) presenting in a concise manner the
following elements:
a. Comparative assessment of results-reporting in (selected) UN Organizations
(including UNESCO).
b. Recommendations for improving UNESCO’s results-reporting and underlying
evaluation and monitoring mechanisms.
c. A draft proposal for improving the EX/4-C/3 documents for the 37 C/5
quadrennium.
2. (After consultation with all relevant stakeholders) A proposal for improving the EX/4-
C/3 documents for the 37 C/5 quadrennium (2014-2017).

The following (intermediate) deliverables are the responsibility of the consultant:


- A concise inception report with the evaluation’s methodology (including a simple evaluation
matrix, the assessment template for the comparative desk study and interview templates)
- Final evaluation report (point 1 listed above)

Tentative schedule (to be finalized in the inception phase):

Task (responsible for delivery; deadline):

- Finalization of ToR (IOS and BSP; beginning of September 2013)
- Call for proposals (IOS and BSP; beginning of October 2013)
- Selection of external consultant (IOS and BSP; end of October 2013)
- Inception report (Consultant; beginning of November 2013)
- Data collection, Tasks 1, 2 and 3 (IOS, BSP and consultant; November 2013 to January 2014)
- Draft report (Consultant; end of January 2014)
- Final report, after feedback and comments (Consultant; end of February 2014)
- Proposal for improving the EX/4 and C/3 documents, after consultation with relevant stakeholders (IOS and BSP; end of March 2014)

Qualifications of the consultant

- At least 10 years of professional experience in policy and programme evaluation in the
context of international development.
- Extensive knowledge of monitoring and evaluation systems in the context of multilateral
organizations, preferably the UN.
- Fluency in English (written and spoken).
- Knowledge of the role and mandate of UNESCO and its programmes.
- Part-time availability in the period October 2013 – February 2014 (including three short
missions: Paris (two times), one country to be selected).

Logistics

The external consultant will be responsible for his/her own logistics: office space,
administrative and secretarial support, telecommunications, printing of documentation, etc.
IOS and BSP will provide contact information for stakeholders to be contacted during the course
of the evaluation. The consultant will be responsible for setting up meetings, but on some
occasions will be assisted by IOS and BSP. With regard to field visits, IOS and BSP will assist the
consultant by providing documentation, setting up meetings, providing security clearance
documents, etc. The consultant is responsible for all travel-related costs, including transport to
and from the airport and transport to and from interviews. Regarding the latter, UNESCO will
provide assistance as much as possible in UNESCO Field Office locations.

IOS, BSP and the consultant are jointly responsible for collecting the documents required for the
desk study and comparative assessment. IOS and BSP will ensure that all UNESCO-specific
documentation will be provided to the consultant.

Appendix 1. Challenges in UNESCO’s results-reporting and underlying RBM system

On UNESCO’s results-reporting:
- Making sense of UNESCO’s results framework (C/4 and C/5): Clarity of alignment between
different objectives at different levels (expected results, strategic objectives, etc.) = ‘the
logic of what UNESCO wants to achieve’.
- Making sense of UNESCO’s activities, outputs and effects: Clarity of causal linkages
between UNESCO activities and outputs and their effects (outcomes and impact) at
different levels = ‘the logic of how UNESCO intends to achieve it’.
- Making sense of UNESCO’s overall performance: Attainment of objectives (aggregate
outputs and effects of UNESCO activities translated into contribution to and (partial)
attainment of objectives) = ‘the evidence on what has actually been achieved’.
- Inclusion/visibility of particular topics (e.g. interventions in post-conflict countries,
Category 2 Institutes).
- (RBB) Guidance on prioritization of financial and human resources (including among
other things considerations of cost-effectiveness, niche value and uniqueness of
UNESCO’s interventions and contributions to change in the world).
On planning, monitoring and evaluation mechanisms:
- Quality of empirical causal analysis on UNESCO projects/activities and outputs and their
effects (outcomes and impact), including the attainment of UNESCO objectives at
different levels (attribution and aggregation) [66]
o Quality of intervention logics at activity/project, programme, country, MLA (etc.)
level.
o Quality and logic of monitoring [67] (measuring what, at what levels and with which
tools).
o Quality and logic of evaluation [67] (evaluating what, at what levels and with which
tools and approaches).
o Clarification of the programme level (‘what is a programme’) and the link with
evidence (monitoring and evaluation) and guidance for decision-making
processes on (dis)continuation/prioritization of financial and human resources
(RBB).

[66] See Appendix 2 for further explanation of these concepts.


[67] A review of the sources of evidence, i.e. from monitoring and evaluation, (potentially) feeding into the
results-reporting is warranted, with a view to making better use of both and ensuring the complementarity,
comprehensiveness and quality of the evidence feeding into the results-reporting. The comparative advantages
and limitations of monitoring (i.e. data collection on key performance indicators, short narratives) and
evaluation (i.e. periodic assessments, analytical sense-making exercises, in-depth causal analyses) need to
be identified and acknowledged, resulting in a recalibration of the monitoring and evaluation system.

Appendix 2. Basic conceptual framework

White (2003) [68] deconstructed the issue of development agency performance into three key
challenges: attribution, aggregation and alignment. These concepts can be defined as follows:
 Attribution: To what extent do individual policy interventions generate particular
outputs contributing to processes of change leading to outcomes and impacts,
controlling for other factors influencing these processes?
 Alignment: To what extent can outputs (and outcomes, impacts) of the different
programmes and projects/activities of an agency be plausibly expected to contribute to
higher-level objectives? In other words, is there a clear and consistent logic between
higher-level objectives and the outputs (outcomes, impacts) of underlying
projects/activities that feed into these? Do the underlying projects/activities
comprehensively cover the phenomenon (e.g. sustainable development, preservation of
heritage, quality in education) captured in higher-level objectives?
 Aggregation: How is data at activity/project level aggregated to higher levels of
programmes, countries, entire portfolios (etc.)? What systems are in place to aggregate
information from lower to higher levels?

A well-performing agency should be able to show that its projects/activities lead to outputs,
outcomes and impact (attribution), that its projects/activities are logically aligned to higher-
level objectives without major gaps (alignment), and that adequate and credible empirical data
are available to support any claims regarding the (extent of) achievement of higher-level
objectives (aggregation).

Consequently, in order to assess how well UNESCO is able to credibly demonstrate its results, a
useful starting point would be to develop an initial perspective on how the Organization is faring
on these three issues.

[68] White, H. (2003). Using the MDGs to measure donor agency performance. In R. Black & H. White (Eds.), Targeting
development: Critical perspectives on the Millennium Development Goals. London: Routledge.

Annex 2. Key references and documents consulted

Literature

- Community of Practice on Results Based Management (2009) Sourcebook on Results
Based Management in the European Structural Funds, Brussels.
- Joint Committee on Standards for Educational Evaluation (1994) The Program Evaluation
Standards, Sage Publications, Thousand Oaks.
- Kusek, J. Z. and R. C. Rist (2004) Ten Steps to a Results-Based Monitoring and Evaluation
System, World Bank, Washington D.C.
- Lopez-Acevedo, G., Krause, P. and Mackay, K. (2012) Building Better Policies: The Nuts
and Bolts of Monitoring and Evaluation Systems, World Bank, Washington D.C.
- Mackay, K. (2006) “Institutionalization of Monitoring and Evaluation Systems to Improve
Public Sector Management”, EDC Working Paper Series, No. 15, Independent Evaluation
Group, World Bank, Washington D.C.
- Mackay, K. (2007) How to Build M&E Systems to Support Better Government, World Bank,
Washington D.C.
- Mayne, J. (2001) “Addressing Attribution through Contribution Analysis: Using
Performance Measures Sensibly”, Canadian Journal of Program Evaluation, 16(1): 1-24.
- Mayne, J. (2004) “Reporting on Outcomes: Setting Performance Expectations and Telling
Performance Stories”, Canadian Journal of Program Evaluation, 19(1): 31-60.
- Mayne, J. (2007) Best Practices in Results Based Management: A Review of Experience, A
Report for the United Nations Secretariat.
- OECD (2010) Glossary of Key Terms in Evaluation and Results Based Management, OECD,
Paris.
- Otley, D. (1999) “Performance management: a framework for management control
systems research”, Management Accounting Research, 10: 363‐382.
- Patton, M. Q. (2001) “Evaluation, Knowledge Management, Best Practices, and High
Quality Lessons Learned”, American Journal of Evaluation, 22(3): 329-336.
- Pawson, R. (2013) The Science of Evaluation: A Realist Manifesto, Sage Publications,
London.
- Robinson, M. (2007) “Performance Information Foundations.” in: M. Robinson (ed.)
Performance Budgeting: Linking Funding and Results, IMF, Washington D.C.
- Schwartz, R. and J. Mayne (2005) “Does Quality Matter? Who Cares about the Quality of
Evaluative Information?”, in: R. Schwartz and J. Mayne (eds.) Quality Matters: Seeking
Confidence in Evaluation, Auditing and Performance Reporting, Transaction Publishers,
New Brunswick.

UNESCO Documents

- UNESCO (2007) Report by the Director-General on the UNESCO Evaluation Policy and
Elaborated Elements of the UNESCO Evaluation Strategy. 176 EX/27.
- UNESCO (2008) Results-Based Management Guiding Principles.
- UNESCO (2010) Report on the Independent External Evaluation of UNESCO. 185 EX/18
Add.
- UNESCO (2011) Results-Based Programming, Management and Monitoring (RBM)
approach as applied at UNESCO. Guiding Principles. BSP/RBM/2008/1.REV.5.
- UNESCO (2012) RBM Monitoring and Reporting guidelines. BSP/RBM/2012/4 REV.1.
- UNESCO (2013) Audit of UNESCO´s Project and Activity Monitoring. IOS/AUD/2013/03.
- UNESCO (2013) Guidance Note on the Evaluation of UNESCO’s Extrabudgetary Activities.
IOS/EVS/PI/126.
- UNESCO (2013) A diagnostic Study of Evaluations of UNESCO’s Extrabudgetary
Activities. IOS/EVS/PI/128.

UNESCO planning and reporting documents

Planning
- 36 C/5 Approved Programme and Budget 2012-2013.
- 37 C/5 Draft Programme and Budget 2014-2017.

Reporting
- 36 C/3 Report of the Director-General 2008 – 2009.
- 37 C/3 Report of the Director-General 2010 – 2011.
- 186 EX/4 Report by the Director-General on the Execution of the Programme Adopted by
the General Conference.
- 187 EX/4 Report by the Director-General on the Execution of the Programme Adopted by
the General Conference.
- 189 EX/4 Report by the Director-General on the Execution of the Programme Adopted by
the General Conference.
- 190 EX/4 Report by the Director-General on the Execution of the Programme Adopted by
the General Conference.
- 191 EX/4 Report by the Director-General on the Execution of the Programme Adopted by
the General Conference.
- 192 EX/4 Report by the Director-General on the Execution of the Programme Adopted by
the General Conference.
- 194 EX/4 Draft Report by the Director-General on the Execution of the Programme
Adopted by the General Conference.
- UNESCO (2014) Memo BSP 3 January 2014.

Documents UN agencies

FAO

- FAO (no date) RBM PowerPoint presentation.
- FAO (2010) Programme Implementation Report. C 2013/8 PIR 2010.
- FAO (2012) The Evaluation Function of the Food and Agriculture Organisation,
Evaluation Peer Review.
- FAO (2013) Guidance Note on the Conduct of Country Evaluations.
- FAO (2013) Programme Evaluation Report C 2013/4.

ILO

- ILO (2011) Applying Results-based Management in the International Labour
Organisation, A Guidebook.
- ILO (2012) Evaluation Unit Guidance note, Monitoring and Evaluation of Decent Work
Country Programmes, Evaluation Unit.
- ILO (2012) Policy Guidelines for Results-based Evaluation.
- ILO Programme Implementation Report 2010 – 2011, International Labour Conference,
101st Session, 2012.
- ILO (2013) ILO Policy Information: Improvements in the quality of results-reporting.
GB.37/PFA/INF/5.

UNDP

- UNDP (2007) Evaluation of Results-Based Management at UNDP: Achieving Results.
- UNDP (2008) The UNDP Accountability System. DP/2008/16/Rev.1.
- UNDP (2009) Handbook on Planning, Monitoring and Evaluating for Development
Results.

- UNDP (2010) Independent Review of UNDP Evaluation Policy.
- UNDP (2010) Outcome Level Evaluation. A Companion to the Handbook on Planning,
Monitoring and Evaluating for Development Results for Programme Units and
Evaluators.
- UNDP (2011) Annual Report 2010 – 2011.
- UNDP (2011) The evaluation policy of UNDP. DP/2011/3.
- UNDP (2011) Annual report of the Administrator on the strategic plan: performance and
results for 2011; DP 2012/4.
- UNDP (2012) Project-level Evaluation. Guidance for Conducting Terminal Evaluations of
UNDP-supported GEF financed Projects.
- UNDP (2012) Annual report on evaluation, 2012. DP/2013/16.
- UNDP (2013) UNDP Strategic Plan 2014 – 2017. Integrated Results and Resources
Framework.
- UNDP (2013) Innovation in Monitoring and Evaluating Results. New York: UNDP.
- UNDP (2013) Report on the Global Programme 2009 – 2013: performance and results.
DP/2013/14.
- UNDP (2013) Annual Report 2012 – 2013.

UNICEF

- UNICEF (2003) Understanding Results Based Programme Planning and Management.
- UNICEF (2007) Programme policy and procedure manual.
- UNICEF (2012) Annual Report 2012.
- UNICEF (2013) Revised Evaluation Policy of UNICEF. E/ICEF/2013/14.
- UNICEF (2013) Annual report of the Executive Director of UNICEF: progress and
achievements against the medium-term strategic plan, E/ICEF/2013/11.
- UNICEF (2013) Review of UNICEF’s Development Effectiveness. Final Report 2009 –
2011.

Others

- ADB (2011) Managing for Development Results, Asian Development Bank, Manila.
- Joint Inspection Unit/ JIU (2004) JIU/REP/2004/4. Overview of the series of reports on
managing for results in the United Nations System.
- Joint Inspection Unit/ JIU (2006). JIU/REP/2006/6, Results-based management in the
United Nations in the context of the reform process.
- Joint Inspection Unit/ JIU (2011). JIU/REP/2011/5. Accountability Frameworks in the
United Nations System.
- Joint Inspection Unit/ JIU (2012). JIU/REP/2012/12. Strategic Planning in the United
Nations System.
- UNDAF Guidance Note Application of the Programming Principle to the UNDAF.
- UNDAF (2010) Standard Operational Format and Guidance for Reporting Progress on
the UNDAF.
- UNDG (2011) Results-Based Management Handbook: Harmonizing RBM concepts and
approaches for improved development results at country level.
- UNEP (2009) Evaluation Policy.
- UNEP (2010) Monitoring Policy.

Annex 3. Examples of theories of change in the UNESCO system

Example 1: Draft theory of change for the 1970 Convention on the Means of Prohibiting and
Preventing the Illicit Import, Export and Transfer of Ownership of Cultural Property

Source: Evaluation of UNESCO’s Standard-Setting Work of the Culture Sector, Part II – 1970
Convention on the Means of Prohibiting and Preventing the Illicit Import, Export and Transfer of
Ownership of Cultural Property,
http://www.unesco.org/new/en/unesco/about-us/how-we-work/accountability/internal-oversight-service/evaluation/evaluation-reports.

Example 2: Intervention logic of the International Institute for Educational Planning

Source: International Institute for Educational Planning, unpublished document.

Example 3: Theory of Change of UNESCO Priority Africa

Source: Evaluation of UNESCO Priority Africa,
http://www.unesco.org/new/en/unesco/about-us/how-we-work/accountability/internal-oversight-service/evaluation/evaluation-reports.

Example 4: Draft theory of change for the 1972 Convention on Cultural and Natural World
Heritage

Source: IOS/EVS on the basis of consultations with CLT.
