
The Oxford Handbook of Community Music

Brydie-Leigh Bartleet (ed.), Lee Higgins (ed.)

https://doi.org/10.1093/oxfordhb/9780190219505.001.0001
Published: 2018 Online ISBN: 9780190219529 Print ISBN: 9780190219505



CHAPTER

15 Measuring Outcomes and Demonstrating Impact:


Rhetoric and Reality in Evaluating Participatory Music
Interventions 
Douglas Lonie

https://doi.org/10.1093/oxfordhb/9780190219505.013.24 Pages 281–298


Published: 05 February 2018

Abstract
This chapter explores approaches to evaluation and measurement in community music, within a policy
environment increasingly focused on establishing the impact of public investment on individuals,
communities, and society as a whole. It seeks to critically engage with terminology and appraise
common models of evaluation and measurement advocated by a range of funders of community music
by reviewing policy documents and evaluation approaches promoted across the public and third
sectors, using recent history in the United Kingdom as a case study. Drawing on published examples of
community music evaluations, a variety of methods are discussed, including quantitative studies,
interpretative and qualitative approaches, and standardized tools. These are examined in relation to
their ability to convey project learnings, as well as the extent to which ‘impact’ can be attributed and
claimed. The findings are then discussed in relation to the broader literature relating to the impact of
community music, and an emerging field of applicable methods is proposed.

Keywords: evaluation, methodology, outcomes, impact, measurement, research, policy


Subject: Ethnomusicology, Music
Series: Oxford Handbooks
Collection: Oxford Handbooks Online
Evaluation in community music and the ʻoutcomes turnʼ

Within the characteristics of excellent community music practice, according to the Commission for
Community Music Activity, sits ‘an ongoing commitment to accountability through regular and diverse
assessment and evaluation procedures’ (Higgins, 2012, p. 83). Taken alongside an acceptance that reflection
and reflexivity are central to community music interactions for both practitioner and participant, it follows
that evaluation is something of an ‘essential’ feature of community music practice. That said, the micro-
level evaluation that occurs within community music interactions (e.g., how practitioners decide on which
method to use, or how participants make sense of the music they have produced), also seems to exist in a

separate sphere from the policy-level evaluation discourse across funding and commissioning of work.

Within a UK context, the notion that public spending should be subject to rigorous evaluation is not new;
investment in the ‘third sector’¹ has grown exponentially in recent years, in tandem with dramatic changes
in spending on culture, education, health, and other public services. The government published The Green
Book in 2003 (UK Government, 2003), which sought to establish a framework by which all public spending
could be allocated and evaluated, and it was recently updated to establish the value of public spending on
individual well-being. This shift towards cost-benefit evaluation models for programmes seeking to
increase well-being was followed by the development of Social Return on Investment frameworks, now
largely supported by independent think tanks and agencies seeking to promote and support research of
social value. The idea that personal and social value can be measured and accounted for in explicitly
financial terms is now a mainstream approach within central government funding for arts and culture, as it
is across many philanthropic organizations and foundations, who between them make up a very large
source of funding for community music activities in the United Kingdom. While financing models for
community music interventions or programmes clearly differ in diverse global contexts, a number of
international institutions and foundations have also moved towards an outcomes approach, requiring the
measurement and valuation of inputs, outputs, and outcomes.

However, there remain a number of inconsistencies in how frameworks are developed and applied, and the
speci c nature of evaluation models can vary depending on whether the source of funding for a particular
intervention is from the public, private, or third sectors. Indeed, within funding sectors there can be
divergence in the perceived usefulness of evaluation approaches. Even with a very firmly designed
evaluation framework (i.e., collectively the intended theory of change, methodology, and analysis
approaches to be adopted to establish the effects of community music activities), the cultural and political
implications of evaluation must also be considered, as Arvidson summarized:

Understanding evaluation … means not just mastering the technical side of producing and
analysing data, but a recognition that it needs to be placed in the context of a political arena, where
values, power, and resources constitute bases for differing interpretations and potential conflicts.
(Arvidson, 2009, p. 10)

The significance of values, power, and resources seems especially pertinent in a community music field that
regularly features work with people experiencing various stages of disengagement from mainstream
cultural participation. Rimmer (2009), based on ethnographic research with a community music project in
the North East of England, suggested that the funding criteria and associated evaluation frameworks within
a project meant that young people were strongly encouraged to take part in activities (i.e., performances or
structured sessions) in a way that was not wholly elective, and therefore undermined the participatory
values on which the project was based. He linked these approaches to a neo-liberal political discourse
seeking to demonstrate the instrumental value of arts participation for individuals and society, arguing that
this agenda has been co-opted by many working in the field in a relatively uncritical manner. Lonie and
Dickens (2015) extended Rimmer’s argument by exploring participant and practitioner reflections on the
extent to which policy affects practice, concluding that there are sophisticated ways in which policy
stipulations, including those relating to evaluation, are negotiated, repurposed, and applied in situ by
practitioners and others involved in providing participatory music opportunities, rather than ‘blindly’
applied. This suggests that power and affect should always be considered in evaluation design and practice,
but also that approaches are negotiable at all levels, and that discussion around the very principles and
theory associated with an intervention is a crucial first step in any associated evaluation design.

Based on a series of focus groups with practitioners, researchers, and funders of community music, Brown,
Higham, and Rimmer (2014) discussed how those working in community music have failed to develop
shared evaluation frameworks that are appropriate for exploring the values and intentions of community
music practice, most likely because there is continuing dissonance in establishing shared values and
practices of community music itself. As discussed in the introduction to the current volume, the authors
reported an ongoing need to separate evaluation as learning about how and whether certain processes or
outcomes are occurring from the generation of ‘good news stories’ or ‘advocacy’ data:

[There is a] perceived danger of a potential confusion between evaluation and advocacy. This arises
from the suspicion that (self-) evaluation processes can be managed and disseminated to ensure a
favourable outcome, reflecting glory on the provider, and underpinning subsequent bids for
project funding. This kind of ‘evaluation’ could be said to lack the rigour or objectivity of, for
example, an independent academic evaluation. (Brown et al., 2014, p. 40)

The suggestion that independent academic evaluation is necessarily more objective or rigorous than self-
evaluation processes should not go unchallenged. However, the broader point that evaluation processes
exist as part of a chain of expected activities and priorities, ranging from establishing change within a
project to disseminating learning beyond it, is key to understanding ‘where we are’ in relation to strong and
appropriate approaches to evaluation across participatory music-making. The bias towards positive
findings and evaluation as advocacy remains a clear barrier to the ideal standard of evaluation as providing
reliable evidence on project performance and informing future planning accordingly.

Nevertheless, there have been great strides forward in evaluation practice amongst funders and funded
organizations in vocabulary, terminology, and approach. Concepts that were once relatively specialist and
abstract are now more commonly understood and accepted, and the evaluation skills and knowledge of
practitioners and organizations have improved through the coordinated work of organizations across the sector.

Evaluation frameworks, the outcomes approach, and funder imperatives

The development of evaluation approaches discussed in this chapter is based on recent policy and funder
imperatives in the United Kingdom, but many aspects of the ‘movement’ towards ‘evidence-based’ policy
and funding decisions are replicable internationally. For example, the Art, Culture, and Social Justice
Network (2016) in the United States has as one of its ‘Action Points’ a collective review of approaches to
evaluation and evidence gathering, with the aim of developing guidelines to critical evaluation approaches
across art forms.² Clearly the precise development of approaches to evaluation will differ from country to
country, and will likely be responsive to political contexts, participatory programme funding structures, and
the extent to which coordinating bodies are linked to arts infrastructure. Similarly, there will be culturally
specific differences in how ‘guidelines’, roles, and responsibilities are set and negotiated. While it is not the
aim of this chapter to review international approaches to research and evaluation in participatory arts, this
is discussed more fully in Belfiore and Bennett (2008). The following section therefore presents examples
from a UK perspective, but many of the issues and examples will also have international relevance.

In the United Kingdom, an ‘outcomes approach’ to project evaluation has been spearheaded by the Charities
Evaluation Service and Big Lottery Fund (the latter distributes around £600 million per year and has a major
in uence on practice and policy within the UK third sector). This outcomes approach has been adopted and
adapted by other major funders. The Charities Evaluation Service published a guide to some of the most
commonly used ‘outcomes terminology’, which is worth sharing in the current chapter to establish a
common frame of reference:

• Monitoring is the systematic and routine collection of information.

• Outcomes are the significant changes that directly result from your activities. These changes may be
for individuals, organizations, communities, policies, practices, or the environment. For example, an
outcome might be that young people find employment or that more green spaces are provided for local
people.

• Outcome indicators are the signs or clues that you monitor in order to measure the progress you have
made towards your outcomes. They might include the number of young people who find employment;
the use of new green spaces; or the level of social activity by older people [ … ]

• The term method describes the different approaches that can be used to collect information on your
work, such as surveys or interviews (Parkinson & Wadia, 2010, p. 5).
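
To make the relationship between these terms concrete, the following minimal sketch (in Python, and purely illustrative: the outcome names, indicators, and methods are hypothetical examples echoing those in the list above, not part of the Charities Evaluation Service guidance) shows how a project might record an outcome alongside its indicators and data-collection methods:

```python
# Hypothetical sketch: recording outcomes, indicators, and methods together.
# Content is illustrative only, echoing the examples in the list above.
outcomes_framework = [
    {
        "outcome": "young people find employment",
        "indicators": ["number of young people who find employment"],
        "methods": ["follow-up survey", "interviews"],
    },
    {
        "outcome": "more green spaces are provided for local people",
        "indicators": ["use of new green spaces"],
        "methods": ["site observation"],
    },
]

for entry in outcomes_framework:
    print(f"Outcome: {entry['outcome']}")
    print(f"  indicators: {', '.join(entry['indicators'])}")
    print(f"  methods:    {', '.join(entry['methods'])}")
```

Writing the framework down in this structured way keeps the link between each outcome, its observable signs, and the data-collection method explicit from the start of a project.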

Arts Council England has also published a generic guide to self-evaluation, modelled closely on an outcomes
approach and designed to be used by organizations across all art forms and artistic experiences for both arts
audiences and participants. This guidance is based on six ‘stages’ of self-evaluation:

1. agreeing [on] the scope

2. defining evidence

3. collecting information

4. analysing

5. reaching conclusion[s]

6. writing or revising your plan for the future. (Arts Council England, 2014)

Both the Charities Evaluation Service and Arts Council England guidance highlighted how evaluation should
be focused on the processes taking place within interventions, providing data and evidence relating to how
these processes are connected to positive, negative, or no change. The ‘outcomes approach’, as discussed
earlier, suggests focusing on the measurable change that takes place for individuals, the community, or the
environment. In a community music context, this may include changes in musical ability or musicality, and
personal or social skills, as well as exploring the practices (i.e., activities) that have led to these changes, and
using this knowledge to inform future practice. This basic framework provides a useful overview of the
evaluation cycle, although Arts Council England and Charities Evaluation Service accept the limit to which
evaluation frameworks can be prescribed or suggested as a ‘one-size-fits-all’ solution to widely varying
interventions.

The National Foundation for Youth Music, one of England’s largest funders of community music activity for
children and young people, published an Outcomes Framework in 2014, seeking to ensure that all funding
applications relate to the five areas of change that it would like to support through its funding (i.e.,
musical, personal, social, workforce, and organizational changes). In doing so, the funder also insists that
applicants consider the relationship between proposed activities and intended change, as well as providing
details of how any intended change will be measured throughout projects, and how this information will be
used to influence and improve project delivery (National Foundation for Youth Music, 2014b, p. 14). This
framework acknowledged that all areas of change are linked together, and that the nature of participatory
music-making implies that musical development is innately linked to personal and social development,
which in turn are also linked. For example, a group drumming session may simultaneously improve (or
indeed worsen) drumming skills, personal confidence, or interpersonal communication. The other two
areas of change highlighted (i.e., positive change in the workforce and in organizations providing projects)
may also be linked to the same activity (e.g., running a seminar series may affect the knowledge and skills of
practitioners, as well as potentially improving the reputation of an organization). Across all outcome areas,
the framework is intended to encourage applicants and grant-holders to consider the influence of project
activities on the intended or achieved outcomes.

This funder has also published an ‘evaluation builder’ (National Foundation for Youth Music, 2014a), which
is a series of self-report and observation questionnaires and qualitative research guidelines based on
commonly sought outcomes across the projects it funds (e.g., musical ability, well-being, behaviour, and
engagement). The aim of the evaluation builder is for grant-holders, or others not funded by the National
Foundation for Youth Music but working in a similar way, to identify measurable dimensions of intended
outcomes and construct an appropriate evaluation framework. Many of these tools have been designed and
adapted in line with validated measures of similar outcomes across other sectors, such as subjective
well-being (Stewart-Brown & Janmohamed, 2008) or musical ability (Papageorgi et al., 2010).

The Youth Music Evaluation Builder also features many more evaluation tools and ‘toolkits’ within the
‘Inspiring Impact Hub’. Inspiring Impact is a campaign and set of resources run by eight organizations
seeking to support and/or represent voluntary organizations in the United Kingdom.³ Their aim is to
strengthen individual and organizational approaches to evaluation, improve the standards and quality of
evidence relating to impacts being generated and used by organizations, and ultimately increase positive
impact based on this work and improved sharing of learning and knowledge. The results include advice on
how to improve evaluation systems, as well as links to other sources of support. Initiatives such as these are
ever increasing and aimed at improving evaluation practice in third-sector funded interventions, including
many community music projects and activities.

Belfiore and Bennett (2010) warned of a ‘toolkit mentality’ in their critique of the development of impact
measurement approaches in the arts. The authors made an important distinction between the aim of impact
research being about understanding the complexity of people’s aesthetic experiences, and impact data
being used to make a case for future funding and advocate for a particular approach or organization. The
authors discussed how evaluation is often conducted in the absence of theory, generally happening in
isolation from contextual or literature reviews. Belfiore and Bennett (2010) also argued that much arts
evaluation is seeking to replicate a social (quasi-)scientific paradigm in a fairly uncritical way that belies the
opportunities afforded to these programmes and interventions to explore artistic process and aesthetic
experiences. Ultimately, they argue that striving for evidence-based policy systems in the arts is somewhat
futile, as there is little evidence that arts policy is based on evaluation or other research findings in the first
place. They claim that the purpose of evaluation in arts projects should be

a critical approach that aims at an open enquiry of the problems, both theoretical and
methodological, which are inherent in the project of understanding the response of individuals to
the arts and trying to investigate empirically the extent and nature of the effects of the aesthetic
experience. (Belfiore & Bennett, 2010, p. 139)

The ‘toolkit’ approaches just discussed could be said to exist in productive tension with the cautions laid out
by Belfiore and Bennett, if considered as tools with which to consider, reflect on, and interpret complex and
overlapping processes, rather than tools seeking to confirm predicted and positive effects. The ‘toolkit
mentality’ suggests that there are simple ‘off-the-shelf’ solutions to arts evaluation; however, the
documentation and support associated with many of these toolkits is relatively open about how
methodology and frameworks can be designed, adapted, and negotiated to meet the needs of individual
interventions. Indeed, it could be argued that the outcomes approach described earlier is intended to
encourage practitioners and project managers to critically explore research questions and set up
appropriate frameworks from the very beginning of project design, rather than regressing to simply
quantitatively measuring project outputs or over-reporting positive findings.

There is scope for many methods and designs within evaluation of participatory music projects: drawing
on the humanities, being theory-based, and being exploratory, as opposed to summative and advocacy
focused. What is often lacking in the execution of evaluation in these contexts is a thorough analysis and
interpretation of data, which may also be linked to an understandable skills gap amongst practitioners and
project managers. Whilst an understanding of ‘the outcomes approach’ and its associated language and
processes is now fairly well established across the third sector and many participatory music projects, there
is a need for greater and more critical understanding of the research process and the significance of analysis
and interpretation in how data are handled and ‘findings’ established.

A review of approaches to evaluation within participatory projects

In order to further understand the wide variety of approaches currently being used in the evaluation of
participatory music projects, a series of real-life examples of methodologies from projects of different
scales and contexts is provided below. These examples are not presented as best or worst practice (or
indeed, as a ‘toolkit’); they are simply different approaches to evaluations that have taken place in recent
community music programmes, followed by observations on how to ensure the evaluations are most useful.
The first section presents methods that are commonly used in evaluations with children and young people
in community music projects. The second provides examples of recent larger-scale evaluations and a
discussion of where certain approaches to evaluation have been taken, and to what effects.

Summary of popular methods in evaluation of participatory music projects

Literature and previous research reviews. To best contextualize an evaluation framework, it is important to
present and discuss other relevant research and evaluation that has taken place. This may be with similar
populations as the one the project is working with, or it could relate to similar practice. Although it is likely
unnecessary to provide an exhaustive review of all published literature or research that is similar to the
project that is taking place, being clear about the parameters of the review will allow readers to understand
the process and the expected outcomes (e.g., due to pragmatic reasons it may be time-bound or relate to
specific age groups or geographical locations).

Open and closed question surveys. Surveys are very popular in participatory projects and can be used to
explore self-perception of things like musical ability, self-expression, self-esteem, and well-being. Some
projects administer surveys at the beginning and end of projects to explore the ‘distance travelled’ by
participants over the course of a project. It is important to keep surveys very specific to the questions and
outcomes that the project is seeking to explore or develop, and to consider the abilities of those taking part,
as well as the appropriateness of using this tool based on the design of the project. For some participants
surveys could be off-putting and lead to initial disengagement. Others may not have high levels of literacy,
or there could be language barriers. These elements should be considered before deciding whether surveys
are an appropriate tool for the evaluation. Most survey analysis is quantitative, and the findings can be
easily presented in graphs, charts, or tables, which can be a useful way of communicating the changes
taking place in a project.
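
As a minimal sketch of the ‘distance travelled’ idea, assuming paired pre- and post-project scores on a single self-report measure (the participant IDs and scores below are invented), the change for each participant and the group average might be computed like this:

```python
# Hypothetical pre/post scores on one self-report survey item (e.g., a 1-8 scale).
pre_scores = {"P01": 3, "P02": 5, "P03": 2, "P04": 4}
post_scores = {"P01": 6, "P02": 5, "P03": 4, "P04": 7}

# 'Distance travelled': post score minus pre score, per participant.
changes = {pid: post_scores[pid] - pre_scores[pid] for pid in pre_scores}
mean_change = sum(changes.values()) / len(changes)

print("per-participant change:", changes)
print(f"mean distance travelled: {mean_change:+.2f} points")
```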

Reliability, validity, and the use of controls. When designing a survey, you may wish to consider whether the
measures are likely to be reliable (i.e., can they produce consistent results across the group(s) and contexts
they are to be used with?) and valid (i.e., can they accurately measure the dimensions or ‘outcomes’ you want
to explore?). Many of the frameworks described in this chapter are based on ‘validated’ measures, which
means that they have been tested and proven to measure what they are supposed to. A strong approach to
survey design will consider and address these elements and describe why certain measures are being used.
Another increasingly common element of quantitative evaluation design is the use of control groups. This is
when you use the same surveys with a group of individuals who are not participating in the same
intervention (i.e., they may be doing other activities or their ‘normal’ activities). The measurement scores
can then be compared to indicate the independent effect of the intervention. These elements are mentioned
as potential things to consider or discuss when designing an evaluation, but the use of control groups in
particular can make the process practically and ethically complicated, so they should be thoroughly researched
and considered before being built into survey designs.
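
A hedged sketch of the control-group logic just described follows: the same change scores are computed for an intervention group and a non-participating control group, and the two are compared with a two-sample test. The data are invented, and a real design would need the practical and ethical scrutiny noted above.

```python
from scipy import stats

# Invented 'post minus pre' change scores for each group.
intervention_change = [3, 1, 2, 4, 2, 3]
control_change = [1, 0, 1, 2, 0, 1]

# Welch's t-test: is the difference in mean change larger than the
# chance variation seen in the control group?
t_stat, p_value = stats.ttest_ind(intervention_change, control_change,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# As the chapter stresses, a small p-value *indicates* an independent effect
# of the intervention; it does not prove one.
```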

Semi-structured interviews. Interviews in social science research have been described as ‘conversations with
a purpose’ (Burgess, 1984, p. 102), which highlights why it is important to think clearly in advance about
what topics will be discussed or questions asked. The level of structure applied to interviews depends on
who they are happening with and the amount of data to be generated. For most participatory music projects,
it is generally acceptable to base interviews on three or four key topics or questions that relate to the
experiences of the participants (or practitioners, or other stakeholders that are relevant to the evaluation),
and how they make connections with their experiences on a project, and any changes that they understand
as taking place. Interviews can be a really useful way of exploring whether the way that a project was
intended to have an effect is actually being experienced by those involved.

Focus groups. Similar to interviews, focus groups are facilitated conversations structured around a set of key
questions or issues, but they occur with a group of people. They can be a practical way to gather lots of
perspectives at one time, and are often suited to participatory music projects because group activities are
relatively common and participants may be used to interacting in this way. As with interviews, it is
important to try not to discuss too much if it will generate data that will then become unmanageable for the
evaluation. The practicalities of using a focus group method will vary from project to project, but it is also
important to remember that more participants taking part will make the group harder to facilitate and the
data more difficult to record and analyze (i.e., keeping the number of participants below double digits is
therefore a general rule of thumb).

Video observations are becoming more popular as technology enables videos to be recorded in a relatively
cheap and accessible way. Video observation can be a practical way of recording the various processes at play
in a community music interaction, both between practitioners and participants, and between participants
themselves. They can provide a good way of demonstrating musical progression by making recordings at
various stages of a project. As with the other methods discussed earlier, it is important to ensure that clear
consent is obtained before recordings are made, especially if they are to be shared any further than, for
example, in reports or blogs that are publicly available or shared with funders. It is also important to
remember that video recordings are a source of data, and they still need to be analyzed like any other. It may
be appropriate to look over videos with practitioners or participants themselves and to make a note of their
interpretation of the activities taking place or the outcomes of these activities.
Fieldwork diaries are also increasingly common in many projects. These are generally kept by the
practitioners in a project, where they are asked to use a diary at the end of each session to reflect on two or
three key questions relating to the aims of a project. These diaries can then be submitted at regular intervals
to a project manager or evaluator, or submitted at the end of a project. This can be especially useful in a
community music setting, as most practitioners are used to working in a reflective way and thinking about
(if not always writing down) why and how certain approaches in their work have led to certain outcomes or
changes in an individual or group. By ensuring that these reflections are collected in a regular and
systematic way, fieldwork diaries can be a relatively ‘natural’ way of collecting detailed qualitative data
about the progress and practices in a project.

While this is a relatively short list of some of the more common methods employed in the evaluation of
participatory music projects, it serves to highlight how some clear thought about which methods and
approaches may be suitable to a project can help to clarify the type of data that will be generated and make
the practical implementation of an evaluation framework much more manageable for all those involved.

Having provided a summary of some of the more common methods, there are also a number of common
‘pitfalls’ that can emerge when participatory music projects are engaging in self-evaluation, or indeed,
commissioning an external evaluator.

Context and research questions. As is suggested by the inclusion of a relevant literature review, it is important
to provide the context of the work when writing an evaluation framework or report. Many evaluations seem
to exist completely in isolation, when in fact they are the product of a particular organizational strategy, the
agenda of a funding body, or other specific socio-historic factors. It is crucial to make the context in which
the work is taking place very clear to the reader, and this will also help to explain why particular research
questions are framing the evaluation over others (e.g., if working with young people in a youth justice
setting, what factors may be completely out of the control of the project and thus out of the scope of the
evaluation?). Explicitly writing the research questions into the framework and report also helps the
evaluator to explain the parameters of a particular evaluation, and respond in the conclusion as to how far
these questions have been answered in the work.

Over-claiming. Many project evaluations will discuss findings in terms of ‘evidence’ or ‘proof’ of particular
effects. It is generally quite difficult to prove the long-term or lasting impact of a particular project or
intervention, based solely on the fact that many (if not all) projects are working with individuals with
complex lives and many influences on their behaviour. When presenting analysis and findings in an
evaluation, it is important to recognize that the reader will decide how far they are convinced that the data
‘prove’ that particular changes have taken place. In the guidance literature previously discussed, findings
from particular methods are very deliberately referred to as ‘indicators’ rather than evidence, because they
will indicate whether and to what extent particular changes have taken place. Stronger evaluations will be
very clear about the limits of the methodology and also discuss negative or neutral findings, so that readers
can decide how much they are convinced by the argument being constructed. This can also provide helpful
indications of how methods could or should be changed, or how research questions could be adapted in
future work.

Interpretation is essential. Many weaker evaluations will provide lots of detailed data, whether quantitative or
qualitative, and provide very little or no interpretation or analysis of the data. This may be a graph showing
changes in self-perceived musical ability, a short video extract of a session, or a quote from an interview or
feedback form. In order for these to be considered useful findings they must be interpreted by the evaluator
or author. This should also go beyond a basic description of what the data represent and instead explain the
meaning of the data. Why is this being presented instead of something else, and what is this telling the
reader? The data in an evaluation are unique to a particular project and will have been generated in a specific
context; it is therefore important to apply this to the interpretation of findings. When reporting, an
interpretation of findings should also take place at the end of relevant sections (i.e., what do these findings
taken together imply for a project?) and in the overall conclusion of a report (i.e., to what extent have the
research questions been answered? Which findings are stronger than others? Which are the questions that
remain? What were the limitations of particular methods?).

Drawing conclusions and making recommendations. Linked to the points already discussed, many evaluation
reports do not adequately provide an overall interpretation of the findings, or do not relate the findings back
to the original research questions or project aims. The evaluation framework is there to provide a structure
to the methods, practices, and reporting of a project evaluation, but it is also unlikely that a framework has
been ‘resolved’ completely, or that there is not some clear learning that has emerged that may be useful for
future work. The conclusion is the place to present this overall assessment of the extent to which the
framework has been resolved and research questions have been answered, and what gaps remain for future
work. It is also quite common to outline a number of recommendations, either for those stakeholders
involved in the project (i.e., practitioners, organizations, partners, funders, or even participants
themselves) or for other organizations looking to engage in similar work. If the report is being shared with
funders or other researchers, it can also help to set the agenda and be considered alongside the findings and
conclusions of many other projects.

Ethics

It is very important to consider the ethical implications of evaluation before any methods are applied to a
project. Thomas and O’Kane (1998) provide an extensive discussion of the ethics of research and evaluation
with children and young people, which are particularly significant given the different stages of development
across this population and the extent to which they can be said to provide informed consent to take part in
evaluation processes. Whilst it is unlikely that the methods described earlier will be damaging to
participants, it may be that taking part in an evaluation activity (whether a survey, interview, or
observation) could make participants feel uncomfortable or exploited in some way. It is therefore essential
to have a clear ethical policy from the outset, one that outlines the risks of the evaluation activities and the
processes that will be put in place should certain negative responses or situations occur. At the very least it
is important that participants are aware that they are taking part in research or evaluation activity, that they
consent to do so, and that they are free to discontinue their involvement at any time. A generally
straightforward way of doing this is to provide participants with a short information sheet outlining the
evaluation methods and processes, and include a consent form so they can indicate that they understand
and agree to take part. Clearly this should be obtained from a parent or carer if working with very young
children, although attempts should be made to inform individuals at all stages of development and all
abilities of their role and participation in project evaluation. Some funders will require information on
whether the evaluation proposals will be subject to review by an ethics board or committee, and in some
contexts it is best practice to set up an institutional review board or steering group to oversee and advise on
the evaluation approach. The Social Research Association provide useful guidelines for projects seeking to
establish an ethical policy or framework for their evaluation activities.⁴

Approaches to evaluation in larger programmes

Youth Music Mentors


Youth Music Mentors (YMM) was a programme funded by the Department for Culture, Media, and Sport,
which aimed to improve the life chances of young people at risk of social exclusion. It was delivered by
eighteen community and voluntary music organizations across England from March 2006 to March 2011.
The programme objectives were as follows:

• To deliver high quality music-based mentoring provision for young people in challenging
circumstances

• To provide links to high quality music-making experiences

• To engage and train inspirational music mentors appropriate to the needs of the participants

• To provide young people with opportunities to develop their resilience, social, and emotional skills,
and enable them to lead successful and fulfilling lives

• To help motivate and prepare young people for routes into education, employment, or training.

To meet these objectives, each organization would ‘match’ a young person to a mentor, working across any
genre or style of music, for a minimum of ten sessions. Crucially, near the beginning of the mentor
relationship, ‘mentees’ would also explore and agree on personal and social development goals to be
developed alongside their musical progression. This focus on personal and social development could be
intrinsic to musical development (e.g., confidence to perform or compose) or related to separate issues (e.g.,
help accessing housing advice from a local agency).

An independent external evaluation of the YMM programme was commissioned (Deane, Hunter, & Mullen,
2011) which employed a qualitative methodology to explore programme delivery and outcomes. Youth Music
also designed and implemented quantitative research tools intended to explore the effects of music
mentoring on mentees’ musical ability and agency (Lonie, 2010, 2011).

The qualitative evaluation by Deane and colleagues began by outlining a series of six evaluation objectives to
explore across the projects. These were informed by the overall objectives of the programme (discussed
earlier), as well as a thorough review of literature and research relating to mentoring and music as an
engagement tool.

This resulted in the development of interview schedules to be conducted with mentors, mentees, project
managers, and the national programme co-ordinator. The first set of interviews was conducted with all
projects and analysed. Based on these findings, six projects were selected for further interviews with
appropriate staff and participants. In total, 150 interviews were conducted across the evaluation. All notes
and transcripts were analysed according to the topic guide headings of the semi-structured interviews, and
they were structured according to agreed-upon evaluation objectives. Whilst this provided an analytical
framework for handling and analysing a large amount of data, the intention of the research was to explore
differences and similarities in accounts, as well as to identify where certain aspects of mentoring practice
were perceived to be particularly associated with intended outcomes.

The evaluators committed to the Social Research Association’s (2003) Ethical Guidelines in their approach
and, in their words: ‘Our key ethical principle boiled down to: data belongs to the people we’ve learnt it
from, we negotiated it where we have used it directly; we explained to its owners what we’re going to use
their data for, and did not exceed their permission.’ (Deane et al., 2011, p. 22)
The quantitative framework consisted of a self-report tool, ‘track-record’, seeking to establish any change
in score or ‘distance travelled’. This was given to mentees to complete within the first three sessions, and
again within the final three sessions. The questions were arranged around an illustration of a vinyl record
split into five sections (one per question) and a scoring structure of one (negative, on the outside of the
record) to eight (positive, near the centre). The questions were as follows:

• How would you rate your musical ability at the moment?

• How much do you know about opportunities to progress your music-making?

• Thinking about your life in general, how much do you agree with the following statements?

• I feel listened to by people around me.

• I feel like I make decisions that are good for me.

• I feel like what I say and do will make a difference to my life. (Lonie, 2010, p. 9)

At the follow-up point, mentees were also asked to rate the extent to which they had developed the
following skills:

• Learned to work with other people

• Managed to turn up on time

• Respect for other people’s views

• Have been able to express myself (Lonie, 2010, p. 9)

Some mentees were experiencing particularly challenging circumstances, including young people with poor
literacy levels; mentors were therefore entrusted to collect the quantitative data in a way that was
appropriate for each individual young person and would not risk dissuading mentees from further
participation and engagement (i.e., avoiding the use of paper-based tools if they would be off-putting).

Each project was encouraged to collate and analyze the ‘distance travelled’ data for the groups it was
supporting and submit the information to Youth Music, which would collate the data across the full
programme and perform analysis seeking to identify change over time and between groups (full discussion
of the methods and findings is provided in Lonie, 2010, and Deane et al., 2011).
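
A minimal sketch of that collation step, assuming each project submits per-mentee pre and post track-record scores (the column names and values here are hypothetical, not the analysis Youth Music actually ran), might look like this:

```python
import pandas as pd

# Hypothetical submissions from two projects: track-record scores,
# 1 (low) to 8 (high), before and after the mentoring relationship.
records = pd.DataFrame({
    "project":    ["A", "A", "A", "B", "B"],
    "pre_score":  [3, 4, 2, 5, 3],
    "post_score": [6, 5, 4, 6, 5],
})
records["change"] = records["post_score"] - records["pre_score"]

# Change over the whole programme, and between groups (here, between projects).
print("programme-wide mean change:", records["change"].mean())
print(records.groupby("project")["change"].agg(["mean", "count"]))
```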

Musical Pathways

Musical Pathways was an exploratory study of young people in the criminal justice system engaged with a
creative music programme (De Viggiani et al., 2013). There were three main aims to the programme: (1)
to explore, using qualitative methods, the meanings and values that young people in justice settings
attribute to music, including links to lifestyle, behaviour, and health; (2) to assess the feasibility of
conducting a quantitative evaluation of the impacts of music interventions in youth justice settings; and (3)
to explore pre- and post-programme trends in health, well-being, and social inclusion for individuals and
projects.

The programme explored 15 projects delivered to 120 participants (118 took part in the evaluation) across
eight youth justice settings, and it represented a mixture of participatory music-making approaches
involving a wide range of genres and instruments, mostly in small groups. Ethical approval was granted
from the Ministry of Justice, the National Offender Management Service, and the Ethics Committee of the
University of the West of England Faculty of Health and Life Sciences.

The qualitative component of the programme was mainly ethnographic in nature, with a number of
researchers conducting participant observation exploring links between behaviour (musical and otherwise),
well-being, and social inclusion. A number of semi-structured interviews were also carried out with musicians,
participants, and staff, and thirteen focus groups were conducted with a sample of participants one week
after their first music session.

Interviews, focus groups, and participant observations were all transcribed and coded according to
frequently recurring themes. These themes were checked by other researchers working on the project to
ensure agreement and consistency in the findings. This approach to coding is systematic, and sought to
elicit commonalities in the understanding, experience, and world views of those taking part in the research
(participants, staff, and researchers).
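
One simple way to operationalize that consistency check, sketched below under the assumption that two researchers independently assign one theme code per excerpt (the themes and codes are invented for illustration), is to compute their percentage agreement:

```python
# Invented theme codes assigned independently by two researchers
# to the same five transcript excerpts.
coder_a = ["identity", "escape", "skills", "identity", "escape"]
coder_b = ["identity", "escape", "identity", "identity", "escape"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
rate = 100 * agreements / len(coder_a)
print(f"agreement: {agreements}/{len(coder_a)} excerpts ({rate:.0f}%)")
# Disagreements (here, the third excerpt) would be discussed and the coding
# frame refined until the team codes consistently.
```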

The quantitative element of the programme consisted of a set of validated questionnaires including the
General Health Questionnaire (GHQ-12) (Goldberg & Hillier, 1979); the Warwick-Edinburgh Mental
Wellbeing Scale (WEMWBS) (Stewart-Brown & Janmohamed, 2008); the Clinical Outcomes in Routine
Evaluation Outcome Measure (CORE-OM) (Evans et al., 2000); and the Social Inclusion Score (SIS) (Secker
et al., 2009). The attempt to track participants at three time points across the programme proved difficult,
as many participants dropped in and out of provision, and only 17 per cent took part in every session. On this
basis, the quantitative data collected were not considered reliable enough to justify a full analysis. As a
feasibility study it indicated that, whilst participants were generally able to fill out the surveys and
understood what was being asked, quantitative study designs in such settings need to be very specific to the
nature of the intervention, and it would be wise to include other methods if possible. The researchers
reported that some participants appeared unsettled by questions in the CORE-OM relating to suicidal
ideation, engaging in self-harm, and committing acts of violence; this measure was therefore removed after
the first phase of surveying.⁵

This is an example of a very strong evaluation framework, using qualitative methodology to explore the
impact of creative music-making for participants, practitioners, and settings. Using participant observation
as the main method for exploring interactions and effects is a particularly interesting approach, appropriate
to the restrictions of a youth justice setting. To have included the quantitative feasibility study, rather
than applying an untested quantitative framework and methodology that was wholly related to health
and behaviour outcomes and not to music, is also commendable. The researchers have attempted to make
the methods as inclusive as possible within the circumstances, although, considering the dominance of
interpretative methods, it is possible that the established qualitative methods could have been adapted to be
more creative, or musically focused. Overall, the study provides as much of an exploration of how evaluation
of participatory music projects can be conducted in youth justice settings as it demonstrates and articulates
the impact of such approaches. These are both very useful contributions to the field.

Discussion: Rhetoric and reality in participatory music interventions

The evaluations and approaches discussed in the preceding sections illustrate a range of ways to explore the impact
of participatory music-making. It does not seem sensible to suggest one particular approach over another,
as each intervention will require a design that is appropriate to the needs and abilities of its particular
participants and practitioners. That said, there are a number of observations that can be shared in order to
develop thinking and discussion, based on the huge amount of work already done to develop evaluation
practice in participatory music-making.

The arguments put forward by Belfiore and Bennett (2010) are convincing, and there is an undeniable
tension between positivist and interpretivist approaches to evaluation design. The result is often a set of
findings that both fail to represent the diversity and complexity of musical interactions and their effects,
and oversimplify ‘effects’ by applying a quantitative ‘cause and effect’ rubric.

That said, the lack of clear ‘logic models’ (i.e., explicitly setting out why and how certain approaches will be
undertaken to achieve certain effects, as well as how these can be observed and measured) still seems to
limit approaches to evaluation in participatory music-making. Coupled with this is a general lack of
research discourse in the setting up of evaluation. If project managers, practitioners, and participants were
to discuss and agree on research questions in advance, the process of establishing which are the best
methods of exploring (if not answering) these questions would take us somewhere closer to logical planning
and stronger evaluation.
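
To illustrate what an explicit logic model could look like in practice, here is a minimal sketch; the structure and content are hypothetical examples for illustration, not a template prescribed by any funder or by this chapter:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModelStep:
    activity: str                  # what the project will do
    rationale: str                 # why this is expected to lead to change
    intended_outcome: str          # the change it should produce
    indicators: list = field(default_factory=list)  # how change will be observed

model = [
    LogicModelStep(
        activity="weekly group songwriting sessions",
        rationale="collaborative composition requires negotiation and listening",
        intended_outcome="improved interpersonal communication",
        indicators=["practitioner diary observations", "participant interviews"],
    ),
]

for step in model:
    print(f"{step.activity} -> {step.intended_outcome}")
    print(f"  because: {step.rationale}")
    print(f"  observed via: {', '.join(step.indicators)}")
```

Writing each activity down with its rationale, intended outcome, and indicators before delivery begins is one way of forcing the research-question discussion the paragraph above calls for.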

Being interpretative is not the same as being random, and simply allowing musical interactions and their
effects to happen. Whether adopting a full participatory action research design, or simply using short
interviews to explore a person’s experience, a critical account of why this approach is being used and what it
hopes to elicit will allow for the presentation of findings to be more strongly interpreted by both author and
reader. It is in the presentation of accounts based on an interpretation of phenomena that a contribution
to knowledge is made, whether based on complex statistical analysis or an ethnomusicological
observation.

Based on the developments presented in the earlier part of this chapter, a continuing movement towards
evaluation toolkits across the third and cultural sectors appears likely. However, these tools can be used
helpfully, assuming they are also approached critically. It can be argued that the intention of many toolkits
is to provide a range of modes of enquiry, to spark ideas and expand thinking when answering research
questions, rather than to impose rigid frameworks on wholly unsuitable situations that generally involve
social, psychological, and musical processes all at once.

Another observation from the approaches presented earlier is how often evaluation design and
implementation seem to occur in isolation. This can be claimed based on the general lack of references to
other projects, approaches, or tried and tested methodologies in many published evaluations. Indeed, more
could and should be done to bring people together to discuss evaluation approaches and share successes,
failures, and learning so that individuals and organizations seeking to conduct evaluation are not constantly
‘reinventing the wheel’.

Communicating a range of opportunities to improve evaluative practice in the discursive mode of this
chapter is of course a slightly false proposition when compared to real-life situations, which must be
ethically led by the immediate needs of participants, not driven towards scripted ideals of research practice.
Reflection remains a key component of best practice for any community musician, but reflective practice
can be structured using concepts, methods, and tools that enable us to better understand the complex
human and musical dimensions of participation. The ideal reality of the future is one where evaluative
approaches and frameworks are critically explored, theoretically and practically, and everyone involved in a
participatory music-making intervention is included and engaged in this process.

Reflective questions

1. How can all those involved in participatory music interventions be encouraged to think critically
about evaluation processes?

2. To what extent are evaluation ‘toolkits’ useful? What are some of the opportunities and limitations of
published ‘toolkits’?

3. How can evaluation frameworks be designed to achieve a balance between logical ‘cause and effect’
approaches and more exploratory evaluation designs?

Key Sources

Inspiring Impact website: http://inspiringimpact.org/

National Foundation for Youth Music. Taking an outcomes approach: Guidance on Youth Music’s outcomes
framework. Retrieved from
http://network.youthmusic.org.uk/sites/default/files/users/Outcomes/YM_OutcomesGuidance_web.pdf

Social Research Association. Ethical guidelines. Retrieved from http://the-sra.org.uk/research-ethics/ethics-guidelines/

Notes
1. The ʻthird sectorʼ refers to charities and not-for-profit organizations, often providing services complementing the work of,
and seeking investment from, public sector and private sector sources.

2. Art, Culture, Social Justice Network website: http://artculturejustice.com/action-area/evaluation/

3. Inspiring Impact website: http://inspiringimpact.org/

4. Social Research Association website. Ethical guidelines. Retrieved from: http://the-sra.org.uk/research-ethics/ethics-guidelines/

5. A schedule designed for young people has since been published by CORE: http://www.coreims.co.uk/index.html
References

Arts Council England (2014). Self-evaluation toolkit. Retrieved from http://www.artscouncil.org.uk/selfevaluation/background-self-evaluation/what/

Arts, Culture, and Social Justice Network (2016). Action area: Evaluation. Retrieved from http://artculturejustice.com/action-area/evaluation/

Arvidson, M. (2009). Impact and evaluation in the UK third sector: Reviewing literature and exploring ideas (Working Paper 27). Birmingham: Third Sector Research Centre, University of Birmingham.

Belfiore, E., & Bennett, O. (2008). The Social Impact of the Arts: An Intellectual History. Basingstoke: Palgrave Macmillan.

Belfiore, E., & Bennett, O. (2010). Beyond the ʻtoolkit approachʼ: Arts impact evaluation research and the realities of cultural policy-making. Journal for Cultural Research, 14(2), 121–142.

Brown, T., Higham, B., & Rimmer, M. (2014). Whatever happened to community music? AHRC Research Network Report. Working Paper. Retrieved from https://ueaeprints.uea.ac.uk/47880/

Burgess, R. (1984). In the Field: An Introduction to Field Research. London: Allen and Unwin.

De Viggiani, N., Daykin, N., Moriarty, Y., Pilkington, P., & Superact CIC, Live Music Now South West, Big Lottery Fund. (2013). Musical pathways: An exploratory study of young people in the criminal justice system, engaged with a creative music programme. Project Report. Bristol: University of the West of England, Big Lottery Fund.

Deane, K., Hunter, R., & Mullen, P. (2011). Move On Up: An Evaluation of Youth Music Mentors. London: Youth Music.

Evans, C., Mellor-Clark, J., Margison, F., Barkham, M., Audin, K., Connell, J., & McGrath, G. (2000). CORE: Clinical outcomes in routine evaluation. Journal of Mental Health, 9(3), 247–255.

Goldberg, D. P., & Hillier, V. F. (1979). A scaled version of the General Health Questionnaire. Psychological Medicine, 9(1), 139–145.

Higgins, L. (2012). Community Music: In Theory and in Practice. New York: Oxford University Press.

Lonie, D. (2010). Attuned to engagement: The effects of a music mentoring programme on the agency and musical ability of children and young people. Paper 1. London: Youth Music.

Lonie, D. (2011). Attuned to engagement: The effects of a music mentoring programme on the agency and musical ability of children and young people. Paper 2. London: Youth Music.

Lonie, D., & Dickens, L. (2015). Are you listening? Voicing what matters in non-formal education policy and practice. In M. Blazek & P. Kraftl (Eds.), Children’s Emotions in Policy and Practice: Mapping and Making Spaces of Childhood. Basingstoke: Palgrave Macmillan.

National Foundation for Youth Music (2014a). Youth music evaluation builder. Youth Music Network. Retrieved from http://network.youthmusic.org.uk/learning/youth-music-evaluation-builder

National Foundation for Youth Music (2014b). Taking an outcomes approach: Guidance on Youth Music’s Outcomes Framework. London: Youth Music.

Papageorgi, I., Creech, A., Haddon, E., Morton, F., De Bezenac, C., Himonides, E., et al. (2010). Perceptions and predictions of expertise in advanced musical learners. Psychology of Music, 38(1), 31–66.

Parkinson, D., & Wadia, A. (2010). Assessing change: Developing and using outcomes monitoring tools. London: Charities Evaluation Service.

Rimmer, M. (2009). ʻInstrumentalʼ playing? Cultural policy and young peopleʼs community music participation. International Journal of Cultural Policy, 15(1), 71–90.

Secker, J., Hacking, S., Kent, L., Shenton, J., & Spandler, S. (2009). Development of a measure of social inclusion for arts and mental health project participants. Journal of Mental Health, 18(1), 65–72.

Social Research Association (2003). Ethical guidelines. Retrieved from http://the-sra.org.uk/wp-content/uploads/ethics03.pdf

Stewart-Brown, S., & Janmohamed, K. (2008). Warwick-Edinburgh Mental Well-being Scale. User guide. Version 1. NHS Health Scotland. Retrieved from http://www.mentalhealthpromotion.net/resources/user-guide.pdf

Thomas, N., & O’Kane, C. (1998). The ethics of participatory research with children. Children & Society, 12, 336–348.

UK Government (2003). The Green Book: Appraisal and Evaluation in Central Government. London: HM Treasury.
