
Article

American Journal of Evaluation
2021, Vol. 42(2) 201-220
© The Author(s) 2021
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1098214020910265
journals.sagepub.com/home/aje

Capturing the Added Value of Participatory Evaluation

Erica L. Odera1

Abstract
Narrative case studies have shown that, when people are involved in an evaluation of a program they
are part of, it can change how they experience the program. This study used a quasi-experiment to
test this proposition empirically in the context of a participatory action research curriculum called
Youth as Researchers. Half of all Youth as Researchers groups engaged in a participatory evaluation
(PE) of their program experience through writing reflective essays, creating their own evaluation
questions, and conducting peer interviews. The other half served as control groups and did not
engage in the PE activities. Pre-/posttest surveys and focus group data were used to assess
differences between the experimental and control groups. Study results show that participants in the
experiment had important differences in their experiences in the program as a result of participation
in the evaluation. Implications for future practice and research are also explored.

Keywords
participatory evaluation, process use, youth participatory action research

Participatory evaluation (PE) has captured the attention of the evaluation community in the past
several decades. Often demarcated as fulfilling practical or transformational purposes (Brisolara,
1998; Cousins & Whitmore, 1998; Mertens, 2009), PE scholarship has been recently evolving, and
we now know more about its context, process, and consequences (Cousins & Chouinard, 2012).
To date, more focus has been given to the impact of PE on evaluation use than to direct
changes in those who engage in PE (Brandon & Fukunaga, 2014; Johnson et al., 2009). As a result,
the existing literature offers more understanding of the presumed outcomes of involvement in
PE than of measured outcomes.
One reason for this is that studying the effects of PE is pragmatically challenging. Political
feasibility, willingness of stakeholders to participate, and a program’s comfort with engaging in
participatory processes are all barriers to carrying out PEs. One place to begin studying
these measured outcomes is within contexts where participatory approaches to evaluation
work are readily accepted. The following is a research study embedded within a program evaluation.
The research study examines whether outcomes among program participants differ if they are

1 Feed the Future Innovation Lab for Livestock Systems, University of Florida, Gainesville, FL, USA

Corresponding Author:
Erica L. Odera, 2250 Shealy Drive, Gainesville, FL 32601, USA.
Email: ericaodera@gmail.com

involved or uninvolved in PE activities. The evaluand at the focus of the program evaluation is an
action research-inspired curriculum called Youth as Researchers. This study adds to the literature by
explicitly measuring changes in participants as a result of engagement in PE, while heeding the
call by Cousins and Chouinard (2012) to diversify the methodological approaches used to study
this topic.

Participatory Evaluation (PE)


PE is a unique approach to program evaluation in which program participants are invited into the
evaluation space (Cousins & Whitmore, 1998; Weaver & Cousins, 2004). It invites participants to
take part in the defining, questioning, and analysis of their own program experiences. PE has been
described in two ways, as being either practical in nature or transformative in nature (Brisolara,
1998; Cousins & Chouinard, 2012). Pragmatically, PE can increase organizational buy-in and
capacity to understand and carry out evaluation activities (Cousins & Whitmore, 1998). As a result,
PE can foster evaluation use (Brandon & Fukunaga, 2014; Fleischer & Christie, 2009; Johnson et al.,
2009; Roseland et al., 2015).
In many cases, PE invites stakeholders to discuss their experiences in settings where they may not
have been typically offered a voice in decision-making, which can be a transformational experience
(King et al., 2007; Mertens, 2009). PE has also experienced criticism as an undefined approach
(Gregory, 2000) which faces the logistical challenges and power dynamics that any participatory
work requires from stakeholders (Cullen et al., 2011). Even so, it is typically considered to have
positive benefits (Chouinard, 2013; Cousins & Chouinard, 2012) of which some have been coined
“process use.” These are the skills, interactions, and effects on individuals and organizations
resulting simply from participating in the evaluation process (Patton, 2008); they are independent of
the quality of the program itself. For instance, participants may gain new attitudes, knowledge, and
skills during their engagement in the evaluation (Henry & Mark, 2003; Patton, 2008). While the
importance of understanding process use has been discussed (Cousins & Chouinard, 2012; Henry &
Mark, 2003) and some studies have explicitly focused on its effects (Jacob et al., 2011; Shaw &
Campbell, 2014), this is an area in need of more empirical study. This study specifically focuses on
capturing these process-related effects of PE.

Youth Participatory Evaluation (YPE)


This study explores the effects of PE with young adults. Therefore, it is important to mention the
special attention paid to YPE as an approach to program evaluation, which heavily involves young
people in the design, data collection, interpretation, and reporting of the programs in which they
are involved (London et al., 2003; Sabo Flores, 2007). The philosophical rationale is that young
people are both capable of and should be involved in the assessment of programs designed to
influence their lives (Sabo Flores, 2007). The distinction between practical and transformational
PE is also less salient when working with young people because they are often less involved in
program decisions (Arnold et al., 2008; Sabo, 2003). Therefore, YPE may naturally be both
practical and transformative. YPE has also been an area in which more research about PE effects
has been conducted (see Table 1).

Youth Participatory Action Research (YPAR)


The evaluand, which sets the context of this study, is a curricular program inspired by participatory
action research (PAR). PAR takes a critical stance toward top-down research and believes that
subjects of research should be involved in the production of research in a way that improves their

Table 1. Summary of Overlapping YPE and YPAR Outcomes.

Outcome Description Literature

Individual outcomes
Critical thinking and sociopolitical Morsillo and Prilleltensky (2007); Watts and Flanagan (2007);
development Watts et al. (1999); Watts et al. (2003); Bautista et al. (2013);
Foster-Fishman et al. (2010); Ozer (2017); Kirshner et al.
(2011); and Checkoway and Richards-Schuster (2003)
Academic achievement and professional Cabrera et al. (2014); Cammarota and Romero (2006); Romero
skills et al. (2009); Checkoway and Richards-Schuster (2003); Dolan
et al. (2015); Rogers et al. (2007); Lau et al. (2003); London et al.
(2003); and Sabo Flores (2007)
Identity formation Torre (2009); Torre et al. (2008); Strobel et al. (2006); Tuck et al.
(2008); Dutta (2017); Abo-Zena and Pavalow (2016); London
et al. (2003); and Sabo (2003)
Confidence and self-expression Morsillo and Prilleltensky (2007); Warenweiler and Mansukhani
(2016); Lau et al. (2003); Sabo (2003); and Gomez and Ryan
(2016)
Relational outcomes
Social and professional networks Sabo Flores (2007); Mitra (2005); London et al. (2003); Rubin and
Jones (2007); Dolan et al. (2015); and Powers and Tiffany (2006)
Connections to other youth Price and Mencke (2013); Vaughan (2014); Strobel et al. (2006);
and Quijada Cerecer et al. (2013)
Improved adult perceptions of youth Ozer and Wright (2012); Livingstone et al. (2014); Strobel et al.
(2006); and Ozer et al. (2013)
Stronger youth–adult partnerships Ozer and Wright (2012); Checkoway and Richards-Schuster
(2003); Powers and Tiffany (2006); and London et al. (2003)
Organizational/community outcomes
Youth perspectives considered in Chen et al. (2010); Zeldin et al. (2008); Shamrova and Cummings
organizational decisions (2017); Lau et al. (2003); and Ucar et al. (2017)
Increased data use in organizational Ozer and Wright (2012) and Lau et al. (2003)
decisions
Increased organizational team building Dolan et al. (2015); Powers and Allaman (2012); Powers and
Tiffany (2006); Sabo Flores (2007); Gildin (2003); and Cooper
(2014)
Increased youth involvement, Dolan et al. (2015); Bautista et al. (2013); Shamrova and
interaction, and voice in Cummings (2017); Morsillo and Prilleltensky (2007); Arnold
community life et al. (2008); Powers and Tiffany (2006); Ozer and Wright
(2012); Tuck et al. (2008); Vaughan (2014); Cammarota and
Fine (2008); and Kirshner et al. (2011)
Note. YPE = youth participatory evaluation; YPAR = youth participatory action research.

lives. YPAR applies this perspective to settings in which youth are the dominant stakeholders
(Cammarota & Fine, 2008; Flicker et al., 2008; Ozer et al., 2010; Rodriguez & Brown, 2009). The
goal of YPAR is to empower youth to create their own narratives about their lives while becoming
agents of change and knowledge producers in the communities where they live (Burke & Greene,
2015; Schensul & Berg, 2004).
Proponents of YPAR argue that it is both good scholarship and good practice when working
with young people (Ozer, 2017). It can provide better understanding of social issues facing
youth, since they are the ones actively engaged in defining issues, gathering data, and inter-
preting results, and this can increase the relevance of topics explored (Abo-Zena & Pavalow,
2016; Cook & Krueger-Henney, 2017; Quijada Cerecer et al., 2013). YPAR has also been
argued to be good practice. YPAR can create space for young people to question and critique
community life in ways that can improve the relevancy of youth programming (Abo-Zena &
Pavalow, 2016), while simultaneously helping to develop skills youth can use beyond the program.

Similarity of Impacts of YPE and YPAR


Beyond conceptual similarities, the empirically supported effects of involvement in YPE and YPAR
are also similar. The author applied this natural synergy to design a research study within an
evaluation. This allowed for a single set of instrumentation and data collection to gather information
on (1) expected outcomes of involvement in a YPAR curriculum and (2) outcomes of additional
involvement in a PE of the curriculum. Table 1 displays relevant literature which points to the
overlapping outcomes that YPE and YPAR can have on young people, their relationships, organi-
zations, and communities.

Study Design
The purpose of this article is to describe the results of a research study embedded within a program
evaluation. The purpose of the research study was to determine whether being involved in a PE
during the curricular program caused any unique changes in the outcomes participants experienced.
The following section describes the program evaluation and the data collection procedures used to
gather information about the effects of the PAR curriculum on participants. The section to follow
will detail the research study by describing the study’s design and the PE activities which made up
the focus of the research. The data collection procedures were the same for the program evaluation
and the research study and consisted of pre-/posttest surveys and focus groups.

The Evaluation: The Youth as Researchers Program


The context of this study was the delivery of a YPAR-inspired curriculum called Youth as Research-
ers. Developed by the Child and Family Research Centre at the National University of Ireland
(NUI), the intention of the curriculum is to teach youth to conduct youth-led community action
research (NUI Galway, 2016). The program trains youth across six different modules covering key
research design and methodology content areas. The goal of the program is to be an accessible
resource for existing youth organizations interested in supporting youth-driven social change
research. The typical age for these participants ranges from 12 to 18.
In 2018, this program was delivered for the first time to a new population—American college
students from Pennsylvania State University. Beyond the older age of participants, this was the first
time the program was delivered outside of a youth organization. The main purpose of the evaluation
was to capture any changes in participants resulting from the curriculum. A secondary purpose was
to use the evaluation artifacts (the survey and focus group protocol) to create a data collection
structure that would allow for comparisons of the program in different contexts.

Program Delivery
As noted above, this was the first time the program was delivered with an older audience outside of a
youth group setting. To recruit university students, fliers were created and emails were sent to all
academic college listservs throughout the university. Students who expressed interest were required
to fill out an application in which they stated their reasons for wanting to join the program and the
specific topics they were most interested in researching. As a result, initial groups were formed
based on student interests. The purpose of doing so was to reduce the amount of time it would take
students to agree on a topic and to align with the natural clustering effect that is common in

Table 2. Youth as Researchers Group Topics.

Group #  Initial Topical Grouping From Application    Eventual Topic Chosen by the Group
Group 1  Racial injustices                            Students’ perceptions of diversity-related classes at Penn State
Group 2  Environment and sustainable development      Food insecurity on campus
Group 3  Campus social issues                         Food insecurity on campus
Group 4  Discrimination based on religion or culture  Immigrant students’ perceptions of free speech on campus
Group 5  Women’s rights and sexuality                 Self-described identities of Penn State students
Group 6  Educational inequality and human rights      Perceptions of sustainability programs and recycling at Penn State
Group 7  Access to educational resources              Penn State women’s perceptions of empowerment

community settings and youth groups (Breau & Brook, 2007; Christens & Perkins, 2008; Gehrke,
2014). Seven distinct groups were created based on unique thematic interests of the potential
members. Once these tentative seven groups were created, a large meeting with all students was
held. The program team explained the history of the program, its purpose, and program logistics.
Afterward, students were given time to meet with their preselected groups to discuss potential
interests. While the groups were initially assigned based on stated interests, these interests were
not shared with the students to allow them the free choice to select any range of topics they might be
interested in pursuing, a central element of the program’s philosophy. Once initial topics were
chosen, students were given permission to switch groups if they desired. The initial thematic topic
used to group students and each group’s final project topic are listed in Table 2.
Group sizes ranged from 7 to 12 participants, with an average of 10 in each group. This was done
to align with the sizes of past youth groups and to accommodate potential attrition. The group-based
research training is the core element of the program. The purpose of the training is to equip groups
with exposure to how to design and carry out social science-related research. Topics covered
included conducting background research on a topic, creating a research question, research design,
research ethics, and disseminating findings in a way which can lead to change. Adaptations were
made from the original curriculum to accommodate the older age of participants, their higher
education level, and the fact that most did not already know one another prior to the training. An
active learning approach to the training was used to help increase retention of the information,
engage members of groups with one another, and provide intellectual challenge. The trainings took
place in a single 3-hr session for each group. Members received training alongside their group members. A summary
of each training was written up by the researcher and given to every group the week after their
training concluded. Table 3 displays a summary of common training activities.
Once training was completed, each group solidified its research topic and question, conducted
all research on its own, and was responsible for self-organizing weekly meetings among
members. These aspects of youth-led work and creative freedom are important components of YPAR
(Rodriguez & Brown, 2009). Once research was completed, each group also completed a final
product of their choosing to display and share their findings. Weekly check-in meetings on campus
were held starting the week after training for each group to speak face-to-face with program support
staff about challenges or issues they were facing as well as general updates. At the midway point of
the program, program staff attended a face-to-face meeting with every group to discuss their
progress and to help all groups solidify their research question and research design.

Table 3. Summary of Common Youth as Researchers Training Activities.

Summary of Common Training Activities

Five-min policy problem question: Ask participants how they would address a given issue and what steps they would
take to inform a policy maker about it. Have them use visual displays to present their plans, present the
strengths and weaknesses of the plans, and vote on the best plans using sticky notes.
Group bonding: Ask participants to describe (1) themselves using one word, (2) their strengths and weaknesses,
and (3) their life story in five words.
Unscramble the research process: Scramble the stages of the research process and ask groups to work together to
put the stages in order and explain their rationale.
Creating measurable goals for social change: Ask groups to create social goals that are measurable (i.e., SMART
goals) using only 10 words. Describe the difference between vision statements, goals, audience, and
dissemination method.
Discussion of research methods: Ask participants to describe what they know about qualitative and quantitative
methods. Explain the pros and cons of each type of research method. Facilitate a discussion on ethical
considerations in social science research.
Types of research: Use flip charts to distinguish differences between primary and secondary research and what
types of methods can be used to collect data for each type.

At the conclusion of the program, groups’ final products were displayed at a public exposition at
Pennsylvania State University. The purpose of the exposition was to share students’ experiences,
procedures, and results with other Youth as Researchers participants and public attendees. This
event served primarily to foster reflection on the process among participants, with some calls to
action and recommendations given, core elements of YPAR (Rodriguez & Brown, 2009; Schensul &
Berg, 2004). Products included videos, a poster, photographs, and an infographic. One group was
invited to present their findings at the Pennsylvania state capitol at an undergraduate research
exhibition the week after this exposition. Descriptions of the projects and exposition can be found
at https://agsci.psu.edu/unesco/the-penn-state-unesco-youth-as-researchers-program/2018-youth-as-researchers

Survey Design
Data collection for the evaluation occurred through two sources: a pre-/posttest survey and focus
groups with each Youth as Researchers group. The pre-/posttest survey allowed for an approximate
measure of change in participants as a result of the program and supports the quasi-experimental
design of the research study described in the next section (Boruch et al., 1998; Cook et al., 2008;
Greeno, 2002). A core goal of the evaluation was to understand the effects of the program on
participants while also creating a structure for future programs delivered in other locations to have
comparable information. As a result, a broad survey instrument was created which aligns with
common outcomes of participation in YPAR from literature summarized at the beginning of this
article (see Table 1). The survey was designed to capture variables related to individual, relational,
and community-focused concepts. Decisions were made about instrumentation to reflect existing
data collection procedures within NUI Galway, the institutional originator of the program. There-
fore, whenever possible, instrumentation was chosen to be synergistic with existing program data
collection at NUI Galway in order to facilitate its use for decision-making within the institution
(Patton, 2008). Specifically, the measures for civic engagement self-efficacy, empathy, and social
support were chosen to be congruent with instrumentation used by the Centre in order to increase the
data’s ability to be used for decision-making and learning.
This survey examined seven conceptual areas: (1) self-efficacy—research related and civic-
engagement related, (2) empathy, (3) critical consciousness, (4) social support—among peers and
older adults, (5) the ability to influence others, (6) community attachment, and (7) value of
youth participation—within both youth- and community-wide organizations. Detailed informa-
tion about the instrumentation and a list of survey items are summarized in Supplemental
Appendix A. Instruments developed by Flanagan et al. (2007); Thomas et al. (2014); Davis (1983);
Cutrona & Russell (1987); Theodori (2004); and Schulz et al. (2011) were used in the
survey.
While the majority of the variables used in the study are previously validated scales, for the
purposes of the research study presented here, each was treated as a summated index, and these
indices were further summated into broad socioecological categories at the individual, relational, and
community levels. The purpose of doing so was to facilitate broad comparisons among all partici-
pants at these three socioecological levels rather than to focus on any one variable.
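The mechanics of this scoring approach can be illustrated with a short sketch. The item and scale names below are hypothetical placeholders (the actual survey items appear in Supplemental Appendix A); the sketch only shows how items are summated into scale indices and how scales are collapsed into socioecological categories.

```python
# Hypothetical item responses for one participant, grouped by scale.
# Scale names and item values are invented for illustration only.
responses = {
    "research_self_efficacy": [3, 4, 4],  # individual-level scale
    "peer_social_support":    [2, 3, 4],  # relational-level scale
    "community_attachment":   [4, 5, 3],  # community-level scale
}

# Step 1: treat each validated scale as a summated index.
scale_scores = {scale: sum(items) for scale, items in responses.items()}

# Step 2: collapse scale indices into broad socioecological categories.
categories = {
    "individual": ["research_self_efficacy"],
    "relational": ["peer_social_support"],
    "community":  ["community_attachment"],
}
category_scores = {
    category: sum(scale_scores[scale] for scale in scales)
    for category, scales in categories.items()
}
```

Mean change is then the posttest category score minus the pretest category score, averaged within the experimental and control groups.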

Focus Groups
Focus group discussions were the second key form of data collection for the evaluation. Focus
groups allowed for a deeper understanding about the group-level dynamics and gave group
members a space to express agreement, disagreement, or reflect on their experience (Hesse-
Biber & Leavy, 2006; Stewart et al., 2007). Focus groups are often a recommended source of
explanatory information in quasi-experimental research (Rubin & Babbie, 1997). For each of the
seven groups who participated in the Youth as Researchers program, a 1-hr focus group discussion
was held at the conclusion of their program experience. The moderator’s guide was designed to
ask questions about the group’s relational dynamics, problem-solving abilities, the perceptions of
youth-led nature of the program, lessons learned, and whether the groups would recommend the
program to their peers. Only those groups which were part of the PE experiment (described in the
next section) were asked whether their participation in these additional meetings and activities
changed anything about their experience in the program. Each focus group session was audio
recorded and later transcribed.
After the focus groups were transcribed, coding of key findings took place in
seven stages, with codes narrowed over three cycles. The author was the single coder due to
pragmatic constraints, and three accommodating strategies suggested by Saldaña (2013) were used
to combat the potential bias of having a single coder. First, member-checking during the focus
groups occurred at the conclusion of the focus group discussion. Second, detailed reflective notes
were created by the researcher immediately following the conclusion of each focus group in order to
record and capture the interpersonal dynamics of each focus group and were referenced later during
the coding process. Third, the author transcribed each audio recording and engaged in initial coding
during the transcription process. To protect the identity of the focus group participants, pseudonyms
have been used in place of participants’ real names.

The Research Study


Embedded within the program evaluation was a research study. The purpose of this research study
was to understand whether involving some groups in PE (compared to the standard evaluation
procedures described above) would change the overall outcomes of participants’ experience in the
program. Four of the seven Youth as Researchers groups were randomly chosen to participate in the
experiment and the other three groups served as controls. All members of the four randomly chosen
groups were given written letters personally inviting them to participate in the experiment (n = 40).
The experimental treatment consisted of three stages of evaluation meetings and two online
activities. Those who participated in these meetings interacted both with members they already
knew from their own group and with members from other groups. Each of the three workshop
stages was run as three separate meetings, so that each meeting had approximately 8–10 participants.

Table 4. Summary of Experimental Activities.

Experimental Activity Description

Meeting 1: Brainstorm session Participants brainstormed topics appropriate for the evaluation of the
of evaluation topics program and created draft evaluation questions.
Meeting 2: Creating questions Participants were given the entire list of questions created from Meeting 1
and worked together to narrow down any redundancies. Participants
then worked together to notice what topics and questions were missing
and drafted new questions.
Online 1: Voting and personal Participants voted on their preferred evaluation questions from Meeting 2
reflection and wrote reflections about what they had learned about evaluation
from the first meeting.
Online 2: Personal reflection Participants wrote about what they had learned about themselves and their
about the program group as a result of both the evaluation work and the program.
Meeting 3: Peer interviews Participants interviewed a peer that was not part of their regular Youth as
Researchers group. Debriefing took place after the interviews in a group
format.

Observational notes, audio recordings, and reflective writings by the researcher were taken at each
meeting (a total of nine different meetings).
The first experimental activity was a brainstorming session in which participants were invited to
give lists of pros and cons of key elements of the program (recruitment, training, and creating a
research topic) and asked to create evaluation questions for each of these three areas in small groups.
During the second meeting, participants were asked to take the entire list of all evaluation questions
brainstormed from the first meetings, narrow down the existing questions, and to consider what
topics and questions ought to be added that were not represented in the list that the group felt was
important.
After the second meeting, participants completed two individual activities online. The first
combined voting and personal reflection. Participants voted on their favorite evaluation questions
from the first two in-person activities. They also wrote responses to reflective prompts about their
experience helping to craft the evaluation questions. The second online activity asked participants to
consider and write about what they had learned and experienced in the Youth as Researchers
program up to that point.
The third and final meeting was a peer interview in which participants were paired with a member
who was not part of their program group. Participants used an interview guide created from the
results of the voting exercise. Interview notes were taken by hand by the participants and therefore
could be anonymous (no name or identifying information was collected) in order to encourage
candid conversations among peers. Immediately after the interviews, the author invited participants
to debrief and share whatever they wished with the author and the other participants in a roundtable
fashion. For a summary of the experimental activities, see Table 4.

Study Results
Quantitative Results
The results of this study combine survey and focus group data collected as part of the
“traditional” evaluation described above. For the survey instrument, all but two scales were
previously validated and tested. The remaining two were created by the author (research self-efficacy
and ability to influence others; see Supplemental Appendix A for a detailed list of
survey items). All scales showed acceptable reliability, with Cronbach’s α ranging from

Table 5. Mean Change Results of Quantitative Findings.

Individual                Mean Change   SD      N
All participants          5.20          1.33    45
Control groups            3.20          11.96   15
Experimental groups       6.20          6.91    30

Relational                Mean Change   SD      N
All participants          1.81          .85     57
Control groups            1.37          6.48    19
Experimental groups       2.03          6.49    38

Organizational/Community  Mean Change   SD      N
All participants          1.43          1.08    56
Control groups            0.79          10.06   19
Experimental groups       2.57          6.78    37

Figure 1. Mean Change Results of Quantitative Findings.

α = .744 to α = .843, exceeding the .70 cutoff often used to indicate acceptable reliability
(Nunnally, 1978). However, rather than treating each scale as an individual mean, variables
were summated across the three levels of focus of the study: individual, relational, and com-
munity/organizational levels. After collapsing the variables into these three categories, mean
changes in each overall category were compared for the 40 experimental group members and
the 26 control group members. Given the small number of participants in the study, only
descriptive comparisons were made.
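For readers unfamiliar with the reliability statistic reported above, Cronbach's α can be computed from the item-level variances and the variance of the summated totals. The sketch below uses invented responses, not the study's data:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding all respondents' answers."""
    k = len(item_scores)
    # Sum of per-item variances (same ddof throughout, so the ratio is unaffected).
    sum_item_var = sum(pvariance(item) for item in item_scores)
    # Per-respondent totals across all items of the scale.
    totals = [sum(answers) for answers in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Three hypothetical items answered by five respondents.
items = [
    [3, 4, 3, 5, 4],
    [3, 4, 2, 5, 5],
    [2, 4, 3, 4, 4],
]
alpha = cronbach_alpha(items)  # higher when items co-vary (internal consistency)
```

Values above the conventional .70 cutoff, such as the .744–.843 range reported here, are usually read as acceptable reliability.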
When variables were collapsed into their respective socioecological categories, the experimental
participants had higher mean change differences among all three categories (see Table 5). The
experimental group had nearly 2 times higher mean changes for the individual-level category, 1.5
times higher mean changes for the relational-level category, and approximately 3 times higher mean
changes for the organizational-/community-level category (see Figure 1). These variations can be
further understood through results from the focus group discussions.
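The multipliers quoted above can be reproduced directly from the Table 5 mean changes; the short check below uses the reported values:

```python
# Experimental vs. control mean changes, as reported in Table 5.
mean_change = {
    "individual": {"control": 3.20, "experimental": 6.20},
    "relational": {"control": 1.37, "experimental": 2.03},
    "community":  {"control": 0.79, "experimental": 2.57},
}

# Ratio of experimental to control mean change at each level.
ratios = {
    level: values["experimental"] / values["control"]
    for level, values in mean_change.items()
}
# individual: ~1.9x ("nearly 2 times"); relational: ~1.5x;
# community: ~3.3x (reported as approximately 3 times)
```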

Qualitative Results
The focus group results are presented at each socioecological level. First, themes common to all
groups are mentioned; then themes unique to the experimental and control groups are described.

Individual-Level Results
Similar individual themes. All groups, whether in the experimental or control groups, discussed how
participation in the program fostered self-reflection and learning. The program led them to reflect
upon their personal strengths, weaknesses, and leadership styles. Engaging in research also taught
them to challenge their assumptions about their group’s topic and exposed them to other perspec-
tives. They learned about the realities of conducting social science research such as feasibility and
the unpredictability of studying humans. All groups also discussed how they reflected upon their
personal- and group-level responsibility and motivation to complete the research. Groups discussed
their own motivations for participating in the program and recommended the program to highly
committed and motivated students. They also felt that the nature of the group-driven research
benefited them personally through developing discipline.
Andrea: I think it taught discipline in a different way than college, than we typically think of college.
We come to college we think, you know, we learn discipline of like, you know, doing your
own laundry and things of that sort, but it taught discipline in an academic form. (Control
Group Member)

Divergent individual themes. The control groups engaged in self-reflection that was more negative in
nature and more focused on their individual shortcomings. For instance, they discussed how they
would hold themselves more individually accountable in the future if participating in a similar
program. They also reflected on elements of their individual personalities and leadership styles,
which both hindered and helped their group work. The control groups discussed procedural learning
about the research process, such as how small questions can build incrementally to larger results.
The experimental groups, in contrast, engaged in deeper, more nuanced, and more positive self-
reflection and learning. Participation in the evaluation entailed more engagement with research-
related tasks, and experimental group members mentioned that the reflective exercises and creation
of questions required them to think about their own experience in the program and to remember their
group’s goal for social change. Experimental groups discussed more nuanced learning about the
research process, such as learning how to craft strong survey questions, how to interview peers, and
the distinction of qualitative versus quantitative data.
Delia: I think [the evaluation] made us more self-aware of what we were doing and like, how we
were working in our group. So, in that regard it was definitely good to help our group
progress in a more efficient manner. (Experimental Group Member)
Timothy: I wasn’t thinking about that but when she mentioned it is actually true. You can get trained
into thinking everything has to have numbers, you have to be able to measure anything. Just
doing those interviews made me realize that, you know, there’s more to people than just the
categories and how you can operationalize everything that they do. There’s still just the
intangible that you might not be able to capture in quantitative data. So yeah. (Experimental
Group Member)

Relational-Level Results
Similar relational themes. All groups discussed how this experience required working in a group and
managing group dynamics. Groups reflected on their appreciation of group relationships, diversity
of thought within their groups, the development of friendship, and respect among group members.
Many expressed an appreciation of the chance to meet and work with like-minded students who they
might not have met otherwise. Despite these appreciative elements, groups also discussed the
logistical challenges of managing their group, in particular the management of group communica-
tion and regular meeting times. Some groups were able to effectively infuse governance strategies
that worked well for their group, while others struggled to maintain meeting times, which stifled their
development. As a result, groups had mixed emotions about the hands-off approach of the program.
Groups both appreciated the space and freedom to develop their own maturity, responsibility, and
group self-organization while recognizing that the ability to organize their groups was a challenge
which not all groups successfully achieved.

Sheila: Yeah, sometimes in like, group projects for classes, you always have one or two people
who don’t pull their weight or don’t show up to meetings. So, it was nice to be in a group
that was consistently showing up and doing their part. (Experimental Group Member)
Catherine: Yeah, I did really like the hands-off approach because it’s something we don’t really get a
lot of opportunity to do, and it was a good way to be kind of be introduced to it. Um, but, I
also think our group could have benefited, I don’t know, maybe from one more training
session or a little more help, maybe just in the beginning just to help us kind of get started.
Because I know for me in the beginning it was a little overwhelming to try to figure out
how to create this project. But overall, I did like the hands-off approach. (Experimental
Group Member)

Divergent relational themes. The control and experimental groups had different experiences regarding
relational aspects of the program. Control groups discussed the struggles they faced in solving
logistical problems. The three control groups met less regularly with program staff and sought out
less assistance than the four groups that participated in the PE activities; they also met more
infrequently and faced challenges in group communication. As a result, the control groups reflected
upon their groups’ shortcomings and the lessons they would take into future group work, discussing
ways in which they would improve group management in the future. Groups in the experiment
reflected differently upon their group relationships. The PE activities gave these members chances
to interact more regularly with their own members as well as to see and hear the experiences of
other groups. This led to a more positive appreciation of their unique group strengths and also
reassured these groups that their own progress and timeline were similar to those of other groups
in the experiment.

Olivia: Getting to be in the same place at the same time was also a big challenge for us. Because
people would say they are showing up and then 5 minutes before time they wouldn’t show
up. (Control Group Member)
Sophia: I definitely think [the participatory evaluation activities] added to the program because
through it you were able to reflect on what your group’s doing and see what other groups
are doing, and maybe see what they’re doing differently, what’s working for them, what’s
not. And you also take a step back and really appreciate what you are doing . . . . (Experi-
mental Group Member)
Bernard: [The participatory evaluation activities] made me appreciate my group more! (Experimental
Group Member)

Organizational- and Community-Level Results
Similar organizational/community themes. All of the Youth as Researchers groups chose to explore
topics relevant to the university community at Pennsylvania State University. All groups
expressed attraction to the Youth as Researchers program because it represented an unusual
university opportunity. It allowed them to integrate their passions with their personal and
budding professional identities. The program bonded them to the university community as it
allowed them to meet like-minded students also interested in activism work while advancing
the cause of social justice around campus. They also gathered diverse perspectives from other
students around campus during their research. For example, some groups researched how food
insecurity affects students, how different types of students describe personal identity affilia-
tions, and how diversity initiatives at the university are perceived differently by different types
of students.

Nadine: So, when I saw this I immediately jumped at the opportunity. Like social justice research that
is like, unheard of, like, I have never seen that before. (Control Group Member)
Kristin: It seemed like an active way to, um, bring like service activities and things that I did into more
of like, an academic setting. (Experimental Group Member)

Divergent organizational/community themes. Those in the experiment found that the evaluation
experience connected them more strongly to the program. It led to higher use of program resources
and motivated them to continue engaging with the program. Participation in the evaluation appeared
to serve as motivational encouragement for these students and even helped them with logistical
challenges. These groups also expressed appreciation that their input would be used to improve the
program for other students in the future.

Sheila: I think it helped me stay connected to the program. So, I think that these kind of brought the
stakes up a little higher, just in the way I felt that my voice was being heard, and I was
interacting with new people and interacting with you more, kind of made me feel more
included in the entire process. And like, I feel like I was actually contributing a lot to the
program. (Experimental Group Member)
Linda: It made me feel more part of the program too beyond our project which I thought was really
cool. (Experimental Group Member)

Summary of Results
Individually, participants in the experimental groups had a mean change across all summated
individual variables (civic engagement self-efficacy, research-related self-efficacy, critical con-
sciousness, and empathy) nearly 2 times higher than the mean change for the control groups. Also,
while all groups felt their experience in the program led to more self-reflection, learning, respon-
sibility, and motivation, it was the experimental groups which engaged in deeper and more positive
self-reflection and learning.
Relationally, participants in the experimental groups had a mean change across all summated
relational variables (social support among peers, social support of older adults, ability to influence
others) approximately 1.5 times higher than the control groups. All groups learned about how to
work in a group and manage group dynamics while grappling with their mixed emotions regarding
the hands-off nature of the program. However, the experimental groups articulated an appreciation
of their unique group strengths, whereas the control groups’ challenges in solving logistical tasks
led them to consider how they would improve their personal and group strategies for future
group work.
Related to organizational/community aspects, participants in the experimental groups had a mean
change across all summated organizational/community variables (community attachment, value of
youth participation in youth organizations, and value of youth participation in community organi-
zations) approximately 3 times higher than the control groups. While all groups felt the program was
an unusual university opportunity, one that allowed them to engage both their social and intellectual
energies, it was the experimental groups that felt engagement in the PE activities connected
them more strongly to the program. This aided their motivation, group relationships, and logistical
problem-solving.

Discussion
The overarching purpose of this study was to understand whether active participation in a PE
changed groups’ experiences in the Youth as Researchers program. This study used a quasi-
experimental design to empirically capture process-related outcomes that have become more com-
monly discussed in evaluation literature (Cousins & Chouinard, 2012; Henry & Mark, 2003; Jacob
et al., 2011; Patton, 2008; Shaw & Campbell, 2014). The results from this study indicate that the
infusion of PE activities within the Youth as Researchers program had two major effects: (1) it
encouraged deeper reflection and learning, and (2) it fostered groups’ emotional and logistical sense
of connection to the program.
Deeper reflection and learning occurred across individual and relational levels. There were
anticipated individual-level outcomes from participating in the Youth as Researchers program that
have been demonstrated in the literature. Among these were self-efficacy, knowledge and skills about
research, empathy toward others, and critical consciousness (see Table 1). The nature of the program
also meant that learning to work in a group and manage group dynamics was an inherent part of the
program design and theory. Participants in the program all experienced these outcomes, yet groups
that were part of the PE experienced them more intensely and in a more positive and appreciative
manner. These results are congruent with past literature about the process-related effects that PE
can have, particularly with regard to learning and reflection (Cooper, 2018; Jacob et al., 2011;
Shaw & Campbell, 2014; Torres & Preskill, 2001). The PE activities offered
participants a mix of technical support and practice when it came to research-related skills. By
meeting face-to-face three times with other program participants, they also had more social
exchanges and chances to get to know other Youth Researchers while deepening their bonds with
their own group members. Both the YPAR and YPE literatures suggest that youth participation in
research and evaluation can lead to more social connections, networks, and emotional support from
peers (Dolan et al., 2015; London et al., 2003; Mitra, 2005; J. L. Powers & Tiffany, 2006; Price &
Mencke, 2013; Quijada Cerecer et al., 2013; B. Rubin & Jones, 2007; Sabo Flores, 2007; Strobel
et al., 2006; Vaughan, 2014), a proposition supported by this study.
A more surprising outcome of this study was the role the PE activities played in creating an
emotional and logistical connection to the program. Rather than perceiving the additional meetings
as a burden, participants in the experimental activities expressed enjoyment in these meetings
and credited them, in part, with supporting their group’s progress and success in the program. Therefore,
while past literature has cautioned evaluators to consider how participation can potentially lead to
overstretching participants’ time or inadvertently creating an environment for unconstructive criti-
cisms of a program (Orr, 2010), in this study, neither of these concerns came to pass. Rather than
leading participants to feel more critical of the program, the evaluation activities they engaged in
caused them to feel more emotionally invested in the program and the success of their group’s work.
Moreover, the vast majority of the members of the four experimental groups consistently partici-
pated in the in-person and online activities.
Community development scholarship helps to interpret these findings. Community development
often focuses on the importance of regular interactions and emotional-relational connections to
others, built on trust, as a core way to build shared vision and mission toward community betterment
(Mathie & Cunningham, 2003). It is possible that the diverse interactions during the evaluation
meetings (i.e., individuals interacted not just with their project groups, but members from other
groups as well) allowed participants the chance to situate their involvement in the program beyond
completing the tasks of their own group project. Those engaged in the PE activities discussed how
these activities reminded them of their personal and group goals for joining the program. Given that
the purpose of the program was to foster community-level social justice awareness about relevant
local topics, this may help explain the experimental groups’ higher levels of community/
organizational outcomes. The control groups tended to use the regularly provided program support
less often than those in the experimental groups. Therefore, it is possible that the PE activities acted
as an instrument of accountability, motivation, and connection among experimental groups within a
program in which low levels of organizational support led to a loss of motivation for the control
groups. Participation in the evaluation then, particularly in a loosely structured program, appears to
have contributed logistically to the program itself through supporting groups’ problem-solving and
communication. This perceived support and accountability may have aided experimental groups in
persisting through their groups’ motivational ups and downs over the course of the program.
Theoretically, this study strengthens our understanding of the connection of PE to process-related
outcomes of learning, reflection, and motivation. The findings from this study support the proposition
that PE can lead to further reflection by program participants on their experiences and learning
during a program’s life cycle. Moreover, it supports the notion, found in both the YPE and YPAR
literatures, that youth participation in research and evaluation can lead to young adults feeling they
have a voice in their program experience (Chen et al., 2010; Lau et al., 2003; Shamrova &
Cummings, 2017; Ucar et al., 2017; Zeldin et al., 2008), which may have contributed to the
motivation to continue with their projects. This has particular relevance to those working with
programs with community-focused and social justice–focused outcomes, as the motivation to
continue in this program appeared to contribute to groups’ more successful program completion.
This study also demonstrates that infusing the study of PE is logistically feasible in organizations
and programs which are already heavily participatory in nature. These settings may be the “low
hanging fruit” for continued empirical studies of PE processes. Not only is the political feasibility
easier in these contexts, but the need to tailor different instruments to study the process may be
reduced. In the case of this study, the literature base regarding YPE and YPAR already comprised
many overlapping connections such that the instrument designs could be easily tethered together (see
Table 1). Therefore, rather than designing an entirely separate study of the process of PE, this study
capitalized on the natural synergies between YPAR and YPE to structure an indicative quasi-
experimental study exploring whether YPE led to an increased intensity of expected program effects.

Limitations
This study has several important limitations. These include limitations of the context, the experi-
mental procedures and design, and the scope of interpretation. Regarding contextual limitations, the
program was shown to have several weaknesses. While inspired by action research, the program was
primarily a curricular experience with limited outside support beyond weekly check-in meetings
with program staff. Groups worked on their own to identify, research, and present findings on a topic
they felt was important for the university community. Often PAR is done with individuals closely
experiencing situations of injustice (Cammarota & Fine, 2008; Rodriguez & Brown, 2009); in this
case, participants were university students, relatively privileged, with high educational attainment.
Therefore, while the program was inspired by action research, it lacked the long-term involvement,
extensive outside collaboration, and multiple reflective iterations typical of PAR. Interpretations of
this program within the wider PAR literature are thus limited. However, the purpose of this study
was not to advance understanding of PAR in particular but rather of PE.
There were also limitations to the study design itself. The study relied upon two sets of volun-
teers—volunteers to be part of the program and then volunteers to engage in the experiment. Groups
invited to participate in the experiment were randomly chosen, but individuals within groups were
clustered based on their individual topical interests. Rather than attempting a true experiment and
randomly assigning the members of each group, this study prioritized ecological validity, or the
“degree of similarity between the conditions of a simulation experiment and the real-world phenom-
enon that experiment is designed to model” (Breau & Brook, 2007, p. 78). These considerations are
important in community research, especially those utilizing participatory methods (Christens &
Perkins, 2008; Gehrke, 2014). Given the small size of the study and the unbalanced nature of the
experimental and control groups, the quantitative results of the study are best interpreted descrip-
tively, which is why only mean changes among the experimental and control groups were analyzed
(rather than parametric statistical analysis). Finally, instrumentation decisions for the survey itself
were made to be congruent with previous and existing data collection efforts by the institutional
originator of the program, reflecting a utilization-focused decision for the evaluation (Patton, 2008),
which explains why some older instrumentation was incorporated into the survey.
Perhaps most importantly, the interpretation of the results of this study is limited based on the
diversity and intensity of the PE activities themselves. The PE activities went as far as collecting and
debriefing about peer interviews, but deeper analysis and presentation of the PE results were not
undertaken in this study. The activities used here could be considered a minimal version of
PE. Even so, changes in the experimental groups were still seen even in this relatively restricted PE
environment. Finally, this study took a very broad interpretation of the results. While the study used
mixed methods, it did not delve deeply into either the quantitative or the qualitative aspects of the
findings but rather used both to draw a wide picture, indicating that PE involvement did in fact lead
to more positive changes in the experimental groups.

Future Research and Practice
The findings of this study and their limitations point to opportunities for future research. Future
studies could examine the intensity of participation in the evaluation by varying the types of
activities in which groups engage. Participants in the experimental groups in this study all
participated in the evaluation at the same intensity and over the same length of time. Conducting YPE
activities at each individual group meeting or conducting YPE reflections over a longer period of
time would be a useful area to explore to see whether the intensity and diversity of participation is
directly related to the strength of outcomes participants experience. Other studies could explore the
role of the participatory evaluator specifically, by conducting the same activities but with differing
evaluators. Given that I (the author) conducted all PE and focus group work myself, the particular
effects of my personality, facilitation style, and relational dynamic with participants are inseparable
from the PE activities themselves in this study. Future studies could replicate this work with
different evaluators, particularly evaluators who are different from one another (personality types,
age, gender, race, facilitation style).
Another area of future study would be to examine the role of group dynamics more closely,
including how these dynamics alter how groups experience and are changed by PE. Other future
studies could examine the effects of PE in a PAR setting with groups that study the same topic. This
study benefited from the diversity of topics groups chose for their Youth as Researchers projects,
allowing for a broad capturing of both contentious topics (perceptions of free speech on campus,
classroom openness to racial discussions, nontraditional identity support on campus) and those that
were less contentious (ways to alleviate food insecurity on campus, increasing recycling rates on
campus). However, this variability in topic salience and difficulty also meant that certain groups
experienced more logistical challenges.
On a practical note, findings from the evaluation itself pointed to important areas in which the
program could improve. These recommendations include changing the timing of the training to be
intermittent throughout the program rather than a single session at the introduction of the program,
extending the length of the program to an entire school year rather than semester, and having
program staff assist heavily in the logistical arrangement of student meetings. Interestingly, the
PE experimental activities inadvertently helped with some of these logistical challenges (because
the author arranged meetings to carry out the activities): members of groups, when together during
PE activities, would often stay afterward to talk about their projects or arrange their next meetings.
Finally, this study introduced young people to the concept of evaluation, often for the first time.
Even though the young people in this study were university students, most had not encountered the
concept of evaluation outside of their end-of-semester course assessments. Participants throughout
the experimental activities expressed surprise that one can “do research about a program.” As we
look to build future professional evaluators, it is important to consider how we might expose young
people to evaluation as a career option. This study demonstrates that this exposure can be done in
ways that are low cost, strategically interwoven within an educational context, and which lead to
added benefits to the young people who engage. For those in the evaluation profession who study
evaluation methodology, the design of this study hopefully demonstrates that, in a participatory
programmatic context, adding the study of PE can be a strategic way to build PE knowledge.
For funders who may be wary about the use of evaluation resources in participatory ways, this study
provides some empirical support that using PE is not superfluous, but in fact may even add value to
the program experience itself.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or pub-
lication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD
Erica L. Odera https://orcid.org/0000-0001-7906-4928

Supplemental Material
Supplemental Appendix A is available with the online article at http://journals.sagepub.com/home/aje

References
Abo-Zena, M. M., & Pavalow, M. (2016). Being in-between: Participatory action research as a tool to partner
with and learn about emerging adults. Emerging Adulthood, 4(1), 19–29.
Arnold, M. E., Dolenc, B., & Wells, E. E. (2008). Youth community engagement: A recipe for success. Journal
of Community Engagement and Scholarship, 1(1), 56.
Bautista, M. A., Bertrand, M., Morrell, E., Scorza, D., & Matthews, C. (2013). Participatory action research and
city youth: Methodological insights from the Council of Youth Research. Teachers College Record,
115(10), 1–18.
Boruch, R., Solomon, P., Draine, J., DeMoya, D., & Wickerman, R. (1998). Design-based evaluation: Process
studies, experiments, and quasi-experiments. Scandinavian Journal of Social Welfare, 7, 126–131.
Brandon, P. R., & Fukunaga, L. L. (2014). The state of the empirical research literature on stakeholder
involvement in program evaluation. American Journal of Evaluation, 35, 26–44. http://doi.org/10.1177/1098214013503699
Breau, B. L., & Brook, D. (2007). “Mock” mock juries: A field experiment on the ecological validity of jury
simulations. Law & Psychology Review, 31, 77–92.
Brisolara, S. (1998). The history of participatory evaluation and current debates in the field. In E. Whitmore
(Ed.), Understanding and practicing participatory evaluation: New directions in evaluation, No. 80 (pp.
25–41). Jossey-Bass.
Burke, K., & Greene, S. (2015). Participatory action research, youth voices, and civic engagement. Language
Arts, 92(6), 387–400.
Cabrera, N. L., Milem, J. F., Jaquette, O., & Marx, R. W. (2014). Missing the (student achievement) for all the
(political) trees: Empiricism and the Mexican American studies controversy in Tucson. American Educa-
tional Research Journal, 51(6), 1084–1118.
Cammarota, J., & Fine, M. (2008). Revolutionizing education: Youth participatory action research in motion.
Routledge.
Cammarota, J., & Romero, A. (2006). A critically compassionate intellectualism for Latino/a students: Raising
voices above the silencing in our schools. Multicultural Education, 14(2), 16–23.
Checkoway, B., & Richards-Schuster, K. (2003). Youth participation in community evaluation research.
American Journal of Evaluation, 24(1), 21–33.
Chen, P., Weiss, F. L., Nicholson, H. J., & Girls Incorporated. (2010). Girls Study Girls Inc.: Engaging girls in
evaluation through participatory action research. American Journal of Community Psychology, 46(1),
228–237.
Chouinard, J. A. (2013). The case for participatory evaluation in an era of accountability. American Journal of
Evaluation, 34(2), 237–253.
Christens, B., & Perkins, D. D. (2008). Transdisciplinary, multilevel action research to enhance ecological and
psychopolitical validity. Journal of Community Psychology, 36(2), 214–231.
Cook, A. L., Cook, B. G., Landrum, T. J., & Tankersley, M. (2008). Examining the role of group
experimental research in establishing evidence-based practices. Intervention in School and Clinic,
44(2), 76–82.
Cook, A. L., & Krueger-Henney, P. (2017). Group work that examines systems of power with young people:
Youth participatory action research. The Journal for Specialists in Group Work, 42(2), 176–193.
Cooper, S. (2014). Transformative evaluation: Organisational learning through participative practice. The
Learning Organization, 21(2), 146–157.
Cooper, S. (2018). Participatory evaluation in youth and community work: Theory and practice. Routledge.
Cousins, B., & Chouinard, J. A. (2012). Participatory evaluation up close: An integration of research-based
knowledge. Information Age.
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. In E. Whitmore (Ed.), Understanding
and practicing participatory evaluation: New directions in evaluation, No. 80 (pp. 3–23). Jossey-Bass.
Cullen, A. E., Coryn, C. L. S., & Rugh, J. (2011). The politics and consequences of including stakeholders in
international development evaluation. American Journal of Evaluation, 32, 345–361. http://doi.org/10.1177/1098214010396076
Cutrona, C. E., & Russell, D. (1987). The provisions of social relationships and adaptation to stress. In W. H.
Jones & D. Perlman (Eds.), Advances in personal relationships (Vol. 1, pp. 37–67). JAI Press.
Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach.
Journal of Personality and Social Psychology, 44(1), 113–126.
Dolan, T., Christens, B. D., & Lin, C. (2015). Combining youth organizing and youth participatory action
research to strengthen student voice in education reform. National Society for the Study of Education,
114(1), 153–170.
Dutta, U. (2017). Creating inclusive identity narratives through participatory action research. Journal of
Community & Applied Social Psychology, 27, 476–488.
Flanagan, C. A., Syvertsen, A. K., & Stout, M. D. (2007). Civic measurement models: Tapping adolescents’
civic engagement [Circle Working Paper 55]. Center for Information and Research on Civic Learning and
Engagement.
Fleischer, D. N., & Christie, C. A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation
Association members. American Journal of Evaluation, 30, 158–175. http://doi.org/10.1177/1098214008331009
Flicker, S., Maley, O., Ridgley, A., Biscope, S., Lombardo, C., & Skinner, H. (2008). E-PAR: Using technology
and participatory action research to engage youth in health promotion. Action Research, 6(3), 285–303.
Foster-Fishman, P. G., Law, K. M., Lichty, L. F., & Aoun, C. (2010). Youth ReACT for social change: A
method for youth participatory action research. American Journal of Community Psychology, 46(1–2),
67–83.
Gehrke, P. J. (2014). Ecological validity and the study of publics: The case for organic public engagement
methods. Public Understanding of Science, 23(1), 77–91.
Gildin, B. (2003). The All Stars Talent Show Network: Grassroots funding, community building, and partici-
patory evaluation. New Directions for Evaluation, 98, 77–85.
Gomez, R. J., & Ryan, T. N. (2016). Speaking out: Youth led research as a methodology used with homeless
youth. Child Adolescent Social Work, 33, 185–193.
Greeno, C. G. (2002). Major alternatives to the classic experimental design. Family Process, 41(4), 733–736.
Gregory, A. (2000). Problematizing participation: A critical review of approaches to participation in evaluation
theory. Evaluation, 6(2), 179–199.
Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and
actions. American Journal of Evaluation, 24, 293–314. http://doi.org/10.1177/109821400302400302
Hesse-Biber, S. N., & Leavy, P. (2006). The practice of qualitative research. Sage.
Jacob, S., Ouvrard, L., & Bélanger, J. F. (2011). Participatory evaluation and process use within a social aid
organization for at-risk families and youth. Evaluation and Program Planning, 34, 113–123. http://doi.org/
10.1016/j.evalprogplan.2010.08.002
Johnson, D. W., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on
evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30,
377–410. http://doi.org/10.1177/1098214009341660
King, J., Cousins, B., & Whitmore, E. (2007). Making sense of participatory evaluation: Framing participatory
evaluation. New Directions for Evaluation, 114, 83–105.
Kirshner, B., Pozzoboni, K., & Jones, H. (2011). Learning how to manage bias: A case study of youth
participatory action research. Applied Developmental Science, 15(3), 140–155.
Lau, G., Netherland, N. H., & Haywood, M. L. (2003). Collaborating on evaluation for youth development.
New Directions for Evaluation, 98, 47–59.
Livingstone, A. M., Celemencki, J., & Calixte, M. (2014). Youth participatory action research and school
improvement: The missing voices of black youth in Montreal. Canadian Journal of Education, 37(1),
283–307.
London, J., Zimmerman, K. I., & Erbstein, N. (2003). Youth-led research and evaluation: Tools for youth,
organizational, and community development. New Directions for Evaluation, 98, 33–45.
Mathie, A., & Cunningham, G. (2003). From clients to citizens: Asset-based community development as a
strategy for community-driven development. Development in Practice, 13(5), 474–486.
Mertens, D. M. (2009). Transformative research and evaluation. Guilford Press.
Mitra, D. L. (2005). Adults advising youth: Leading while getting out of the way. Educational Administration
Quarterly, 41(3), 520–553.
Morsillo, J., & Prilleltensky, I. (2007). Social action with youth: Interventions, evaluation, and psychopolitical
validity. Journal of Community Psychology, 35(6), 725–740.
NUI Galway. (2016). Youth as researchers. http://www.childandfamilyresearch.ie/cfrc/youth-as-researchers/
Nunnally, J. C. (1978). Psychometric theory. McGraw-Hill.
Orr, S. (2010). Exploring stakeholder values and interests in evaluation. American Journal of Evaluation, 31(4),
557–569.
Ozer, E. J. (2017). Youth-led participatory action research: Overview and potential for enhancing adolescent
development. Child Development Perspectives, 11(3), 173–177.
Ozer, E. J., Newlan, S., Douglas, L., & Hubbard, E. (2013). Bounded empowerment: Analyzing tensions in the
practice of youth-led participatory research in urban public schools. American Journal of Community
Psychology, 52, 13–26.
Ozer, E. J., Ritterman, M. L., & Wanis, M. G. (2010). Participatory action research (PAR) in middle school:
Opportunities, constraints, and key processes. American Journal of Community Psychology, 46(1–2),
156–166.
Ozer, E. J., & Wright, D. (2012). Beyond school spirit: The effects of youth-led participatory action research in
two urban high schools. Journal of Research on Adolescence, 22(2), 267–283.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Sage.
Powers, C. B., & Allaman, E. (2012). How participatory action research can promote social change and help
youth development. The Kinder & Braver World Project: Research Series, No. 2013-10. Berkman Center
Research Publication.
Powers, J. L., & Tiffany, J. S. (2006). Engaging youth in participatory research and evaluation. Journal of
Public Health Management and Practice, 12(6), S79–S87.
Price, P. G., & Mencke, P. D. (2013). Critical pedagogy and praxis with Native American youth: Cultivating
change through participatory action research. The Journal of Educational Foundations, 27(3/4), 85–102.
Quijada Cerecer, D. A., Cahill, C., & Bradley, M. (2013). Toward a critical youth policy praxis: Critical youth
studies and participatory action research. Theory into Practice, 52, 216–223.
Rodriguez, L. F., & Brown, T. M. (2009). From voices to agency: Guiding principles for participatory action
research with youth. New Directions for Youth Development, 123, 19–34.
Rogers, J., Morrell, E., & Enyedy, N. (2007). Studying the struggle: Contexts for learning and identity
development for urban youth. American Behavioral Scientist, 51, 419–443.
Romero, A., Arce, S., & Cammarota, J. (2009). A barrio in pedagogy: Identity, intellectualism, activism, and
academic achievement through the evolution of critically compassionate intellectualism. Race Ethnicity and
Education, 12(2), 217–233.
Roseland, D., Lawrenz, F., & Thao, M. (2015). The relationship between involvement in and use of evaluation
in multi-site evaluations. Evaluation and Program Planning, 48, 75–82. http://doi.org/10.1016/j.evalprogplan.2014.10.003
Rubin, A., & Babbie, E. (1997). Research methods for social work (3rd ed.). Brooks/Cole.
Rubin, B., & Jones, M. (2007). Student action research: Reaping the benefits for students and school leaders.
National Association of Secondary School Principals Bulletin, 91, 363–378.
Sabo, K. (2003). A Vygotskian perspective on youth participatory evaluation. New Directions for Evaluation,
98, 13–24. http://doi.org/10.1002/ev.81
Sabo Flores, K. (2007). Youth participatory evaluation: Strategies for engaging young people (1st ed.). Jossey-
Bass.
Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Sage.
Schensul, J. J., & Berg, M. (2004). Youth participatory action research: A transformative approach to service-
learning. Michigan Journal of Community Service Learning, 10, 76–88.
Schulz, W., Ainley, J., & Fraillon, J. (2011). ICCS 2009 technical report. http://pub.iea.nl/fileadmin/user_upload/Publications/Electronic_versions/ICCS_2009_Technical_Report.pdf
Shamrova, D. P., & Cummings, C. E. (2017). Participatory action research (PAR) with children and youth: An
integrative review of methodology and PAR outcomes for participants, organizations, and communities.
Children and Youth Services Review, 81, 400–412.
Shaw, J., & Campbell, R. (2014). The “process” of process use: Methods of longitudinal assessment in a
multisite evaluation. American Journal of Evaluation, 35(2), 250–260.
Stewart, D. W., Shamdasani, P. N., & Rook, D. W. (2007). Focus groups: Theory and practice (2nd ed.). Sage.
Strobel, K., Osberg, J., & McLaughlin, M. (2006). Participation in social change: Shifting adolescents’
developmental pathways. In S. Ginwright, P. Noguera, & J. Cammarota (Eds.), Beyond resistance! Youth activism
and community change: New democratic possibilities for practice and policy for America’s youth (pp.
21–35). Routledge.
Theodori, G. L. (2004). Community attachment, satisfaction, and action. Journal of the Community
Development Society, 35, 73–86.
Thomas, A. J., Barrie, R., Brunner, J., Clawson, A., Hewitt, A., Jeremie-Brink, A., & Rowe-Johnson, M. (2014).
Assessing critical consciousness in youth and young adults. Journal of Research on Adolescence, 24(3),
485–496.
Torre, M. E. (2009). Participatory action research and critical race theory: Fueling spaces for nos-otras to
research. Urban Review, 41, 106–120.
Torre, M. E., Fine, M., Alexander, N., Billups, A. B., Blanding, Y., Genao, E., Marboe, E., Salah, T., & Urdang,
K. (2008). Participatory action research in the contact zone. In J. Cammarota & M. Fine (Eds.),
Revolutionizing education: Youth participatory action research in motion (pp. 23–44). Routledge.
Torres, R. T., & Preskill, H. (2001). Evaluation and organizational learning: Past, present, and future. American
Journal of Evaluation, 22(3), 387–395.
Tuck, E., Allen, J., Bacha, M., Morales, A., Quinter, S., Thompson, J., & Tuck, M. (2008). PAR praxes for now
and future change: The collective of researchers on educational disappointment and desire. In J. Cammarota
& M. Fine (Eds.), Revolutionizing education: Youth participatory action research in motion (pp. 49–83).
Routledge.
Ucar, X., Planas, A., Novella, A., & Moriche, M. P. R. (2017). Participatory evaluation of youth empowerment
in youth groups: Cases analysis. Pedagogia Social, 30, 63–75.
Vaughan, C. (2014). Participatory research with youth: Idealising safe social spaces or building transformative
links in difficult environments? Journal of Health Psychology, 19(1), 184–192.
Wartenweiler, D., & Mansukhani, R. (2016). Participatory action research with Filipino street youth: Their voice
and action against corporal punishment. Child Abuse Review, 25, 410–423.
Watts, R., Griffith, D., & Abdul-Adil, J. (1999). Sociopolitical development as an antidote for oppression–theory
and action. American Journal of Community Psychology, 27(2), 255–271.
Watts, R. J., & Flanagan, C. (2007). Pushing the envelope on youth civic engagement: A developmental and
liberation psychology perspective. Journal of Community Psychology, 35(6), 779–792.
Watts, R. J., Williams, N. C., & Jagers, R. J. (2003). Sociopolitical development. American Journal of
Community Psychology, 31(1–2), 185–194.
Weaver, L., & Cousins, J. B. (2004). Unpacking the participatory process. Journal of MultiDisciplinary
Evaluation, 1, 19–40.
Zeldin, S., Petrokubi, J., & MacNeil, C. (2008). Youth–adult partnerships in decision making: Disseminating
and implementing an innovative idea into established organizations and communities. American Journal of
Community Psychology, 41(3–4), 262–277.
