
Received: 9 March 2019 | Accepted: 28 August 2019

DOI: 10.1002/pits.22306

RESEARCH ARTICLE

Acceptability assessment of school psychology interventions from 2005 to 2017

Meghan R. Silva1,2 | Melissa A. Collier‐Meek1 | Robin S. Codding3 | Emily R. DeFouw1

1Department of Counseling and School Psychology, University of Massachusetts, Boston, Massachusetts
2May Institute, Randolph, Massachusetts
3Department of Educational Psychology, University of Minnesota, Minneapolis, Minnesota

Correspondence
Meghan R. Silva, May Institute, 41 Pacella Park Drive, Randolph, MA 02368.
Email: msilva@mayinstitute.org

Present address
Meghan R. Silva, May Institute, Randolph, MA
Robin S. Codding, Department of Applied Psychology, Northeastern University, Boston, MA

Abstract
Recommendations from multiple professional organizations (e.g., American Psychological Association, Council for Exceptional Children, National Association of School Psychologists) suggest that collecting data on social validity in practice and research is necessary. The purpose of this study was to systematically review the inclusion of acceptability measurement, which has been one of the most common ways to measure social validity, within the intervention literature published across five school psychology journals between 2005 and 2017. Findings suggested just over one third of intervention studies included acceptability assessment. Intervention studies that were delivered individually, targeted behavior skills, and included treatment integrity data were significantly more likely to include acceptability assessment. When acceptability was measured, it was typically evaluated one time following treatment completion using self‐report tools completed by teachers. Nearly half of studies employed one of seven published tools and the remaining half used researcher‐created measures. The published tools were adapted in a variety of ways and inconsistently reported either item or total scores, making it difficult to summarize these data according to intervention target or delivery format. Implications of findings are described.

KEYWORDS
acceptability, intervention, social validity, treatment integrity


1 | INTRODUCTION

Acceptability is a component of social validity that has been recommended as part of ongoing intervention
evaluation (American Psychological Association [APA], 2002; Council for Exceptional Children [CEC], 2014;
National Association of School Psychologists [NASP], 2010). Assessing for acceptability is one method to determine
if a socially important outcome was achieved (Kazdin, 1977). To better understand if interventions are meeting this
indicator, acceptability should be measured in the intervention literature, though past reviews suggest it is
infrequently reported (e.g., Villarreal, Ponce, & Gutierrez, 2015). This may be because a standardized
approach to the assessment and reporting of acceptability continues to be absent from many applied research
journal guidelines (Callahan et al., 2017; Roach, Wixson, Talapatra, & LaSalle, 2009) despite recommendations to
include acceptability in professional practice and research. Without a standardized approach to the assessment and
reporting of acceptability, it is unclear how practitioners and researchers report and interpret acceptability
findings. To provide a more comprehensive understanding of how researchers measure and report acceptability
assessment in intervention research, we conducted a systematic review of all school‐based intervention studies
across five school psychology journals from 2005 to 2017.

2 | DEFINING ACCEPTABILITY

Acceptability was initially conceptualized as a construct within social validity and refers to how an individual judges
the procedures of a treatment or intervention to be appropriate, fair, reasonable, or intrusive (Finn & Sladeczek,
2001; Kazdin, 1980). For interventions to be considered socially valid, the intervention goals need to be congruent
with societal goals, the procedures need to be socially appropriate and acceptable to the participants, and the
effects of the intervention should be satisfying to the participants (Wolf, 1978). Acknowledging this importance,
multiple professional organizations highlight the critical nature of social validity and, more specifically, acceptability
when it comes to professional practice and research guidelines. For example, APA (2006), suggests all intervention
evaluation should be based on both efficacy and clinical utility, which includes the generalizability, feasibility, and
acceptability of an intervention. Similarly, NASP (2010) proposes assessing for acceptability to be a professional
responsibility that should permeate all aspects of intervention delivery (i.e., planning, implementation, evaluation).
Further, CEC (2014) suggests studies examining the effect of an intervention on student outcomes address two
dimensions of social validity: (a) socially important outcomes (e.g., improved quality of life) and (b) meaningful
magnitude of change in the dependent variable for study participants. CEC recommends subjective evaluation (e.g.,
acceptability assessment) as one way to demonstrate the socially important outcomes.
Kazdin (1977) also suggested subjective evaluation as a primary method for measuring acceptability. Subjective
evaluation has mainly resulted in acceptability assessments consisting of self‐report questionnaires that are
summarized into an overall acceptability score (Eckert & Hintze, 2000; Finn & Sladeczek, 2001). With a diverse pool
of students and stakeholders, perceptions of intervention procedures will likely differ due to variances in
knowledge accumulation, exposure to different experiences, and beliefs about the intervention. The assessment of
acceptability may bring awareness to components that are critical to intervention planning, support,
sustainability, and evaluation (APA, 2002; Cross Calvert & Johnston, 1990).

3 | ACCEPTABILITY AND INTERVENTION IMPLEMENTATION AND OUTCOMES

Collecting stakeholders’ acceptability ratings of interventions and their outcomes has informed understanding of intervention implementation factors.
Several reviews of the acceptability literature, including both analog and naturalistic studies, have been conducted
over the past 30 years (e.g., Carter, 2007; Miltenberger, 1990). Miltenberger (1990) conducted a review of the
acceptability research from the 1980s and found the interventions most likely to be rated as acceptable were those
found to have limited negative side effects, require minimal time to implement, were least restrictive and
disruptive, fit with the orientation of the intervention implementer, considered necessary to improve outcomes,
and perceived to be the most effective options. Carter (2007) expanded the review conducted by Miltenberger and
likewise found that numerous factors influence acceptability ratings, including severity of the problem, type of
treatment, intrusiveness of the intervention, professional affiliation, and professional
expertise. Furthermore, these data suggested factors related to treatments, clients, and raters have all been shown
to influence acceptability ratings. Because of individual differences, each person may place different values on the
aforementioned factors which may lead to variation in acceptability ratings and ultimately intervention integrity,
effectiveness, and use.
Conceptual models of acceptability propose a bidirectional and interdependent relationship between
intervention acceptability, integrity, effectiveness, and use (Eckert & Hintze, 2000; Reimers, Wacker, & Koeppl,
1987; Witt & Elliott, 1985). That is, if an intervention is considered to be acceptable, it is expected to increase the
likelihood of an intervention being used and implemented fully, which in turn is expected to lead to improved
outcomes and even greater acceptability. Studies that have examined the influence of acceptability on treatment
integrity present a more nuanced picture (e.g., Allinder & Oats, 1997; Dart, Cook, Collins, Gresham, & Chenier,
2012; Mautone et al., 2009; Peterson & McConnell, 1996; Sterling‐Turner & Watson, 2002). Some researchers have
found small to moderate, yet significant, positive relationships between acceptability and treatment integrity (e.g.,
Allinder & Oats, 1997; Dart et al., 2012; Mautone et al., 2009). Higher acceptability ratings have been found to
correlate with higher ratings of intervention and assessment implementation and effectiveness with parents
(Reimers, Wacker, Cooper, & DeRaad, 1992) and teachers (Allinder & Oats, 1997; Mautone et al., 2009). However,
other research has not demonstrated a significant relationship between acceptability and treatment integrity (e.g.,
Peterson & McConnell, 1996; Sterling‐Turner & Watson, 2002). Potential explanations for the mixed evidence include
changes in acceptability ratings over time (e.g., Mautone et al., 2009), analog (e.g., Sterling‐Turner & Watson, 2002)
versus naturalistic investigations (e.g., Allinder & Oats, 1997), and exposure to multiple interventions (Dart et al.,
2012). This inconclusive evidence suggests a need for further research.

4 | ACCEPTABILITY ASSESSMENT

To evaluate and better understand the relationship between acceptability and implementation, acceptability must
be assessed and reported. Most intervention research that has included acceptability assessment data utilizes
previously published rating scales (Carter, 2007; Finn & Sladeczek, 2001). Since the 1980s, researchers have
systematically developed acceptability measures to assess perceptions of intervention procedures (Finn &
Sladeczek, 2001). To highlight a few, Kazdin (1980) developed the Treatment Evaluation Inventory (TEI), a 15‐item
measure to evaluate acceptability of interventions to address children’s behavior problems. The Treatment
Acceptability Rating Form (TARF; Reimers & Wacker, 1988), a 15‐item measure, and the TARF‐R (Reimers et al.,
1992) a 20‐item measure, were developed to incorporate numerous dimensions of acceptability (Finn & Sladeczek,
2001). Specific to school‐based interventions, Witt and colleagues developed several measures of intervention
acceptability including the Intervention Rating Profile (IRP; Witt & Martens, 1983), Children’s Intervention Rating
Profile (CIRP; Witt & Elliott, 1985), and Behavior Intervention Rating Scale (BIRS; Von Brock & Elliott, 1987).
Taking a broader view of intervention acceptability, Chafouleas and colleagues developed a suite of acceptability
measures to evaluate adults’ perceptions of interventions (Usage Rating Profile—Intervention Revised [URP‐IR];
Briesch, Chafouleas, Neugebauer, & Riley‐Tillman, 2013), students’ perceptions of interventions (Children’s Usage
Rating Profile [CURP], Briesch & Chafouleas, 2009), and adults’ perceptions of assessments (Usage Rating Profile—
Assessment [URP‐A]; Miller, Neugebauer, Chafouleas, Briesch, & Riley‐Tillman, 2013). Recently, Eckert, Hier,
Hamsho, and Malandrino (2017) evaluated a measure of students’ perceptions of academic interventions (Kids
Intervention Profile). These measures provide researchers with a range of options for assessing acceptability, but
all remain paper‐and‐pencil self‐report questionnaires, despite calls from researchers to more dynamically and
robustly assess this multidimensional construct (Finn & Sladeczek, 2001).
Whichever approach to acceptability assessment researchers employ, it must also be reported in journal articles to
contribute to the broader literature. Before Wolf (1978) advocated for the assessment of social validity, researchers and
readers had been individually responsible for determining the importance of an intervention or strategy based on their
personal perceptions. The concept that the actual consumers of the intervention could provide valuable input about the
intervention was a major shift in how researchers viewed the importance of an intervention (Finney, 1991). However,
research suggests that the acceptability data of consumers continue to be neglected in the literature.
Preliminary data on acceptability assessment within the intervention research have been examined in three systematic
reviews. First, in documenting consultation research published from 1985 to 1995, Sheridan, Welch, and Orme (1996)
reported consumer acceptability and satisfaction was evaluated in 48% of the 46 consultation studies. This level of
acceptability reporting was greater than reporting on the social meaningfulness of the treatment outcomes (37%),
treatment integrity (26%), and generalization (6%) in these same studies. Second, Roach et al. (2009) examined research
articles from four major school psychology journals between 2002 and 2007 and found only 16% of the published
articles included the voices of students in school psychology research. In this review, student acceptability was defined
as the examination of student experiences and perceptions through interviews, surveys, or questionnaires. Third,
Villarreal et al. (2015) looked at the inclusion of acceptability data in intervention research from 2005 to 2014 in six
school psychology journals. Quantitative acceptability data were included in 30.5% of the 243 studies, most often
rated by teachers using a published treatment acceptability instrument. Acceptability was mentioned without
quantitative data in 5.8% of the studies and not mentioned in 60.38% of studies. These reviews provide a broad sense of the
lack of inclusion of acceptability data in school psychology research but provide only limited information about how
acceptability is assessed when it is reported.

5 | PURPOSE

Acceptability is a critical component of evaluating an intervention (APA, 2002; NASP, 2010). The relationship
between acceptability, implementation, and outcomes is not clear, suggesting the need for additional research in
this area (Finn & Sladeczek, 2001; Kazdin, 1980). Although previously conducted reviews present valuable
preliminary information regarding the prevalence of acceptability assessment within the consultation and
intervention literature, questions remain about how intervention studies assess for acceptability, report
acceptability data, and determine whether an intervention was considered acceptable. Answering these questions
requires analyzing the use and measurement of specific acceptability tools, as well as considering the level of
acceptability reported in studies (i.e., how acceptable the participants found the interventions). For example, when a published
rating scale is used to assess for acceptability, determining if the scale was adapted or modified, as well as the
timing of the assessment (e.g., pre, post, pre and post), and how the data were reported (e.g., total score or item
score) will all impact how researchers determine if a socially important outcome was achieved. In addition, it is
valuable to compare characteristics of intervention studies that did and did not include acceptability, to provide
context that clarifies the circumstances under which acceptability data are more likely to be collected. In this way,
the present review of the literature provides a detailed view of current acceptability assessment practices and their
inclusion in the school psychology literature. Research questions included:

● To what extent is acceptability reported in the school psychology intervention literature?


● When reported, how is acceptability documented and what are the mean acceptability ratings?

● What are the study, participant, and intervention characteristics of school psychology intervention studies
generally and those that assess acceptability specifically?
● How do these characteristics vary depending on the inclusion of acceptability assessment?

6 | METHOD

6.1 | Screening procedures


To identify articles, two doctoral students examined five journals from 2005 to 2017 (School Psychology Review,
School Psychology Quarterly, Journal of School Psychology, Psychology in the Schools, and School Psychology International)
using keywords (i.e., intervention, Response to Intervention, academic intervention, behavior intervention, Positive
Behavior Interventions and Supports) in the PsycINFO database. Screening produced 991 articles for potential
inclusion. Abstracts were reviewed to evaluate whether studies included an experiment evaluating an intervention
(i.e., between or within subject group designs or single‐case designs with baseline) and participants 18 years of age
or younger. Two hundred and sixty‐eight articles met criteria.

6.2 | Coding procedures


The 268 included articles were coded for acceptability, study, participant, and intervention variables.

6.2.1 | Acceptability variables


Studies were coded dichotomously (“yes” or “no”) for whether a measurement of acceptability was included in the
article. If acceptability was reported, raters documented the score and coded the article according to type of
evaluation method. If a self‐report measure was used, raters coded whether a published, previously validated
acceptability measure was employed (see Table 1) or the researchers created their own measure. Studies were
coded dichotomously if authors reported: psychometrics for the acceptability measure, whether the tool was
adapted, and reliability if the tool was adapted. The person(s) completing the acceptability evaluation were coded
as teacher, student, and/or parent. Raters coded whether studies reported the total score and/or item scores. For
published measures, the number of participants who completed the measures and scores were recorded. Last,
raters recorded whether acceptability was measured preintervention and/or postintervention.

TABLE 1 Description and use of acceptability measures

| Measure | Description | Studies that included acceptability (n = 108): n | % | Item ratings (n) | Mean item rating | Total ratings (n) | Mean total rating |
| --- | --- | --- | --- | --- | --- | --- | --- |
| The Intervention Rating Profile (IRP‐20) | 20 items (α = .89; Witt & Martens, 1983) | 9 | 8.33 | 4 | — | 4 | — |
| The Intervention Rating Profile (IRP‐15) | Adaptation of the IRP‐20 (α = .98; Martens, Witt, Elliott, & Darveaux, 1985; Witt & Martens, 1983) | 20 | 18.52 | 15 | 5.03 | 7 | 74.23 |
| The Behavior Intervention Rating Scale (BIRS) | 24 items (α = .97; Von Brock & Elliott, 1987). Treatment effectiveness and time to effectiveness are secondary factors (Finn & Sladeczek, 2001) | 21 | 19.44 | 19 | 4.92 | 5 | 120.69 |
| Children’s Intervention Rating Profile (CIRP) | Modification of the IRP. 7 items on a 7‐point Likert scale (α = .75–.89; Turco & Elliott, 1986; Witt & Elliott, 1985) | 21 | 19.44 | 15 | — | 5 | — |
| Treatment Evaluation Inventory (TEI) | 15 items (α = .35–.96; Kazdin, 1980). Adapted to include a 9‐item Short Form (TEI‐SF; α = .85; Kelley, Heffer, Gresham, & Elliott, 1989) | 3 | 2.78 | 2 | — | 1 | — |
| Usage Rating Profile—Intervention Revised (URP‐IR) | 23 items (α = .72–.95) and 4 subscales (acceptability, understanding, feasibility, system climate; Chafouleas et al., 2011) | 3 | 2.78 | 8 | 5.07 | — | — |
| Kids Intervention Profile | 6 items (α = .70; Hier & Eckert, 2014) or 8 items (α = .79–.86; Hier & Eckert, 2016) with a 5‐point Likert scale (Eckert et al., 2017) | 2 | 1.85 | 3 | 3.88 | — | — |
| The Treatment Acceptability Rating Form (TARF) | 15 items (α = .80–.91; Reimers & Wacker, 1988). Adapted, resulting in the 20‐item TARF‐Revised (α = .92; Reimers et al., 1992) | 2 | 1.85 | — | — | 1 | 95.16 |

6.2.2 | Study, participant, and intervention variables


Year of publication and journal were documented. Participant characteristics coded included grade level, sex, race,
disability status, and school type. Intervention characteristics coded included the intervention implementer,
intervention setting, and the intervention delivery format. Dependent variables were coded as one of the following:
academic skills, behavior (e.g., problem behavior, noncompliance), social skills, mental health (i.e., anxiety,
depression, problem solving), engagement (e.g., task engagement, motivation), or other. Treatment integrity was
coded as (a) included quantitative data, (b) described but did not include quantitative data, or (c) not included.
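
To make the coding scheme concrete, a single article's record might look like the following sketch (field names and values are hypothetical illustrations of the variables above; the authors do not describe their data layout):

```python
# Hypothetical coding record for one reviewed article, mirroring the
# acceptability, study, participant, and intervention variables described
# above. Field names and values are illustrative, not the authors' schema.
article_record = {
    "year": 2012,
    "journal": "Psychology in the Schools",
    "acceptability_included": True,           # dichotomous yes/no code
    "acceptability_measure": "IRP-15",        # published tool or researcher-created
    "measure_adapted": False,
    "raters": ["teacher"],                    # teacher, student, and/or parent
    "scores_reported": "item",                # total, item, or both
    "assessment_timing": "postintervention",  # pre, post, or both
    "grade_level": "elementary",
    "intervention_target": "behavior",
    "delivery_format": "individual",
    "treatment_integrity": "quantitative",    # quantitative, described, or not included
}
print(article_record["acceptability_included"])  # True
```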

6.3 | Training and interrater agreement


The two graduate students reviewed the coding manual and evaluated agreement. To determine the reliability of
inclusion decisions, 9% of the potential articles (n = 90) were coded by both raters, resulting in 97.78% agreement. When disagreements
arose, the third author determined article inclusion. To evaluate reliability of the article coding, 26.49% of the 268
studies (n = 71) were coded by the two graduate student raters, resulting in 89.29% agreement.
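
As an illustration (ours, not the authors' procedure), percent agreement of this kind is simply the proportion of articles on which both raters assigned the same code. A minimal Python sketch with hypothetical codes and a hypothetical function name:

```python
# Percent agreement between two raters' codes: the share of items on
# which both raters assigned the same value, expressed as a percentage.
def percent_agreement(rater_a, rater_b):
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must code the same number of items")
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * agreements / len(rater_a)

# Hypothetical codes for whether each article includes acceptability data
rater_1 = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes"]
rater_2 = ["yes", "no", "yes", "no", "no", "yes", "no", "no", "yes"]
print(f"{percent_agreement(rater_1, rater_2):.2f}% agreement")  # 88.89% agreement
```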

6.4 | Data analysis


Descriptive statistics were calculated to summarize the coding data. Next, Pearson’s χ2 tests were conducted to
examine the inclusion of acceptability assessment data by study, participant, and intervention characteristics. To do
so, categories were collapsed. Intervention implementers were collapsed into four categories: teacher, other school
staff, researcher, and other (e.g., parent, peer). Intervention settings were collapsed into three categories: general
education classroom, separate room, and other. Lastly, international schools were not included in the χ2 tests due to
the low number of studies that included acceptability.
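
For illustration, the omnibus test for intervention delivery can be reconstructed from the counts reported in Table 2; the sketch below assumes SciPy (the authors do not name their analysis software) and derives each "without acceptability" count by subtracting the "with acceptability" count from the category total:

```python
# Pearson chi-square test of independence: acceptability inclusion (yes/no)
# by collapsed intervention delivery format. Counts are taken from Table 2;
# each second entry is the category total minus the "with acceptability" count.
from scipy.stats import chi2_contingency

table = [
    [36, 86 - 36],  # individual
    [13, 47 - 13],  # group
    [49, 95 - 49],  # class wide
    [6, 32 - 6],    # other (e.g., school wide, combined formats)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # chi2(3) = 14.43, p = 0.002
```

Running this sketch yields χ2(3) = 14.43, p = .002, matching the delivery‐format result reported in the Results section, which is consistent with the four collapsed Table 2 categories being the ones entered into the omnibus test.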

7 | RESULTS

First, we report characteristics of intervention studies. Second, we describe the characteristics of the intervention
studies that included acceptability. Third, we compare study, participant, and intervention characteristics across
studies that did and did not include acceptability assessment. Last,
we report on acceptability assessment and summarize the reported acceptability data. Table 2 provides participant
and intervention characteristics across all studies and Table 3 provides characteristics of acceptability assessment.

7.1 | Characteristics of school psychology intervention studies


On average, the five school psychology journals reviewed in this study published 20.61 intervention articles per
year (see Figure 1). Intervention studies were published most often in 2007 and 2010 (n = 27 each year) and the
least often in 2008 and 2017 (n = 15; see Figure 1). Psychology in the Schools (n = 95; 35.45%) and School Psychology
Review (n = 57; 21.27%) most frequently included intervention studies (see Figure 2). A majority of intervention
studies occurred in public schools (n = 207; 77.24%). Over half of participants were in elementary grades (n = 148;
55.22%). Students with disabilities were included in just over one third of studies (n = 106; 39.55%). Interventionists
were teachers (n = 105; 39.18%), others such as parents and peers (n = 86; 32.09%), or researchers (n = 82; 30.60%)
in roughly one third of studies each. Interventions were most often delivered in class‐wide (n = 95;
35.45%) or individual formats (n = 86; 32.10%). The intervention target was nearly evenly divided between
academics (n = 121; 45.15%) and behavior (n = 120; 44.78%). Over half of the studies included quantitative
treatment integrity data (n = 168; 62.69%) but no information was reported in over one fourth of studies (n = 72;
26.87%).

7.2 | Characteristics of school psychology intervention studies that included acceptability
On average, the five journals reviewed published 8.31 intervention articles that reported acceptability per year (see
Figure 1), with acceptability most often reported in 2016 (n = 12) and the least often in 2009 (n = 3). Psychology in
the Schools (n = 40; 37.04%) most frequently included intervention articles with reported acceptability, followed by
School Psychology Review (n = 28; 25.93%) and School Psychology Quarterly (n = 22; 20.37%). School Psychology Review
had the highest proportion of intervention articles that included acceptability (49.11%), followed by School
Psychology Quarterly (48.89%), Psychology in the Schools (42.11%), Journal of School Psychology (37.21%), and School
Psychology International (7.14%). Of the 108 intervention articles that reported acceptability, most studies occurred
in public schools (n = 89; 82.41%). Over half of participants included in the articles that reported acceptability were
in elementary grades (n = 66; 61.11%). Just over half of studies included students with disabilities (n = 62; 57.41%).
Teachers (n = 50; 46.30%) were most often interventionists, followed by others such as parents and peers (n = 35;
32.41%) and researchers (n = 30; 27.78%). Interventions were most often delivered in the general education
classroom (n = 41; 37.96%), in class‐wide (n = 49; 45.37%) or individual (n = 36; 33.33%) formats. The most common
intervention target was behavior (n = 60; 55.56%) followed by academic skills (n = 38; 35.18%). A large proportion
of studies that reported acceptability also included quantitative treatment integrity data (n = 91; 84.26%).

TABLE 2 Participant and intervention characteristics across intervention studies

| Participant and intervention characteristics | All studies (n = 268): n | All studies: % | With acceptability (n = 108): n | With acceptability: % |
| --- | --- | --- | --- | --- |
| School setting |  |  |  |  |
| Public school | 207 | 77.24 | 89 | 82.41 |
| Nonpublic school | 29 | 10.82 | 16 | 14.81 |
| International schools | 24 | 8.96 | 2 | 1.85 |
| Grade |  |  |  |  |
| Preschool | 19 | 7.09 | 9 | 8.33 |
| Elementary school | 148 | 55.22 | 66 | 61.11 |
| Middle school | 34 | 12.69 | 8 | 7.41 |
| High school | 21 | 7.84 | 8 | 7.41 |
| Combined | 32 | 11.94 | 13 | 12.04 |
| Disability status |  |  |  |  |
| Yes | 106 | 39.55 | 62 | 57.41 |
| No | 50 | 18.66 | 27 | 25.00 |
| Interventionist |  |  |  |  |
| Teacher | 105 | 39.18 | 50 | 46.30 |
| Other school staff | 21 | 7.84 | 8 | 7.41 |
| Researcher | 82 | 30.60 | 30 | 27.78 |
| Other | 86 | 32.09 | 35 | 32.41 |
| Intervention setting |  |  |  |  |
| General education | 64 | 23.88 | 41 | 37.96 |
| Separate room | 35 | 13.06 | 15 | 13.89 |
| Other | 61 | 22.76 | 30 | 27.78 |
| Intervention delivery |  |  |  |  |
| Individual | 86 | 32.10 | 36 | 33.33 |
| Group | 47 | 17.54 | 13 | 12.04 |
| Class wide | 95 | 35.45 | 49 | 45.37 |
| Other | 32 | 11.94 | 6 | 5.56 |
| Intervention target |  |  |  |  |
| Academic | 121 | 45.15 | 38 | 35.18 |
| Behavior | 120 | 44.78 | 60 | 55.56 |
| Social skills | 52 | 19.40 | 19 | 17.59 |
| Mental health | 53 | 19.78 | 14 | 12.96 |
| Engagement | 32 | 11.94 | 16 | 14.81 |
| Treatment integrity |  |  |  |  |
| Included quantitative data | 168 | 62.69 | 91 | 84.26 |
| Mentioned, no quantitative data | 23 | 8.58 | 6 | 5.56 |
| Not included | 77 | 28.73 | 11 | 10.19 |
TABLE 3 Acceptability characteristics in school psychology intervention studies (n = 108)

| Characteristic | n | % |
| --- | --- | --- |
| Acceptability evaluation methoda |  |  |
| Self‐report | 106 | 98.15 |
| Interview | 8 | 7.41 |
| Other | 2 | 1.85 |
| Acceptability ratera |  |  |
| Teacher | 80 | 74.07 |
| Student | 64 | 59.26 |
| Parents | 24 | 22.22 |
| Type of acceptability measure |  |  |
| Published, previously validated measure only | 47 | 44.34 |
| Researcher‐developed rating scale only | 47 | 44.34 |
| Both previously validated and researcher‐developed | 12 | 11.32 |
| Measure psychometric data |  |  |
| Reported | 43 | 40.57 |
| Not reported | 63 | 59.43 |
| Adapted acceptability measure |  |  |
| Yes | 34 | 32.08 |
| No | 72 | 67.92 |
| If adapted, reliability reported |  |  |
| Yes | 11 | 32.35 |
| No | 23 | 67.65 |
| Acceptability scores |  |  |
| Total scores | 12 | 11.32 |
| Item scores | 64 | 60.38 |
| Total and item scores | 12 | 11.32 |
| Other (e.g., only qualitative data) | 18 | 16.98 |
| Acceptability assessment timing |  |  |
| Preintervention | 2 | 1.85 |
| Postintervention | 74 | 68.52 |
| Preintervention and postintervention | 6 | 5.56 |
| Other (e.g., after each session) | 10 | 9.26 |
| Not reported | 16 | 14.82 |

a Code allowed for selection of more than one rater per article.

7.3 | Comparing school psychology intervention studies with and without acceptability
Comparisons between intervention articles that did and did not report acceptability were conducted using χ2
analyses. No significant differences were found for the year of publication, grade, disability status, interventionist,
or setting of the intervention. Three significant relationships were found and subsequent 2 × 2 analyses were conducted.
First, a significant relationship was found with intervention delivery χ2(3) = 14.43, p = .002. Acceptability was more
likely to be assessed in individual interventions than in small group interventions χ2(1) = 6.37, p = .011. Acceptability
was reported more often in individual interventions compared to school‐wide and/or a combination of intervention
formats, χ2(1) = 9.21, p = .002. Class‐wide interventions were more likely to include acceptability assessment
compared to school‐wide and/or a combination of intervention formats, χ2(1) = 4.47, p = .034. Second, a significant
relationship was found with intervention skill assessed, χ2(4) = 14.34, p = .006. Acceptability was significantly more
likely to be assessed in studies targeting behavior than in studies targeting academic skills, χ2(1) = 7.88, p = .004.
Studies targeting behavioral skills were more likely to include acceptability data compared to mental health
interventions, χ2(1) = 7.42, p = .006. Acceptability was more likely to be included in studies targeting engagement
compared to mental health, χ2(1) = 3.88, p = .048. Last, a significant relationship was found with the inclusion of
treatment integrity, χ2(2) = 33.83, p < .000001. Treatment integrity was significantly more likely to be included in
articles that assessed for acceptability compared to articles that did not assess for acceptability, χ2(1) = 29.61,
p < .000001.

FIGURE 1 Intervention articles in school psychology journals that include acceptability by year

7.4 | Acceptability in school psychology intervention studies


Of the 268 school psychology intervention articles, 40.30% assessed for acceptability (n = 108). Teacher
acceptability was reported most frequently (n = 80; 74.07%), followed by students (n = 64; 59.26%), and parents
(n = 24; 22.22%). When acceptability was reported, a self‐report measure was most often used (n = 106; 98.15%);
either a published, previously validated acceptability measure (n = 47; 44.34%) or a researcher‐developed rating
scale (n = 47; 44.34%) was used. Just over 10% of studies (n = 12; 11.32%) included both a published acceptability
measure and a researcher‐developed rating scale. When a published measure was used, a version of the IRP (IRP‐
20; Witt & Martens, 1983; IRP‐15; Martens et al., 1985) was used most frequently (n = 29; 27.36%), followed by the
BIRS (Von Brock & Elliott, 1987; n = 21; 19.81%) and the CIRP (Witt & Elliott, 1985; n = 21; 19.81%). More than half
of studies (59.43%) did not report psychometric data. More than half of the studies presented item scores (n = 64;
60.38%). Acceptability was most often assessed after intervention completion (n = 74; 68.52%).

FIGURE 2 Acceptability inclusion in school psychology intervention articles by journal

7.5 | Reported acceptability data


For the IRP‐15, 35% (n = 7) of studies reported the total sum and 75% (n = 15) reported the item score. All studies used a 6‐
point Likert scale (1 = strongly disagree to 6 = strongly agree). Total scores ranged from 42 to 89 (M = 74.23) across
studies. Item scores ranged from 4.0 to 5.5 (M = 5.03).
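
As a worked check (ours, not a computation reported in the reviewed studies), a total score can be placed on the item metric by dividing by the number of items, which makes the two reporting formats roughly comparable:

$$\bar{x}_{\text{item}} = \frac{\text{total score}}{\text{number of items}} = \frac{74.23}{15} \approx 4.95$$

This is close to the mean item rating of 5.03 from the studies reporting item scores, and well above the midpoint (3.5) of the 6‐point scale.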
For the IRP‐20, the total sum (n = 4; 44.44%) and the item score (n = 4; 44.44%) were reported equally, while
one study (11.11%) did not report either. Three studies modified the item number (range, 8–19) and two studies did
not specify the number of items (i.e., 15 or 20) but were classified as IRP‐20 because the seminal study was cited
(Witt & Martens, 1983). The Likert scale differed, with 33.33% (n = 3) of studies using a 7‐point scale, 22.22% (n = 2)
using a 6‐point scale, 11.11% (n = 1) using a 5‐point scale, and 33.33% (n = 3) not reporting the scale. Anchors were
inconsistent (e.g., 0 = strongly agreed or 0 = strongly disagreed). Because neither the Likert scale nor the anchors
were consistent across studies that reported item scores, the range of item scores and the item mean were not
computed.
For the BIRS, 23.81% (n = 5) of studies reported the total sum and 90.48% (n = 19) the item score. Three studies
reduced the items from 24 to 15, while three studies did not specify the number of items. All studies (n = 21) used a
6‐point Likert scale (1 = strongly disagree to 6 = strongly agree). The range of BIRS total scores was 72–137
(M = 120.69). The range of item scores was 3.14–5.88 (M = 4.92).
For the CIRP, 23.81% (n = 5) of studies reported the total sum, 71.43% (n = 15) reported the item score, and
9.52% (n = 2) reported neither. Number of items varied, with 66.67% (n = 14) of studies using 7 items, 23.81%
(n = 5) using 6 items, 4.76% (n = 1) using 5 items, and 4.76% (n = 1) not reporting. Two thirds of studies used a 6‐point
scale (n = 14; 66.67%), with remaining studies using a 5‐ (n = 4; 19.05%) or 7‐point scale (n = 2; 9.52%). One study
did not report the scale (4.76%). The Likert scale anchors used across studies were not always consistent (e.g.,
1 = most acceptable or 1 = disagree) and therefore the range of item scores and the item mean were not computed.
For the TEI, one study (33.33%) reported the total score (M = 32.8; range, 30–35) and two studies (66.67%)
reported item scores. Two studies used 9 items and one study used 15 items. One study employed
a 7‐point Likert scale while the other two studies used a 5‐point Likert scale. Neither the Likert scale nor anchors
were consistent in studies that reported item scores and therefore the range of item scores and the item mean
were not computed.
For the URP‐IR, two studies reported the mean for the four factors (acceptability, understanding, feasibility, and
system climate), while one study only reported on three factors (acceptability, understanding, feasibility). A 6‐point
Likert scale (1 = strongly disagree to 6 = strongly agree) and item scores (M = 5.07; range, 3.96–5.67) were reported
across studies.
The Kids Intervention Profile was used in two studies and both reported the item score (M = 3.88; range,
3.65–4.17) with a 5‐point Likert scale across studies. Items varied across studies (range, 6–8 items).
The 20‐item TARF was used in one study with a 7‐point Likert scale and reported total score (M = 95.16; range,
86.68–102.23). An additional study reported using the TARF but no data were reported.

8 | DISCUSSION

The purpose of this paper was to examine the inclusion and nature of acceptability measures within intervention
studies published in five school psychology journals between 2005 and 2017. This study extends previous reviews
of acceptability data in three ways: (a) comparing characteristics of intervention studies that did and did not include
acceptability, (b) describing the measurement characteristics of acceptability tools, and (c) discussing the use and
delivery of assessment tools.
Over a 13‐year span, an average of about 20 intervention articles were published per year. Of the 268
intervention articles, most studies occurred in public schools with general education students. The most common
form of intervention delivery was class‐wide, followed closely by delivery to the individual student, with behavioral
skills and academic skills nearly equally represented. This provides valuable information about the focus of school
psychology research as a whole. Teachers, followed by researchers, were the most common implementers. Results
indicated that a majority of intervention studies included treatment integrity data (62.69%). This is somewhat higher than the
results found by Sanetti, Gritter, and Dobey (2011) who reviewed school psychology intervention research from
1995 to 2008 and found 50.2% of the studies included treatment integrity. This level of inclusion may be the result
of increased attention to the importance of treatment integrity in recent years (e.g., DiGennaro Reed & Codding,
2014; Sanetti et al., 2011).
Consistent with previous reviews (Sheridan et al., 1996; Villarreal et al., 2015) just over one third of
intervention studies reported acceptability, most often using a self‐report measure. It is notable that, unlike
treatment integrity, the level of reporting acceptability has not changed since the mid‐1980s (Sheridan et al., 1996).
Teachers were asked to report acceptability in approximately three quarters of these studies even though teachers
were the primary implementer in slightly less than half of studies. When teachers are not the primary implementers
or intervention target, it is interesting to consider what their perspective of the intervention acceptability
represents (e.g., likelihood of future adoption, potential for being the primary implementer, perspective on student
acceptability). Also, evaluating a teacher’s acceptability may be considered more feasible to researchers than
soliciting an entire class’s opinions or eliciting parents’ perspectives.
Acceptability was gathered from students in more than half of these studies, with parent input less frequently
solicited. Although this current analysis of the literature suggests that student acceptability is more frequently
collected than in other reviews (i.e., Roach et al., 2009; Villarreal et al., 2015), it is surprising that student
perceptions of intervention participation are not more commonly assessed. Including the perspectives of students
may be difficult for schools as, traditionally, there has been “a constant tug‐of‐war between regulating children
and promoting their independence and growth” (Shriberg & Desai, 2014, p. 8). The meaningful participation of
students in decisions affecting them is recognized internationally as the right of all children (United Nations, 1989).
Supporting children to communicate their views through acceptability assessment is one avenue through which
student voices can be heard and respected, particularly when adults are making decisions impacting children
(Nastasi & Naser, 2014; UNICEF, 2014).

8.1 | Acceptability study characteristics


Acceptability was significantly more likely to be assessed when individual interventions were delivered as
compared to small group or school‐wide interventions. This may be practical in that it may be more feasible for an
individual teacher and/or student to complete such a measure than multiple raters (e.g., a whole class of students).
Interventions targeting behavioral skills were more likely to assess acceptability compared to academic skills. These
findings are consistent with Roach et al. (2009) who also found that articles that included students’ acceptability
data were assessment or intervention articles that addressed emotional‐behavioral or behavioral concerns. This
may be because the construct of acceptability originated from Wolf’s (1978) seminal work within the field of
applied behavior analysis.
It is interesting, although perhaps not surprising, that treatment integrity was significantly more likely to be
evaluated within intervention study articles that assessed for acceptability. Researchers who report more than
simply student outcome data seem to include both treatment integrity and acceptability data. Like acceptability, a
focus on treatment integrity grew out of the behavior analysis literature (DiGennaro Reed & Codding, 2014) and so
it is possible this co‐occurrence is associated with researchers with this training and perspective. Whatever the
reason for similar levels of reporting, this co‐occurrence should be carefully examined to elucidate the relationship between
these variables.

8.2 | Measurement of acceptability


Findings from the present study demonstrate differing approaches to the measurement and reporting of
acceptability. A majority of studies measured acceptability once, and only after an intervention was implemented;
the remaining studies used a variety of approaches (e.g., pre and post, after each intervention session) or did not
report when acceptability was assessed. Unfortunately, posttreatment acceptability measurements may not
meaningfully benefit the participants of the intervention, and acceptability measurement may therefore be deemed
to have limited value and contribute to decreased use (Kennedy, 2002).
assessed through multiple modes (e.g., rating scales and direct behavioral observation), with multiple individuals (e.g.,
teachers, parents, and students), and across the intervention process (e.g., before, during, and after the intervention;
Finn & Sladeczek, 2001). This position is harmonious with the views of some scholars who conceptualize acceptability
as a dynamic process that occurs throughout intervention implementation (Foster & Mash, 1999; Schwartz & Baer,
1991). Given that intervention compatibility is a commonly reported barrier to intervention implementation, using
acceptability assessment to inform intervention, rather than reflect upon the intervention, might be beneficial (Long
et al., 2016). Newer measures, such as the CURP (Briesch & Chafouleas, 2009) and URP‐IR (Briesch, Chafouleas,
Neugebauer, & Riley‐Tillman, 2013), include multiple dimensions (e.g., personal desirability, understanding, feasibility)
which focus acceptability assessment on the likelihood of intervention usage and the development of supports to
promote successful intervention implementation.
Furthermore, while just over half of studies presented their acceptability assessment results in the
form of item scores, the remaining studies used total scores, a combination of item and total scores, or other formats. When
acceptability was measured, about half of the studies used one of seven published acceptability measures and the
same number of studies used a researcher‐created measure. The most common published measures employed were
a variation of the IRP (IRP‐20 or IRP‐15) or the CIRP. Approximately 40% of articles included some type of
psychometric information on the acceptability measure, and this percentage decreased when solely considering the
researcher‐created measures. The lack of psychometric information is further problematic in that nearly a third of
authors adapted the acceptability measures by changing the wording of items, Likert scale, or scale anchors. This
latter finding also made it difficult to compare acceptability ratings across studies or examine whether differences
existed among interventions according to target skill, implementer, or delivery format. However, when
acceptability information could be summarized, most measures suggested the interventions employed were
perceived to be acceptable by consumers.
These results suggest a lack of common standards for how researchers assess for acceptability and how they
report the results of acceptability measures (i.e., item scores or total scores). By not having a
standardized approach to the assessment and reporting of acceptability, it is difficult for practitioners and
researchers to interpret the findings. Previous research has called for editors of applied research journals (e.g.,
special education, school psychology) to recommend to authors, as a minimum standard, that acceptability in
assessment and intervention studies be reported along with how it was measured (Callahan et al., 2017; Roach
et al., 2009). If acceptability were included as a mandatory reporting standard in applied research and published
acceptability rating scales were readily available, researchers and practitioners would be able to know which
interventions are most and least acceptable. This information could be critical to the decision‐making
process when determining which intervention to implement. For example, if two interventions have both been
found through a meta‐analysis to be effective in teaching a skill, but one intervention has been found to be more
acceptable, the selection of the more acceptable intervention might be most appropriate.
An additional approach that may be beneficial is a central repository where acceptability tools are easily accessible.
Unlike treatment integrity, for which a researcher or practitioner can create a measure based on the individual
treatment, acceptability assessment has primarily relied on published acceptability rating scales. The limited
availability of these scales may contribute to the large percentage of articles relying on researcher‐developed
acceptability tools, as well as to the inconsistencies in acceptability measurement we found in this review. Further,
ready access to published acceptability rating scales would allow researchers and practitioners to use them in the
intervention planning process to determine whether there are potential barriers, as well as contribute to the pool
of data regarding the acceptability of certain interventions.

8.3 | Limitations and future directions


Although these results expand the previous literature on the state of acceptability assessment in school psychology
intervention studies, this study is not without limitations. First, as a way to contain the scope of this review, this study
examined research studies from only five school psychology journals and did not include other applied journals in related
fields (e.g., special education, behavior analysis) that publish intervention research with children. Second, in conducting χ2
analyses, some categories were collapsed to avoid small cell sizes. Third, due to modifications of the acceptability rating
scales, we were not always able to report the level of acceptability or evaluate characteristics of studies that were rated as
more acceptable. Fourth, we did not examine whether treatment efficacy was related to acceptability.
This study provides valuable information regarding the use of acceptability assessment in school psychology
intervention research. Although attending to intervention acceptability is considered a professional responsibility
of psychologists and educators (APA, 2006; CEC, 2014; NASP, 2010), results from this study indicate it is only
being included in school psychology intervention research just over one third of the time. Given that perceptions of
intervention procedures inevitably will vary due to a variety of factors including knowledge, skills, experience, and
beliefs, the assessment of intervention acceptability may bring to light components critical to intervention
effectiveness and implementation (APA, 2002; Cross Calvert & Johnston, 1990; Long et al., 2016). By not assessing
for acceptability when conducting interventions, practitioners and researchers may miss critical information that
could meaningfully inform the implementation of interventions in schools. It may be beneficial for practitioners and
researchers to move beyond one‐time self‐report questionnaires to an approach that examines acceptability
throughout intervention planning and implementation (Fawcett, 1991; Finn & Sladeczek, 2001; NASP, 2010;
Schwartz & Baer, 1991). When acceptability is assessed using self‐report measures, it is critical that the measures
be used as intended, adaptations be documented fully, and additional psychometric analyses conducted to
determine whether the data remains reliable after being adapted.

OR CID

Meghan R. Silva http://orcid.org/0000-0001-5106-1127


Melissa A. Collier‐Meek http://orcid.org/0000-0002-5789-7029

REFERENC ES

Allinder, R. M., & Oats, R. G. (1997). Effects of acceptability on teachers’ implementation of curriculum‐based measurement
and student achievement in mathematics computation. Remedial and Special Education, 18, 113–120. https://doi.org/10.
1177/074193259701800205
American Psychological Association (APA). (2002). Criteria for evaluating treatment guidelines. American Psychologist, 57,
1052–1059. https://doi.org/10.1037/0003‐066X.57.12.1052
American Psychological Association (APA). (2006). Evidence‐based practice in psychology. American Psychologist, 61,
271–285. https://doi.org/10.1037/0003‐066X.61.4.271
Briesch, A. M., & Chafouleas, S. M. (2009). Exploring student buy‐in: Initial development of an instrument to measure
likelihood of children’s intervention usage. Journal of Educational and Psychological Consultation, 19, 321–336. https://
doi.org/10.1080/10474410903408885
Briesch, A. M., Chafouleas, S. M., Neugebauer, S. R., & Riley‐Tillman, T. C. (2013). Assessing influences on intervention
implementation: Revision of the usage rating profile‐intervention. Journal of School Psychology, 51, 81–96. https://doi.
org/10.1016/j.jsp.2012.08.006
Callahan, K., Hughes, H. L., Mehta, S., Toussaint, K. A., Nichols, S. M., Ma, P. S., … Wang, H. T. (2017). Social validity of
evidence‐based practices and emerging interventions in autism. Focus on Autism and Other Developmental Disabilities, 32,
188–197.
Carter, S. L. (2007). Review of recent treatment acceptability research. Education and Training in Developmental Disabilities,
42, 301–316.
Council for Exceptional Children (CEC). (2014). Council for exceptional children: Standard of evidence‐based practices in
special education. Teaching Exceptional Children, 46, 206–212.
Cross Calvert, S., & Johnston, C. (1990). Acceptability of treatments for child behavior problems: Issues and implications for
future research. Journal of Clinical Child Psychology, 19, 61–74. https://doi.org/10.1207/s15374424jccp1901_8
Dart, E. H., Cook, C. R., Collins, T. A., Gresham, F. M., & Chenier, J. S. (2012). Test driving interventions to increase
treatment integrity and student outcomes. School Psychology Review, 41, 467–481.
DiGennaro Reed, F. D., & Codding, R. S. (2014). Advancements in procedural fidelity assessment and intervention:
Introduction to the special issue. Journal of Behavioral Education, 23, 1–18. https://doi.org/10.1007/s10864‐013‐9191‐3
Eckert, T. L., Hier, B. O., Hamsho, N. F., & Malandrino, R. D. (2017). Assessing children’s perceptions of academic
interventions: The Kids Intervention Profile. School Psychology Quarterly, 32, 268–281. https://doi.org/10.1037/spq0000200
Eckert, T. L., & Hintze, J. M. (2000). Behavioral conceptions and applications of acceptability: Issues related to service
delivery and research methodology. School Psychology Quarterly, 15, 123–148. https://doi.org/10.1037/h0088782
Elliott, S. N. (1988). Acceptability of behavioral treatments: Review of variables that influence treatment selection.
Professional Psychology: Research and Practice, 19, 68–80. https://doi.org/10.1037/0735‐7028.19.1.68
Fawcett, S. B. (1991). Social validity: A note on methodology. Journal of Applied Behavior Analysis, 24, 235–239. https://doi.
org/10.1901/jaba.1991.24‐235
Finn, C. A., & Sladeczek, I. E. (2001). Assessing the social validity of behavioral interventions: A review of treatment
acceptability measures. School Psychology Quarterly, 16, 176–206. https://doi.org/10.1521/scpq.16.2.176.18703
Finney, J. W. (1991). On further development of the concept of social validity. Journal of Applied Behavior Analysis, 24,
245–249. https://doi.org/10.1901/jaba.1991.24‐245
Foster, S. L., & Mash, E. J. (1999). Assessing social validity in clinical treatment research: Issues and procedures. Journal of
Consulting and Clinical Psychology, 67, 308–319. https://doi.org/10.1037/0022‐006X.67.3.308
Hier, B. O., & Eckert, T. L. (2014). Evaluating elementary‐aged students’ abilities to generalize and maintain fluency gains of
a performance feedback writing intervention. School Psychology Quarterly, 29, 488–502. https://doi.org/10.1037/
spq0000040
Hier, B. O., & Eckert, T. L. (2016). Programming generality into a performance feedback writing intervention: A randomized
controlled trial. Journal of School Psychology, 56, 111–131.
Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior
Modification, 1, 427–452.
Kazdin, A. E. (1980). Acceptability of alternative treatments for deviant child behavior. Journal of Applied Behavior Analysis,
13(2), 259–273.
Kelley, M. L., Heffer, R. W., Gresham, F. M., & Elliott, S. N. (1989). Development of a modified treatment evaluation
inventory. Journal of Psychopathology and Behavioral Assessment, 11(3), 235–247.
Kennedy, C. H. (2002). The maintenance of behavior change as an indicator of social validity. Behavior Modification, 26,
594–604.
Long, A. C. J., Sanetti, L. M. H., Collier‐Meek, M. A., Gallucci, J., Altschaefl, M., & Kratochwill, T. R. (2016). An exploratory
investigation of teachers' intervention planning and perceived implementation barriers. Journal of School Psychology, 55,
1–26.
Martens, B. K., Witt, J. C., Elliott, S. N., & Darveaux, D. X. (1985). Teacher judgments concerning the acceptability of school‐
based interventions. Professional Psychology: Research and Practice, 16, 191–198. https://doi.org/10.1037/0735‐7028.
16.2.191
Mautone, J. A., DuPaul, G. J., Jitendra, A. K., Tresco, K. E., Junod, R. V., & Volpe, R. J. (2009). The relationship between
treatment integrity and acceptability of reading interventions for children with Attention‐Deficit/Hyperactivity
Disorder. Psychology in the Schools, 46, 919–931.
Miller, F. G., Neugebauer, S. R., Chafouleas, S. M., Briesch, A. M., & Riley‐Tillman, T. C. (2013). Examining innovation usage:
Construct validation of the Usage Rating Profile ‐ Assessment. Poster presentation at the American Psychological
Association Annual Convention, Honolulu, HI.
Miltenberger, R. G. (1990). Assessment of treatment acceptability: A review of the literature. Topics in Early Childhood
Special Education, 10, 24–38.
Nastasi, B. K., & Naser, S. (2014). Child rights as a framework for advancing professional standards for practice, ethics, and
professional development in school psychology. School Psychology International, 35, 36–49.
National Association of School Psychologists (NASP). (2010). Model for comprehensive and integrated school psychological
services. Retrieved from http://www.nasponline.org/standards/practice-model/
Peterson, C. A., & McConnell, S. R. (1996). Factors related to intervention integrity and child outcome in social skills
interventions. Journal of Early Intervention, 20, 146–164.
Reimers, T. M., & Wacker, D. P. (1988). Parents’ ratings of the acceptability of behavioral treatment recommendations
made in an outpatient clinic: A preliminary analysis of the influence of treatment effectiveness. Behavioral Disorders, 14,
7–15.
Reimers, T. M., Wacker, D. P., Cooper, L. J., & DeRaad, A. O. (1992). Acceptability of behavioral treatments for children:
Analog and naturalistic evaluations by parents. School Psychology Review, 21, 628–643.
Reimers, T. M., Wacker, D. P., & Koeppl, G. (1987). Acceptability of behavioral interventions: A review of the literature.
School Psychology Review, 16, 212–227.
Roach, A. T., Wixson, C. S., Talapatra, D., & LaSalle, T. P. (2009). Missing voices in school psychology research: A review of
the literature 2002–2007. The School Psychologist, 63, 5–10.
Sanetti, L. M. H., Gritter, K. L., & Dobey, L. M. (2011). Treatment integrity of interventions with children in the school
psychology literature from 1995 to 2008. School Psychology Review, 40, 72–84.
Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the art? Journal of Applied
Behavior Analysis, 24, 189–204. https://doi.org/10.1901/jaba.1991.24‐189
Sheridan, S. M., Welch, M., & Orme, S. F. (1996). Is consultation effective?: A review of outcome research. Remedial and
Special Education, 17, 341–354.
Shriberg, D., & Desai, P. (2014). Bridging social justice and children’s rights to enhance school psychology scholarship and
practice: Social Justice and Children’s Rights. Psychology in the Schools, 51, 3–14.
Sterling‐Turner, H. E., & Watson, T. S. (2002). An analog investigation of the relationship between treatment acceptability
and treatment integrity. Journal of Behavioral Education, 11, 39–50.
Turco, T. L., & Elliott, S. N. (1986). Students' acceptability ratings of interventions for classroom misbehaviors: A study of
well‐behaving and misbehaving youth. Journal of Psychoeducational Assessment, 4(4), 281–289.
UNICEF. (2014). Rights under the convention on the rights of the child. Available from http://www.unicef.org/crc/index_30177.
html
United Nations. (1989). Convention on the rights of the child. Available from http://www2.ohchr.org/english/law/crc.htm
Villarreal, V., Ponce, C., & Gutierrez, H. (2015). Treatment acceptability of interventions published in six school psychology
journals. School Psychology International, 36, 322–332. https://doi.org/10.1177/0143034315574153
Von Brock, M. B., & Elliott, S. N. (1987). Influence of treatment effectiveness information on the acceptability of classroom
interventions. Journal of School Psychology, 25, 131–144.
Witt, J. C., & Elliott, S. N. (1985). Acceptability of classroom management strategies. In Kratochwill, T. R. (Ed.), Advances in
school psychology (Vol. 4, pp. 251–288). Hillsdale, NJ: Erlbaum.
Witt, J. C., & Martens, B. K. (1983). Assessing the acceptability of behavioral interventions used in classrooms. Psychology in
the Schools, 20, 510–517.
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its
heart. Journal of Applied Behavior Analysis, 11, 203–214.

How to cite this article: Silva MR, Collier‐Meek MA, Codding RS, DeFouw ER. Acceptability assessment of
school psychology interventions from 2005 to 2017. Psychol Schs. 2019;1–16.
https://doi.org/10.1002/pits.22306
