
J Behav Educ

DOI 10.1007/s10864-016-9247-2

ORIGINAL PAPER

Training Public School Special Educators to Implement Two Functional Analysis Models

Mandy Rispoli1 · Leslie Neely2 · Olive Healy3 · Emily Gregori4

© Springer Science+Business Media New York 2016

Abstract The purpose of this study was to investigate the efficacy and efficiency
of a training package to teach public school special educators to conduct functional
analyses of challenging behavior. Six public school educators were divided into two
cohorts of three and were taught two models of functional analysis of challenging
behavior: traditional and trial-based functional analysis. The effect of the training
package on functional analysis implementation was evaluated using multiple-
baseline designs across participants for each functional analysis model. The
sequence of functional analysis models taught was counterbalanced across cohorts.
Following the training package, all participants reached 100 % implementation
fidelity during role-plays and classroom sessions with students, and maintained high
fidelity at follow-up role-play sessions. Data on training duration, trials to criterion,
and social validity revealed that trial-based functional analysis training had a shorter
duration and was rated as the more favorable functional analysis model.

Keywords Functional analysis · Trial-based functional analysis · Teachers · Teacher training

✉ Mandy Rispoli
mrispoli@purdue.edu

1 Department of Educational Studies, Purdue University, 100 N University Street, West Lafayette, IN 47907, USA
2 University of Texas San Antonio, San Antonio, TX, USA
3 Trinity College Dublin, Dublin, Ireland
4 Texas A&M University, College Station, TX, USA


Introduction

The success of a function-based intervention relies on accurately identifying the
function of challenging behavior (e.g., Sugai et al. 1998). Sugai et al. (2000) define a
functional behavior assessment (FBA) as a "systematic process of identifying
problem behaviors and the events that (a) reliably predicts occurrence and
nonoccurrence of those behaviors and (b) maintains the behaviors across time" (p.
137). Although the Individuals with Disabilities Education Act (IDEA 2004)
requires that FBAs be conducted as part of intervention planning, the regulations do
not define the FBA process (Part 300(E)(300.530)(d)(1)(ii)). FBAs in public schools
vary widely in terms of structure and quality (Blood and Neel 2007). Van Acker
et al. (2005) analyzed over 70 FBAs and found them to rely heavily on indirect
assessments, to be vague regarding behavior definitions, to lack systematic
manipulations of environmental variables, and to not lead to function-based
interventions. Scott et al. (2005) found that school personnel typically only
implemented FBAs once the challenging behavior required crisis-level intervention,
rather than using the FBA process as a tool to prevent escalation or persistence of
challenging behavior.
Most public schools do not have personnel with expertise in challenging behavior
assessment and intervention. As a result, many FBAs are completed by itinerant
personnel or outside consultants who travel from campus to campus (Sprague et al.
1996). This reliance on itinerant or outside personnel can lead to the development of
interventions which lack resources and contextual fit with the school and classroom
environments and culture (Loman and Horner 2014). Compounding this issue is the
lack of teacher involvement and understanding of the FBA process (Blood and Neel
2007). Chitiyo and Wheeler (2009) surveyed general and special education teachers
regarding their experiences with FBA and positive behavior interventions and
supports (PBIS). Teachers reported that conducting FBAs was one of the most
difficult and problematic aspects of PBIS. Minimal teacher understanding and
involvement in the FBA process may contribute to lack of teacher buy-in, lack of
identification of relevant environmental factors associated with the challenging
behavior, and lack of adherence to behavior intervention recommendations
(Hassiotis et al. 2009).
Research investigating professional development to increase teacher involvement
in the FBA process has identified several common instructional features, including
performance feedback. Previous research has shown that improvements in teacher
implementation fidelity often require performance feedback in which individual
data are reviewed, discussed, and plans for improvement are made (Noell et al.
2005). Performance feedback is one of the most commonly researched means of
changing interventionist behavior in applied behavior analysis (Bechtel et al. 2015).
In addition to performance feedback, other effective instructional features include
written and verbal instruction, modeling, and role-play (McCahill et al. 2014).
Through the use of these instructional features paired with performance feedback,
teachers have been taught to conduct a variety of FBA components including


indirect and descriptive assessments (e.g., Ellingson et al. 2000; Symons et al. 1998)
and functional analysis (e.g., Pence et al. 2014).
Functional analysis (FA) is a component of an FBA involving the systematic
manipulation of environmental variables to determine antecedents which occasion
challenging behavior and sources of reinforcement which maintain challenging
behavior (Hanley et al. 2003). In a traditional FA (TFA) (cf. Iwata et al. 1982/
1994), a variety of social conditions are simulated to examine potential antecedents
and common sources of reinforcement for challenging behavior (e.g., to obtain adult
attention, to escape task demands). While TFA is considered to be the most precise
method for determining functions of challenging behavior (Delfs and Campbell
2010) and has been the subject of peer-reviewed research for over 30 years (Beavers
et al. 2013), FAs are rarely implemented in public school settings (Gresham et al.
2004). The absence of FAs in schools may be related to the level of difficulty of the
procedures, required personnel and time needed to carry out the procedures, and the
potentially lengthy and intensive training required to implement the FA (Daly et al.
1997; Ducharme and Shector 2011). Recent modifications to TFA models may help
to address some of the barriers affecting the use of FA in classroom settings (Lydon
et al. 2012).
Trial-based functional analysis (TBFA) was introduced by Sigafoos and Saggers
(1995) as an FA model which can be embedded into classroom environments.
Rather than relying on "artificial" simulated social conditions (McCahill et al. 2014,
p. 480) to test specific antecedents and reinforcers for challenging behavior, the
TBFA capitalizes on social situations already occurring in the individual’s
environment. Like the TFA model, data are collected on the occurrence of
challenging behavior following the presentation of specific antecedents, such as
removal of teacher attention, presentation of a task demand, or removal of a
preferred activity or item. However, unlike TFA models, which require massed
presentation of antecedents and reinforcement for challenging behavior (Iwata et al.
1982/1994), the TBFA model consists of brief (i.e., 2 min) trials distributed
throughout the school day. It has been proposed that the TBFA model may require
fewer resources and have higher social validity with public school teachers than
TFA models (cf. Lloyd et al. 2014; Rispoli et al. 2015a, b). While studies have
trained teachers in either TBFA (e.g., Kunnavatana et al. 2013) or TFA (e.g., Erbas
et al. 2006), no study has trained teachers in multiple FA models in order to
compare teacher implementation acquisition, required resources, or measures of
social validity of these FA models.
The purpose of this study was to compare teacher acquisition of TFA and TBFA.
Specific research questions were: (a) What are the effects of a teacher training
package on public school special educators’ implementation of the TFA model and
the TBFA model? (b) Do teacher training effects generalize to teacher implementation of FAs in their classrooms with students with autism spectrum disorder
(ASD)? (c) Do teacher training effects maintain at follow-up probes? (d) Which FA
model requires fewer sessions for teachers to acquire? (e) How do teachers view the
feasibility and acceptability of conducting an FA in the classroom and which FA
model do these teachers prefer?


Method

Participants

Following approval by the institutional review board, six educators, randomly assigned to cohorts of three, were recruited from a local public school district via an
electronic mail advertisement. To be included in the study, participants had to be
currently employed as a special education teacher, or as a supervisor of special
education teachers, and work with students with ASD who engaged in challenging
behavior which required individualized intervention.
Angela, Brooke, and Cate comprised Cohort 1. Angela was a special education
teacher with a master’s degree in special education and 7 years of teaching
experience. Angela had taken a graduate course which discussed FA, but had never
implemented an FA. She taught an elementary self-contained classroom for students
with ASD. Her students received instruction both within the special education
classroom and in general education settings. There were typically between three and
six students in the classroom and one to two paraprofessionals.
Brooke was a special education teacher with 2 years of teaching experience. At
the time of the study, Brooke was pursuing a master’s degree in special education.
Brooke had taken a graduate course which discussed FA, but had never
implemented an FA. She taught a middle school self-contained special education
classroom for students with ASD. Her students received instruction both within the
special education classroom and in general education settings. There were typically
between three and five students in the classroom and between one and three
paraprofessionals.
Cate was a special education teacher with 3 years of teaching experience. Cate
held a bachelor’s degree in special education and had no previous experience with
FA. Cate taught a self-contained special education classroom for elementary school
students with ASD. There were five students in the classroom and between two and
three paraprofessionals.
Debbie, Ellen, and Fiona participated in Cohort 2. Debbie was a special
education district coordinator and supervised all staff professional development and
programming related to students with ASD, developmental, and intellectual
disabilities. Debbie had a master’s degree in school psychology and was a licensed
specialist in school psychology (LSSP). Debbie was completing coursework and
field supervision to become a board certified behavior analyst. Debbie had previous
coursework and professional development on FA, but had not implemented an FA.
Ellen was a second year special education teacher. Ellen was pursuing her board
certification in behavior analysis and had just begun her field supervision hours at
the time of the study. Ellen had previous coursework related to FA, but had no
experience implementing an FA. Ellen taught a self-contained classroom for
students with ASD in an intermediate school. Her students received instruction both
within the special education classroom and in general education settings. There
were usually between three and five students in the classroom at one time and one
paraprofessional.


Fiona was a special education teacher with 2 years of teaching experience. Fiona
had a bachelor’s degree in special education and no previous experience with FA.
Fiona taught a preschool self-contained special education classroom for children
with ASD located in the elementary school. Her students were between 3 and
5 years and received services both within the special education classroom and in
inclusive settings. Typically, there were between two and three students in the
classroom at a time and one to three paraprofessionals.
Each of the educators was paired with one of her students for evaluating FA
implementation in the classroom. Students’ parents provided informed consent for
their child to participate in this study. All participating students were between 5 and
13 years of age, received special education services related to an educational
diagnosis of autism, and were reported by their teachers to engage in challenging
behavior daily or weekly. As Debbie was a district administrator without a caseload
of students, she implemented the classroom FA with Cate’s student. To ensure their
data were independent, Debbie and Cate were not present during each other’s
sessions and did not discuss any aspects of the study with one another. Student
target behaviors included: verbal protesting (Angela’s student), whining (Brooke’s
student), aggression (Cate and Debbie’s student), repetitive verbalizations (Ellen’s
student), and body clenching or shirt biting (Fiona’s student).

Setting and Materials

Training, role-play, and maintenance sessions were conducted within the educators’
classrooms or in an empty conference room in the school. In situ sessions were
conducted in the educators’ classrooms. All sessions were video-recorded for data
collection purposes. During baseline, role-play and performance feedback, and
maintenance phases, only the educator and two researchers were present. The in situ
probes were conducted in the educator’s classroom during the instructional day
when students and staff were present. To minimize disruption to the classroom, only
one researcher was present during these sessions.
FA materials varied dependent on the available classroom materials, but included
items related to leisure (e.g., computer, toys, books) and instruction (e.g.,
worksheets, math manipulatives). Training materials included a copy of the Iwata
et al. (1982/1994) and Bloom et al. (2013) articles, a laptop computer loaded with a
PowerPoint® presentation with corresponding printed handouts of the slides, and a
DVD with 2-min video clips depicting each assessment condition of the FA models.

Experimental Design

Four multiple-baseline designs across participants (Kennedy 2005) were used to evaluate the effects of the teaching procedures on the educators' fidelity of
implementation of each FA model. However, due to scheduling constraints, Angela
and Brooke in Cohort 1 entered intervention for the TBFA simultaneously, thereby
limiting the replication of effects within that multiple-baseline design. To address
potential sequence effects, FA models were counterbalanced across cohorts such
that Cohort 1 was taught the TBFA first while Cohort 2 was taught the TFA first.


Dependent Measures

Data were collected in vivo using pencil and paper data sheets for all study
phases. The dependent variable was teacher FA implementation fidelity measured
as the percentage of steps implemented correctly during each FA session using
the point-by-point method (Gast 2010). Implementation fidelity was scored using
a researcher-developed checklist (see ‘‘Appendix 1’’). The checklist included a
task analysis of four to six steps for each condition of the FA model. For a step
in the task analysis to be scored correct, the educator had to implement that step
accurately throughout the entire session. The number of steps implemented
correctly for a given session, including both control and test components for the
TBFA, was divided by the total number of steps in that condition and multiplied
by 100 to obtain a percentage. Data were also collected on the number of
sessions required to reach implementation fidelity criterion (100 % implementation fidelity on a single session). Finally, the duration of role-play and
performance feedback sessions for each participant to reach criterion was
assessed by calculating the sum of all minutes of role-play and performance
feedback sessions for each participant.
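The point-by-point fidelity measure described above is a simple proportion. As a minimal sketch (the step counts are illustrative; each condition's checklist contained four to six steps), the calculation might look like:

```python
def implementation_fidelity(steps_correct: int, total_steps: int) -> float:
    """Point-by-point fidelity: steps implemented correctly divided by the
    total task-analysis steps for the condition, multiplied by 100."""
    return 100.0 * steps_correct / total_steps

# Illustrative example: a condition with a 6-step task analysis in which
# 5 steps were implemented accurately throughout the session.
print(round(implementation_fidelity(5, 6), 1))  # 83.3
```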

Interobserver Agreement (IOA)

A second independent observer collected reliability data from 25 % of sessions for each participant and study phase. IOA was calculated using percent agreement by
dividing the total number of agreements by the sum of the agreements plus
disagreements then multiplying by 100 %. An agreement was scored when both
raters agreed on the occurrence or nonoccurrence of an educator behavior. IOA for
Angela, Debbie and Ellen was 100 % for both FA models. Brooke’s mean IOA for
the TFA was 98 % (range 80–100 %) and for the TBFA was 96 % (range
83–100 %). Mean IOA for Cate was 99 % (range 80–100 %) and 98 % (range
83–100 %) for the TFA and TBFA, respectively. Fiona’s mean IOA was 99 % for
the TFA (range 75–100 %) and 100 % for the TBFA.
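The agreement formula above can be sketched directly; the counts in the example are invented for illustration, not taken from the study's records.

```python
def percent_agreement(agreements: int, disagreements: int) -> float:
    """Total-count IOA: agreements / (agreements + disagreements) x 100."""
    return 100.0 * agreements / (agreements + disagreements)

# Illustrative example: raters agree on 24 of 25 scored educator behaviors.
print(percent_agreement(24, 1))  # 96.0
```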

Functional Analysis Procedures

TFA

The TFA procedures were adapted from those described by Iwata et al. (1982/1994)
to be 5 min in duration and to include a tangible condition. Participants were taught
to implement four TFA conditions: (a) attention, (b) tangible, (c) escape, and
(d) play. The attention condition was designed to assess whether the student’s
challenging behavior was maintained by access to attention. In the attention
condition, the educator sat near the student with leisure materials or toys present.
The educator directed the student to engage with the materials then turned her body
away. Contingent upon the target challenging behavior, the educator approached the
student and provided brief physical attention (e.g., pat on the back) and verbal


attention such as asking, ‘‘Are you ok?’’ until challenging behavior had ceased for
10 s.
The tangible condition was designed to assess whether the student’s challenging
behavior was maintained by access to an item or activity. In the tangible condition,
the educator provided the student with 10 s of access to a preferred tangible (i.e., toy
or activity). The educator then placed the tangible in sight, but out of the student’s
reach and blocked access to the item. Contingent upon challenging behavior, the
educator provided the tangible to the student for 10 s or until challenging behavior
ceased.
The escape condition was designed to assess whether the student’s challenging
behavior was maintained by escape from task demands. During the escape
condition, the educator verbally instructed the student to complete a task related to
the student’s individualized education plan (IEP). The educator provided least-to-
most prompting to assist students in completing the task. These prompts included
verbal, verbal plus model, and verbal plus physical prompts. The educator provided
verbal praise upon task completion and then presented a new task demand.
Contingent upon challenging behavior, the educator removed task-related materials
and turned away from the student for 10 s. Task demands and related materials were
presented after the target challenging behavior had ceased for 10 s.
The play condition was designed to serve as a control condition. During the play
condition, no task demands or task-related materials were presented and educators
gave students unrestricted access to leisure activities or toys. The educator provided
praise and physical contact (e.g., pats on back) at least once every 10 s and did not
respond to challenging behavior.
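The four conditions differ only in their antecedent arrangement and the consequence programmed for challenging behavior. As a condensed sketch (the labels are ours, not the study's data sheet), they can be tabulated:

```python
# Illustrative summary of the four 5-min TFA conditions described above.
TFA_CONDITIONS = {
    "attention": {
        "antecedent": "educator turns away; leisure materials present",
        "consequence": "brief physical and verbal attention until 10 s without behavior",
    },
    "tangible": {
        "antecedent": "preferred item in sight but out of reach; access blocked",
        "consequence": "10 s of access to the item",
    },
    "escape": {
        "antecedent": "IEP-related task demands with least-to-most prompting",
        "consequence": "materials removed; educator turns away for 10 s",
    },
    "play": {  # control condition
        "antecedent": "free access to leisure items; attention at least every 10 s",
        "consequence": "no programmed response to challenging behavior",
    },
}

print(sorted(TFA_CONDITIONS))  # ['attention', 'escape', 'play', 'tangible']
```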

TBFA

Educators were taught to implement three TBFA conditions: (a) attention, (b) tangible, and (c) escape. Sessions for each condition were no more than
2 min in duration and were divided into two components: control and test (Bloom
et al. 2013). The control component was 60 s, and the test component continued for
up to 60 s or until challenging behavior occurred. All sessions began with the
control component and immediately moved to the test component. The trial ended
after the educator responded to the challenging behavior, or when 60 s had elapsed
without challenging behavior.
The attention condition was designed to assess whether the student’s challenging
behavior was maintained by access to attention. During the control component, the
educator provided the student with attention at least once every 5 s. During the test
component, the educator instructed the student to engage with the toys or leisure
activities and turned away from the student. Contingent upon challenging behavior,
the educator turned toward the student and provided brief physical (e.g., pat on the
back) and verbal attention, and the trial ended.
The tangible condition was designed to assess whether the student’s challenging
behavior was maintained by access to an item or activity. In the control component,
the educator sat near the student and provided the student with 60 s of access to a
preferred tangible. During the test component, the educator removed the tangible


from the student and placed it in sight but out of reach. The educator told the student
he or she could have the item back later and then blocked student access to the item.
Contingent upon target challenging behavior, the educator gave the item to the
student and the trial ended.
The escape condition was designed to assess whether the student’s challenging
behavior was maintained by escape from task demands. During the control
component, the educator informed the student that he or she could have a break. The
educator turned away and did not interact with the student for 60 s, and no task-
related materials or demands were presented to the student. During the test
component, the educator instructed the student to complete a task related to the
student’s IEP. The educator provided least-to-most prompting to assist the student in
completing the task in the same fashion as the TFA. The educator provided verbal
praise upon task completion and then presented a new task demand. Contingent
upon challenging behavior, the educator removed the task materials and told the
student he/she could have a break and the trial ended.

Educator Training Procedures

Baseline

Prior to baseline, educators were emailed a research article describing the FA model
being taught. Educators read the seminal Iwata et al. (1982/1994) article for the
TFA model and the recent Bloom et al. (2013) article for the TBFA model. Once the
educator confirmed she had read the article, the baseline session began. The first
author asked the educator to role-play a specific FA condition with the second
author, who played the role of a student with challenging behavior. The order of FA
conditions was randomized within each FA model. The educator was told to gather
any materials she needed and to begin when she was ready. The session continued
until the educator stated that the FA condition was finished. No performance
feedback was provided.

Educator Training

The first author implemented all training procedures with each educator individually. The training lasted 45 min and included a 30-min PowerPoint® presentation
of either the TFA or TBFA model, video examples of the corresponding FA model,
and opportunities for the participant to ask questions. Participants were provided
with corresponding handouts of each presentation slide. The presentation contained
an overview of operant conditioning, social functions of behavior, rationale for
conducting an FA, and task analyses of each FA condition.

Role-Play and Performance Feedback

Immediately following the 45 min training, the educator and a second researcher
practiced each FA condition with the researcher playing the role of the child each
time. The order in which the FA conditions were role-played was randomized, and


each role-play was considered one session. A role-play script was created for each
condition of each FA model. The scripts contained four to eight specific behaviors
for the researcher to role-play. These scripts were designed prior to the study and
were used across all educators. Scripts were designed to simulate authentic
examples of FAs with children who have challenging behavior. Examples of role-
play behaviors included: ‘‘Engage in a non-target challenging behavior at least once
during the session,’’ and ‘‘Refuse to give back the tangible by holding on tightly
when the educator attempts to take it.’’
Following each session, the first author provided written and verbal performance
feedback concerning the educator’s implementation of the target condition. The
educator was shown the data sheet (see ‘‘Appendix 1’’) for implementation fidelity.
In addition, the structure of performance feedback followed a researcher-developed
checklist (see ‘‘Appendix 2’’). The researcher provided a positive comment about
the educator’s implementation then reviewed any steps incorrectly implemented
using a neutral tone. Role-play and feedback continued until the educator reached the preset performance criterion of 100 % implementation fidelity for one session of each condition of the target FA model.

In situ Probes

After the educators had reached 100 % implementation fidelity for each role-played
condition of the targeted FA model, the educator conducted the FA conditions in her
classroom with her student. Following each in situ session, the researcher provided
performance feedback according to the same procedures used in the role-play
sessions. This phase continued until the educator reached performance criterion of
100 % implementation fidelity for each condition for the target FA model.

Maintenance Role-Play Phase

Maintenance probes were conducted for three of the four multiple-baseline designs.
Due to time restrictions, maintenance probes for Cohort 1 TFA were not conducted.
Maintenance of implementation fidelity was assessed at 1–6-week intervals
following the completion of the in situ phase. At the time of maintenance probes,
some of the children who had participated in the in situ probes were receiving
challenging behavior intervention. Their teachers requested that a functional
analysis not be implemented with these children so as to not provide contingent
reinforcement for challenging behavior. As a result, maintenance probes were
identical to the role-play and performance feedback condition with a second
researcher playing the role of a student in the FA. This phase continued until the
educator reached performance criterion of 100 % fidelity for each condition of the
FA model.

Procedural Integrity

Procedural integrity data were collected on implementation of educator training, adherence to role-play scripts, and implementation of performance feedback by a


researcher uninvolved with that phase of the study. For example, the first author
collected procedural integrity data for the role-plays and the second author for the
initial training and performance feedback sessions. Procedural integrity was
calculated by dividing the number of target behaviors the researcher engaged in
by the total number of expected researcher behaviors and multiplying by 100 %.
IOA on procedural integrity was calculated by a trained research assistant from
video recordings for at least 20 % of sessions across each participant and was
100 %. Data were collected on the researcher’s engagement in three behaviors
during the initial training including delivering the 30-min presentation, showing the
videos of each FA condition, and responding to participant questions. Procedural
integrity data were collected for 100 % of the trainings, and integrity was 100 %. Mean
procedural integrity for role-plays across participants for TFA was 98 % (range
96–100 %) and for TBFA was 98 % (range 75–100 %). Data were collected on the
researcher’s adherence to the performance feedback procedures during the role-play
and performance feedback, in situ, and maintenance phases. Mean procedural
integrity for performance feedback across phases and participants was 99 % (range
80–100 %).

Social Validity

At the conclusion of the study, each educator completed a 16-item Likert scale
questionnaire modified from the Treatment Acceptability Rating Form—Revised
(TARF-R, Reimers et al. 1992). The Likert scale ranged from one to six with one
corresponding to ‘‘strongly disagree’’ and six corresponding to ‘‘strongly agree.’’
Participants responded to each item for the TFA and the TBFA individually. There
were 96 possible points for each FA model with higher scores indicating higher
acceptability. Reverse-scoring procedures were used for negatively keyed items.
Items on the scale included statements such as ‘‘The amount of time needed to
implement this assessment strategy is acceptable’’ and ‘‘I believe this assessment
strategy may lead to effective intervention for the student’s behavior.’’
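Scoring the modified TARF-R reduces to summing 16 six-point ratings, with negatively keyed items reversed (7 minus the rating). Which items were reverse-keyed is not reported, so the indices in this sketch are hypothetical.

```python
def tarf_r_total(ratings, reverse_keyed=frozenset()):
    """Sum a 16-item, 6-point Likert questionnaire (maximum 96); items whose
    0-based index is in `reverse_keyed` score as 7 - rating."""
    assert len(ratings) == 16 and all(1 <= r <= 6 for r in ratings)
    return sum((7 - r) if i in reverse_keyed else r for i, r in enumerate(ratings))

# Illustrative maximum-acceptability pattern: "strongly agree" (6) on positively
# keyed items, "strongly disagree" (1) on two hypothetical reverse-keyed items.
ratings = [6] * 16
ratings[3] = ratings[9] = 1
print(tarf_r_total(ratings, reverse_keyed={3, 9}))  # 96
```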

Results

Cohort 1

TBFA

Figure 1 displays the results for Cohort 1 TBFA implementation fidelity. During
baseline, Angela implemented less than 16.77 % of steps correctly for the attention
and escape conditions, and zero steps correctly of the tangible condition. Following
training, Angela implemented the tangible condition correctly without requiring
performance feedback. She required one session of performance feedback to reach
100 % fidelity in the attention condition and two sessions of performance feedback
to reach criterion for the escape condition. Angela implemented the in situ
classroom probes for the attention and tangible conditions without requiring


Fig. 1 Cohort 1 trial-based functional analysis implementation fidelity. Phase change hash marks indicate didactic instruction

additional feedback. She required one performance feedback session to reach criterion for the escape condition in the classroom. At 5-week maintenance probes,
Angela implemented each TBFA condition with 100 % fidelity.
Brooke implemented the TBFA with low fidelity during baseline (range 0–33 %
of steps correct). Brooke implemented the tangible condition with 100 % fidelity
during the first role-play, but required one feedback session to reach criterion for the
attention condition and two feedback sessions to reach criterion for the escape
condition. In the in situ phase, Brooke required one performance feedback session to
reach criterion for tangible and attention, and two performance feedback sessions to


reach criterion for escape. At 2-week maintenance probes, Brooke implemented each condition with 100 % fidelity.
Cate’s baseline levels ranged from 0 to 33 % of steps correct. She implemented
the tangible condition with 100 % fidelity immediately after training and required
one session of performance feedback for attention and two performance feedback
sessions for escape. Cate only required one performance feedback session during the
in situ probes for the escape condition. She maintained 100 % implementation
fidelity at a 1-week maintenance probe.

Fig. 2 Cohort 1 traditional functional analysis implementation fidelity. Phase change hash marks indicate didactic instruction


TFA

After teachers had completed in situ probes for the TBFA, they began baseline for
the TFA. The TFA results for Cohort 1 are presented in Fig. 2. Angela’s TFA
fidelity ranged from 0 to 50 % of steps correct. She reached 100 % fidelity for the
attention, play, and escape conditions following training. Angela required one
performance feedback session to reach criterion for the tangible condition. Angela
generalized her implementation to the in situ probes in her classroom without

[Figure: percentage of TFA steps implemented correctly by Debbie, Ellen, and Fiona
across baseline, role-play, role-play + feedback, in situ, and maintenance phases
(6-, 5-, and 3-week probes, respectively)]

Fig. 3 Cohort 2 traditional functional analysis implementation fidelity. Phase change hash marks
indicate didactic instruction


further performance feedback. Brooke's implementation fidelity ranged from 25 to
100 % of steps correct during baseline, with an increasing trend for the play
condition. She implemented all TFA conditions
with 100 % fidelity during the role-play and in situ probes after initial training.
Cate’s implementation fidelity at baseline ranged from 0 to 60 % of steps correct.
She implemented the escape, play, and attention conditions with 100 % fidelity after
training and only required one performance feedback session to reach criterion for the
tangible condition. Cate generalized her performance to the classroom without
additional feedback.

[Figure: percentage of TBFA steps implemented correctly by Debbie, Ellen, and Fiona
across baseline, role-play, role-play + feedback, in situ, and maintenance phases
(4-, 3-, and 2-week probes, respectively)]

Fig. 4 Cohort 2 trial-based functional analysis implementation fidelity. Phase change hash marks
indicate didactic instruction


Cohort 2

TFA

Figure 3 displays results for Cohort 2 TFA implementation fidelity. Debbie's
baseline implementation fidelity ranged from 16 % in the tangible condition to
100 % in the attention condition. Following training, Debbie implemented the
attention, escape, and play conditions with 100 % fidelity. She required one
performance feedback session to reach criterion in the tangible condition. Debbie
implemented all TFA conditions with 100 % fidelity in the in situ probes. At 6-week
maintenance probes, Debbie maintained 100 % fidelity for the escape and attention
conditions, and required one performance feedback session for the tangible and play
conditions.
Ellen’s baseline implementation fidelity ranged from 0 to 80 % of steps correct.
Though attention, escape, and tangible conditions showed zero trend, Ellen’s
implementation of the play condition did improve in baseline from 60 to 75 % steps
correct. She implemented the attention and play conditions with 100 % fidelity after
training and required one performance feedback session for both the escape and
tangible conditions. Ellen implemented all TFA conditions with 100 % fidelity in
the in situ sessions and at the 5-week maintenance probe.
Fiona’s baseline implementation fidelity ranged from 0 to 80 % of steps correct.
Following training, Fiona implemented the attention and play conditions with
100 % fidelity. She required one performance feedback session for the escape and
tangible conditions. Fiona implemented the attention, play, and tangible conditions
with 100 % fidelity in the in situ sessions and only required one performance
feedback session for the escape condition. At 3-week maintenance probes, Fiona
implemented tangible, escape, and play conditions with 100 % fidelity. She reached
100 % fidelity for the attention condition after one performance feedback session.

TBFA

After Cohort 2 had completed in situ probes for the TFA, they began baseline for the
TBFA model. These results are depicted in Fig. 4. Debbie’s TBFA baseline
implementation fidelity ranged from 0 to 66 % correct. Following training, Debbie
implemented each condition with 100 % fidelity during the role-play, in situ, and
4-week maintenance probe without requiring performance feedback. Ellen’s
baseline data ranged from 16 to 66 % of steps correct with a decreasing trend for
tangible, no trend for escape, and an increasing trend for attention. Like Debbie,
Ellen implemented each condition with 100 % fidelity during the role-play, in situ,
and 3-week maintenance probe without requiring performance feedback. Fiona’s
baseline data ranged from 33 to 50 % of steps correct. Following training, she
implemented each condition with 100 % fidelity. She implemented the escape and
attention conditions with 100 % fidelity in the in situ sessions and required one
performance feedback session for the tangible condition. Fiona maintained 100 %
fidelity for all conditions at the 2-week maintenance probe.


Sessions to Criterion

Figure 5 presents the sessions to criterion for each condition of each of the FA
models. Cohort 1 required a mean of 2 sessions to reach criterion in the TBFA and
1.17 sessions to reach criterion in the TFA phase. Cohort 2 was taught the TFA first
and the TBFA second. They required on average 1.5 sessions to reach criterion in
the TFA and only 1 session to reach criterion in the TBFA. In the TFA model, the
tangible condition required the most sessions to reach criterion while in the TBFA
model the escape condition required the most sessions to criterion. The TBFA
escape and attention conditions required more sessions than the TFA demand and
attention conditions. However, the TFA tangible condition required more sessions than
the TBFA tangible condition.
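The condition means summarized in Fig. 5 can be sketched from the per-participant role-play results reported above. The sketch below (not part of the original study materials) uses the Cohort 1 TBFA counts implied by the text: one session where a participant reached criterion immediately, plus one per reported feedback session; the figure's exact counts may differ if in situ feedback sessions are included.

```python
# Sketch: mean sessions to criterion per condition across participants.
# Counts are reconstructed from the role-play results reported in the text.
from statistics import mean

# sessions_to_criterion[participant][condition] = initial role-play session
# plus any performance feedback sessions needed to reach 100 % fidelity.
tbfa_sessions = {
    "Angela": {"escape": 1, "attention": 1, "tangible": 1},
    "Brooke": {"escape": 3, "attention": 2, "tangible": 1},
    "Cate":   {"escape": 3, "attention": 2, "tangible": 1},
}

def mean_per_condition(data):
    """Average sessions to criterion for each condition across participants."""
    conditions = next(iter(data.values())).keys()
    return {c: mean(p[c] for p in data.values()) for c in conditions}

print(mean_per_condition(tbfa_sessions))
```

Under these counts the escape condition averages the most sessions, consistent with the pattern described above.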

Duration of Teacher FA Training

The duration of the training sessions was held constant (45 min) across FA models.
The duration of the role-play and performance feedback phase was measured to
capture the time required for participants to reach criterion levels for fidelity of each
FA model. These data are presented in Table 1. Mean duration of TBFA role-play
and performance feedback for both cohorts was 24 min 54 s. Mean duration of TFA
role-play and performance feedback across both cohorts was 59 min 54 s. Training
to criterion in the TBFA model took approximately 35 min less than the TFA
model.
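Averaging durations recorded as "MM min SS s" strings, as in Table 1, is a small parsing-and-arithmetic exercise. The sketch below (illustrative, not study code) uses Cohort 1's TBFA role-play durations from Table 1; the helper names are our own.

```python
# Sketch: averaging "MM min SS s" duration stamps like those in Table 1.

def to_seconds(stamp: str) -> int:
    """Parse a 'MM min SS s' duration string into total seconds."""
    parts = stamp.split()  # e.g. ['33', 'min', '14', 's']
    return int(parts[0]) * 60 + int(parts[2])

def mean_duration(stamps) -> str:
    """Return the mean duration formatted as 'MM min SS s' (seconds truncated)."""
    total = sum(to_seconds(s) for s in stamps)
    avg = total // len(stamps)
    return f"{avg // 60} min {avg % 60} s"

cohort1_tbfa = ["33 min 14 s", "34 min 58 s", "34 min 10 s"]
print(mean_duration(cohort1_tbfa))  # → 34 min 7 s
```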

Social Validity

Social validity scores for each FA model are presented in Table 2. The mean score
for the TBFA was 84 points (range 77–91 points) while the mean score for the TFA

[Figure: bar graph of mean number of sessions to criterion, trial-based vs.
traditional, for the escape, attention, and tangible conditions]

Fig. 5 Mean number of sessions to criterion for each functional analysis condition


Table 1 Total duration of role-play and performance feedback sessions

Participant     TBFA           TFA
Angela          33 min 14 s    42 min 8 s
Brooke          34 min 58 s    51 min 53 s
Cate            34 min 10 s    66 min 20 s
Debbie          16 min 54 s    53 min 58 s
Ellen           22 min 21 s    66 min 49 s
Fiona           19 min 49 s    73 min 41 s
Mean duration   24 min 54 s    59 min 54 s

Table 2 Social validity rating scale scores

Participant   TBFA score   TFA score
Angela        85           70
Brooke        83           56
Cate          87           90
Debbie        77           76
Ellen         81           67
Fiona         91           76

was 72.5 points (range 56–90 points). Five of the six educators provided higher
scores for the TBFA as compared to the TFA model. All participants indicated they
strongly agreed that identifying the function of challenging behavior was important
for intervention development. They all agreed or strongly agreed that both the TFA
and TBFA models could allow them to develop effective interventions and that they
would be willing to carry out both FA models. However, educators reported that
both FA models had undesirable side effects, such as classroom disruption.
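The summary statistics reported above follow directly from the participant scores in Table 2, as this brief illustration (our own, not study code) shows:

```python
# Reproducing the social validity summary statistics from Table 2.
from statistics import mean

scores = {  # participant: (TBFA score, TFA score)
    "Angela": (85, 70), "Brooke": (83, 56), "Cate": (87, 90),
    "Debbie": (77, 76), "Ellen": (81, 67), "Fiona": (91, 76),
}

tbfa = [s[0] for s in scores.values()]
tfa = [s[1] for s in scores.values()]
print(mean(tbfa), min(tbfa), max(tbfa))  # → 84 77 91
print(mean(tfa), min(tfa), max(tfa))     # → 72.5 56 90
```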

Discussion

The purpose of this study was to compare public school special educators’
acquisition of traditional and trial-based FA models. Our first research question was
to determine the effects of a teacher training package on implementation of each FA
model. All participants reached 100 % implementation fidelity for both FA models
following training. In instances where educators did not reach criterion levels
immediately following the didactic training, one to two performance feedback
sessions were required. This finding speaks to the potential utility of performance
feedback delivered via structured feedback checklists for improving teacher
practices in school settings.
To address potential sequence effects, we counterbalanced the instructional
sequence of each FA model across cohorts. Our results suggest that teacher training
in one FA model, regardless of which model it was, led to more rapid acquisition of
the second FA model. It is possible that the procedures of occasioning and
reinforcing challenging behavior were sufficiently similar across the two models
that implementation generalization occurred. A second explanation for this effect
may relate to teacher exposure to the logic and behavioral principles underlying FA.
Once teachers had an understanding of behavioral concepts, such as antecedents,
reinforcement, and operant conditioning, they may have transferred these concepts
(Catania 2012) across FA models. However, we did not assess teacher conceptual
understanding of FA in this study. Future research should evaluate the effects of
teacher training on both declarative knowledge as well as procedural implemen-
tation of FA models to help elucidate the mechanism underlying these acquisition
patterns.
Although the mechanism underlying this rapid acquisition of the second FA
model is unknown, our findings suggest that teaching educators one FA model may
reduce training time for additional FA models. Training educators in multiple FA
models may be beneficial for the FBA process by allowing educators to select a
specific FA model based on characteristics of the setting and student. For example,
if challenging behavior is most likely to occur in a one-on-one setting, a TFA may
be feasible. Yet, if challenging behavior is more likely to occur during instructional
time in the general education setting, a TBFA may be more easily implemented.
A second study purpose was to assess educator generalization and maintenance
of FA implementation. Results showed that implementation fidelity was generalized
to classrooms with students with ASD for several conditions within each FA model.
When implementation fidelity did not generalize, one round of performance
feedback was required for most educators, with the exception of Brooke who
required two performance feedback sessions for the TBFA escape condition.
Criterion levels were maintained for eight out of nine maintenance phases across
participants and FA models. These results suggest that following training, brief
‘‘booster’’ sessions of performance feedback may be necessary to ensure that fidelity
generalizes and maintains.
Our third research question investigated which FA model educators acquired
more rapidly. Public schools are in a constant battle for resources such as personnel,
materials, and time. Determining which FA model requires the fewest resources to
teach may lead schools to select one FA model over another. Our results were mixed
regarding the mean number of sessions needed to acquire each FA model.
However, the training time for the TBFA was less than half that of the TFA model.
This is likely due to the duration and number of FA conditions in each model. In the
TBFA, trials were no more than 2 min each and only three conditions were taught.
In the TFA model, there were four conditions each lasting 5 min. It may not be that
the TBFA was easier to acquire (as noted by the lack of differences across sessions
to criterion), but was faster to acquire given session length. If a school district is
short on time for teacher FA training, then the TBFA model may be preferred.
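The session-length arithmetic behind this explanation can be made concrete. The sketch below uses the condition counts and durations reported above; the number of TBFA trials per condition is an assumed value for illustration only, since it varies by protocol.

```python
# Back-of-envelope comparison of in-session assessment time per FA series,
# from the condition counts and durations reported in the text.

TFA_CONDITIONS, TFA_MIN_PER_SESSION = 4, 5       # four 5-min conditions
TBFA_CONDITIONS, TBFA_MAX_MIN_PER_TRIAL = 3, 2   # three conditions, trials <= 2 min
ASSUMED_TRIALS_PER_CONDITION = 1                 # hypothetical; protocol-dependent

tfa_series = TFA_CONDITIONS * TFA_MIN_PER_SESSION
tbfa_series = (TBFA_CONDITIONS * TBFA_MAX_MIN_PER_TRIAL
               * ASSUMED_TRIALS_PER_CONDITION)
print(f"One TFA series: {tfa_series} min; one TBFA series: <= {tbfa_series} min")
```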
In the current study, educators identified the student’s challenging behavior
topographies for the in situ probes. Most of the target behaviors, with the exception
of aggression with Cate and Debbie’s student, were low-intensity behaviors such as
whining, shirt biting, and verbal protest. The outcomes of the in situ probes may be
strong as a function of the nature of the behaviors being assessed. Future research is
needed to evaluate the use of this FA training package with more complex or severe
challenging behaviors.


Finally, and of critical importance to the use of FAs in school settings, we


surveyed educators regarding the acceptability and feasibility of each FA model.
Educators indicated that they valued determining the function of challenging
behavior and believed that FAs can lead to more effective function-based
interventions. These results are encouraging for the future of teacher involvement
in the FBA process, and future research should continue to investigate meaningful
ways to achieve this. Although all educators indicated that they would be willing to
implement each of the FA models, five scored the TBFA higher than the TFA.
Educators scored the TBFA higher on items related to acceptability, time required to
conduct the FA, disruptions to the classroom, and fit within the classroom routine.
Despite the overall acceptability of FAs in general and TBFA in particular, all
educators indicated at least some concern regarding the use of FAs in schools. Their
concerns tended to reflect undesirable side effects of occasioning and reinforcing
student challenging behavior and disrupting classroom instruction. While the TBFA
illustrates an attempt to increase the feasibility of the use of FA as a component of
an FBA in public school settings, further research is needed to modify FA
procedures to address teacher concerns.

Limitations

There are several limitations to this study which should be discussed. First, while
teachers implemented each FA model in role-play sessions, the only demonstration
of each FA model in the classroom occurred during the in situ phase. Related to
this, in situ probes did not occur during baseline, thereby limiting the demonstration
of generalization following the training. Future research is needed to further
evaluate the effects of the training package on teacher FA implementation in
classroom settings. Second, while this study included four multiple-baseline
designs, one of these designs (Cohort 1 TBFA) included only one staggered
replication of intervention effect. Third, we taught teachers to conduct FA
conditions for socially maintained behavior, but did not train teachers to conduct
conditions to test automatically maintained challenging behavior. Fourth, partici-
pants were highly educated and trained teachers, many of whom were pursuing
graduate degrees and behavior analyst certification. This may have contributed to
some increasing baseline trends for specific conditions with Ellen and Brooke. It is
unknown if the findings from this study would generalize to teachers without these
degrees and certifications. Fifth, the purpose of this study was to evaluate the
training package on teacher FA implementation, and data on child challenging
behavior were not obtained. Future research should evaluate not only proximal
outcomes of teacher FA training on teacher behavior, but also distal outcomes on
child behavior. Finally, in assessing social validity, teachers rated each FA model,
but did not directly compare the models. Future research on social validity of FA
models should include open-ended questions in which teachers describe advantages
and disadvantages of the models in relation to one another.
This study illustrates how educators can be meaningfully involved in the FBA
process as FA implementers. However, teachers were not taught to collect, analyze,
or make data-based decisions based on the FA data. This is an important area for


future research and discussion. There is much debate about the level of expertise
necessary to design an FA, to analyze results, and to plan function-based
interventions and whether teachers versus specialists should take on these
responsibilities (e.g., Pence et al. 2014). A compromise to this debate may be
found through the use of FBA teams. Some researchers (Scott et al. 2005) advocate
for a team approach for conducting FBAs in which at least one team member is an
expert in challenging behavior. Determining how a team approach could be used to
facilitate high-quality FAs in school settings is an area in which future research is
needed. This study supports previous research that teachers can be meaningfully
involved in the FBA process as FA implementers (Kunnavatana et al. 2013). In this
capacity, teachers may be able to address some of the barriers to quality FBAs by
reducing reliance on outside personnel and by integrating FA into the student’s
typical school environment.

Summary

Research is needed to determine innovative and effective means of building school


capacity to conduct high-quality FBAs. Functional analysis is one means of
experimentally determining the function of behavior to facilitate the development of
effective function-based interventions. The research on TBFA seems encouraging
thus far. In this study, special educators implemented both FA models with 100 %
fidelity following training and indicated a slight preference for the TBFA model.
The short trial length and the termination of test conditions following a single
episode of challenging behavior may present added benefits to the TBFA model
(Sigafoos and Saggers 1995). TBFA may offer practitioners a ‘‘compromise’’
(McCahill et al. 2014, p. 500) between descriptive assessments and TFA in terms of
efficiency, experimental manipulation, and contextual fit within classroom settings.

Compliance with Ethical Standards

Conflict of interest The authors report no conflict of interest with this study.

Ethical Approval All procedures performed in studies involving participants were in accordance with
the ethical standards of the institution and/or national research committee and the 1964 Helsinki Dec-
laration and its later amendments or comparable ethical standards.

Informed Consent Informed consent was obtained from all individual participants included in the
study.

Appendix 1: Procedural Fidelity Checklists

Traditional Functional Analysis


Reviewer: ___ Educator: ___ Date: ___
Directions: In the ''code'' column mark a ''+'' if the behavior is observed and a
''-'' if the behavior is not observed


Attention condition
1. Educator instructs the child to play with toys and then ignores
2. Educator turns away from child and ignores appropriate behavior of child
3. Educator ignores inappropriate behavior other than the target behavior emitted by the child
4. Educator provides brief attention (express concern and brief physical contact) when the child emits the target behavior
Percentage correct: ___

Tangible condition
1. Educator presents the tangible item to the student for 10 s
2. Educator removes the item from the student and places it out of reach but visible to the student
3. Educator ignores appropriate behavior of child
4. Educator ignores inappropriate behavior other than the target behavior emitted by the child
5. Educator blocks any attempts to access the tangible item
6. Contingent upon target behavior the educator gives the student access to the tangible item for 10 s and then removes it
Percentage correct: ___

Demand condition
1. Educator provides continuous instruction using a least-to-most prompt hierarchy (verbal, verbal + model, verbal + physical)
2. Educator delivers praise upon successful completion of a trial/task (does not matter what prompt level was necessary for task completion)
3. Educator does not deliver any interactions/praise outside of the task conditions
4. Contingent upon target behavior the educator removes the task and turns away for 10 s
5. Educator re-introduces the trial/task following 10 s if the child has ceased target behavior
6. Educator ignores all other inappropriate and appropriate behavior during task instruction
Percentage correct: ___

Play condition
1. Educator directs the child toward preferred items/toys
2. Educator responds to all appropriate social initiations of child
3. Educator ignores target and all other inappropriate behavior
4. Educator delivers attention approximately every 10 s
5. Educator engages in parallel or cooperative play as appropriate
Percentage correct: ___

Trial-based Functional Analysis


Reviewer: ___ Educator: ___ Date: ___
Directions: In the ''code'' column mark a ''+'' if the behavior is observed and a
''-'' if the behavior is not observed

Attention condition
Control trial:
- Educator instructs the participant to engage in independent work or leisure items. Educator does not engage in continuous demands
- Educator provides participant with attention at least once every 5 s for a total of 60 s regardless of participant engagement in challenging behavior. Attention does not include demands
- No task materials or task demands are presented to child
Test trial:
- Educator instructs the participant to engage in independent work or leisure items
- Educator explains that he/she needs to complete some work and turns body away from participant
- Educator does not speak or look at participant for 60 s unless participant engages in target challenging behavior
- Contingent upon challenging behavior, educator turns toward participant and provides verbal attention and statements of concern
Percentage correct: ___

Tangible condition
Control trial:
- Educator sits near participant and provides unrestricted access to preferred item for 60 s
- Educator does not provide attention if participant engages in challenging behavior
Test trial:
- Educator sits near participant and places preferred item in sight but out of participant's reach (more than 2'). Participant access to item is blocked
- Educator tells participant, ''You can have this later''
- Contingent upon challenging behavior, educator provides immediate access to preferred item for 60 s
Percentage correct: ___

Demand condition
Control trial:
- Educator tells the student ''You can have a break''
- Educator turns away from child and does not provide attention for 60 s
Test trial:
- Educator presents task demands once every 10 s using least-to-most prompting (verbal, verbal + model, and verbal + physical)
- Educator delivers praise (commenting or compliment) upon successful completion of a trial/task (regardless of the prompt level necessary to complete the task)
- Educator removes task demands and materials immediately if child engages in target challenging behavior
Percentage correct: ___
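Scoring either checklist reduces to the proportion of steps coded ''+''. A minimal illustration (our own helper, not study code) of computing the ''Percentage correct'' row:

```python
# Sketch: scoring a procedural fidelity checklist like those in Appendix 1,
# where each step is coded "+" (observed) or "-" (not observed).

def fidelity_percentage(codes):
    """Percentage of checklist steps coded '+'; codes is a list of '+'/'-'."""
    observed = sum(1 for c in codes if c == "+")
    return round(100 * observed / len(codes))

# Example: five of six steps implemented correctly.
print(fidelity_percentage(["+", "+", "+", "-", "+", "+"]))  # → 83
```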

Appendix 2: Performance Feedback Fidelity Form

Participant_____ Session_____
Reviewer_____ Date:_____
Please Circle: TFA or TBFA Check if second observer:
Directions: In the ''code'' column mark a ''+'' if the behavior is observed and a
''-'' if the behavior is not observed

Criteria Training
fidelity

Researcher began the session by making a positive comment about the therapist’s
implementation
If applicable, researcher discussed an example of incorrect implementation in a neutral
voice
If applicable, researcher described the correct implementation procedure
If applicable, researcher modeled the correct implementation procedure
If applicable, researcher asked the therapist to verbally describe how they would correctly
implement the procedure
If applicable, researcher provided praise contingent on the therapist's correct verbal
behavior
Researcher addressed all instances of incorrect implementation from the previous
videotaped sessions
Total correct
Percentage correct


References

Beavers, G. A., Iwata, B. A., & Lerman, D. C. (2013). Thirty years of research on the functional analysis
of problem behavior. Journal of Applied Behavior Analysis, 46, 1–21. doi:10.1002/jaba.30.
Bechtel, N., McGee, H., Huitema, B., & Dickinson, A. (2015). The effects of the temporal placement of
feedback on performance. Psychological Record, 65, 425–434. doi:10.1007/s40732-015-0117-4.
Blood, E., & Neel, R. S. (2007). From FBA to implementation: A look at what is actually being delivered.
Education and Treatment of Children, 30, 67–80. Retrieved from: http://www.
educationandtreatmentofchildren.net/.
Bloom, S. E., Lambert, J. M., Dayton, E., & Samaha, A. L. (2013). Teacher-conducted trial-based
functional analyses as the basis for intervention. Journal of Applied Behavior Analysis, 46, 208–218.
doi:10.1002/jaba.21.
Catania, A. C. (2012). Learning and behavior. Learning (5th ed., pp. 1–11). New York: Sloan Publishing.
Chitiyo, M., & Wheeler, J. J. (2009). Challenges faced by school teachers in implementing positive
behavior support in their school systems. Remedial and Special Education, 30, 58–63. doi:10.1177/
0741932508315049.
Daly, E. J., Witt, J. C., Martens, B. K., & Dool, E. J. (1997). A model for conducting a functional analysis
of academic performance problems. School Psychology Review, 26, 554–574. Retrieved from:
http://www.nasponline.org/publications/spr/.
Delfs, C. H., & Campbell, J. M. (2010). A quantitative synthesis of developmental disability research:
The impact of functional assessment methodology on treatment effectiveness. Behavior Analyst
Today, 11, 4–19. doi:10.1037/h0100685.
Ducharme, J. M., & Shector, C. (2011). Bridging the gap between clinical and classroom intervention:
Keystone approaches for students with challenging behavior. School Psychology Review, 40,
257–274. Retrieved from: http://www.nasponline.org/publications/spr/.
Ellingson, S. A., Miltenberger, R. G., Stricker, J., Galensky, T. L., & Garlinghouse, M. (2000). Functional
assessment and intervention for challenging behaviors in the classroom by general classroom
teachers. Journal of Positive Behavior Intervention, 2, 85–97. doi:10.1177/109830070000200202.
Erbas, D., Tekin-Iftar, E., & Yucesoy, S. (2006). Teaching special education teachers how to conduct
functional analysis in natural settings. Education and Training in Developmental Disabilities, 41,
28–36. http://daddcec.org/Publications/ETADDJournal.aspx.
Gast, D. L. (Ed.). (2010). Single subject research methodology in behavioral sciences. New York:
Routledge Publishers.
Gresham, F. M., McIntyre, L. L., Olson-Tinker, H., Dolstra, L., McLaughlin, V., & Van, M. (2004).
Relevance of functional behavioral assessment research for school-based interventions and positive
behavioral support. Research in Developmental Disabilities, 25, 19–37. doi:10.1016/j.ridd.2003.04.
003.
Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003). Functional analysis of problem behavior: A review.
Journal of Applied Behavior Analysis, 36, 147–185. doi:10.1901/jaba.2003.36-147.
Hassiotis, A., Robotham, D., Canagasabey, A., Romeo, R., Langridge, D., Blizard, R., & King, M. (2009).
Randomized, single-blind controlled trial of a specialist behavior therapy team for challenging
behavior in adults with intellectual disabilities. The American Journal of Psychiatry, 166,
1278–1285. doi:10.1176/appi.ajp.2009.08111747.
Individuals with Disabilities Education Act of 2004 (IDEA). (2004). Retrieved from http://idea.ed.gov/.
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional
analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197–209. doi:10.1901/jaba.1994.
27-197 (Reprinted from Analysis and Intervention in Developmental Disabilities, 2, 3–20, 1982).
Kennedy, C. H. (2005). Single-case designs for educational research. Boston, MA: Allyn & Bacon.
Kunnavatana, S. S., Bloom, S. E., Samaha, A. L., & Dayton, E. (2013). Training teachers to conduct trial-
based functional analyses. Behavior Modification, 37, 707–722.
Lloyd, B. P., Wehby, J. H., Weaver, E. S., Goldman, S. E., Harvey, M. N., & Sherlock, D. R. (2014).
Implementation and validation of trial-based functional analysis in public elementary school
settings. Journal of Behavioral Education. doi:10.1007/s10864-014-9217-5.
Loman, S. L., & Horner, R. H. (2014). Examining the efficacy of a basic functional behavioral assessment
training package for school personnel. Journal of Positive Behavior Interventions, 16, 18–30.
doi:10.1177/1098300712470724.


Lydon, S., Healy, O., O’Reilly, M. F., & Lang, R. (2012). Variations in functional analysis methodology:
A systematic review. Journal of Developmental and Physical Disabilities, 24, 301–326. doi:10.
1007/s10882-012-9267-3.
McCahill, J., Healy, O., Lydon, S., & Ramey, D. (2014). Training educational staff in functional
behavioral assessment: A systematic review. Journal of Developmental and Physical Disabilities,
26, 479–505. doi:10.1007/s10882-014-9378-0.
Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Gatti, S. L., Williams, K. L., & Duhon, G. J. (2005).
Treatment implementation following behavioral consultation in schools: A comparison of three
follow-up strategies. School Psychology Review, 34, 87–106.
Pence, S. T., Peter, C. C., & Giles, A. F. (2014). Teacher acquisition of functional analysis methods using
pyramidal training. Journal of Behavioral Education, 23, 132–149. doi:10.1007/s10864-013-9182-4.
Reimers, T., Wacker, D., Cooper, L. J., & de Raad, A. O. (1992). Acceptability of behavioral treatments
for children: Analog and naturalistic evaluations by parents. School Psychology Review, 21,
628–643. Retrieved from: http://www.nasponline.org/publications/spr/.
Rispoli, M., Burke, M., Hatton, H., Ninci, J., Zaini, S., & Rodriguez, L. (2015a). Training Head Start
teachers to conduct trial-based functional analyses of challenging behavior. Journal of Positive
Behavior Interventions, 17, 235–244. doi:10.1177/1098300715577428.
Rispoli, M., Ninci, J., Burke, M., Zaini, S., Hatton, H., & Sanchez, L. (2015b). Evaluating the accuracy of
results for teacher implemented trial-based functional analyses. Behavior Modification, 39, 627–653.
doi:10.1177/0145445515590456.
Scott, T. M., Liaupsin, C., Nelson, C. M., & Mclntyre, J. (2005). Team-based functional behavior
assessment as a proactive public school process: A descriptive analysis of current barriers. Journal
of Behavioral Education, 14, 57–71. doi:10.1007/s10864-005-0961-4.
Sigafoos, J., & Saggers, E. (1995). A discrete-trial approach to the functional analysis of aggressive
behaviour in two boys with autism. Australia & New Zealand Journal of Developmental
Disabilities, 20, 287–297.
Sprague, J. R., Flannery, B., O’Neill, R., & Baker, D. J. (1996). Effective behavioural consultation:
Supporting the implementation of positive behaviour support plans for persons with severe
challenging behaviours. Eugene: Specialised Training Program.
Sugai, G., Horner, R. H., Dunlap, G., Hieneman, M., Lewis, T. J., Nelson, C. M., & Ruef, M. (2000).
Applying positive behavior support and functional behavioral assessment in schools. Journal of
Positive Behavior Interventions, 2, 131–143. doi:10.1177/109830070000200302.
Sugai, G., Lewis-Palmer, T., & Hagan, S. (1998). Using functional assessments to develop behavior
support plans. Preventing School Failure, 43, 6–13. doi:10.1080/10459889809603294.
Symons, F. J., McDonald, L. M., & Wehby, J. H. (1998). Functional assessment and teacher collected
data. Education and Treatment of Children, 21, 135–160. Retrieved from: http://www.
educationandtreatmentofchildren.net/.
Van Acker, R., Boreson, L., Gable, R. A., & Potterton, T. (2005). Are we on the right course? Lessons
learned about current FBA/BIP practices in schools. Journal of Behavioral Education, 14, 35–56.
doi:10.1007/s10864-005-0960-5.
