SPCE 630 Exam 1 preparation

applied research
involves systematic investigation related to the pursuit of knowledge in practical realms or to
solve real world problems.

independent variable
The experimental factor that is manipulated; the variable whose effect is being studied.

dependent variable
The outcome factor; the variable that may change in response to manipulations of the
independent variable.

Skinner and Bijou


through their work, a system of behavior analysis has been developed that includes a
philosophy of behavior development, a general theory, methods for translating theory into
practice, and a specific research methodology.

internal validity
a study with adequate mechanisms for ensuring that outcomes are related to your intervention
procedures rather than extraneous factors is said to have this.

experimental control
studies with high levels of internal validity allow researchers to demonstrate this-- to show that
the experimental procedures (intervention) and only the experimental procedures are
responsible for behavior change.

functional relation
when experimental control is demonstrated, we have verified that there is this between the
independent and dependent variables. The change in the dependent variable is causally
related to the implementation of the independent variable.

evidence-based practice
refers to intervention procedures that have been scientifically verified as being effective for
changing a specific behavior of interest, under given conditions, and for particular participants.

Horner et al. (2005) definition of practice


Horner et al defined this as it relates to education as a "curriculum, behavioral intervention,
systems change, or educational approach designed to be used by families, educators, or
students with the express expectation that implementation will result in measurable educational,
social, behavioral, or physical benefit."

what constitutes evidence that supports implementation of a particular practice?


the research question should determine the research method (group, single case, or qualitative)
and design chosen. credibility of research findings is based on the rigor of the scientific method
employed and the extent to which the research design controls for alternative explanations.

reliability
Ability of a test to yield very similar scores for the same individual over repeated testings

What Works Clearinghouse (WWC)


The U.S. Department of Education's Institute of Education Sciences established this, which
informs stakeholders by providing a source of information regarding scientific evidence of
effectiveness for education practices that could be used to encourage making informed and data
based decisions and in turn improve child outcomes.

experimental studies
include a) descriptions of the target behavior(s), b) predictions regarding what impact the
independent variable will have on the dependent variable, and c) appropriate tests to see if the
prediction is correct.

what differentiates an experimental design study from a quasi-experimental design study


the extent to which the design controls for threats to internal validity- variables other than the
planned independent variable that could result in changes in the dependent variable

threats to internal validity


History
Maturation
Testing
Instrumentation
Diffusion of treatment
Regression towards the mean
Selection bias
Attrition

variables other than the planned independent variable that could result in changes in the
dependent variable.

experimental group design studies


participants are randomly assigned to a study condition (experimental group or control group;
intervention A or intervention B)

if the prediction proves true, it is said there is a functional relationship between independent and
dependent variables

quasi experimental group design studies


do not use random assignment of participants but other strategies to control for differences in
study group composition

SCD
this can be experimental, but randomization of participants is neither feasible nor necessary; randomization
only functions to control for differences between groups when the number of participants is
very large (N = 50 or greater)

correlational design studies


like experimental and quasi-experimental design studies, predict and describe the relation
between independent and dependent variables; in these studies, there is no manipulation of the
independent variable by the investigator. such studies represent a quantitative descriptive
research approach in which the relation between variables is established by using a correlation
coefficient

practices supported by correlational evidence are deemed less trustworthy than those
supported by experimental and quasi-experimental studies.

nomothetic
research approaches are generally based in the natural sciences and are characterized by
attempting to explain associations that can be generalized to a group given certain
characteristics
idiographic
approaches to research, common in the humanities, attempt to specify associations that vary
based on certain characteristics or contingencies present for the participant or case of interest

characteristics of group design


large number of individuals are divided and assigned to one of two or more study conditions. the
study includes a control condition, in which participants are not exposed to the independent
variable, and treatment condition, in which participants are exposed to the independent variable.
participants can also be divided into two treatment groups.

in some of these studies, more than two conditions can be compared.

critical component to consider when evaluating a group design study is how participants are
assigned to study conditions. the optimal method is random assignment of participants
(experimental study), but this is not always possible.

fundamentals of group design, experimental, and quasi


it is fundamental that groups of participants assigned to each study condition are equivalent on
key characteristics or status variables (age, gender, ethnicity, etc.) at the start of the group
study.

group research approach


most common research methodology used in some areas of behavioral science. these are well
suited for large scale efficacy studies or clinical trials in which a researcher's interest is in
describing whether a practice or policy with a specific population, on average, will be effective

qualitative research
refers to a number of descriptive research approaches that investigate the quality of
relationships, activities, situations, or materials.

case study approach


entails an in depth and detailed description of one or more cases, while ethnography refers to
the study of culture, and phenomenology is the study of people's reactions and perceptions of a
particular event or situation

qualitative research
researchers who use this approach collect data and describe themes or trends in the data
without offering a theory, an approach known as inductive analysis

studies using this approach and SCDs are similar.

the researcher is positioned as an insider who has close personal contact with participants and who is
both the data collector and data analyst.

inductive analysis
is the discovery and development of theory as it emerges from qualitative data.

validity
the extent to which a test measures or predicts what it is supposed to

SCD methodology
has a long tradition in the behavioral sciences and has become common in special education
and other fields. referred to as single subject research. it is based in operant conditioning, aba,
and social learning theory.

SCD
a quantitative experimental research approach in which study participants serve as their own
control

each participant is exposed to both a control condition, known as baseline, and an intervention
condition.

the target behavior is repeatedly measured within the context of one of several research
designs that evaluate and control for threats to internal validity. depending on the research
design used, baseline (A) and intervention (B) conditions are slowly alternated across time,
rapidly alternated, or the intervention condition is introduced in a time lagged fashion across
several behaviors, conditions, or participants.

each participant participates in both conditions of interest.

intervention conditions are continued until a performance criterion is met or until progress is
apparent via visual analysis of graphed data

baseline logic
a term sometimes used to refer to the experimental reasoning inherent in single subject
experimental designs; entails three elements: prediction, verification, and replication.

group design
posttest data are collected at an a priori specified time and are analyzed using statistical
methods comparing the average performance of participants assigned to one condition to the
average performance of participants assigned to other conditions.

SCD
use of visual analysis of graphic data for individual participants makes these studies ideal for
applied researchers and practitioners who are interested in answering research questions
and/or evaluating interventions designed to change the behavior of individuals

practice based evidence (PBE)


can be identified through research that occurs in applied settings, with typical resources; SCD
may be well suited to conducting this type of research

history
a threat to internal validity that refers to events that occur during an experiment, but that are not
related to planned procedural changes, that may influence the outcome. the longer the study the
greater the threat due to this

maturation
a threat to internal validity that refers to changes in behavior due to the passage of time. in a short
duration study (4-6 weeks), this is not likely to influence the analysis of the effectiveness of a
powerful independent variable. If a study is carried on for 4-6 months, there is a greater chance
that this may play a role.

testing
a threat to internal validity that requires participants to respond to the same test repeatedly,
especially during a baseline or probe condition. this may have a facilitative effect (improvement
in performance) or an inhibitive effect

instrumentation
a threat to internal validity that refers to concerns with the measurement system; these are of
particular concern in SCD studies because of repeated measurement by human observers who may
make errors.
avoid this by defining behaviors of interest, using appropriate recording procedures, and checking
reliability by using a secondary observer.

procedural infidelity
refers to the lack of adherence to condition protocols by study implementers.

selection bias
involves choosing participants in a way that differentially impacts the inclusion or retention of
participants in a study, when compared to the population of interest

attrition
refers to the loss of participants during the course of a study, which can limit the generality of the
findings, particularly if participants with certain characteristics are likely to drop out.

attrition bias
refers to the likelihood that participant loss (attrition) impacts the outcome of the study. when
attrition occurs, you should always explicitly report it, along with relevant information about why
it occurred, and include any data collected for that participant in your research report

sampling bias
occurs in SCD studies when researchers use additional, non explicated, reasons for including or
excluding potential participants.

multiple treatment interference


can occur when a study participant's behavior is influenced by more than one planned
treatment or intervention during the course of a study. an interactive effect may be identified
due to sequential confounding (the order in which experimental conditions are introduced to
participants may influence their behavior) or a carryover effect (when a procedure used in one
experimental condition influences behavior in an adjacent condition)

data instability
refers to the amount of variability in the data (dependent variable) over time

cyclical variability
a specific type of data instability that refers to a repeated and predictable pattern in the data
series over time.

regression to the mean


refers to the likelihood that, following an outlying data point, data will revert to
levels closer to the average value.

adaptation
refers to a period of time at the start of an investigation in which participants' recorded behavior
may differ from their natural behavior due to the novel conditions under which data are collected

Hawthorne effect
refers to the participants' observed behavior not being representative of their natural behavior as
a result of their knowledge that they are participants in an experiment.

well designed applied research studies


allow for a) systematic study of behaviors in the typical environments (baseline measures), b)
evaluation of a new intervention or innovation and c) replication of findings from other studies
under similar and novel conditions

institutional review board (IRB)


atrocities such as those exposed by the Nuremberg War Crime Trials and the Tuskegee Study led to
the development of a number of regulations, all designed to ensure the highest level of protection for
human participants in research studies, including the development of committees responsible for
reviewing proposed research studies, known as this

The Belmont Report


focused on three overarching principles to improve protection of human participants in applied
research studies 1) respect for persons 2) beneficence and 3) justice.

respect for persons


highlighted the importance of voluntary involvement in research and explaining the purpose of a
study and corresponding procedures, as well as protection of vulnerable populations

beneficence
focused on the rules of do no harm and maximize possible benefits and minimize possible
harms

undue influence
convincing participants to enroll in a study when they would not otherwise do so.

minimal risk
is considered to be the same risk that a person would encounter in daily life or during routine
physical or psychological examinations

Baer, Wolf, and Risley (1968) on aba and their judgment of technological writing:
the best test for evaluating a procedure description as technological is probably to ask whether
a typically trained reader could replicate that procedure well enough to produce the same
results, given only a reading of the description

literature review
has three main functions: a) articulating what is known and not known about a topic b) building a
foundation and rationale for a study or series of studies and c) improving plans for future studies
by identifying successful procedures, measures, and designs used by other investigators and
detecting issues and problems they encountered

these are used as introductions for study reports and as stand-alone products such as review
articles or chapters in books, theses, or dissertations

several steps involved when reviewing the literature


a) selecting a topic b) narrowing that topic c) finding the relevant sources d) reading and coding
relevant reports e) sorting the sources with sound information from those with less trustworthy
information and f) organizing the findings and writing the review.

peer review
impartial judges have read and evaluated the study and concluded it was worthy of publication

exhaustive search
a search that continues until the test item is compared with all items in the memory set

systematic review
defined as the attempt to make the research summarizing process explicit and systematic to
ensure the author's assumptions, procedures, evidence, and conclusions are transparent

PRISMA
A set of items designed to help authors improve the reporting of systematic reviews and
meta-analyses. PRISMA is an acronym that stands for Preferred Reporting Items for Systematic
Reviews and Meta-Analyses.

PRISMA statement
consists of a 27 item checklist and a four phase flow diagram to document each step of the
systematic process, including a) identifying potential articles b) screening articles for possible
inclusion, c) assessing eligibility of potential articles and d) including articles for further analysis

two factors considered when writing research questions


the investigator's interest in the question, and the feasibility or practicality of conducting the
proposed study.

function of research questions


single case experimental investigators ask experimental questions rather than state and test
hypotheses. research is viewed as a way to build an explanation of why behavior occurs as it
does in nature rather than evaluating a theory. the function of research questions is to focus the
investigator on the purpose or goal of the study

research questions
should have three elements: participants, independent variable, and dependent variable measures.
all research questions should have these three elements, which can vary: a) Does X (independent
variable) influence Y (dependent variable) for Z (participants)?

should be stated in a directional (falsifiable) form.

studies may have more than one question.

demonstration question
does it work? they follow the form: what relations exist between an independent variable and a
behavior for a given set of participants?

parametric question
does more or less of this procedure work better? these focus on the amount of the independent
variable and the effect of those various amounts on behavior.

component analysis question


third type of research question. does it work better with some or all of its parts? these are an
acknowledgement that many of our procedures and interventions have many different parts.

comparative question
the final type of research question is, does one procedure work better than another procedure?

research proposals
written to communicate to others your plans about conducting a study. often submitted to
students' research committees to evaluate whether the studies are worthy of being a master's
degree thesis or a doctoral dissertation and to determine what changes are needed in the plans

introduction section
three tasks in this section. first is to introduce the topic to the readers, which starts with a
general statement. the second is to provide a summary of existing literature while building a
rationale for the study. the last paragraph of this section states the purpose and lists the
research questions.

method section
the main body of research proposals; it is a detailed plan of the study being proposed. it should
be written in the future tense

participants
when the proposal is written, your participants are not yet known. demographic characteristics
such as gender, age, diagnoses, race, ethnicity, and socioeconomic status need to be reported.
this should include the types and intensity of services they are receiving. identify the measures
(tests) used to describe your participants' academic and functional performance. identify the
inclusion and exclusion criteria and how those criteria will be measured

setting
should describe the location of all experimental procedures and conditions that are planned,
including where primary, secondary, and generalization measures will be collected; where
assessment procedures will be conducted; and where the independent variable will be implemented

procedures
this section should include a detailed description of study procedures, beginning with how
university and district approvals are to be secured. should include a step by step discussion of
how the study will take place. each experimental condition should be described as a sub
section.

materials
this should include a description of the materials, supplies, and equipment used.

response definitions and measurement procedures


this section is a complete description of the dependent variable. each behavior being measured
in the study should be defined.

procedural fidelity
should describe how the implementation of the procedures will be measured

social validity
this should describe how the social validity of the study will be assessed, including what aspects
of social validity will be assessed, when the assessment will occur, and procedures used in that
assessment

experimental design
includes a paragraph describing the experimental design you will use.

data analytic plan


should be presented at the end of the method. this section should have two sub sections:
formative evaluation and summative evaluation.

formative evaluation: describe how often inter observer agreement will be assessed and what
levels will be considered acceptable, what actions will be taken if the agreement estimates are
unacceptable, how often procedural fidelity assessments will occur, what levels of procedural
fidelity will be considered acceptable, what actions will be taken if the procedural fidelity data
are too low, how the data on the primary dependent variables will be graphed, and how the
graphed data will be analyzed to make decisions about changing experimental conditions

summative evaluation
should describe how the inter observer agreement data will be summarized and presented in
your final report, how the procedural fidelity data will be summarized and presented in your final
report, etc.

manuscript
report for submission to a journal for review and possible publication

results section
often framed using the research questions. describe how the data paths changed, or did not
change, with the experimental manipulation. contains figures depicting participants' data across
experimental conditions.

discussion
describe the relevance of the study's data. brief section. connect your findings to the literature.

dissemination
refers to the publication of a research report in a peer reviewed journal, or the presentation of
results at a professional conference

replication
an investigator's ability to repeat the effect an independent variable has on the dependent
variables. it is important in all research paradigms.

three primary reasons for replication


1. assess the reliability of findings (internal validity)
2. assess the generality of findings (external validity)
3. look for exceptions

direct replication
repetition of a given experiment by the same experimenter... accomplished either by performing
the experiment again with new subjects or by making repeated observations on the same
participant under each of several conditions

three primary ways to ensure within study replication in SCD research


1. sequential introduction and withdrawal designs
2. time lagged designs
3. rapid iterative alternation

sequential introduction and withdrawal designs


include repetition of the basic A-B comparison within a single participant (A-B-A-B)

time lagged designs


include the repetition of the basic A-B comparison across a set of three or more participants,
behaviors, or contexts

rapid iterative alternation


include repetition of an A-B comparison, with single session replication and comparisons

direct intra-participant replication


intra-subject. refers to repeating the experimental effect with the same participant more than
once in the same study. During A1, the data pattern is used to predict the data pattern if there is
no change in experimental condition.

B1 data pattern affirms that the independent variable may have had an effect on the behavior

A2 data pattern verifies there is a cause effect relation between independent and dependent
variables at the simplest level

B2 data pattern replicates or repeats the effect that the independent variable has on the
dependent variable, thus increasing confidence that there is a functional relation between
independent and dependent variables

inter-participant direct replication


repeating the experimental effect with different participants. determine whether uncontrolled
and/or unknown variables might be powerful enough to prevent successful replication. repeating
the effects of an intervention with different groups of individuals by comparing measures of
central tendency.

variables to consider in determining whether three replications are an adequate number


baseline data stability, consistency of effect with related findings, magnitude of effect, and
adequacy of controlling threats to internal validity.

clinical replication
administration of a treatment package containing two or more distinct treatment procedures by
the same investigator or group of investigators... administered in a specific setting to a series of
clients presenting similar combinations of multiple behavioral and emotional problems, which
usually cluster together

systematic replication
demonstrates that the findings can be observed under conditions different from those prevailing
in the original experiment. when a researcher carries out a planned series of studies that
incorporate systematic changes from one study to the next and identifies them as a replication
series.

measurement
defined as the systematic and objective quantification of objects, events, or behaviors according
to a set of rules

reversible dependent variables


behaviors that are likely to revert to baseline levels if an intervention is removed.

non reversible dependent variables


changes may be likely to maintain in the absence of an intervention condition (academic
behaviors)

Reversible designs
A-B-A-B
multiple baseline
changing criterion
(demonstration)

alternating treatments
multitreatment
simultaneous treatments
multielement
(comparative)

non reversible designs


multiple probe
(Demonstration)
adapted alternating treatments
parallel treatments
repeated acquisition
(comparative)

direct, systematic observation and recording (DSOR)


most common type of dependent variable assessment in SCD research. humans watch their
participants and measure what they do, in a rule bound and systematic fashion.

continuous recording
requires counting or timing each behavior occurrence

non-continuous recording
involves sampling behavior occurrence in order to estimate the actual count or time. involves
selecting an interval length, determining the rules to code whether or not a behavior occurrence is
scored for the interval, and using the rules to estimate behavior occurrence

event recording
measurement procedure for obtaining a tally or count of the number of times a behavior occurs

timed event recording


involves denoting that an event has occurred and noting the time of the event

free-operant
events that are free to occur at any time

trial based
events that have specific antecedent conditions. these counts are often transformed to a
percentage of opportunities.

percentage
number of behaviors divided by total number of opportunities, multiplied by 100

rate
number of occurrences measured within a specific period of time. calculated as number of
occurrences divided by duration of the measurement occasion.
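
To make both transformations concrete, here is a minimal Python sketch with made-up numbers (the function names and values are illustrative, not from the source material):

def percentage_of_opportunities(occurrences, opportunities):
    # number of behaviors divided by total number of opportunities, multiplied by 100
    return occurrences / opportunities * 100

def rate_per_minute(occurrences, session_minutes):
    # number of occurrences divided by the duration of the measurement occasion
    return occurrences / session_minutes

# Example: 7 correct responses across 10 trials in a 20-minute session
print(percentage_of_opportunities(7, 10))  # 70.0 (percent of opportunities)
print(rate_per_minute(7, 20))              # 0.35 (responses per minute)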

time per occurrence


measured by using a timing device to count the number of seconds of occurrence for each
instance of the behavior

total time recording


involves starting a timing device at each behavior onset and stopping the timing device at each
behavior offset, without recording the time for each occurrence

transforming duration
dividing the number of seconds of behavior occurrence by the total number of seconds in a
measurement occasion (600 seconds in a 10 minute session), then multiplying by 100. if 60 seconds of
off task behavior occurred in a ten minute session, you would compute (60 / 600) x 100 = 10%.

partial interval recording


most widely used interval based system. the observer records an occurrence if the target
behavior occurs at any time during the interval.

whole interval recording


least widely used interval based recording system. the observer records an occurrence if the
target behavior occurs for the entire duration of the interval.

momentary time sampling


widely used in SCD research. the observer records an occurrence if the target behavior is
occurring at the moment the interval ends.
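
The three interval-based scoring rules can be contrasted in a minimal Python sketch (hypothetical data; each interval is represented as a list of booleans, one per second, marking whether the behavior was occurring):

def partial_interval(on):
    return any(on)    # scored if the behavior occurs at ANY time during the interval

def whole_interval(on):
    return all(on)    # scored only if the behavior occurs for the ENTIRE interval

def momentary_time_sample(on):
    return on[-1]     # scored only if the behavior is occurring as the interval ENDS

# Example: a 10-second interval with behavior occurring during seconds 4-6 only
interval = [False] * 3 + [True] * 3 + [False] * 4
print(partial_interval(interval))       # True  (partial interval tends to overestimate)
print(whole_interval(interval))         # False (whole interval tends to underestimate)
print(momentary_time_sample(interval))  # False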

construct validity
refers to whether your measurement procedures accurately reflect the concept you are
interested in measuring.

observer bias
tendency of observers to see what they expect to see

observer drift
Any unintended change in the way an observer uses a measurement system over the course of
an investigation that results in measurement error

blind observers
people who do not know what the research question is (to reduce observer bias)

Interobserver Agreement (IOA)


The degree to which two or more independent observers report the same observed values after
measuring the same events

discrepancy discussion
after data are plotted, any differences between observers should be discussed and a consensus
should be reached for each instance.

percentage agreement
simple calculation that is intuitive and widely used. influenced by chance agreement, behavior
rates, and measurement system used.

point by point agreement


when trial based or interval based measurement is conducted, agreement can be calculated
using trial by trial comparisons

occurrence agreement
code agreements and disagreements only for intervals in which at least one observer noted an
occurrence.

non occurrence agreement


using only trials in which at least one observer noted that a behavior did not occur

gross agreement
smaller measurement divided by larger measurement x 100
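
A minimal Python sketch (with invented observer records) showing how these agreement indices can be computed from two observers' trial or interval data:

def gross_agreement(count_1, count_2):
    # smaller total divided by larger total, multiplied by 100
    return min(count_1, count_2) / max(count_1, count_2) * 100

def point_by_point_agreement(obs_1, obs_2):
    agreements = sum(a == b for a, b in zip(obs_1, obs_2))
    return agreements / len(obs_1) * 100

def occurrence_agreement(obs_1, obs_2):
    # only intervals in which at least one observer scored an occurrence
    pairs = [(a, b) for a, b in zip(obs_1, obs_2) if a or b]
    return sum(a == b for a, b in pairs) / len(pairs) * 100

def nonoccurrence_agreement(obs_1, obs_2):
    # only intervals in which at least one observer scored a nonoccurrence
    pairs = [(a, b) for a, b in zip(obs_1, obs_2) if not (a and b)]
    return sum(a == b for a, b in pairs) / len(pairs) * 100

observer_1 = [True, True, False, False, True, False]
observer_2 = [True, False, False, False, True, False]
print(gross_agreement(sum(observer_1), sum(observer_2)))  # 66.7 (2 vs. 3 total occurrences)
print(point_by_point_agreement(observer_1, observer_2))   # 83.3 (5 of 6 intervals agree)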

theory of change
refers to a conceptual framework that describes why an intervention should result in changes in
a given target behavior. can use the hypothesized change to formulate research questions and
clarify goals of the study

procedural fidelity
the degree to which procedures of all experimental conditions are implemented as intended

control variables
same across conditions. are always either present or absent to the same degree in every
condition (including baseline), and will not change when the independent variable is
manipulated

implementation fidelity
extent to which experimenters trained implementers as planned.

treatment integrity
The extent to which the independent variable is applied exactly as planned and described and
no other unplanned variables are administered inadvertently along with the planned treatment
(Source: CHH, 2 Ed).

formative analysis
evaluation research focused on the design or early implementation stages of a program or
policy

self reports
low validity for measuring fidelity because implementers typically overestimate accuracy of their
own behaviors

types of measurement used for procedural fidelity


checklists, self reports, and direct systematic observation

summative analysis
of procedural fidelity increases internal validity of the study and can be used to describe
variability in the dependent variable.

evaluation of social validity
evaluation of social significance should be completed by a variety of stakeholders. direct
consumers, indirect consumers, members of the immediate community, members of the
extended community

Wolf recommended three levels of social validation


goals, procedures, and outcomes

goals
were the goals socially important?

procedures
were the procedures socially acceptable?

outcomes
are the outcomes socially significant?

typical subject measures (assess social validity)


used to gather information from different stakeholders related to their perspectives on social
importance of goals, procedures, and outcomes of an intervention. can use interviews,
questionnaires, and rating scales.

measures less subject to bias (assess social validity)


normative comparisons, blind ratings, measurement of maintenance or sustained use,
participant preference measurement

normative comparisons
the participants' targeted behavior is compared to a normative or typical group whose behavior
is considered acceptable.

maintenance or sustained use


measure used to evaluate whether procedures and outcomes of an intervention continue after the
research is completed

blind ratings
determine whether participants' behavior is rated as different before and after intervention or
during baseline versus intervention conditions by people who are unaware of the condition in
effect for the sessions they watch.

participant preference
measured using rating scales or post intervention questionnaires.

graphic displays
assist in organizing data during the data collection process, which facilitates formative
evaluation. they provide a detailed summary and description of behavior over time, which allows
readers to analyze the relation between independent and dependent variables.

well constructed graph communicates


sequence of experimental conditions and phases, time spent in each condition, independent
and dependent variables, experimental design, and relations between variables

applied researchers use three basic types of graphs


line graphs, bar graphs, and cumulative graphs

abscissa
x-axis

ordinate
y-axis

origin
A fixed point from which coordinates are measured.

tic marks
These are placed on the horizontal axis with equal spacing between them.

axis labels
tells what information should be found on the axis

condition
baseline, intervention. these should be separated on graphs with solid lines

phase
within condition variations. should be separated on graphs with dotted lines.

condition labels
brief labels printed along the top of the graph that identify the condition in effect during each part of the study

line graphs
represent the most commonly used graphic display, both in SCD research and more broadly.

when plotting time series data, use this.

bar graphs
display discrete data and comparative information. height of bar indicates the magnitude of the
data

when plotting summative data, use this.

cumulative graphs
shows the sum of measurement over time
-can only have a positive slope or a slope of 0 (aka graph can only go up or stay flat, it
CANNOT go back down)
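
A quick sketch (made-up session counts) of how values for a cumulative graph are derived; each plotted point is the running total of all measurements so far, so the line can only rise or stay flat:

from itertools import accumulate

session_counts = [3, 5, 0, 2, 4]               # number of responses in each session
cumulative = list(accumulate(session_counts))  # running totals to plot
print(cumulative)  # [3, 8, 8, 10, 14] -- flat where a session's count was 0, never decreasing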

semi-logarithmic charts
used when absolute changes in behavior are not the focus of research. absolute behavior
changes are documented using equal interval graphs, where amounts are equal between tic
marks on the graph

scale break
used when the entire abscissa or ordinate scale is not presented.

blocking
a procedure for condensing data. used to reduce the number of data points plotted on a graph.
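
A minimal sketch (assuming a block size of three sessions, chosen arbitrarily for illustration) of how blocking condenses a data series by averaging within fixed-size blocks before plotting:

def block_data(data, block_size):
    # average each consecutive block of data points
    return [sum(data[i:i + block_size]) / len(data[i:i + block_size])
            for i in range(0, len(data), block_size)]

daily_scores = [4, 6, 5, 7, 9, 8, 10, 12]
print(block_data(daily_scores, 3))  # [5.0, 8.0, 11.0] -- eight points condensed to three
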
tables
data in tables include participant demographics, condition variables, response definitions with
examples and non examples, and secondary data.

science
systematic approach for seeking and organizing knowledge about the natural world.

aba
understanding socially significant behaviors that are meaningful to human beings

three levels of understanding that come from scientific investigations


description, prediction, and control

descriptive studies
help us to suggest a hypothesis or research question for inquiry. scientists observe a
phenomenon and describe their observations. data are a collection of facts about an
observed event that can be quantified, classified, or examined for possible relationships.

prediction
second level of understanding of scientific inquiry. when repeated observations show that two
variables covary with each other (correlation), then when one thing occurs, we can predict that
something else will occur.

control
allows us to determine functional relationships. highest level of understanding of science.

attitudes of science
determinism- assumes that the world is a lawful and orderly place.

empiricism- objective observation.

experimentation- we systematically control and manipulate specific variables and observe what
happens

replication- how scientists can determine how reliable or useful their findings are. repeating the
experiment or independent variable conditions within an experiment several times.

parsimony- concept of simplicity. before committing to an explanation, we want to rule out all of
the simple and logical explanations first

philosophic doubt- always going to question the facts.

For which of the following is APA format used (select all that are correct)?
case studies
term papers
empirical studies
literature reviews
research reports
theoretical articles
dissertations

The main sections that APA research papers should be divided into include:
Title page, abstract, text, references, footnotes, tables, figures, appendices

Which page should be labeled as page 3?


beginning of text

How should Level 1 heading be formatted?


Centered Bold, Title Case Heading. Text begins as a new paragraph

To reduce bias in language, APA suggests referring to people using:


what they want to be called

Which of the following is the correct format for a parenthetical citation with four authors for the
first parenthetical citation in text:

(Bradley et al., 2008)

What are the elements of a reference list?


Author, Date, Title, Source

The first in-text citation for a work by three authors should appear as:
Wappes et al. (2019)

research paper
primary source in which the authors conducted an original study, collected raw data, and are
reporting the data to us. sections: abstract, intro, methods, results, discussion.

qualitative research, experimental research, correlation studies

review articles
secondary sources where the author did not conduct original studies but summarized the findings
of several researchers. useful when you are trying to understand a topic.

meta analysis
similar to a review article because you're looking at the existing research, but there is a statistical
analysis that combines the results of many different research articles.

whole interval recording


record if the target behavior occurs throughout entire interval. underestimates behavior. longer
observation periods increase the likelihood that behavior will be underestimated

partial interval recording


record if the target behavior occurs at any time during the interval. overestimates behavior. can
underestimate high frequency behavior.

momentary time sampling


record if the target behavior occurs at the moment that each interval ends. when intervals are
over 2 min, both overestimates and underestimates behavior. when intervals are under 2 min,
closely approximates continuous measurement
do not use when behaviors are low frequency or short duration.

importance of treatment integrity


evaluating program effectiveness
evaluating student outcomes
improved outcomes
minimizing threats to internal validity

resources that influence treatment integrity
resources, competing initiatives, agency culture, complexity of the treatment, therapist
knowledge, perceived effectiveness, perceived benefit of the intervention, compatibility with
therapist's current practice, collaboration

measurements of treatment integrity


direct observation, self report, permanent product review

strategies to increase treatment integrity


performance feedback, direct training, negative reinforcement

A 7-year old boy hits his head with his hand only when presented with a reading task. There is a
discrete start and stop to this behavior. His one-on-one aide is tasked with collecting data for this
behavior. Which measurement system is most appropriate to use with this behavior? Only pick
one.

event recording

A 3-year old girl tantrums every time her mother drops her off at her grandmother's house
before she goes to work. The mother is concerned about how long the behavior is occurring and
asks the grandmother to take some data for her. Which measurement system is most
appropriate to use with this behavior? Only pick one.

duration recording

A 15-year old girl is often late for curfew. Her parents decide to start tracking how late she is
from her missed curfew so that they can reduce the time that she stays out past her curfew.
Which measurement system is most appropriate to use with this behavior? Only pick one.

latency recording

A child's whining behavior occurs at high rates throughout the day during various activities and
includes different vocalizations including understandable phrases such as "I don't want to." and
crying without tears. The child's aide at school and the parents decide to track this behavior for
30 minutes a day. They divided the observation period into 10-second intervals. If the child
engages in whining during any part of the interval the behavior is coded. Which measurement
system is the child's aid using? Only pick one.

partial interval recording

Wendy a 6-year old girl screams often at home and wakes up her baby sister. Her parents are
looking to track this behavior and they are most concerned about the behavior that is louder
than an average noise level. Which measurement system is most appropriate to use with this
behavior? Only pick one.

intensity recording

A 7-year old boy bangs his head on his desk, wall, or another hard surface throughout the day
and during multiple activities. He uses different levels of force. His parents are tracking this
behavior because they are concerned about the bruises that he acquires as a result of this
behavior. Which measurement system is most appropriate to use with this behavior? Only pick
one.

intensity recording

Kai is a 4th-grade student who was referred for a spelling intervention. The consultant has put a
cover-copy-compare intervention in place. Each week Kai completes a spelling test where he is
asked to write 10 spelling words that the consultant used in a sentence. The consultant saves
this spelling test as a lasting artifact of the intervention and to progress monitor the effects of the
intervention over time. Which measurement system is the consultant using? Only pick one.

permanent product recording

Juan a 15-year old boy engages in public masturbation. This behavior occurs throughout the
day during 1:1 therapy, small group, and lunch. His BCBA has suggested collecting data to
inform treatment recommendations, but sometimes the therapist has other clients to attend to.
The BCBA has provided the therapist with a MotivAider, which is set for 30-second intervals. At
the end of the 30-second interval, the therapist records whether Juan is engaging in public
masturbation. Which measurement system is the therapist using? Only pick one.

momentary time sampling


A mother wants to know how long it takes for her child to begin to follow a direction after she
places a demand. She starts a timer after issuing the demand and records how long it takes for
the child to initiate compliance. Which measurement system is the mother using? Only pick one.

latency

A teacher in a preschool classroom is tracking Jonah's aggression towards peers. Jonah's
aggression only occurs during free play times and the teacher often has extra support during
these times. Which measurement system is most appropriate to use with this behavior? Only
pick one.

event recording

A teacher is concerned about her student's humming behavior in the classroom. The behavior
doesn't have a clear start and stop time and seems to present differently depending on the task.
Sometimes, the humming is in a low pitch and other times it is very loud. It also can be in the
form of a song under some circumstances and in others, it doesn't seem to be recognizable.
The parent is concerned because the teacher reported that it is distracting other students in her
classes. The teacher will collect data for 45 minutes each day. The observation is divided into
15-second intervals. The teacher will record the behavior as occurring when it has occurred for
the duration of the interval. Which measurement system is the teacher using? Only pick one.

whole interval recording

A BCBA is trying to determine if Thomas has mastered a matching task he has been taught.
She asks the RBT to collect data for this behavior during her session with Thomas. The RBT will
record each correct response. Which measurement system is the RBT using? Only pick one.

event recording

experimental control
Two meanings: (a) the outcome of an experiment that demonstrates convincingly a functional
relation, meaning that experimental control is achieved when a predictable change in behavior
(the dependent variable) can be reliably produced by manipulating a specific aspect of the
environment (the independent variable); and (b) the extent to which a researcher maintains
precise control of the independent variable by presenting it, withdrawing it, and/or varying its
value, and also by eliminating or holding constant all confounding and extraneous variables.

scored interval IOA


# of agreements (where they both agree that it occurred) / # of scored intervals (when at least 1
person scores the behavior) x 100

unscored interval IOA


# of agreements (they both agree the behavior did not occur) / # of unscored intervals (when at
least 1 person did not score the behavior) x 100

interval by interval IOA


= (number of intervals of agreement / total number of intervals) * 100%
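
A brief worked example (invented interval records) applying the three formulas above: suppose two observers each code 10 intervals; both score an occurrence in 6 intervals, observer 1 alone scores an occurrence in 2 more, and both score the remaining 2 intervals as nonoccurrences. Then scored interval IOA = 6 / (6 + 2) x 100 = 75%, unscored interval IOA = 2 / (2 + 2) x 100 = 50%, and interval by interval IOA = (6 + 2) / 10 x 100 = 80%.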

measures of central tendency


mean, median, and mode
ways to describe our data
used to compute various statistics

frequency distribution
or histogram; a graph that plots the values of observations along the horizontal axis and how often
each value occurs along the vertical axis. useful for assessing the properties of the distribution of scores.

mode
score that occurs most frequently in a data set. it's the tallest bar that you will see.

median
measure of central tendency. middle score when scores are ranked in order of magnitude.
with an odd number of scores, organize the data in ascending order from smallest to biggest, add 1
to the total number of scores, then divide by 2; that number gives the position of the median score.

mean
measure of central tendency. the average. add all the scores in distribution and divide by total
number of scores. mean can be heavily influenced by outliers.
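
A minimal Python sketch (hypothetical scores) computing the three measures with the standard library statistics module:

import statistics

scores = [2, 3, 3, 5, 7, 8, 40]   # note the outlier of 40
print(statistics.mode(scores))     # 3 -- the most frequent score
print(statistics.median(scores))   # 5 -- the middle score when ranked
print(statistics.mean(scores))     # 9.714... -- pulled upward by the outlier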

population
all of the members belonging to the group of interest

sample
a smaller subset of the population. those that make up the sample have the same
characteristics as the population.
data gathered about the sample is used to make inferences about the population

sampling is more efficient and cost and time effective. potential of greater accuracy.

steps of sampling
define the population
define sample
determine sample size needed
determine your method for pulling sample

simple random sampling


the basic sampling technique where we select a group of subjects for the sample. each
individual is chosen by chance and each member of the population has an equal chance of
being included in the sample.
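
A minimal sketch (fictional roster and sample size) of simple random sampling with Python's standard library, where every member of the population has an equal chance of selection:

import random

population = [f"student_{i}" for i in range(1, 501)]  # 500 members of the group of interest
sample = random.sample(population, k=50)              # 50 members chosen by chance, without replacement
print(len(sample), sample[:3])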

sampling error
population specification error, sampling frame error, selection error, non-response error.

controls for sampling error- careful design, large samples

hypothesis testing
the theory, methods, and practice of testing a hypothesis by comparing it with the null
hypothesis. the null hypothesis is only rejected if the probability of obtaining the findings under it
falls below a predetermined level, in which case the hypothesis being tested is said to have
significance.

steps of hypothesis testing


state the null and alternative hypothesis
select appropriate test and set level of significance (alpha level)
collect data
calculate the test statistic and corresponding p value
construct acceptance or rejection regions, critical value
draw conclusion
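
These steps can be illustrated with a minimal Python sketch (simulated scores; scipy is assumed to be available) using an independent-samples t test at alpha = .05:

from scipy import stats

treatment = [14, 16, 15, 18, 17, 19, 16]  # hypothetical posttest scores, treatment group
control = [12, 13, 11, 14, 12, 13, 15]    # hypothetical posttest scores, control group

alpha = 0.05                               # level of significance set in advance
t_stat, p_value = stats.ttest_ind(treatment, control)

if p_value <= alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: reject the null hypothesis")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: fail to reject the null hypothesis")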

Null Hypothesis (H0)


the hypothesis that states that there is no significant difference between specified samples. any
difference observed is due to chance.

Alternative Hypothesis (Ha)


contrary to the null. assumes that sample observations are influenced by some non random
cause. the differences observed are not by chance.

p-value
the probability of getting your value, or a value that is more extreme, if the null hypothesis is true
threshold is typically set at .05
a small p (less than or equal to .05) is said to be significant
a significant finding indicates that there is strong evidence that our findings were not by chance,
and we should then reject the null hypothesis in favor of the alternative hypothesis
a small p (less than or equal to .05) will fall in the rejection region

test statistic
the sample statistic one uses to either reject H0 (and conclude Ha) or not to reject H0

critical values
the values of the test statistic that separate the rejection and non rejection regions

rejection region
the set of values for the test statistic that leads to rejection of Ho

non rejection region


the set of values not in the rejection region that leads to non rejection of H0

Type 1 error
occurs when we incorrectly reject a true null hypothesis. this leads one to conclude that a
supposed effect or relationship exists when it doesn't. a false positive.

probability of making this error is set by the rejection criterion (alpha)
if alpha = .05, you accept a 5% chance of making this type of error when the null hypothesis is true

Type II error
occurs when we fail to reject a false null hypothesis. this leads one to conclude that there was
not an effect or relationship, when there was. a false negative.

statistical considerations needed to determine sample size of study


power- the ability a test has to correctly reject a false null hypothesis, avoiding a type II error
effect size- how large the effect of one variable is on the other variable; the degree to which the
null hypothesis is false.
rejection criterion (alpha level)

A researcher was interested in strategies to increase social participation in students with ASD.
Five adolescents with ASD used scripts to learn how to appropriately ask to join an activity.
What research design methodology should be used?
single case research design

A researcher is consulting with 5 therapists who are learning to use prompt fading procedures.
The researcher wanted to evaluate if Behavioral Skills Training is effective for increasing correct
use of prompt fading procedures.
What research design methodology should be used?
single case research design

A researcher wanted to evaluate the use of non-contingent liquid presentation to decrease
rumination in four individuals with developmental disabilities.
single case research design

A researcher was interested in strategies to increase physical activity in students with ASD. Five
adolescents with ASD used Exercise Buddy (a commercially available app) to learn how to
complete four different exercises (i.e., sit-ups, bicep curls, jumping jacks, tricep extensions)
correctly.
single case research design

A researcher wanted to know whether teachers were more likely to implement reading
interventions with treatment fidelity if they were trained directly with the student who was
referred or with the researcher role-playing the part of the student.
group research design

Students in a graduate ABA program were split into two groups, each receiving a different exam.
One group received a traditional written exam, while the other received a practical exam. After
the course, the final grades of the two groups of students were compared.
group research

Researchers examined the preferences of children for three pre-session therapeutic conditions.
One group was exposed to pre-session pairing prior to the onset of discrete-trial instruction, the
second group was exposed to free play prior to discrete-trial instruction, and the third group
exposed to immediate onset of discrete-trial instruction. What research design methodology
should be used?
group research

A consultant wanted to know whether immediate feedback or delayed feedback would be more
effective in increasing a teacher's use of praise. She conducts the study at two elementary
schools. In School A all teachers have immediate feedback. In School B all teachers have
feedback that is delayed by one week. What research design methodology should be used?
group research design

Researchers want to decrease off-task behavior in a group of 6 students diagnosed with ADHD.
They would like to try out prompting, incentives, and praise individually and together. What
research design methodology should be used?
single case research design

The curriculum specialist for a large school district wanted to evaluate the use of a new math
curriculum for elementary school students. Elementary schools were assigned to one of two
conditions: One group of schools used the existing math curriculum; another group of schools
used the new curriculum. Scores on the state level math exam were compared.
group research design

A researcher evaluated the use of simplified habit reversal to decrease trichotillomania in four
adolescents with developmental disabilities.
SCRD

A clinician was referred five children between the ages of 3-5 who frequently leave their room
and have severe tantrums at bedtime. The researcher implemented the use of bedtime passes
to decrease these behaviors. The researcher measured the duration of problem behavior.
SCRD

replication
ability to repeat the effect an independent variable has on the dependent variable. more
common in SCRD. assess the reliability of findings, assess the generality of findings, look for
exceptions.

direct replication (within study)


sequential introduction and withdrawal designs (ABAB)
time lagged designs (multiple baseline designs)
rapid iterative alternation (alternating treatment design)

direct intra-participant replication


repeating the experimental effect with the same participant more than once in a study
prediction (A1)
affirmation (B1)
verification (A2)
replication (B2)

inter-participant direct replication


repeating the experimental effect with different participants
multiple baseline design across participants
aim for 3 opportunities for replication

direct replication guidelines


Investigators, settings, materials, instructional arrangements, formats, etc. should remain
constant across replication attempts. Dependent variable should be similar across participants.
Participants should have similar abilities related to functional inclusion criteria characteristics
(i.e., status variables) that might impact the effectiveness of the intervention. The IC should be
the same across participants. Three direct replications are the minimum acceptable number to
determine a functional relationship.

clinical replication
A form of direct replication. Administration of a treatment package containing two or more distinct
treatment procedures by the same investigator or group of investigators, administered in a
specific setting to a series of clients presenting similar combinations of multiple behavioral and
emotional problems that usually cluster together.

systematic replication
Addresses the issue of generality: findings can be observed under conditions different from those in
the original experiment. A planned series of studies incorporating systematic changes from one study
to the next that is identified by the researcher as a systematic replication. Varying settings, behavior
change agents, behavior disorders, or combinations of the previous.

A guess about what the data would show if there is no change in the experimental condition(s).

prediction

The data trend show the independent variable may have had an effect on the behavior.

affirmation

The data trend shows there is a cause-effect relation between the independent and dependent
variables at the simplest level.

verification

The data trend repeats the effect that the independent variable has on the dependent variable,
increasing the researcher's confidence that a functional relation exists between the independent
variable and the dependent variable.

replication

applied research
Acquire knowledge on practical applications of theory. Develop better understanding of how
theory applies in "real world" settings. Focus on development and application of an
intervention. Solve practical problems of the modern world, rather than acquire knowledge for its
own sake. Goal: to improve the human condition.

three defining characteristics of SCRD


An individual "case" is the unit of intervention administration and data analysis. Single
participantCluster of participants The case provides its own control for purposes of comparison.
Repeated measurement

types of questions SCRD might answer


Overarching: Which intervention is effective for this case?
Is this intervention more effective than the current "baseline" or "treatment as usual" condition?
(e.g., does Intervention A reduce problem behavior for this case?)
Does adding B to Intervention A further reduce problem behavior for this case?
Is Intervention B or Intervention C more effective in reducing problem behavior for this case?

criteria for single case research design that meets evidence standards
The independent variable must be systematically manipulated.
The outcome variable must be measured systematically.
The study must include at least three attempts to demonstrate an intervention effect (replication).
Each phase should typically include a minimum of five data points.

designs that meet intervention effect with SCRD


Designs that generally meet this standard include:
ABAB Design
Multiple Baseline Design
Alternating Intervention Design

line graph variations


Two or more dimensions of the same behavior
Two or more different behaviors
Measures of the same behavior under different conditions
Changing values of the independent variable
Same behavior of two or more participants
