Journal of Neurotherapy, 15:292–304, 2011
Copyright © Taylor & Francis Group, LLC
ISSN: 1087-4208 print / 1530-017X online
DOI: 10.1080/10874208.2011.623089

REVIEW ARTICLES

NEUROFEEDBACK AND BASIC LEARNING THEORY: IMPLICATIONS FOR RESEARCH AND PRACTICE

Leslie H. Sherlin1,2,3, Martijn Arns4,5, Joel Lubar6, Hartmut Heinrich7,8, Cynthia Kerson9,10,11, Ute Strehl12, M. Barry Sterman13

1 Neurotopia, Inc., Los Angeles, California, USA
2 Nova Tech EEG, Inc., Mesa, Arizona, USA
3 Southwest College of Naturopathic Medicine, Tempe, Arizona, USA
4 Research Institute Brainclinics, Nijmegen, The Netherlands
5 Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
6 University of Tennessee, Knoxville, Tennessee, USA
7 Department of Child and Adolescent Mental Health, University of Erlangen-Nürnberg, Erlangen, Germany
8 Heckscher-Klinikum, München, Germany
9 ISNR Research Foundation, San Rafael, California, USA
10 Brain Science International, Pleasanton, California, USA
11 Marin Biofeedback, San Rafael, California, USA
12 University of Tübingen, Tübingen, Germany
13 University of California, Los Angeles, California, USA

Brain activity assessed by electroencephalography (EEG) has been demonstrated to respond to conditioning techniques. The concept of modulating this activity has been called EEG biofeedback, more recently neurofeedback, and is based on operant learning principles. Technological advancements have significantly enhanced the ease and affordability of recording and analyzing brain activity. Thus, properly trained practitioners can implement these conditioning strategies in their practice. Recent research indicating evidence-based efficacy has made this technique a more viable option for clinical intervention. The objective of this article is to highlight the learning principles that have provided the fundamentals of this neuromodulatory approach. In addition, it is recommended that future applications in clinical work, research, and development adhere to these principles.

Received 12 August 2011; accepted 1 September 2011.

We acknowledge the contributions of research assistant Noel Larson, MA, for her involvement in researching the previous literature as well as her review and proofing of the manuscript. Address correspondence to Leslie H. Sherlin, PhD, 8503 East Keats Avenue, Mesa, AZ 85209, USA. E-mail: lesliesherlin@mac.com

INTRODUCTION

Electroencephalographic (EEG) brain activity has been demonstrated to be responsive to operant and classical conditioning. This technique has been called EEG biofeedback and, more recently, neurofeedback. Computer technologies have advanced to the point that it is now affordable and convenient for providers in private settings to access software and hardware that can be used for conditioning the EEG. This article highlights the learning principles fundamental to the successful implementation of this neuromodulation approach. It is our contention that future applications in clinical work, research, and development should not stray from the already-demonstrated basic principles of learning theory until empirical evidence demonstrates otherwise. Considerations for
the well-developed learning theory principles should be made in the practical implementation of neurofeedback.

HISTORICAL PERSPECTIVE

Ivan Pavlov established classical conditioning in 1927 in a series of experiments measuring the salivation of dogs in relation to the presentation of food. In short, unconditioned responses, originally called ''inborn reflexes'' (Pavlov, 1927), are reactions to stimuli that require no learning to occur. These unconditioned responses are often the result of naturally occurring stimuli and are useful in survival, such as salivating at the sight of food to increase digestion speed. In this example, food is labeled an unconditioned stimulus, or a stimulus that automatically triggers an unconditioned response (salivating; Pavlov, 1927). Pavlov soon discovered that a conditioned response, such as salivation when hearing the footsteps of the person bringing food to the dog, can be elicited by a conditioned stimulus, in this case the footsteps. He coined this process classical conditioning (Pavlov, 1927). However, classical conditioning does not explain all changes of behavior or the emergence of new behaviors. To explain the emergence of newly learned behaviors that were not a result of classical conditioning, Thorndike (1911/1999) conceptualized operant conditioning by proposing the ''Law of Effect,'' which stated that responses that produce a satisfying effect in a particular situation become more likely to occur again in that situation, and responses that produce a discomforting effect become less likely to occur again in that situation.

Operant conditioning was most refined by the revered psychologist B. F. Skinner (1948). Through his work, the rules for operant conditioning were clearly defined. Operant conditioning can increase a preferred behavior and decrease an undesired behavior by providing a reward or punishment. A reward is any event (presentation of food, tones, etc.) that follows a specified response that is considered to be desirable and is intended to promote the specified response to occur again under the same conditions. A punishment, on the other hand, is any event that follows a specified response that is not desirable and is designed to prevent the specified response from occurring again. If the consequence of the reward or punishment increases or decreases the probability of the response, the response becomes reinforced. Therefore any event that follows the response to a stimulus (reward or punishment) and increases or decreases the likelihood of a behavior is considered a reinforcer. It is important to note that the terms reward and punishment do not mean the same as reinforcer. A reward or punishment specifically refers to the event following a response, whereas the reinforcer is an effective reward or punishment in increasing or decreasing the likelihood of the response occurring again, respectively (i.e., strengthening of the response). We recognize that this sequence can be difficult to conceptualize. In chronological order, a stimulus occurs that is followed by a behavioral response. A reward or punishment is presented in reaction to the response in order to promote or suppress the response. If the reward or punishment achieves this goal, it is labeled a reinforcer.

The words positive and negative in operant conditioning terminology imply the presentation or removal of reinforcement. There are many ways these terms are combined that have effects on behavior. To illustrate the possible combinations, a few examples are provided. A child may be more likely to pay attention in class if stickers are presented afterward (positive reward). Conversely, a child might be more likely to pay attention if he is allowed to skip study hall time (negative reward). A child may be more likely to pay attention in class if she is given extra work to complete if she does not pay attention (positive punishment). Finally, a child may be more likely to pay attention if he is not allowed to participate in recreation time if he fails to pay attention (negative punishment). In general, the term positive is an additive process of reward or punishment, and the term negative is the removal of a reward or punishment to promote the desired behavior.

The timing of the reinforcement of a behavior is critical to learning because delays as small as a fraction of a second can decrease the
strength of the conditioning (Skinner, 1958). The contingent relationship between the behavior and the reinforcement must be evident to the learner. Reinforcement schedules are primarily either continuous reinforcement (this describes a schedule where a response is reinforced rapidly each time it occurs) or intermittent reinforcement (i.e., not every occurrence of the response is rewarded). There are four types of intermittent reinforcement schedules:

1. Fixed-ratio reinforcement schedule, in which the subject must reproduce the correct response a set number of times before reinforcement is presented. A fixed-ratio schedule does not depend on the speed of the responses, only the number of responses.
2. Variable-ratio schedules require an unpredictable number of responses to attain reinforcement.
3. Fixed-interval schedules are similar to fixed-ratio reinforcement schedules except that the response must occur and a set interval of time must also have elapsed.
4. Variable-interval schedules require the response and varied time segments to occur before the reinforcement is made available.
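To make these four schedules concrete, the following is a minimal illustrative sketch (not from the original article; the class names, ratios, and interval values are hypothetical) of how a training program might decide, response by response, whether reinforcement is delivered:

import random

class FixedRatio:
    """Reinforce after every `ratio`-th correct response."""
    def __init__(self, ratio=5):
        self.ratio, self.count = ratio, 0
    def on_response(self):
        self.count += 1
        if self.count >= self.ratio:
            self.count = 0
            return True   # deliver reinforcement now
        return False

class VariableRatio:
    """Reinforce after an unpredictable number of responses (mean `mean_ratio`)."""
    def __init__(self, mean_ratio=5):
        self.mean_ratio, self.count = mean_ratio, 0
        self._target = random.randint(1, 2 * mean_ratio - 1)
    def on_response(self):
        self.count += 1
        if self.count >= self._target:
            self.count = 0
            self._target = random.randint(1, 2 * self.mean_ratio - 1)
            return True
        return False

class FixedInterval:
    """Reinforce the first response made after a fixed interval has elapsed."""
    def __init__(self, interval_s=30.0):
        self.interval_s, self.last_reward_s = interval_s, 0.0
    def on_response(self, now_s):
        if now_s - self.last_reward_s >= self.interval_s:
            self.last_reward_s = now_s
            return True
        return False

class VariableInterval:
    """Reinforce the first response made after a randomly varying interval."""
    def __init__(self, mean_interval_s=30.0):
        self.mean_s, self.last_reward_s = mean_interval_s, 0.0
        self._target_s = random.uniform(0.5, 1.5) * mean_interval_s
    def on_response(self, now_s):
        if now_s - self.last_reward_s >= self._target_s:
            self.last_reward_s = now_s
            self._target_s = random.uniform(0.5, 1.5) * self.mean_s
            return True
        return False

A continuous reinforcement schedule is simply the degenerate case FixedRatio(ratio=1).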
For neurofeedback, most current implementations and studies have utilized a continuous reinforcement schedule within sessions. To facilitate transfer of the self-regulation skill to everyday life, some protocols use transfer trials. Transfer trials are where no feedback is provided during the given period, only feedback on whether the trial was ''correct'' or ''incorrect'' (Rockstroh, Elbert, Birbaumer, & Lutzenberger, 1982). This process refers to a response generalization, which is explained later. However, it is very important to neurofeedback that the final representation of a response can be reached by reinforcing successive approximations of the desired response, a process termed shaping (Skinner, 1958).

Shaping can be used to decrease learning time for the final response by breaking information down into small steps that are easier to accomplish, gradually requiring more steps before reinforcement occurs. Utilizing chaining, one can increase the complexity of the final response. Like many links in a chain, behavioral responses may be reinforced individually at first, but finally the whole sequence of events must occur before reinforcement. For example, an illustration of chaining is the pigeons that Skinner taught to ''play'' ping-pong (Skinner, 1962). The pigeons were taught behaviors successively, but each was dependent on the previous to occur. Such complex behavior can only be conditioned employing the principles of learning theory by chaining and reinforcing successive approximations of behavior.

Two other related concepts integral to learning theory are habituation and sensitization, which are both forms of nonassociative learning. Habituation is the learning process that results in an individual decreasing the response to a stimulus due to the reduction in reaction to novelty. This occurs due to repeated exposure and is typical when the individual is continuously exposed to the same stimuli (Groves & Thompson, 1970). In more recent experimental studies of learning, it has been demonstrated that differential rates of habituation (e.g., to a novel open field) occur between animals of high and low general learning abilities. Because conditioning principles of neurofeedback have largely been applied in the clinical setting, there is a possibility that there is increased habituation and less behavior conditioning in individuals who have lower capacities for learning (Light, Grossman, Kolata, Wass, & Matzel, 2011).

Sensitization, similar to habituation only through the repetitive nature of the stimuli, occurs when the individual has an amplification of a sensation or awareness of the particular stimulus. The individual may find what was a normal response to be painful or otherwise annoying and will attempt to avoid the stimulus even if it once was pleasing or an indication of a reward. Sensitization of the nervous system is influenced by behaviors that are increased or decreased by positive and negative reinforcement. The presence of clinical symptomatology may additionally amplify the presentation and impede improvements (Treisman & Clark, 2011). This consideration should also be given
when determining learning responses and as explanation for less-than-ideal conditioning responses. The neuronal basis of habituation and sensitization has been demonstrated in experiments with the sea slug Aplysia (Kandel & Taut, 1965a, 1965b), demonstrating that these learning principles represent very basic forms of learning, even present in ''simple'' nervous systems. This work earned Kandel the Nobel Prize and is most recently described by Hawkins, Kandel, and Bailey (2006).

CLASSICAL CONDITIONING OF BRAIN ACTIVITY

The first demonstrations that brain activity, more specifically the alpha-blocking response, could be classically conditioned were reported almost simultaneously in France by Durup and Fessard (1935) and in the United States by Loomis, Harvey, and Hobart (1936). Loomis et al. described that in a completely dark room pairing a low auditory tone with a light stimulus resulted in the auditory stimulus becoming the conditioned stimulus leading to the conditioned response—in this case, blocking of the alpha. They also observed that extinction took place if the low tone was presented several times in absence of the light stimulus.

The first systematic studies investigating classical conditioning of the alpha-blocking response were conducted several years later. Jasper and Shagass (1941a) demonstrated that the simple conditioned response of alpha blocking was equal or longer with the conditioned stimulus as it was by the unconditioned stimulus after only 10 paired presentations (Jasper & Shagass, 1941a). In addition, they demonstrated that the alpha-blocking conditioned response could be conditioned with many different styles of reinforcement (cyclic, delayed, trace, differential, differential delayed, and backward) and hence demonstrated that conditioning of the alpha-blocking response fulfilled all of the Pavlovian types of conditioned responses.

In the same year, Knott and Henry (1941) were skeptical about the conditioning of the alpha block and conducted their own research to differentiate if conditioning or sensitization was operating in the previous studies. They demonstrated that the connection between the conditioned stimulus and the alpha block occurred less frequently with more and more trials. They believed it was an anticipatory alpha block response that was being conditioned. Thus, they tentatively affirmed that conditioning was occurring and discussed the implications of it, noting that the operant conditioning of the EEG is a different process making it ''possible to postulate an association between a cortical state following a peripheral stimulus (conditioned stimulus) and the cortical state preceding the unconditioned response'' (p. 142). They elaborated that this theory would work in conjunction with the—then predominant—motor theories of psychology.

Also in 1941, Jasper and Shagass collaborated to build upon the Pavlovian conditioning of the occipital alpha blocking. From previous research, they summarized that it is clear that alpha blocking is not simply an unconditioned response to light. It may also become a conditioned response of stimuli beyond light alone. In their next study, Jasper and Shagass (1941b) were successful in demonstrating the operant conditioning of the alpha block by having subjects voluntarily say subvocally ''block'' and press a button followed by a subvocal ''stop'' and release of the button. The button switched on the light, serving as the unconditioned response, associating the subvocal command with alpha blocking and hence becoming the conditioned response. This was actually the first demonstration of ''voluntary control'' of the EEG in humans.

In 1963, Sterman, and later in the same year Clemente, Sterman, and Wyrwicka, took this classical conditioning of brain activity one step further. They demonstrated in cats that the EEG synchronization and behavioral manifestations of sleep could be conditioned with a tone (conditioned stimulus) and basal forebrain stimulation (unconditioned stimulus), where the conditioned stimulus eventually resulted in sleep preparatory behavior. Furthermore, they also reported that generalization for different tone frequencies and discrimination of the conditioned response had occurred, suggesting that conditioning principles could be applied.
Milstein (1965) summarized the controversy of conditioning EEG responses, noting that initially it seemed clear that alpha blocking was always the result of the presentation of light (Bagchi, 1937) and that this unconditioned response could be conditioned to neutral stimuli (Gastaut et al., 1957; Jasper & Shagass, 1941b; Jus & Jus, 1957; Knott & Henry, 1941; Loomis et al., 1936; F. Morrell & Ross, 1953; Travis & Egan, 1938). However, Milstein (1965) then pointed to the number of studies that failed to show consistency across all individuals in the alpha-blocking response (Redlich, Callahan, & Mendelson, 1946) and the variability in the response duration (Jus & Jus, 1960; L. Morrell & Morrell, 1962) or even reduction in the response altogether (Wells, 1963).

With all of this in mind, Milstein designed a study to determine if a neutral stimulus could produce alpha blocking without being paired with a stimulus that generally elicits the response. Conversely, Milstein also set out to test the following: If the conditioned stimulus was paired with the unconditioned stimulus during nonalpha EEG activity, could the conditioned stimulus then elicit the alpha-blocking response during alpha activity trials? In Milstein's experiment there were 11 different conditions to answer these questions, including habituation periods, test periods, and rest periods, and all were followed by a classical conditioning period. The results indicate that alpha blocking could be conditioned. However, because it was not necessary to pair the conditioned stimulus with the unconditioned stimulus, Knott and Henry's (1941) idea that, instead of true conditioning, the subjects simply became sensitized to the tone seemed plausible. Therefore, it still has not been determined if the results just summarized should be considered classical conditioning or simply reflect sensitization. However, the results do demonstrate that the EEG is subject to learning principles.

OPERANT CONDITIONING OF BRAIN ACTIVITY

In 1962, the first results of voluntary control over alpha activity based on operant conditioning principles were presented by Kamiya (2011). Several years after Kamiya's reports, Sterman's laboratory demonstrated operant conditioning of sensorimotor rhythm (SMR) activity in the cat (Wyrwicka & Sterman, 1968), which was demonstrated to have anticonvulsant properties (also see Sterman, LoPresti, & Fairchild, 1969, 2010). This study marked the beginning of clinical applications of operant conditioning of the EEG.

Given some conflicting results in the early years on the operant conditioning of alpha activity, Hardt and Kamiya (1976) suggested that there were methodological differences between the positive and negative outcomes. Specifically, they reported that those who were using integrated amplitude to quantify the alpha response were affirming the conditioning process, whereas those who were using percent time were failing to support the conditioning response. Hardt and Kamiya pointed to the binary classification used in the percent-time system, above or below the threshold, which clearly ignores considerable information about the actual strength of the alpha signal. Collapsing a continuous variable, like the wide range of amplitude values in the alpha signal, into a binary or dichotomized variable always results in the loss of information about the ''raw'' or ''true'' variable (Tabachnick & Fidell, 2007). This means that if the amplitude of alpha were to rise at the presentation of the conditioned stimulus but that increase falls just short of the arbitrary threshold used for percent-time categorization, then that response cannot be observed and it will appear as if conditioning has not occurred. Furthermore, Hardt and Kamiya (1976) pointed out that if reinforcement is withheld for those individuals who fall just short of the threshold, they will more than likely abandon ''successful strategies'' that result in alpha increases. On the other hand, the amplitude integration method displays the continuous information and is more sensitive to small changes in alpha amplitude, an important element for determining if conditioning of EEG responses can occur. Two aspects of the results prompted Hardt and Kamiya (1976) to say that ''together [these results] suggest that
the percent-time measure can be likened not only to a ruler with unequally spaced graduations, but worse, to a rubber ruler with unequal graduations which requires different degrees of stretch to account for different threshold settings'' (p. 72).
Lansky, Bohdaneck, Indra, and Radii-Weiss (1979) attempted to counter Hardt and Kamiya (1976) regarding percent-time alpha feedback versus amplitude integration. They started by pointing out that percent time is a continuous variable because it ranges from 0 to 100% (but failed to appreciate the importance of the individual being able to see small changes during training). Lansky et al. then noted that in longer training periods, say over minutes, the amplitude-integral method does not allow for quick display of alpha spindles, unlike the percent-time ratio. They also challenged the ideas (a) that an integral method could fit into a Poisson distribution because it is discrete and (b) that the percent-time method follows the rectangular probability distribution reported by Hardt and Kamiya. The rectangular probability distribution was not confirmed with any statistical test and is supposedly discrepant with the empirical findings reported by Lansky et al. Last, it is pointed out that both the spindle length and the power (or amplitude) of the alpha frequency are important aspects of future research, because they most likely have some relationship to each other and offer unique information about the EEG response. Furthermore, as of yet, neither method has been shown to be better than the other.

In 1964, Clemente, Sterman, and Wyrwicka reported an alphalike EEG synchronization in the parieto-occipital cortex, visible just after the animal was reinforced, which they named the post-reinforcement synchronization (PRS). Poschel and Ho (1972) demonstrated that this activity critically depends on the operant response, because providing the reinforcement alone without the requirement for an operant response (i.e., lever press) gradually weakens—or habituates—the well-developed PRS. This phenomenon has also been observed in monkeys (Saito, Yamamoto, Iwai, & Nakahama, 1973) and humans (Hallschmid, Mölle, Fischer, & Born, 2002). In an attempt to understand the implications of operant conditioning of behavior and brain activity on learning, Marczynski, Harris, and Livezey (1981) hypothesized that the PRS of EEG that occurs in animals after the consumption of an expected food reward would be correlated to a greater ability to learn. This hypothesis was derived from the knowledge that the PRS occurs not only ''in the primary and secondary visual cortex but also over the association cortex (suprasylvian gyri) of both hemispheres'' (Marczynski et al., 1981, p. 214). After successfully training 25 of 27 cats to press a lever to receive 1 ml of milk, the authors were able to demonstrate a significant positive correlation between greater PRS indices and faster learning.

Cognition-impairing drugs like scopolamine and atropine block the PRS (Marczynski, 1971), further supporting the role of PRS in consolidation of information processing. Hallschmid et al. (2002) investigated the PRS further in humans and found that a lower-alpha synchronization in humans indicates ''distinct similarities to the PRS phenomena observed in animals under comparable conditions'' (p. 214). It is suggested that there is a correlation between the magnitude of the PRS and learning abilities in cats (Marczynski et al., 1981). Likewise, in humans, functional relations between lower alpha and theta activity during learning processes have been suggested (Doppelmayr, Klimesch, Pachinger, & Ripper, 1998; Klimesch, 1999), thus further confirming the role of PRS in learning.

IMPLICATIONS FOR PRACTICE

Based on the established learning theoretical principles just outlined, there are techniques that are crucial elements when designing a neurofeedback study and are important aspects to be implemented in practice. Next we discuss the previously mentioned learning theory concepts and the specific implications for research and practice.

Speed of Reinforcement

It is well known that different EEG filters possess different properties. The most important aspect
is that the more specific a filter is (often referred to as a higher order filter), the more data points are required to make such a calculation. This results in a longer delay of the feedback signal. As was pointed out by Skinner (1958), the timing of the reinforcement or punishment of a behavior is critical to learning, as a delay as small as a fraction of a second can decrease the strength of the conditioning. Therefore, it is important to understand the specific filter settings used and set them appropriately. In general, the faster the response time of the filter, the better. There is no fixed rule on what is the minimum or maximum acceptable delay of a filter, and this also depends on the ''required specificity,'' but based on Felsinger and Gladstone (1947) and Grice (1948), the latency should not exceed 250 to 350 ms (see Figure 1, from Grice, 1948). Practically speaking, this is comparable to the delay of an FIR filter or a fifth-order Butterworth filter.

FIGURE 1. The number of trials required for response acquisition as a function of the delay of reinforcement. Note. This figure clearly demonstrates that the faster the reinforcement is provided, the faster the learning. From ''The Relation of Secondary Reinforcement to Delayed Reward in Visual Discrimination Learning,'' by G. R. Grice, 1948, Journal of Experimental Psychology, 38, pp. 1–16. Copyright 1948 by the American Psychological Association. Reprinted with permission.
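As a practical check on this guideline, the group delay of a candidate feedback filter can be computed before it is used in training. The sketch below is an added illustration (the 12–15 Hz band, 256 Hz sampling rate, and filter orders are assumed values, not taken from the article); it uses SciPy to compare Butterworth band-pass filters of increasing order against the 250 to 350 ms bound:

import numpy as np
from scipy.signal import butter, group_delay

fs = 256.0                    # sampling rate in Hz (assumed)
band = (12.0, 15.0)           # SMR-like training band in Hz (assumed)

for order in (2, 3, 5):
    b, a = butter(order, band, btype='bandpass', fs=fs)
    # Group delay in samples, evaluated across the pass band
    w, gd = group_delay((b, a), w=np.linspace(band[0], band[1], 50), fs=fs)
    print(f"order {order}: mean in-band group delay ~ {1000.0 * gd.mean() / fs:.0f} ms")

Whatever tool is used, the point is the trade-off itself: a steeper (higher order) filter buys frequency specificity at the cost of feedback latency, and the resulting delay should be verified against the 250 to 350 ms figure cited above.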
Type of Reinforcement

As was presented earlier, the concept of PRS is a reflection of reinforcement, and the amount of PRS is clearly associated with the speed of learning (Hallschmid et al., 2002; Marczynski et al., 1981). Therefore, for optimal learning to take place it is important that the ''feedback'' is designed in such a way that a PRS can occur. This requires a discrete feedback setup most closely resembling the early neurofeedback equipment from the 1970s that had discrete 4-pole Butterworth filters for specific frequency bands, as described by Sterman, Macdonald, and Stone (1974). When all criteria were met for 0.5 s, there was a green light and a ''ding'' sound, resulting in a discrete reward allowing for a PRS to take place.

Consequently, complex games offered in some products are contraindicated, given that the level of continuous feedback does not allow for a PRS complex to occur because it is too difficult for the learner to extract meaningful information. Operant learning involves the formation of a response–reinforcer association. Complex games are much more likely to ''overshadow'' the response–reinforcer association by the formation of a more salient stimulus–reinforcer association (Pearce & Hall, 1978). This practically means they will associate the reinforcement with the stimulus rather than the desired specific brain behavior response. In addition, the presentation of a signal during the interval between the response and the reinforcement can block learning (Williams, 1999). Therefore, in the application of neurofeedback, one ''should stress exercise rather than entertainment'' (Egner & Sterman, 2006). The reinforcement should lead to ''knowledge of results.'' Therefore, it should specifically inform the learner whether the response was right or wrong and to what extent the brain signal changed. To date, no studies could be found empirically comparing the efficacy of continuous feedback, such as a game or video presentation, with applications using discrete feedback. However, the theory and evidence previously described provide sufficient indication that the discrete feedback would achieve superior results.
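A discrete, PRS-friendly reward scheme of the kind just described can be outlined in a few lines. This is a hypothetical sketch, not a description of the Sterman hardware: reinforcement fires only when the reward criterion has been met continuously for 0.5 s, and feedback then pauses briefly so that a discrete reward event, rather than a continuously moving display, follows the response:

def run_discrete_feedback(amplitude_stream, threshold, fs=256,
                          hold_s=0.5, pause_s=1.0, deliver_reward=print):
    """amplitude_stream: iterable of band-amplitude samples, one per 1/fs second."""
    hold_n, pause_n = int(hold_s * fs), int(pause_s * fs)
    above = pause = rewards = 0
    for x in amplitude_stream:
        if pause > 0:                # refractory period after each reward: no feedback,
            pause -= 1               # leaving room for post-reinforcement synchronization
            continue
        above = above + 1 if x > threshold else 0
        if above >= hold_n:          # criterion sustained for 0.5 s -> one discrete reward
            deliver_reward("ding")   # e.g., tone plus green light
            rewards += 1
            above, pause = 0, pause_n
    return rewards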

Shaping

Another aspect of learning theory entails the ''shaping procedure'' of operant conditioning. During operant conditioning of a behavior, or in this case, brain activity, shaping occurs by adjusting thresholds in an a priori direction to promote learning. Based on the earlier work
of Hardt and Kamiya (1976), the feedback parameters and reward should not utilize automatic calculation of thresholds. The continuous updating of a threshold is a ''moving target,'' and the reward feedback can be provided to the learner for a given percentage of the time even when the desired activity is changing in the opposite direction. In auto-thresholding, the reward feedback will be given for moving in the desired direction in a momentary response. This may not actually increase or decrease the amplitude in the desired training direction in consideration of the baseline level. Hence, auto-thresholding techniques do cause differential effects, which can also be understood from learning theory.

This can be illustrated by a practical example of a screaming child. The goal is to teach the child to no longer scream, and we ''punish'' the child every time it screams. In an auto-thresholding model eventually the child will be punished for ''raising its voice''; that is, auto-thresholding is blind to the quality of the behavior being trained. In addition to this point, if the learner begins to fatigue, lose interest, or even stop actively participating in the training, the reward signals continue to be provided irrespective of whether they are producing the desired behavior. They are, in fact, being rewarded only for changing the behavior relative to the previous averaged time period, which may not be an actual change from the starting behavior point. Even worse, it may actually be in the opposite direction from the desired training parameter. Consider a learner who is asked to reduce the amplitude of a given frequency band while the threshold is calculated automatically: he or she will always receive some percentage of reward feedback even if the amplitude is rising across time. If this occurs, at best, the learner may show improvements only if they continually demonstrate change in the desired direction. It is possible that they may show no learning and no effect. Finally, at worst they could effectively train in the opposite direction and produce an increase in aberrant and negative behaviors. Again, although there have been no studies directly comparing an auto-thresholding procedure to a non-auto-thresholding procedure in neurofeedback, there is a considerable body of theoretical and empirical evidence supporting the non-auto-thresholding procedure for operant conditioning.
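The contrast can be simulated directly. In the sketch below (an added illustration with made-up numbers, not data from any study), a learner who is supposed to lower the amplitude of a band instead drifts upward; a fixed threshold set from the pre-training baseline correctly withholds most rewards, whereas a threshold that continually re-centers on the recent average keeps the reward rate roughly constant no matter which way the signal moves:

import numpy as np

rng = np.random.default_rng(0)
fs, minutes = 32, 20                      # coarse 32 Hz envelope samples, 20-min session (assumed)
n = fs * 60 * minutes
amplitude = np.linspace(10.0, 14.0, n) + rng.normal(0.0, 1.0, n)   # drifts UP although the goal is DOWN

fixed_threshold = 9.0                     # fixed, baseline-derived threshold (reward when below it)
fixed_rate = np.mean(amplitude < fixed_threshold)

window = 30 * fs                          # auto-threshold: mean of the last 30 s
auto_hits = 0
for i in range(window, n):
    auto_hits += amplitude[i] < amplitude[i - window:i].mean()
auto_rate = auto_hits / (n - window)

print(f"fixed threshold reward rate: {fixed_rate:.1%}")   # low, reflecting the unwanted drift
print(f"auto threshold reward rate:  {auto_rate:.1%}")    # hovers near 50% despite the drift

With the fixed threshold the reward rate carries information about whether the trained parameter is actually moving in the intended direction; with the continuously recalculated threshold it does not.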
Whether shaping is needed at all depends on whether the ultimate desired behavior can be operationalized. For example, in paradigms such as slow cortical potential (SCP) feedback there is not such a known goal. It does not matter whether the shift of the SCP deviates about 2 or 7 mV from baseline. There is no ''norm'' to be reached by a learner. Only the positive or negative deviation from baseline is important. If at all, shaping procedures are used here for ''comorbid'' behaviors, such as not being able to remain seated, not producing artifacts, and so on. Therefore, knowledge about the brain activity to be trained is also crucial in applying the right learning theoretical principles, as is further outlined in the upcoming Specificity section.

Finally, as described earlier, the auto-thresholding procedure precludes the possibility of the neurofeedback being applied in a ''blinded'' fashion. When investigating in a double-blind fashion, one has to resort to techniques like ''auto-thresholding'' to force the blinded methodology impractically upon the neurofeedback intervention. This will contribute to null findings, as one of the primary components of the neurofeedback training is based in the learning principle of shaping, which will be completely violated in the auto-thresholding procedure.

Specificity

Philippens and Vanwersch (2010) wanted to address what they perceived to be a lack of well-controlled, scientific investigations of the efficacy of neurofeedback. As a result, they designed a laboratory study utilizing telemetric neurofeedback with marmoset monkeys (N = 4) to determine if, through operant conditioning, these monkeys could increase the SMR. They rewarded the presence of ''high alpha,'' which they clarified as a peak frequency between 11 and 14 Hz. All four monkeys were able to significantly increase the presence of this frequency, though each monkey had a
unique learning curve. One monkey learned to increase this frequency so well that by Session 3, all 35 possible rewards were obtained in the last two of the six 5-min epochs. Due to this fast learning, this monkey was exposed to extinction training, where the recording was done without the possibility of rewards. In that extinction trial, Philippens and Vanwersch reported that this monkey clearly expressed expectant behavior after successful EEG changes, which they cite as evidence that the monkey learned to increase high alpha to obtain the reward. They reported that the short training time necessary for this level of success was possibly related to the intracortical electrodes, because this significantly decreases the noise measured by the scalp electrodes. This demonstrated that the better the specificity of the signal or behavior trained, the faster the learning takes place, in agreement with what would be expected from learning theory.

For operant conditioning it is very important to be aware of specifically ''what behavior'' is being conditioned in order to achieve learning and to improve the specificity. In the case of neurofeedback, knowledge about the trained brain rhythms, and knowledge about the neurophysiology of the EEG, is crucial. For example, when performing SMR neurofeedback, it is important to be aware of how SMR occurs, namely, in spindles with a specific duration. Therefore, better results are obtained when a specific SMR spindle with a duration of 0.25 s is reinforced, rather than reinforcing any excursion above the SMR amplitude threshold. By including metrics such as ''time above threshold'' or ''sustained reward period,'' the feedback will become more specific to the true SMR rhythm. The same concept was illustrated in the example of Hardt and Kamiya (1976) regarding alpha neurofeedback. As was previously described, the same applies to SCP neurofeedback, where specifically the polarity of the EEG is reinforced (i.e., negativity vs. positivity), not requiring shaping but simply a response in the right direction.
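One way to implement a ''sustained reward period'' is to count only those excursions that stay above the amplitude threshold for the full spindle duration. The following is a hedged sketch (the 0.25 s criterion comes from the text above; the function and its input, an already band-limited SMR amplitude envelope, are assumptions for illustration):

import numpy as np

def sustained_excursions(envelope, threshold, fs, min_duration_s=0.25):
    """Count threshold crossings sustained for at least `min_duration_s`,
    and report total time above threshold in seconds."""
    above = envelope > threshold
    min_len = int(min_duration_s * fs)
    count, run = 0, 0
    for flag in above:
        run = run + 1 if flag else 0
        if run == min_len:       # counted once, when a run first reaches the required duration
            count += 1
    return count, above.sum() / fs

Rewarding only the sustained excursions, rather than every momentary crossing, keeps the feedback tied to genuine SMR spindles instead of brief noise-driven peaks.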
Artifacts

Muscle activity, eye movements, electrocardiogram, breathing, and other environmental and physiological factors may produce artifacts. Whether this is intentional or not, as soon as such activity is not detected as an artifact and is rewarded, an incorrect learning process will take place, such as rewarding eye blinks. Conversely, it is easy for the learner to reduce frontal delta across time simply by learning to behaviorally produce fewer eye blinks. Therefore, equipment should be able to detect artifacts online, or better in real time. In addition, the therapist or investigator has to closely observe the learner to avoid this ''artifact-driven'' feedback, thus also improving the specificity of what is trained.
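A minimal form of online artifact gating (a generic sketch, not a description of any particular system; channel names and threshold values are assumptions) is to withhold reinforcement whenever ancillary channels or simple signal-quality checks indicate that the current epoch is contaminated, for example by an eye blink or a muscle burst:

import numpy as np

def artifact_free(raw_epoch, eog_epoch, emg_epoch,
                  amp_limit_uv=100.0, eog_limit_uv=70.0, emg_rms_limit_uv=15.0):
    """Very coarse artifact screen for one feedback epoch (all thresholds assumed)."""
    if np.max(np.abs(raw_epoch)) > amp_limit_uv:             # gross amplitude / movement artifact
        return False
    if np.max(np.abs(eog_epoch)) > eog_limit_uv:             # probable eye blink or eye movement
        return False
    if np.sqrt(np.mean(emg_epoch ** 2)) > emg_rms_limit_uv:  # sustained muscle activity
        return False
    return True

# Reward is considered only when the epoch passes the screen, e.g.:
#   if artifact_free(eeg, eog, emg) and criteria_met(eeg):
#       deliver_reward()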
Secondary Reinforcement

As we have seen from the study by Clemente et al. (1964), a reward needs to be not only discrete but also rewarding, given that PRS was most clearly present when the reward was milk as opposed to water. So how can we make the feedback employed more interesting for the learner? For some clinics, monetary rewards or prizes are used as a secondary reinforcement. For example, if a child gains a specific number of points, he or she obtains a prize. It is very important that any monetary or other form of secondary reinforcer be tied to the learning process and not to simple participation (e.g., showing up for appointments without producing any change in EEG). Several animal studies have demonstrated a facilitation of learning by using a secondary reinforcer (e.g., Grice, 1948). However, this has not been investigated for the longer training periods used in clinical practice (after a session of 20 min), so at this moment this is only a clinical observation.

Generalization

When a client is performing neurofeedback in the clinic, he or she will eventually learn to exercise control over the trained parameter. This skill is—to some degree—associated with the environment of the clinic and the relationship with the practitioner and may not be used outside this setting. To also have the benefits of this self-regulation skill in daily life, a process called generalization should occur, so the learned control is also exercised outside the clinic and without the feedback. In most SCP
studies and many clinical settings, this has been facilitated by also including transfer trials, which are neurofeedback trials without the real-time feedback. Instead, the transfer trial provides information only at the end of the trial, whether it was successful or not (e.g., Drechsler et al., 2007). In the clinical setting, the client can be asked to rate the trial before knowing the score. This approach is further supported by studies on motor learning in which better performance was observed as compared to a condition where feedback was provided 100% of the time (Winstein & Schmidt, 1990). Furthermore, in SCP studies patients are provided with a card, which is a printout of the ''success'' screen. It serves as a cue (discriminative stimulus) to elicit the adequate brain state. The critical situations where and when self-regulation is most important (ADHD: before school tests; epilepsy: in seizure-eliciting situations) are assessed during the treatment and practiced either in real-life situations or in role-play.

Of interest, several studies have demonstrated that generalization, both across time and across state, does take place. For example, Gani, Birbaumer, and Strehl (2008) demonstrated that the skill to regulate the SCP was still preserved at a 2-year follow-up. Recently, it was also demonstrated in a monkey that voluntarily modulated neural activity could indeed produce specific changes in cognitive function (Schafer & Moore, 2011), suggesting a generalization from trained EEG parameters to behavior. Finally, Sterman, Howe, and Macdonald (1970) demonstrated that (12–15 Hz) SMR training during wakefulness also resulted in the facilitation of (12–15 Hz) sleep spindles during sleep and a reduction of brief arousal disruptions during sleep, a finding that has recently been demonstrated in humans as well (Hoedlmoser et al., 2008). These last two studies also specifically demonstrate that successful operant conditioning has taken place.

CONCLUSION

The authors have concerns about many of the recent research studies that proclaim to have studied neurofeedback, many of which show poor results but also fail to present any indication of a learning process being administered and/or taking place. These results are being reported as neurofeedback studies. However, it may well be that, because poorly designed control conditions, inappropriate protocols, inappropriate designs, or inappropriate equipment were used, these studies have in fact not investigated neurofeedback, defined as a learning process utilizing operant learning mechanisms of brain activity. This concern is based on the understanding of the nearly ninety years of research into conditioning, of both humans and animals, summarized in this article. Thus, we hope that future neurofeedback studies and clinical applications will be designed utilizing these basic principles of learning theory. If deviations from the established techniques are desirable for some reason, they should be empirically evaluated against those already demonstrating efficacy prior to implementation in private settings. We strongly encourage the practitioner to apply these same basic concepts and knowledge while fully understanding the requirements of an adequate setup of equipment as well as the training parameters and implementation. It is additionally our hope that these well-established and empirically tested learning theory concepts be considered when evaluating outcome studies, being sure not to dismiss effective interventions based on studies that do not adhere to these concepts.

REFERENCES

Bagchi, B. K. (1937). The adaptation and variability of response of the human brain rhythm. Journal of Psychology, 3, 355–384.
Clemente, C. D., Sterman, M. B., & Wyrwicka, W. (1963). Forebrain inhibitory mechanisms: Conditioning of basal forebrain induced EEG synchronization and sleep. Experimental Neurology, 7, 404–417.
Clemente, C. D., Sterman, M. B., & Wyrwicka, W. (1964). Post-reinforcement EEG synchronization during alimentary behavior. Electroencephalography and Clinical Neurophysiology, 16, 355–365.
Cortoos, A., De Valck, E., Arns, M., Breteler, M. H., & Cluydts, R. (2010). An exploratory study on the effects of tele-neurofeedback and tele-biofeedback on objective and subjective sleep in patients with primary insomnia. Applied Psychophysiology and Biofeedback, 35, 125–134. doi:10.1007/s10484-009-9116-z
Doppelmayr, M. M., Klimesch, W., Pachinger, T., & Ripper, B. (1998). The functional significance of absolute power with respect to event-related desynchronization. Brain Topography, 11, 133–140.
Drechsler, R., Straub, M., Doehnert, M., Heinrich, H., Steinhausen, H. C., & Brandeis, D. (2007). Controlled evaluation of a neurofeedback training of slow cortical potentials in children with attention deficit/hyperactivity disorder (ADHD). Behavioral and Brain Function, 3(1), 35.
Durup, G., & Fessard, A. (1935). I. L'électrencéphalogramme de l'homme. Observations psycho-physiologiques relatives à l'action des stimuli visuels et auditifs [The human electroencephalogram. Psycho-physiological observations relative to the action of visual and auditory stimuli]. L'année Psychologique, 36(1), 1–32. doi:10.3406/psy.1935.30643
Egner, T., & Sterman, M. B. (2006). Neurofeedback treatment of epilepsy: From basic rationale to practical application. Expert Review of Neurotherapeutics, 6, 247–257. doi:10.1586/14737175.6.2.247
Felsinger, J. M., & Gladstone, A. I. (1947). Reaction latency (StR) as a function of the number of reinforcements (N). Journal of Experimental Psychology, 37, 214–228.
Gani, C., Birbaumer, N., & Strehl, U. (2008). Long-term effects after feedback of slow cortical potentials and of theta-beta-amplitudes in children with attention deficit/hyperactivity disorder (ADHD). International Journal of Bioelectromagnetism, 10, 209–232.
Gastaut, H., Jus, C., Morrell, F., Storm Van Leeuwen, W., Dongier, S., Naquet, R., . . . Were, J. (1957). Etude topographique des reactions electroencephalographiques conditionnees chez l'homme [Topographic study of conditioned EEG reactions in humans]. EEG and Clinical Neurophysiology, 9, 1–34.
Grice, G. R. (1948). The relation of secondary reinforcement to delayed reward in visual discrimination learning. Journal of Experimental Psychology, 38(1), 1–16.
Groves, P. M., & Thompson, R. F. (1970). Habituation: A dual-process theory. Psychological Review, 77, 419.
Hallschmid, M., Mölle, M., Fischer, S., & Born, J. (2002). EEG synchronization upon reward in man. Clinical Neurophysiology, 113, 1059–1065.
Hardt, J. V., & Kamiya, J. (1976). Conflicting results in EEG alpha feedback studies: Why amplitude integration should replace percent time. Biofeedback and Self-Regulation, 1(1), 63–75.
Hawkins, R. D., Kandel, E. R., & Bailey, C. H. (2006). Molecular mechanisms of memory storage in Aplysia. Biology Bulletin Virtual Symposium on Marine Invertebrate Models of Learning and Memory, 210, 174–191.
Hoedlmoser, K., Pecherstorfer, T., Gruber, G., Anderer, P., Doppelmayr, M., Klimesch, W., & Schabus, M. (2008). Instrumental conditioning of human sensorimotor rhythm (12–15 Hz) and its impact on sleep as well as declarative learning. Sleep, 31, 1401–1408.
Jasper, H., & Shagass, C. (1941a). Conditioning the occipital alpha rhythm in man. Journal of Experimental Psychology, 28, 373–387.
Jasper, H., & Shagass, C. (1941b). Conscious time judgments related to conditioned time intervals and voluntary control of the alpha rhythm. Journal of Experimental Psychology, 28, 503–508.
Jus, A., & Jus, C. (1957). L'application des réactions EEG conditionnées en neuropsychiatrie [The application of conditioned EEG reactions in neuropsychiatry]. Electroencephalography and Clinical Neurophysiology, (Suppl. 7), 344–370.
Jus, A., & Jus, C. (1960). Etude electro-clinique des alterations de conscience dans le petit mal [Study of electro-clinical alterations of consciousness in petit mal]. Studii si Cercetari de Neurologie, 5, 243–254.
Kamiya, J. (2011). The first communications about operant conditioning of the EEG. Journal of Neurotherapy, 15(1), 65–73.
Kandel, E. R., & Taut, L. (1965a). Heterosynaptic facilitation in neurons of the abdominal ganglion of Aplysia depilans. Journal of Physiology, 181, 1–27.
Kandel, E. R., & Taut, L. (1965b). Mechanism of heterosynaptic facilitation in the giant cell of the abdominal ganglion of Aplysia depilans. Journal of Physiology, 181, 28–47.
Klimesch, W. (1999). EEG alpha and theta oscillations reflect cognitive and memory performance: A review and analysis. Brain Research Reviews, 29(2–3), 169–195.
Knott, J. R., & Henry, C. E. (1941). The conditioning of the blocking of the alpha rhythm of the human electroencephalogram. Journal of Experimental Psychology, 28, 134–144.
Lansky, P., Bohdaneck, Z., Indra, M., & Radii-Weiss, T. (1979). Alpha detection: Some comments on Hardt and Kamiya, ''Conflicting results in EEG alpha feedback studies.'' Biofeedback and Self-Regulation, 4, 127–131.
Light, K. R., Grossman, H., Kolata, S., Wass, C., & Matzel, L. D. (2011). General learning ability regulates exploration through its influence on rate of habituation. Behavioural Brain Research, 223, 297–309.
Loomis, A. L., Harvey, E. N., & Hobart, G. (1936). Electrical potentials of the human brain. Journal of Experimental Psychology, 19, 249.
Lubar, J. F. (2003). Neurofeedback for the management of attention deficit disorders. Biofeedback: A practitioner's guide, pp. 409–437.
Marczynski, T. J. (1971). Cholinergic mechanism determines the occurrence of reward contingent positive variation (RCPV) in cat. Brain Research, 28(1), 71–83.
Marczynski, T. J., Harris, C. M., & Livezey, G. T. (1981). The magnitude of post-reinforcement EEG synchronization (PRS) in cats reflects learning ability. Brain Research, 204, 214–219.
Milstein, V. (1965). Contingent alpha blocking: Conditioning or sensitization? Electroencephalography and Clinical Neurophysiology, 18, 272–277.
Morrell, F., & Ross, M. (1953). Central inhibition in cortical conditioned reflexes. A. M. A. Archives of Neurology and Psychiatry, 70, 611–616.
Morrell, L., & Morrell, F. (1962). Non-random oscillation in the response-duration curve of electrographic activation. Electroencephalography and Clinical Neurophysiology, 14, 724–730. doi:10.1016/0013-4694(62)90086-X
Pavlov, I. P. (1927). Conditioned reflexes: An investigation of the physiological activity of the cerebral cortex (G. V. Anrep, Trans. & Ed.). London, UK: Oxford University Press.
Pearce, J. M., & Hall, G. (1978). Overshadowing the instrumental conditioning of a lever-press response by a more valid predictor of the reinforcer. Journal of Experimental Psychology: Animal Behavior Processes, 4, 356.
Philippens, I. H. C. H. M., & Vanwersch, R. A. P. (2010). Neurofeedback training on sensorimotor rhythm in marmoset monkeys. NeuroReport, 21, 328–332.
Poschel, B. P., & Ho, P. M. (1972). Post-reinforcement EEG synchronization depends on the operant response. Electroencephalography and Clinical Neurophysiology, 32, 563–567.
Redlich, F. C., Callahan, A., & Mendelson, R. H. (1946). Electroencephalographic changes after eye opening and visual stimulation. Yale Journal of Biology and Medicine, 18, 367–376.
Rockstroh, B., Elbert, T., Birbaumer, N., & Lutzenberger, W. (1982). Slow brain potentials and behavior. Baltimore, MD: Urban & Schwarzenberg.
Saito, H., Yamamoto, M., Iwai, E., & Nakahama, H. (1973). Behavioral and electrophysiological correlates during flash-frequency discrimination learning in monkeys. Electroencephalography and Clinical Neurophysiology, 34, 449–460.
Schafer, R. J., & Moore, T. (2011). Selective attention from voluntary control of neurons in prefrontal cortex. Science, 332, 1568–1571. doi:10.1126/science.1199892
Skinner, B. F. (1948). ''Superstition'' in pigeons. Journal of Experimental Psychology, 38, 168–172.
Skinner, B. F. (1958). Reinforcement today. American Psychologist, 13, 94–99.
Skinner, B. F. (1962). Two ''synthetic social relations.'' Journal of Experimental Analysis of Behavior, 5, 531–533.
Sterman, M. B. (1963). Brain mechanisms in sleep: Chapter 4, Conditioning of induced EEG and behavioral sleep patterns (Doctoral dissertation, University of California at Los Angeles), 80–99.
Sterman, M. B., Howe, R. C., & Macdonald, L. R. (1970). Facilitation of spindle-burst sleep by conditioning of electroencephalographic activity while awake. Science, 167, 1146–1148.
Sterman, M. B., LoPresti, R. W., & Fairchild, M. D. (1969). Electroencephalographic and behavioral studies of monomethylhydrazine toxicity in the cat. Alexandria, VA: Storming Media.
Sterman, M. B., LoPresti, R. W., & Fairchild, M. D. (2010). Electroencephalographic and behavioral studies of monomethylhydrazine toxicity in the cat. Journal of Neurotherapy, 14, 293–300.
Sterman, M. B., Macdonald, L. R., & Stone, R. K. (1974). Biofeedback training of the sensorimotor EEG rhythm in man: Effects on epilepsy. Epilepsia, 15, 395–417.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston, MA: Allyn and Bacon.
Thorndike, E. L. (1999). Animal intelligence (p. v). Bristol, UK: Thoemmes. (Original work published 1911)
Travis, L. E., & Egan, J. P. (1938). Conditioning of the electrical response of the cortex. Journal of Experimental Psychology, 22, 524–531. doi:10.1037/h0056243
Treisman, G. J., & Clark, M. R. (2011). A behaviorist perspective. Advances in Psychosomatic Medicine, 30, 8–21.
Wells, C. (1963). Electroencephalographic correlates of conditioned responses. In G. H. Glaser (Ed.), EEG and behavior (pp. 60–108). New York, NY: Basic Books.
Williams, B. A. (1999). Associative competition in operant conditioning: Blocking the response-reinforcer association. Psychonomic Bulletin & Review, 6, 618–623.
Winstein, C. J., & Schmidt, R. A. (1990). Reduced frequency of knowledge of results enhances motor skill learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 677–691.
Wyrwicka, W., & Sterman, M. B. (1968). Instrumental conditioning of sensorimotor cortex EEG spindles in the waking cat. Physiology & Behavior, 3(5), 703–707.
