
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/304489165

Dynamic Assessment: From Theory to Practice

Book · January 2015

Author: Mohammad Saber Khaghaninejad, Shiraz University

All content following this page was uploaded by Mohammad Saber Khaghaninejad on 27 June 2016.


DYNAMIC ASSESSMENT
From Theory to Practice

By

Mohammad Saber Khaghaninezhad, Ph.D.


Shiraz University




Acknowledgement

I would like to show my sincere gratefulness to Dr. Reza Pishghadam (the professor
of TEFL at Ferdowsi University of Mashhad, Iran), to whom I indebt a great deal of
my knowledge on Dynamic Assessment and related issues.

Preface

Assessment of individual children and young people has remained a major
focus of professional activity for educational psychologists and one surrounded
by continuing controversy. Despite the arguments of the 'reconstructing
movement' of the 70s, and ensuing debates, individual assessment has
survived seemingly as strong as ever. Subsumed within arguments concerning
the prominence given to individual assessment, another set of discussions has
concerned the paradigms from which such professional activity might derive.
Located among constructs contrasting environment with individual and static
with interactive perspectives, the particular debate within individual
assessment has centered on the relative merits of dynamic methods of
assessment.

This book presents a brief taxonomy of purposes and procedures deriving
from these major assessment paradigms. In the first part, the theoretical
foundations of Dynamic Assessment (the definitions, origins, models, and
differences with traditional testing approaches) are described. The second part
illustrates these underlying theories in practice and shows the practical
outcomes of Dynamic Assessment in prediction and evaluation of
practical/potential capabilities of students by presenting the essence of some
practical studies in this field.

Mohammad Saber Khaghaninejad, Ph.D.


Shiraz University

Table of Contents

Preface

Part I
Basics of Dynamic Assessment

Introduction
What is DA?
The origins of DA
Why DA?
DA models
DA versus traditional assessment
The potential of DA
Problems with DA

Part II
Dynamic Assessment in Practice

DA and gifted students
Computer-assisted DA
DA in educational settings: realizing potential
DA and problematic language learners

References

Index


Part I

BASICS OF DA
In the first part of the book, the theoretical principles of Dynamic Assessment
are examined. Its origins, theoretical models, and challenging areas are
described and compared with the traditional methods of assessment in the
realm of education.
Introduction

All educational programs must at some point appraise learners' knowledge
and abilities; that is, they must assess them. The purposes of educational
assessment are to evaluate school achievements, predict future achievements,
and prescribe educational treatments. As a result, assessment and instruction
are two complementary aspects of methodology which should optimally result
in true learning. From this perspective, assessment occurs not in isolation
from instruction but as a dialectically integrated activity which seeks to
understand development by actively promoting it. This pedagogical
approach, known as Dynamic Assessment (DA), challenges the widespread
acceptance of independent performance as the privileged indicator of
individuals' abilities and calls for assessors to abandon their role as observers
of learner behavior in favor of a commitment to joint problem solving aimed
at supporting learner development. In DA, the traditional goal of producing
generalizations from a snapshot of performance is replaced by ongoing
intervention in development. The dialectic unification of assessment and
instruction that DA represents has profound implications for classroom
practice, which second/foreign language (L2) researchers are beginning to
explore.
In traditional assessment, the final examination becomes an official
instrument for announcing passed or failed students, with no attention to
the pedagogical, psychological, and physiological conditions of under-achievers
(whether passed or failed). From this perspective, optimal instruction contains
just two aspects: teaching and, finally, testing. If learning is taken as the final
target of instruction, then the loss of a third aspect is felt in this framework.
DA adds this third aspect, the teacher's mediation in students'
learning processes during and/or after the final exam (teaching, testing and
then teaching again), to the traditional framework of assessment and, unlike static
assessment, aims at reducing the number of unsuccessful language learners at
the end of the course. Undoubtedly, since holding "remedial teaching"
sessions of Dynamic Assessment is both costly and time-consuming, in
natural contexts of language teaching and learning, most teachers and
learners have not yet experienced its remarkable results.
Years ago, Binet (1911), after observing pupils with mental retardation in
Paris schools, stated that the time the teachers were spending trying to teach
the academic content of the early grades might better be spent helping the
children first acquire more adequate tools of learning, thereby implying that
the failure to develop such learning tools constitutes an unnecessary obstacle
to better learning. His suggestion was a precursor of what later became
cognitive education, but it also led Binet (ibid) to propose that attention be
given to assessment of the processes of learning rather than exclusively to the
products of prior opportunities to learn. In spite of his
enthusiasm for this idea, Binet (ibid) did little to pursue it. Similarly, Rey
(1934) wrote about assessment of learning processes in situations that
involved teaching, but then continued, over the next 30 years, to develop
normative, standardized tests of already-acquired knowledge and skill.
Although these and other investigators suggested measurement of the ability
to learn as part of intelligence and as an ideal method for testing mental
abilities, the interest in, elaboration, and spread of the concept of 'Dynamic
Assessment' occurred only later, with the introduction of Vygotsky's
theory by Brown and Ferrara and of Feuerstein's ideas (Haywood,
1977) to the wider psychological world.
In 1905, Binet and his colleagues produced the first test for the
investigation of children's intelligence. At the beginning of the twentieth century,
education provision in many developed countries was being extended to
include a much greater proportion of the child population. As a consequence,
teachers found themselves confronted by significant numbers of children who
appeared incapable of coping with the academic demands. Binet's (1905)
test, restricted to academic intelligence (Brown & French, 1979) rather than
broader forms of intellectual functioning, represented a means of comparing
the mental level of the testees with that of their same-aged peers that could
overcome teacher bias about the reasons for student failure (Thorndike,
1997). On the basis of this measure, the suitability of the child for schooling
could be derived. While Binet (1911, p. 323) defined intelligence in terms of
"the ability to learn", his, and subsequent, measures have tended to focus
primarily on the child's past learning rather than their capacity for learning.
Almost 100 years later, most IQ tests have changed little; they are
psychometrically more sophisticated and have a nodding acquaintance with
information processing theories. The most popular tests continue to include
items that involve activities such as naming objects, recreating designs with
patterned blocks and remembering a series of numbers. Since this time,
however, educational psychologists have come to recognize the many flaws
in IQ measures: their tendency to lack an empirically supported theoretical
framework, the limited relationship between scores and instructional
practices, their emphasis upon products rather than psychological processes,
their tendency to linguistic and cultural bias and their inability to guide
clinicians in deriving specific interventions for educational difficulties.
DA represents an approach that seeks to overcome many of the above
difficulties. Theoretically driven, DA procedures seek to examine cognitive
processes that are important for learning; they are seen as far more sensitive
measures for minority populations (Hessels, 1997, 2000) and they have the
potential to offer insights and guidance for practitioners (Lidz & Elliott, 2000).
In 1979 a remarkable book was published which dared to question the
very foundations upon which much of modern psychometric theory, and
certainly its application by applied educational and clinical psychologists,
were based. That book, The Dynamic Assessment of Retarded Performers, not
only introduced the name of Feuerstein to a fairly unsuspecting world but
also set out for the first time a logical, coherent and ethical alternative to so-
called "static" assessment. A key text in the mid-1980s was Carol Lidz's
excellent book of readings which provided a comprehensive overview of both
theoretical and practical aspects of DA. However, apart from occasional
journal articles, the literature in this area has remained relatively sparse. What
has clearly been needed has been an up-to-date text which covers the
multitude of techniques that have been constructed over the past two decades
under the guise of dynamic assessment. This is essentially what Tzuriel
(2000) has attempted to do. His brief book is superbly focused and densely
packed, with a great deal of helpful information for any educational or
clinical psychologist or researcher seeking a comprehensive introduction to
the theory and practice of dynamic assessment.
Tzuriel (2000) provided an excellent overview of much of the key
research into the validity, reliability and overall value of Dynamic
Assessment whilst at the same time confronting many of its most powerful
critics with logical and carefully balanced rebuttals. There is little doubt that
his book, Dynamic Assessment of Young Children, will come to be considered
a standard classic in this field, and it can be recommended without hesitation
to anyone seeking an accessible introduction to the static versus Dynamic
Assessment debate.
At an intuitive level, a test measure that includes an examination of the
capacity of the child to learn when provided with scaffolded instruction that
is tailored to offer the minimum of assistance necessary for successful
performance and which then permits examination of transfer to other tasks,
would appear to be more valid and meaningful for educational settings.
Certainly, there is much evidence that assisted performance can provide
evidence of latent abilities and expertise that more conventional measures fail
to tap (Hessels, 1997).
Criticisms of the use of traditional tests of intelligence and
achievement for diagnosing and treating learning problems have recently
reached a peak. A resounding call for assessment focusing on outcome
directly linked to actual skills can be heard throughout the literature related to
school psychology, regular education, and special education. In a discussion
of trends in intelligence testing, Daniel (1997) noted that most psychometric
intelligence tests rely on factor analytic techniques, usually resulting in a
hierarchical model. Daniel remarked that psychometric intelligence tests are
useful in making predictions regarding general ability, but may be less useful
in providing detailed information regarding learning strengths and
weaknesses for practical application. The use of less traditional assessment
techniques has been advocated as being more useful in assessing strengths
and weaknesses and providing feedback regarding progress in meeting
specific learning goals (Batsche & Knoff, 1995; Gresham & Witt, 1997;
Thomas, 1997).
DA may be useful in providing for specific intervention
recommendations as well as in measuring progress for many different
populations of students (Duran, 1989; Lopez, 1995). Dynamic assessment is
particularly helpful in accounting for variables that may underestimate an
individual's ability, such as unfamiliarity with the task, language, or
materials. Particularly with bilingual and language minority children,
ecological models of cognitive assessment are likely to be helpful (Lopez,
1995). Dynamic assessment techniques have been used effectively by speech
pathologists in helping to distinguish those with language disorders from
individuals experiencing the normal process of second language development
and have demonstrated good predictive validity for differentiating learning
disabled children from their non-disabled peers (Rutland & Campbell, 1995).
The theoretical forefather of DA is the Russian psychologist, Lev
Vygotsky (1978), whose notion of the zone of proximal development (ZPD)
is central to the approach. This construct, now widely influential in education
circles, concerns performance which cannot be achieved unassisted but can
be achieved with the help of another more capable individual. It is important
to recognize that conventional intelligence tests are measures of achievement
involving skills that are typically acquired in the home or at school
(Sternberg & Grigorenko, 2001). Thus, it is hardly surprising that research
findings that children from ethnic minorities (Lopez, 1997) or from socially
disadvantaged communities (Budoff, 1987) tend to underperform on such
measures resonate with many clinicians who often see such children struggle
with items that are unfamiliar and unpracticed. However, less immediately
obvious is the fact that many IQ measures place great importance upon speed
and absolute accuracy, factors that may have differential value across
cultures.

What is DA?

DA is an umbrella term that describes a heterogeneous group of approaches
that are linked by one key element: instruction and feedback are provided as
part of the assessment process and are contingently related to the individual's
ongoing performance. Thus, the nature and extent of assistance that are
provided depend on the individual differences manifested within the
assessment context. This, of course, represents a significant departure from
conventional testing procedures which usually preclude any forms of
intervention other than strictly delineated inputs geared to assist
administration and maintain rapport. Unlike conventional tests that are
concerned primarily with ultimate performance (product), dynamic
approaches tend to be equally interested in gauging the individual's use of
cognitive and metacognitive strategies, their responsiveness to examiner
assistance and support, and their capacity to transfer learning, operating
within the intersubjective realm of tester and testee, to subsequent unassisted
situations.
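The contingent, graduated character of this assistance can be sketched in a few lines of code. This is an illustrative sketch only: the prompt hierarchy, the simulated learner, and all names are our own assumptions, not part of any published DA instrument.

```python
# Illustrative sketch: assistance escalates only while the testee still fails,
# so the recorded score is the amount of help needed, not just pass/fail.

PROMPTS = [
    "no help (independent attempt)",   # level 0
    "general encouragement",           # level 1
    "leading question",                # level 2
    "first step of the solution",      # level 3
    "full demonstration",              # level 4
]

def make_learner(help_needed: int):
    """Simulated testee who succeeds once the prompt is explicit enough."""
    def try_solve(prompt_level: int) -> bool:
        return prompt_level >= help_needed
    return try_solve

def assess_item(try_solve) -> int:
    """Return the least prompt level at which the item was solved."""
    for level in range(len(PROMPTS)):
        if try_solve(level):
            return level
    return len(PROMPTS)  # unsolved even after a full demonstration

independent = assess_item(make_learner(0))
mediated = assess_item(make_learner(3))
print(independent, mediated)  # 0 3
```

A conventional static test would record only pass or fail at level 0; the graduated record distinguishes a learner who needs a hint from one who needs a full demonstration.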
The term Dynamic Assessment was coined by Vygotsky's colleague
Luria (1961) and popularized by Israeli researcher and special educator
Feuerstein. Feuerstein (1979) contrasted his methods with other forms of
assessment, which he labeled static. The difference between a child's own
performance and his/her achievement when guided by an adult or in
collaboration with a more experienced peer reflects the child's
developmental potentiality, referred to by Vygotsky (1978) as the 'zone of
proximal development' (commonly, ZPD).
Interest in DA as a means of cognitive assessment of children has grown
considerably in recent years (Stringer, 1997). DA is based on an approach to
the development of cognition in children, and to the role of adults in that
development, which differs in a number of important ways from
psychometric approaches to assessment. Psychometric assessment seeks to
measure the intelligence of individuals by means of their performance on a
set of tasks at a given point in time and to predict future performance from
such measurement. Its basis and purpose is the quantification of differences
between individuals of similar ages. In order to secure this objective, it is
important that the assessor intervenes as little as possible during the
performance of the tasks. Sociocultural variables which may affect this
performance (the way in which a parent or a teacher contributes to a child's
cognitive development) are not considered to be of fundamental
importance. Similarly the quality and quantity of past learning experiences is
not the focus of interest. In DA, the goal of assessment is very different. By
contrast with the psychometric approach, the child's current level of
performance is not the focus of interest. The assessor's objective is to
understand how children improve their performance in the course of given
cognitive tasks. In order to achieve this detailed understanding, the assessor
must interact with the child during the performance of the tasks.
The terms DA and static testing have been used in the literature to refer
to different modes of evaluating individuals' cognitive capacities. DA refers
to an assessment of thinking, perception, learning, and problem solving by an
active teaching process aimed at modifying cognitive functioning. The major
idea in DA is to observe and measure change criteria as predictors of future
learning. Static testing, on the other hand, refers to measurement of a child's
response without any attempt to intervene in order to change, guide, or
improve the child's performance. The conceptualization behind using change
criteria is that teaching processes, by which the child is taught how to process
information, are more closely related to measures of modifiability, than they
are to static measures of intelligence (Tzuriel, 2000). In other words, the
teaching strategies used within DA are more closely related to learning
processes in school and to other life contexts than are standardized static
methods.
DA is described as a subset of interactive assessment that includes
deliberate and planned teaching and the assessment of the
effects of that teaching on subsequent performance. Its principles rest on four
assumptions:
• Accumulated knowledge is not the best indication of one's ability to
acquire new knowledge, although the two are highly correlated.
• Everybody functions at considerably less than 100% of full capacity;
therefore, everybody can do better (Vygotsky, 1978).
• The best test of any performance is a sample of that performance itself;
therefore, assessment of learning abilities can be accomplished
effectively with the use of learning tasks, especially those involving
teaching, a condition that characterizes school learning.
• There are identifiable obstacles to one's access to, and effective
application of, one's intelligence. Such obstacles include ignorance,
impulsivity, impoverished vocabulary, cultural differences in learning
habits, styles, and attitudes, poor self-concept as learners, a host of
motivational variables, and inadequate development of important
cognitive and metacognitive structures and strategies. By removing
some of those obstacles, one can reveal the ability to function more
adequately.
The term Dynamic Assessment includes a range of methods and
materials to assess the potentiality for learning, rather than a static level
of achievement assessed by conventional tests. Its aim is to reveal an
individual's maximum performance by teaching or mediating within the
assessment and evaluating the enhanced performance that results. As
indicated in the following lines, no single definition of DA exists in the
literature. In this review, DA refers to any procedure that examines the
effects of deliberate, short-term, intervention-induced changes on
student achievement, with the intention of measuring both the level and
rate of learning. In addition, for purposes of our review, DA must
provide corrective feedback and intervention in response to student
failure.
Some researchers claim that DA's twin focus on the level and rate of
learning makes it a better predictor of future learning. Consider the child who
enters kindergarten with little background knowledge. She scores poorly on
traditional tests, but during DA, she demonstrates intelligence, maturity,
attention, and motivation, and she learns a task²or a series of tasks² with
relatively little guidance from the examiner. Because of this, and in spite of
her performance on traditional tests, she is seen as in less danger of school
failure than her classmates who score poorly on both traditional tests and DA.
So, DA may correctly identify children who seem at risk for school failure
but who, with timely instruction, may respond relatively quickly and perform
within acceptable limits. Data from DA may also identify the type and
intensity of intervention necessary for academic success. DA incorporates a
test-teach-test format, conceptually similar to responsiveness-to-intervention
techniques. However, as we will discuss later, DA can potentially measure
one's responsiveness within a much shorter time frame.
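The test-teach-test logic and its twin "level and rate" measures can be expressed as a small sketch. The class, the field names, and the example scores are illustrative assumptions, not a standardized DA scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class TestTeachTest:
    pretest: float    # unassisted (static) score before mediation
    posttest: float   # score after the teaching phase
    sessions: int     # mediation sessions between the two tests

    @property
    def level(self) -> float:
        """Level of learning: performance reached after intervention."""
        return self.posttest

    @property
    def rate(self) -> float:
        """Rate of learning: gain per mediation session."""
        return (self.posttest - self.pretest) / self.sessions

# Two children with identical (poor) static pretest scores:
quick = TestTeachTest(pretest=30, posttest=70, sessions=2)
slow = TestTeachTest(pretest=30, posttest=35, sessions=2)
print(quick.rate, slow.rate)  # 20.0 2.5
```

On the static pretest alone the two children are indistinguishable; the rate measure separates the child who learns quickly with little guidance from the classmate who remains at risk despite mediation.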
Although dissatisfaction with psychometric methods of assessment was
expressed as early as the 1920s (Buckingham, 1921), serious development of
DA did not take place until the 1970s. The exception to this general picture
was the work of Feuerstein (1979), who began to develop the Learning
Potential (or Propensity) Assessment Device (LPAD) in the early 1950s in
response to the need to assess, and place within the education system, refugee
children whose learning experiences had been limited or disrupted by
wartime trauma and cultural dislocation. His work remained an isolated
example of a non-psychometric approach to assessment until the DA studies
of the 1970s and 1980s. This Mediated Learning DA model (LPAD) may be
said to be the only fully dynamic model of assessment (Haywood, 1992), in
that it does not measure task outcomes but rather attempts to teach cognitive
strategies for problem-solving, strategies that are conceived as domain-
independent.
The accuracy of predictions based upon DA, however, is likely to be
greatly influenced by environmental factors. Consider an underperforming
child struggling in a highly challenging classroom context. A DA may
indicate that the child has unrecognized potential, yet if the child's
experiences at home and at school are not modified in a meaningful way,
perhaps because the teacher is overstretched or because insufficient resources
are available, this capacity for development may never be realized (Tzuriel,
2000). Thus the predictive validity of the measure is directly bound up with
practitioner action, a phenomenon that is highly problematic for those who
wish to be guided by psychometric conventions.

The origins of DA
Vygotsky (1978) believed that the early development of understanding occurs
through interaction with others. In this period greater achievement is possible
when a child learns through collaboration with a more experienced or
informed guide. Thus, for Vygotsky, the mental development of a child is
distributed along stages: the child progresses to a more advanced stage when
s/he is able to carry out alone certain tasks for which, in the previous stage,
s/he would have needed the help of an adult (or more capable peer) to
perform successfully. The term "scaffolding" exactly describes the sort of
help the child gets from the adult when s/he is not able to perform the task.
Donato (1994) explained the concept by saying that:

In social interaction a knowledgeable participant can create, by means
of speech, supportive conditions in which the novice can participate
in, and extend, current skills and knowledge to higher levels of
competence. This principle usually underlies therapeutic interventions
but is not the way in which cognitive or language assessments are
traditionally conducted (p.40).

DA is grounded in Vygotsky's Sociocultural Theory of Mind (SCT),
which differs both ontologically and epistemologically from the mainstream
psychological perspectives on mental abilities that inform other approaches to
assessment. According to SCT, individuals' responsiveness to support, or
mediation, to use Vygotsky's term, that is sensitive to their current
level of ability reveals cognitive functions that have not yet fully developed.
Moreover, appropriate mediation enables individuals to exceed their
independent performance, and this in turn stimulates further development
(Vygotsky, 1978).
Thus, DA targets what individuals are able to do in cooperation with
others rather than what they can do alone (Sternberg & Grigorenko, 2002).
Furthermore, DA is not a standalone activity carried out in isolation from
other pedagogical activities. It is instead an on-going, development-oriented
process of collaborative engagement that reveals the underlying causes of
learners' performance problems and helps learners overcome those problems.
In other words, DA does not differentiate instructional activities from
assessment activities because every mediator-learner interaction
encompasses both types of activities. Instead, DA sessions vary according to
learner development so that over time learners engage in increasingly
complex tasks with less mediation.

In a related strand of Vygotskian theory, the concept of the Zone of
Proximal Development (ZPD) (Vygotsky, 1986) provides a framework from
which diverse routes in research and application have developed. ZPD is
defined as the distance between actual developmental level (independent
problem-solving) and potential developmental level (problem-solving under
parent's guidance or collaboration with other peers) in order to clarify the
relation between learning and development. According to Lantolf and Appel
(1994) and Schinke-Llano (1993), Vygotsky's implicit meaning was that
ZPD is the area in which real learning takes place and those functions which
are in the processes of maturation are formed.
In illustrating the construct, Vygotsky (1978) spoke about two
hypothetical children both aged 12 years and both functioning at an eight-
year-old level on standardized tests. While they could be considered as being
at the same age cognitively, one child might be capable of making significant
gains if assisted in the ways of completing the tasks, eventually reaching a
level commensurate with their chronological age; the other child might make
only modest gains. Given such a scenario, in which the ZPD of the former is
much greater, it would be unwise to treat both children as having the same
cognitive profile.
The ZPD in Vygotsky's approach largely rests on two important
interrelated constructs: mediation and internalization. According to SCT,
individuals are always mediated by cultural artifacts, social practices and
activities. They are mediated even when they are working alone, in which
case their cognitive functioning is mediated by their history of interactions
with the world. In other words, those abilities originally residing in an
individual's social interactions become internalized and reemerge as new
cognitive functions. The individual no longer relies on the external
environment for mediation but is able to self-mediate, or self-regulate to use
Vygotsky's term. By focusing on the ZPD and the dialogic nature of
participants' interactions, Vygotsky (1986) collapsed the castle of cognitive
determinism by proposing cognitive modifiability. He maintained that
intelligence is not inherent but developmental, and this view gives rise to
dynamic assessment, critical pedagogy and, finally, critical thinking.
Vygotsky (1978) believed that his interventional testing method opened
a developmental window on the future, showing psychologists what would
happen in the next phases of a child's development. Vygotsky's method
yielded optimistic estimates of children's learning potential, but those
optimistic predictions were doomed to failure if nothing were done to assist
in the realization of the unmasked potential.
The idea of actually intervening in testing situations in order to discover
what examinees would be able to do with some help seems to have been
introduced by Vygotsky (1978). Vygotsky first described the process in the
following way:

Most of the psychological investigations concerned with school
learning measured the level of mental development of the child by
making him solve certain standardized problems. The problems he
was able to solve by himself were supposed to indicate the level of
his mental development at the particular time. But in this way,
only the completed part of the child's development can be
measured, which is far from the whole story. We tried a different
approach. Having found that the mental age of two children was,
let us say, eight, we gave each of them harder problems than he
could manage on his own and provided some slight assistance: the
first step in a solution, a leading question, or some other form of
help. We discovered that one child could, in cooperation, solve
problems designed for twelve-year-olds, while the other could not
go beyond problems intended for nine-year-olds. The discrepancy
between a child's actual mental age and the level he reaches in
solving problems with assistance indicates the zone of his
proximal development; in our example, this zone is four for the
first child and one for the second. Can we truly say that their
mental development is the same? Experience has shown that the
child with the larger ZPD will do much better in school. This
measure gives a more helpful clue than mental age does to the
dynamics of intellectual progress (pp. 186-187).
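The arithmetic in the quotation can be made explicit. The function name and parameter names below are ours, but the numbers are Vygotsky's own example.

```python
def zpd(unassisted_mental_age: int, assisted_level: int) -> int:
    """Zone of proximal development: the distance between what the child
    solves alone and what s/he solves with assistance."""
    return assisted_level - unassisted_mental_age

# Both children solve eight-year-old problems on their own; with help,
# one reaches twelve-year-old problems, the other only nine.
first = zpd(unassisted_mental_age=8, assisted_level=12)
second = zpd(unassisted_mental_age=8, assisted_level=9)
print(first, second)  # 4 1, the zones given in the quotation
```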
Seen in its theoretical context, DA is a broad approach, not a set of
specific tests. The psychologist's goal is to identify what cognitive
skills need developing and strengthening in a child, to analyze the cognitive
requirements of given types of task (which can inform differentiation of the
curriculum for the child), and to advise upon and support the teaching of
the child. This will involve direct teaching of cognitive skills, as well as of
traditional curriculum content. This analysis focuses on the three 'partners in
the learning process': the child, the task, and the mediator (typically parents
or teachers). DA aims to help optimize, through understanding the interplay
of these essential elements, the match between the learner and the curriculum
on offer.

Why DA?

Limitations of static tests_ The limitations of static assessments are
considerable in the field of SLT, where the multidimensional nature of
language "does not easily lend itself to single unitary measures" (Dockrell,
2001). According to her, diagnostic tests which target specific aspects of the
language system are "consistently inadequate for determining whether a child
is developing typically or is experiencing a delay". She also argued that:

Standardized assessments fail to tell us how a child approaches a task
or about the difficulties he may encounter. Existing tests are of little
value for planning interventions. Process-based assessments such as
those that would fall under the umbrella term of "dynamic
assessment" are moves towards a more informative approach (p. 13).

Furthermore, Nettelbladt et al. (1989) concluded that detailed case
studies are at present the only way to elucidate crucial individual differences
in children with language disorders, and Enderby and Emerson (1995)
concurred that 'there are few standardized assessments available and
commonly in use which would cover the range of disorders that one may find
within the subject pool'.
Process versus product_ Conventional language assessment does not
make explicit the child's learning strategies and methods, or his/her particular
strengths and weaknesses in learning and problem-solving behavior, which
are termed "cognitive functions" by Feuerstein (1980). Cognitive learning
strategies are analogous to, concurrent with, and interwoven with, language
learning strategies.
Assessment of language should access the formal concepts that enable a
child to make sense of experiential learning and the skills s/he must develop
to facilitate or enhance language learning (Kozulin, 1990). The assessment
then becomes ³domain general´; it does not describe the specific
manifestation of the linguistic weakness, but rather the weakness in
underlying skills of learning, language processing and problem-solving.
Predicting readiness for change_ Feuerstein (1980) believed that
assessment should demonstrate the individual's potential for change when the
appropriate type of intervention is available. The assessment evaluates the
individual's present level of functioning and potential for change by assisting
in the assessment task; this indicates his/her need for assistance. Existing
language assessments typically do not fulfill this role. The procedure will be
familiar to practitioners who frequently try to get a sense of a client's
stimulability to gauge their readiness for change. This approach, however,
has no stable methodology and is frequently the product of a practitioner's
experience. Birnbaum and Deutsch (1996) illustrated how recommendations
for intervention generated by DA can be used in collaborative target setting
in the educational context. Similarly, Lauchlan and Carrigan (2005) presented
materials specifically designed to facilitate the use of dynamic assessments in
the local psychological service and enable Dynamic Assessment to be
transferred to everyday practice.
Olswang and Bain (1996) put the theory into practice and demonstrated
a high correlation between performance on a DA and a measure of immediate
change in language production (an increase in mean length of utterance
[MLU] across the study period). Olswang and Bain's discussion
highlighted a number of important issues. Some children in the study made
little advance through the intervention, despite having obtained positive
indications on the DA. Possible explanations were that these children lacked
the pre-requisite skills necessary for the next level of development, or that
the treatment techniques and timing were inappropriate. On the latter point,
Olswang and Bain noted that, although the intervention methods of modeling,
recasting and elicited imitation are documented and proven methods for
teaching grammar to children, they may not have been the most effective
methods for the children in the study. They speculate about more directive,
less naturalistic methods, although the children may equally have benefited
from interventions utilizing a more mediational approach.
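Mean length of utterance, the outcome measure in the study above, is conventionally computed as the total number of morphemes divided by the number of utterances. The sketch below is our own illustration; the morpheme counts are invented sample data, as in practice they would come from a transcription.

```python
# MLU = total morphemes / number of utterances.
# The per-utterance morpheme counts below are invented for illustration.

def mean_length_of_utterance(morphemes_per_utterance):
    """Average morphemes per utterance across a language sample."""
    return sum(morphemes_per_utterance) / len(morphemes_per_utterance)

# Five utterances containing 3, 4, 2, 5 and 4 morphemes respectively:
print(mean_length_of_utterance([3, 4, 2, 5, 4]))  # 3.6
```

An increase in this average across the study period is what Olswang and Bain took as evidence of immediate change in language production.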
Predictive validity_ A fourth reason for the necessity of dynamic
assessment is its predictive validity. Caffrey and Fuchs (2008) explored the
predictive validity of DA in a mixed-methods review of 24 studies. For 15 of
the studies, they conducted quantitative analyses using Pearson's correlation
coefficients. They descriptively examined the remaining studies to determine
whether their results were consistent with findings from the group of 15. The
authors implemented analyses in five phases: they compared the predictive
validity of traditional tests and DA, compared two forms of DA, examined
the predictive validity of DA by student population, investigated various
outcome measures to determine whether they mediate DA's predictive
validity, and assessed the value added of DA over traditional testing. Results
indicated superior predictive validity for DA when feedback is not contingent
on student response, when applied to students with disabilities rather than at-
risk or typically achieving students, and when independent DA and criterion-
referenced tests were used as outcomes instead of norm-referenced tests and
teacher judgment.
Performance improvements after mediation_ This conclusion is
shared by almost everyone who has done research on DA. The magnitude of
such improvement in performance may depend on the kind of teaching, its
intensity, the specific nature of each person's cognitive barriers, and the
psychological distance between the content of the teaching and the content of
the performance tests (Budoff, 1987). Some researchers have systematically
compared the relative effectiveness of different intervening activities,
including mediation of logic structures, graduated prompts, and no
intervening activity between pretests and posttests. Invariably, mediation
leads to greater performance gains than no intervention, as well as to greater
gains than graduated prompting (Burns, 1991; Kester & Peña, 2001).
Estimates of learning potential_ Estimates of learning potential
gained from DA are often more substantially related to subsequent learning
and performance in teaching situations, such as in school, than are estimates
of intelligence gained from static, normative testing. One might well ask:
why are we even interested in such a phenomenon, given that most DA
advocates are less concerned with the prediction of group performance than
they are with learning how to defeat pessimistic predictions made for
individuals on the basis of static testing? One answer lies in that very
distinction between group and individual prediction. Static, normative tests of
intelligence do a superb job of forecasting future performance, especially
the academic achievement, of large groups, although the predictive
correlation usually does not exceed .70 or .75, leaving a substantial portion of
the variance in achievement scores statistically unassociated with scores on
the predictor tests. DA is often especially interested in that portion of the
remaining variance that is not attributable to the unreliability of the predictor
tests, the unreliability of the criterion tests, or their joint unreliability. That
residue is likely to constitute a validity problem: future learning is not
perfectly predicted by knowing how much has already been learned,
especially given unequal opportunities to learn. Proponents of DA are also
typically more concerned about individual performance than about the
performance of groups.
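The point about unexplained variance follows from the standard relation between a correlation coefficient and the proportion of variance it accounts for (r²). The short sketch below, using the correlation values just cited, is our own illustration of how much achievement variance remains unaccounted for.

```python
# Share of achievement variance left statistically unassociated with the
# predictor when the predictive correlation is r: 1 - r**2.
for r in (0.70, 0.75):
    unexplained = 1 - r ** 2
    print(f"r = {r:.2f}: {unexplained:.1%} of the variance is unexplained")
```

Even at r = .75, more than two-fifths of the variance in achievement scores is left open, and it is within this residue that DA stakes its claim.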
Applications of DA in assessment of cultural differences_ DA has
been found to be useful in comparing groups from different cultural
backgrounds (Tzuriel & Kaufman, 1999). In general, culturally different
children perform much better in DA than in static tests. For example, Tzuriel
and Kaufman (1999) compared children from two distinctively different
cultures, Israeli and Ethiopian. A major question raised recently with
Ethiopian immigrants to Israel is how to assess their learning potential, in
view of the fact that their scores on static intelligence tests are low and that
standard testing procedures inaccurately reflect these children's abilities. The
Ethiopian immigrants had to overcome a civilization gap in order to adapt to
Israeli society. This issue transcends the specific context of the Ethiopian
immigrants, both theoretically and pragmatically. The main hypothesis of the
investigators was that many of these immigrants would reveal cultural
differences but not cultural deprivation; therefore, they would reveal high
levels of modifiability within a DA situation. The Ethiopian group was
compared to a group of Israeli-born children with dynamic approaches of
evaluation. The findings showed initial superiority on all tests of the Israeli-
born comparison group over the Ethiopian group. The Ethiopian children's
scores were lower than those of the Israeli-born children on the pre-teaching
phase of both DA tests, but they improved their performance more than the
Israeli children did and narrowed the gap in the post-teaching performance.
The gap between the two groups was even narrower in a transfer phase
involving more difficult problems. In spite of the initial superiority of the
Israeli-born children in every test, after a short but intensive teaching process,
the Ethiopian group narrowed the gap significantly.
Use of DA in developmental research_ The account of developmental
research using DA is based on 10 studies, carried out at Bar-Ilan University,
in which parent–child mediated learning strategies were observed in relation
to children's cognitive modifiability. Tzuriel has reviewed the findings in
detail (1999, 2001). Two topics from this series are discussed here:
(a) prediction of children's cognitive modifiability by mother–child strategies
of mediated learning, and (b) the relative prediction of cognitive modifiability
by distal variables (mother's socioeconomic status [SES], mother's IQ,
mother's emotional attitudes toward the child, the child's personality
orientation, the amount of time parents spend with their children during the
week) and proximal (mediated learning interaction) variables.

DA models

Over time, DA has evolved into two branches of study: clinically-oriented
DA and research-oriented DA. Clinically-oriented DA began as an
educational treatment to remediate cognitive deficiencies presumed to cause
learning problems. Its most well-known operationalization is Feuerstein's
(1980) Learning Potential Assessment Device (LPAD). The LPAD is a non-
standardized method of assessing and treating the cognitive deficiencies of
children with learning problems. Treatment duration can last many years.
Research-oriented DA, by contrast, originated as an assessment tool. It
typically involves a standardized assessment during which the examiner
guides a student's learning in a single session. The time required for the
student to reach mastery, or the necessary level of instructional explicitness
to advance the student, serves as an index of the student's learning potential.
Researchers and practitioners have used this form of DA to identify students
who may require more intensive intervention and to place them in settings
where such interventions can be implemented. The LPAD, the test battery
devised by Feuerstein, reshapes the test situation from one that is highly
standardized to an interactive process between three essential components,
namely the learner, the assessor and the task (Feuerstein, 1987; Lidz, 1991).
This triad is central to Feuerstein's mediated learning theory, known as
structural cognitive modifiability, and is important and entirely relevant to
both assessment and intervention in second language teaching and testing.

Figure 1. Relationship between task, teacher and learner in DA

In speech and language therapy, assessment and intervention differ in
their underlying philosophy. In therapy the interaction with the therapist is
central, while in assessment, efforts are made to remove the influence of the
assessor and reduce inter-tester variability. Dynamic approaches, following
Vygotsky (1978), retain the centrality of the tester in the assessment process
and see the diminishing dependence of the learner on the adult as an
indicator of change in the child.
The nature of the assessment procedure varies between different DA
models. A research-oriented DA procedure may involve three phases, a pre-
test, teaching, and post-test, in which interactions take place in the teaching
or learning phase. Some of these models use a series of graduated prompts
where the assessor intervenes at a minimal level and then with increasing
teaching support, if required, in order to achieve success on the tasks of the
test. Here, the stages and content of the intervention are pre-determined and
still retain some characteristics of standardization, allowing the responses of
different children to be compared.
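A graduated-prompts teaching phase can be pictured as a fixed hierarchy of increasingly explicit help, with the amount of support a child needed serving as the score. The prompt wording and scoring rule below are our own illustration of the idea, not a published protocol.

```python
# A minimal sketch of a graduated-prompts teaching phase. The prompt
# hierarchy and scoring rule are illustrative, not a published protocol.

PROMPT_HIERARCHY = [
    "general encouragement",         # minimal intervention
    "focus attention on the task",
    "leading question",
    "model the first step",
    "full demonstration",            # maximal teaching support
]

def administer_item(solves_at_level):
    """Deliver prompts in order until the child succeeds.

    solves_at_level: the first hierarchy index at which the child solves
    the item (None if even full demonstration fails). Returns the number
    of prompts given, i.e. how much support was required.
    """
    for level, prompt in enumerate(PROMPT_HIERARCHY):
        # ... assessor delivers `prompt` here ...
        if solves_at_level is not None and level >= solves_at_level:
            return level + 1
    return len(PROMPT_HIERARCHY)  # support exhausted without success

# A child who succeeds once given a leading question needed three prompts:
print(administer_item(solves_at_level=2))  # 3
```

Because the hierarchy and its order are fixed in advance, prompt counts from different children can be compared directly, which is exactly the residual standardization the text describes.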
A different model within the DA approach is that of the mediated
learning group. In these models, intervention during the assessment is not
standardized at all and is totally responsive to the individual needs of the
child. The use of three phases (test/teach/test) may also be at the discretion of
the assessor. The mediated learning models are similar to 'clinical' DA
interventions, because they are highly responsive to individual needs and lead
to diagnostic and prescriptive insights that are uniquely relevant to a
particular child. The assessor mediates cognitive strategies to children rather
than teaching better task performance on a specific test item. In such models,
key components affecting children's learning needs are identified by means
of detailed analysis of the assessor's intervention (Lidz, 1991). The analysis
is of:
• within-child cognitive factors, such as the child's use of 'intellective'
skills (e.g. their ability to make comparisons, to conserve, and to
generate and test hypotheses), as well as non-'intellective' aspects such
as habits, attitudes and degree of motivation;
• the cognitive demands of each assessment task; and
• the content of the assessor's mediation.
The modality in which the task is presented (e.g. verbal, visual,
numerical, pictorial), the level of complexity, the task content, and the
specific cognitive skills required for successful performance are the
assessor's analytical tools, which are deliberately manipulated in response
to the behavior of the child in the test situation.
Distinguishing between different DA models is relevant to educational
psychologists' (EPs') practice because the most widely known are those of
mediated learning, which have in general been developed from the specific
theories of Feuerstein. The analysis of cognitive functions through the
mediational intervention is the only DA analytical tool that has been made
available to educational psychologists in initial or in-service training.
The relative merits of clinical and standardized approaches continue to
split advocates of dynamic testing and, to a significant extent, views often
reflect the research-based or more clinical orientation of the commentator.
As Grigorenko and Sternberg (1998) noted, research psychologists tend to be
concerned with the measurement of change while applied psychologists will
be more interested in the promotion of change. Lidz (1991) observed that
those dynamic approaches that appear best to fit scientific requirements often
appear to have less utility for diagnosis and intervention.
Notwithstanding the undeniable merits of DA, three concerns, or better,
criticisms are typically expressed about it: namely, that its construct is
fuzzy, its technical characteristics are largely unknown, and its administration
and scoring are labor-intensive. Fuzziness occurs, for example, when, at the
most general level, researchers fail to distinguish for their audience between
clinically-oriented and research-oriented DA. Second, the related literature
does not typically report the reliability and validity of DA measures. This
stems partly from a deliberate rejection of standardized procedures by some
researchers. Many advocates of clinically-oriented DA believe
standardization contradicts its spirit and theoretical orientation. Third, critics
have suggested that the time required to develop protocols and train
examiners may not be worth the information DA provides. DA protocols
have been around for decades, but because of inadequate information about
their psychometric properties, more practical investigations may be needed to
establish their validity and utility.
This criticism may be better understood by contrasting the two types of
DA. Clinically-oriented DA involves relatively little time to develop because
scripted protocols are rarely developed. Insight and expertise are essential,
and student responsiveness to instruction is relatively dependent on the
specific educator providing the help. Conversely, research-oriented DA
requires a laborious process of protocol development because the protocols
must be standardized (and possibly norm-based) on the target population. At
the same time, the demand for practitioner insight and expertise is less.
Because procedures are standardized, practitioners can be trained in about the
time it takes to train examiners in traditional testing.

DA versus traditional assessment


DA has been variously described as learning potential assessment, mediated
learning, mediated assessment, assisted learning and transfer by graduated
prompts. Across its variants, DA differs from traditional testing in terms of
the nature of the examiner±student relationship, the content of feedback, and
the emphasis on process rather than on product (Grigorenko & Sternberg,
1998).
In traditional testing, the examiner is a neutral or ³REMHFWLYH´SDUWLFLSDQW
who provides standardized directions and does not typically provide
performance-contingent feedback. The DA examiner, by contrast, not only
gives performance-contingent feedback but also offers instruction in response
to student failure WR DOWHU RU HQKDQFH WKH VWXGHQW¶V DFKLHYHPHQW 7R SXW it
differently, traditional testing is oriented toward the product of student
learning (i.e. level of performance), ZKHUHDV WKH '$ H[DPLQHU¶V LQWHUHVW LV
both in the product and in the process (i.e. rate and path of growth) of student
learning. In other words, VRPHUHVHDUFKHUVFODLPWKDW'$¶VWZLQIRFXVRQ the
level and rate of learning makes it a better predictor of future learning.
DA differs from conventional static tests in regard to its goals,
processes, instruments, test situation, and interpretation of results (Feuerstein,
1979; Lidz & Elliott, 2000; Tzuriel, 2001; Vygotsky, 1978). Several
arguments have been raised against standardized static tests. The most
frequently heard is that they are inadequate for revealing the
cognitive capacities of children who come from culturally different
populations and/or children with learning difficulties (Feuerstein, 1979).
Traditional (static, normative) methods of psychological and
psychoeducational assessment do not require or permit active intervention on
the part of examiners. In general, methods in which the role of examiners is
more active and intervening are frequently referred to as interactive
assessment. One subclass of interactive assessment includes those methods in
which the activity and intervention of examiners is specifically designed to
produce at least a temporary change in the cognitive functioning of
examinees; this subclass is known as dynamic assessment (DA). The term
dynamic implies change. A major goal is to assess processes of thinking that
are themselves constantly changing (hence the term assessment rather than
measurement). In addition, examiners do some active and directed teaching
precisely in order to produce change. Thus, the basic datum in DA is a
change variable. DA refers to assessment of thinking, perception, learning,
and problem solving by an active teaching process aimed at modifying
cognitive functioning. The contrasting approach, static testing, is testing in
which examiners present problems or questions to examinees and record their
responses with no attempt to intervene in order to change, guide, or improve
the examinees' performance.

Many of those who argue that dynamic approaches should meet the
requirements of psychometric testing have as their prime goal the
development of more valid measures of intellectual functioning. They share
with Feuerstein (1997) a concern that children's potential is often
underestimated and, as a consequence, low teacher expectations and
assignment to special schooling may act in a self-fulfilling fashion. However,
unlike Feuerstein, they emphasize the importance of standardized
administration and test reliability; that is, that measures obtained from an
individual should be relatively consistent from one examiner to another
(inter-rater) and from one occasion to another (test-retest). If the interaction
between examiner and examinee is not standardized but, rather, is
orchestrated by the examiner according to individual circumstances, it is
difficult to make judgments about an individual's performance relative to
others, as this is likely to reflect differing degrees of assistance (Budoff,
1987). Low inter-rater reliabilities on scales assessing deficient cognitive
functions do not engender confidence.
Feuerstein's (1997) response to such criticisms is to argue that, as
dynamic approaches seek to induce change, seeking consistency across
assessments is a pointless task. Not only, in his opinion, is such a task
illogical; it may also reduce the ability of the child to demonstrate their true
capability, particularly if the testing situation is socially or culturally
unfamiliar. For Feuerstein, the standardized testing procedure as is
traditionally found in IQ testing:

… strongly affects the total interactive process between the
examiner and examinee … the sterilization of this relationship …
creates great barriers for the manifestation of the true propensity of
the individual. (p. 304)

To a significant extent, the relative importance of standardization may
depend upon the purpose of the assessment. If assessment is to be used for
educational selection, the allocation of resources, or for system accountability
(for example, to judge the performance of teachers or schools), one is more
likely to require data that can be used to enable systematic comparison
between children. In such cases, more standardized, reliable measures are
necessary than in those circumstances where assessment is used to provide
data that inform educational intervention with regard to an individual learner
(Gipps, 1999).
DA is based on the theories of integrated learning, teaching and
assessment defined by Vygotsky, and on Feuerstein's theory of cognitive
modifiability. Such Dynamic Assessment, which includes a phase of
mediation or teaching, aims to provide conventional measures of learning as
well as to explore the nature of that learning. Dynamic Assessment is
assumed to be complementary to the conventional tool kit of psychometric
tests available to school psychologists, providing another perspective on
learning. It supports closer links between psychology practice and the
teaching and learning in the classroom.
DA must not be considered simply as a better version of a static test.
Dynamic assessment is aimed at creating a reliable basis for
recommendations regarding students' learning potential and desirable
intervention strategies. Thus, students with very low language learning
potential may need a special cognitive enrichment program (e.g.
Instrumental Enrichment; Feuerstein, Rand, Hoffman, & Miller, 1980).
Direct instruction of English as a second or foreign language to these
students might be relatively inefficient if it is not accompanied by a more
general cognitive education intervention. In general, one should seek
continuity between assessment and teaching. Thus, under ideal conditions,
a cognitively-oriented curriculum will have a cognitively-based assessment
and vice versa.
DA and traditional tests correlate similarly with future achievement
measures. However, researchers have demonstrated that DA can identify
students who will respond to instruction (Bain & Olswang, 1995; Budoff,
1971), distinguish between minority students with and without language
disorders (Peña, 1992), and predict future educational placement (Samuels,
1992). Researchers in several studies have reported that DA can contribute to
the prediction of achievement beyond traditional tests (Byrne, 2000; Meijer,
1993; Resing, 1993). However, this seems to depend on the analysis
techniques and domains of study (Swanson, 1994).

Potential of DA
In a recent paper, Sternberg and Grigorenko (2001) have questioned why this
'promising' approach has not been embraced more widely by researchers and
clinicians. While their concern is more narrowly focused upon 'dynamic
testing' than the more encompassing remit of Dynamic Assessment, with its
greater emphasis upon broader contextual influences (Haywood, 2001), their
response to their own question, that the approach has yet to demonstrate its
potency, has validity. In the opinion of Elliott (2000, 2001), one
of the major inherent difficulties can be traced back to Binet's original brief:
researchers and clinicians have largely envisaged dynamic measures as more
sophisticated means of tackling the questions that child IQ tests set out to
answer, that is, whether a given child required some form of special
education.
Thus these measures have traditionally had an important selection and
classification function (Budoff, 1968). However, rather than trying to develop
new tools for old tasks, it may be wiser to consider whether new approaches
provide the means for a reframing of the psychologist's traditional role.
Modeling itself upon the medical model, professional educational psychology
has traditionally emphasized a classification function:

Classification in education is pursued for the same reasons as
diagnosis in the health and mental health fields: the notion that
effective treatments are known, are more or less directly related to
diagnoses, and will be implemented for persons who fit a diagnostic
category. (Haywood et al., 1990, p. 412)
However, in both the UK and USA the value of classification as a means
of informing educational interventions has been increasingly questioned
(Reschly, 1988). Accordingly, psychologists are now less likely to ask 'how
can we most appropriately sort and classify children?' but rather 'how do we
teach this child?' and 'how can we help regular classroom teachers
individualize their programs?' (Lidz, 1992). For many researchers, the strength
of dynamic measures is that these may yield better predictions of subsequent
educational performance than traditional static cognitive tests such as the
Wechsler Scales. Here there are clear echoes of Binet's original brief: sound
prediction, it is argued, can assist in decision-making about educational
resourcing, in particular, whether special schooling is required. Following the
concerns about the danger of underestimating children's abilities raised by
Feuerstein, Budoff (1968) and many others, a number of European research
teams (e.g. Guthke & Beckmann, 2000, in Germany; Resing, 2000, Hamers et
al., 1991, and Hessels, 2000, in the Netherlands) have endeavored to develop
measures that might reveal the child's 'true' potential.
Hessels and his colleagues, for example, have developed a dynamic
measure for use with minority populations in the Netherlands and, more
recently, in Switzerland (Hessels & Hessels-Schlatter, 2002). Hessels found
that a significant number of Turkish and Moroccan children living in the
Netherlands, whose performance on a static IQ test would have resulted in a
recommendation for special schooling, actually demonstrated much greater
capabilities when dynamic measures were employed.
In a review of the literature, Grigorenko and Sternberg (2002) contend
that evidence of greater predictive validity on the part of dynamic tests
generally remains unpersuasive. However, this assertion is contested by a
number of researchers (Guthke & Beckmann, 1997; Hessels, 2000; Resing,
2000; Tzuriel, 2000) who provide data indicating the greater predictive power
of their dynamic measures. The difference in predictive power between static
and dynamic tests can also be increased by considering populations of
homogeneously poor performers on the two sets of measures (Guthke &
Gitter, 1991; Beckmann, 2001). The effect of this is to restrict the variance of
the static measures. The degree of predictive validity, however, is also
contingent upon the outcome criteria selected and thus is likely to be different
for teacher grades, standardized test scores or for curricular or non-curricular
problem-solving tasks (Beckmann & Guthke, 1995; Guthke et al., 1997).
The accuracy of predictions based upon dynamic assessment, however, is
likely to be greatly influenced by environmental factors. Consider an
underperforming child struggling in a highly challenging classroom context.
A Dynamic Assessment may indicate that the child has unrecognized
potential, yet if the child's experiences at home and at school are not modified
in a meaningful way, perhaps because the teacher is overstretched or because
insufficient resources are available, this capacity for development may never
be realized (Tzuriel, 1992). Thus the predictive validity of the measure is
directly bound up with practitioner action, a phenomenon that is highly
problematic for those who wish to be guided by psychometric conventions.
A further difficulty concerning the realization of perceived potential
centers upon over-simplistic notions on the part of many researchers
concerning decisions about special/mainstream placement. For many, the
raison d'être for the approach is that it will reveal untapped potential. Where
a child with learning difficulties is seen as being able to perform significantly
better, given individualized, scaffolded instruction and an emphasis upon
meta-cognitive processes, it is commonly argued that they demonstrate
potential. Thus, despite a low current level of performance, it would
typically be concluded that the child should remain in mainstream classes. In
contrast, where there is little evidence of meaningful gain during the
assessment, special schooling may be perceived as more appropriate.
Elliott and Lauchlan (1997), however, demonstrate the rather over-
simplistic nature of such assumptions, pointing out that children remaining in
mainstream are often unlikely to receive the additional inputs that may help
untapped potential to be realized. In inclusive education systems, however,
decisions about mainstream versus special school placement are less
necessary and the key resource issue concerns the forms of help a child might
need to maintain their mainstream placement. Here, again, notions of
potential are problematic. Putting to one side, issues about the validity of the
measures, even if one were able to determine differential levels of potential,
significant questions remain. Should a school or local authority, for example,
provide greater resources to the child with more, or the one with less,
potential? If the answer is that both should be treated the same, one might
question why assessment of learning potential is necessary at all. The standard
UHVSRQVHWRWKLVTXHVWLRQLVWKDWWKHWHDFKHU¶VH[SHFWDWLRQVPD\EHUDLVHGWKH
so-FDOOHGµ3\JPDOLRQ(IIHFW¶ (Rosenthal & Jacobson, 1968).
While the advantages, in the form of raised expectations for the child
found to have higher potential than was formerly recognized are clear, what
are the implications for those less fortunate? Is there a danger that the child
who is assessed as having limited potential will subsequently encounter an
unchallenging home and school environment in which few demands are
made? Could such an assessment prove to be self-fulfilling? While the search
for superior predictors is often assumed to be a key task by researchers in this
field, others with a more applied and educational orientation such as
Haywood (1993) and Elliott (2000) have questioned this conception:

'… Prediction never was a particularly defensible objective. I could
never understand what makes psychologists want to be fortune tellers!
There should be scant satisfaction in knowing that our tests have
accurately predicted that a particular child will fail in school. There
are many sources of such predictor information. What we need are
instruments and approaches that can tell us how to defeat those very
predictions!' (Haywood, 1993, pp. 5–6)

Haywood's call for approaches that can defeat gloomy predictions reflects
a growing shift in the perceptions of the psychologist's role in both the UK
and the USA. An increasingly inclusive education system in which the
allocation of resources for children with special needs is closely tied to
educational performance, and individualized target-setting bureaucracies for
children with special needs have resulted in pressure for many educational
psychologists to adopt a more interventionist role. Schools, increasingly
influential in determining the nature and focus of LEA services, look to
psychologists to help generate strategies to assist children with difficulties, a
task that is not easily determined on the basis of traditional IQ measures
(Fuchs et al., 1987). Recent surveys of UK educational psychologists
(Deutsch & Reynolds, 2000; Freeman & Miller, 2001) highlight widespread
recognition that Dynamic Assessment can assist in collaborating with
teachers to plan educational interventions.
There are a number of ways by which dynamic measures might contribute
to educational intervention. These could be considered to fall into two broad
groupings:
1. They may be used to provide insights into the unique nature of an
individual's learning and reasoning and, in the light of this, to construct
with teachers and parents an individually tailored intervention.
2. They may provide a particular profile that is matched by a
prescriptive intervention program.
A number of UK educational psychologists are seeking to develop
dynamic approaches that draw upon a broader, more eclectic, theoretical base.
In this respect, Elliott (2000) describes the type of information that teachers
routinely require from psychologists. This includes aspects of the child's: (i)
cognitive functioning (e.g. comprehension, memory, ability to transfer
learning, capacity for abstraction, problem solving); (ii) disposition,
motivation and affect (flexibility, impulsivity, persistence, capacity to handle
failure); and (iii) the ways by which the child differentially responds to
various forms of adult guidance and encouragement. By means of their
interactive approach, and based upon detailed theoretical models, dynamic
approaches offer the promise of gaining insights into such processes (for
detailed case study illustrations of assessment-intervention applications, see
Lidz and Elliott, 2000). However, it is important to note that while teachers
may testify to the practical uses to which findings from dynamic assessments
may be put (Elliott & Lauchlan, 2000; Figg, 2002), those for whom the
scientific status of assessment is a precondition (e.g. Sternberg & Grigorenko,
2002) contend that much further methodological and psychometric work is
necessary.
Rather than undertaking highly individualized assessments, other
approaches seek to ascertain whether students are likely to benefit from
particular forms of intervention. At its most extreme, judgments based upon
dynamic measures might even determine which children might be most
suitable to receive schooling in some countries in the developing world where
this may not be available for all. Sternberg et al. (2003), for example, have
demonstrated that static test scores of rural Tanzanian children were
substantially improved by intervention and, importantly, that correlations
between pretest and post-test scores were weak. Thus conventional static
cognitive measures may be very poor indicators of such children's true
abilities. These findings also guard against the criticism that the gains were
merely the result of test practice, substantially the same for all, and thus
of no help in differentiating between individuals.
More often, however, DA may be seen as helpful for indicating the
potential value of a certain teaching program or approach (Fernandez-
Ballesteros and Calero, 1993). Guthke et al. (1986), for example, have
devised a test that examines adult competence for learning a foreign language.
Swanson (1995, 2000) argues that the approach may offer guidance
concerning the most appropriate psycho-educational intervention model to
adopt. Children who appear not to respond well to the probes and cues
provided in a Dynamic Assessment may, he argues, be less likely to benefit
from highly interactive cognitive approaches that utilize 'active or constructed
learning strategies'. Rather, they may gain more from:
…procedures which place minimal demands on constructing
strategies. Such approaches would emphasize relevant conceptual
knowledge (via drill and practice), motivation, and programmed (e.g.
computer mediated) instruction. (Swanson, 1995, p. 691)

A Swiss team (Buchel et al., 1997; Hessels-Schlatter, 2003) have devised
a dynamic measure of analogical reasoning, the Analogical Reasoning
Learning Test (ARLT), for use with those with complex learning
difficulties. The purpose of the measure is to differentiate between those who
can benefit from assistance ('gainers'), in particular from metacognitive and
memorization training, learn to apply the rules of inductive reasoning, and can
maintain and transfer such learning, and those who fail to achieve such
learning after receiving the training ('non-gainers'). In a recent study of
students with severe learning difficulties (i.e. with traditional IQ scores in the
range of 35–55), Hessels-Schlatter found that those deemed to have high
learning potential on the basis of the ARLT (gainers) benefited from a four-
week inductive reasoning training program, while non-gainers failed to
demonstrate an improvement in their reasoning.
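Expressed procedurally, the gainer/non-gainer distinction rests on comparing unassisted pretest performance with performance after the mediated training. A minimal sketch, assuming a simple gain-score criterion (the threshold and the scores in the demonstration are invented; the ARLT's actual criteria also weigh maintenance and transfer):

```python
def classify_learner(pretest, posttest, min_gain=3):
    """Label a testee 'gainer' or 'non-gainer' from raw pre/post scores.

    The gain threshold is purely illustrative; the ARLT's own criteria
    also require maintenance and transfer of the trained reasoning rules.
    """
    return "gainer" if posttest - pretest >= min_gain else "non-gainer"

# Two testees with the same low pretest score are separated only by
# their responsiveness to the mediated training (invented scores).
print(classify_learner(pretest=4, posttest=9))   # gainer
print(classify_learner(pretest=4, posttest=5))   # non-gainer
```

A traditional static score would treat these two testees identically; only the mediated retest separates them.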
Such measures may be particularly important for this population, as
traditional measures are often valueless in differentiating between individuals
with such low levels of functioning. However, as Hessels-Schlatter pointed
out, it is essential to have high yet realistic expectations of such individuals,
who may easily be discouraged by inappropriately targeted interventions. A
similar differentiation between high-scorers, learners and non-learners is
provided by Carlson and Wiedl (2000) in their use of dynamic approaches
with adult schizophrenics.

In a rare study in which the assessment of learning potential was
accompanied by a lengthy period of cognitive training (involving twice-
weekly sessions for more than a year), Lauchlan and Elliott (2001) found that,
for their sample of children with moderate and severe learning difficulties,
those who demonstrated high potential on the dynamic test, and who were
subsequently provided with cognitive training, appeared to make the greatest
gains. Interestingly, other children who were designated as 'high potential'
but who did not receive the intervention subsequently performed more poorly
than a 'low potential' group who did receive the intervention. This finding
supports the suggestion that an indication of potential has little value if the
environment is not modified accordingly.
For many researchers, DA continues to need a much stronger
psychometric foundation (Snow, 1990; Grigorenko & Sternberg, 1998). For
those with a more clinical orientation, there may be more sympathy with
Feuerstein's contention that the most important criterion for gauging
the value of the approach is the student's functioning following exposure to
intervention drawing upon the results of the dynamic assessment. However,
future empirical studies need to go beyond the case study testaments (e.g.
Lidz & Elliott, 2000) that are most usually cited as supportive evidence. To
date, there is a dearth of systematic and controlled studies that compare the
differential impact of interventions based upon dynamic and static approaches
(Guthke et al., 1997). In particular, future studies need to examine the extent
to which dynamic assessments can: (a) result in recommendations for
intervention that are (b) meaningful to, and will be employed by, practitioners
(parents, teachers, therapists) and which (c) subsequently demonstrate
meaningful gains that are unlikely to have been achieved in their absence.
Such studies will be complex and problematic yet may be necessary if the
claims of advocates of the approach are to be taken up on a widespread basis.

Problems with DA

Although the concept of DA is not new, it is not yet widely practiced and is
still virtually unknown to many psychologists and educators. There are many
reasons for this state of affairs, some conceptual and others quite practical.
The following points describe some of the shortcomings of DA:

• There are metric problems that have yet to be addressed seriously,
much less solved. The question of reliability is a pressing one,
especially given that one sets out deliberately to change the very
characteristics that are being assessed. At least a partial solution is to
insist on very high reliability of the tasks used in DA when they are
given in a static mode; that is, without interpolated mediation.
• Another persistent problem is how to establish the validity of DA.
Ideally, one would use both static testing and DA with one group of
children and static, normative ability tests with another group. The
essential requirement would be that a subgroup of the DA children
would have to be given educational experiences that reflected the
within-test mediation that helped them to achieve higher performance
in DA. The expectation would be that static tests would predict quite
well the school achievement of both the static testing group and that
subsample of the DA group that did not get cognitive educational
follow up. Static tests should predict less well the achievement of the
DA-cognitive education group; in fact, the negative predictions made
for that group should be defeated to a significant degree.
• The professional skill necessary to perform DA effectively is not
currently taught in typical graduate psychology programs, so
practitioners must be trained in intensive workshops long after they
have been indoctrinated in the laws of static, normative testing. Even
with excellent training, DA examiners must exercise considerable
subjective judgment in determining what cognitive functions are
deficient and require mediation, what kinds of mediation to dispense,
when further mediation is not needed, and how to interpret the
difference between pre-mediation and post-mediation performance.
Thus, inter-examiner agreement is essential. This aspect has been
studied to some extent (Tzuriel & Samuels, 2000) but not yet
sufficiently.
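The validation design outlined in the second point above can be made concrete with a toy simulation: static scores should predict later achievement well for groups receiving no mediation-matched follow-up, while that prediction is partly "defeated" for the DA subgroup whose subsequent teaching builds on the within-test mediation. All quantities below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

static = rng.normal(100, 15, n)        # static, normative test scores

# Group without cognitive-education follow-up: achievement tracks the
# static score closely, so the static test predicts well.
plain = static + rng.normal(0, 5, n)

# DA subgroup with mediation-matched education: low scorers catch up,
# which weakens (defeats) the static test's prediction.
boost = 0.8 * np.maximum(0.0, 100.0 - static)
followed = static + boost + rng.normal(0, 5, n)

r_plain = np.corrcoef(static, plain)[0, 1]
r_followed = np.corrcoef(static, followed)[0, 1]
print(round(r_plain, 2), round(r_followed, 2))  # prediction weaker for the second group
```

The point of the sketch is the contrast between the two correlations, not the particular numbers, which depend entirely on the invented boost.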

One might well ask: why, if DA is so rich and rewarding, is it not
more widely applied? One apparent reason is that it is not yet taught in
graduate school. A second reason is that school psychologists often have client
quotas to fill, and since DA is far more time-consuming than is static
testing, their supervisors do not permit it. Third, the school personnel
who ultimately receive the psychologists' reports typically do not
expect DA and do not yet know how to interpret the data or the
recommendations, and psychologists offer little help in interpreting the
score. Finally, there is a certain inertia inherent in our satisfaction with
being able to do what we already know how to do, and to do it
exceptionally well.

There is no doubt that DA is more time-consuming than a static test.
This is often presented as a drawback of the dynamic procedure. If, however,
one perceives mediation as an integral part of the instructional process, then
this apparent drawback can actually be perceived as an advantage. This is
because the inclusion of the mediation element in the assessment
procedure bridges the psychological gap between assessment and instruction
and may reduce the students' test-taking stress. At the same time, there is one
component of the dynamic assessment that definitely requires greater
investment than a regular static test: the preparation of a teacher for
the role of mediator. Such an investment, however, goes beyond the mere
technical needs of DA and contributes to the enhancement of teachers'
awareness of students' cognitive processes and problem-solving
strategies.

Part II

DA IN PRACTICE

In this part, as the complementary section of the book, after getting familiar
with the basics of Dynamic Assessment, we are going to scrutinize the
application of DA in educational contexts. This survey is focused on four
headings: Dynamic Assessment and gifted students; Mediation in
Computer-Assisted Dynamic Assessment; Dynamic Assessment in Educational
Settings: realizing potential; and Dynamic Assessment and problematic
language learners. Putting the theoretical principles of Dynamic Assessment
into practice, it will be possible to see how fruitful it is in testing neglected
areas of teaching and testing, evaluating the potential of learners for
development, and predicting their future progress.

DA and gifted students

In any consideration of differing approaches to assessment, a key, although
sometimes overlooked, question concerns the purpose to which the
assessment should be put (Elliott, 2000). In relation to the identification of the
gifted, assessments typically focus upon current performance in whatever
domain is of interest; however, a rather more challenging task involves
identifying those with the potential to demonstrate giftedness but who, for
various reasons, have failed to demonstrate this. It is in relation to this latter
task that many believe that dynamic approaches have the most to offer.
DA is an umbrella term that describes a heterogeneous group of
approaches that are linked by one key element: instruction and feedback are
provided as part of the assessment process and are contingently related to the
individual's ongoing performance. Thus, the nature and extent of assistance
that are provided depend on the individual differences manifested within the
assessment context. This, of course, represents a significant departure from
conventional testing procedures which usually preclude any forms of
intervention other than strictly delineated inputs geared to assist
administration and maintain rapport. Unlike conventional tests that are
concerned primarily with ultimate performance (product), dynamic
approaches tend to be equally interested in gauging the individual's use of
cognitive and metacognitive strategies, their responsiveness to examiner
assistance and support, and their capacity to transfer learning that occurs
within the intersubjective realm of tester and testee to subsequent unassisted
situations.
Dynamic approaches can be subdivided on the basis of (a) the timing of
assistance within the assessment schedule, (b) their adoption of a more clinical
or standardized approach, and (c) the assessment domains from which they
draw. The timing and frequency of tester feedback vary across dynamic
approaches. Sternberg and Grigorenko (2002) argued that most approaches
can be usefully described in terms of either a "sandwich" or "cake" format. In
the former, an unassisted pretest is given. In the light of the tester's
understanding of the testee's strengths and weaknesses, instruction is
provided with the aim of maximizing performance.
Subsequently, an alternate form of the test (the post-test) is provided.
Performance on this latter measure is taken to be a superior indicator of the
individual's cognitive functioning. In contrast to this sandwich approach,
rather than using pre- and post-tests, the cake format involves the provision of
contingent feedback, item by item during a single testing process. Dynamic
approaches can also be divided into those which are primarily clinical in their
operation and those which are more closely wedded to the rigors of
conventional psychometric tests. To a significant extent, this reflects the
different agendas and traditions of proponents, in particular, the extent to
which they are wedded to traditional scientific understandings about test
reliability and validity. Without a strong psychometric foundation, it is argued
by some, dynamic tests cannot be valid and thus are of little value (Snow,
1990). For others, a standardized approach is deemed to be overly rigid, and
the constraints that are necessarily placed upon the scaffolded intervention (or
mediation) that can be employed are seen as undermining and limiting the
tester-testee relationship, as well as the nature of the information provided.
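The sandwich and cake formats described above differ chiefly in when assistance is delivered. A skeletal sketch of the two control flows (the `administer`, `mediate`, and `attempt` callables are hypothetical stand-ins for whatever test content and instruction are actually used):

```python
def sandwich_format(pretest_items, posttest_items, administer, mediate):
    """Unassisted pretest -> instruction -> alternate-form post-test."""
    pre = administer(pretest_items)        # static, unassisted baseline
    mediate(pre)                           # instruction targets observed weaknesses
    post = administer(posttest_items)      # taken as the better indicator
    return pre, post

def cake_format(items, attempt, prompt_levels):
    """Contingent, item-by-item feedback within a single session."""
    prompts_needed = []
    for item in items:
        used = 0
        while not attempt(item, used) and used < len(prompt_levels):
            used += 1                      # give the next graduated prompt
        prompts_needed.append(used)        # fewer prompts = more independent
    return prompts_needed

# Toy run: a testee who solves item i only after i prompts.
print(cake_format(items=[0, 1, 2],
                  attempt=lambda item, used: used >= item,
                  prompt_levels=["hint", "strategy", "modelled solution"]))
# prints [0, 1, 2]
```

In the cake run, the number of prompts each item required, rather than a single total score, becomes the datum of interest.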
The most widely known clinical approach is that of Feuerstein and his
colleagues (1979). Whereas their approach to assessment is underpinned by a
detailed and broad-ranging theory of cognitive functioning, the nature and
extent of the assistance provided to the testee is individually determined by
the tester. Thus, the assistance offered to an individual may vary greatly from
one clinician to another. Such approaches, therefore, do not seek to compare
testees with broader populations. Of course, making direct comparisons with
others is central when assessment is used for classification or for the
allocation of resources, arguably the most common purposes of cognitive
assessment of gifted children. Those proponents of dynamic measures for
whom classification and selection are central tend to advocate the
employment of approaches that maintain a form of standardization. In
practice, this means that although the amount of feedback provided still varies
according to the testee's performance, the nature and extent of assistance is
pre-prescribed on the basis of the individual's potential responses and thus
should not vary from one examiner to another. Unlike the clinical approach,
therefore, inter-rater reliability should be high.
Given the complexities involved, the potential of computerized dynamic
approaches has been noted (Guthke & Beckmann, 2000; Tzuriel & Shamir,
2001). Guthke and Beckmann (2000), for example, have devised a computerized
measure that provides prompts to the testee when errors are made and an
"adaptive" response which permits items to be presented differentially on the
basis of the testee's performance. The third major distinction concerns the
domains that are assessed. As with more general considerations in relation to
the assessment of the gifted and talented (e.g., Plucker, Callahan & Tomchin,
1996) there is debate as to the extent that dynamic approaches should involve
those cognitive processes traditionally found in measures of intelligence,
rather than focusing upon specific subject centered domains such as
mathematics and language. Although a few measures exist in academic areas
(e.g. Gerber, 2000, Hamers et al, 1994, Haywood & Lidz, 2007; Miller et al,
2002) the great majority of dynamic approaches continue to tap cognitive
processes that are believed to underpin learning and problem-solving, such as
inductive reasoning and working memory.
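An error-contingent, adaptive loop of the kind Guthke and Beckmann describe might look as follows in outline. The item bank, prompt budget, and simulated testee are all invented; this is a sketch of the idea, not their actual algorithm.

```python
def run_adaptive(bank, solve, max_prompts=3, session_length=6):
    """Error-contingent prompting with adaptive item presentation.

    bank: items ordered easiest to hardest; solve(item, prompts) simulates
    the testee's answer after the given number of graduated prompts.
    An unaided success moves the testee up a level; an item still failed
    after all prompts moves them down. Returns the path taken.
    """
    level, path = 0, []
    for _ in range(session_length):
        item = bank[level]
        prompts = 0
        while not solve(item, prompts) and prompts < max_prompts:
            prompts += 1                       # present the next prompt
        solved = solve(item, prompts)
        path.append((level, prompts, solved))
        if solved and prompts == 0 and level < len(bank) - 1:
            level += 1                         # unaided success: harder item
        elif not solved and level > 0:
            level -= 1                         # failure despite help: easier item
    return path

# A simulated testee who solves an item of difficulty d after d prompts:
# the session settles at difficulty 1, solved with one prompt each time.
print(run_adaptive(bank=[0, 1, 2, 3], solve=lambda d, p: p >= d))
```

The resulting path records not a single score but where, and with how much help, the testee succeeds.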
As might be expected, the history of the use of DA with gifted students
covers a fairly short period of time, but nevertheless, documents an
impressive amount of activity. From the late 1980s through the 1990s and
into the 2000s, there has been particularly strong recognition of the
need for alternative assessment procedures for identification of gifted
students, especially for individuals from ethnically and linguistically diverse
backgrounds (e.g. Brice & Brice, 2004; Donovan & Cross, 2002; Maker,
1996). A number of alternative approaches were suggested, such as those
based on multiple intelligences (Sarouphim, 2002), performance-based tasks
(Van Tassel-Baska, Johnson & Avery, 2002), portfolios (Baldwin, 2002) as
well as combinations of these with standardized tests (Matthews, 1996). All
of these alternatives involved attempts to bypass standardized tests of
intelligence but continued to rely on product rather than process evidence.
Others relied on nonverbal measures that still failed to equalize the
proportions of individuals from diverse populations identified as gifted. None
of these approaches offered a means to reflect and record responsiveness to
instruction with any degree of satisfaction.
Along with this came the recognition of Vygotsky's notion of the zone of
proximal development as a conceptual description of a promising assessment
approach (Shaughnessy, 1993; Stanley, 1993). Existing assessment models
were criticized for their one time, one sample, product-oriented focus that did
not seem to capture the abilities of individuals from these diverse
backgrounds who were increasingly filling the desks in urban schools
(Ascher, 1990; Castellano, 1998; De la Cruz, 1996). Those students rarely
qualified for gifted services according to traditional sources of identification.
Students from these same populations also tended to be overrepresented in
programs for children with special needs other than gifted. DA was
recognized by a number of individuals as operationalizing Vygotsky's notion
of the zone of proximal development into a model that would be culturally
fair and promising for individuals from ethnically and linguistically diverse
backgrounds. However, DA has not developed homogeneously, and some
procedure designers have been primarily concerned with using it as an
alternative measure of intelligence, whereas others have been more focused
on combining eligibility determination with instructional programming, with
less concern for the construct of intelligence per se.

Early researchers who have used Dynamic Assessment procedures with
gifted students include Skuy, Kaniel and Tzuriel (1988), who demonstrated the
promise of identifying gifted children from low socioeconomic backgrounds
with subtests of Feuerstein's DA procedure, and Frasier and Passow (1994),
who described a "new paradigm for identifying talent potential amongst
culturally diverse populations" (p. 63), suggesting that "Traditional
identification approaches can be improved by designing, adapting, modifying
and extending instruments, strategies, and procedures that take into account
the influence of race, culture, caste, and socioeconomic status on behavior..."
(pp. 63-66).
Kirschenbaum and Renzulli (1995) also proposed the incorporation of DA
into teacher-administered screening procedures used to identify potentially
gifted students. Likewise, Bolig and Day (1993) discussed the potential
relevance of Dynamic Assessment for identifying a myriad of gifted learners
from diverse backgrounds. Stanley (1993) applied Dynamic Assessment in
his study and reported an increased number of candidates for gifted
programming related to the use of this approach.
Major projects that have included DA within comprehensive
identification and programming for gifted children from disadvantaged
backgrounds include Borland and Wright's Synergy Project (1994), Zorman's
Eureka Project (1997), Kay and Subotnik's Talent Beyond Words, and
Chaffey's project (2003) with Aboriginal children in Australia. Chaffey's
approach is particularly interesting for this review because of his research
focus on the Dynamic Assessment procedure itself. He, like many others,
designed his approach around Raven's Matrices (Australian
standardization by De Lemos, 1989) using an extended "sandwich" design of
pretest, intervention, post-test, and far post-test. His study included randomly
assigned experimental (metacognitive intervention) and control groups
(placebo intervention).

Both groups participated in social interactions, games and discussions
prior to the study to develop rapport and enhance their motivation. The two-
hour intervention (two one-hour sessions with a break between) was
administered in small groups of three to four and was semi-scripted, and a
"respected Aboriginal adult" (Chaffey, 2003, p. 112) was present at all
sessions. The intervention, using similar but unrelated types of problems,
emphasized development of self-regulation (impulse control) and awareness
and application of appropriate strategies (a systematic planning approach,
part/whole solutions, verbalization, self-evaluation, and monitoring).
This project successfully demonstrated the ability of this procedure to
identify Aboriginal children with gifted potential who would have been
previously overlooked because of their low academic achievement. To be
counted as gifted, Chaffey set the cut-off at the 85th percentile. Lidz and
Macrine (2001) investigated the effects of utilizing an approach to
identification of culturally and linguistically diverse elementary school
students that included Dynamic Assessment as part of a multi-source battery.
This study identified 25 of the 473 students (i.e. 5 percent) in an eastern U.S.
suburban school, with almost two-thirds of its children from culturally or
linguistically diverse backgrounds, as eligible for the gifted program.
Previous attempts to identify children in this school as eligible for gifted
programming resulted in identification of less than one percent of the
students. The Dynamic Assessment procedure involved modification of a
standardized, normative, nonverbal test, the Naglieri Nonverbal Abilities Test
(Naglieri, 1997). The test was administered according to the standardized
directions for the pretest. The intervention began with readministration of the
first five items that were answered incorrectly by the student. For these items,
the student was guided through the correct solution through a series of
predetermined graduated prompts. The student was then asked to re-solve the
remaining items initially answered incorrectly; on receiving feedback that the
first choice had been incorrect, the student was asked to try again. The total was
recalculated from a combination of partial credits for items incorporated into
the intervention and full credit for those answered correctly upon retesting,
subtracting two points from the total (based on test-retest results) to correct
for practice effect.
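The rescoring arithmetic just described can be written out directly. The two-point practice-effect correction is taken from the procedure itself; the partial-credit weight is an assumption, since the exact credit values are not specified here.

```python
def dynamic_total(initial_correct, mediated_items, retest_correct,
                  partial_credit=0.5, practice_correction=2):
    """Recalculated total in the spirit of the procedure described above.

    initial_correct: items answered correctly under standardized administration
    mediated_items:  items guided to solution during the intervention
                     (the 0.5 partial-credit weight is an assumption)
    retest_correct:  remaining items solved correctly on retesting
    """
    return (initial_correct
            + partial_credit * mediated_items
            + retest_correct
            - practice_correction)             # test-retest practice effect

# Invented example: 20 initially correct, 5 mediated, 6 solved on retest.
print(dynamic_total(20, 5, 6))  # 26.5
```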
The proportion of students identified as gifted by this study matched the
proportion of students in the entire district who had been identified as gifted
(5%). Further, the 60% of minority/immigrant students among those who
qualified reflected their proportion of representation within the school. The
primary source of this successful identification was the Dynamic Assessment
modification of the Naglieri Nonverbal Abilities Test (1997). To qualify as
gifted for this study, the students had to perform within the top three
percentiles on two of the three final measures; that is, the cut-off was the 97th
percentile, with one or two minor exceptions (for example, a boy who
obtained a composite index score of 160 on the Kaufman Assessment Battery
for Children (Kaufman & Kaufman, 1983) but did not score as high on other
measures).
Along with demonstrating the contribution of Dynamic Assessment to
determination of eligibility for gifted programming, an important contribution
of this study was that the procedures were not modified to accommodate
students from minority or immigrant populations in any differential way.
Once the criteria were set, they were applied to all students in the same way,
meeting the suggested ideal of Frasier and Passow (1994). This study also
documented the lack of special contribution of nonverbal testing without the
dynamic modification. There were two opportunities for students to perform
on traditional, standardized non-verbal tests and neither of these held any
advantage for their identification as gifted students. For example, the
composite score of the Kaufman Assessment Battery for Children was the
second best identifier of giftedness compared with the dynamic procedure,
and this far outperformed the nonverbal score derived from the same test. The
pretest (given prior to intervention) with the Naglieri Nonverbal Abilities Test
was the other opportunity to obtain a standardized nonverbal score, and this
did not prove to be a good source of identification for these students.
Kanevsky (1990) has been the most prolific of researchers applying
Dynamic Assessment with gifted students. Her work, along with that of
Brown and Campione (1984), has documented the importance of transfer in
discriminating among and describing the abilities of learners of different
levels (Kanevsky & Rapagna, 1990). To elicit and demonstrate transfer, it is
obviously necessary to administer a related task that follows the completion
of the initial problem. This then is consonant with use of a dynamic model.
Even more importantly, Kanevsky (ibid) has been able to document the ways
in which Dynamic Assessment elicits meaningful descriptions of the learning
abilities of gifted compared with typically developing children (Kanevsky,
1990). The gifted children not only demonstrated higher rates of transfer, but
also showed more instances of spontaneous elaboration, responsiveness to
feedback, search for and enjoyment of complexity, preference for
independence, and ability to explain their errors. These behaviors were
typically assessed by dynamic and not traditional assessment. Kanevsky used
a modified Tower of Hanoi/London task (hands-on and computer-based),
interjecting rule reminders and hints regarding the next best move toward task
solution as needed, and requiring solution of both near and far transfer tasks.
All but one (Zorman, 1997) of the DA approaches in this review address
the issue of identifying individuals for eligibility for gifted programming. In
these studies, DA has been found to improve the possibility of recognizing
high learning ability in children whose giftedness might otherwise be
overlooked, especially those from culturally diverse backgrounds (Lidz,
2001). Although the tasks per se (with the exception of Zorman's) do not
relate directly to instructional programming, observations of the child's
performance during task solution greatly enhance knowledge of the child-as-
learner. These observations, particularly as they reveal the child's non-
intellective characteristics, are very important for in-depth understanding and
educational programming (Tzuriel, Samuels & Feuerstein, 1988).
There is nevertheless little to provide a direct relationship between this
assessment information and educational instruction, because of both the nature
of the tasks and the types of interventions provided (again, with the
exception of Zorman, 1997). This reflects the field of Dynamic Assessment
more generally, and the criticism that there continues to be preoccupation
with classification and labeling rather than provision of insight into
individual children's learning and the resulting guidance of subsequent
intervention (Elliott, 2003). Lidz (2001) has designed and discussed a model
of curriculum-based Dynamic Assessment in which both the tasks and
interventions are embedded within the student's actual instructional situation,
but this has yet to be applied with students identified as gifted. Furthermore,
specific attention needs to be paid to the nature of the interventions provided
for the students to assure that these are sufficiently metacognitive to promote
transfer (Kaniel & Reichenberg, 1990). However, the degree of explicitness
of these interventions is expected to vary according to the needs of the
student, and it is likely that one basis for differentiation of students with the
potential for gifted level performance may be their need for less explicit and
less elaborate scaffolding (Tzuriel & Bengio, 2005).
Research applying DA to gifted students is alive and well, with recent
studies being conducted in Jordan (Al-Hroub, 2005) and Israel (Tzuriel &
Bengio, 2005). We feel that Dynamic Assessment makes an important
contribution to revealing the qualitative aspects of the individual's learning
and agree with Vygotsky (1978) that a more complete understanding of the
student requires exploration of both the zones of actual and proximal
development. Furthermore, we continue to advocate, along with others, the
use of multiple sources of data in identifying and programming for any
student who is to receive specialized services. DA is one valuable source of
this information and should be utilized along with others to elicit the types of
information most appropriate to the various questions each of these best
addresses.
Such an approach is demonstrated in Lidz and Macrine's study,
which was admittedly labor-intensive at the beginning; however, once the
initial effort of screening and identification was completed, it was then only
necessary to continue the process with students subsequently entering first
grade or newly transferring into the school. However, it is in the area of
assessment of individuals from culturally diverse backgrounds that DA makes
its most substantial contribution (Lidz, 2001), and one would go so far as to
suggest that one cannot envision a meaningful assessment of this nature
without it.

Computer-assisted DA

During the last decade there has been a dramatic growth in the use of
computer based learning for instructional purposes. Previous research showed
that use of computers as a learning tool improved academic achievements,
perception of the learning process, and academic self-perception. Computers
enable exposure of the learner to varied systems of symbols, focusing, and
modes of attracting the learner's attention. Such learning provides immediate
feedback, and graduated and organized processing of information tailored to
the individual's level (Ryan; Clariana, 1993; Mevarech, 1993).
The use of computer-assisted (CA) learning was also found useful with
young children. Carlson and White (1998), for example, found that use of a
commercially available software program significantly improved kindergarten
students' understanding of the concepts of left and right. Based on their
findings the authors concluded that it is possible to provide young
kindergarten children with a favorable computer experience while enhancing
their understanding of a particular educational concept. Two other studies
support the conclusion that well-designed CA activities, when presented with
55
the active participation of a trained tutor, can LQFUHDVH \RXQJ FKLOGUHQ¶V
cognitive abilities. Goldmacher and Lawrence (1992) compared a Head Start
group of pre-schoolers who received a CA enrichment program with another
engaged in standard activities. Students in the CA group demonstrated
improvements in all academic skills tested and showed greater growth in
memory and visual perception. Chang & Osguthorpe (1990) showed that
kindergarten children who worked with a computer achieved higher scores in
tests of word identification and reading comprehension than children who
received regular non-computer teaching.
The use of mediation in a computer-assisted DA has broad application to
areas of cognitive intervention programs and classroom teaching. Mediation
in CA environment raises questions about the role of the mediator in a
computer environment and the specific attributes of the computer in
facilitating or blocking the development of cognitive processes. It becomes
more and more evident that the effectiveness of CA learning depends on a
match between the goals of teaching, the learner's characteristics, the software
design, and decisions made by educators.
Piaget (1952) and Piaget and Inhelder studied children's ability to
seriate by asking them to arrange a group of sticks in a row from smallest to
largest. If the child succeeded in creating a correct progression, he or she was
given another stick of intermediate length to insert at the appropriate place in
the series. Piaget discovered that young preschoolers can find the largest or
the smallest stick in a group but have great difficulty constructing a
series. Only by the age of 6–7 can most children easily construct a series and
insert an additional stick in the correct place. Piaget's ideas about the
development of mathematical skills were most influential on the thinking of many
researchers. His main argument, in line with his skepticism about
mathematical thinking in pre-school years, was that reversibility lies at the
heart of understanding of all logic, and therefore of all mathematics (Bryant,
1995).
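The logical core of the seriation task Piaget describes can be sketched in a few lines of code. The stick lengths below are invented for illustration, and the two functions simply mirror the two steps of the task (ordering a set, then inserting an intermediate element):

```python
def build_series(sticks):
    """Arrange stick lengths from smallest to largest (the seriation step)."""
    return sorted(sticks)

def insert_stick(series, new_stick):
    """Place an additional stick at its correct position in an ordered series."""
    for i, length in enumerate(series):
        if new_stick < length:
            return series[:i] + [new_stick] + series[i:]
    return series + [new_stick]

sticks = [9.0, 5.0, 7.0, 3.0, 11.0]   # lengths in cm (illustrative values)
series = build_series(sticks)         # [3.0, 5.0, 7.0, 9.0, 11.0]
extended = insert_stick(series, 6.0)  # the intermediate stick goes between 5.0 and 7.0
print(extended)                       # [3.0, 5.0, 6.0, 7.0, 9.0, 11.0]
```

The second step is what distinguishes the older child in Piaget's account: inserting a new element requires holding the whole ordered relation in mind, not just comparing pairs.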
According to Piaget, a child who has not yet reached mastery of
reversibility cannot understand the cardinal and ordinal properties of a
number and has no notion of the additive composition of numbers. In spite of
Piaget's strong maturational approach, he offered late in his career (Piaget,
1976) several suggestions in regard to the education of children. His suggestions
were especially important for pre-school and early grade school curricula
(Ginsburg & Opper, 1988). Among his widely known ideas are (a) tailoring
education to children's readiness to learn, (b) being sensitive to individual
differences, and (c) promoting discovery-based education.
Chandler (1984), Kulik (1994) and Snider (1996), in separate studies,
confirmed the potential of CA in improving the cognitive
capacities of children. (These results also support specific findings in which a
CA condition combined with adult mediation enhanced the thinking processes of
young children more than an adult-alone or computer-alone condition.) It
should be noted that the role of the mediating adult in the CA condition was
of crucial importance and that the computer by itself could not replace the
role of the human mediator.
DA has been shown in previous studies to be a powerful instrument for
evaluating learning potential (Haywood, 1997; Tzuriel, 2001). The
advantages of DA over conventional static evaluation are related to several
factors. More than with the static test approach, emphasis is given to process
variables, higher precision in assessing the individual's learning potential, and
higher accuracy in measuring the individual's cognitive abilities and deficiencies
and relating them to various educational and intervention variables. Given the
importance of human mediation in children's cognitive development
(Feuerstein et al., 1979), it is crucial to take into account the contribution of a
CA mode for children's assessment and intervention. The CA mediation is a
complementary human-computer mode which combines both the human
mediation qualities (i.e. warmth, flexibility, unexpected responses) and the
computer's systematic and controlled simulation of mediated learning
principles.
Tzuriel and Shamir (1999) examined the effects of the computer-assisted
DA on cognitive performance as compared to DA with an examiner. The DA
approach is based on Feuerstein's mediated learning experience theory. A
sample of kindergarten children (n = 60) was assigned to either a Computer
Assisted (CA, n = 30) or an Examiner Only (EO, n = 30) group. The sample
was randomly chosen from three different kindergartens. The initial intelligence
level of both groups was controlled by matching their frequency distributions
on the Raven's Colored Progressive Matrices score. The CA group was
administered the Think-in-Order program, a multimedia program
designed specifically for this study and based on the Children's Seriational
Thinking Modifiability (CSTM) test. The EO group was administered the
CSTM test by an examiner. The findings revealed that intervention involving
mediation processes in a CA Dynamic Assessment procedure was more
effective in bringing about significant cognitive changes than mediation with
only an examiner.
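The group-matching step in a design of this kind, equating two groups on the distribution of a pretest score, can be approximated by ranking children on the score and alternating assignment. The sketch below uses invented Raven's-style scores, not the study's data, and is only one of several possible matching procedures:

```python
import random

# Invented pretest scores for 60 children (not the study's actual data).
random.seed(0)
scores = [random.randint(15, 35) for _ in range(60)]

# Approximate frequency-distribution matching: rank children by score,
# then alternate assignment so both groups span the same score range.
ranked = sorted(range(len(scores)), key=lambda i: scores[i])
ca_group = ranked[0::2]   # Computer Assisted (CA), n = 30
eo_group = ranked[1::2]   # Examiner Only (EO), n = 30

def mean(indices):
    return sum(scores[i] for i in indices) / len(indices)

print(len(ca_group), len(eo_group))              # 30 30
print(abs(mean(ca_group) - mean(eo_group)) < 1)  # True: group means differ by < 1 point
```

Alternating assignment over a sorted list guarantees the two group means differ by at most (max score − min score) / group size, which is what "matched frequency distributions" is meant to achieve.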

DA in educational settings: Realizing potential

In 1905, Alfred Binet and his colleagues produced the first intelligence test
for children. At the beginning of the twentieth century, education provision in
many developed countries was being extended to include a much greater
proportion of the child population. As a consequence, teachers found
themselves confronted by significant numbers of children who appeared
incapable of coping with the academic demands. Binet's test, restricted to
academic intelligence (Brown & French, 1979) rather than broader forms of
intellectual functioning, represented a means of comparing the mental level of
the testees with that of their same-aged peers that could overcome teacher bias
about the reasons for student failure (Thorndike, 1997). On the basis of this
measure, the suitability of the child for schooling could be derived.
While Binet defined intelligence in terms of the '…ability to learn', his,
and subsequent, measures have tended to focus primarily on the child's past
learning rather than their capacity for learning (Resing, 2000). Almost 100
years later, most modern-day IQ tests have changed little (Thorndike, 1997).
Psychometrically more sophisticated, and on nodding acquaintance with
information processing theories, the most popular tests continue to include
items that involve such activities as naming objects, recreating designs with
patterned blocks and remembering a series of numbers. Since this time,
however, educational psychologists have come to recognize the many flaws in
IQ measures; their tendency to lack an empirically supported theoretical
framework (Flanagan & McGrew, 1997), the limited relationship between
scores and instructional practices (Reschly, 1997), their emphasis upon
products rather than psychological processes (Wagner & Sternberg, 1984),
their tendency to linguistic and cultural bias (Lopez, 1997) and their inability
to guide clinicians in deriving specific interventions for educational
difficulties (Fuchs et al., 1987; McGrew, 1994).
DA represents an approach that seeks to overcome many of the above
difficulties. Theoretically driven, dynamic measures seek to examine cognitive processes
that are important for learning; they are seen as far more sensitive measures
for minority populations (Hessels, 1997, 2000; Robinson-Zanartu & Aganza,
2000) and they have the potential to offer insights and guidance for practitioners
(Lidz & Elliott, 2000). The theoretical forefather of Dynamic Assessment is
the Russian psychologist, Vygotsky (1978), whose notion of the zone of
proximal development is central to the approach. This construct, now widely
influential in education circles, concerns performance which cannot be
achieved unassisted but can be achieved with the help of another more
capable individual. In illustrating the construct, Vygotsky (ibid) spoke about
two hypothetical children both aged 12 years and both functioning at an eight-
year-old level on standardized tests. While they could be considered as being
at the same age cognitively, one child might be capable of making significant
gains if assisted in the ways of completing the tasks, eventually reaching a
level commensurate with their chronological age; the other child might make
only modest gains.
Given such a scenario, in which the ZPD of the former is much greater,
it would be unwise to treat both children as having the same cognitive profile.
It is important to recognize that conventional intelligence tests are measures
of achievement involving skills that are typically acquired in the home or at
school (Sternberg & Grigorenko, 2001). Thus it is hardly surprising that
research findings that children from ethnic minorities (Lopez, 1997; Kaplan &
Sacuzzo, 1997) or from socially disadvantaged communities (Budoff, 1987)
tend to underperform on such measures resonate with many clinicians who
often see such children struggle with items that are unfamiliar and
unpracticed. Clearly, knowledge-based and culturally loaded questions such
as knowing the distance from Edinburgh to London, or saying what one
should do if one loses a friend's ball (items from the ubiquitous Wechsler
Intelligence Scales; Wechsler, 1991), are highly problematic in this respect.
However, less immediately obvious is the fact that many IQ measures
place great importance upon speed and absolute accuracy, factors that may
have differential value across cultures. At an intuitive level, a test measure
that includes an examination of the capacity of the child to learn when
provided with scaffolded instruction that is tailored to offer the minimum of
assistance necessary for successful performance and which then permits
examination of transfer to other tasks, would appear to be more valid and
meaningful for educational settings. Certainly, there is much evidence that
assisted performance can provide evidence of latent abilities and expertise
that more conventional measures fail to tap (Hessels, 1997, 2000; Schlatter &
Buchel, 2000; Sternberg et al., 2003).
Sternberg and Grigorenko (2002) differentiate between the sandwich
and cake formats of DA. The former approach involves
administering a measure (the pretest) that is completed in unassisted fashion.
On the basis of the testee's response, contingent instruction is provided that is
geared to the individual's identified strengths and weaknesses. Subsequently,
an alternate form of the original test (the post-test) is provided. In recognition
of the psychometric problems that result from the use of 'gain scores', that is,
focusing upon the difference between the pre- and post-test measures
(Embretson, 1987), it is the post-test performance that is generally considered
by researchers to be more statistically justifiable (Guthke & Wingenfield,
1992). Thus, clinicians, who are likely to find an individual's progress
following intervention of particular interest for planning further work, should
recognize that gain scores may prove problematic when used for comparative
purposes (Sternberg & Grigorenko, 2002).
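The gain-score problem can be made concrete with a toy example: two testees can end at the same post-test level while showing very different gains, so comparing them by gain alone conflates improvement with starting point. The numbers here are invented:

```python
# Invented pre- and post-test scores for two testees (illustrative only).
pre  = {"testee_A": 10, "testee_B": 16}
post = {"testee_A": 20, "testee_B": 20}

# Gain scores differ sharply even though post-test performance is identical,
# which is one reason researchers prefer the post-test score for comparison.
gains = {name: post[name] - pre[name] for name in pre}
print(gains)   # {'testee_A': 10, 'testee_B': 4}
print(post)    # both testees finish at 20
```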
The cake format dispenses with the pre- and post-test structure. Here, the
testee is presented with a series of items and assistance is provided as soon as
difficulties are encountered. Once a solution has been reached,
the next item is presented. The type of assistance offered can be highly
standardized or individualized on the basis of clinical judgment (Feuerstein et
al., 1979).
The relative merits of clinical and standardized approaches continue to
split advocates of dynamic testing and, to a significant extent, views often
reflect the research-based or more clinical orientation of the commentator. As
Grigorenko and Sternberg (1998) noted, research psychologists tend to be
concerned with the measurement of change while applied psychologists will
be more interested in the promotion of change. Lidz (1992) observed that
those dynamic approaches that appear best to fit scientific requirements often
appear to have less utility for diagnosis and intervention. The clinical
approach is most associated with the work of Feuerstein and his colleagues
(1979, 1980). In his psychological interventions with immigrants to Israel,
Feuerstein came to the conclusion that the use of IQ tests was resulting in
many children being improperly labeled as intellectually inferior and placed
in special education.
While a seminal influence, Feuerstein's theory and techniques have
remained virtually unchanged during the past two decades and, thus, are
vulnerable to charges that they fail to reflect developments in psychological
thinking. Discussion of a number of perceived conceptual and methodological
flaws is provided by Buchel and Scharnhorst (1993), Guthke and Beckmann
(2000b), and Grigorenko and Sternberg (1998). However, it is important to
note that, unlike Feuerstein and his associates, these researchers are
proponents of the employment of traditional scientific methodologies in
validating assessment approaches.
Many of those who argue that dynamic approaches should meet the
requirements of psychometric testing have as their prime goal the
development of more valid measures of intellectual functioning. They share
with Feuerstein a concern that children's potential is often underestimated
and, as a consequence, low teacher expectations and assignment to special
schooling may act in a self-fulfilling fashion. However, unlike Feuerstein,
they emphasize the importance of standardized administration and test
reliability. As noted above, if the interaction between examiner and examinee
is not standardized but, rather, is orchestrated by the examiner according to
individual circumstances, it is difficult to make judgments about an
individual's performance relative to others, as this is likely to reflect differing
degrees of assistance (Budoff, 1987).
Low inter-rater reliabilities on scales assessing deficient cognitive
functions (Samuels et al., 1989; Vaught & Haywood, 1990) do not engender
confidence. Feuerstein's response to such criticisms is to argue that, as
dynamic approaches seek to induce change, seeking consistency across
assessments is a pointless task. Not only, in his opinion, is such a task
illogical, it may also reduce the ability of the child to demonstrate their true
capability, particularly if the testing situation is socially or culturally
unfamiliar.
To a significant extent, the relative importance of standardization may
depend upon the purpose of the assessment. If assessment is to be used for
educational selection, the allocation of resources, or for system accountability
(for example, to judge the performance of teachers or schools), one is more
likely to require data that can be used to enable systematic comparison
between children. In such cases, more standardized, reliable measures are
necessary than in those circumstances where assessment is used to provide
data that inform educational intervention with regard to an individual learner
(Gipps, 1999).
More recently, it has been recognized that standardized dynamic
measures may be particularly appropriate for computerization (Guthke &
Beckmann, 2000; Tzuriel & Shamir, 2001). Guthke and Beckmann (2000),
for example, have devised a measure in which the computer not only provides
prompts when errors are committed, but also presents items differentially on
the basis of the testee's performance. At the end of the test, the computer can
provide a printout showing such information as the number of tasks
undertaken, the assistance required, the latency time between presentation of
the item and the testee's response, and the total time required to complete the test.
Such data can provide valuable diagnostic information.
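A test session of the kind Guthke and Beckmann describe can be imagined as a loop that escalates prompts after errors and logs diagnostics per item. The sketch below is hypothetical, not their actual system; the item bank, prompt policy and field names are all invented:

```python
import time

def run_dynamic_test(items, answer_fn, max_prompts=3):
    """Present items, give graded prompts after errors, and log diagnostics."""
    log = []
    for item in items:
        start = time.monotonic()
        prompts_used = 0
        while True:
            response = answer_fn(item, prompts_used)
            if response == item["answer"] or prompts_used == max_prompts:
                break
            prompts_used += 1  # escalate to the next, more explicit prompt
        log.append({
            "item": item["id"],
            "prompts": prompts_used,
            "latency_s": time.monotonic() - start,
        })
    return log

# Invented two-item bank and a simulated testee who errs once on item 2.
items = [{"id": 1, "answer": "B"}, {"id": 2, "answer": "D"}]

def simulated_testee(item, prompts_used):
    if item["id"] == 2 and prompts_used == 0:
        return "A"             # first attempt wrong; answers correctly after one prompt
    return item["answer"]

report = run_dynamic_test(items, simulated_testee)
print([(r["item"], r["prompts"]) for r in report])   # [(1, 0), (2, 1)]
```

The per-item log of prompts and latencies is the kind of printout described above: it records not just whether an item was solved, but how much assistance and time the solution required.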
Another debate, concerning the particular domains that should be
assessed, closely mirrors that surrounding the teaching of thinking skills
(McGuiness, 1999) whereby some hold that thinking should be taught within
a given curricular domain such as science (Adey & Shayer, 1994) while
others advocate more abstract, domain-general approaches (e.g. Feuerstein et
al., 1980; Blagg et al., 1988). The majority of Dynamic Assessment
researchers have opted for the latter approach and while curricular areas such
as mathematics, reading and spelling have been tapped (Hamers et al., 1994;
Guthke & Wiedl, 1996; Gerber, 2000; Miller et al., 2002), measures usually
employ abstract problems that tap cognitive processes deemed to underpin
general learning and problem solving (Swanson, 2000; Resing, 2000).
As has been the case for traditional intelligence tests, these often employ
inductive reasoning tasks that require the individual to find, from available
data, a set of underlying rules as a result of repeated and systematic
observation. The individual repeatedly searches for commonalities and
differences until a pattern emerges and a generalization can be made (Klauer
& Phye, 1995). One form of inductive reasoning widely employed in dynamic
tests is analogical reasoning. This may involve either verbal (e.g. boy is to
man as girl is to …?) or spatial (involving two- or three-dimensional shapes or
patterns) modalities (e.g. Resing, 1997, 2000; Schlatter & Buchel, 2000).
Other popular measures examine the processes of seriation, and/or
classification (Hamers et al., 1997).
Many contemporary measures have been developed in mainland Europe
and, while associated materials are not easily accessible to English speaking
practitioners, accounts of their work have been provided in English. Key
European research groupings are based in The Netherlands (Hamers et al.,
1997; Resing, 1997, 2000; Hessels, 2000) in Spain (Fernandez-Ballesteros &
Calero, 1993, 2000) in Germany (Guthke & Stein, 1996; Guthke &
Beckmann, 2000) and in Switzerland (Buchel et al., 1997; Schlatter &
Buchel, 2000) (see Appendix for further details). In the UK, the approach
developed in the USA by Lidz and Jepsen (1997) for young children, aged
approximately 3–5 years, has proven popular with practitioners.
Their Application of Cognitive Functions Scale (ACFS) is a curriculum-
based Dynamic Assessment measure that examines learning processes
typically required for success in preschool (classification, perspective taking,
short-term auditory and visual memory, verbal planning and pattern
completion). The approach involves giving standardized pre and posttests
with semi-scripted intervention in between. During the intervention phase
there is an attempt to impose some degree of consistency while maintaining
All educational programs must at some point appraise learners'
knowledge and abilities; that is, they must assess them. The purposes of
educational assessment are to evaluate school achievements, predict future
achievements, and prescribe educational treatments. As a result, assessment
and instruction are two complementary aspects of methodology which should
optimally result in true learning. From this perspective, assessment occurs not
in isolation from instruction but as a dialectically integrated activity which
seeks to understand development by actively promoting it. This pedagogical
approach, known as DA, challenges the widespread acceptance of
independent performance as the privileged indicator of individuals' abilities
and calls for assessors to abandon their role as observers of learner behavior
in favor of a commitment to joint problem solving aimed at supporting learner
development. In DA, the traditional goal of producing generalizations from a
snapshot of performance is replaced by ongoing intervention in development.
The dialectic unification of assessment and instruction that DA represents has
profound implications for classroom practice, which second language (L2)
researchers are beginning to explore.

Khaghaninejad and Hadigheh (2012) applied DA to problematic students
at the Shiraz Faculty of Paramedical Sciences. On the basis of the marks the
students had received on their course-leaving exam, the real subjects of the
study were determined: those who had received less than half of the total
mark, to whom the mediational process of DA was applied. The teacher's
mediational approach to the subjects' learning processes generally took the
form of a series of individual interviews with the subjects, which
covered the following headings:

➢ The subjects' general opinion about the exam.

➢ The real reasons, in the subjects' viewpoints, for missing many of the exam
questions, which were discussed through these main topics:

• Item-reference problems: The investigation of the vagueness of the
questions' directions, and of the subjects' understanding of the
questions,

• Teaching-reference problems: The investigation of the teaching methods
and techniques used in class, and of the subjects' opinions about
the course book,

• Affective-reference problems: The investigation of the subjects' affective
factors, of their attitude toward English and its necessity, and of
their background in English,

• Exam-reference problems: The investigation of the subjects' expectations
of the exam, and of the quality of the exam procedure,

• Readiness-reference problems: The investigation of the subjects' readiness
for the exam, of their studying during the term and for the exam,
and of the reasons for the subjects' laziness regarding the exam,

➢ Some pieces of advice and/or guidelines on the teacher's part, according to
each subject's problem.

As the mediational (remedial teaching) process finished,
the subjects were asked to take the same exam again after two weeks (the
optimal temporal distance, in which extraneous factors such as test-wiseness,
practice effect and cognitive maturation affect a study's
results minimally (Hatch & Farhady, 1981)). To determine the effects of
the mediational process to which the subjects were exposed, the results
of the first and the second test administrations were compared. In this
way, the researcher became able to scan the outcome of his approach
meticulously through numbers.
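The individual-and-collective comparison of the two administrations can be sketched numerically. The marks below are invented for illustration and are not the study's data:

```python
# Hypothetical marks out of 48 for five subjects on the two administrations
# (invented numbers, not the study's data).
first_admin  = [18, 21, 15, 23, 20]
second_admin = [27, 30, 22, 31, 26]

# Individual analysis: each subject's gain after the mediational interviews.
individual_gains = [b - a for a, b in zip(first_admin, second_admin)]

# Collective analysis: the mean gain across subjects.
mean_gain = sum(individual_gains) / len(individual_gains)

print(individual_gains)   # [9, 9, 7, 8, 6]
print(mean_gain)          # 7.8
```

With only 15 days between administrations, any such gains are attributed, as in the study, to the mediation rather than to maturation.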

Seventeen students out of 58 received less than half of the total possible
mark (less than 24 out of 48) and became the real subjects of the study. They
were interviewed one by one and were asked to talk about the most significant
reason for their poor performances on the exam. It is worth noting that most of
the subjects were astonished to see that their teacher cared about the
quality of their performance, and promised to perform better in the next exam.
The interviews were recorded and then transcribed for further
exact analysis. Some parts of the interviews are presented in the next section
to illuminate the nature of the teacher's mediational role in a Dynamic
Assessment framework. For example,

# Subject 3
Subj: shoma avalin ostadi hastid ke dar morede moshkele man to nomre nagereftan makhsusan to
zaban… nazare mano mikhain?
(You are the first teacher who asks my opinion about my problem of poor
performance especially in English tests.)
Teach: hala in khobe ya bade? khande…
(Is it good or bad, now? laugh…)
Subj: khande…
(laugh…)
Subjects named item-referenced, exam-referenced, affective-referenced,
laziness and teacher-referenced problems as the real reasons of their poor
performances in the exam, respectively. No other source of problem was
identified in this study. Item-referenced problems, as the most frequently
reported source of the subjects' failure, were estimated to be the real reason of
underachievement for approximately half of the subjects (subjects 1, 3 and 5).
Unfamiliarity with items and vagueness of directions put subjects in a
stressful, embarrassing context; "cloze test" and "matching" items caused the
greatest problems, based on the subjects' introspections. The teacher/researcher
proposed some guidelines to subjects in an interview which created a dialogic
ZPD, in Vygotsky's (1978) term. For example, subject 1 found the "cloze test"
the most difficult to answer.

# Subject 1
Teach: kodom ghesmate emtahan bishtar barat sakht bud?
(Which part of the exam was more difficult for you?)
Subj: cloze test ha…laugh
(Cloze tests,..laugh)
Teach: cloze test ha, ta hala cloze test ro nadide budi to emtahana?
(Cloze tests! You hadn't seen cloze tests before?)
Subj: ye bar dige yeki az ostada gerefte bood, vali in yeki kheili sakhttar bud.
(One of my previous teachers gave a cloze test once, but this one was more difficult.)
Teach: besyar khob, pas shoma to cloze test bishtar az baghiyeye soala moshkel dashti…to
in emtahan az 8 soal faghat yekisho javab dadi, chera?
(Well, so you had problems with cloze tests more than the other items…you answered just one
question out of 8, why?)
Subj: sakht bud dige, in kalamat to ketab nabud…
(It was difficult; these words were not in the book.)
Teach: …shoma aslan miduni bara javab dadan be cloze test bayad chikar bokoni?
(Do you know how to answer a cloze test item?)
Subj: matno mikhunamo javab midam.
(I read the text and answer the questions.)
Teach: khob man fek mikonam bara javab dadane ye cloze test, aval bayad kole matno ye dor
bekhuni ta un temo mozuae asliye matn daset biyad ke rajebe chiye? Ravanshenasiye,
memariye, takhasosiye, omumiye, rasmiye ya mohavereiye…bad…bayad jomle jomle
matno bekhuni va bebini to har jomle che itemi kame; feal, fael, maful, harfe ezafe, yani dar
vaghe tashkhis bedi un ja khali bayad joze kodum taghsimbandiye grammeri bashe, esme,
feale, sefate ya…bad yeki do ta az gozineha hazf mishe, injuri mage na?
(Well, to answer a cloze item, I think, firstly, you should read the whole text once to reach
a general view about the text, whether it is psychology, architecture, technical or general,
formal or conversational…then…you should read the text sentence by sentence and identify
the missing item in each sentence (verb, subject, object, preposition…); in fact, you should
determine the missing word's grammatical category: should it be a noun, a verb, an
adjective or…? One or two of the choices can be eliminated in this way. Yes?)
Subj: bale ostad.
(Yes, teacher)
Teach: bad beine 2, 3 gozineye mojud, ba tavajoh be matn va jomalate ghablo badesh va
che noa kalamati estefade shode, yeki az gozineha ro alamat bezani.
(Then, among the 2 or 3 choices left, on the basis of the text and the sentences before and after, by
considering the type of the words used, mark one of the choices.)
Subj: say mikonam.
(I will try to do so.)
Teach: in raho to emtahane badi emtahan kon…bebinim che natijei dare…movafagh
bashi.
(Apply this technique in your next exam…we will see the results…good luck.)
Subj: daste shoma dard nakone, Ostaad.
(Thanks a lot, teacher.)
And subject 2 acknowledged that "matching" items had drastically
spoiled his performance in the exam:

# Subject 2
Teach: fek mikoni bishtarin zafet koja bud?
(Where was your weakest point, you think?)
Subj: inke hamkhanevade peida konamo, match konam o kalamato inaro…
(Finding and matching derivatives…)
Teach: Matching?
(Matching?)
Subj: Are, matching…
(Yeh, matching.)
Teach: to ghesmate matching shoma az … ta soal faghat … ta ro zaheran…
(In the matching part, it seems that you've answered just … out of …)
Subj: Are,..maniyaro mifahmam vali inke ba kalamate dige matcheshun konam sakhte baram.
(I understand the words, but it's difficult for me to match them with others.)
Teach: khob, shoma bara javab dadan be soalate matching chikar mikoni?
(Well, what do you do for answering matching items?)
Subj: rastesh miam ye kalama ro in var mikhunam bad miram donbale kalame un var migardam,
bad beine 2, 3 ta shak mikonam…hichkodumo entekhab nemikonam, vel mikonam, beine 2
ta shak mikonam hamishe.
(Well, I read the words in the first column then I search for the related word in the second
column, I doubt 2, 3 choices, and mark none finally. I doubt always.)
Teach: khob, fek nemikoni ke behtare masalan, aval rabeteye kalamato peida koni, masalan,
bebini…dar vaghe in 2 ta setun kalamate hamkhanevade hasten, kalamate hammani hasten,
kalamate motazad hasten, injuri…fek nemikoni behtar bashe?…bad yeki yeki kalamato
bekhuni va donbale kalameye mortabetesh to setune dovom begardi?
(Well, don't you think that it's better to first find the relation of the words in the columns, whether
they are one another's derivatives, synonyms or antonyms…Not better?…then read the words
and find the related item in the second column one by one.)
Subj: …maniro to jomle behtar mifahmam.
(I understand the meanings of the words in sentences better.)
Teach: khob, bara har kalame ye jomle to zehnet besaz…rabeteye kalamato mituni tashkhis
bedi?
(Well, make a sentence for each word in your mind…Can you identify the relation of the words
of the two columns?)
Subj: rastesh na.
(Honestly, no.)
Teach: khob, avalin kari ke bayad bokoni ine ke rabeteye kalamate 2 ta setuno peida koni…bad
to zehnet barashun jomle besazi, bad kalamate motenasebo be ham vasl koni.
(Well, as your first job, you should find the relation of the words in the two columns; then make a
sentence for the words whose meanings you cannot remember, and finally match the words
properly.)
Subj: shayad, emtahan nakardam.
(Maybe; I have not tried it.)
Teach: merci, movafagh bashi.
(Thanks, good luck.)
In addition to item-referenced problems, subjects with affective problems were also provided
with some advice. For example, subject 4, who suffered from motivational problems, was asked to
create some internal motivation, as illustrated below.
# Subject 4
Subj: angize nadaram, alaghe nadaram be Englisi, be nazare man zarurati nadare ke ye seri loghat
hefz konam bidalil.
(I have neither motivation nor inclination to learn English; in my
opinion, it is unnecessary to memorize a series of words for no reason.)
Teach: fek nemikoni ke dunestane Englisi bad be karet miad, bara edame tahsilet… ina barat
moham nist?
(Don't you think that knowing English can help you later, for
continuing your education… Aren't these important for you?)
Subj: chera… moheme… vali hich vaght alaghe nadashtam yad begiram.
(Sure they are, but I was never interested in learning English.)
Teach: ta key mikhay bi angize bashi? Zamani aya khahad resid ke to ba in dars ashti koni?
(Till when do you want to be indifferent? Will the time of peace-making
with English come?)
Subj: hatman… un zaman ziad door nist.
(Definitely, that time is not remote.)
Teach: ye emtahane dige azat migiram… to in modat bishtar be in fek kon ke movafaghiyate
tahsilit be dunestane Englisi gereh khorde… be khosus age ghasde edame tahsil dashte bashi…
be in fek kon ke che rahi mituni peida koni ke be khodet bishtar komak koni.
(I will give you another test… until then, think more about how your
academic success is tied to knowing English, particularly if you intend
to continue your education; think of a way of helping yourself more in
this case.)
Subj: bale, hatman.
(Yes, certainly.)
As the interviews finished, the subjects were asked to take the previous
exam again after 15 days. The results of the second administration of the test
were analyzed both individually and collectively. Since the temporal
distance between the two administrations was only 15 days, during which no
drastic cognitive maturation is possible, any improvement can be attributed
to the effect of the teacher/researcher's oral mediation.
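Concretely, the individual and collective comparisons described here amount to a paired pretest/posttest analysis. The sketch below uses invented scores for six subjects (all numbers are hypothetical, not the study's data) and a hand-computed paired t statistic:

```python
from statistics import mean, stdev

# Hypothetical scores (out of 20) for six subjects on the two
# administrations of the vocabulary test, 15 days apart.
pre = [11, 9, 14, 7, 12, 10]
post = [14, 12, 15, 10, 15, 13]

# Individual analysis: each subject's gain across the interval.
gains = [b - a for a, b in zip(pre, post)]

# Collective analysis: mean gain and a paired t statistic,
# t = mean(d) / (sd(d) / sqrt(n)), attributable to the mediation
# only if no other systematic change occurred in the 15 days.
n = len(gains)
t = mean(gains) / (stdev(gains) / n ** 0.5)

print("individual gains:", gains)   # [3, 3, 1, 3, 3, 3]
print("mean gain:", round(mean(gains), 2))
print("paired t:", round(t, 2))
```

With every subject gaining, the t value is large; in practice the researcher would compare it against the critical value for n - 1 degrees of freedom.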

As the papers of the second administration were corrected, the better
performance of all subjects on the second administration was determined and
shown statistically, both individually and collectively. The researcher only
tried to show the positive, undeniable effects of DA on subjects'
performances; this does not mean that all pedagogic problems, controllable or
uncontrollable, can be solved through the teacher's mediation under the
Dynamic Assessment approach, which is a manifestation of what Vygotsky
called the "dialogue of unequals" in students' zone of proximal development
(ZPD).

DA is described as a subset of interactive assessment that includes
deliberate and planned mediational teaching and the assessment of the
effects of that teaching on subsequent performance. Its historical roots are
traced back to Vygotsky (1978) and Feuerstein (1997), and it rests on four
assumptions:

- Accumulated knowledge is not the best indication of the ability to
acquire new knowledge, although the two are highly correlated.

- Everyone functions at less than 100% of capacity; therefore,
everybody can do better.

- The best test of any performance is a sample of that performance;
therefore, assessment of learning abilities can be accomplished
effectively with the use of learning tasks.

- There are many obstacles that can mask one's ability; when the
obstacles are removed, greater ability than was suspected is often
revealed. Such obstacles include ignorance; impulsivity; impoverished
vocabulary; cultural differences in learning habits, styles, and attitudes;
poor self-concept as learners; a host of motivational variables; and, of
course, inadequate development of important cognitive and meta-
cognitive structures and strategies. By removing some of those obstacles,
one can reveal the ability to function more adequately.

Criticisms of the use of traditional tests of intelligence and achievement to
diagnose and treat learning problems have recently reached a peak. Calls
for assessment focusing on outcomes directly linked to actual skills can be
heard throughout the literature on school psychology, regular education,
and special education. The use of less traditional assessment techniques has
been advocated as more useful for assessing strengths and weaknesses and
for providing feedback on progress toward specific learning goals.

It will be evident from the comprehensive reach of the socio-cultural
perspective, which is the theoretical basis of DA, that the assessment process
is part of a very much wider terrain. That terrain involves considering what
the purposes of education are taken to be, how education is organized in
support of those purposes, and how the system views and provides for the
cognitive development of students. The perspective as a whole poses a
challenge to some current views of education, such as the emphasis on
curriculum content without an equivalent emphasis on the processes of
learning or on the acquisition of meta-cognitive skills. Moreover, other
current educational goals, such as the achievement of inclusive education,
pose a similar challenge. Changes to the assessment 'end' of the system
should be seen as part of this context of challenge and will undoubtedly have
far-reaching implications. Dynamic Assessment is particularly helpful in
accounting for variables that may lead to underestimating an individual's
ability, such as unfamiliarity with the task, language, or materials.
Particularly with bilingual and language-minority children, ecological
models of cognitive assessment are likely to be helpful (Lopez, 1995).

The need for alternative assessment techniques appropriate for a
culturally and linguistically diverse population of students is both obvious
and critical. Unlike those in the private sector, those working within the
public school system are not able to pick and choose the population of
students with whom to work. It is therefore imperative that those working
within the public school system have appropriate training and expertise to
work effectively with students from diverse backgrounds.

Although the concept of DA is not new, it is not yet widely practiced and
is still virtually unknown to many psychologists and educators. There are
many reasons for this state of affairs, some conceptual and others quite
practical. Although what the researcher discussed throughout the study was
quite positive, not all is well in the DA world. There are metric problems that
have yet to be addressed seriously. The question of reliability is a pressing
one, especially given that one sets out deliberately to change the very
characteristics that are being assessed. At least a partial solution is to insist on
very high reliability of the tasks used in DA when they are given in a static
mode, that is, without interpolated mediation. Another persistent problem is
how to establish the validity of DA. Ideally, one would use both static testing
and DA with one group of children and static, normative ability tests with
another group. The essential requirement would be that a subgroup of the DA
children be given educational experiences that reflected the within-test
mediation that had helped them to achieve higher performance in DA. The
expectation would be that static tests would predict quite well the school
achievement of both the static-testing group and the subsample of the DA
group that did not get cognitive-educational follow-up. Static tests should
predict less well the achievement of the DA cognitive-education group; in
fact, the negative predictions made for that group should be defeated to a
significant degree.
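The validation logic sketched above can be made concrete by comparing how well a static pretest correlates with later achievement in each group. The scores and group labels below are invented purely for illustration; the expected pattern is a high correlation for the static-testing group and a weaker one for the DA cognitive-education group:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical static pretest scores for six children in each group.
static_scores = [8, 10, 12, 14, 16, 18]

# Static-testing group: later achievement tracks the pretest closely,
# so the static test predicts well.
achievement_static_group = [40, 48, 55, 63, 70, 79]

# DA cognitive-education group: the mediation-based follow-up lifts
# the weaker scorers, so the static pretest predicts less well.
achievement_da_group = [62, 58, 66, 60, 71, 73]

r_static = pearson_r(static_scores, achievement_static_group)
r_da = pearson_r(static_scores, achievement_da_group)
print("r, static-testing group:", round(r_static, 2))
print("r, DA education group:  ", round(r_da, 2))
```

A markedly lower correlation in the mediated group is exactly the "defeated negative prediction" the design looks for; with real data, the two correlations would of course also need a significance test.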

References

Adey, P. & Shayer, M. (1994). Really raising standards: cognitive


intervention and academic achievement. London: Routledge
publications.
Al-Hroub, A. (2005). Dynamic Assessment for mathematically gifted children
with learning difficulties. Paper presentation at the International
Association for Cognitive Education and Psychology Conference.
England: Durham.
Ascher, C. (1990). Testing students in urban schools: Current problems and
new directions. Urban Diversity Series No. 100. ED 322283.
Baldwin, A. Y. (2002). Culturally diverse students who are gifted.
Exceptionality, 10(2), 139-147.
Batsche, G. M., & Knoff, H. M. (1995). Linking assessment to intervention.
In A. Thomas & J. Grimes (Eds.), Best Practices in School
Psychology. (pp. 569-585), Washington, DC: National Association of
School Psychologists.
Beckmann, J. F. & Guthke, J. (1995). Complex problem solving, intelligence
and learning ability, in P. A. Frensch & J. Funke (Eds.) Complex
Problem Solving: The European Perspective. Hillsdale, NJ: Erlbaum.
Beckmann, J. F. (2001) Zur validierung des Konstrukts des intellektuellen
Veranderungspotentials [On validation of the concept of intellectual
change potential]. Berlin: Logos.
Binet, A. (1911). Les idées modernes sur les enfants [Contemporary ideas on
children]. Paris: Flammarion.
Birnbaum, R. & Deutsch, R. (1996). The use of dynamic assessment and its
relationship to the code of practice. Educational and Child Psychology
13(3), 14–24.

Bolig, E.F. & Day, J. D. (1993). Dynamic assessment and giftedness: The
promise of assessing training responsiveness. Roeper Review, 16,110-
113.
Borland, J. H. & Wright, L. (1994). Identifying young, potentially gifted,
economically disadvantaged students. Gifted Child Quarterly, 38,
164-171.
Brice, A & Brice, R. (2004). Identifying Hispanic gifted children: A
screening. Rural Special Education Quarterly, 13(2), 45-78.
Brown, A. L. & French, L. A. (1979). The zone of potential development:
implications for intelligence testing in the year 2000. Intelligence, 3,
pp. 255–273.
Brown, A. L., & Ferrara, R. A. (1985). Diagnosing zones of proximal
development. In J. Wertsch (Ed.), Culture, communication, and
cognition: Vygotskian perspectives (pp. 272–305). New York:
Cambridge University Press.
Brown, A.L. & Campione, J. C. (1984). Three faces of transfer: Implications
for early competence, individual differences, and instruction. In M.
Lamb, A. Brown, & B. Rogoff (Eds.), Advances in developmental
psychology (Volume 3, pp. 143-192). Hillsdale, NJ: Erlbaum.
Bryant, P. (1995). Children and arithmetic. Journal of Child Psychology and
Psychiatry, 36, 3–32.
Buchel, F. P. & Scharnhorst, U. (1993) The Learning Potential Assessment
Device (LPAD): Discussion of theoretical and methodological
problems, in: J. H. M. Hamers, K. Sijtsma & A. J. M. Ruijssenaars
(Eds.) Learning Potential Assessment. Amsterdam: Swets &
Zeitlinger.
Buchel, F. P. Schlatter, C. & Scharnhorst, U. (1997) Training and assessment
of analogical reasoning in students with severe learning difficulties,
Educational and Child Psychology, 14, pp. 109–120.

Buckingham, B.R. (1921). Intelligence and its measurement: a symposium.
Journal of Educational Psychology, 12, 271–275.
Budoff, M. & Corman, L. (1976). Effectiveness of a learning potential
procedure in improving problem-solving skills of retarded and non-
retarded children. American Journal of Mental Deficiency, 81(3),
260–264.
Budoff, M. (1968) Learning potential as a supplementary testing procedure,
in: J. Hellmuth (Ed.) Learning Disorders, Volume 3. Seattle, WA:
Special Child.
Budoff, M. (1987). The validity of learning potential assessment, in: C. S.
Lidz (Ed.) Dynamic Assessment: An Interactional Approach to
evaluating learning potential (pp.173-195). New York: Guilford
Press.
Burns, S. (1991). The dynamic assessment of intelligence. In H.C. Haywood,
& D. Tzuriel (Eds.), Interactive assessment (pp. 167–186). New
York: Springer.
Caffery, E. Fuchs, D. & Fuchs, L. S. (2008). The Predictive Validity of
Dynamic Assessment. The Journal of Special Education, 41(4),
435-54.
Campione, J. C. (1989). Assisted assessment: a taxonomy of approaches and
an outline of strengths and weaknesses. Journal of Learning
Disabilities, 22, 151–65.
Campione, J. C., Brown, A. L., Ferrara, R. A., Jones, R. S., & Steinberg, E.
(1985). Breakdowns in flexible use of information: Intelligence-
related differences in transfer following equivalent learning
performance. Intelligence, 9, 297–315.
Campione, J., Brown, S., Ferrara, A. & Bryant, N. (1984). The zone of
proximal development: implications for individual differences and
learning. In B. Rogoff & J. Wertsch (Eds.), Children's learning in the
zone of proximal development (New Directions for Child
Development). San Francisco, CA: Jossey-Bass.
Campione, J.C. & Brown, A. (1987). Linking dynamic assessment with school
achievement. In C. S. Lidz (Ed.), Dynamic Assessment (pp. 82–115).
New York: Guilford.
Carlson, J. S., & Wiedl, K. H. (1979). Toward a differential testing approach:
Testing-the-limits employing the Raven matrices. Intelligence, 3,
323–344.
Carlson, S.L. & White, S.H. (1998) The effectiveness of a computer program
in helping kindergarten students learn the concepts of left and right.
Journal of Computing in Childhood Education, 9, 133-147.
Carrell, P. L. (1984). Schema Theory and ESL Reading: Classroom
Implications and Applications. The Modern Language Journal. 68(4),
332-343.
Carrell, P. L. (1988). Interactive lexical processing: Implications for
ESL/second language reading classrooms. In L. Carrell, J. Devine, &
D.E. Eskey, (Eds.), Interactive Approaches to Second Language
Reading (pp. 239-259). Cambridge: Cambridge University Press.
Carrell, P. L. (1989). Meta-cognitive awareness and second language
reading. The Modern Language Journal, 7(2), 121-131.
Carroll, J. (1993). Human cognitive abilities. Cambridge: Cambridge
University Press.
Castellano, J. A. (1998). Identifying and assessing gifted and talented
bilingual Hispanic students. ERIC Digest, 4, 67-78.
Chaffey, G.W. (2003). Identifying high academic potential in Australian
Aboriginal children using dynamic testing. Australian Journal of Gifted
Education, 12(1), 42-55.
Chandler, D. (1984) Young Learners and the Microcomputer. Open
University Press, Milton Keynes.

Chang, L.L. & Osguthorpe, R.T. (1990). The effects of computerized picture-
word processing on kindergartners' language development. Journal of
Research in Childhood Education, 5, 73–83.
Clariana, R.B. (1993). The motivational effect of advisement on attendance
and achievement in computer- based instruction. Journal of
Computer-Based Instruction, 20, 47–51.
Daniel, M.H. (1997). Intelligence testing status and trends. American
Psychologist, 52, 1038–1045.
De la Cruz, R. E. (1996). Assessment-bias issues in special education: A
review of the literature. Cambridge University Press.
De Lemos, M. M. (1989). Standard Progressive Matrices. Australian Manual.
In Donovan, M.S. & Cross, CT. (Eds.). Minority students in special
and gifted education. Washington, D.C: National Academies Press
Deutsch R. & Reynolds, Y. (2000). The use of dynamic assessment by
educational psychologists in the UK. Educational Psychology in
Practice, 16, 311–331.
Dockrell, J. E. (2001). Assessing language skills in pre-school children. Child
Psychology and Psychiatry Review, 6(2), 74–83.
Donato, R. (1994). Collective scaffolding in second language learning.
Language Learning, 43(1), 121-9.
Donato, R., & McCormick, D. (1994). A socio-cultural perspective on
language learning strategies: The role of mediation. The Modern
Language Journal, 78, 453–464.
Duran, R. P.(1989). Assessment and instruction of at-risk Hispanic students.
Exceptional Children, 56, 154–158.
Elliott, J. (1993). Assisted Assessment: If it is dynamic why is it so rarely
employed? Educational and Child Psychology, 10, 48–58.
Elliott, J. (2003). Dynamic assessment in educational settings: realizing
potential. Educational Review, 55(1), 15–30.

Elliott, J. G. & Lauchlan, F. (1997). Assessing potential: the search for the
philosopher's stone? Educational and Child Psychology, 14, 6–16.
Elliott, J. G. & Lauchlan, F. (2000). Some perceptions of the links between
educational psychologists' assessment and special needs intervention
and resourcing in the London Borough of Castleton. Unpublished
report. University of Sunderland: School of Education.
Elliott, J. G. (2000a) Dynamic assessment in educational contexts: purpose
and promise, in: C. Lidz & J. G. Elliott (Eds.) Dynamic Assessment:
Prevailing models and applications. New York: J.A.I. Press.
Elliott, J. G. (2000b). The psychological assessment of children with learning
difficulties. British Journal of Special Education, 27, 59–66.
Elliott, J. G. (2001). All testing is dynamic: a response to Sternberg and
Grigorenko. Issues in Education, 7(2), 185–91.
Embereston, S. E. (1987). Towards development of a psychometric approach,
in: C. S. Lidz (Ed.) Dynamic Assessment: An Interactional Approach
to evaluating learning potential. New York: Guilford Press.
Enderby, P. and Emerson, J. (1995). Does speech and language therapy
work? A review of the Literature. London: Whurr publications.
Fernandez-Ballesteros, R. & Calero, M. D. (1993). Measuring learning
potential, International Journal of Cognitive Education & Mediated
Learning, 3, 9–21.
Fernandez-Ballesteros, R. & Calero, M. D. (2000). The Assessment of
Learning Potential. In C. S. Lidz & J. G. Elliott (Eds.) Dynamic
Assessment: Prevailing models and applications. New York: Elsevier.
Feuerstein, R., Rand, Y., Garb & Kozulin, A. (1998). I think... therefore I
read: A cognitive approach to English teaching. Jerusalem:
Academon.
Feuerstein, R. (1979). The Dynamic Assessment of Retarded Performers: the
learning potential assessment device: theory, instruments and
techniques. Baltimore, MD: University Park Press.
Feuerstein, R. (1980). Instrumental enrichment: an intervention program for
cognitive modifiability. Baltimore: University Park Press.
Feuerstein, R. (1990). The theory of structural cognitive modifiability. In B.
Presseisen (Ed.), Learning and Thinking Styles: Classroom
Interaction. Washington. DC: National Education Association,
Feuerstein, R., Haywood, H. C., Rand, Y., Hoffman, M. B., & Jensen, M.
(1986). Examiner manual for the Learning Potential Assessment
Device. Jerusalem: Hadassah-WIZO-Canada Research Institute.
Feuerstein, R., Haywood, H.C., Hoffman, M.B. & Jensen A.R. (1986).
Learning Potential Assessment Device Manual. Israel: HWCRI
publications.
Feuerstein, R., Rand, Y. & Hoffman, M. B. (1979). The Dynamic Assessment
of Retarded Performers: The Learning Potential Assessment Device,
Theory, Instruments and Techniques. Baltimore: University Park
Press.
Feuerstein, R., Rand, Y., Hoffman, M.B. & Miller, R. (1980) Instrumental
Enrichment: An Intervention Program for Cognitive Modifiability.
Baltimore: University Park Press.
Figg, J. Y. (2002). Dynamic assessment and educational intervention, Paper
presented at the Tenth International Conference on Thinking.
Harrogate.
Flanagan, D. P. & McGrew, K. S. (1997). A cross-battery approach to
assessing and interpreting cognitive abilities: narrowing the gap
between practice and cognitive science, in: D. P. Flanagan, J.
Flanagan, J. L. Genshaft & P. L. Harrison (Eds.) Contemporary
Intellectual Assessment: Theories, Tests and Issues. New York:
Guilford Press.
Frasier, M. M. & Passow, A. H. (1994). Toward a new paradigm for
identifying talent potential. USA: University of Connecticut
publishing center.
Frasier, M. M. (1997). Multiple criteria: The mandate and the challenge.
Roeper Review: Gifted Education Supplement, 4,4-16.
Frederickson, N. (1993). CRA: has it had its day? Educational and Child
Psychology, 10, 14–26.
Frederickson, N. (1999). The ACID test: or is it? Educational Psychology in
Practice, 15, 3–9.
Frederickson, N., Webster, A. & Wright, A. (1991). Psychological
assessment: a change of emphasis. Educational Psychology in
Practice, 7, 20–29.
Freeman, L. & Miller, A. (2001). Norm-referenced, criterion-referenced, and
dynamic assessment: what exactly is the point? Educational
Psychology in Practice, 17, 3–16.
Fuchs, D., & Fuchs, L. S. (2006). Introduction to responsiveness-to-
intervention: What, why, and how valid is it? Reading Research
Quarterly, 41(1), 93–99.
Fuchs, D., Fuchs, L. S., & Compton, D. L. (2004). Identifying reading
disability by responsiveness-to-instruction: Specifying measures and
criteria. Learning Disability Quarterly, 27(4), 216–227.
Fuchs, D., Fuchs, L. S., Benowitz, S. & Barringer, K. (1987). Norm-
referenced tests: Are they valid for use with handicapped students?
Exceptional Children, 54, 263–271.
Genshaft, J. L. & Harrison, P. L. (1998). Contemporary Intellectual
Assessment: Theories, Tests and Issues. New York: Guilford Press.
Gerber, M. M. (2000). Dynamic assessment for students with learning
disabilities: Lessons in theory and design, in: C. S. Lidz & J. G. Elliott
(Eds.) Dynamic Assessment: Prevailing models and applications. New
York: Elsevier.
Ginsburg, H.P. & Opper, S. (1988). Piaget's theory of intellectual
development (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.

Gipps, C. (1999). Socio-cultural aspects of assessment. In A. Irannejad & P.
D. Pearson (Eds.) Review of Research in Education. Washington:
American Educational Research Association.
Goldmacher, R. L. & Lawrence, R. L. (1992). An experiment: computer
literacy and self-esteem for Head Start preschoolers: can we
leapfrog? Paper presented at the Annual Conference of the National
Association for the Education of Young Children. England: Oxford.
Greenberg, K. H. (2000). Inside professional practice: A collaborative
systems orientation to linking dynamic assessment and intervention.
In C. S. Lidz & J. G. Elliott (Eds.) Dynamic Assessment: Prevailing
models and applications. New York: Elsevier.
Gresham, F. M., & Witt, J. C. (1997). Utility of intelligence tests for
treatment planning, classification, and placement decisions: Recent
empirical findings and future directions. School Psychology
Quarterly, 12, 249–267.
Grigorenko, E. L. & Sternberg, R. J. (1998). Dynamic Testing, Psychological
Bulletin, 124, 75–111.
Guthke, J. & Beckmann, J. F. (2000a). The Learning Concept: Application
and Practice, in: C. S. Lidz & J. G. Elliott (Eds.) Dynamic Assessment:
Prevailing models and applications. New York: Elsevier.
Guthke, J. & Beckmann, J. F. (2000b). The learning test concept and dynamic
assessment. In A. Kozulin & Y. Rand (Eds.) Experience of mediated
learning: An impact of Feuerstein's theory in education and
psychology. Oxford: Elsevier.
Guthke, J. & Beckmann, J. F. (2003). Dynamic assessment with diagnostic
programs, in: R. J. Sternberg, J. Lautery & T. I. Lubart (Eds.) Models
of Intelligence for the new millennium. Washington DC: American
Psychological Society.
Guthke, J. & Gitter, K. (1991). Prognose der Schulleistungsentwicklung
mittels Status- und Lerntests in der Vorschulzeit [Predicting school
achievement by means of static and learning tests applied to
preschoolers]. In H. Teichmann, B. Meyer-Probst & D. Roether (Eds.)
Risikobewaeltigung in der lebenlangen psychischen Entwicklung.
Berlin: Verlag Gesundheit.
Guthke, J. & Stein, H. (1996). Are learning tests the better version of
intelligence tests? European Journal of Psychological Assessment, 12,
1–13.
Guthke, J. & Wiedl, K. H. (1996). Dynamisches Testen. Zur Psychodiagnostik
der intraindividuellen Variabilität [Dynamic assessment. On
psychodiagnosis of intraindividual variability]. Göttingen: Hogrefe.
Guthke, J. & Wingenfield, S. (1992). The learning test concept: Origins, state
of the art, and trends. In H. C. Haywood & D. Tzuriel (Eds.)
Interactive Assessment. New York: Springer.
Guthke, J., Beckmann, J. F. & Dobat, H. (1997). Dynamic testing: problems,
uses, trends and evidence of validity. Educational and Child
Psychology, 14, 17–32.
Guthke, J., Harnish, A. & Caruso, D. (1986). The diagnostic program of
syntactical rule and vocabulary acquisition: a contribution to the
psycho-diagnostics of foreign language learning ability. In F. Klix &
H. Hagendorf (Eds.) Human Memory and Cognitive Capabilities.
Amsterdam: North Holland.
Hamers, J. H. M., De koning, E. & Ruijssenaars, A. J. J. M. (1997). A
diagnostic programs learning potential assessment procedure,
Educational and Child Psychology, 14, 46–55.
Hamers, J. H. M., Hessells, M. G. & Van Luit, J. E. H. (1991). The Learning
Potential Test for Ethnic Minorities. Test and Manual. Switzerland:
Lisse, Swets & Zeitlinger.
Hamers, J. H. M., Pennings, A. H. & Guthke, J. (1994). Training-based
assessment of school achievement. Learning and Instruction, 4,
347–360.
Harrisson, L. (2003). Contemporary Intellectual Assessment: Theories, Tests
and Issues. New York: Guilford Press.
Hatch, E. & H. Farhady, (1981). Research design and statistics: For applied
linguistics. Tehran: SAMT.
Haywood, H. C. (1977a). Alternatives to normative assessment. In P. Mittler
(Ed.), Research to practice in mental retardation: Proceedings of the
4th Congress of the International Association for the Scientific Study of
Mental Deficiency, Vol. 2, Education and training (pp. 11–18).
Baltimore: University Park Press.
Haywood, H. C. (1977b). A cognitive approach to the education of retarded
children. Peabody Journal of Education, 54, 110–116.
Haywood, H. C. (1986). The Test of Verbal Abstracting (TVA). In R.
Feuerstein, H. C. Haywood, Y. Rand, M. B. Hoffman, & M. Jensen,
Examiner manual for the Learning Potential Assessment Device.
Jerusalem: Hadassah-WIZO-Canada Research Institute.
Haywood, H. C. (1992). Interactive assessment: A special issue. Journal of
Special Education, 26, 233–234.
Haywood, H. C. (1993). Interactive assessment: Assessment of learning
potential, school learning, and adaptive behavior. Ninth Annual
Learning Disorders Conference. USA: Harvard University.
Haywood, H. C. (1995). Cognitive early education: Confluence of psychology
and education. Paper presented at the Second International Congress
on Psychology and Education. Spain: Madrid.
Haywood, H. C. (1997). Interactive assessment. In R. Taylor (Ed.),
Assessment of individuals with mental retardation (pp. 103–129). San
Diego, CA: Singular.
Haywood, H. C. (2001). What is dynamic 'testing'? A response to Sternberg
and Grigorenko. Issues in Education, 7, 201–210.
Haywood, H. C., Brooks, P. & Burns, S. (1986). Stimulating cognitive
development at developmental level: A tested non-remedial preschool
curriculum for preschoolers and older retarded children. In M.
Schwebel & C. A. Maher (Eds.), Facilitating cognitive development:
Principles, practices, and programs (pp. 127–147). New York:
Haworth Press.
Haywood, H. C., Brown, A. L. & Wingenfeld, S. (1990). Dynamic
approaches to psycho-educational assessment, School Psychology
Review, 19, 411–422.
Haywood, H.C & Lidz, C.S. (2007). Dynamic assessment in practice.
Cambridge, UK: Cambridge University Press.
Haywood, H.C. & Tzuriel, D. (1992). Interactive Assessment. New York:
Springer.
Hessels, M. G. & Hessels-Schlatter, C. (2002) Learning potential in
immigrant children in two countries: the Dutch and Swiss-French
version of the Learning potential test for Ethnic Minorities, in: D. G.
M. Van der Aalsvoort, W. C. M. Resing & A. J. J. M. Ruijssenaars,
Learning potential assessment and cognitive training. Actual research
and perspectives in theory building and methodology. Greenwich, CT:
JAI Press.
Hessels, M. G. (1997). Low IQ but high learning potential: Why Zeyneb and
Moussa do not belong in special education, Educational and Child
Psychology, 14, 121–136.
Hessels, M. G. (2000). The Learning Potential Test for Ethnic Minorities
(LEM): A Tool for Standardized Assessment of Children in
Kindergarten and the First Years of Primary School. In C. S. Lidz & J.
G. Elliott (Eds.) Dynamic Assessment: Prevailing models and
applications. New York: Elsevier.
Hessels, M. G. P., & Hamers, J. H. M. (1993). A learning potential test for
ethnic minorities. In J. H. M. Hamers & A. J. J. M. Ruijssenaars
(Eds.), Learning potential assessment (pp. 285–312). Amsterdam:
Swets & Zeitlinger.

Hessels-Schlatter, C. (2002). A dynamic test to assess learning capacity in
people with severe impairments. American Journal of Mental
Retardation, 107, 340–351.
Hilliard, A.G. (1976). Alternative to IQ testing: An approach to the
identification of gifted "minority" children. Seattle: Hilliard
publications.
Jensen, M. R. (2003). Mediating knowledge construction: Towards a dynamic
model of assessment and learning, Educational and Child Psychology,
14, 23-43.
Kahn, R. (1992). The Dynamic Assessment of Infants and Toddlers. Hartford
Court Family Development Resource Centre.
Kanevsky L. (1990). Pursuing qualitative differences in the flexible use of
problem-solving strategy by young children. Journal for the
Education of the Gifted, 13(2), 115-140.
Kanevsky, L. & Gaeke, J. (2005). Inside the zone of proximal development:
Validating a multifactor model of learning potential with gifted
students and their peers. Journal for the Education of the Gifted, 28
(2) 192-217.
Kanevsky, L. (1992) The learning Game. In P. Klein & A.J. Tannenbaum
(Eds.) To be young and gifted (pp. 204-241). Norwood, NJ. Ablex.
Kanevsky, L. (1994). Exploring the implications of dynamic and static
assessments for gifted education. Exceptionality Education Canada, 4
(2), 77-98.
Kanevsky, L. (1995). Learning potentials of gifted students. Roeper Review,
17 (3), 157-163.
Kanevsky, L. (2000). Dynamic assessment of gifted students. In K.A.Heller,
F.J. Monks, R.J. Sternberg, & R.F. Subotnik (Eds.) Part III:
Identification of giftedness and talent. International handbook of
giftedness and talent (2nd ed.) (pp.271- 327). New York: Elsevier.

Kanevsky, L., & Rapagna, S.O. (1990). Dynamic analysis of problem-solving
by average and high ability children. Canadian Journal of Special
Education, 6,15-30.
Kaniel, S. & Reichenberg, R. (1990). Dynamic assessment and a cognitive
program for disadvantaged gifted children. Gifted Education
International, 7, 9-15.
Kao, M.T., Lehman, J.D. & Cennamo, K.S. (1996). Scaffolding in
hypermedia assisted instruction: An example of integration. Eric
Document Reproduction.
Kaplan, R. M. & Saccuzzo, D. P. (1997). Psychological Testing: Principles,
applications and issues. Seattle: Hogrefe and Huber Publishers.
Kaufman, A.S. & Kaufman, N. L. (1983). K-ABC: Kaufman Assessment
Battery for Children. Circle Pines, MN: American Guidance Service.
Kay, S.I. & Subotnik, R.F. (1994). Talent Beyond Words: Unveiling spatial,
expressive, kinesthetic and musical talent in young children. Gifted
Child Quarterly, 38, 70-74.
Khaghaninezhad, M. S. & Hadigheh, S. (2012). The effect of dynamic
assessment on general English test's performances of Iranian medical
students. International Journal of Learning and Development 2 (5),
111-128.
Kuhn, T. (1970). The Structure of Scientific Revolutions. Chicago, IL:
University of Chicago Press.
Kirschenbaum, R. J. (1998). Dynamic assessment and its use with
underserved gifted and talented populations. Gifted Child Quarterly,
42 (3), 140-147.
Koszalska, T.A. (1999). The relationship between the types of resources used
in science classrooms and middle school students' interest in science
careers: An exploratory analysis. Unpublished doctoral thesis. The
Pennsylvania State University.

Kozulin, A. (1990). Vygotsky's psychology: A biography of ideas. London:
Harvester Wheatsheaf.
Kulik, J.A. (1994). Meta-analytic studies of findings on computer-based
instruction. In E. Baker & H. O'Neil (Eds.) Technology Assessment in
Education and Training (pp. 9-33). Hillsdale, NJ: Lawrence Erlbaum.
Lantolf, J. P. & G. Appel. (1994). Vygotskian Approaches to Second
Language Research. Norwood, NJ: Ablex Publishing.
Lantolf, J. P., & Poehner, M. E. (2004). Dynamic assessment: Bringing the
past into the future. Journal of Applied Linguistics, 1, 49–74.
Lantolf, J. P., & Thorne, S. L. (2006). The socio-genesis of second language
development. Oxford: Oxford University Press.
Lauchlan, F. & Carrigan, D. (2005). Getting dynamic about dynamic
assessment: moving from theory to practice. Presentation, IACEP 10th
International Conference, University of Durham, UK.
Lauchlan, F. & Elliott, J. G. (2001). The psychological assessment of learning
potential. British Journal of Educational Psychology, 71, 647–665.
Lidz, C. (1987). Dynamic Assessment: An interactional approach to
evaluating learning potential. New York: Guilford Press.
Lidz, C. S. & Elliott, J. G. (2000). Dynamic Assessment: Prevailing models
and applications. New York: Elsevier.
Lidz, C. S. & Jepsen, R. H. (1997). Application of Cognitive Functions Scale
Administration Manual, Unpublished doctoral thesis, Cambridge
University.
Lidz, C. S. & Macrine, S.L. (2001). An alternative approach to the
identification of gifted culturally and linguistically diverse learners:
The contribution of dynamic assessment. School Psychology
International, 22(1), 74-96.
89
Lidz, C. S. (1991). 3UDFWLWLRQHU¶V*XLGHWR'\QDPLF$VVHVVPHQW. New York:
Guilford Press.
Lidz, C. S. (1992). Dynamic assessment: some thoughts on the model, the
medium and the message. In J. S. Carlson (Ed.) Advances in
Cognition and Educational Practice, Vol. 1: Theoretical Issues:
Intelligence, cognition and Assessment. Greenwich, CT: JAI Press.
Lidz, C. S. (1997). Dynamic assessment: Psycho-educational assessment with
cultural sensitivity. Journal of Social Distress and the Homeless, 6,
95-111.
Lidz, C. S. (2000). The Application of Cognitive Functions Scale (ACFS): An
Example of Curriculum-Based Dynamic Assessment. In C. S. Lidz &
J. G. Elliott (Eds.) Dynamic Assessment: Prevailing models and
applications. New York: Elsevier.
Lidz, C.S. & Pena, E.D. (1996). Dynamic assessment: The model, its
relevance as a nonbiased approach, and its application to Latino
American preschool children. Language, Speech, and Hearing
Services in Schools, 27, 367-372.
Lidz, C.S. (1992). The extent of incorporation of dynamic assessment into
cognitive assessment courses: A national survey of school psychology
trainers. The Journal of Special Education, 26, 325-331.
Lidz, C.S. (1995). Dynamic assessment and the legacy of Vygotsky. School
Psychology International, 16, 143-153.
Lidz, C.S. (1997). Dynamic assessment: Restructuring the role of school
psychologists. Communiqué, 5, 22-56.
Lidz, C.S. (2001). Multicultural issues and dynamic assessment. In L.A.
Suzuki, J. G. Ponterotto, & P. J. Meller (Eds.), Handbook of
multicultural assessment: Clinical, psychological and educational
applications (pp. 523-539). San Francisco: Jossey-Bass.
Lidz, C.S. (2003). Early childhood assessment. New York: Wiley.
Lopez, E. C. (1997). The cognitive assessment of limited English proficient
and bilingual children. In D. P. Flanagan, J. L. Genshaft & P. L.
Harrison (Eds.) Contemporary Intellectual Assessment: Theories,
Tests and Issues. New York: Guilford Press.
Lopez, E.C. (1995). Working with bilingual children. In A. Thomas & J.
Grimes (Eds.), Best Practices in School Psychology (pp. 1111-1121).
Washington, DC: National Association of School Psychologists.
Luria, A. R. (1961). Study of the abnormal child. American Journal of
Orthopsychiatry: A Journal of Human Behavior, 31, 1-16.
Luria, A. R. (1973). The working brain: An introduction to neuropsychology.
New York: Basic Books.
Maker, C. J. (1996). Identification of gifted minority students: A national
problem, needed changes and a promising solution. Gifted Child
Quarterly, 40, 41-49.
Matthews, D. J. (1996). Teaching gifted students in regular classrooms:
Adapting instruction to meet high level needs. Caribbean Curriculum,
6, 39-55.
Matthews, M. S. (2002). Dynamic assessment of academic ability of bilingual
Latino children. Humanities & Social Sciences, 63, 25-49.
McGrew, K. S. (1994). Clinical interpretation of the Woodcock-Johnson
Revised Tests of Cognitive Ability. Boston, MA: Allyn & Bacon.
McGuinness, C. (1999). From thinking skills to thinking classrooms: A review
and evaluation of approaches for developing pupils' thinking.
Norwich: HMSO.
91
Meijer, J. (1993). Learning potential, personality characteristics, and test
performance. In J. H. M. Hamers, K. Sijtsma, & A. J. J. M.
Ruijssenaars (Eds.), Learning potential assessment: Theoretical,
methodological, and practical issues (pp. 341-362). New York:
Elsevier.
Messick, S. A. (1989). Validity. In R. L. Linn (Ed.), Educational
measurement: Third edition (pp. 13-96). New York: American
Council on Education.
Mevarech, Z. R. (1993). Who benefits from cooperative computer-assisted
instruction? Educational Computing Research, 9, 451-464.
Miller, A. & Leyden, G. (1999). A coherent framework for the application of
psychology in schools. British Educational Research Journal, 25,
389-400.
Miller, L., Gillam, R. B. & Peña, E. D. (2002). Dynamic assessment and
intervention: Improving children's narrative abilities. Austin, TX:
Pro-Ed.
Miller, M. B. (2002). Verbal Memory Test to accompany the TVA.
Unpublished report. Patterson, NY: United Cerebral Palsy of Putnam
and Southern Duchess Counties.
Mills, C. J. & Tissot, S. L. (1995). Identifying academic potential in students
from underrepresented populations: Is using the Raven's Progressive
Matrices a good idea? Gifted Child Quarterly, 39 (4), 209-217.
Missiuna, C. & Samuels, M. (1988). Dynamic assessment: Review and
critique. Special Services in the Schools, 5(2), 1-22.
Naglieri, J. A. (1997). Naglieri Nonverbal Ability Test: Multilevel technical
manual. San Antonio, TX: The Psychological Corporation.
Neitzke, C. & Rohr-Sendlmeier, U. M. (1992). Achievement motivation of
intellectually gifted students when confronted with challenging and
unchallenging tasks. European Journal for High Ability, 3, 197-205.
Nettelbladt, U., Sahlén, B., Ors, M., & Johannesson, P. (1989). A
multidisciplinary assessment of children with severe language
disorders. Clinical Linguistics and Phonetics, 3(4), 313-46.
Olswang, L. B. & Bain, B. A. (1996). Assessment information for predicting
upcoming change in language production. Journal of Speech and
Hearing Research, 39, 414-423.
Olswang, L., Bain, B. & Johnson, G. (1992). Using dynamic assessment with
children with language disorders. In S. Warren, and J. Reichle, (eds.),
Causes and effects in communication and language intervention (pp.
187-215). Baltimore: Paul H. Brookes.
Passow, A. H. & Frasier, M. M. (1996). Toward improving identification of
talent potential among minority and disadvantaged students. Roeper
Review, 18, 198-202.
Peña, E., & Gillam, R. (2000). Dynamic assessment of children referred for
speech and language evaluations. In C. S. Lidz, and J. Elliott, (eds.)
Dynamic assessment: prevailing models and applications (pp.
543-74). Amsterdam: JAI/Elsevier Science.
Peña, E., Quinn, R. & Iglesias, A. (1992). The application of dynamic
methods to language assessment: a nonbiased procedure. Journal of
Special Education, 26(3), 269-280.
Piaget, J. & Inhelder, B. (1974). The Child's Construction of Quantities.
London: Routledge and Kegan Paul.
Piaget, J. (1952). The child's conception of number. New York: Humanities
Press.
Piaget, J. (1976). To Understand Is to Invent: the Future of Education. New
York: Penguin.
Plucker, J. A., Callahan, C. M. & Tomchin, E. M. (1996). Wherefore art thou,
multiple intelligences? Alternative assessments for identifying talent
in ethnically diverse and low income students. Gifted Child Quarterly,
40 (2), 80-92.
Reschly, D. J. (1988). Special education reform: School psychology
revolution. School Psychology Review, 17, 459-475.
Reschly, D. J. (1997). Diagnostic and treatment utility of intelligence tests. In
D. P. Flanagan, J. L. Genshaft & P. L. Harrison, (Eds.) Contemporary
Intellectual Assessment: Theories, Tests and Issues. New York:
Guilford Press.
Resing, W. (1997). Learning potential assessment: The alternative for
measuring intelligence? Educational and Child Psychology, 14,
68-82.
Resing, W. (2000). Assessing the Learning Potential for Inductive Reasoning
in Young Children. In C. S. Lidz & J. G. Elliott (Eds.) Dynamic
Assessment: Prevailing models and applications. New York: Elsevier.
Resing, W. C. M. (1993). Measuring inductive reasoning skills: The
construction of a learning potential test. In J. H. M. Hamers, K.
Sijtsma, & A. J. J. M. Ruijssenaars (Eds.), Learning potential
assessment: Theoretical, methodological, and practical issues (pp.
219-242). Netherlands: Swets & Zeitlinger.
Rey, A. (1934). D'un procédé pour évaluer l'éducabilité (quelques
applications en psychopathologie) [A procedure for assessing
educability (some applications in psychopathology)]. Archives de
Psychologie, 24, 297-337.
Rey, A. (1941). L'examen psychologique dans les cas d'encéphalopathie
traumatique [Psychological assessment in cases of traumatic brain
injury]. Archives de Psychologie, 28, 112-134.
Rey, A. (1959). Test de copie et de reproduction de mémoire de figures
géométriques complexes [Test of copying and memory reproduction of
complex geometric figures]. Paris: Centre de Psychologie Appliquée.
Robinson-Zanartu, C. A. & Aganza, J. S. (2000). Dynamic Assessment and
Socio-cultural Context: Assessing the Whole Child. In C. S. Lidz & J.
G. Elliott (Eds.) Dynamic Assessment: Prevailing models and
applications. New York: Elsevier.
Ryan, A.W. (1991). Meta-analysis of achievement effects of microcomputer
applications in elementary schools. Educational Administration
Quarterly, 27, 161-184.
Samuels, M. T., Killip, S. M., McKenzie, H., & Fagan, J. (1992). Evaluating
preschool programs: The role of dynamic assessment. In H. C.
Haywood & D. Tzuriel (Eds.) Interactive assessment (pp. 251-271).
New York: Springer-Verlag.
Samuels, M., Tzuriel, D., & Malloy-Miller, T. (1989). Dynamic assessment of
children with learning difficulties. In R. T. Brown & M. Chazan
(Eds.) Learning difficulties and emotional problems. Calgary: Detselig
Enterprises.
Sarouphim, K. M. (2002). DISCOVER in high school: Identifying gifted
Hispanic and Native American students. Journal of Secondary Gifted
Education, 14(1), 30-38.
Schinke-Llano, L. (1993). On the value of a Vygotskian framework for SLA
theory and research. Language Learning, 34(4), 12-30.
Schlatter, C. & Buchel, F. (2000). Detecting reasoning abilities of persons
with moderate mental retardation: The Analogical Reasoning
Learning Test (ARLT). In C. S. Lidz & J. G. Elliott (Eds.) Dynamic
Assessment: Prevailing models and applications. New York: Elsevier.
Scott, M. S., Deuel, L. S., Jean-Francois, B., & Urbano, R. C. (1996).
Identifying cognitively gifted ethnic minority children. Gifted Child
Quarterly, 40, 324-339.
Shamir, A. (1999). The Effect of Teaching for Peer- Mediation on
Mediational Teaching Style and Cognitive Modifiability among
Children who Teach and Study. Doctoral Dissertation, School of
Education, Bar-Ilan University.
Shaughnessy, M. F. (1993). Vygotsky's zone of proximal development:
Implications for gifted education. Cambridge University Press.
Shaunessy, E., Karnes, K. A., & Cobb, Y. (2004). Assessment of culturally
diverse potentially gifted students with nonverbal measures of
intelligence. USA: The School Psychologist.
Skuy, M., Kaniel, S., & Tzuriel, D. (1988). Dynamic assessment of
intellectually superior Israeli children in a low socio-economic status
community. Gifted Education International, 5, 90-96.
Snider, S. (1996). Effects of alternative software in development of creativity
in at-risk and non at-risk young children. Abstracts International,
57, 12-34.
Snow, R. E. (1990). Progress and propaganda in learning assessment.
Contemporary Psychology, 35, 1134-1136.
Stanley, N. V. (1993). Gifted and the zone of proximal development. Gifted
Education International, 9(2), 78-81.
Sternberg, R. J. (2004). Introduction to definitions and conceptions of
giftedness. Thousand Oaks, CA: Corwin Press.
Sternberg, R. J. & Grigorenko, E. L. (2001a). The truth can now be told:
This whole exchange is one big dynamic test! Issues in Education, 7,
251-260.
Sternberg, R. J. & Grigorenko, E. L. (2001b). All testing is dynamic testing.
Issues in Education, 7(2), 137-170.
Sternberg, R. J., & Grigorenko, E. L. (2002). Dynamic testing: The nature
and measurement of learning potential. Cambridge: Cambridge
University Press.
Stringer, P., Elliott, J. & Lauchlan, F. (1997). Dynamic Assessment and its
potential for educational psychologists. Educational Psychology in
Practice, 12, 152-160.
Swanson, H. L. & Lussier, C. M. (2001). A selective synthesis of the
experimental literature on dynamic assessment. Review of Educational
Research, 71, 321-349.
Swanson, H. L. (1994). The role of working memory and dynamic assessment
in the classification of children with learning disabilities. Learning
Disabilities Research and Practice, 9, 190-202.
Swanson, H. L. (1995). Effects of dynamic testing on the classification of
learning disabilities: The predictive and discriminant validity of the
Swanson-Cognitive Processing Test (S-CPT). Journal of
Psychoeducational Assessment, 13, 204-229.
Swanson, H. L. (1995). Using the cognitive processing test to assess ability:
Development of a dynamic assessment measure. School Psychology
Review, 24, 672-693.
Swanson, H. L. (2000). Swanson-Cognitive Processing Test: Review and
Applications. In C. S. Lidz & J. G. Elliott (Eds.) Dynamic
Assessment: Prevailing models and applications. New York: Elsevier.
Thorndike, R. M. (1997). The early history of intelligence testing. In D. P.
Flanagan, J. L. Genshaft & P. L. Harrison, (Eds.) Contemporary
Intellectual Assessment: Theories, Tests and Issues. New York:
Guilford Press.
Tzuriel, D. & Bengio, E. (2005). Identifying potential of giftedness in young
children by dynamic assessment and emotional intelligence measures.
Paper presented at the International Conference for Cognitive
Education and Psychology Conference, Durham, England, July.
Tzuriel, D. & Haywood, H. C. (1992). The development of interactive-
dynamic approaches for assessment of learning potential. In H. C.
Haywood & D. Tzuriel (Eds.) Interactive assessment (pp. 3-37). New
York: Springer-Verlag.
Tzuriel, D. & Shamir, A. (1999). Computer-Assisted Dynamic Assessment of
Seriational Thinking. Paper presented at the 7th Conference of the
International Association for Cognitive Education (IACE). Canada:
Calgary.
Tzuriel, D. & Shamir, A. (2001). Computer-assisted dynamic assessment of
seriational thinking. Journal of Computer Assisted Learning, 18,
21-32.
Tzuriel, D. (1992). The Dynamic Assessment approach: a reply to Frisby &
Braden. Journal of Special Education, 26(3), 302-324.
Tzuriel, D. (1995). The Cognitive Modifiability Battery (CMB): Assessment
and Intervention: Instruction Manual. School of Education, Bar-Ilan
University.
Tzuriel, D. (1997a). A novel dynamic assessment approach for young
children: Major dimensions and current research. Educational and
Child Psychology, 14, 83-108.
Tzuriel, D. (1997b). The relation between parent-child MLE interactions and
children's cognitive modifiability. In A. Kozulin (Ed.) The Ontology
of Cognitive Modifiability. Israel: ICELP Publications.
Tzuriel, D. (1999). Parent-child mediated learning transactions as
determinants of cognitive modifiability: Recent research and future
directions. Genetic, Social, and General Psychology Monographs,
125, 109-156.
Tzuriel, D. (2000a). Dynamic assessment of young children: Educational and
interventional perspectives. Educational Psychology Review, 12(3),
385-435.
Tzuriel, D. (2000b). The Seria-Think instrument: A novel measure for
assessment and intervention in seriational-computational domain.
School Psychology International, 20, 173-190.
Tzuriel, D. (2001). Dynamic assessment of young children. New York:
Kluwer.
Tzuriel, D., Samuels, M. T., & Feuerstein, R. (1998). Non-intellective factors
in dynamic assessment. In R. M. Gupta & P. Coxhead (Eds.) Cultural
diversity and learning efficiency: Recent developments in assessment
(pp. 141-163). London: NFER-Nelson.
Van Tassel-Baska, J., Johnson, D., & Avery, L. D. (2002). Using performance
tasks in the identification of economically disadvantaged and minority
gifted learners. Gifted Child Quarterly, 46 (2), 110-123.
Vaught, S. R. & Haywood, H. C. (1990). Interjudge agreement in dynamic
assessment: Two instruments from the Learning Potential Assessment
Device. The Thinking Teacher, 5, 1-13.
Vygotsky, L. (1978). Mind in society. Cambridge, MA: Harvard University
Press.
Vygotsky, L. S. (1956). Isbrannye psikhologicheskie issledovaniya [Selected
psychological investigations]. Moscow: Izdatel'stvo Akademii
Pedagogicheskikh Nauk SSSR.
Vygotsky, L. S. (1979). Mind in society: The development of higher
psychological processes. London: Harvard University Press.
Vygotsky, L. S. (1986). Thought and language. Cambridge: MIT Press.
Vygotsky, L. S. (1998). The problem of age. In R. W. Rieber (Ed.), The
collected works of Vygotsky (pp. 187-205). New York: Plenum.
Wagner, R. & Sternberg, R. J. (1984). Alternative conceptions of intelligence
and their implications for education. Review of Educational Research,
54, 179-223.
Wechsler, D. (1991). Manual for the Wechsler Intelligence Scale for
Children. San Antonio, TX: Psychological Corporation.
Wiedl, K. H., & Herrig, D. (1978). Ökologische Validität und
Schulerfolgsprognose im Lern- und Intelligenztest: Eine
exemplarische Studie [Ecological validity and prediction of school
success in learning and intelligence tests: An exemplary study].
Diagnostica, 24, 175-186.
Wood, A. (1998). OK then, what do educational psychologists do? Special
Children, 7, 11-13.
Wright, L. & Borland, J. H. (1993). Using early childhood developmental
portfolios in the identification and education of young, economically
disadvantaged potentially gifted students. Roeper Review, 15, 205-
210.
Zorman, R. (1997). Language assessment: New Directions. Oxford University
Press.
Index

Binet 8, 9, 33, 34, 57-58
Cognitive modifiability 19, 25, 26, 32
Dynamic Assessment (DA) 7, 8, 9, 10-11, 12, 14, 22, 30, 32, 33, 35, 37, 39, 50, 51, 53, 54, 58, 59, 63, 64, 65, 68, 73
Intelligence (IQ) 9, 12, 25, 33, 34, 37, 39, 59, 60, 61
Mediation 7, 17, 18, 23, 27, 32, 41, 42, 47, 56, 57, 58, 65, 72, 74
Piaget 56, 57
Remedial Teaching 8, 65, 67
Scaffolding 16, 54
Socio-cultural Theory of Mind (SCT) 17
Static Testing 13, 23, 30, 41, 42, 74
Vygotsky 8, 11, 12, 13, 14, 15, 16, 17, 18, 19, 26, 29, 32, 49, 54, 59, 68, 72
Zone of Proximal Development (ZPD) 11, 13, 18, 20, 60, 68, 72