While IQ tests are the most common and largely accepted measurement of how "smart" a person is, whether they are the best measure of this construct is up for debate. This paper will discuss the relationship between IQ tests and their corresponding order of hierarchical complexity developmental stage scores based on the model of hierarchical complexity (MHC). The Verbal Comprehension Index (VCI) scales of the Wechsler Adult Intelligence Scale—Fourth Edition (WAIS–IV) were used for scoring. The study shows that, according to the Hierarchical Complexity Scoring System (HCSS), the WAIS–IV fails to test verbal intelligence beyond the formal stage. This study used Rasch analysis to demonstrate that scoring the VCI of the WAIS–IV from a developmental sequence using the HCSS was successful in explaining the majority of the difficulty in VCI items. Much of the additional difficulty of tasks came from the knowledge of rare items and noise. This demonstrates the ceiling effect of the VCI of the WAIS–IV. Difficulties with scoring items, additional limitations of the IQ test, and their implications are discussed.
Kyle Gramer Featherston, Dare Institute, Cambridge, Massachusetts; Shuling Julie Chen, Department of Psychology, New York University; Maria Toth-Gauthier and James M. Day, Department of Psychology and Educational Sciences, Université catholique de Louvain; Philippe Herman, Raquel Laverdeur, Laurence Nicolaï, Anne-Catherine Nicolay, Alice Sini, Caroline Tilkin, Mireille Tyberghein, and Christian Vanheck, Le Centre pour la Valorisation des Intelligences Multiples, Liège, Belgium.

Michael Lamport Commons discussed this topic with the authors and provided notes on which this paper is partially based.

Correspondence concerning this article should be addressed to Kyle Gramer Featherston, 234 Huron Avenue, Cambridge, MA 02138. E-mail: kylegramerfeatherston@gmail.com

For years, psychologists have attempted to define and test the concept of intelligence. The field has created and revised theories of intelligence, and countless instruments have been designed to attempt to match these theories. Still today there exist many theories of intelligence. The most influential of these posit a general cognitive ability or g factor. While this g factor, according to the Cattell–Horn–Carroll theory of multiple intelligences, is made up of several factors—most significantly, fluid and crystallized intelligence—the idea of the existence of an overlying general cognitive ability is very prominent in the field today. IQ is supposed to
be a rough estimate of this general ability (Kamphaus, Winsor, Rowe, & Kim, 2005). IQ is a largely accepted construct in today's psychometric research community. In fact, IQ is often used interchangeably with the terms general cognitive ability, mental ability, and intelligence, both in academics, to describe the common core that cognitive tests share (Deary, Penke, & Johnson, 2010), and in everyday conversation.

The modern IQ test evolved from an intelligence test that French psychologist Alfred Binet and colleague Theodore Simon developed in order to identify students with learning disabilities (Binet & Simon, 1914). Binet viewed the test as a measure of scholastic ability and did not believe that it was a measure of intelligence, nor that intelligence was a singular construct that could be identified in such an instrument. He thus condemned those who used it as such a measure (White, 2000). There are several different IQ tests in use, based on variations of intelligence models, but the most common instrument used with adults is the Wechsler Adult Intelligence Scale (WAIS). The most recent version is the fourth edition (WAIS–IV). However, IQ tests have received criticism from different areas of psychology for many years (Borsboom, 2006; Mackintosh, 2011; McClelland, 1973; Neisser et al., 1996; Schönemann, 1997), and their ability to predict success has been called into question. Whether or not IQ tests are an accurate measure of "smartness" has been a hotly debated topic in psychology for a long time. This paper will argue that IQ in general, and the WAIS–IV specifically, does not measure complex problem solving of multiple variables, as defined by a quantitative behavioral developmental theory, the model of hierarchical complexity (MHC).

There are a great number of flaws with the WAIS–IV and similar tests of IQ, although these tests are used broadly by many psychologists for various purposes. IQ tests are developed using norms and psychometric analysis. The responses are analyzed in psychometrics using only factor analysis. The problem with this is that the test is not developed with an idea of what is being measured in mind or with a priori knowledge of which items will be more difficult than others and why. There is no stimulus measure in this type of analysis. The analysis only identifies which questions are difficult after the fact. The analysis misses the characteristics of the stimuli—in this case, the test items—that cause the discrepancy in responses.

To a large extent, intelligence tests only test surface information (McClelland, 1973). Additionally, these tests have been shown to have cultural and education-level biases. The information subtest, for example, depends on culture, experience, and knowledge to a large extent. People in other cultures may not have knowledge of specific facts the WAIS–IV tests, which may lower their scores simply based on their past experiences. While the creators argue that these are tests of "general cognitive ability," what is actually tested in many instances is whether or not certain knowledge has been learned in a participant's educational or cultural history. McClelland (1973) argues that intelligence tests may mainly predict test-taking and symbol-manipulation competencies as opposed to actual "smartness." Recent additions to the test have included measurement of basic cognitive processes such as working memory and processing speed. Instead, the everyday notion of "smartness" may be better defined by the complexity of a task a person is able to accomplish.

An alternative way to look at intelligence is to view it as a progression along a developmental sequence. Inhelder and Piaget's (1958) definition of IQ is an individual's place in a universal sequence of development toward formal operational reasoning. The Piagetian cognitive stage measures provide a rational standard for educational intervention (Kohlberg & Mayer, 1972). Piaget's definition of intelligence is not limited to school-type success (DeVries, 1974) but instead takes the long-range perspective of the evolution of knowledge and intelligence in the individual. It describes changes with age in the structure of knowledge and changes in reasoning about reality. There have been several studies that have attempted to determine the correlation between traditional IQ measures and Piagetian developmental tasks. These studies have had a wide range of results, with the average correlation being a modest r = .578 (see Table 1).

While these were compelling studies, their flaw is that Inhelder and Piaget's (1958) developmental task sequence is limited. While their developmental model ended at formal operations, future research has demonstrated that
Table 1
Correlations Between Piagetian Tasks and IQ Tests

Correlation               Tests used                                                            N     Age
r = .34a                  Fifteen Piaget-type tasks and Stanford–Binet IQ Test                  143   5–7
r = .837b                 Twenty-seven heterogeneous Piagetian tasks and 11 Wechsler subtests   150   6–10, 10–14, and 14–18
r(kindergarten) = .48c    Nine Piaget tests and Wechsler Intelligence Scale for Children        100   5–8
r(Grade 1) = .52
r(Grade 2) = .56

a Adapted from "Relationships Among Piagetian, IQ, and Achievement Assessments," by R. DeVries, 1974, Child Development, 45, pp. 746–756. Copyright 1974 by Wiley. b Adapted from "Piagetian Tasks Measure Intelligence and Intelligence Tests Assess Cognitive Development: A Reanalysis," by L. G. Humphreys and C. K. Parsons, 1979, Intelligence, 3, pp. 369–381. Copyright 1979 by Elsevier. c Adapted from "Relationship of Piaget Measures to Standard Intelligence and Motor Scales," by S. Z. Dudek, E. P. Lester, J. S. Goldberg, and G. B. Dyer, 1969, Perceptual and Motor Skills, 28, pp. 351–362. Copyright 1969 by Sage.
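The average of r = .578 reported in the text can be reproduced from the five coefficients listed in Table 1 by averaging on Fisher's z scale. The averaging method is an assumption here (the paper does not state how the average was computed), but it matches the reported value, whereas a plain arithmetic mean of the same coefficients gives about .547:

```python
import math

# Correlations between Piagetian tasks and IQ tests reported in Table 1.
correlations = [0.34, 0.837, 0.48, 0.52, 0.56]

# Fisher z averaging: atanh each r, take the mean, then tanh back.
z_mean = sum(math.atanh(r) for r in correlations) / len(correlations)
r_avg = math.tanh(z_mean)
print(round(r_avg, 3))  # 0.578, matching the average reported in the text
```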
adults are capable of reasoning beyond this level (Commons, Richards, & Kuhn, 1982). It is through this reasoning that the MHC was developed.

The MHC

The MHC is a nonmentalistic, neo-Piagetian, and quantitative behavioral development theory. It offers a standard method of examining the universal pattern of development. A fundamental assumption is that development proceeds across a large number of general sequences of behavior. These sequences exist in every domain, including, but not limited to, the mathematical, logical, scientific, moral, social, and interpersonal domains. The stages of the MHC have been shown to predict humans' "smartness" in the colloquial sense using the laundry and balance beam instruments (Commons et al., 2008).

The different layers in a hierarchical sequence of task complexity are referred to as orders. The successful completion of a task of a given order is referred to as a stage. Orders of hierarchical complexity (OHCs) assess the predicted difficulty of behavior tasks (Commons, Gane-McCalla, Barker, & Li, 2014; Commons & Miller, 1998; Commons & Pekker, 2008; Commons & Richards, 1984; Commons, Trudeau, Stein, Richards, & Krause, 1998). The OHC is an equally spaced unidimensional ordinal scale that measures difficulty independent of domain and content. The higher the OHC, the more difficult the task. The order names and numbers are listed in Table 2.

There are three axioms of the MHC. A higher order action (a) is defined in terms of task actions from the next lower order of hierarchical complexity, (b) organizes two or more less complex actions, and (c) is carried out in a nonarbitrary way (see Figure 1).

Table 2
Orders of Hierarchical Complexity

Order number   Order name
0              Computational
1              Automatic
2              Sensory or motor
3              Circular sensory motor
4              Sensory motor
5              Nominal
6              Sentential
7              Preoperational
8              Primary
9              Concrete
10             Abstract
11             Formal
12             Systematic
13             Metasystematic
14             Paradigmatic
15             Crossparadigmatic
16             Meta-crossparadigmatic

Why Use the MHC to Score IQ Tests?

As discussed previously, there are a number of flaws with traditional IQ tests, including the WAIS–IV. While no proposed view of intellectual ability is without flaws, the MHC has a few advantages over traditional IQ testing. First, IQ tests generally rely largely on testing previously …
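The orders in Table 2 and the three axioms lend themselves to a compact sketch. The encoding below is illustrative only and not part of the MHC formalism; the function name and the treatment of axiom (c) as a boolean flag are our assumptions, since nonarbitrariness is a substantive judgment rather than a computation:

```python
# Orders of hierarchical complexity from Table 2 (name -> order number).
ORDERS = {
    "computational": 0, "automatic": 1, "sensory or motor": 2,
    "circular sensory motor": 3, "sensory motor": 4, "nominal": 5,
    "sentential": 6, "preoperational": 7, "primary": 8, "concrete": 9,
    "abstract": 10, "formal": 11, "systematic": 12, "metasystematic": 13,
    "paradigmatic": 14, "crossparadigmatic": 15, "meta-crossparadigmatic": 16,
}

def satisfies_axioms(action_order, coordinated_orders, nonarbitrary=True):
    """Check the three MHC axioms for a candidate higher-order action:
    (a) it is defined in terms of actions from the next lower order,
    (b) it organizes two or more of those less complex actions, and
    (c) it coordinates them in a nonarbitrary way (a flag here, since
    nonarbitrariness is a substantive judgment)."""
    return (all(o == action_order - 1 for o in coordinated_orders)  # (a)
            and len(coordinated_orders) >= 2                        # (b)
            and nonarbitrary)                                       # (c)

# A formal-order (11) action coordinating two abstract-order (10) actions:
print(satisfies_axioms(ORDERS["formal"], [ORDERS["abstract"]] * 2))  # True
```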
Method
Participants
for each item. The vocabulary section asks participants to define words, and based on how accurate their definitions are according to the scoring guide, they are given 0, 1, or 2 points. The similarities section asks participants to compare how two words are similar, and they are also given 0, 1, or 2 points based on how accurate their answers are according to the scoring guide.

Items were translated directly from French to English. In order to preserve the meaning of the items, only those items that were identical in French and English were used for analysis. The items were scored independently by three groups of scorers using the Hierarchical Complexity Scoring System (HCSS; Commons, Miller, Goodheart, & Danaher-Gilpin, 2005). The HCSS is a system to score the difficulty of tasks based on the order of hierarchical complexity. It entails several steps for assessing performance on a task. Any differences in scoring were resolved in a group discussion among all raters. The raters were able to come to a consensus on all items. Twenty items were scored for both their 2- and 1-point responses. Zero-point responses were considered random and were not scored.

A second measure was taken to evaluate the difficulty of items, which was evaluating the rarity of the words used in the verbal section. Each word was reviewed based on its frequency of appearance in Belgian works in the French language. Each word that was included as one of the 10,000 most common words in the French language was given a score of 0, and those that were not were given a score of 1.

Results

The mean IQ of the sample was 121.34 (SD = 12.497), well above the population mean of 100. On the items tested, the participants' scores ranged from 61 to a perfect score of 80 out of 80, with an extremely high mean of 72.87 (SD = 4.237). Four participants got the full 2 points on every question we tested, and several more got close-to-perfect scores, with six scoring 1 point short of a perfect score. There were also five questions that every participant received 2 points on and an additional three that every participant got at least 1 point on.

The translation of the items yielded 20 items from the Vocabulary and Similarities subtests that were considered exact translations from French to English. Only using items that were exactly the same in English and French avoided any language confusion in the scoring manual. The information subsection lacked items that directly translated to the English version and also asked for simple recall of information. This was ruled to be entirely concrete stage tasks with no higher stage. For these reasons, items from the information section were not used in the analysis. For the purpose of the analysis, 1-point responses were considered separate questions from 2-point responses. A correct 2-point response was considered a correct answer on the 1-point item. This left 40 items. Table 3 shows the sample scoring of an item from the similarities section, with three raters all in agreement.

As stated previously, a group discussion yielded consensus on all items where there was not unanimous agreement among raters. Of the 40 items, 24 were scored as abstract, 11 were scored as concrete, three as formal, and two as primary. It is clear that there was not a great deal of difference in the orders of the items, which will be discussed further later in this paper.

A Rasch analysis was run on the items. This is a psychometric analysis of the responses to the items showing their relative difficulty (Rasch, 1980). The analysis yields two scales: the person's stage of performance and the Rasch-scaled item difficulty. A linear regression was then performed to compare the Rasch item difficulty with the prescored OHC of the items, with and without the word rarity variable. Then, Rasch person scores were regressed against participant Full Scale IQ (FSIQ) and the VCI.

The Rasch map (see Figure 2) indicated that the OHC scoring of items matched up fairly well with the item difficulty. Both of the primary items were at the very bottom of the map, and all of the concrete items were spread toward the very bottom. The most difficult item was one of the three formal items, and the other two were also among the most difficult. Meanwhile, the many abstract items were spread throughout the map, mostly occurring somewhere in the middle.

Regression Analysis

A simple linear regression was calculated to predict Rasch item scores based on the OHC of those items. An r = .666 (R2 = .443) was found, F(1, …).
The predicted Rasch item score was equal to −30.163 + 2.93 (OHC). The rarity of an item word metric was included as a second independent variable in the next regression. Six vocabulary words (12 items) and one word from the similarities section …

Table 3
Sample Scoring of a Similarities Item

Points      Rating (Raters 1–3 and final verdict in agreement)
1 point     Concrete (specific or concrete fact)
2 points    Abstract (recognizing the individual classification)

Factor Analysis
Figure 2. Rasch map of persons and items. The Xs to the left of the dashed line mark the person Rasch scores, and the items are on the right. The letter abbreviations mark order questions. The number marks the sequential number of that item among items of the same order. pr = primary; co = concrete; ab = abstract; fo = formal.
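For readers unfamiliar with the model behind Figure 2, a minimal simulation of the Rasch (1PL) model follows. The abilities, difficulties, and sample sizes below are hypothetical, not estimates from this study; the sketch only shows why harder items sit higher on a Rasch map (they are passed less often at every ability level):

```python
import math
import random

def rasch_p(theta, b):
    """Rasch (1PL) model: probability that a person of ability theta answers
    an item of difficulty b correctly (both parameters in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

random.seed(0)
abilities = [-1.0, 0.0, 1.0, 2.0]   # hypothetical person parameters
difficulties = [-2.0, 0.0, 2.0]     # hypothetical item parameters

# Simulate 500 examinees at each ability level answering every item.
responses = [[int(random.random() < rasch_p(t, b)) for b in difficulties]
             for t in abilities for _ in range(500)]

# Proportion correct per item: the harder the item (larger b), the lower
# the pass rate, which is what the vertical ordering in a Rasch map shows.
p_correct = [sum(row[j] for row in responses) / len(responses)
             for j in range(len(difficulties))]
print([round(p, 2) for p in p_correct])  # pass rates fall as b rises
```

Real Rasch software estimates both person and item parameters jointly from the observed response matrix; the pass-rate summary here is only a first approximation to that.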
loadings were not used in scoring, and thus it is difficult to decipher too much from the table. However, it should be noted that the only similarities item that had a rarity score of 1 was the highest loading item on Component 2, supporting the argument that this is rarity. Additionally, the location of the information items helps support the idea that the first factor is OHC and the second factor is rarity. Because it was determined that the information section is mainly about knowing rare items and not stage, it makes sense that there are two items from the information section loading on the second factor and none on the first factor. Because all the items loading on the third and
Table 4
Coefficients for Linear Regression Predicting Rasch Item Difficulty From OHC and Rarity

Model         B          SE      β       t         Sig.    Tolerance   VIF
(Constant)    −25.199    4.600           −5.478    .000
OHC           2.323      .481    .528    4.834     .000    .895        1.118
Rarity        2.663      .683    .425    3.897     .000    .895        1.118

Note. Dependent variable is Rasch item difficulty. B and SE are unstandardized coefficients; β is the standardized coefficient. OHC = order of hierarchical complexity; Sig. = significance; VIF = variance inflation factor.
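The unstandardized coefficients in Table 4 can be applied directly to predict an item's Rasch difficulty. A small sketch, where the example item values are hypothetical:

```python
# Predicted Rasch item difficulty from the unstandardized coefficients in
# Table 4: difficulty = -25.199 + 2.323 * OHC + 2.663 * rarity.
def predicted_difficulty(ohc, rarity):
    return -25.199 + 2.323 * ohc + 2.663 * rarity

# A hypothetical abstract-order item (OHC = 10) with a common (rarity = 0)
# versus a rare (rarity = 1) word:
common = predicted_difficulty(10, 0)
rare = predicted_difficulty(10, 1)
print(round(common, 3), round(rare, 3))  # -1.969 0.694

# Consistency check on the collinearity statistics: tolerance is the
# reciprocal of the variance inflation factor (VIF).
print(round(1 / 1.118, 3))  # 0.894, the reported .895 up to rounding
```

Note that the rarity penalty (2.663) is larger than a full step up the order scale (2.323), which is consistent with the paper's point that much of the remaining item difficulty comes from knowing rare words rather than from hierarchical complexity.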
fourth factors were unscored items, it is hard to say what these factors are. A second exploratory factor analysis was run using only the items that were previously scored in order to help determine the third and fourth factors. This second-factor analysis again demonstrated that the first factor had a much larger eigenvalue than any other factor, as demonstrated in Figure 4, with this factor having an eigenvalue of 5.29, explaining 21.162% of the variance. There were a total of 25 factors, and we again extracted four and looked at the top item loadings. This second-factor analysis did not offer much support to our hypothesis that the first factor was OHC and the second was rarity (see Table 6). Most of the items loading on the first factor were scored as abstract, with a rarity of 1, indicating that rarity heavily loaded on this factor. The third component had only two factors with an eigenvalue above .5, and the fourth component had zero. From this information, it is difficult to determine what these factors may be. Additionally, it does not seem that these are highly relevant factors.
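The variance figure here is internally consistent, assuming the 25 extracted factors correspond to 25 standardized variables: in a principal-components solution, a component's share of variance is its eigenvalue divided by the total variance, which equals the number of variables. A one-line check:

```python
# Share of variance explained by a component: eigenvalue over the total
# variance (the number of standardized variables in the solution).
n_vars = 25
eigenvalue = 5.29
pct_explained = eigenvalue / n_vars * 100
print(round(pct_explained, 2))  # 21.16, consistent with the reported 21.162%
```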
Figure 3. Scree plot of factors affecting all IQ items. The first component had by far the
largest eigenvalue, and we propose that this is order of hierarchical complexity (OHC).
Figure 4. Scree plot of factors affecting only scored IQ items. The first component had by
far the largest eigenvalue, and we propose that this is order of hierarchical complexity (OHC).
analyses demonstrated that there were two factors that contributed much of the variance but only provided limited support that those factors were OHC and rarity. The third and fourth factors had weak loadings and did not yield any obvious evidence of additional factors other than noise. Ultimately, because we could not score many of the items for OHC, the first-factor analysis yielded little information, and the second-factor analysis lost a lot of power by having to eliminate so many questions.

This difficulty in determining the stage of the participants largely comes from the lack of variance in the stage of the questions. Almost all of the items were scored as abstract or concrete. Meanwhile, the entire information section of the test asked the participants to recall facts, which was determined to be entirely concrete and was not used in the analysis. The lack of variability in stage is not a good way to design a test of cognitive ability, as ideally it would progress from lower to higher stage, all the way up to at least the metasystematic stage. If several questions are incorrectly answered in a row, the WAIS–IV stops the participant from answering further questions. However, because it is not clear why the more difficult items are more difficult for the participant, stopping at a certain point does not give any information as to what the participant has failed at. If the test were designed with sequentially more complex items, it would be possible to see what order of complexity a participant was struggling with. Instead, the IQ test yields an arbitrary number based on norms as an IQ score. This does not give any insight to academic instructors, clinicians, employers, or whoever is administering the test about what specific tasks a participant may struggle with or succeed at. From a complexity perspective, all that can really be gathered is whether an individual is able to successfully operate at the abstract order.

In addition to the simple lack of variance and correct ordering of questions, there were only three formal order questions and nothing higher. This means that it was not possible to give an accurate prediction of stage beyond the formal stage and not even truly accurate assessments of whether a participant reached the formal stage. This is a major flaw of the IQ test because there are several stages be-
[Component loadings table, continued; the rows preceding these Component 1 entries were lost in extraction]

Item               Loading   OHC        Rarity
Similarities 9a    .543      Abstract   0
Vocabulary 28a     .527      Abstract   1
Vocabulary 24b     .520      Abstract   1
Component 2
Similarities 6b    .710      Abstract   0
Similarities 6a    .638      Abstract   0
Similarities 7a    .610      Abstract   1
Component 3
Vocabulary 15b     .658      Abstract   0
Similarities 5a    .546      Abstract   0
Component 4
Similarities 17b   .471      Abstract   0
Vocabulary 7a      .432      Concrete   0
Vocabulary 15b     .432      Abstract   0
Vocabulary 19b     .416      Abstract   1
Note. OHC = order of hierarchical complexity.

yond the formal stage that adult humans are very capable of reaching, and yet this test operates as if formal stage is the highest stage that exists. Commons, Miller, and Giri (2014) report that 20% of educated adults reach Systematic Stage 12 (one stage beyond Formal Stage 11), with an additional 1.5% reaching Metasystematic Stage 13.

Limitations and Difficulties

One difficulty that emerged during this study was that the IQ items were not designed with OHC in mind, and there were thus many difficulties scoring the items. The 2-point sample answers for a single item might vary widely in stage, making it impossible to determine what stage a participant actually answered at. This is a problem with the way the IQ test is designed because a more complex answer is not necessarily rewarded properly. However, it also meant that the answers could not be scored entirely accurately. An ideal study would record …

The mean IQ was 121.34, well above the population mean of 100. Four participants got the full 2 points on every question we analyzed, and several more got close-to-perfect scores. There were also a few questions every participant received 2 points on, essentially rendering the question useless. This problem is partially due to the fact that the IQ test is based on norms and does not have a very good measure of people at the very high and low ends of the spectrum. The MHC does not rely on norms and so would not have these problems.

Furthermore, in this study, only those questions that were identical to the English version of the WAIS–IV were used in the analysis. This was in order to decrease errors in the translation of the scoring manual but decreased the number of items that could be used. This similarly decreased the variance, and with more items the r could have been higher. The fact that the French and English versions had many differences is also an interesting characteristic of the WAIS–IV. If the translation is not the same, the test may be testing different constructs in different cultures because, as discussed previously, the creators do not have a good idea of what exactly they are testing. This study also only used questions from the VCI and not the entire test. This means that any generalizations to the entire WAIS–IV and the measurement of IQ in general cannot strictly be made from this study alone. However, the other sections of the IQ test were judged to not be easily scored in terms of stage, which indicates that they do not assess the solving of complex tasks.

Conclusion and Future Directions

As discussed, this study had a sample of French-speaking, above-average-IQ participants. Future studies would benefit from a more diverse sample that would increase the variance.
While it is unlikely that culture impacts stage (Giri, in press), and the WAIS–IV translations were psychometrically designed to have the same distribution across different languages, studies from different cultures and languages could prove useful. An ideal study for the future would consist of recording participants' responses to the items directly and then scoring them according to the HCSS. These scores could then be used to get a more accurate measure of stage, which could then be compared to …

References

Borsboom, D. (2006). The attack of the psychometricians. Psychometrika, 71, 425–440. http://dx.doi.org/10.1007/s11336-006-1447-6

Commons, M. L., Gane-McCalla, R., Barker, C. D., & Li, E. Y. (2014). The model of hierarchical complexity as a measurement system. Behavioral Development Bulletin, 19, 9–14.

Commons, M. L., Goodheart, E. A., Pekker, A., Dawson, T. L., Draney, K., & Adams, K. M. (2008). Using Rasch scaled stage scores to validate orders of hierarchical complexity of balance beam task sequences. Journal of Applied Measurement, …

Dudek, S. Z., Lester, E. P., Goldberg, J. S., & Dyer, G. B. (1969). Relationship of Piaget measures to standard intelligence and motor scales. Perceptual and Motor Skills, 28, 351–362. http://dx.doi.org/10.2466/pms.1969.28.2.351

Giri, S. (in press). Crosscultural homogeneity in social perspective taking. Behavioral Development Bulletin.

Humphreys, L. G., & Parsons, C. K. (1979). Piagetian tasks measure intelligence and intelligence tests assess cognitive development: A reanalysis. Intelligence, 3, 369–381. http://dx.doi.org/10.1016/0160-2896(79)90005-9

Inhelder, B., & Piaget, J. (1958). The growth of …

McClelland, D. C. (1973). Testing for competence rather than for "intelligence." American Psychologist, 28, 1–14. http://dx.doi.org/10.1037/h0034092

Neisser, U., Boodoo, G., Bouchard, T. J., Boykin, A. W., Brody, N., Ceci, S. J., . . . Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51, 77–101. http://dx.doi.org/10.1037/0003-066X.51.2.77

Pelli, D. G., & Farell, B. (1995). Psychophysical methods. In M. Bass, E. W. Van Stryland, D. R. Williams, & W. L. Wolfe (Eds.), Handbook of optics (2nd ed., Vol. I, pp. 29.21–29.13). New …