ARTICLES
The Revised Almost Perfect Scale
Robert B. Slaney, Kenneth G. Rice, Michael Mobley, Joseph Trippi,
and Jeffrey S. Ashby
This article describes the development of the Almost Perfect Scale-Revised (APS-R). Exploratory
and confirmatory factor analyses and data exploring the reliability and construct validity
of the subscales are provided. The results support the existence of 3 subscales with adequate
internal consistencies and promising relationships with other relevant measures.
The construct of perfectionism has recently been receiving increased attention in the
psychological literature. For example, in a theoretical article titled “The Destruc-
tiveness of Perfectionism” in the American Psychologist, Blatt (1995) discussed the
relationship among suicide, perfectionism, and what he termed “introjective or self-critical
depression.” More recently, Blatt, Zuroff, Quinlan, and Pilkonis (1996) and Blatt, Zuroff,
Bondi, Sanislow, and Pilkonis (1998) presented additional analyses of data gathered in
the National Institute of Mental Health Treatment of Depression Collaborative Research
Program. These analyses indicated that perfectionism was negatively related to respon-
siveness to brief therapies for depression. Furthermore, a recent survey of a large num-
ber of college students who were in counseling found that more than 26% of the women
and 21% of the men reported that perfectionism was “quite distressing or extremely
distressing” to them (Research Consortium of Counseling and Psychological Services to
Higher Education, 1995).
The aforementioned studies clearly suggest that perfectionism has important implications
for counseling. However, the precise nature of these implications is uncertain because no
clear definitions of perfectionism are provided. These studies are not unique in this regard.
Although perfectionism is typically addressed as if it were a familiar personality character-
istic or trait, and clients in counseling are often described as perfectionistic, no formally
agreed-on definition of perfectionism exists within the psychological literature. Given this
apparent need for a formal, professionally useful definition of perfectionism, it seems reason-
able to investigate the meanings that are attached to this term by examining definitions of
perfectionism that are in general use.
Robert B. Slaney is a professor in the Department of Counselor Education, Counseling Psychology, and
Rehabilitation Services at Pennsylvania State University. Kenneth G. Rice is an associate professor in the
Department of Counseling, Educational Psychology, and Special Education at Michigan State University.
Michael Mobley is an assistant professor in the Department of Counseling Psychology at the University
of Missouri-Columbia. Joseph Trippi is a senior consultant at SHL Landy Jacobs, Inc., in State College,
Pennsylvania. Jeffrey S. Ashby is an associate professor in the Department of Counseling and Psychological
Services at Georgia State University, Atlanta. The authors thank Jennifer Greegorek for her helpful
comments on an earlier version of this article. Correspondence regarding this article should be sent to
Robert B. Slaney, Department of Counselor Education, Counseling Psychology, and Rehabilitation Services,
327 CEDAR, Pennsylvania State University, University Park, PA 16802-3110 (e-mail: rxt@psu.edu).
130 Measurement and Evaluation in Counseling and Development • October 2001 • Volume 34

A sampling of dictionary definitions reveals that perfectionism is defined as “an extreme
or excessive striving for perfection, as in one’s work” (Webster's Ninth New Collegiate
Dictionary, 1988, p. 873); “a predilection for setting extremely high standards and being
displeased with anything less” (Webster's II New College Dictionary, 1995, p. 816); or,
similarly, “a disposition to regard anything short of perfection as unacceptable” (Merriam-
Webster's Collegiate Dictionary, 1993, p. 863). Perfection, in turn, is defined as “an
unsurpassable degree of accuracy or excellence” (Merriam-Webster’s Collegiate Dictionary,
1993, p. 863). It seems understandable, then, that when the term perfectionism is used in the
Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric
Association, 1994), extreme standards of behavior are emphasized. The first criterion for
the diagnosis of obsessive-compulsive personality disorder refers to “perfectionism that
interferes with task completion (e.g., is unable to complete a project because his or her
overly strict standards are not met)” (American Psychiatric Association, 1994, p. 672).
The definitions quoted above assume that (a) having excessively high personal standards
for performance or behavior is central to defining perfectionism and (b) having such stan-
dards is problematic, if not pathological. These assumptions also permeate the anecdotal
literature on perfectionism (e.g., Hollender, 1965; Pacht, 1984) and, logically enough, have
significantly influenced attempts to develop empirical measures of perfectionism. For ex-
ample, Burns's (1980) Perfectionism Scale used items measuring high personal standards
and was based on a previous scale that measured “a number of self-defeating attitudes
commonly seen in people who suffer from clinical depression and anxiety” (p. 34). Hewitt
and Flett (1991) also indicated that having high personal standards, as best captured by the
Self-Oriented subscale of their Multidimensional Perfectionism Scale, was problematic, as
were their Other-Oriented and Socially Prescribed subscales. Frost, Marten, Lahart, and
Rosenblate (1990), in developing their Multidimensional Perfectionism Scale, developed
original items in addition to taking items from Burns's scale, a second scale measuring
eating disorders, and yet another scale measuring obsessiveness. With the exception of
their Personal Standards scale, which Frost et al. believed was not necessarily problem-
atic, the other scales were seen as measuring negative psychological concerns. Slaney and
Johnson (1992) attempted to emphasize the potentially positive aspects of perfectionism
as well as the implications for counseling in developing their measure of perfectionism,
the Almost Perfect Scale (APS). However, their review of the available literature led them
to include more negative than positive dimensions in their scale.
All of the scales by Hewitt and Flett (1991), Frost et al. (1990), and Slaney and Johnson
(1992) conceived of perfectionism as multidimensional. Despite the initial emphasis of the
scales on the negative aspects of perfectionism, three studies have indicated that the construct
possesses both negative and positive dimensions. Frost, Heimberg, Holt, Mattia, and
Neubauer (1993) factored the subscale scores of Hewitt and Flett’s (1991) scale and Frost
et al.'s (1990) scale and found two higher order factors, a “Positive Striving” factor and a
“Maladaptive Evaluation Concerns” factor. Slaney, Ashby, and Trippi (1995) included the
APS and essentially replicated the factor analysis by Frost et al. (1993). Again, there were
two higher order factors that represented positive and negative dimensions that were highly
consistent with the Positive Striving and the Maladaptive Evaluation Concerns factors that
Frost et al. (1993) had found. A more recent study by Rice, Ashby, and Slaney (1998)
performed a confirmatory factor analysis on Frost et al.'s (1990) and Slaney and Johnson's
(1992) scales. They also found support for a higher order two-factor structure, labeled
adaptive and maladaptive perfectionism, that was very similar to the results of Slaney
et al. (1995).
Although the Positive Striving or Adaptive factor in all three studies was clearly dominated
by the subscales that measured high standards, the essential nature of the Maladaptive or
Maladaptive Evaluation Concerns factor is harder to distinguish. In a similar manner,
although the centrality of high standards to the definitions of perfectionism quoted above
seems quite clear, the connection of the subscales composing the Maladaptive Evaluation
Concerns factor to these definitions is more difficult to discern. In fact, the subscales com-
posing the Maladaptive factor seem to be based on assumed causes, concomitants, or the
resultant effects of being perfectionistic rather than a definition of perfectionism itself. For
example, Hewitt and Flett's (1991) Socially Prescribed subscale, according to which perfec-
tionists believe that others “have unrealistic standards for them, evaluate them stringently,
and exert pressure on them to be perfect” (p. 457), seems to be a cause of perfectionism.
Frost et al.'s (1990) Parental Expectations and Parental Criticism subscales also seem to be
causal. Slaney and Johnson's (1992) maladaptive subscales—Anxiety, Procrastination, and
Relationship Difficulties—can all be construed as the result of being perfectionistic rather
than defining the essential nature of the construct. In a similar manner, the Other-Oriented
subscale of Hewitt and Flett (1991) and Frost et al.’s (1990) Organization, Concern Over
Mistakes, and Doubts About Actions subscales can also be seen as resulting from being
perfectionistic. It does seem that, despite the development of the aforementioned scales,
there remains a clear need for a more adequate definition of perfectionism, especially one
that clearly addresses the negative aspect of perfectionism.
Two qualitative studies on perfectionism seem suggestive. Slaney and Ashby (1996)
approached the study of perfectionism by locating and interviewing a criterion group of 37
perfectionists. When these participants described themselves or defined perfectionism, the
centrality of high personal standards was evident. In addition, orderliness, neatness, or
organization was also frequently seen as integral to the definition of perfectionism, most
often in combination with high standards. There was, however, considerable variance in
the participants’ evaluations of their perfectionism. Although most participants saw their
perfectionism as distressing to some degree, none who were asked said they would give it
up. This ambivalence seems highly consistent with the empirical findings indicating that
the construct is multidimensional and contains both positive and negative dimensions. The
interviews also seemed to suggest that distress was associated with what participants per-
ceived as a discrepancy between their high personal standards for performance and their
perceptions of their success in meeting those high standards. Slaney, Chadha, Mobley, and
Kennedy (2000), in a study conducted on Hindu university students in India, also found
support for the centrality of high standards and, secondarily, for a sense of orderliness in
the participants’ definitions of perfectionism. Again, there was clear ambivalence about
being perfectionistic. The distress associated with perfectionism also seemed to be related
to the perceived discrepancy between the high standards that the participants held and
their performance, especially their academic performance. These results tentatively sug-
gest that the concept of a perceived discrepancy between standards and performance might
provide a potentially useful definition of the negative aspect of perfectionism. Discrep-
ancy, as defined, seems integral to perfectionism and phenomenologically operationalizes
the excessive aspect of perfectionism contained in the dictionary definitions.
On the basis of the preceding review, an adequate and useful measure of perfec-
tionism would seemingly need to meet at least four criteria: (a) It should clearly specify
the variables that define perfectionism as discriminated from variables that are seen as
causal, correlational, or the effects of being perfectionistic; (b) it should pay close
attention to the empirically supported negative and positive aspects of perfectionism;
(c) it should be closely related to commonly held ideas about perfectionism as ex-
emplified in the dictionary definitions; and (d) it should be empirically sound. In
addition, if the definition is to be potentially relevant to counseling psychology, in
general, and therapeutic work, in particular, it should contain clear and logical impli-
cations for both treatment and the potential evaluation of the effects of that treatment.
The development of the Almost Perfect Scale-Revised (APS-R), reported here, was
an attempt to develop a more adequate measure of perfectionism to meet the aforementioned
criteria.
METHOD
Participants
There were 809 participants overall. Three hundred forty-seven undergraduate students
were recruited from two introductory classes at a large mid-Atlantic university, 258
undergraduates were recruited from introductory classes at a large midwestern univer-
sity, and 204 students were attending a moderately sized university in the Midwest.
Extra credit was given for completion of the assigned materials. There were two mid-
Atlantic samples. The first one was from introductory psychology classes and had 173
participants: 74 men, 89 women, and 10 who did not indicate their gender. Their mean
age was 19.23 years (SD = 3.79 years), and the age range was from 17 to 43 years. The
racial-ethnic backgrounds of the participants were 86.2% European American, 7.8%
Asian American, 2.4% African American, 1.8% Hispanic or Latino American, 1.2%
Native American, and 0.6% “other” (or no response was provided for this item). The
second mid-Atlantic sample had a total of 174 educational psychology participants.
There were 50 men and 121 women, and 3 participants did not report their gender.
Their mean age was 20.42 years (SD = 3.82 years), and the age range was from 17 to
51 years. The racial-ethnic backgrounds of the participants were 92.3% European
American, 3.8% Asian American, 2.2% African American, 1.2% Latina or Latino
American, and 0.5% “other” (or no response was provided). The first midwestern sample
was composed of 83 men, 159 women, and 16 participants who did not specify their
gender. Their mean age was 21.00 years (SD = 5.82 years), and the age range was
from 17 to 57 years. The sample was 90% European American, 3% Asian American,
3% African American, 1% Latina or Latino American, and 3% “other” (or no response
was provided). The second midwestern sample consisted of 71 men and 133 women. The
average age of these participants was 20.81 years (SD = 3.66 years). Approximately 95%
of the participants were White or European American, another 3% were Black or African
American, and 2% checked “other” or did not respond.
Item Development
The perfectionism research team had a series of 2-hour meetings to discuss whether to
revise the APS (Slaney & Johnson, 1992). The concerns that we noted previously in
this article were discussed. After the research team decided to proceed with the revi-
sion, they discussed dimensions that should be included in defining perfectionism,
with particular emphasis on what construct or constructs would best represent the
negative aspect of the definition of perfectionism. Discussions encompassed a variety
of possibilities and were substantively aided by the available research on the APS,
especially the interview study by Slaney and Ashby (1996), as well as the research by
Hewitt and Flett (1991) and the studies by Frost and his associates (Frost et al., 1990,
1993). The discussions eventually led to a consensus that high standards and orderliness
captured the essential and defining positive aspects of perfectionism. The central
and defining negative aspect of perfectionism was believed to have been captured by
the concept of discrepancy, defined as the perceived discrepancy or difference be-
tween the standards one has for oneself and one’s actual performance.
The next task was to develop items that measured each of the hypothesized dimensions of
perfectionism. Discussions began with the items comprising the original Standards and
Order subscale from the original APS. The APS contained subscales measuring high stan-
dards and orderliness that were developed in an attempt to measure these constructs without
conveying any negative preconceptions about the possession of such characteristics
(Johnson & Slaney, 1996). An indication that this attempt was successful can be found in
Slaney et al. (1995), who found that the combined Standards and Order subscale had a
structure coefficient of .89 on the Positive Striving factor and a coefficient of .02 on the
Maladaptive Evaluation Concerns factor. Subscale intercorrelations in Slaney et al. (1995)
and Johnson and Slaney (1996) are consistent with those coefficients in suggesting that the
Standards and Order subscale had negative or minimal correlations with measures of anxi-
ety or depression. Frost et al. (1990) discarded organization as irrelevant to perfectionism.
However, the frequency with which participants in Slaney and Ashby’s (1996) interview
study mentioned orderliness in defining perfectionism suggested that the construct of or-
der, though of secondary importance to the possession of high standards, deserved further
consideration. The six items assessing order from the original APS were seen as adequate
psychometrically, on the basis of their structure coefficients in Slaney et al. (1995), and
were included in the revised version. To measure high standards, the six items from the
APS assessing standards were also retained. Seven additional items were added to strengthen
the Standards subscale while retaining an emphasis on the positive aspects of having high
standards. Both high standards and orderliness were posited as representing the positive
aspects of perfectionism.
Finally, it was clear that the subscales of the original APS did not adequately measure the
negative dimension of perfectionism. Such a subscale would need to measure the extreme
or excessive nature of the standards involved in perfectionism as defined above. The con-
cept of discrepancy, defined here as the perception that personal high standards are not
being met, was seen as representing the defining negative aspect of perfectionism. It seems
consistent with the dictionary definitions, especially Webster's II New College Dictionary
(1995) definition of perfectionism as “a predilection for setting extremely high stan-
dards and being displeased with anything less” (p. 816). The concept is also consistent
with the results of research (Slaney & Ashby, 1996; Slaney et al., 2000) and captures the
essential negative aspect of the construct. Twenty new items were added to the subscale
to measure the concept of discrepancy. Thus, the initial version of the APS-R comprised
a total of 39 items: 6 that measured order, 13 that measured high standards, and 20 that
measured discrepancy.
Analysis
There were four major components to the data analysis strategy. In general, a very
conservative approach to the analyses was taken. Listwise deletion of data was used
throughout the analyses. We considered Joreskog’s (1977) statement that the delinea-
tion between exploratory and confirmatory analyses is not as strict a dichotomy as is
often assumed. On the basis of this statement and recommendations from Briggs and
Cheek (1986), the 39 items chosen to represent the three core aspects of perfectionism
were initially subjected to an exploratory principal-components factor analysis (EFA)
with data from the first sample (N = 347). Items remaining after the EFA were sub-
jected to a multiple-groups confirmatory factor analysis (CFA). Anderson and Gerbing
(1988) observed that “because initially specified models almost invariably fail to
provide acceptable fit, the necessary re-specification and re-estimation using the same
data mean that the analysis is not exclusively confirmatory” (p. 412). They noted that
the next logical step would be to cross-validate the final model on another sample. For
that reason, in a multiple-groups analysis, the second sample (N = 258) was used to
cross-validate the results from the EFA on the first sample (Fassinger, 1987; Joreskog
& Sorbom, 1988). The resulting measure was then administered to another sample
to verify the factor structure of the reduced and revised scale. Finally, the construct
validity of the scores from the subscales was examined in terms of their associations
with other measures of perfectionism and with various indicators of psychological
adjustment.
Instruments
Multidimensional Perfectionism Scale (HFMPS; Hewitt & Flett, 1991). This scale con-
sists of three 15-item subscales. The items are scored on Likert scales ranging from 1
(strongly agree) to 7 (strongly disagree). The Self-Oriented subscale assesses high self-
standards and excessive motivation to attain perfection (e.g., “One of my goals is to be
perfect in everything I do”). Other-Oriented Perfectionism assesses unrealistic expectations
for significant others (e.g., “If I ask someone to do something, I expect it to be done
flawlessly”), and Socially Prescribed Perfectionism assesses the belief that others are im-
posing their perfectionistic standards on the self (e.g., “Anything I do that is less than
excellent will be seen as poor work by those around me”). Hewitt, Flett, Turnbull-Donovan,
and Mikail (1991) reported test-retest reliabilities over a 3-month period ranging from .60
to .69 using 387 psychiatric inpatients and 49 outpatients. Hewitt and Flett (1991) found
coefficient alphas ranging from .82 to .87 for scores derived from their three scales among
a sample of 156 college students. Scale score intercorrelations ranged from .25 to .40. A
principal-components factor analysis of the responses of 1,106 university students yielded
three factors that accounted for 36% of the variance and primarily conformed to the three
scales of the HFMPS. A similar analysis with 263 psychiatric patients yielded similar
results that accounted for 34% of the variance; the coefficient alphas ranged from .74 to
.88. Hewitt and Flett (1991) interpreted the results of studies of four samples of college
students, with whom a variety of measures were used, supporting the concurrent and dis-
criminant validity of their three subscales.
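The coefficient alphas reported for this and the following instruments are internal consistency estimates computed with the standard Cronbach formula. As a minimal illustrative sketch (this is not the authors' code; the function name and data layout are ours), coefficient alpha can be computed from a respondents-by-items score matrix as follows:

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for a respondents x items score matrix
    (list of equal-length rows): alpha = k/(k - 1) * (1 - sum of the item
    variances / variance of the respondents' total scores)."""
    k = len(items[0])
    item_vars = [variance([row[j] for row in items]) for j in range(k)]
    total_var = variance([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When items covary strongly relative to their individual variances, alpha approaches 1; uncorrelated items drive it toward 0.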
Multidimensional Perfectionism Scale (FMPS; Frost et al., 1990). This scale consists of
35 items that are scored on a 5-point Likert scale ranging from 1 (strongly disagree) to 5
(strongly agree). Frost et al. (1990) defined perfectionism as “the setting of excessively
high standards for performance accompanied by overly critical self-evaluations” (p. 450).
The FMPS has five dimensions that relate to total perfectionism. The first and, to Frost,
the most important dimension is concern about making mistakes. The second is the setting
of high personal standards. The third and fourth dimensions concern perceived parental
expectations and parental criticism; the fifth dimension is the tendency to doubt the qual-
ity of one’s performance. An additional dimension measures the tendency to be organized
and orderly. Frost et al. (1990) used the responses of 282 female university students to
their scale items and found that 10 factors accounted for 64% of the variance. Six factors
were retained that accounted for 54% of the variance, and their items were administered
to 178 female university students. These factors “roughly corresponded” (Frost et al., 1990,
p. 453) to the six dimensions that were used to develop the items. The Cronbach's coefficient
alphas for scores on the six scales ranged from .77 to .93, with a coefficient of .90
for the overall scale score. Concurrent validity for the FMPS scores was demonstrated in
that the measure related in expected directions with other measures of perfectionism, such
as Hewitt et al.'s (1991) HFMPS and Slaney et al.'s (1995) APS (Frost et al., 1993; Rice et
al., 1998). Criterion-related validity was evidenced by correlations between FMPS subscales
and measures of psychological symptoms (e.g., Brief Symptom Inventory; Derogatis &
Melisaratos, 1983) and adjustment such as compulsiveness, self-esteem, procrastination,
and depression (Frost & Marten, 1990; Frost et al., 1990, 1993).
Beck Depression Inventory (BDI; Beck, 1978). The BDI is a 21-item scale scored with 4-
point ratings. The scale is designed to measure the severity of depressive symptomatology.
Higher scores indicate more cognitive, motivational, behavioral, and somatic symptoms
of depression. Beck, Steer, and Garbin (1988) reviewed numerous studies of the BDI that
were conducted over a 25-year period and found considerable support for the reliability
and validity of BDI scores across a variety of samples. The BDI was found to have high
internal consistency, with a mean coefficient alpha of .87 across 25 studies (range = .73–.95).
Beck et al. reported the test-retest reliability of the standard version of the BDI across
numerous studies, with ranges of .65 to .90 over periods of 5 days to 3 weeks. Evidence of
concurrent validity was found in studies relating the BDI to other measures of depression
such as the Zung Self-Rating Depression Scale (Zung, 1965) and the Multiple Affect Adjective
Checklist Depression Scale (Zuckerman & Lubin, 1965); Pearson product-moment correla-
tions in these studies ranged from .57 to .86. Evidence of discriminant validity was found
in several studies in which the BDI differentiated between individuals in psychiatric and
nonpsychiatric samples (Beck et al., 1988).
Rosenberg Self-Esteem Inventory (Rosenberg, 1965). This scale consists of 10 statements
that measure positive self-esteem. Items are scored on a 4-point Likert scale ranging from
1 (strongly agree) to 4 (strongly disagree). Half of the items are worded positively and half
are worded negatively. Higher scores on the measure indicate positive self-esteem or a
general perception of self-worth. Adequate reliability for Self-Esteem Inventory scores
has been demonstrated and summarized by Goldsmith (1986) and Crandall (1973). Esti-
mates of internal consistency reliability have ranged from .86 to .93 (Goldsmith, 1986),
whereas test-retest reliability over a 2-week period was .85 (Crandall, 1973). Goldsmith
(1986) and Rosenberg (1965, 1979) reported that scores from the inventory have corre-
lated with other measures in expected directions, thus supporting its validity.
Penn State Worry Questionnaire (PSWQ; Meyer, Miller, Metzger, & Borkovec, 1990). The
PSWQ is a 16-item scale that is scored on a 5-point scale ranging from 1 (not at all typical of
me) to 5 (very typical of me). A principal-components analysis with oblique rotation on the
responses of 336 college students to a pool of 161 items yielded one primary factor that ac-
counted for 22.6% of the variance. Scores that were based on the final 16 items yielded a
coefficient alpha of .93. A sample of 405 college students completed the PSWQ along with the
State and Trait Anxiety subscales of the State-Trait Anxiety Inventory (Spielberger, Gorsuch,
Lushene, Vagg, & Jacobs, 1983) and the BDI (Beck, 1978). The coefficient alpha was .93, and
the correlations with the Trait and State Anxiety subscales were .64 and .49, respectively, and
.36 with the BDI. Test-retest on a subsample of 47 participants from the previous study yielded
a correlation of .92. Separate samples of 60 college students were administered the PSWQ
twice, at 2- and 4-week intervals. The test-retest correlation was .75 at 2 weeks and .74 at 4
weeks. Additional measures assessing the internal consistency, the test-retest reliability, and
the concurrent validity of the scores from the scale provided supportive results.
Marlowe-Crowne Social Desirability Scale (Crowne & Marlowe, 1960). This scale is a
33-item measure of socially desirable responding. Crowne and Marlowe (1964) reported
alphas for multiple studies ranging from .73 to .88 and test-retest reliability of .88 over a
1-month period. A series of correlations suggested to the authors that the scale measured
the need for approval, although Paulhus (1984) concluded that it primarily measured im-
pression management.
RESULTS
Exploratory Factor Analysis
Principal-components factor analysis was performed on the 39 items from the first sample.
Bartlett's test of sphericity was large (6,449.93; p < .0001) and indicated that the items
shared common factors. The Kaiser-Meyer-Olkin measure of sampling adequacy was .90,
which also supported the use of these data in a factor analysis (Kim & Mueller, 1978).
Three factors were expected in this analysis, and oblique rotation was used because of the
expectation that the factors could correlate with one another. The three factors that emerged
in this analysis had eigenvalues, before rotation, of 8.32, 6.95, and 2.28 and accounted for
approximately 45% of the variance. After rotation, structure coefficients for the first factor
ranged from .32 to .81. Coefficients on the second factor ranged from .23 to .82. Coeffi-
cients on the third factor ranged from .30 to .94. As an initial step toward item reduction,
we used a decision rule that, to be retained on the scale, an item must have had a structure
coefficient greater than .45 on one factor and less than .35 on any other factor. These
criteria seemed to balance conservative and liberal approaches in structure coefficient
decision making. This procedure resulted in a measure with 13 items on the Discrepancy
factor, 7 items on the High Standards factor, and 4 items on the Order factor.
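The retention rule above amounts to a simple filter over the rotated structure-coefficient matrix. The sketch below is a hypothetical reconstruction (the function name, thresholds as defaults, and matrix layout are our assumptions, not the authors' software):

```python
def retain_items(structure, primary=0.45, secondary=0.35):
    """Apply the item-retention rule to an items x factors matrix of
    structure coefficients: keep an item only if its largest absolute
    coefficient exceeds `primary` and every other coefficient is below
    `secondary`. Returns (item_index, factor_index) pairs for kept items."""
    retained = []
    for i, row in enumerate(structure):
        row = [abs(c) for c in row]
        top = max(range(len(row)), key=row.__getitem__)  # dominant factor
        others_low = all(c < secondary for j, c in enumerate(row) if j != top)
        if row[top] > primary and others_low:
            retained.append((i, top))
    return retained
```

An item loading .50 on one factor but .40 on another would be dropped as a cross-loader under this rule, even though its primary coefficient clears the .45 threshold.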
Multiple-Groups Confirmatory Factor Analysis
Cross-validation was performed by using the results derived from the mid-Atlantic sample
and comparing the parameter estimates from the first midwestern sample with the initially
derived estimates. This multiple-groups analysis consisted of several steps and tests of
various models. In the first analysis, a model was tested in which parameter estimates
were held to be equivalent between the two samples. A second analysis tested a model in
which parameter estimates in the second sample were freely estimated. If this model pro-
duced a significant improvement in fit or, stated differently, if the invariant model significantly
deteriorated model fit, then various subsets of parameter estimates could be examined
to determine where sample differences lay (e.g., structure coefficients, measurement
error terms, factor correlations). We were interested in exploring two questions with these
analyses. First, could we find cross-validated support for the item allocations determined
from the first set of analyses, and second, what, if any, deviations from our initial findings
would emerge in the second sample?
The items retained after the initial EFA were subjected to a multiple-groups CFA in which
each item was constrained to indicate only the factor identified in the first analysis. Thus,
13 items were restricted to indicate only the Discrepancy factor, 7 items were constrained
as indicators of the High Standards factor, and 4 items were constrained to indicate the
Order factor. The factors identified by these constraints were permitted to correlate with
one another. Maximum likelihood was the estimation method, and covariance matrices
were analyzed. The EQS 5.7 program was used (Bentler, 1995) in these and subsequent
analyses. Adequacy of model fit was determined by the chi-square test, the comparative fit
index (CFI; Bentler, 1990), the standardized root-mean-square residual (SRMR), the root-
mean-square error of approximation (RMSEA), and the 90% confidence interval around
the RMSEA. A nonsignificant chi-square would indicate no significant difference between
a hypothesized model and observed variance-covariance matrices. However, because the chi-square statistic can be affected by sample size under certain conditions (Marsh, Balla, & McDonald, 1988) and because some underlying assumptions regarding the chi-square may be invalid (Bentler, 1990), other indexes of fit were examined. The CFI is an index of model fit that compares the theoretical model with a null model. The CFI ranges from 0 to 1.00; values closer to 1.00 indicate better-fitting models. Chi-square difference tests (Δχ²)
were used to compare nested structural models. Finally, as recommended by Quintana and
Maxwell (1999), and because all fit indexes evaluate fit on the basis of omitted paths, we
examined structure coefficients to aid our decision making regarding model adequacy.
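As a concrete illustration of how one of these indexes behaves, the RMSEA point estimate can be recomputed directly from a model's chi-square, degrees of freedom, and sample size. The sketch below (Python; not part of the original analyses, which used EQS) assumes the standard point-estimate formula:

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """Root-mean-square error of approximation from a model chi-square.

    Standard point-estimate formula:
    RMSEA = sqrt(max(chi_sq - df, 0) / (df * (n - 1))).
    """
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# The values reported in the text reproduce to rounding error; e.g.,
# chi-square(549, N = 601) = 1,460.27 gives RMSEA = .053.
print(round(rmsea(1460.27, 549, 601), 3))  # -> 0.053
```

The same function reproduces the .050 and .07 values reported for the later models in this article.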
Our first multiple-groups CFA constrained structure coefficients, error terms, and factor
correlations to be invariant between samples. These constraints resulted in a fair fit for the
data, χ²(549, N = 601) = 1,460.27, p < .05, CFI = .87, SRMR = .108, RMSEA = .053 (.049 to .056). Inspection of the standardized solution in this analysis revealed a conspicuously low structure coefficient (.05) for one of the items on the Discrepancy factor, whereas all remaining structure coefficients ranged from .49 to .83. Indeed, problems associated with
this item were confirmed by the Lagrange multiplier test, which revealed that the most
significant change to the fit of this model would occur by eliminating this item. Accord-
ingly, we dropped that item and reran this analysis. This model and the existing constraints resulted in a close-to-fair fit for the data, χ²(503, N = 601) = 1,249.23, p < .05, CFI = .89,
SRMR = .087, RMSEA = .050 (.046 to .053). The Lagrange multiplier test revealed that
the most significant change to the fit of this model would occur by permitting one of the
error terms to be freely estimated. However, none of the other suggested model modifica-
tions would have resulted in a substantial change in fit.
The next model that we tested constrained the error terms and factor correlations to be
invariant between the groups but allowed the structure coefficients to be freely esti-
mated. This model did not significantly improve the fit for the data when compared with
the model that constrained all parameters to be invariant, Δχ²(23, N = 601) = 24.12, p > .05. The next model that we tested allowed the error terms to be freely estimated be-
tween the groups. This model significantly improved fit when compared with the fully
constrained model, Δχ²(23, N = 601) = 50.48, p < .001. Finally, we compared the fully
constrained model with one in which factor correlations were permitted to be freely
estimated while the other parameters were constrained invariantly. This model did not
significantly enhance model fit, Δχ²(3, N = 601) = 3.05, p > .05. Except for measurement error associated with the items, these comparisons suggest that we were able to cross-validate all aspects of our results. The final standardized solution for the combined sample appears in Table 1 along with the subscale intercorrelations.
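The chi-square difference test behind these nested-model comparisons is straightforward to reproduce: the difference between the chi-squares of two nested models is itself distributed as chi-square, with degrees of freedom equal to the difference in model degrees of freedom. A minimal sketch in Python with SciPy (the original analyses used EQS; the function and variable names here are ours):

```python
from scipy.stats import chi2

def chi_square_difference(chi_constrained, df_constrained, chi_free, df_free):
    """Chi-square difference test between two nested covariance-structure models.

    The more constrained model has the larger chi-square and df; the
    difference is referred to a chi-square distribution with delta-df
    degrees of freedom.
    """
    delta_chi = chi_constrained - chi_free
    delta_df = df_constrained - df_free
    return delta_chi, delta_df, chi2.sf(delta_chi, delta_df)

# Freeing the 23 error terms improved fit by 50.48 chi-square units:
delta, ddf, p = chi_square_difference(1249.23, 503, 1249.23 - 50.48, 503 - 23)
print(round(delta, 2), ddf, p < .001)  # -> 50.48 23 True
```

The same call confirms that Δχ²(23) = 24.12 for the freed structure coefficients is nonsignificant (p > .05).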
On the basis of the standardized solution, structure coefficients for the first factor ranged from .49 to .83. Item content for this factor was consistent with the discrepancy construct previously described. The second factor yielded coefficients ranging
from .50 to .75, with items tapping high standards. The third factor measured order,
with coefficients ranging from .73 to .85. Factor intercorrelations were -.07 between
Discrepancy and High Standards, -.05 between Discrepancy and Order, and .41 be-
tween High Standards and Order. Raw item scores for each factor were then used to
construct three subscales. Cronbach's coefficient alphas for the three subscales derived from the CFA were .92 for Discrepancy, .85 for High Standards, and .86 for Order. Although there is not an agreed-on cutoff for acceptable reliability estimates, recent guidelines by Salvia and Ysseldyke (1998) indicated that values greater than .80 are suitable for screening purposes and values greater than .90 are adequate for diagnostic purposes.
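Cronbach's coefficient alpha, the reliability estimate used throughout, can be computed directly from an item-response matrix. A brief sketch (Python with NumPy; the function name and data layout are ours, not from the original study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's coefficient alpha for an (n_respondents, k_items) array.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Perfectly consistent items yield alpha = 1.0; unrelated items drive it
# toward 0. For example:
responses = np.array([[1, 1], [2, 2], [3, 3]], dtype=float)
print(cronbach_alpha(responses))  # -> 1.0
```

Applied to the raw item scores for a subscale (e.g., the 12 Discrepancy items), this formula yields the coefficients reported above.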
Confirmatory Factor Analysis With the Final Items
The second midwestern sample was used to examine the factor structure of the final
version of the APS-R. The purpose of this portion of the study was to avoid the con-
founding that could occur if the final 23-item version was analyzed only in the context
of the original 39 items. Thus, we administered the APS-R to a third sample of participants, conducted a CFA, and examined the internal consistency of the resulting subscales.
On the basis of the previous analyses, the CFA model constrained 12 items as indi-
cators of the Discrepancy factor, 7 items to indicate the High Standards factor, and the
remaining 4 items to indicate the Order factor. Again, the factors were permitted to
correlate with one another. As in the other analyses, maximum likelihood was the es-
timation method, covariance matrices were analyzed, and the EQS 5.7 program was
used (Bentler, 1995). This model revealed an adequate fit for the data, χ²(227, N = 204) = 459.22, p < .05; SRMR = .08; RMSEA = .07; 90% confidence interval around RMSEA = .06 to .08; CFI = .90. Structure coefficients ranged from .56 to .87 for the Discrepancy factor, .42 to .84 for High Standards, and .58 to .82 for Order. Subscales were created on the basis of the items constrained to indicate each factor. The Discrepancy subscale was correlated -.13 with High Standards and -.19 with Order. The correlation between High Standards and Order was .47. Cronbach's coefficient alphas for scores from the Discrepancy, High Standards, and Order subscales were .91, .85, and
.82, respectively. The CFA and internal consistency results were comparable to results
achieved with the previous samples.
Measurement and Evaluation in Counseling and Development • October 2001 • Volume 34

TABLE 1
Combined Sample Confirmatory Factor Analysis of the Almost Perfect
Scale-Revised: Structure Coefficients and Correlations

                                                            High
Item                                          Discrepancy Standards  Order
 1. I often feel frustrated because I
    can't meet my goals.                          0.49      0.00      0.00
 2. My best just never seems to be good
    enough for me.                                0.73      0.00      0.00
 3. I rarely live up to my high standards.        0.66      0.00      0.00
 4. Doing my best never seems to be enough.       0.78      0.00      0.00
 5. I am never satisfied with my
    accomplishments.                              0.74      0.00      0.00
 6. I often worry about not measuring up
    to my own expectations.                       0.83      0.00      0.00
 7. My performance rarely measures up to
    my standards.                                 0.80      0.00      0.00
 8. I am not satisfied even when I know I
    have done my best.                            0.69      0.00      0.00
 9. I am seldom able to meet my own high
    standards for performance.                    0.80      0.00      0.00
10. I am hardly ever satisfied with my
    performance.                                  0.83      0.00      0.00
11. I hardly ever feel that what I've done
    is good enough.                               0.73      0.00      0.00
12. I often feel disappointment after
    completing a task because I know I
    could have done better.                       0.57      0.00      0.00
13. I have high standards for my
    performance at work or at school.             0.00      0.67      0.00
14. If you don't expect much out of
    yourself, you will never succeed.             0.00      0.50      0.00
15. I have high expectations for myself.          0.00      0.74      0.00
16. I set very high standards for myself.         0.00      0.70      0.00
17. I expect the best from myself.                0.00      0.71      0.00
18. I try to do my best at everything I do.       0.00      0.61      0.00
19. I have a strong need to strive for
    excellence.                                   0.00      0.78      0.00
20. I am an orderly person.                       0.00      0.00      0.77
21. Neatness is important to me.                  0.00      0.00      0.85
22. I think things should be put away in
    their place.                                  0.00      0.00      0.73
23. I like to always be organized and
    disciplined.                                  0.00      0.00      0.75

Discrepancy                                        —
High Standards                                   -0.07       —
Order                                            -0.05      0.41       —
Construct Validity
To examine the construct validity of the APS-R, each of the mid-Atlantic samples had com-
pleted other measures in addition to the APS-R. The participants from the first mid-Atlantic
sample completed the HFMPS (Hewitt & Flett, 1991) and the Rosenberg Self-Esteem Inventory (Rosenberg, 1965) and recorded their grade point averages (GPAs). The intercorrelations for these scales are reported in Table 2. The participants from the second mid-Atlantic sample completed the HFMPS, the FMPS (Frost et al., 1990), the BDI (Beck, 1978), the Rosenberg
Self-Esteem Inventory, the PSWQ (Meyer et al., 1990), and the Marlowe-Crowne Social
Desirability Scale (Crowne & Marlowe, 1960), and they recorded their GPAs. The
intercorrelations for these scales are reported in Table 3.
TABLE 2
Intercorrelations of the First Mid-Atlantic Sample

Scale                  1      2      3      4      5      6      7      8
APS-R
 1. High Standards     —
 2. Order             .33     —
 3. Discrepancy       .03    .06     —
MPS
 4. Self-Oriented     .64    .38    .31     —
 5. Other-Oriented    .24    .16   -.04    .41     —
 6. Socially
    Prescribed        .00    .07    .43    .39            —
 7. Self-Esteem       .15    .05   -.35   -.07          -.22     —
 8. GPA               .42    .03   -.18    .24    .00   -.08   -.03     —

Note. N = 173. Absolute values of correlations greater than .13 were significant at p < .05, one-tailed
test. APS-R = Almost Perfect Scale-Revised; MPS = Multidimensional Perfectionism Scale (Hewitt
& Flett, 1991); Self-Esteem = Rosenberg Self-Esteem Scale; GPA = grade point average.
To assess validity, we examined the correlations between selected perfectionism subscales
within each sample. As indicated in Tables 2 and 3, in both samples, the High Standards
subscale from the APS-R was significantly correlated (.64 and .55) with the Self-Oriented
Perfectionism subscale from the HFMPS (Hewitt & Flett, 1991), but correlations with the
other HFMPS subscales were less substantial (e.g., the correlation between High Stan-
dards and Socially Prescribed Perfectionism was .00 and -.05 in Samples 1 and 2, respec-
tively). In both samples, the APS-R Discrepancy subscale was significantly correlated
with Self-Oriented (.31 and .23) and Socially Prescribed Perfectionism (.43 and .45) on
the HFMPS. In the second sample, the High Standards subscale of the APS-R was significantly correlated with the Personal Standards subscale of the FMPS (.64) but was generally less related to the other FMPS dimensions (correlations ranged from -.11 to .31). In a similar manner, the APS-R Order subscale was significantly correlated with the FMPS Organization subscale (.88) but was not significantly related to the other subscales (correlations ranged from .02 to .16). The APS-R Discrepancy subscale was significantly correlated with Concern Over Mistakes (.55) and Doubts About Actions (.62) and less related to the other FMPS subscales (correlations ranged from -.13 to .34).
We also examined the relative strength of associations between perfectionism subscales
and indicators of adjustment or well-being. In these analyses, we tested differences be-
tween dependent correlations within each sample following the procedure described by
Glass and Hopkins (1984). The resulting t test from this procedure indicates whether there
is a significant difference between correlations obtained from the same sample. Because
we expected our APS-R subscales to be better indicators of various adjustment indices,
we based our inferential decision making on one-tailed statistical tests. We also limited
our analyses to the High Standards and Discrepancy subscales from the APS-R, the Self-
Oriented and Socially Prescribed Perfectionism subscales from the HFMPS, and the Per-
sonal Standards and Concern Over Mistakes subscales from the FMPS.
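The Glass and Hopkins (1984) procedure for comparing two dependent correlations that share a variable is Hotelling's t with n - 3 degrees of freedom, which is the best reading of the values reported in this article. A sketch of that test (Python; function and variable names are ours), which reproduces the reported comparisons to rounding error:

```python
import math

def t_dependent_r(r_xz: float, r_yz: float, r_xy: float, n: int):
    """Hotelling's t for two dependent correlations sharing variable Z.

    Compares r(X, Z) with r(Y, Z) from the same sample of size n, given
    the correlation r(X, Y) between the two predictors; df = n - 3.
    """
    det = 1 - r_xy**2 - r_xz**2 - r_yz**2 + 2 * r_xy * r_xz * r_yz
    t = (r_xz - r_yz) * math.sqrt((n - 3) * (1 + r_xy) / (2 * det))
    return t, n - 3

# Discrepancy (r = -.23) vs. Concern Over Mistakes (r = -.07) as
# correlates of GPA in the second sample (N = 174); the two predictors
# correlate .55, per the text.
t, df = t_dependent_r(-0.23, -0.07, 0.55, 174)
d = 2 * abs(t) / math.sqrt(df)  # Cohen's d computed from t, as in the text
print(round(t, 2), df, round(d, 2))  # -> -2.27 171 0.35
```

Note that the reported effect sizes are consistent with the conversion d = 2|t| / sqrt(df).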
In the first sample, we compared the correlations obtained (a) when the High Standards
and Discrepancy subscales were related to self-esteem and GPA and (b) when Hewitt and
Flett’s (1991) Self-Oriented Perfectionism and Socially Prescribed Perfectionism subscales
were correlated with self-esteem and GPA (see Table 2). Thus, in the first test, we com-
pared r = .15 (High Standards subscale and self-esteem) with r = -.07 (Self-Oriented Perfectionism subscale and self-esteem). This test was significant, t(170) = 3.50, p < .001, d = .54. Likewise, the comparison between the High Standards subscale and GPA (r = .42)
140 Measurement and Evalvationin Counseling and Development @ October200) @ Volume 34109 JO S@nIeA SINIOSGY “PLL = NV “SION Z IGE 895 "2I0N
— e0- eF pO CO nn a -aje9g
211800 [B100§
— wo- 6 ES so Leek 60- 60~ 60° €2- 60° ve" eBBlanY qUIOg apeID
— 6&- tw oF fy BL se pe er ae omsd
— ze 6 ae pe te- lo- €0- py pL es
— e0- By 9 6e 660° vt GO nu
uoissaidsa y08,
— 0 Zzo- eo- zo rz el ae vONBZIUeHLO %
— 6y 80 le- zo "= suogoy jnogy siqn0q
= iy tz Lt ve 20
es 62’ ek’
ve oe 6
19 az es ggg
— oF FF sh 90~
— 00 £0
— 8 8 7 peweLOIeS
= 9 om ‘Kouedsi9siq
— zy 18810
= Spuepurig UBIH
TE wy] 81295,
“ye 42 18014—SaW Held HIMEH—San wsdv
ajdwes onueny-PIN Puosag ey) Jo sUoNe|e110910}U
ea1evl
and the Self-Oriented Perfectionism subscale and GPA (r = .24) revealed a significant difference in the strength of associations, t(170) = 3.05, p < .001, d = .47. In both analyses,
the High Standards subscale was more strongly correlated with self-esteem and GPA than
was the Self-Oriented Perfectionism subscale.
Also in the first sample, we examined correlations between problematic dimensions of
perfectionism and adjustment. The first analysis found a significant difference between the correlations between the Discrepancy subscale and self-esteem (r = -.35) and the Socially Prescribed Perfectionism subscale and self-esteem (r = -.22), t(170) = -1.70, p < .05, d = .26. The final comparison within this sample examined the correlation between the Discrepancy subscale and GPA (r = -.18) and the Socially Prescribed Perfectionism subscale and GPA (r = -.08). This test revealed no statistically significant difference between the correlations, t(170) = 1.24, p > .10, d = .19.
In the second sample, we planned to conduct eight comparisons between correlations of
perfectionism subscales and measures of self-esteem, GPA, depression, and worry. Be-
cause the strength of associations between the HFMPS and the APS-R with indicators of adjustment was comparable to that observed in the first sample, we focused our com-
parisons on subscales from the FMPS (Personal Standards and Concern Over Mistakes)
along with the APS-R (High Standards and Discrepancy). Because some of the comparisons
ostensibly involved two nonsignificant correlations, those comparisons were dropped from
the analyses. All of the comparisons involving the High Standards and Personal Standards
subscales revealed statistically significant differences between correlations. In the first
analysis, we compared the correlation between the High Standards subscale and self-esteem
(r= .19) with the correlation between the Personal Standards subscale and self-esteem
(r = -.04) and found a significant difference between the correlations, t(171) = 3.70,
p < .001, d = .57. The same was true when the adjustment indicator was GPA, t(171) = 2.79, p < .005, d = .43. A significant difference also emerged in comparing correlations with the PSWQ, t(171) = -6.37, p < .001, d = .97. We should note that, in the comparison with the PSWQ, the APS-R High Standards subscale was negatively and not statistically significantly correlated with worry, whereas the FMPS Personal Standards subscale was statistically significantly and positively correlated with worry.
The next and final set of comparisons compared the correlations between the APS-R Discrepancy subscale (denoted r_D) and the four adjustment indicators (self-esteem, GPA, BDI, and PSWQ) with the correlations observed between the Concern Over Mistakes subscale and those indicators (denoted r_COM). There were statistically significant differences between the correlations in comparisons involving self-esteem (r_D = -.44, r_COM = -.28), t(171) = -2.46, p < .01, d = .38, and GPA (r_D = -.23, r_COM = -.07), t(171) = -2.27, p < .025, d = .35. However, comparisons involving the BDI (r_D = .49, r_COM = .41) and the PSWQ (r_D = .46, r_COM = .48) did not reveal statistically significant differences between correlations, t(171) = 1.28, p > .10, d = .20, and t(171) = 0.33, p > .25, d = .05, respectively.
DISCUSSION
The results of the principal-components factor analysis and the initial and the validating CFAs provide support for a three-factor measure of perfectionism: High Standards, Order, and Discrepancy. The structure coefficients of all of the items ranged from .42 to a high of .88. The Cronbach's alphas of the subscale scores ranged from .82 to .92,
indicating acceptable levels of internal consistency. The subscale score intercorrelations
indicated a moderate overlap between High Standards and Order, as was found in the
original APS. The modest relationships between the new Discrepancy subscale score and
both the High Standards and the Order subscale scores seemed to be consistent with the
conceptualization of perfectionism as consisting of positive and negative dimensions and
suggest that these dimensions are virtually independent.
Tables 2 and 3 indicate that the APS-R subscales relate to other measures of perfection-
ism in expected directions and with moderate to high correlations, as defined by Cohen
(1988). The relationships between High Standards and both self-esteem and GPA as well
as the adjustment scales suggest that the attempts to give the High Standards subscale a
positive connotation were successful. These results also suggest that this subscale may be
more positively associated with measures of achievement than with positive psychologi-
cal dimensions. The negative relationships between Discrepancy and both GPA and self-
esteem and the adjustment scales suggest the opposite for Discrepancy. That is, this subscale
is positively and substantively related to negative adjustment indicators and negatively,
but modestly, related to measures of achievement. The relationships between all of the
subscales of the APS-R and social desirability were minimal.
The comparisons between the APS-R subscales and other subscales of perfectionism
using measures of achievement, self-esteem, depression, and worry revealed that, in most
cases, the APS-R subscales were more strongly associated with these measures. The dif-
ferences in correlations when High Standards from the APS-R was compared with other
perfectionism subscales were highly consistent. Overall, the intercorrelations in Tables 2
and 3 provide the beginning framework for a persuasive nomological network of results
that support the construct validity of APS-R scores.
The results of this initial study seem promising for the APS-R. The Discrepancy subscale
seems to provide a measure of the negative aspect of perfectionism that makes sense in
terms of standard definitions and descriptions of perfectionism. The data are quite prom-
ising in terms of the network of correlational results. The independence of the Discrepancy
subscale from the High Standards and Order subscales suggests that these variables may
be well suited to measure the separate positive and negative aspects of perfectionism. The
Order subscale was moderately correlated with the High Standards subscale, although the
EFAs and CFAs worked well with all three factors. Although Order does not seem to be
related to the other variables measured here, it was mentioned with some frequency in the
interview studies by Slaney and Ashby (1996) and Slaney et al. (2000). Future studies
should address the importance, or lack of importance, of this variable in the study of
perfectionism. Given the positive results in the aforementioned studies with predominantly
European American participants, there is a clear need for future studies to investigate the
APS-R using more diverse samples.
In comparing the APS-R with the HFMPS and the FMPS, it needs to be stated that the
latter scales are based on quite different conceptualizations of perfectionism. Although we
attempted to incorporate a clear delineation between the positive and negative factors of
the APS-R, it makes sense that Hewitt and Flett (1991), in developing the HFMPS, in-
cluded both positive and negative items in their Self-Oriented subscale. Although the FMPS
seems closer in its conceptualization to the APS-R than the HFMPS, it was also based on
a generally negative conception of perfectionism. For both scales, the research conducted
by these capable colleagues provided a basis for the concept that perfectionism consists of
both negative and positive dimensions.
Overall, the APS-R does seem to be promising in terms of providing an adequate mea-
sure of perfectionism. That is, its subscales address the two-dimensional structure found
in previous research, they are closely related to commonly used definitions, and they have
empirical results that suggest that the subscales are sound. The discrepancy concept also
seems to logically suggest that psychological treatments for this negative aspect of perfec-
tionism might begin with carefully examining the discrepancy between one's standards and one's perceived performance. Cognitive-behavioral approaches may come to
mind most readily in terms of interventions, but other theoretical perspectives can also be
applied. Measuring change through repeated administrations of the Discrepancy subscale, and
others, seems like a logical approach to measuring possible therapeutic progress. Although the
APS-R seems promising, future researchers choosing a measure of perfectionism may need
to consider the conceptual basis that best fits their ideas and best meets the needs of their
particular research. At this point, the APS-R and its Discrepancy subscale, in particular,
might be usefully included in those deliberations.
REFERENCES
American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 411-423.
Beck, A. T. (1978). Depression Inventory. Philadelphia: Center for Cognitive Therapy.
Beck, A. T., Steer, R. A., & Garbin, M. G. (1988). Psychometric properties of the Beck Depression Inventory: Twenty-five years of evaluation. Clinical Psychology Review, 8, 77-100.
Bentler, P. M. (1990). Comparative fit indices in structural models. Psychological Bulletin, 107, 238-246.
Bentler, P. M. (1995). EQS: Structural equations program manual. Encino, CA: Multivariate Statistical Software.
Blatt, S. J. (1995). The destructiveness of perfectionism. American Psychologist, 50, 1003-1020.
Blatt, S. J., Zuroff, D. C., Bondi, C. M., Sanislow, C. A., & Pilkonis, P. A. (1998). When and how perfectionism impedes the brief treatment of depression: Further analyses of the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Journal of Consulting and Clinical Psychology, 66, 423-428.
Blatt, S. J., Zuroff, D. C., Quinlan, D. M., & Pilkonis, P. A. (1996). Interpersonal factors in brief treatment of depression: Further analyses of the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Journal of Consulting and Clinical Psychology, 64, 162-171.
Briggs, S. R., & Cheek, J. M. (1986). The role of factor analysis in the development and evaluation of personality scales. Journal of Personality, 54, 106-148.
Burns, D. (1980, November). The perfectionist’s script for self-defeat. Psychology Today, 34-52.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Crandall, R. (1973). The measurement of self-esteem and related concepts. In J. P. Robinson & P. R. Shaver (Eds.), Measures of social psychological attitudes (pp. 45-67). Ann Arbor: University of Michigan Press.
Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24, 349-354.
Crowne, D. P., & Marlowe, D. (1964). The approval motive: Studies in evaluative dependence. New York: Wiley.
Derogatis, L. R., & Melisaratos, N. (1983). The Brief Symptom Inventory: An introductory report. Psychological Medicine, 13, 595-605.
Fassinger, R. E. (1987). Use of structural equation modeling in counseling psychology research. Journal of Counseling Psychology, 34, 425-436.
Frost, R. O., Heimberg, R. G., Holt, C. S., Mattia, J. I., & Neubauer, A. L. (1993). A comparison of two measures of perfectionism. Personality and Individual Differences, 14, 119-126.
Frost, R. O., & Marten, P. (1990). Perfectionism and evaluative threat. Cognitive Therapy and Research, 14, 559-572.
Frost, R. O., Marten, P. A., Lahart, C., & Rosenblate, R. (1990). The dimensions of perfectionism. Cognitive Therapy and Research, 14, 449-468.
Glass, G. V., & Hopkins, K. D. (1984). Statistical methods in education and psychology. Englewood Cliffs, NJ: Prentice-Hall.
Goldsmith, R. E. (1986). Dimensionality of the Rosenberg Self-Esteem Scale. Journal of Social Behaviour and Personality, 1, 253-264.
Hewitt, P. L., & Flett, G. L. (1991). Perfectionism in the self and social contexts: Conceptualization, assessment, and association with psychopathology. Journal of Personality and Social Psychology, 60, 456-470.
Hewitt, P. L., Flett, G. L., Turnbull-Donovan, W., & Mikail, S. F. (1991). The Multidimensional Perfectionism Scale: Reliability, validity, and psychometric properties in psychiatric samples. Psychological Assessment: A Journal of Consulting and Clinical Psychology, 3, 464-468.
Hollender, M. H. (1965). Perfectionism. Comprehensive Psychiatry, 6, 94-103.
Johnson, D. P., & Slaney, R. B. (1996). Perfectionism: Scale development and a study of perfectionistic clients in counseling. Journal of College Student Development, 37, 29-41.
Joreskog, K. G. (1977). Factor analysis by least squares and maximum likelihood methods. In K. Enslein, A. Ralston, & H. S. Wilf (Eds.), Statistical methods for digital computers (pp. 125-153). New York: Wiley.
Joreskog, K. G., & Sorbom, D. (1988). LISREL VII: A guide to the program and applications. Chicago: SPSS.
Kim, J., & Mueller, C. (1978). Factor analysis: Statistical methods and practical issues. Beverly Hills, CA: Sage.
Marsh, H. W., Balla, J. R., & McDonald, R. P. (1988). Goodness-of-fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin, 103, 391-410.
Merriam-Webster's collegiate dictionary (10th ed.). (1993). Springfield, MA: Merriam-Webster.
Meyer, T. J., Miller, R. L., Metzger, R. L., & Borkovec, T. D. (1990). Development and validation of the Penn State Worry Questionnaire. Behaviour Research and Therapy, 28, 487-495.
Pacht, A. R. (1984). Reflections on perfection. American Psychologist, 39, 386-390.
Paulhus, D. L. (1984). Two-component model of socially desirable responding. Journal of Personality and Social Psychology, 46, 598-609.
Quintana, S. M., & Maxwell, S. E. (1999). Implications of recent developments in structural equation modeling for counseling psychology. The Counseling Psychologist, 27, 485-527.
Research Consortium of Counseling and Psychological Services to Higher Education. (1995, October). Nature and severity of college students' counseling concerns. Paper presented at the annual meeting of the Association of University and College Counseling Center Directors.
Rice, K. G., Ashby, J. S., & Slaney, R. B. (1998). Self-esteem as a mediator between perfectionism and depression: A structural equations analysis. Journal of Counseling Psychology, 45, 304-314.
Rosenberg, M. (1965). Society and the adolescent self-image. Princeton, NJ: Princeton University Press.
Rosenberg, M. (1979). Conceiving the self. New York: Basic Books.
Salvia, J., & Ysseldyke, J. E. (1998). Assessment. Boston: Houghton Mifflin.
Slaney, R. B., & Ashby, J. S. (1996). Perfectionists: Study of a criterion group. Journal of Counseling & Development, 74, 393-398.
Slaney, R. B., Ashby, J. S., & Trippi, J. (1995). Perfectionism: Its measurement and career relevance. Journal of Career Assessment, 3, 279-297.
Slaney, R. B., Chadha, N., Mobley, M., & Kennedy, S. (2000). Perfectionism in Asian Indians: Exploring the meaning of the construct in India. The Counseling Psychologist, 28, 10-31.
Slaney, R. B., & Johnson, D. G. (1992). The Almost Perfect Scale. Unpublished manuscript, Pennsylvania State University.
Spielberger, C. D., Gorsuch, R. L., Lushene, R., Vagg, P. R., & Jacobs, G. A. (1983). Manual for the State-Trait Anxiety Inventory (Form Y) ("Self-Evaluation Questionnaire"). Palo Alto, CA: Consulting Psychologists Press.
Webster's ninth new collegiate dictionary. (1988). Springfield, MA: Merriam-Webster.
Webster's II new college dictionary. (1995). New York: Houghton Mifflin.
Zuckerman, M., & Lubin, B. (1963). Manual for the Multiple Affect Adjective Checklist. San Diego, CA: Educational and Industrial Testing Service.
Zung, W. W. K. (1965). A self-rating depression scale. Archives of General Psychiatry, 12, 63-70.