
Variables Affecting the Clarity of Psychological Reports

Virginia Smith Harvey


University of Massachusetts–Boston

Effective psychological reports are consumer-focused: They address the concerns of the referring persons, present data appropriately, communicate clearly and concisely, and include useful and appropriate recommendations. Although the importance of clear communication has been stressed repeatedly, psychologists often write reports that are very difficult for nonpsychologists to read. In this article, the author explores four reasons behind this dichotomy: (a) model reports available to psychologists in training are written at a level that is very difficult to understand; (b) psychological terms are not commonly defined; (c) the amount of time it takes to write easily understood reports is substantial; and (d) psychologists are confused about how to address multiple audiences. Methods to address each issue are discussed. © 2005 Wiley Periodicals, Inc. J Clin Psychol 62: 5–18, 2006.

Keywords: assessment; professional practice; psychological reports; report writing

The purposes of psychological reports are to (a) increase others’ understanding of clients,
(b) communicate interventions in such a way that they are understood, appreciated, and
implemented, and (c) ultimately result in clients manifesting improved functioning. How-
ever, reports are often difficult to read, particularly for nonpsychologists. They are likely
to include jargon and poorly defined terms, to have poor or illogical explanations of
results, to make vague or inappropriate recommendations, to be poorly organized, to
emphasize numbers rather than explanations, and to be of an inappropriate length (Kam-
phaus, 1993; Ownby, 1997; Sattler, 2001; Tallent, 1993). They are also likely to be writ-
ten at a high level of reading difficulty, which is problematic in that they are read by
multiple audiences with varied levels of educational background (Harvey, 1997; Weddig,
1984; Whitaker, 1994).

The author would like to acknowledge and express appreciation for Tim Dawson’s and Jenny Clair Dawson’s
assistance in preparing this article.
Correspondence concerning this article should be addressed to: Virginia Smith Harvey, Graduate College of
Education, University of Massachusetts–Boston, 100 Morrissey Blvd., Boston, MA 02125–3393; e-mail:
virginia.harvey@umb.edu

JOURNAL OF CLINICAL PSYCHOLOGY, Vol. 62(1), 5–18 (2006) © 2006 Wiley Periodicals, Inc.
Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/jclp.20196

Authors commonly recommend that psychological reports be written in a readable and clear manner, using precise vocabularies with common meanings (Groth-Marnat, 2003; Kamphaus, 1993; Ownby, 1997; Sattler, 2001; Tallent, 1993). Effective reports are
consumer-focused: They address the concerns of the referring person, present data appro-
priately, communicate clearly and concisely, and include useful and appropriate recom-
mendations (Brenner, 2003; Whitaker, 1994). The importance of clear
communication has been repeatedly stressed for more than 30 years (Martin, 1972).
Nonetheless, the average psychologist often writes reports that are very difficult to read
(Harvey, 1997).
This dichotomy is striking and long-standing, and its persistence suggests that several variables contribute to it. To change the habits of those who write reports, it is necessary to identify the variables that sustain the discrepancy between the clarity that authors recommend and the difficult-to-read reports that psychologists actually produce. In this article, I explore several of these variables.

An Exploration of Contributing Variables


Method
Procedure. During exit interviews from a university-based training program, the pro-
gram director asked students who recently graduated from a specialist-level professional
psychology program, “What are the factors that contribute to writing psychological reports
that are difficult to understand?” The graduates were asked to respond by e-mail. Their
written responses were clustered by the author using qualitative analysis.

Participants. Recent graduates from a university-based training program were questioned rather than practicing psychologists because novice psychologists have only recently made the transition from “nonpsychologist” to “psychologist” and are therefore likely to have a vivid memory of the difficulties that nonpsychologists have when attempting to understand psychological reports and terminology. These recent graduates
drew on their experience from a 1-year internship; many indicated that they worked at
sites providing internships for students from more than one graduate program. Conse-
quently, they were able to compare their experiences with those of students from other
training programs.

Results

Eleven recent graduates responded to the open-ended question. Their responses clustered
into the following categories: training, language, time, and multiple audiences.

Training. The recent graduates noted that graduate textbooks, professors, and field
supervisors all contribute to the problem of “difficult-to-read” psychological reports.
They indicated that in some situations, they, or other interns they observed, had been taught (a) to provide data and numbers without explaining them clearly; (b) to define psychological terminology, but without methods for incorporating these definitions and explanations into reports clearly and succinctly; and (c) to overemphasize test results and scores and underemphasize writing about the client as a person. The
client-as-person perspective was what the graduates felt nonpsychologists were more
likely to understand.

Language. Students also indicated that the jargon used by psychologists contributes
to the lack of report clarity. They indicated that terms such as “perceptual reasoning,”
“working memory,” “processing speed,” and “full-scale IQ” are not familiar to nonpsy-
chologists. One student indicated, “because of our training it is easy to slip into thinking
that ‘everyone knows’ what [terms and scales] mean” while the reader is “probably encoun-
tering them for the first time.” In addition, students noted that the general population is
unfamiliar with statistics and has difficulty understanding “how 90–109 IQ and the 25th–
75th percentile are both Average.”

Time. The amount of time necessary to write a clear report was mentioned by several
recent graduates. “Explaining what tests are, what they mean, and how they were devel-
oped takes a lot of time when you want to focus on the student”; “More thorough reports
are longer and explain the results more clearly, but there might not be enough time in the
day”; and “Time crunches contribute to this problem. When one has so many reports to
write in a limited amount of time, they sometimes just state numbers and don’t explain
their data enough for [others] to understand it in their reports.”

Multiple Audiences. Several recent graduates revealed confusion regarding the intended
audience of psychological reports. One student stated, “There is the dilemma of who you
are writing for. I feel torn between writing for another psychologist or doctor . . . or the
parent.” Another student indicated that she believes that “Using simpler language reduces
credibility. A well-written report makes parents, administrators, and other professionals
feel more secure in the quality of my assessment . . . attempting to put information that is
technical in non-technical language can make it feel watered down or dishonest.” Yet another student seemed confused about the ultimate purpose of reports, saying, “Many psychologists intentionally write reports that are difficult to understand as a way to prove their intelligence.”

Discussion

This exploratory study suggests that training programs may not adequately address the issue
of writing understandable reports. Although this exploration was conducted with a very small sample and is therefore of limited generalizability, previous studies that found psychological reports difficult to read suggest that this is not an isolated problem (Harvey, 1997;
Rucker, 1967; Shively & Smith, 1969; Teglasi, 1983; Weddig, 1984).

An Exploration of Training

As related above, recent graduates indicated that they, or others they observed, had not
been taught to provide and explain data in a way understandable to nonpsychologists. The
following study investigates one component of training programs—the readability of
model reports contained in textbooks.
Several formulas to calculate the readability of written text have been developed and
are used by authors and editors to monitor the readability of journals, magazines, and
technical publications. The Flesch reading ease score is commonly used to assess the difficulty of adult reading material. It is obtained by the following formula: 206.835 − 1.015 × (average sentence length) − 84.6 × (average syllables per word). Scores range from 100 to 0 and are interpreted as Very Easy (90–100), Easy (80–90), Fairly Easy (70–80),
Standard (60–70), Fairly Difficult (50–60), Difficult (30–50), and Very Difficult (0–30).
Authors are advised to aim for scores between 60 and 70 (Microsoft, 2002).
Another formula resulting in readability scores rates written text on a U.S. grade-
school level. For example, a score of 8.0 indicates that a person with an eighth-grade
education can understand the document. The formula for the Flesch–Kincaid grade-level readability score is: 0.39 × (average sentence length) + 11.8 × (average syllables per word) − 15.59. Authors are encouraged to aim for a score between 7.0 and 8.0 (Microsoft, 2002).
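
For readers who wish to check the readability of their own reports, both formulas can be applied directly with a short script. The following Python sketch is illustrative only: it uses a rough vowel-group heuristic to count syllables, so its results will differ somewhat from those produced by commercial grammar checkers such as those used in the study below.

import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels as syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    asl = len(words) / len(sentences)                           # average sentence length
    asw = sum(count_syllables(w) for w in words) / len(words)   # average syllables per word
    reading_ease = 206.835 - 1.015 * asl - 84.6 * asw           # Flesch reading ease
    grade_level = 0.39 * asl + 11.8 * asw - 15.59               # Flesch-Kincaid grade level
    return reading_ease, grade_level

# Example: analyze a 200-word summary excerpt saved as plain text.
# ease, grade = readability(open("summary_excerpt.txt").read())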

Method

Excerpts from 60 psychological report models were obtained from 20 graduate-level professional psychology textbooks and report-writing handbooks published between 1990
and 2002. Of the 60 reports, 38 were described as psychoeducational reports in response
to referrals from parents and teachers. The remaining 22 were described as neuropsycho-
logical, clinical, or forensic reports in response to referrals from professionals (doctors,
lawyers, and other psychologists). The two groups were analyzed separately in acknowl-
edgment of the varied education level of the intended recipients.
Writing samples were analyzed from sections of the model reports that summarized
the findings. Usually the section was titled “Summary” but in some reports it was titled
“Conclusions,” “Discussion,” or “Summary and Recommendations.” Samples of at least
200 words were obtained from each report (more when necessary to complete the last
paragraph). If the summary section was not at least 200 words, additional sentences were
added from a similar section of the report (e.g., “Recommendations”). These sections
were used, rather than sections reporting test results, with the presumption that they are
the least technical and most read portions of psychological reports. Therefore, they should
actually be the most reader-friendly portions of the reports.
The 60 sample reports were analyzed for reading difficulty using the grammar checker
utilities of Microsoft Word® 2002 and Grammar Expert Plus version 1.5 (1998). These
programs determine the number of words, characters, paragraphs, sentences, passive sen-
tences, syllables, and syllables per word in a passage. They then use these values to
calculate measures of readability: percentage of passive sentences, Flesch reading ease,
and grade-level readability. The Microsoft Word program has a maximum score (ceiling)
of 12.0 on the grade-level readability calculation; therefore, the Grammar Expert Plus
program was used for that result.

Results

All 20 textbooks analyzed in this study recommended that psychologists write readable reports, and all presented model reports for students to emulate. A few presented
reports written at different levels of difficulty for different audiences. In those books,
sample reports varied in their readability and reports geared toward school personnel and
parents tended to be more readable than those geared toward clinicians. Nonetheless, the
results (shown in Table 1) indicate that regardless of the intended audience, model psy-
chological reports presented to graduate students have unreasonably high reading levels
and require advanced reading skills for comprehension. Psychoeducational and clinical–
neuropsychological–forensic reports had Flesch reading ease scores ranging from 0 to
48.4; means were 27.53 (SD = 11.68) and 20.94 (SD = 8.13), respectively. The psychoeducational reports targeted toward parents and teachers were statistically less difficult to read than those targeted toward professionals (p < .05), yet means for both groups fell within the Very Difficult range. There were no model reports whose readability fell within the recommended standard range of 60–70.

Table 1
Readability of Model Reports Contained in Textbooks

                                       Psychoeducational reports    Clinical, neuropsychological,
                                       (N = 38)                     and forensic reports (N = 22)
Measure of readability                 M        SD       SE         M        SD       SE         t(58)     p (two-tailed)
Percentage of passive sentences        22.82    16.84    16.8       22.95    18.92    18.9       −.029     .977
Flesch reading ease a                  27.53    11.68    1.89       20.94    8.13     1.73       2.34      .023
Grade-level readability b              18.49    3.42     .56        20.26    1.66     .35        −2.27     .027

a Authors are advised to aim for scores between 60 and 70 (Microsoft, 2002). b Authors are encouraged to aim for a score between 7.0 and 8.0 (Microsoft, 2002).

Grade-level readability scores ranged from 12.8 to 31.6; means were 18.49 (SD = 3.42) for the psychoeducational reports and 20.26 (SD = 1.66) for the clinical–neuropsychological–forensic reports. Predictably, the psychoeducational reports targeted toward parents and teachers were statistically less difficult to read than those targeted toward professionals (p < .05), but again the means for both groups were above grade 8,
the recommended level of difficulty. One report attained a score of 12.8, within reach of
the general population’s 12th-grade education level. The remaining reports were all above
grade 14, the level at which writing is in danger of being misunderstood or ignored.
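
The group comparisons reported above can be reproduced from the summary statistics in Table 1 alone. A minimal sketch (assuming the scipy package is available), using the Flesch reading ease means, standard deviations, and sample sizes reported there:

from scipy import stats

# Two-sample t test from summary statistics (Table 1, Flesch reading ease):
# psychoeducational reports (n = 38) vs. clinical/neuropsychological/forensic reports (n = 22).
t, p = stats.ttest_ind_from_stats(mean1=27.53, std1=11.68, nobs1=38,
                                  mean2=20.94, std2=8.13, nobs2=22)
print(round(t, 2), round(p, 3))  # approximately t(58) = 2.34, p = .023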

Discussion

Given the difficult reading level of typical psychological reports in textbooks, it is highly
probable that psychologists in training do not encounter psychological reports written at
a level understandable to nonpsychologists. These results are reinforced by previous studies showing that psychological reports written by practicing psychologists tend to be difficult to read (Harvey, 1989, 1997; Weddig, 1984). It is therefore also likely
that novice psychologists in the course of their fieldwork rarely encounter psychological
reports that are easily understood by nonpsychologists.

An Exploration of Language
Nuttall, Devaney, Malatesta, and Hampel (1999) discuss the importance of considering
the audience in selecting the language used in a psychological report. They argue that
because normally only one version of a report is prepared, “plain English” should be used
and technical terms defined for nonpsychologists. Multiple sources recommend that the
use of jargon and psychological terms be minimized in psychological reports because it
confuses readers—psychologists as well as parents, teachers, and other professionals
(Brenner, 2003; Cuadra & Albaugh, 1956; Ownby, 1997; Rucker, 1967; Sattler, 2001;
Shively & Smith, 1969; Tallent, 1993; Weddig, 1984; Whitaker, 1994).
Yet clearly defining psychological terms is very difficult for several reasons. Many
terms have been statistically derived; hence, they do not have clear and easily understood
verbal definitions. Many test manuals do not include easily understood definitions of
terms that psychologists could adopt and quote in their reports. Furthermore, when psy-
chologists use terms such as “depressed” or “learning disabled” they assume they are
using a common definition and are referring to an agreed-upon population, but this is not
the case. Because every state (and in some states, every community) has latitude in the
interpretation of eligibility under the Individuals with Disabilities Education Act (IDEA;
1991), there is little consistency from one state, community, or school to another in
designating a student as disabled.
The American Psychiatric Association (APA) in The Diagnostic and Statistical Man-
ual of Mental Disorders attempts to provide clear definitions of psychological disorders
but there remains considerable confusion over many terms (APA, 2000). For example,
controversy over the definition and measurement of intelligence permeates the history of
psychology and continues to be problematic (Sattler, 2001). Measures of intelligence are
widely used in psychological assessments. The numerical results they generate and the
interpretations attached to these numbers are often critical elements in the determination
of client eligibility for services in schools and vocational programs. Yet the interpretation
of even this common term (intelligence) is not clear. This study explores one of the
interpretations attached to the measurement of intelligence, that of “average intelligence.”

Method

Participants. Subjects were chosen by randomly selecting 200 members from three
American Psychological Association divisions closely associated with practicing psy-
chologists: Clinical, Counseling, and School Psychology. Of the 600 psychologists sur-
veyed, 208 (35%) returned the survey. Responses to the demographic information indicated
that 96% (N = 185) of the respondents had worked as psychologists for at least 5 years.
Work settings included private practice (37%), schools (22%), higher education (20%),
and clinics or hospitals (20%).

Materials. A one-page survey and stamped return envelope were mailed to the sub-
jects. The initial questions were open-ended, free response items in which the respondent
was first asked for a definition of average intelligence and then to specify the answer
numerically. The remaining questions were multiple choice and requested demographic
information.

Results

The results of the survey revealed that practicing psychologists do not have a standard
definition of average intelligence. The majority (N = 119, 67%) of psychologists concurred with the numerical definition of average intelligence (within a range of 90 to 110 or 89 to 110) given by the authors of the most commonly used intelligence measures, the Wechsler and the Stanford-Binet Intelligence Scales (Terman & Merrill, 1960, 1973; Thorndike, Hagan, & Sattler, 1986; Wechsler, 1949, 1955, 1967, 1974, 1981, 1989, 1991, 2003). However, a third (N = 59, 33%) did not. One fifth (N = 36, 20%) of the respondents defined average intelligence as within a standard deviation of the mean (numerically, an IQ within the range of 85 to 115). Three respondents qualitatively defined average
intelligence as “not retarded,” numerically an IQ within the range of 70 to 130. Others
qualitatively defined average intelligence as “a child who is able to function without
major modifications in regular classrooms,” numerically an IQ within the range of 80
to 120. Thirteen respondents (7%) indicated that to be considered average a client would
need to demonstrate IQ scores above 100.

Discussion
This study demonstrates a fundamental and profound problem with psychological terms
in general, and the use of psychological terms in reports in particular. Even among psy-
chologists, and even with a term that has a repeatedly published quantitative definition,
there is a lack of consensus regarding meaning. This lack of consensus has serious con-
sequences in the practice of psychology. For example, when test results are interpreted to
clients, parents, the courts, or school personnel, the interpretation can differ radically
depending on the particular interpreter. The designation of average intellectual function-
ing has implications for educational and vocational eligibility and success. The psychol-
ogist who utilizes a definition of average intelligence ranging from 70 to 129 will interpret
an IQ score of 83 in a substantially different manner from the psychologist who utilizes
a definition of average intelligence ranging from 95 to 105. Furthermore, the nonpsy-
chologists to whom the test results are being reported have their own definitions. For
example, parents might interpret the term average to indicate that their child is “all right”
while the psychologist intends average to indicate the population mean. In sum, the use
of psychological terms in reports, unless they are accompanied by very clear, specific,
and commonly held definitions, cannot help but lead to miscommunication.
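
To make the consequences of this inconsistency concrete, the following sketch labels the same IQ score of 83 under several of the competing definitions of average intelligence reported by the survey respondents. The range boundaries are taken from the results above; the code itself is purely illustrative.

# Competing numerical definitions of "average intelligence" reported by respondents.
definitions = {
    "test-manual range (90-110)": (90, 110),
    "within one SD of the mean (85-115)": (85, 115),
    "'not retarded' (70-130)": (70, 130),
    "functions in a regular classroom (80-120)": (80, 120),
}

iq = 83
for label, (low, high) in definitions.items():
    verdict = "average" if low <= iq <= high else "below average"
    print(f"IQ {iq} under {label}: {verdict}")
# The same score is labeled "average" under two definitions and "below average" under the other two.
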
The methodological limitations of this study include the fact that follow-up surveys
were not mailed. The response rate of 35% is in the midrange of typical response rates for surveys conducted without follow-up mailings, which generally fall between 10 and 60%.
Follow-up mailings would likely have increased the response rate significantly and thereby
increased the generalizability of the results.

An Exploration of Time
The writing of a psychological report takes considerable time. Whitaker (1994) found
that novice psychologists took 6 to 8 hours to write a report, while veteran psychologists
averaged 3 hours per report. Furthermore, writing easily understood reports actually
takes more time than writing reports that are difficult to comprehend. Brenner (2003) and
Sattler (2001) show that multiple steps are necessary to develop reports that are consumer-
friendly. Harvey (1997) found that even when psychology graduate students were aware
of the importance of writing reports understandable to their consumers (parents, teach-
ers, and clients), they had difficulty writing reports at a readable level unless they took
time to rewrite them at a more readable level. Unfortunately, psychologists’ schedules rarely include the additional time needed to write readable reports. This study examines the strategies used by psychologists to man-
age the time demands involved in psychological report writing.

Method
Participants. A survey was mailed to 500 randomly selected members of the National
Association of School Psychologists. Surveys were returned by 272 individuals, a response
rate of 54%. Demographic information completed by the respondents indicated that the
average respondent was 42.99 years old (SD = 12.1), had been working as a psychologist for 11.7 years (SD = 9.3), and completed 66.94 (SD = 44.19) psychological evaluations
annually. The majority worked in schools (74%) and were female (80%). Doctorate degrees
were held by 18.8% of the respondents, specialist degrees by 53%, and master’s degrees
by 26.9%.

Materials. A four-page survey and stamped return envelope were mailed to the sub-
jects. Initial questions were multiple choice or short-answer; the final questions were
open-ended, free-response items. The survey was designed to assess sources of stress and
methods of stress and time management. It included a list of activities commonly com-
pleted by psychologists working in schools and asked the psychologists to indicate the
frequency of these activities and whether they would like to increase or decrease time
devoted to each activity. The survey also included 36 methods of stress reduction and
time management, six of which pertained to report writing. Respondents were asked to
indicate which of these methods they use, and which they find to be most effective.
Finally, respondents were provided with open-ended questions asking for additional sources
of stress and methods of stress reduction.

Results
Respondents indicated that report writing was an extremely frequent activity, with 33%
writing reports weekly and 89% writing psychological reports at least monthly. Forty-
nine percent of the respondents indicated that they would like to decrease the amount of
time they spend writing reports and 1% indicated that they would like to increase the
amount of time they spend writing reports (50% did not indicate a preference in terms of
time). This is in contrast to the respondents (a) who indicated that they would like to
increase time spent in monitoring the academic progress of students (54%); (b) who
would like to increase time spent in implementing interventions designed to improve
social and emotional functioning (48%); and (c) who would like to increase time spent
conducting nontraditional assessments (56%) such as Curriculum-Based Measurement
(Shinn, 2002).
The open-response items elicited comments expressing extreme frustration regard-
ing the time devoted to report writing. Many respondents indicated that they spend so
much time on paperwork that they are unable to provide counseling and intervention
services to students, the activities that they felt should be the basis of their practice of
psychology. Some respondents wrote their reports at home in the evening and on week-
ends, thus adding to their overall stress levels.
The degree to which psychologists use methods of stress reduction related to report writing is revealed in Table 2. Few respondents used methods that would significantly
reduce the time required to write psychological reports. Respondents (a) did not have
access to secretarial support (76%); (b) did not use computer report-writing software
(70%); (c) did not use report templates on their computers (52%); (d) did not use a laptop
computer at work (44%); and (e) did not use computer test scoring software (30%). Those
respondents who used each technique indicated whether they simply used them or found
them to be very effective. Of those using each method, the following percentages indi-
cated that they were very effective: (a) using report templates (40%), (b) freeing them-
selves for a day to write reports (31%), (c) using secretarial support (31%), (d) using
computer scoring software (30%), (e) using computer report writing software (25%), and
(f ) using a laptop computer at work (25%).

Discussion
Psychological report writing is very time consuming and interferes with time available to
provide meaningful direct and indirect services to clients. Nonetheless, few methods to
reduce the time necessary to write reports are implemented. In part, this is because appropriate resources such as secretarial support and laptop computers are not available. This results in the relatively expensive time of psychologists being devoted to clerical tasks rather than tasks requiring their training and expertise. In sum, many psychologists do not have the time necessary to write reports at all, much less the additional time necessary to rework reports so that they are intelligible to nonpsychologists.

Table 2
Use of Report Writing Methods

                                             Not used/        Used      Used and          When used,
Method                                       not available              very effective    very effective
Survey score                                 0                1         1 or 2            2
Develop a report template on my laptop       51.9%            48.1%     19.0%             40.0%
Free myself for a day to write reports       53.2%            46.9%     14.6%             31.1%
Use secretarial support                      75.5%            24.5%      7.5%             30.6%
Use a laptop computer at work                43.8%            56.3%     13.8%             24.5%
Use computer report writing software         70.3%            29.8%      7.6%             25.5%
Use computerized test scoring software       29.1%            70.9%     21.5%             30.3%

Considerations Regarding Multiple Audiences


As indicated earlier, new psychologists are confused about the intended audience of psy-
chological reports. Recent graduates expressed uncertainty regarding whether reports
should be written for parents or other professionals. In truth, psychological reports should
be written for multiple audiences with widely ranging education backgrounds: parents,
teachers, school administrators, other psychologists, and often the client (Teglasi, 1983;
Tallent, 1993). Various studies have found that recipients of psychological reports prefer
clearly explained technical terms, understandable solutions, clear examples, and expla-
nations (Cuadra & Albaugh, 1956; Pryzwansky & Hanania, 1986; Rucker, 1967; Shively
& Smith, 1969; Tallent, 1993; Wiener, 1987).
Since the passage of legislation such as the Family Educational Rights and Privacy Act of 1974 (FERPA; 1993) and the Individuals with Disabilities Education Act of 1991 (IDEA; 1992), it has become common practice for copies of psychological and psychoedu-
cational reports to be distributed to parents and clients in addition to physicians, educa-
tors, and other psychologists. In the year 2000, 80% of the population 25 years old and
above had attained a high school diploma, but only 24% had attained a bachelor’s degree
or above (U.S. Bureau of the Census, 2000). Therefore, the average parent is unlikely to
be able to read comfortably above the 12th-grade level. It is critical that psychological
reports be understandable to parents. However, psychologists working in schools, clinics,
and private practice often write reports at levels higher than the education level of their
audience, including parents (Harvey, 1989, 1997; Weddig, 1984).
Unless reports are clearly written and easily understood by parents, teachers, and
other nonpsychologists, they are likely to be misunderstood and their treatment recom-
mendations will not be implemented (Harvey, 2002; Ownby, 1997). Treatment recom-
mendations are more likely to be implemented when recipients are persuaded that (a) the
situation is serious, (b) the treatment will be efficacious, (c) the benefits of treatment
implementation are greater than the inconveniences, and (d) they have some control over
treatment decisions. Additional variables that increase the likelihood of treatment imple-
mentation are the promptness of the response to the referral, good communication, and
minimizing recommendations for treatments that are complex, lengthy, or require signif-
icant lifestyle changes (Meichenbaum & Turk, 1987). All of these variables are more
apparent in clear and understandable psychological reports.
Recommendations and interventions should be described in such a way that the reader
is convinced not only that they are appropriate, but also that they can be implemented
without undue burden. The psychologist needs to resist the temptation to accept
canned recommendations from a computer program, textbook, or test manual without
first exploring their appropriateness. A psychologist should understand his or her com-
munity and school well enough to know that the recommendations are appropriate, and
that parents, teachers, and other school personnel have the skills, knowledge, and expe-
rience to implement them.
Using a collaborative approach in writing reports facilitates this process. Bersoff
(1995) believes that to obtain valid assessments, parents and students should be respected and included in all aspects of the assessment process, including the opportunity to add to, clarify, or disagree with the content of written reports prior to final editing and dissemination. St. George and Wulff (1998) found that when clients were involved in the writing of psychological reports, the report served as a therapeutic tool, generated more realistic goals, increased the client’s empowerment, and increased the client’s commitment to goal completion. This collaborative approach requires
that psychologists include the child, parent, and teacher in the report-writing process so
that they have an opportunity to accept, refuse, and modify recommendations.
If psychologists keep these variables in mind as they write reports, they will facili-
tate the implementation of recommended interventions with integrity. In communicating assessment results to children, parents, and teachers, Kamphaus and Frick (2001) suggest that psychologists should:
• Be tactfully honest.
• Use percentile ranks when reporting quantitative data.
• Encourage parents and teachers to ask questions and provide corrective feedback.
• Schedule sufficient time to process the results and reach closure.
• Adjust their communication style to the recipient’s level of education and
sophistication.
• Clearly define all terms.
• Avoid making global or negative future predictions.
• Use good counseling skills while being careful not to exceed their level of training
and expertise.

General Discussion
The importance of clear communication in psychological reports has been repeatedly
stressed for more than 30 years. Nonetheless, psychologists often write reports that are
very difficult to read. In this article, I identified and explored four variables that contrib-
ute to difficult-to-read psychological reports: training, language, time, and multiple
audiences.
To learn to write clear reports, psychologists need clearly written model reports to follow during their training. The available model reports published in
assessment textbooks and report-writing handbooks have a readability level that is very
difficult for nonpsychologists to understand. Model reports need to be rewritten using shorter sentences, fewer multisyllabic and “difficult” words, more subheadings, less jargon, fewer acronyms, and fewer passive verbs.
Practicing psychologists should obtain feedback about the readability of their own
writing through supervisor or peer review, consumer feedback, and readability calcula-
tions (Harvey, 1997). Readability is also improved when practitioners imagine them-
selves as report recipients and revise accordingly, or when they write reports as though
they are writing for their grandmothers, “intelligent women who know little about psy-
chology, but who can be sensitive and empathic when they understand someone’s per-
sonality” (Ritzler, 1998, pp. 422–423).
Another variable contributing to the low readability of psychological reports is the
use of psychological terms. Many authors of test interpretation manuals do not include
clear and accurate definitions of the terms they use within the text. If all test interpreta-
tion manuals included term definitions easily understood by the general population, psy-
chologists could then adopt the definitions and incorporate them into report templates on
their computers. This would promote the use of common definitions and clearer commu-
nication among professionals, educators, and parents.
Further, report recipients’ understanding of psychological terms would be facilitated
by increasing the “ecological validity” of these terms. Concrete examples of everyday behaviors, preferably from the client’s own life, can be tied to the terms (Groth-Marnat,
2003). For example, when discussing “poor working memory” in a report the psycholo-
gist could mention the “inability to remember a telephone number long enough to com-
plete dialing it.”
Report clarity would also be improved if psychologists provided context whenever quantitative data are included in a psychological report. As indicated in the
Standards for Educational and Psychological Testing (American Educational Research
Association, American Psychological Association, and National Council on Measure-
ment in Education, 1999), “Test scores, per se, are not readily interpreted without other
information, such as norms or standards, indications of measurement error, and descrip-
tions of test content. Just as a temperature of 50° in January is warm for Minnesota and
cool for Florida, a test score of 50 is not meaningful without some context” (p. 62).
Scores presented as standard scores should be accompanied by percentile scores and an
explanation of percentile scores (“John’s standard score of 100, which falls at the 50th
percentile, indicates that out of a group of 100 children he would probably score higher
than about 49 and lower than about 49.”).
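
The percentile rank that accompanies a standard score can be computed directly from the normal curve. A minimal sketch (assuming scipy and the deviation-score metric, mean 100 and standard deviation 15, used by most intelligence scales):

from scipy.stats import norm

def percentile_rank(standard_score, mean=100.0, sd=15.0):
    # Convert a standard score to a percentile rank on the normal curve.
    return round(norm.cdf((standard_score - mean) / sd) * 100)

print(percentile_rank(100))  # 50: higher than about 49 of 100 children, lower than about 49
print(percentile_rank(83))   # about 13: roughly the 13th percentile
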
When they have the opportunity, psychologists can provide this context during an
oral presentation of a report. However, written reports are put into the client’s file. If it is
a report on a child, the parents usually are given a copy to take home. This means that
psychologists are not available to provide oral interpretations to all of a report’s readers;
thus, the written report must be able to stand on its own.
Psychologists can remedy the conflict between time constraints and writing quality reports in two ways: by increasing the time allocated or by using time more efficiently and thereby decreasing the time necessary. To increase the time allocated, psychologists will
need to reserve time in their schedules for report writing and rewriting. Writing reports
well takes considerable time, and even more time is necessary when report writing is
embedded in a consultative approach that starts with problem definition and ends with
intervention monitoring. For this time to be allocated, psychologists will need to collect
data about the time required and inform the necessary administrators.
Methods to decrease the time necessary to write a clear report include (a) developing
a report template containing definitions of technical terms and test interpretations on a
laptop computer, (b) freeing oneself to be fully focused on report writing for an entire
morning or afternoon to increase concentration, (c) enlisting secretarial support, (d) using
test scoring software, and (e) using report writing software. Several of these methods
require capital expenditure, which can be justified by a time/cost analysis.
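
The first of these methods, a report template with standing plain-language definitions, can be assembled with ordinary scripting or word-processing tools. The sketch below is purely illustrative; the glossary wording and the sample sentence are hypothetical, not drawn from any test manual. Whenever a technical term appears in a draft, its definition is appended to a definitions section so it never has to be rewritten:

# Hypothetical glossary of plain-language definitions stored with the report template.
GLOSSARY = {
    "working memory": "the ability to hold information in mind long enough to use it, "
                      "such as remembering a telephone number long enough to dial it",
    "processing speed": "how quickly simple visual information can be scanned and acted on",
    "percentile": "the percentage of same-age peers expected to score at or below this level",
}

def append_definitions(report_text):
    # Add a definitions section for every glossary term that appears in the draft.
    used = [term for term in GLOSSARY if term in report_text.lower()]
    if not used:
        return report_text
    lines = ["", "Definitions of terms used in this report:"]
    lines += [f"- {term.capitalize()}: {GLOSSARY[term]}." for term in used]
    return report_text + "\n" + "\n".join(lines)

draft = "Maria's working memory score fell at the 25th percentile."
print(append_definitions(draft))
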
However, care should be taken in utilizing test scoring and report writing software.
Software programs that score tests, interpret the results, generate reports, and even sug-
gest special education classifications appeal to psychologists because they save time,
provide scoring accuracy, assist in interpretation, and generate hypotheses and recom-
mendations developed by the experts who authored the programs (Anastasi & Urbina,
1997; Kamphaus, 1993). In fact, computer programs can halve the time it takes to pro-
duce a report (Ferriter, 1995). On the other hand, computer-generated reports can be
problematic. Because computer software is easy to use, psychologists may be tempted to
use instruments and methods outside their areas of expertise, which violates professional
ethics (Carlson & Harvey, 2004). Unedited computer-generated reports can be exces-
sively lengthy and have stilted narratives that appear “canned.” They frequently use psychological jargon unintelligible to the intended readership. This detracts from a report’s quality and results in readers impatiently discounting its conclusions.
To ensure the ethical use of computer-generated reports, psychologists should edit
any computer-generated report by deleting overstatements, inappropriate hypotheses, and
inappropriate recommendations; remove jargon and define terms clearly; and integrate
results with other information. They should also retain ethical and professional respon-
sibility for the accuracy of the results by using computer-scoring systems only for instru-
ments in which they are trained, validating computerized scoring systems for
appropriateness before using, and critically examining and evaluating the models upon
which computer programs are based.
As mentioned previously, psychologists can better meet the informational needs of a
highly varied audience by increasingly involving the client or the child, parents, and
teachers as collaborators in the assessment and report writing process. Involving parents
and clients requires an investment of time, training in such involvement, and a willing-
ness to abandon the expert role for a collaborative and respectful role on the part of
psychologists.
To fulfill their ultimate purpose—improving client functioning by influencing both
current and future programming—psychological reports must be responsive, persuasive,
communicative, and embedded in the consultation process. Responsive reports clarify
and address the referral question throughout the report, contextualize all assessment com-
ponents, and involve all stakeholders in the report writing process. Verbal and written
communication is facilitated by clear definitions of terms and the use of percentile ranks
when reporting quantitative data. Psychologists should be tactfully honest and encourage
clients, parents, and teachers to ask questions and provide corrective feedback. They
should also adjust their communication style to the recipients’ level of education and use
good interpersonal skills. They then should schedule sufficient time to process the results.
These procedures are necessary to ensure that psychological reports serve their ultimate
purpose—to improve their clients’ level of functioning.

References
American Educational Research Association, American Psychological Association, & National Coun-
cil on Measurement in Education. (1999). Standards for educational and psychological testing.
Washington, DC: American Educational Research Association.


American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders
(4th ed.-text rev.). Washington, DC: Author.
Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice
Hall.
Bersoff, D.N. (1995). Ethical conflicts in psychology. Washington, DC: American Psychological
Association.
Brenner, E. (2003). Consumer-focused psychological assessment. Professional Psychology: Research
and Practice, 34, 240–247.
Carlson, J.F., & Harvey, V.S. (2004). Using computer-related technology for assessment activities:
Ethical and professional practice issues for school psychologists. Computers in Human Behav-
ior, 20, 645–659.
Cuadra, C.A., & Albaugh, W.P. (1956). Sources of ambiguity in psychological reports. Journal of
Clinical Psychology, 12, 267–272.
Family Education Rights and Privacy Act of 1974. Pub. L. No. 93–380, 20 U.S.C.A. § 1232g, 34
C.F.R. § Part 99 (1993).
Ferriter, M. (1995). Automated report writing. Computers in Human Services, 12 (3/4), 221–228.
Grammar Expert Plus. (1998). Version 1.5 [Computer software]. Retrieved July 2004 from
www.wintertree-software.com
Groth-Marnat, G. (2003). Handbook of psychological assessment (4th ed.). New York: Wiley.
Harvey, V.S. (1989, March). Eschew obfuscation: Support clear writing. Communiqué, p. 12.
Harvey, V.S. (1997). Improving readability of psychological reports. Professional Psychology:
Research and Practice, 28(3), 271–274.
Harvey, V.S. (2002). Reporting and using assessment results. In J. Carlson (Ed.), Social and Per-
sonal Assessment of School Aged Children (pp. 229–245). Reading, MA: Allyn and Bacon.
Individuals With Disabilities Education Act of 1991. 20 U.S.C. Chapt. 33; Department of Educa-
tion Regulations for IDEA at 34 CFR 300 and 301 (September 29, 1992).
Kamphaus, R.W. (1993). Clinical assessment of children’s intelligence. Boston: Allyn & Bacon.
Kamphaus, R.W., & Frick, P.J. (2001). Clinical assessment of child and adolescent personality and
behavior (2nd ed.). Needham, MA: Pearson Allyn & Bacon.
Martin, W.T. (1972). Writing psychological reports. Springfield, IL: Charles C. Thomas.
Meichenbaum, D., & Turk, D.C. (1987). Facilitating treatment adherence: A practitioner’s guide-
book. New York: Plenum.
Microsoft. (2002). Microsoft word 2002 help text [Computer software]. Seattle, WA: Author.
Nuttall, E.V., Devaney, J.L., Malatesta, N.A., & Hampel, A. (1999). Writing assessment results. In
E.V. Nuttall, I. Romero, & J. Kalesnik (Eds.). Assessing and screening preschoolers: Psycho-
logical and educational dimensions (pp. 396–406). Boston: Allyn & Bacon.
Ownby, R.L. (1997). Psychological reports: A guide to report writing in professional psychology
(3rd ed.). New York: Wiley.
Pryzwansky, W.B., & Hanania, J.S. (1986). Applying problem solving approaches to school psy-
chological reports. Journal of School Psychology, 24, 133–141.
Ritzler, B.A. (1998). Teaching and learning issues in an advanced course in personality assessment.
In L. Handler & M.J. Hilsenroth (Eds.), Teaching and learning personality assessment (pp. 431–
452). Mahwah, NJ: Lawrence Erlbaum Associates.
Rucker, C.M. (1967). Technical language in the school psychologist’s report. Psychology in the
Schools, 4, 146–150.
Sattler, J. (2001). Assessment of children: Cognitive applications (4th ed.). San Diego: Jerome M. Sattler.
Shinn, M.R. (2002). Best practices in using curriculum-based measurement in a problem-solving
model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 671–
697). Bethesda, MD: National Association of School Psychologists.


Shively, J.J., & Smith, A.E. (1969). Understanding the psychological report. Psychology in the
Schools, 6, 272–273.
St. George, S., & Wulff, D. (1998). Integrating the client’s voice within case reports. Journal of
Systemic Therapies, 17 (4), 3–13.
Tallent, N. (1993). Psychological report writing (4th ed.). Englewood Cliffs, NJ: Prentice Hall.
Teglasi, H. (1983). Report of a psychological assessment in a school setting. Psychology in the
Schools, 20, 466–479.
Terman, L.M., & Merrill, M.A. (1960). Stanford-Binet Intelligence Scale. Boston: Houghton-Mifflin.
Terman, L.M., & Merrill, M.A. (1973). Technical manual for the Stanford-Binet Intelligence Scale:
1972 norms edition. Boston: Houghton-Mifflin.
Thorndike, R.L., Hagan, E.P., & Sattler, J.M. (1986). Technical manual, Stanford-Binet Intelligence
Scale (4th ed.). Chicago: Riverside Publishing.
U.S. Bureau of the Census. (2002). Brief report: Educational attainment. Retrieved August 9, 2004,
from http://www.census.gov/prod/2003pubs/c2kbr-24.pdf
Wechsler, D. (1949). Manual for the Wechsler Intelligence Scale for children. New York: The
Psychological Corp.
Wechsler, D. (1955). Manual for the Wechsler Adult Intelligence Scale (WAIS). New York: The
Psychological Corp.
Wechsler, D. (1967). Manual for Wechsler Preschool and Primary Scale of Intelligence. New York:
The Psychological Corp.
Wechsler, D. (1974). Manual for the Wechsler Intelligence Scale for Children-Revised. New York:
The Psychological Corp.
Wechsler, D. (1981). Manual for the Wechsler Adult Intelligence Scale-Revised (WAIS-R). San
Antonio, TX: The Psychological Corp.
Wechsler, D. (1989). WPPSI-R Manual: Wechsler Preschool and Primary Scale of Intelligence-
Revised. San Antonio, TX: The Psychological Corp.
Wechsler, D. (1991). Wechsler Intelligence Scale for Children: Manual (3rd ed.). San Antonio, TX:
Psychological Corp.
Wechsler, D. (2003). Wechsler Intelligence Scale for Children (WISC-IV): Technical and Interpre-
tive Manual (4th ed.). San Antonio, TX: The Psychological Corp.
Weddig, R.R. (1984). Parental interpretation of psychoeducational reports. Psychology in the Schools,
21, 477–481.
Whitaker, D. (1994). How school psychology trainees learn to communicate through the school
psychological report. Unpublished doctoral dissertation, University of Washington, Seattle.
Wiener, J. (1987). Factors affecting educator’s comprehension of psychological reports. Psychol-
ogy in the Schools, 24, 116–126.
