
INT. J. LANG. COMM. DIS., MARCH 2007, VOL. 42, NO. S1, 123–135

Clinical reasoning skills of speech and language
therapy students

Kirsty Hoben†, Rosemary Varley† and Richard Cox‡

†Department of Human Communication Sciences, University of Sheffield,
Sheffield, UK
‡Department of Informatics, School of Science and Technology, University of
Sussex, Falmer, Brighton, UK

Abstract
Background: Difficulties experienced by novices in clinical reasoning have been
well documented in many professions, especially medicine (Boshuizen and
Schmidt 1992, 2000; Elstein, Shulman and Sprafka 1978; Patel and Groen 1986;
Rikers, Loyens and Schmidt 2004). These studies have shown that novice
clinicians have difficulties with both knowledge and strategy in clinical reasoning
tasks. Speech and language therapy students must also learn to reason clinically,
yet to date there is little evidence of how they learn to do so.
Aims: In this paper, we report the clinical reasoning difficulties of a group of
speech and language therapy students. We compare the reasoning of a subgroup of
these students with that of experienced speech and language therapists and propose
some methods and materials to aid the development of clinical reasoning in
speech and language therapy students.
Methods & Procedures: Student diagnostic reasoning difficulties were analysed
during the assessment of unseen cases on an electronic patient database, the
Patient Assessment and Training System (PATSy http://www.patsy.ac.uk) (Lum
et al. 2006). Pairs of students were videoed as they completed a one-hour
assessment of one of three ‘virtual patients’. One pair of experienced speech
and language therapists, who were not part of the project team, also completed
an assessment of one of these cases under the same conditions. Screen capture
was used to record all on-screen activity within PATSy web pages (i.e. mouse
pointer position, hyperlink and button presses, page scrolling, browser
navigation interactions and data entered). Verbal comments made by
participants were analysed via a seven-level coding scheme that aimed to
describe the events that occur in the process of diagnostic reasoning.
Outcomes & Results: Students displayed a range of competence in making an
accurate diagnosis. Diagnostically accurate students showed use of specific
professional vocabulary, and a greater use of firm diagnostic statements. For the

Address correspondence to: Kirsty Hoben, Department of Human Communication Sciences,
University of Sheffield, 31 Claremont Crescent, Sheffield S10 2TA, UK;
e-mail: k.hoben@sheffield.ac.uk

International Journal of Language & Communication Disorders
ISSN 1368-2822 print/ISSN 1460-6984 online © 2007 Royal College of Speech & Language Therapists
http://www.informahealthcare.com
DOI: 10.1080/13682820601171530
diagnostically inaccurate students, typical difficulties were a failure to interpret
test results and video observations, difficulty in carrying out a sequence of tests
consistent with a diagnostic reasoning path, and problems in recalling and using
theoretical knowledge.
Conclusions and Implications: We discuss how identification of student diagnostic
reasoning difficulties can inform the design of learning materials intended to
address these problems.

Keywords: Clinical reasoning, students, speech and language therapy.

What this paper adds


What is already known on this subject
Studies in medicine and related fields have shown that students have difficulties
with both knowledge and strategy in clinical diagnostic reasoning tasks. To date,
there has been little research in this area in speech and language therapy.
What this study adds
Speech and language therapy students show clinical reasoning difficulties
similar to those observed in medicine and other related domains. They could
benefit from explicit teaching of strategies to improve their clinical reasoning
and consolidate their domain knowledge.

Introduction
The literature on novice and expert reasoning in medicine and other domains has
identified areas of difficulty for novices in problem solving and reasoning, all of
which are connected to a lack of domain knowledge, both in terms of content and
structure. Novices show a difficulty in conceptualising problems at a deep, abstract
level, characterized by an inability to inhibit attention to surface features, perceive
abstract features or judge the relevance of findings and observations. For example,
Sloutsky and Yarlas (2000) found that novices, when judging whether visually
presented equations had been seen before, based their decisions on surface features
such as commonality of numbers or number of addends in the equation, rather than
deeper mathematical principles such as associativity. In a study of novices and
experts diagnosing cases of breast pathology using microscopy, Crowley et al. (2001)
found that novices had difficulty in recognising and interpreting key clinical findings,
often missing the location of the lesion altogether or erroneously construing normal
findings as pathological. They also found that novices tended to describe features
rather than categorise them (e.g. describing what was seen as ‘big blue blobs’,
compared to a more expert classification of ‘central necrosis’). The process of
gathering information is more effortful for novices and because they use a less
accessible and structured knowledge base, recognition and interpretation of salient
cues becomes difficult.
Novices have also been found to have difficulty discriminating between relevant
and irrelevant information. In a nursing study, Shanteau et al. (1991) found that
student nurses rated 67% of items in a nursing scenario as essential in making a
diagnosis, compared to expert nurses’ judgement of 44% of the items as essential.
Thus novices are unable to understand and select the pertinent information about a
problem and this in turn makes the data to be considered, and relationships between
them, larger and more complex.
Boshuizen and Schmidt (2000) argue that changes in knowledge depth and
organisation facilitate changes in reasoning skill. The novice moves from individual
pieces of knowledge to integration of separate facts into higher level, compiled
knowledge in the form of concept clusters, which Boshuizen and Schmidt term
‘knowledge encapsulation’. They view the final stage of medical reasoning
development to be the formation of clinical concepts, where biomedical knowledge
is integrated into clinical knowledge and is no longer explicitly used in reasoning.
Their experimental work reveals however, that physicians are still able to access this
detailed biomedical knowledge when necessary (Boshuizen and Schmidt 1992, Van
de Wiel et al. 2000). Boshuizen and Schmidt also claim that at the final stage of
reasoning development, illness scripts are formed, consisting of enabling conditions,
the ‘fault’ (i.e. the pathophysiological process involved), and the consequences or
signs and symptoms of the disease process. The studies of novices cited above
illustrate the early stages of Boshuizen and Schmidt’s model, with knowledge
networks being activated but a lack of ability to extract higher level concepts from
the data presented.
Patel and Kaufman (2000) claim that biomedical knowledge is not as well
integrated in experts as Boshuizen and others propose. Their experiments
(e.g. Kaufman et al. 1996) showed that experienced clinicians were able to give
accurate clinical explanations of pathophysiology but not necessarily give correct
biomedical explanations of basic physiology. Their work suggests that the process
of increasing clinical expertise depends on an integration of basic knowledge into
clinical concepts. This makes it difficult subsequently to disentangle the basic
knowledge from the particular clinical manifestations which a clinician would
encounter. The teaching of biomedical sciences on medical courses with a traditional
curriculum is generally done at a pre-clinical stage, which explains why first,
students are unable to use it at this stage to help with diagnostic reasoning and,
second, why it seems to be relearned in a different form when clinical knowledge
becomes deeper.
These difficulties with knowledge depth, structure and integration impact on
novices’ ability to reason through a problem and use effective strategies. Novices
have difficulty planning a diagnostic strategy and organizing incoming information.
Mavis et al. (1998) found that novice medical students had difficulty applying
knowledge and strategies flexibly to a diagnostic situation, planning a focussed
neurological examination but carrying out an unnecessarily comprehensive one. This
unfocussed information gathering has also been shown in case history taking and
hypothesis formation. Patel et al. (1997) found that medical students attempting to
diagnose a complex endocrinology case were unable to develop specific hypotheses
and thereby target their questioning of the patient. Instead, using a standard case
history, they adopted a data gathering rather than hypothesis driven approach, and
therefore generated a large amount of irrelevant information. Similarly, in scientific
reasoning, Klahr (2000) found that novices begin tasks without clear goals, testing
without a hypothesis, rather than starting with a hypothesis and carrying out specific
tests.
Problems with knowledge organization and strategy lead to difficulties in
evaluating progress and interpreting findings. Hmelo-Silver et al. (2002) studied
novice and expert physicians designing a trial for a new cancer drug. They found
that novices spent a considerable amount of time monitoring their cognitive
activities, through discussion of whether or not they understood all aspects of the
task, but much less time than the experts evaluating their progress, for example by
comparing the results of trials. Being able to evaluate progress is necessary in any
activity, such as clinical reasoning, where hypotheses are formed, tested and
evaluated, often in cycles. Efficient evaluation of progress has an effect on the
amount of effort involved and the accuracy of the final results or diagnosis.
Difficulties in evaluating progress and understanding the significance of findings
were also found by Arocha and Patel (1995) in a study of novice medical diagnostic
reasoning using inconsistent cases (i.e. cases where clinical findings or test results do
not fit with initial information). Novices were generally unable to incorporate
inconsistent data into their hypotheses. They were able to generate only a limited
number of hypotheses, which in itself precludes the consideration of alternative
hypotheses even if the inconsistency is perceived.
McAllister and Rose (2000) acknowledge the relative paucity of research into
the processes of clinical reasoning in speech and language therapy. However,
there are similarities in the global characteristics of diagnostic reasoning
across related professions such as medicine, physiotherapy, occupational therapy
and nursing, i.e. all these professionals take a history, carry out some form
of assessment or investigative testing, compare the results to known disorders
and come to a diagnosis. It is likely, therefore, that speech and language therapy
novices will display similar reasoning difficulties to those observed in novices
from other clinical domains. Cox and Lum (2004) described a range of
domain general and domain specific skills necessary in speech and language
therapy. Examples of domain general skills (i.e. skills that any intelligent
person could be expected to have without any specific training) include:
hypothesis generation, an ability to decide how evidence supports a hypothesis
and deductive reasoning ability (i.e. making a hypothesis from a piece of data
and then systematically examining further data to determine whether they
support or contradict the hypothesis). Examples of domain specific skills (i.e.
those that relate to a specific knowledge and skill base, in this case speech
and language therapy) include: knowledge of language processing models and tests
and an ability to link these to observations of patients’ communication
impairments.
The current research examined speech and language therapy students’
developing diagnostic reasoning skills, using an existing database of speech and
language therapy cases, the Patient Assessment and Training System (PATSy)
(Lum et al. 2006). The database consists of ‘virtual patients’, and includes video
clips, medical history, assessment results and links to related publications. Students
were able to ‘administer’ tests to patients and keep a log of their findings and
conclusions.

Methods
Participants
The study recruited eight masters level and 26 undergraduate speech and language
therapy students from two UK universities via posters on notice boards and
e-mail. Undergraduate students were in the penultimate year of their studies and
masters level students were in their final year. In addition, a pair of speech and
language therapists experienced in aphasia but not members of the project team
took part in order to provide a comparison. University ethical approval was
granted for the conduct of the research and all usual ethical practices were
observed.

Procedure
Students worked in pairs (dyad pairings were mostly self-selected). The main reason
for the use of dyads was to collect verbal protocols that were more natural and less
cognitively disruptive for participants to produce than think-aloud protocols from
individual participants. Dyads were given one hour to undertake the diagnosis of
one of three pre-selected PATSy cases: DBL and RS, both acquired cases, or JS1, a
developmental case. The PATSy cases used for the study all exhibited a degree of
ambiguity in their clinical presentation, i.e. their behavioural profile might be
consistent with a number of possible diagnoses of underlying impairment.
Participants were asked to come to a consensus decision and produce a set of
statements that described key impairments shown by the case, and if possible, an
overall diagnostic category.
The interaction of the students was video-recorded and all participants
completed a learning log that is automatically generated and stored within PATSy.
The learning log contained questions such as ‘What conclusions can you draw from
what you have just seen?’ and ‘Where do you want to go next (further test data?,
introductory video?)’. A dynamic video screen capture was also performed using
commercial software (Camtasia®). Subsequently these multiple data sources were
synchronized for playback and analysis using NITE tools (http://www.ltg.ed.ac.uk/
NITE/), developed at the University of Edinburgh.

Coding
The statements made by participants in dialogue were coded for particular
statement types that might occur in diagnostic reasoning. Coding was carried out
directly from the combined files described above. All utterances of students
relating to the diagnosis of the case were coded. The coding scheme was
developed for the purposes of this study. An initial six-level scheme was devised a
priori from expectations of novice difficulties in the diagnostic process by two
experienced speech and language therapists on the project team with extensive
experience of student clinical learning, who then independently coded the
statements. Using a consensus method, they discussed codings and revised
descriptors for each level. This process resulted in the original level 5 category
being subdivided into two levels based on the granularity of diagnosis. After a
series of iterations of the descriptors for each category, inter-rater reliability was
established by the two raters independently coding a random selection of 30% of
the dialogue data sample. One rater was blind to the PATSy case, participants and
site at which data were collected, although occasionally the content of the
discussion, particularly regarding the paediatric case, made it impossible always to
be blind to the case. A Kappa score of 0.89 was achieved, indicating satisfactory
inter-rater reliability. One rater then coded all spoken statements produced by the
student dyads and the expert pair. Intra-rater reliability was assessed on codings
with a time interval of 4 months between categorisations. A Kappa score of 0.97
was achieved, indicating highly satisfactory intra-rater reliability. The coding
scheme contained seven categories.
• Level Zero: Other. Ambiguous statements and hypotheses that could not be
tested with the data available on the PATSy system.
• Level One: Reading of data. Statements that consisted of reading aloud data
without making any comment or additional interpretation.
• Level Two: Making a concrete observation. This category included
statements about a single piece of data that did not use any
professional terminology.
• Level Three: Making a superordinate level clinical observation. Descriptive
statements which used a higher level concept couched in professional
register, or compared information from two or more test results.
• Level Four: Hypothesis. Statements that expressed a predicted causal
relationship between two factors.
• Level Five: General diagnostic statement. Statements including or excluding a
superordinate diagnostic category and of the type that might be used in a
report to another, non-SLT professional.
• Level Six: Specific diagnostic statement. Statements in this category shared
the characteristics of Level Five diagnostic statements. However, statements
at this level had a greater detail of description than Level Five statements and
might be used in a report to another speech and language therapist. (See
appendix A for definitions and examples of each category.)
The coding categories relate to the literature reviewed earlier, for example, Level
Two might be seen as attention to surface features of behaviour (Sloutsky and Yarlas
2000), Level Three could be viewed as the integration of knowledge into larger
concept clusters (Boshuizen and Schmidt 2000), Level Four links to the ability to
plan a diagnostic strategy (Patel et al. 1997), and Levels Five and Six could
characterize the evaluation of progress through a series of assessments (Patel and
Arocha 1995).
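The inter- and intra-rater Kappa scores reported above (0.89 and 0.97) are instances of Cohen's kappa. As a minimal sketch, assuming the standard unweighted form of the statistic and using invented rater codes (none of this is the study's data), the computation is:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters assigning categorical codes."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented codes: 3/4 observed agreement, 0.5 expected by chance.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # 0.5
```

Kappa corrects raw percentage agreement for the agreement two raters would reach by chance, which is why it is the conventional reliability statistic for categorical coding schemes like the one used here.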

Analyses
Before the coding of data, student pairs were independently categorized as
diagnostically accurate (DA) or inaccurate (DI) based on whether they reached a
diagnosis for the case they had worked on (either DBL, RS or JS1) that was at an
appropriate level for a novice/newly qualified speech and language therapist. This
categorisation was carried out by experienced speech and language therapists on the
project team. DA students were those that were able to report key impairments
displayed by the case (e.g. type and extent of lexical retrieval deficit). Similarly, the
tests selected by a pair were evaluated for relevance to the case and to the comments
in dialogue and in their written log. In addition, test choices were examined for
relatedness and movement along a diagnostic path. For example, a test sequence
involving a switch from a picture semantics task such as Pyramids and Palm Trees
(Howard and Patterson 1992) to non-word repetition in the context of discussion of
written word comprehension was classed as an unrelated sequence. The
performance of a subset of student dyads who diagnosed the aphasic case DBL was
compared with the dyad of experienced clinicians who diagnosed the same case.

Results
Eight pairs of students were categorized as being diagnostically accurate (DA). The
remaining nine pairs did not produce a diagnosis that was viewed as accurate for a
novice clinician and were categorised as diagnostically inaccurate (DI). The
difficulties displayed by the diagnostically inaccurate subgroup were: a failure to
interpret test results and video observations by assessing their relevance in relation
to evidence already gathered, difficulty in carrying out a sequence of tests consistent
with a diagnostic reasoning path, and problems in recalling and using theoretical
knowledge.

Figure 1. Mean number of statements made by student dyads from the diagnostically accurate and
diagnostically inaccurate subgroups over 1 hour and all three PATSy test cases.

Figure 1 displays the mean number of statements of each type produced by the
DA and DI subgroups (17 dyads total, across all three PATSy cases). The data reveal
some disparities between the groups: the DI dyads produced a greater number of
Level One and Two statements, indicating more reading aloud of information on the
screen without interpretation and a greater number of descriptive statements in non-
specialist register. Student use of Level Three, Four and Five, statements, that is,
superordinate statements using professional terminology, statements postulating
relationships between two variables, and general diagnostic statements, appeared
with similar frequency in the two subgroups. The DA group produced more Level
Six statements, where the diagnosis was expressed in professional terminology of
fine granularity. This suggested that this subgroup could link the patterns of
behaviour observed in the patient case to highly specific domain knowledge, i.e.
these students could bridge the gap between theory and practice.
Table 1. Numbers of statements at each level for experts, diagnostically accurate (DA) and
diagnostically inaccurate (DI) pairs, expressed as a percentage of the total for each pair, for
PATSy case DBL

                  Diagnostic statement level (percentage of total)        Row
Participants        0      1      2      3      4      5      6      total (%)
Expert pair         0      0    5.26  60.52  13.15   7.89  13.15        100
DA pair C         4.16     0   12.5   41.66  20.83  12.5    8.33        100
DA pair F         8.33     0    4.16  54.16  12.5    4.16  16.66        100
DA pair P           0    9.52   4.76  23.80  33.33   4.76  14.28        100
DI pair E         3.70   7.40   7.40  37.03  40.74     0    3.70        100
DI pair K        13.04   4.34   8.69  34.78  21.73  13.04   4.34        100
DI pair M         6.45  29.03  25.80  25.80  12.90     0      0         100
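The percentages in Table 1 normalise each pair's raw statement counts by that pair's row total, which makes dyads with different amounts of talk comparable. A minimal sketch of that conversion, using invented counts rather than the study's data, is:

```python
def row_percentages(counts):
    """Convert raw per-level statement counts into row percentages."""
    total = sum(counts)
    return [round(100 * c / total, 2) for c in counts]

# Invented raw counts across Levels 0-6 for a fictional dyad.
print(row_percentages([1, 1, 2]))  # [25.0, 25.0, 50.0]
```

Rounding to two decimal places, as in Table 1, means a row can sum to slightly more or less than exactly 100.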

The experienced pair of therapists diagnosed case DBL and their performance
was compared with the subset of students (n = 6 dyads) who also diagnosed this case.
For the purposes of making a valid comparison, we compared only those student
pairs who diagnosed the same case.
The results are presented in Table 1. Table 1 shows that the experienced therapists
did not make Level Zero or Level One statements. They made very few Level Two
statements but a greater number of Level Three statements, compared with either of
the student groups. Experienced therapists also made approximately 20% firm
diagnostic statements at Levels Five and Six. Student results on case DBL conform to
the general pattern observed across all PATSy cases. Again, DA students made
fewer Level One and Two statements and more Level Six statements. The profile
of the DA students was more similar to that of experienced clinicians than to that of
the DI group. DA students also made approximately 20% Level Five and Six
statements, whilst the occurrence of such statements was generally lower in the DI
subgroup.
Further qualitative analyses of student performance revealed a number of
themes indicative of problems in diagnostic reasoning. Examples are given below
which are typical of students in the DI group. Students displayed few well-
elaborated schemata of diagnoses, leading to difficulties in making sense of data:
‘She was making errors, wasn’t she, in producing words but I haven’t really found
any pattern yet.’

‘It’s hard to know what to look at isn’t it? What means something and what
doesn’t.’

‘I’m not entirely sure why they’re (client’s responses) not very appropriate.’
The high numbers of Level One and Two statements in the DI group reflect
problems in this area. The patient’s behaviours are noted, but the students have
difficulty interpreting their significance or relationship. Some students showed
difficulty in carrying out a sequence of tests consistent with a diagnostic-reasoning
path. For example one dyad chose the following test sequence at the beginning of
their assessment of the paediatric case JS1: a handwriting sample, a non-word reading
test, followed by a word reading test and then a questionnaire on the client’s social and
academic functioning. They started with marginally relevant and relatively fine grained
tests before going on to look at the questionnaire. In this case, the questionnaire gave
useful background information about broad areas of difficulty for the client.
Evaluating this evidence would have been more useful at the beginning of their
assessment as it allows the clinician to ‘reduce the problem space’ in which they are
working and to focus their diagnostic effort on areas that are more likely to be crucial
to the understanding of the case. No hypotheses or specific clinical reasons for these
tests were given by the students, indicating that they were not using the tests to
attempt to confirm or disconfirm a hypothesis about the case they were diagnosing.
Their approach was descriptive, rather than theory or hypothesis-driven.

Discussion
Studies of reasoning in domains related to speech and language therapy and the
results of the current study provide evidence that there may be common patterns of
development from novice to expert. Results from the current research show that
student speech and language therapists exhibit difficulty in conceptualising
problems at a deep, abstract level, planning a diagnostic strategy, organising
incoming information, evaluating progress and interpreting findings, as reported by
the studies in other domains.
Theoretically motivated and empirically supported resources to address these
issues could be developed, such as ‘intelligent tutors’, using hypermedia support to
allow novice speech and language therapy students to learn in a ‘virtual’ situation,
thus allowing them to be better prepared when interacting with real patients. The
analysis of the data presented here has led to a number of ideas for enhancing
students’ diagnostic reasoning, which offer potential for use as formative assessment
tools for educators, but also as self-assessment tools for students. For example,
making students aware of the types of statement described in the coding scheme
presented here could provide a structure for self-monitoring and assessment,
enabling students to evaluate and develop their own diagnostic reasoning skills. A
student making Level Two statements could use the descriptors in the coding
scheme to develop those types of statements into Level Three statements, for
example, from ‘Scores worse when words are involved’ (Level Two) to ‘Worse at
accessing semantic information from written form’ (Level Three).
Similarly, assisting students to formulate testable hypotheses can facilitate an
efficient and effective assessment strategy. For example, from ‘I think this is quite
significant, this non-word reading thing’ to ‘Her inability to read non-words might
indicate dyslexia and poor ability to translate from orthography to speech’.
In addition, a resource currently under development consists of a computer-
based interactive tree diagram (Figure 2) of the cognitive subprocesses associated
with language comprehension and production which may be evaluated by a
particular speech, language or cognitive test.
Such tools enable students to understand the processes that are probed by a
particular test. In turn, this could then facilitate theory and hypothesis-driven
reasoning during the assessment process. The tool could also be used as part of a
tutorial with questions such as: ‘Using the diagram, trace a description for the TROG.
How might thinking about tests in this way be helpful when planning a test strategy?’
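The mapping such a tree diagram encodes could be sketched as a lookup from a named test to the processing subprocesses it is taken to probe. The structure below is an invented illustration of the idea only: the branch names, subprocess labels and test-to-subprocess links are assumptions, not the actual content of the tool under development (although TROG and Pyramids and Palm Trees are tests mentioned in this paper):

```python
# Hypothetical sketch of the tree-diagram idea: each assessment is linked
# to the language-processing subprocesses it is assumed to evaluate.
TEST_TREE = {
    "comprehension": {
        "TROG": ["auditory input", "grammatical parsing"],
        "Pyramids and Palm Trees": ["semantic system"],
    },
    "production": {
        "non-word reading": ["orthography-to-phonology conversion"],
    },
}

def subprocesses_for(test_name):
    """Return the subprocesses a named test is linked to, or [] if unknown."""
    for branch in TEST_TREE.values():
        if test_name in branch:
            return branch[test_name]
    return []

print(subprocesses_for("TROG"))  # ['auditory input', 'grammatical parsing']
```

A student tracing such a lookup before selecting a test is, in effect, being prompted to state which subprocess a hypothesis concerns, which is the theory-driven behaviour the tool aims to encourage.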
Students could be prompted to make superordinate clinical observations and a
tentative hypothesis early in an assessment session. After making a hypothesis,
Figure 2. Computer-based interactive tree diagram for describing tests.

students could be prompted about a suitable test either before they had made a
choice, or immediately afterwards if they chose an inappropriate test for their
hypothesis. For students using PATSy, these prompts could take the form of a video
showing two students discussing a relevant topic. After a series of tests, students
could be prompted to attempt a firm diagnostic statement. Again, within PATSy,
video clip examples of students doing this could be offered concurrently. This
concept of vicarious learning (i.e. learning by observing the learning of others) is the
focus of current research activity (http://www.vicarious.ac.uk).
McAllister and Rose (2000) promote and describe curriculum interventions
designed to make the diagnostic reasoning process conscious and explicit, without
separating it from domain knowledge. They claim that this helps students to
integrate knowledge and reasoning skills. Whilst this is not a universally shared
opinion (e.g. Doyle 1995, 2000), the results described here indicate that students may
benefit from explicit teaching of strategies to improve their clinical reasoning and
consolidate their domain knowledge.

Acknowledgements
The research was funded as part of a 3-year ESRC grant, under the Teaching and
Learning Research Programme (Grant No. RES139-25-0127). The authors thank Dr
Julie Morris at the University of Newcastle for her advice and contributions to this
paper, and colleagues John Lee and Susen Rabold (University of Edinburgh), Barbara
Howarth (University of Newcastle) and Jianxiong Pang (University of Sussex).

References
AROCHA, J. F. and PATEL, V. L., 1995, Novice diagnostic reasoning in medicine: Accounting for clinical
evidence. Journal of the Learning Sciences, 4, 355–384.
BOSHUIZEN, H. P. A. and SCHMIDT, H. G., 1992, On the role of biomedical knowledge in clinical reasoning
by experts, intermediates and novices. Cognitive Science, 16, 153–184.
BOSHUIZEN, H. P. A. and SCHMIDT, H. G., 2000, The development of clinical reasoning expertise. In
J. Higgs and M. Jones (eds), Clinical Reasoning in the Health Professions (Edinburgh: Butterworth
Heinemann), pp. 15–22.
CAMTASIA, 2006, TechSmith Camtasia Studio Screen Recording and Presentation (available at: http://
www.techsmith.com/camtasia.asp) (accessed on 1 September 2006).
COX, R. and LUM, C., 2004, Case-based teaching and clinical reasoning: seeing how students think with
PATSy. In S. Brumfitt (ed.), Innovations in Professional Education for Speech and Language Therapy
(London: Whurr), pp. 169–196.
CROWLEY, R. S., NAUS, G. J. and FRIEDMAN, C. P., 2001, Development of visual diagnostic expertise in
pathology. In S. Bakken (ed.), American Medical Informatics Association Annual Symposium
(Washington, DC: AMIA).
DOYLE, J., 1995, Issues in teaching clinical reasoning to students of speech and hearing science. In
J. Higgs and M. Jones (eds), Clinical Reasoning in the Health Professions (Edinburgh: Butterworth
Heinemann), pp. 224–234.
DOYLE, J., 2000, Teaching clinical reasoning to speech and hearing students. In J. Higgs and M. Jones
(eds), Clinical Reasoning in the Health Professions (Edinburgh: Butterworth-Heinemann),
pp. 230–235.
ELSTEIN, A. S., SHULMAN, L. S. and SPRAFKA, S. A., 1978, Medical Problem Solving: An Analysis of Clinical
Reasoning (Cambridge, MA: Harvard University Press).
HMELO-SILVER, C., NAGARAJAN, A. and DAY, R. S., 2002, ‘It’s harder than we thought it would be’: a
comparative case study of expert-novice experimentation strategies. Science Education, 86,
219–243.
HOWARD, D. and PATTERSON, K., 1992, Pyramids and Palm Trees: A Test of Semantic Access from Words and
Pictures (Bury St Edmunds: Thames Valley Test Co.).
KAUFMAN, D. R., PATEL, V. L. and MAGDER, S. A., 1996, The explanatory role of spontaneously generated
analogies in reasoning about physiological concepts. International Journal of Science Education, 18,
369–386.
KLAHR, D., 2000, Exploring Science: The Cognition and Development of Discovery Processes (Cambridge, MA:
MIT Press).
LANGUAGE TECHNOLOGY GROUP, UNIVERSITY OF EDINBURGH, 2006, NITE XML Toolkit Homepages (available
at: http://www.ltg.ed.ac.uk/NITE) (accessed on 1 September 2006).
LUM, C., COX, R., KILGOUR, J. and MORRIS, J., 2006, PATSy: A Database of Clinical Cases for Teaching and
Research. Universities of Sussex, Edinburgh and Newcastle (available at: http://www.patsy.ac.uk)
(accessed on 1 September 2006).
MAVIS, B. E., LOVELL, K. L. and OGLE, K. S., 1998, Why Johnnie can’t apply neuroscience: testing
alternative hypotheses using performance-based assessment. Advances in Health Sciences Education,
3, 165–175.
MCALLISTER, L. and ROSE, M., 2000, Speech–language pathology students: learning clinical reasoning. In
J. Higgs and M. Jones (eds), Clinical Reasoning in the Health Professions (Edinburgh: Butterworth-
Heinemann), pp. 205–213.
PATEL, V. L. and AROCHA, J. F., 1995, Cognitive models of clinical reasoning and conceptual
representation. Methods of Information in Medicine, 34 (1), 1–10.
PATEL, V. L., GROEN, G. J. and PATEL, Y. C., 1997, Cognitive aspects of clinical performance during
patient workup: the role of medical expertise. Advances in Health Sciences Education, 2, 95–114.
PATEL, V. L. and GROEN, G. J., 1986, Knowledge based solution strategies in medical reasoning. Cognitive
Science, 10, 91–116.
PATEL, V. L. and KAUFMAN, D. R., 2000, Clinical reasoning and biomedical knowledge: Implications for
teaching. In J. Higgs and M. Jones (eds), Clinical Reasoning in the Health Professions (Edinburgh:
Butterworth-Heinemann), pp. 33–44.
RIKERS, R. M. J. P., LOYENS, S. M. M. and SCHMIDT, H. G., 2004, The role of encapsulated knowledge in
clinical case representations of medical students and family doctors. Medical Education, 38,
1035–1043.
SHANTEAU, J., GRIER, M., JOHNSON, J. and BERNER, E., 1991, Teaching decision-making skills to student
nurses. In J. Baron and R. V. Brown (eds), Teaching Decision Making to Adolescents (Hillsdale, NJ:
Lawrence Erlbaum Associates).
SLOUTSKY, V. M. and YARLAS, A. S., 2000, Problem representation in experts and novices: Part 2.
Underlying processing mechanisms. In L. R. Gleitman and A. K. Joshi (eds), Twenty-Second
Annual Conference of the Cognitive Science Society (Mahwah, NJ: Lawrence Erlbaum Associates).
VAN DE WIEL, M. W. J., BOSHUIZEN, H. P. A. and SCHMIDT, H. G., 2000, Knowledge restructuring in
expertise development: evidence from pathophysiological representations of clinical cases by
students and physicians. European Journal of Cognitive Psychology, 12, 323–355.
Appendix: Seven-level coding scheme


Level Zero: Other
Statements that contain features that cross coding categories are recorded as
ambiguous. This level also includes hypotheses that cannot be tested with the data
available on the PATSy system, e.g. speculation about the patient’s lifestyle or the
patient’s state of mind on a particular day: ‘he’s had two heart attacks and he’s had
this parietal infarct which kind of, suggests to me … poor lifestyle’ or ‘Perhaps he
didn’t feel like doing a test that day’.

Level One: Reading of data


Statements that consist of reading data aloud without making any comment or
additional interpretation, e.g. ‘Forty-six out of fifty-two on Pyramids and Palm Trees
picture–picture’.

Level Two: Making a concrete observation


Statements about a single piece of data which do not use any professional terminology
(i.e. they could be made by a lay person with no domain-specific knowledge).
Statements at this level do make some comment on, or interpretation of, the data
beyond simply reading it aloud, e.g. ‘He didn’t get them all right’.

Level Three: Making a superordinate level clinical observation


Statements that use higher level concepts and show evidence of the use of some
professional register rather than lay terms, e.g. ‘there were some semantic errors
there’. Alternatively, statements at this level may compare information to a norm or
other test result, e.g. ‘11 months behind but that doesn’t strike me as particularly
significant’. These statements can be differentiated from higher level statements
because they do not make firm diagnostic statements such as ‘he’s definitely got
comprehension problems’. Similarly, they are not couched in hypothesis language,
i.e. they could not trigger a specific search strategy through assessments or
observations. They may make statements from the data including words such as
‘seems’, but do not predict from the data.

Level Four: Hypothesis


The crucial element is the expression of a causal relationship between two factors.
This may be expressed by an explicit or implicit ‘if … then’ structure, e.g. ‘If he’s
finding the picture–picture condition difficult, then that suggests his central
semantic system is probably not intact’.
Statements at this level may be phrased as a question directed at the data, e.g. ‘are
these results saying autism?’ They may be couched as a predictive statement that
might trigger a search/test strategy, e.g. ‘he could be dyspraxic’.

Level Five: Diagnostic statement


Statements in this category are phrased in the language of diagnostic certainty. They
may contain strong linguistic markers of certainty, such as ‘definitely’ or ‘certain’.
They do not contain any indicators of uncertainty. Statements with tag questions,
such as ‘He has an expressive language disorder, doesn’t he?’ are carefully evaluated,
as the tags have a social function. Statements in this category include or exclude a
superordinate diagnostic category. The granularity of the statement allows broad
exclusion/inclusion of diagnostic categories, e.g. language versus speech disorder.
Statements in this category are likely to be found in a letter to a professional other
than a speech and language therapist, e.g. ‘He definitely has word-finding problems’.

Level Six: Diagnostic certainty


Statements in this category are phrased in the language of diagnostic certainty. They
may contain strong linguistic markers of certainty, such as ‘definitely’ or ‘certain’,
and no indicators of uncertainty. Statements in this category include or exclude a
superordinate diagnostic category and use a predominantly appropriate professional
register. They are likely to be used in a report to a speech and language therapist,
i.e. they use specific professional terminology. Statements at this level have a finer
granularity of description than level-five statements, e.g. ‘The patient has an
impaired central semantic system’.