

Copyright © 2010 by The Journal of Bone and Joint Surgery, Incorporated

Topics in Training
Learning Assessment Toolkit
By Peter G. de Boer, MA, FRCS, Rick Buckley, MD, FRCS(C), Pascal Schmidt, Robert Fox, EdD, and Jesse Jupiter, MD

Modern surgical education is evolving with changes in the academic environment. Surgeons, surgical educators, and administrative associations need to know what to teach, how to do it, and whether it has succeeded. Whereas surgeons have many tools to help them see what is needed for better patient care and to measure patient progress, surgical education is just beginning to develop techniques for interpreting the needs of students and assessing the outcomes of learning in their practice1,2. The absence of surgical education assessment techniques has made it necessary to guess what is needed to improve education. We have estimated the success or failure of teaching by attempting to score faculty performance rather than by obtaining more valid data. This has led surgical educators to use mistaken assumptions based on their personal experience or reasoning without evidence3.

Surgical education should be based on needs assessment, efficient program planning, and a strong curriculum. The motivation of learners should be high, and the learners' needs should be met. This is dependent on measurable outcomes with objective data demonstrating knowledge acquisition that meets the needs of surgical learners4.

In the last ten years, the providers of educational resources (governments, training boards, charitable foundations, and commercial companies) have been increasingly interested in whether surgical education, as it is delivered, has had a measurable effect5-7.

Through a series of pilot-tested steps, we developed a set of instruments providing insights into the effectiveness of surgical education in the field of orthopaedic trauma. We termed the instrument the Learning Assessment Toolkit. It was developed to supplement the judgment of surgical educators before and after a teaching event with real evidence of need, motivation, and outcomes of educational programs in orthopaedic trauma surgery.

The primary goal of this report is to outline the elements of the Learning Assessment Toolkit and its developmental steps. A secondary goal is to show how its use can change the nature and content of surgical educational events to improve learning outcomes. This report also suggests what further instruments are necessary to achieve the desired end result: education that substantially improves patient care.

Materials and Methods
The educational event used to design and test the assessment toolkit is a fracture course based on the AO principles of operative fracture management. The course is aimed at residents in their first few years of training. The participant groups are homogeneous in North America and Western Europe but heterogeneous in the developing world in terms of their needs and experience. The course has evolved over the past forty-nine years, but it has been taught in a standardized form for the past decade. The "key competencies" guide the development of the Learning Assessment Toolkit and the teaching and learning in the course (Table I). A key competency was defined as a piece of knowledge and/or skill that educators expected the course participants to know or be able to do after the course8,9. Key competencies are statements describing the behaviors necessary to address problems related to successfully providing the gold standard of patient care. They were collected from three experienced course chairmen, each with more than ten years of teaching experience, acting as a panel of experts.

How the Learning Assessment Toolkit Works (Table II)
Precourse Assessment
Two weeks before the educational event, course participants were contacted online. They were presented with the fourteen key competencies and were asked two questions about each competency. The first question was: "How important is this competency to you in your daily practice?" Participants were asked to score from 1 to 5 (with 5 indicating the highest importance) on a Likert scale. They were then asked to evaluate their own ability relating to the individual competencies using the same scoring system.
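The precourse questionnaire above yields two Likert ratings per competency per participant. The sketch below is a minimal illustration in Python of tabulating mean importance and mean self-rated ability per competency; it is not part of the published toolkit, and the competency names and response values are hypothetical. Only the 1-to-5 Likert bounds come from the text.

```python
# Illustrative tabulation of precourse Likert responses (not the AO toolkit code).
# Each participant rates each competency twice: importance (1-5), ability (1-5).
from statistics import mean

# responses[participant_id] = {competency: (importance, ability)} - hypothetical data
responses = {
    "p1": {"Treat open fractures": (5, 2), "Compartment syndrome": (4, 4)},
    "p2": {"Treat open fractures": (5, 3), "Compartment syndrome": (3, 4)},
}

def summarize(responses):
    """Return (mean importance, mean self-rated ability) per competency."""
    by_comp = {}
    for ratings in responses.values():
        for comp, (importance, ability) in ratings.items():
            # Enforce the 1-5 Likert bounds described in the text.
            assert 1 <= importance <= 5 and 1 <= ability <= 5
            by_comp.setdefault(comp, []).append((importance, ability))
    return {
        comp: (mean(i for i, _ in pairs), mean(a for _, a in pairs))
        for comp, pairs in by_comp.items()
    }

summary = summarize(responses)
```

The two aggregates correspond to the first two "pieces of information" the authors describe feeding back to faculty and participants.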

Disclosure: The authors did not receive any outside funding or grants in support of their research for or preparation of this work. Neither they nor a
member of their immediate families received payments or other benefits or a commitment or agreement to provide such benefits from a commercial
entity.

J Bone Joint Surg Am. 2010;92:1325-9 · doi:10.2106/JBJS.I.00775


THE JOURNAL OF BONE & JOINT SURGERY · JBJS.ORG · LEARNING ASSESSMENT TOOLKIT · VOLUME 92, NUMBER 5, MAY 2010

TABLE I Competencies Defined for Principles of Fracture Management Course
1. Assess fracture-related soft-tissue injuries and apply strategies to treat these injuries
2. Safely treat open fractures
3. Demonstrate techniques to achieve relative stability based on an accurate description of indications
4. Diagnose and treat compartment syndrome
5. Explain how design features and functions of various implants relate to their application to fracture fixation
6. Accurately explain the influence of reduction and fixation techniques on bone healing
7. Apply reduction techniques in fracture management
8. Apply techniques to stabilize the pelvis in the emergency management of a hemodynamically unstable patient with a pelvic fracture
9. Explain the factors that lead to malunion, nonunion, and infection following fracture treatment
10. Explain why the components of preoperative planning are essential for successful fracture treatment
11. Treat diaphyseal fractures according to the principles of restoring length, axis, and rotation
12. Treat articular fractures to achieve early active movement according to the principles of anatomical reduction and absolute stability
13. Demonstrate techniques to achieve absolute stability based on an accurate description of indications
14. Explain strategies for treating fractures in osteoporotic bone

Three pieces of information were obtained from this survey, passed on to the faculty of the course, and fed back to the individual course participants:
1. Which competencies were rated as the most important from the participants' point of view (an indication of where they thought they ought to be in their practice)?
2. How capable did the participants believe they actually were in each competency?
3. What was the difference between these two measures for each key competency, i.e., the gap score? This is an indication of the difference between where the participants thought they were and where they ought to be in their practice.

The gap score is a reasonable measure of the motivation of the course participants to learn at the course. The perceived distance between where adult learners believe they are and where they believe they ought to be, as indicated in this case by the measure of the importance of a given competency, creates discomfort in the learner and spurs the drive to learn and change10.

At the same time that the course participants were contacted online with regard to their needs assessment, they were also given two multiple-choice questions relating to each key competency. The multiple-choice test questions were developed by the surgical educators through an expert panel11 consisting of existing experienced faculty members and a small group of senior North American orthopaedic surgical residents, following best-practice guidelines of multiple-choice question-writing (Fig. 1).

The test questions were placed into assessment software (Questionmark Perception; Questionmark, Norwalk, Connecticut) for online pilot testing to collect statistical data as to how the learners were answering the individual questions. After pilot testing these questions in three courses, the expert panel was reassembled to study the data obtained and to eliminate or refine questions that were too easy, too hard, or too confusing. Following further pilot testing of the adapted questions, they were reviewed again and became part of a library of test questions. The response patterns for each pilot have been continually assessed to build up evidence as to the validity and discriminatory capacity of each individual test item.

The objective assessment enables the faculty to have insight into whether the course participants' assessment of their current performance was or was not accurate, and it enables learners to have an understanding of their true level of knowledge. Previous studies have shown that a doctor's perception of his or her own level of knowledge or skill is not accurate, with a tendency for doctors to overestimate their own abilities12-14.

TABLE II Components of Learning Assessment Toolkit
Precourse
- Subjective needs assessment (self-assessment of key competencies)
- Objective needs assessment (test of knowledge with multiple-choice questions)
Course
- Participant evaluation of lecture, practical, or discussion (relevance; faculty performance)
- Faculty evaluation of presentation (evaluation based on objective criteria)
Postcourse
- Subjective needs assessment (self-assessment: how much did the participants think they had learned?)
- Objective needs assessment (test with multiple-choice questions to show how much had been learned)

Evaluation During the Course
The course participants were asked to evaluate each presentation (lecture, discussion group, or practical skill session). The evaluation system for each presentation was a 5-point Likert scale with use of an audience response system. Two questions were asked:
1. How relevant is the presentation to your daily practice?
2. How effectively was the presentation given?
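The gap-score logic described above can be expressed in a few lines. This sketch is illustrative only (the paper publishes no code); the competency names and mean ratings are hypothetical, and the gap is computed, as the text describes, as rated importance minus self-rated ability on the 1-to-5 Likert scale.

```python
# Illustrative gap-score computation (not the published toolkit's code).
# Gap = mean importance rating - mean self-rated ability rating.
# A large positive gap suggests high motivation to learn that competency.

def gap_scores(importance, ability):
    """Per-competency gap score from mean Likert ratings (1-5)."""
    return {comp: importance[comp] - ability[comp] for comp in importance}

# Hypothetical mean ratings per competency from a precourse survey.
importance = {"Unstable pelvic fracture": 4.5, "Preoperative planning": 3.5}
ability = {"Unstable pelvic fracture": 2.0, "Preoperative planning": 3.0}

gaps = gap_scores(importance, ability)
# Rank competencies by gap, largest first: the learners' highest perceived needs.
ranked = sorted(gaps, key=gaps.get, reverse=True)
```

Ranking competencies by gap is one way to surface the "highest need" items, such as the pelvic-fracture competency the participants consistently identified.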

Fig. 1
A sample question from the precourse and postcourse objective assessment used to assess Competency 1: Assess fracture-
related soft-tissue injuries and apply strategies to treat these injuries.
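The pilot-testing step described earlier, which eliminated or refined questions that were too easy, too hard, or too confusing, is conventionally done with classical item statistics. The sketch below is a generic illustration of that approach (difficulty as the proportion answering correctly, discrimination as an upper-minus-lower group comparison); it is not the AO analysis itself, and the thresholds shown are common rules of thumb rather than values from the paper.

```python
# Generic classical item analysis, as might be used when pilot testing MCQs.
# Not the published toolkit's analysis; thresholds are common rules of thumb.

def item_difficulty(item_scores):
    """Proportion of candidates answering the item correctly (0.0-1.0)."""
    return sum(item_scores) / len(item_scores)

def item_discrimination(item_scores, total_scores, fraction=0.27):
    """Upper-lower discrimination index: proportion correct in the top-scoring
    group minus proportion correct in the bottom group (by total test score)."""
    n = max(1, int(len(total_scores) * fraction))
    order = sorted(range(len(total_scores)), key=total_scores.__getitem__)
    low, high = order[:n], order[-n:]
    p_high = sum(item_scores[i] for i in high) / n
    p_low = sum(item_scores[i] for i in low) / n
    return p_high - p_low

def flag(item_scores, total_scores):
    """Flag items that look too easy, too hard, or poorly discriminating."""
    p = item_difficulty(item_scores)
    if p > 0.9:
        return "too easy"
    if p < 0.3:
        return "too hard"
    if item_discrimination(item_scores, total_scores) < 0.2:
        return "poorly discriminating"
    return "keep"
```

Run over the pilot response patterns, statistics like these give the expert panel objective grounds for keeping, refining, or discarding each candidate item.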

The course participant rating was collected electronically after each session, with participation rates in excess of 80%. To ensure validity as to whether the course participants' assessment of faculty performance was or was not accurate, two faculty members were also assigned to assess each presentation on the basis of a group of ten criteria that had been agreed on by the faculty before the educational event and that were supported by the available literature15-17.

Postcourse Assessment
Two weeks after the course, the participants were contacted online and asked to repeat the online questionnaire. They were also given two multiple-choice questions for each competency. The set of questions was new to each individual participant but had been asked before the course to the other half of the course participants. Four sets of questions (A, B, C, and D) were allocated to each course. Half of the course participants answered questions A and B before the course and answered C and D afterward; the other half answered questions C and D before the course and answered A and B afterward. Participants were not asked the same questions before and after the course in order to avoid test-retest bias, which would have tended to improve their scores after the course and give a false impression of knowledge transfer occurring as a result of the course.

Source of Funding
This project was funded entirely by the AO Foundation, Davos, Switzerland.

Results
The Learning Assessment Toolkit has been used in twenty courses in eight countries, involving 1812 participants originating from forty-seven countries. However, only data collected from courses that took place after finalization of the assessment questions are presented: eleven courses in six countries involving 912 participants from forty-six countries. Response rates ranged from 41% to 98%, with an average of 62% for the assessments before the course and 51% for the assessments after the course.

Precourse Subjective Needs Assessment
Overall gap scores were large for an educational event10. The average gap score was 2.25, with a fairly narrow range between 1.9 and 2.4. Certain competencies were consistently ranked as more important than others, and this pattern was independent of the geographical location of the course. The "emergency management of a hemodynamically unstable patient with a pelvic fracture" was consistently identified as the highest need by the course participants.

Large gap scores can occur in one of two ways: either the participant ranks the competency as very important, or the participant believes that his or her ability is poor for the competency tested. Similarly, a small gap score can be explained in two ways: either the participant thinks that the competency is not important for his or her practice, or his or her ability is reasonable for the competency tested. The competencies that consistently showed large and small gap scores are listed in Table III.

Precourse Knowledge Assessment
The level of knowledge of the course participants varied from course to course. On average, the questions were answered correctly by 59% (range, 51% to 67%) of the course participants. The courses held in the developing world were attended by surgeons with greater experience, and this was reflected in their higher knowledge scores.

The correlation between knowledge as measured by objective testing and by self-assessment was variable. In three courses, participants who rated their need to learn about compartment syndrome as low, because they believed they had good existing abilities with regard to that subject, were incorrect in their self-assessment: they had low scores on their objective assessment. In two courses, participants rated their need to learn about compartment syndrome as low because of good existing abilities, and objective testing showed them to be correct.

Course Evaluation
Electronic evaluation of faculty performance by the course participants was carried out in four courses; the other courses were evaluated with use of a paper-based system. A total of 45,600 responses were analyzed. There was a very strong correlation between the participants' perception of the relevance of the presentation to their practice (average score, 4.04; range, 3.88 to 4.21) and their perception of faculty performance (average score, 3.99; range, 3.77 to 4.17). When presentations were given about the same subject to different audiences by different faculty, there was a wide variation in the participants' assessment of

performance. The participants' perception of relevance also changed following the change in performance perception. Therefore, basing changes in curriculum on an analysis of perception of relevance in isolation from faculty performance may be invalid.

TABLE III High and Low Gap Scores*
Large gap score, high level of importance: emergency management of a hemodynamically unstable patient with a pelvic fracture
Large gap score, low level of knowledge: treatment of articular fractures; how design and function of implants relate to their application in fracture fixation; techniques of absolute stability
Small gap score, low level of importance: principles of preoperative planning; techniques of relative stability
Small gap score, high level of knowledge: diagnose and treat a compartment syndrome
*Gap scores are a measure of the difference between where a learner wants to be and where the learner thinks he or she is. Gap scores are derived from self-assessment and are purely subjective.

The evaluation of faculty performance by a trained faculty assessor on the basis of agreed-on criteria correlated very weakly with the participants' evaluations of either relevance or performance (Pearson correlation coefficient, r = 0.54; p < 0.0001). On those occasions when the performance and relevance scores were not closely related, the faculty assessment closely followed the gap between the two different evaluation criteria. For example, a presentation ranked very effective by the faculty evaluator was very likely to have a performance score considerably higher than the relevance score.

Postcourse Subjective Evaluation
All courses evaluated showed marked decreases in the gap scores measured two weeks after the course (average, 1.17; range, 0.55 to 1.645). The gap scores of all competencies declined, with the biggest decreases occurring in the competencies that had had the highest needs before the course. These figures reflect the course participants' belief that they had learned as a result of the course. The learners' highest residual needs varied from course to course, but the "management of open fractures" and "the emergency management of a hemodynamically unstable patient with a pelvic fracture" were the most common areas of residual perceived need.

Postcourse Objective Assessment
Most competencies showed an improvement in their objective assessment scores (average, 73%; range, 69% to 77%). There was considerable variation in learning outcomes for each of the competencies. Problem areas were the teaching of reduction techniques, preoperative planning, and compartment syndrome, where improvements in learning outcomes were modest.

Discussion
With accreditation changes, continuing education for surgeons must meet new requirements18. In addition to having well-prepared faculty who present information in a thoughtful and organized manner, new standards for the accreditation of continuing medical education specify that programs must be based on learner gaps in knowledge, performance, or patient health status. Course administrators must document the assessment of these outcomes18. This presents a formidable challenge because tools for assessing needs, motivation, and outcomes in terms of gaps in knowledge and performance have not been available. The Learning Assessment Toolkit provides a short, practical system for discovering objective and self-assessed gaps in performance of key competencies before and after educational programs.

The toolkit data are designed to provide accurate information related to level of competency for surgeon performance by using case-based multiple-choice questions to test clinical judgment and decision making. Objective evidence and perception together provide feedback to the learners and teachers with regard to the learners' level of motivation and their gaps in knowledge and skill before and after a learning experience. With these kinds of data, educators can understand the level of motivation before and after instruction and also assess gaps in knowledge and skill related to solving clinical problems before and after instruction. The educator learns how learners perceive themselves, how accurate these perceptions are, and to what extent an educational activity has changed perceptions and actual knowledge.

This information can help learners to correct their self-assessed weaknesses and guide them in self-directed learning activities. After the educational experience, learners are given personal data with regard to their perceptions and their individual scores on objective questions related to clinical cases. This improves the accuracy of their self-assessment and can help them to plan for future participation in continuing medical education events.

The education of doctors has one major purpose: to produce changes in knowledge that result in improved patient care. The Learning Assessment Toolkit provides objective evidence as to the success or failure of an educational event in producing improved levels of knowledge19. It provides information to educators as to the strengths and weaknesses of their program and

provides evidence on which effective future changes can be made.

This evaluation system provides a useful guide to enable educators to design appropriate educational offerings to meet the needs of surgeons. However, if individuals learn knowledge and skills at an educational event but cannot put those skills into practice afterward, then the event clearly has not been successful. Assessment of the barriers to knowledge implementation after a course is therefore critical. We are presently conducting a study to identify barriers that are encountered by doctors in implementing what they have learned at courses and how educators can help them to overcome these barriers20.

A Learning Assessment Toolkit can be used by any group of educators. All that is required is the creation of a list of competencies for the educational event and the test questions designed to test knowledge of these competencies. Many software packages exist to allow online testing. The assessment toolkit described in this paper is not subject to copyright, and the authors would be pleased to assist any groups interested in setting up their own assessment program.

NOTE: The authors thank Dr. Laurent Audigé, DVM, PhD, Manager, Methodology, AO Clinical Investigation and Documentation, for his invaluable help in the statistical analysis within this paper.

Peter G. de Boer, MA, FRCS
Pascal Schmidt
AO Education, Stettbachstrasse 6, CH-8600 Dübendorf, Switzerland.
E-mail address for P.G. de Boer: Piet.deboer@aofoundation.org
E-mail address for P. Schmidt: Pascal.schmidt@aofoundation.org

Rick Buckley, MD, FRCS(C)
Foothills Medical Centre, AC 144A, 1403-29th Street NW, Calgary, Alberta T2N 2T9, Canada.
E-mail address: buckclin@ucalgary.ca

Robert Fox, EdD
1001 Olde Oak Court, Norman, OK 73026.
E-mail address: drrdfox@me.com

Jesse Jupiter, MD
Department of Orthopaedic Surgery, Massachusetts General Hospital, Yawkey Building 2100, 55 Parkman Street, Boston, MA 02114.
E-mail address: JJupiter1@partners.org

References
1. Spencer JA, Jordan RK. Learner centered approaches in medical education. BMJ. 1999;318:1280-3.
2. Davis DA, Barnes BE, Fox RD. The continuing professional development of physicians: from research to practice. Chicago: AMA Press; 2003.
3. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance. A systematic review of the effect of continuing medical education strategies. JAMA. 1995;274:700-5.
4. Wilkes M, Bligh J. Evaluating educational interventions. BMJ. 1999;318:1269-72.
5. Casebeer L, Raichle L, Kristofco R, Carillo A. Cost-benefit analysis: review of an evaluation methodology for measuring return on investment in continuing education. J Contin Educ Health Prof. 1997;17:224-7.
6. Regnier K, Kopelow M, Lane D, Alden E. Accreditation for learning and change: quality and improvement as the outcome. J Contin Educ Health Prof. 2005;25:174-82.
7. Bennett NL, Davis DA, Easterling WE Jr, Friedmann P, Green JS, Koeppen BM, Mazmanian PE, Waxman HS. Continuing medical education: a new vision of the professional development of physicians. Acad Med. 2000;75:1167-72.
8. Norman GR, Davis DA, Lamb S, Hanna E, Caulford P, Kaigas T. Competency assessment of primary care physicians as part of a peer review program. JAMA. 1993;270:1046-51.
9. Ward J, Macfarlane S. Needs assessment in continuing medical education. Its feasibility and value in a seminar about skin cancer for general practitioners. Med J Aust. 1993;159:20-3.
10. Fox RD, Miner C. Motivation and the facilitation of change, learning, and participation in educational programs for health professionals. J Contin Educ Health Prof. 1999;19:132-41.
11. Ebel RL. Essentials of educational measurement. 3rd ed. Englewood Cliffs, NJ: Prentice-Hall; 1979.
12. Ward M, MacRae H, Schlachta C, Mamazza J, Poulin E, Reznick R, Regehr G. Resident self-assessment of operative performance. Am J Surg. 2003;185:521-4.
13. Dunning D, Heath C, Suls JM. Flawed self-assessment. Implications for health, education and the workplace. Psychological Science in the Public Interest. 2004;5:69-106.
14. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296:1094-102.
15. Green JS, de Boer PG. AO principles of teaching and learning. Stuttgart: Georg Thieme Verlag; 2005.
16. Coats W, Smidchens U. Audience recall as a function of speaker dynamism. J Educ Psychol. 1966;57:189-91.
17. Russell IJ, Hendricson WD, Herbert RJ. Effects of lecture information density on medical student achievement. J Med Educ. 1984;59:881-9.
18. ACCME. http://www.accme.org. Accessed 2010 Jan.
19. Sachdeva AK. The new paradigm of continuing education in surgery. Arch Surg. 2005;140:264-9.
20. Mazmanian PE, Daffron SR, Johnson RE, Davis DA, Kantrowitz MP. Information about barriers to planned change: a randomized controlled trial involving continuing medical education lectures and commitment to change. Acad Med. 1998;73:882-6.
