
Clinical Chemistry 44:2 401–407 (1998)

TDM Conference

Strategies for physician education in therapeutic drug monitoring


David W. Bates,1* Steven J. Soldin,2 Petrie M. Rainey,3 and Joseph N. Micelli4
Although therapeutic drug monitoring (TDM) is probably very useful overall, studies suggest that it could be used better. Many drug concentration measurements appear to have inappropriate indications or suboptimal timing, particularly in the inpatient setting. Undermonitoring is also a concern. Thus, it may be possible both to improve the quality of TDM and to reduce the overall costs of care. Here we review approaches for improving the use of TDM and present some illustrative experiences. Specific approaches discussed include traditional approaches such as lectures and newsletters, multidisciplinary quality-improvement efforts, formal TDM services, and use of the computer as a tool for education and behavior change. Computerized methods appear to hold substantial potential, particularly as more organizations develop better information systems, but other approaches are also effective and complementary. To be most successful, interventions should consider all stages of the process.

Although TDM is probably highly beneficial in the aggregate, increasing evidence suggests that current use is suboptimal. In some studies, up to 70–80% of drug quantifications performed in inpatients have been inappropriate, primarily because of routine daily monitoring without pharmacological justification [1, 2]. These data suggest that efforts to improve the use of TDM could yield substantial cost reductions without missing important clinical results. The other side of this coin is that TDM is sometimes omitted when it is indicated; the magnitude of this problem is harder to assess, but the potential consequences in terms of patient harm are greater. Our objective is to discuss strategies for physician education regarding optimal use of TDM. Traditional educational approaches, e.g., lectures, can improve practice, but such approaches are labor-intensive and their effects wane with time.
Development of formal TDM services with one-on-one education is also labor-intensive, but the effect does not diminish if the service can be maintained. Multidisciplinary quality-improvement projects and the development of critical pathways and guidelines offer additional opportunities for education and for obtaining clinician buy-in. Making guidelines evidence-based is likely to promote acceptance by physicians [3]. Computerized provider order entry, in which providers write orders directly on-line, offers the opportunity for decision support, including reminders and feedback at the time of order writing. For example, for a TDM order, the patient's last drug value and guidelines for ordering the next measurement can be displayed. The computer can also help with underuse by suggesting an order, e.g., after a specific interval or when a new medication is ordered. Computerized suggestions may not be sufficient, however, if physicians are not convinced that the recommended approach is optimal; in these instances, supplemental educational sessions may be required to change behavior. Here, we discuss our experiences with these initiatives and their potential for favorably altering physician behavior.
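Order-time decision support of the kind described here can be sketched as a small check that runs when a provider requests a drug level. This is an illustrative sketch only; the drug names and minimum re-test intervals below are hypothetical examples, not the guidelines of any particular hospital.

```python
from datetime import datetime, timedelta

# Hypothetical minimum intervals between levels for stable patients.
MIN_INTERVAL = {"digoxin": timedelta(days=7), "phenytoin": timedelta(days=3)}

def tdm_order_advice(drug, last_value, last_drawn, now):
    """Return the message to display when a provider orders a drug level."""
    interval = MIN_INTERVAL.get(drug)
    lines = [f"Last {drug} value: {last_value} (drawn {last_drawn:%Y-%m-%d %H:%M})"]
    if interval is not None and now - last_drawn < interval:
        lines.append(f"A {drug} level was obtained within the last "
                     f"{interval.days} days; consider cancelling this request.")
    return "\n".join(lines)

print(tdm_order_advice("digoxin", "1.4 ug/L",
                       datetime(1998, 1, 5, 6, 0), datetime(1998, 1, 7, 8, 0)))
```

The same lookup table could drive the reverse check for underuse, suggesting an order once the interval has elapsed without a result.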

Traditional Educational Approaches and Multidisciplinary Quality-Improvement Efforts


In general, most traditional educational approaches are effective at changing physician behavior in the short term, but they are labor-intensive and their effect wanes with time [4]. This has been demonstrated across a variety of domains, e.g., drug ordering and test ordering. Direct education is nonetheless essential for convincing providers to adopt new behaviors; for example, computerized reminders to make a change that providers are not convinced is worthwhile will have little or no effect [5]. Although direct education can be presented either one-on-one or in groups, for logistical reasons group educational sessions are probably the best place to start, with more-intensive one-on-one education helpful as a follow-up.

Multidisciplinary quality-improvement efforts are now underway at most hospitals nationwide, and it has been demonstrated, for example, that critical paths can improve the quality of care [6]. For TDM, it is important that those responsible for drug measurement play a role in guideline and critical-path development wherever TDM will be a main feature, e.g., in postneurosurgery or transplantation care. Multidisciplinary efforts focused primarily on improving the use of TDM may also have an important impact.

1 Center for Applied Medical Information Systems Research, Division of General Medicine, Department of Medicine, Brigham and Women's Hospital, and Harvard Medical School, Boston, MA. 2 Department of Medicine, National Medical Center, Washington, DC. 3 Department of Laboratory Medicine, School of Medicine, Yale University, New Haven, CT. 4 Pfizer Pharmaceuticals, Atlanta, GA. *Address correspondence to this author at: Division of General Medicine and Primary Care, Brigham and Women's Hospital, 75 Francis St., Boston, MA 02115. Fax (617) 732-7072. Received August 25, 1997; revision accepted October 22, 1997.

case study: yale university's efforts to improve tdm use


Two recent multidisciplinary efforts in which the Yale TDM laboratory has participated have demonstrated that some desirable behaviors may be maintained with relatively minimal ongoing input. Both efforts began as attempts to reduce the number of drug assays ordered, and both originated outside the laboratory in response to articles about unnecessary measurements. Previous efforts by the TDM laboratory to improve the use of drug measurements, through educational newsletters and direct intervention, had produced few lasting effects. This was attributed in part to the house staff more often viewing the laboratory service as a utility than as professional colleagues. The new initiatives provided the opportunity to deliver guidance through faculty more likely to be viewed as role models. A major objective of the laboratory was to redirect the focus from simply reducing the number of tests to improving the use of TDM results. Whereas reduced testing could lower costs modestly, better use of TDM results could both reduce testing and improve outcomes.

Phenytoin. The neurosurgery intensive care unit proposed an initiative to reduce phenytoin monitoring as a quality-improvement project. Schoenenberger et al. [1] had reported that the major reason for excessive phenytoin monitoring was routine daily measurement. This practice was almost universal in the unit, with an average of 0.92 results per day reported for each patient on therapy. The initial goal of the project was to reduce phenytoin measurements by 50%. In addition to habit, and to the fear of not having a result when the attending physician wanted one, another major impetus for daily measurements was discovered in a review of the usual dosing strategy. Patients were almost uniformly loaded with 1 g of phenytoin and maintained with 100 mg every 8 h. Adjustments in response to out-of-range results consisted of withholding a dose or giving an extra one; almost all adjustments were reactive. This strategy could be compared to driving down a road and steering your car only when you went off the road, and being allowed to look only once a day to see whether you were still on it. In this view, the initial goal of the proposed program (a 50% reduction in utilization) amounted to looking only once every other day. Instead, the revised goal became, figuratively, to encourage defensive prescribing: steering drug therapy proactively rather than reactively. This needed to be done without any substantial imposition on the house staff, who were understandably more concerned about not cutting the patient's spinal cord or carotid artery than with what the patient's phenytoin concentration was. Simple guidelines were developed for loading and maintenance doses that were weight-adjusted in convenient steps. Additional guidelines suggesting appropriate responses to drug concentrations in various ranges (dose changes, if any, and when to obtain the next result) were distributed on cards. Table 1 shows the changes in phenytoin requests in the first 3 months, compared with the same period 1 year earlier. The number of requests decreased by 26%, while the percentage of values falling within the therapeutic range increased by 22%.
Table 1. Results from the Yale TDM laboratory: phenytoin and vancomycin results after intervention to improve the use of these drugs and their monitoring.

                              Initial period           >6 mo. later
                           Before  After  % change  Before  After  % change
Phenytoin
  Specimens/day              3.6    2.6     -26       3.1    3.2      +3
  Specimens/patient          3.9    2.9     -26       3.8    3.7      -3
  % in therapeutic range      46     56     +22        50     60     +20
Vancomycin
  Specimens/day             13.4    5.5     -59      13.4    6.4     -52
  Peak measurements/day      3.9    0.1     -97       3.6    0.1     -97
  % in therapeutic range      46     29     -37        48     33     -31

Unpublished data from PM Rainey, LM Dembry, and J Farrington. For phenytoin, the intervention was to promote the use of simple guidelines for loading and maintenance doses; for vancomycin, the intervention consisted of recommendations to forgo monitoring in patients with normal renal function, to use weight-based dosing, and to order only trough measurements. The interventions were implemented by using education and a computer screen at the time of ordering (see text for details).

The latter change (the improvement in the percentage of in-range values) probably resulted from more-appropriate dosing, in that many patients began receiving regimens other than 100 mg every 8 h. This interpretation was also supported by a follow-up study of the period 6–9 months after the program was initiated. There had been no ongoing efforts to promote compliance in the interim, and the frequency of phenytoin requests returned to baseline; however, the improvement in the percentage of results within the therapeutic range was largely maintained. Without ongoing promotion, reducing order frequency was not positively rewarded (and received negative reinforcement when attending physicians wanted results that had not been obtained); accordingly, that behavior underwent extinction. On the other hand, results falling within the therapeutic range were intrinsically reinforcing, so more-appropriate dosing appeared to have been maintained.

This occurred despite some unexpected negative feedback: when patients were discharged on individualized dosing regimens, their regular physicians complained. Doses involving other than an integral number of 100-mg tablets were felt to decrease compliance and were also noted to increase costs, because both 30-mg and 100-mg tablets were required. These are valid concerns, particularly with regard to compliance. However, because phenytoin has nonlinear pharmacokinetics, a sizable minority of patients could not achieve steady-state phenytoin concentrations between 10 and 20 mg/L when only doses that were multiples of 100 mg were used. If more-complex regimens are not acceptable, there must be a willingness to accept some phenytoin concentrations outside the traditional therapeutic range, provided the clinical effects are acceptable.

Vancomycin. A vancomycin initiative was initially proposed by the Antibiotic Drug Use Subcommittee of the Pharmacy and Therapeutics Committee in response to articles citing a lack of evidence justifying therapeutic monitoring of vancomycin [7, 8].
The subcommittee did not propose to halt vancomycin monitoring, but rather to discourage it in patients with normal renal function. This proposal was supported by the pharmacy and the TDM laboratory. It was also suggested that measurements of vancomycin peaks were rarely indicated, given the lack of evidence linking vancomycin peaks with toxicity [7–9], as well as practical considerations based on the pharmacokinetics of vancomycin. The commonly recommended draw time for peaks fell during the distribution phase, when concentrations were changing rapidly and timing was critical. Past experience, however, suggested that incorrect draw times for vancomycin peaks were the rule rather than the exception. Moreover, clearance calculated from such peaks would include a component of distribution, resulting in overestimation of clearance and underestimation of half-life. When the distribution phase is not taken into account, vancomycin peak values may mislead more often than inform.
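The distortion introduced by distribution-phase "peaks" can be illustrated with a simple two-point half-life estimate. The concentrations and times below are invented for illustration, not data from the study.

```python
import math

def apparent_half_life(c1, t1, c2, t2):
    """Half-life (h) assuming simple first-order decline between two levels."""
    return math.log(2) * (t2 - t1) / math.log(c1 / c2)

# Invented levels for one vancomycin dosing interval (mg/L, hours post-dose):
trough_c, trough_t = 10.0, 12.0
t_true = apparent_half_life(30.0, 2.0, trough_c, trough_t)   # post-distribution peak
t_early = apparent_half_life(45.0, 0.5, trough_c, trough_t)  # distribution-phase draw

# The early draw yields a spuriously short half-life (and hence an overestimated
# clearance), because part of the decline reflects distribution, not elimination.
print(round(t_true, 1), round(t_early, 1))
```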

Because patient weight was the largest contributor to variability in the volume of distribution, and the peak target range was quite broad, administering a weight-adjusted vancomycin dose to a patient with an appropriate trough concentration could largely ensure a peak concentration within the target range. Accordingly, it was felt that almost all patients could be monitored with trough concentrations only. The recommendations to forgo vancomycin monitoring in patients with normal renal function and to forgo vancomycin peaks in almost all patients were presented at the Infectious Disease Service conference. Additional discussions were held with the Infectious Disease fellows, the people most likely to recommend use of vancomycin. The recommendations were also introduced elsewhere through pharmacy in-service presentations. Finally, a computer screen appeared whenever a vancomycin determination was ordered on-line. This screen briefly summarized the recommendations and further noted that requests for vancomycin peaks should be discussed with the laboratory medicine resident. This screen provided the only ongoing reminder of the recommendations. The initial response was quite impressive, with a decrease in the frequency of vancomycin requests of nearly 60% (Table 1). Almost no tests labeled as peak measurements were ordered, although some requests for peak values were submitted as random or trough measurements to circumvent the need to provide a rationale. Most of the reductions were maintained a full year later, with the only ongoing reinforcement being a computer screen that could be skipped unread in less than a second. This nearly subliminal input, coupled with a modest barrier requiring discussion before ordering a properly labeled peak, appeared to be enough to maintain the desired behaviors.
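The combined recommendation (weight-based dosing, monitoring only when renal function is impaired, troughs only) could be sketched as a small decision rule. The renal-function cutoff and the per-kilogram dose below are illustrative assumptions, not the actual values used at Yale.

```python
# Illustrative sketch; cutoff and dose are assumptions, not Yale's guidelines.
def vancomycin_monitoring_advice(creatinine_clearance_ml_min, weight_kg):
    """Suggest a dose and monitoring strategy for a patient starting vancomycin."""
    dose = round(15 * weight_kg / 250) * 250   # ~15 mg/kg, rounded to 250-mg steps
    if creatinine_clearance_ml_min >= 60:      # assumed "normal renal function"
        return (f"Weight-based dose ~{dose} mg; renal function normal: "
                "routine monitoring not recommended.")
    return (f"Weight-based dose ~{dose} mg; impaired renal function: "
            "monitor trough concentrations only (no peaks).")

print(vancomycin_monitoring_advice(90, 70))
print(vancomycin_monitoring_advice(30, 70))
```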
An apparent negative effect of the recommendations was a decrease in the percentage of peaks and troughs that were within the target ranges (random results had no defined target range and were not included). This probably resulted from concentrations no longer being obtained for patients with normal renal function, who should usually achieve in-range concentrations when receiving standard doses. Additionally, peak values were no longer being measured, and these too were more likely to be within their range. When only troughs were considered, the in-range percentage fell from 33% to 27%. The low percentage of measured values in the target range suggested substantial room for improvement in the dosing of vancomycin. The only guidance provided to improve dosing was the suggestion to use weight-based dosing, which proved difficult to implement: when weight-adjusted doses were ordered, they were often converted to standard doses by the pharmacy.

In these cost-conscious times, one may ask whether efforts to reduce unnecessary testing are cost-effective. The marginal costs (the costs of doing one more assay) for many laboratory tests are quite low, making savings difficult to achieve. The sustained reduction in vancomycin requests was 2500 specimens per year, yielding paper reductions in annual costs of $40 000. Because drug concentrations must be obtained at fairly precise times, a separate specimen is usually required, and the largest component of these paper savings was in the costs associated with obtaining the specimens. The only unequivocal savings was in the cost of the reagents, amounting to $1500 per year. Although this is modest, it exceeded the costs of the intervention; moreover, because the behavior change appears to be sustained, the savings will continue to accrue in subsequent years.

These efforts were probably successful for several reasons. First, they were evidence-based, and clinicians responded positively to evidence. Second, the entire process was considered, including drug dosing. Third, the recommendations were relatively modest and readily understandable. Finally, the efforts were multidisciplinary, involving clinicians both inside and outside the laboratory. Others have reported that new evidence and guidelines take surprisingly long to change practice [10, 11]. Strategies such as those used at Yale may help narrow that interval.
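The nonlinear pharmacokinetics underlying the phenytoin findings above can be made concrete with the Michaelis-Menten steady-state relation. The Vmax and Km values below are plausible adult estimates chosen for illustration, not patient data from the study.

```python
# Why integral multiples of 100 mg/day cannot always reach 10-20 mg/L:
# phenytoin elimination is saturable, so Css rises steeply as the dose
# rate approaches Vmax. Parameter values are illustrative assumptions.
def phenytoin_css(dose_mg_per_day, vmax_mg_per_day=425.0, km_mg_per_l=4.0):
    """Steady-state concentration from Css = Km * R / (Vmax - R)."""
    if dose_mg_per_day >= vmax_mg_per_day:
        raise ValueError("dose rate >= Vmax: no steady state is reached")
    return km_mg_per_l * dose_mg_per_day / (vmax_mg_per_day - dose_mg_per_day)

# For this hypothetical patient, no multiple of 100 mg/day lands in range:
print(phenytoin_css(300))   # 9.6 mg/L  (subtherapeutic)
print(phenytoin_css(400))   # 64.0 mg/L (potentially toxic)
print(phenytoin_css(340))   # 16.0 mg/L (a non-integral regimen succeeds)
```

The steep jump between 300 and 400 mg/day is exactly the behavior that made individualized, non-integral regimens necessary for a sizable minority of patients.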

TDM Programs

Several studies show that TDM programs can improve the probability that drug concentrations will be within the therapeutic range, that samples will be collected appropriately, and that results will be used appropriately [12–14]. Fewer data are available on the impact of TDM programs on patient outcomes [15], although some data suggest improvements [12, 14]. The major difficulty with implementing these programs has been that doing so is relatively labor-intensive. Some data also suggest that (as might be expected) if the programs are discontinued, their beneficial effects wane [14].

case study: implementing and optimizing a tdm program at the hospital for sick children, toronto

The Hospital for Sick Children is an outstanding Canadian children's hospital affiliated with the University of Toronto. In the early 1980s, attempts were made to introduce a TDM program that would provide: (a) improved documentation of time of sampling relative to dose; (b) centralization within the hospital of all drug-monitoring services; and (c) a TDM consultative service (TCS). The initial attempts to launch this program failed because of the costs. However, a crisis was about to occur, involving the unexpected deaths of a number of children in the cardiology ward, several of whom were found to have digoxin concentrations in the high toxic range. The resulting furor and 4 years of subsequent investigation set the scene for acceptance by the administration of a full-fledged TDM program. One of us (S.J.S.) participated in developing what was then, and remains, an outstanding TDM program.

At the onset, an important goal was to centralize the TDM services within the hospital. Much of the TDM was being performed in small splinter laboratories and, as a result, no one had the central control to ensure that: (a) the time of sampling relative to drug dosage was properly enforced; (b) the information to accompany the sample was provided on an appropriate requisition; (c) quality control met stringent guidelines; and (d) appropriate action was taken to follow up on measurements in the probably subtherapeutic and toxic ranges.

Pre- and postlaboratory considerations. For drug measurements to be useful, optimal timing is imperative. One approach for achieving this was to designate several individuals whose sole task was ensuring that blood samples were drawn at appropriate times relative to the drug dose. At this hospital, all of the technologists working in the TDM area were trained in blood collection to provide an important back-up service to the venipuncture team. Educational sessions with nurses and residents emphasized the critical relationship between time of sampling and dose administration. Another key strategy was a special TDM requisition form listing the patient's age, sex, weight, and height; the dose, time of last dose, and time of sampling; clinical status, especially with regard to renal, hepatic, and cardiac function; and other medications received by the patient. It was made clear that the requested analysis would not be performed unless this information was provided, because the information was essential if clinicians were to make educated adjustments to drug dose or dosing interval based on the analytical data generated by the laboratory.

Before the introduction of the new requisition and the educational sessions, investigation found that in 30% of cases the 2-h postdose theophylline concentration was lower than the predose concentration because of erroneous sampling times. Afterward, this dismal state of affairs was found in only 0.5% of cases.

The results of analysis must also be conveyed rapidly to the requesting physician. Ideally, an interpretive arm of the TDM service acts as a liaison between the laboratory and the clinicians to ensure that appropriate adjustments in the drug regimen occur. A key goal for the TCS was to provide this communication effectively. In the TCS, all drug measurements found to be above the currently accepted therapeutic range were immediately phoned both to the ward and to the pharmacy TDM group, who on occasion conferred with the Clinical Pharmacology Unit. The official report form included the drug concentration found, the desired therapeutic concentration range, and recommendations as to how the latter might be achieved.
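The theophylline timing problem described above lends itself to a simple automated plausibility check: a 2-h postdose level at or below the predose level suggests a mistimed draw. A minimal sketch, with hypothetical function and field names:

```python
# Illustrative check; names are hypothetical, not from the hospital's system.
def flag_suspect_sampling(predose_mg_l, postdose_2h_mg_l):
    """Return True when the timed levels are pharmacologically implausible."""
    return postdose_2h_mg_l <= predose_mg_l

print(flag_suspect_sampling(8.0, 14.0))   # plausible rise after the dose
print(flag_suspect_sampling(8.0, 6.5))    # 2-h "peak" below the trough: flag it
```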


Structure and functions of the TCS. The TCS began its active function on July 1, 1982. It comprised several teams, each including a fellow and a staff member from the Division of Clinical Pharmacology. The fellows were either Pharm.D. pharmacists or pediatricians doing postdoctoral training in pediatric clinical pharmacology. Each team served for a week, and 24-h coverage was provided. The fellow was paged for any result that was excessively high and that could be associated with clinical toxicity. Daily between 1500 and 1600, after the load of samples had been analyzed, the TCS fellow screened the results and listed those above the therapeutic range for the reported drugs. He or she then went to the wards, performed a chart review, and evaluated the patient's condition with regard to the efficacy or toxicity of the drug. After this evaluation, the involved clinicians were contacted and a note was written in the chart. In many instances, the discussion included an explanation of the relevant pharmacokinetic concepts, with recommendations for necessary steps (e.g., a change in dose or dosing interval). In some cases, a full pharmacokinetic workup was indicated, initiated, and carried out by the TCS team. Physicians could also contact the TCS team for advice on therapeutic issues.

Long-term efficacy of the TCS. To evaluate the efficacy of the service in improving drug dosing by avoiding toxic concentrations, the TDM laboratory data between July 1, 1982, and October 31, 1983, were reviewed. All events of excessive serum concentrations of the following drugs were recorded monthly: theophylline (>20 mg/L), phenytoin (>20 mg/L), digoxin (>2.5 µg/L), gentamicin and tobramycin (trough >2 mg/L, peak >10 mg/L), and amikacin (trough >10 mg/L, peak >30 mg/L).
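The daily TCS screen amounts to filtering the day's results against drug-specific limits. A minimal sketch using the thresholds listed above (mg/L throughout, except digoxin in µg/L); the data structures are illustrative, not the hospital's actual system.

```python
# Thresholds follow those listed in the text; structure is illustrative.
TOXIC_LIMITS = {
    "theophylline": 20.0,
    "phenytoin": 20.0,
    "digoxin": 2.5,            # ug/L
    "gentamicin_trough": 2.0,
    "gentamicin_peak": 10.0,
    "amikacin_trough": 10.0,
    "amikacin_peak": 30.0,
}

def screen_results(results):
    """results: list of (patient_id, test, value); return those above limits."""
    return [(pid, test, value) for pid, test, value in results
            if value > TOXIC_LIMITS.get(test, float("inf"))]

flagged = screen_results([("A", "phenytoin", 24.1),
                          ("B", "theophylline", 12.0),
                          ("C", "gentamicin_trough", 3.5)])
print(flagged)
```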
Because new residents began their training each year on July 1, it was assumed that the worst performance would occur during that month and would gradually improve as these physicians learned the various skills of pediatric practice. This putative improvement could not be attributed solely to the TCS, of course, because the residents presumably improved their skills in many other areas through various educational processes. During the second summer of the program (beginning July 1, 1983), one-third of the residents left the service and were replaced by newcomers. Nonetheless, two-thirds of the house officers had been exposed to the program for a whole year, and any efficacy of the TCS program would be reflected in their ability to temper the inexperience of the new residents.

The percentage of toxic concentrations for the monitored drugs clearly fell after implementation of the TCS in July 1982 (Table 2), although the performance in July 1982 and July 1983 was obviously worse than in the subsequent months (P < 0.05 for both). Of special interest was the deterioration in performance from June 1983 to July 1983, when the new residency year began and newcomers substituted for the trained residents. Comparison of the respective months (July 1982 vs July 1983, August 1982 vs August 1983, and September 1982 vs September 1983) showed significant improvement (P < 0.05 for each month), with progressively fewer toxic values recorded [13].

Why was the TCS successful? First, initial implementation was difficult and was supported by the administration only after a sense of urgency had been created by several serious adverse events. Second, the TCS was a true system for improving the use of TDM, extending from before the sample was drawn through appropriate use of the results; strategies focused primarily on the laboratory were unlikely to succeed. The prelaboratory portion required providers to give essential information, and the postlaboratory efforts involved one-on-one consultation with providers.

Computerization of Ordering and TDM


Nationally, few hospitals (<5%) currently have computerized physician order entry [16]. However, this tool is so powerful for changing ordering behavior that it is likely to become more widely available soon [17]; in a randomized controlled trial, this approach was demonstrated to substantially decrease costs and length of stay [18]. For TDM, its use will make possible a number of things that have been major problems. First, guided dose algorithms can be implemented for many drugs for which TDM is required, taking into account patient-specific factors such as age, gender, weight, interacting drugs, and creatinine. One analysis evaluating the percentage of adverse events that might be preventable through computerized interventions suggested that this was one of the highest-yield interventions [19]. These algorithms can also suggest monitoring of drugs at appropriate times. Second, they can give clinicians information about the last drug result and ask for the indication for the next measurement. Third, when the measurement is performed, they can require provision of needed information, such as time of draw in relation to last dose. Fourth, they can perform evaluations in the background, looking for trends in drug concentrations potentially suggesting that a dose needs to be increased or decreased [20], or that another measurement should be ordered because of time elapsed or a change in the patient's renal or hepatic function.

Table 2. Results from the TDM program at the Hospital for Sick Children: monthly percentage of toxic results for six drugs.

        Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sept  Oct   Nov   Dec
1982     –     –     –     –     –     –    6.4   3.5   4.6   3.1   5.2   2.7
1983    2.8   2.4   2.5   2.5   2.9   2.7   3.7   2.0   2.9   2.8    –     –

The service was implemented in July 1982. The abnormal results (as a percentage of the total number of sample results) fell over time, although there was a statistically significant increase in July 1983, when new interns came to the hospital. All comparisons between the same months in 1982 and 1983 were statistically significant (P < 0.05).
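The background evaluations described in the fourth point could be sketched as a simple rule check over a patient's recent data. The trend criterion and the seven-day interval below are illustrative assumptions, not part of any deployed system.

```python
# Illustrative background rule; thresholds are assumptions for the sketch.
def background_tdm_check(levels, days_since_last, creatinine_rising):
    """levels: chronological drug concentrations (mg/L). Return suggestions."""
    suggestions = []
    if len(levels) >= 3 and levels[-1] > levels[-2] > levels[-3]:
        suggestions.append("Concentrations trending upward: consider dose review.")
    if days_since_last > 7 or creatinine_rising:
        suggestions.append("Consider ordering a new level.")
    return suggestions

print(background_tdm_check([8.0, 11.0, 15.0], 3, False))
print(background_tdm_check([8.0, 7.0, 6.0], 10, False))
```

In practice such rules would run periodically against the laboratory database, with the suggestions surfaced to clinicians through the order-entry system.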

case study: computerized order entry and tdm at brigham and women's hospital

Several studies conducted at Brigham and Women's Hospital suggested that TDM was suboptimal there; its use is now being improved. In one study, development of evidence-based appropriateness criteria for the use of antiepileptic drug concentrations demonstrated that only 27% of values had an appropriate indication. Of these, only 51% were sampled correctly, yielding an overall appropriateness rate of 14% [1]. A similar approach applied to inpatient digoxin results revealed that only 14% were appropriate [2].

Currently, several approaches to improving the use of TDM are being taken. Although guided dosing algorithms have not yet been fully implemented, they are being developed for a number of drugs. Evaluation of redundant reminders and structured ordering for improving the use of TDM has begun. Antiepileptic drug tests are now ordered through computerized order entry. These requests are first checked to determine whether another sample for assay was drawn within a drug-specific interval and, if so, a redundant reminder is displayed. If not, the assays are ordered through a structured ordering approach, in which the clinician is asked to describe the indication for ordering the measurement. The effectiveness of this approach is currently being evaluated; initial evaluations suggest that the impact has been modest.

Clinicians are not yet required to provide all needed information with TDM samples because no direct electronic interface between order entry and the laboratory has been established. This interface will be implemented soon and will facilitate requiring these data, because all samples will be barcoded; generation of the barcode will require an interaction with the information system, at which time the necessary information can be obtained. Systems to look for patients who need additional TDM have also not yet been implemented, but an "event engine" (an application that sits over a database, looks for events of interest, and can make suggestions to clinicians based on rules) has been built. Such rules will be implemented for a variety of drugs.

Success to date of these efforts has been modest, for several reasons. Obviously, many of the efforts with the greatest potential have not yet been implemented. For antiepileptic drug concentrations, most of the intervention was carried out by computer, and many clinicians may not have been convinced that the criteria were appropriate. In contrast to the Yale approach, the guidelines were not integrated with changes in dosing, which has also probably made them less effective. Closer integration with dosing is planned for the near future.

Conclusions

A key issue for TDM in general is that few data are available showing that TDM improves patient outcomes [15]; more are badly needed. An even more difficult challenge is to show that TDM is cost-effective. Future studies should address both effectiveness and costs, especially given the national focus on cost reduction. All the approaches discussed for changing physician behavior (traditional education, formal TDM services, multidisciplinary quality-improvement efforts, and computerized approaches) can improve the use of TDM and will continue to have a role. In hospitals that have relatively little computerization, great improvements are possible, particularly through TDM services and multidisciplinary teams. Computerization of records will offer many tools that were not previously available and has the advantages of being immediately generalizable to all providers and patients, being relatively inexpensive, and not waning in effect over time. Nonetheless, its best use is still being evaluated, and much more work is needed. Successful interventions are most likely to consider the key stages of the system: dosing the drugs involved, obtaining the samples, measuring the concentrations, and using the results appropriately.

References
1. Schoenenberger RA, Tanasijevic MJ, Jha A, Bates DW. Appropriateness of antiepileptic drug level monitoring. JAMA 1995;274:1622–6.
2. Canas F, Tanasijevic M, MaLuf N, Bates DW. Evaluating the appropriateness of digoxin level monitoring [Abstract]. J Gen Intern Med 1997;12:66.
3. Guyatt GH, Rennie D. Users' guides to the medical literature [Editorial]. JAMA 1993;270:2096–7.
4. Soumerai SB, Avorn J. Efficacy and cost-containment in hospital pharmacotherapy: state of the art and future directions. Milbank Mem Fund Q Health Soc 1984;62:447–74.
5. Harpole LH, Khorasani R, Kuperman G, Fiskio J, Bates DW. On-line counterdetailing: does it change physician behavior when ordering abdominal radiographs? [Abstract]. J Gen Intern Med 1996;11:74.
6. Pearson SD, Goulart-Fisher D, Lee TH. Critical pathways as a strategy for improving care: problems and potential. Ann Intern Med 1995;123:941–8.
7. Freeman CD, Quintiliani R, Nightingale CH. Vancomycin therapeutic drug monitoring: is it necessary? Ann Pharmacother 1993;27:594–8.
8. Cantu TG, Yamanaka-Yuen NA, Lietman PS. Serum vancomycin concentrations: reappraisal of their clinical value. Clin Infect Dis 1994;18:533–43.
9. Saunders NJ. Why monitor peak vancomycin concentrations? Lancet 1994;344:1748–50.
10. Lomas J, Sisk JE, Stocking B. From evidence to practice in the United States, the United Kingdom, and Canada. Milbank Q 1993;71:405–10.
11. Lomas J, Anderson GM, Domnick-Pierre K, Vayda E, Enkin MW, Hannah WJ. Do practice guidelines guide practice? The effect of a consensus statement on the practice of physicians. N Engl J Med 1989;321:1306–11.


12. Ried LD, McKenna DA, Horn JR. Meta-analysis of research on the effect of clinical pharmacokinetics services on therapeutic drug monitoring. Am J Hosp Pharm 1989;46:945–51.
13. Koren G, Soldin SJ, MacLeod SM. Organization and efficacy of a therapeutic drug monitoring consultation service in a pediatric hospital. Ther Drug Monit 1985;7:295–8.
14. Wing D, Duff HJ. Impact of a therapeutic drug monitoring program for digoxin. Arch Intern Med 1987;147:1405–8.
15. Tonkin AL, Bochner F. Therapeutic drug monitoring and outcome. Clin Pharmacokinet 1994;27:169–74.
16. Sittig DF, Stead WW. Computer-based physician order entry: the state of the art. J Am Med Inform Assoc 1994;1:108–23.
17. Bates DW, Kuperman G, Teich JM. Computerized physician order entry and quality of care. Qual Manag Health Care 1994;2:18–27.
18. Tierney WM, Miller ME, Overhage JM, McDonald CJ. Physician inpatient order writing on microcomputer workstations. Effects on resource utilization. JAMA 1993;269:379–83.
19. Bates DW, O'Neil AC, Boyle D, Teich J, Chertow GM, Komaroff AL, Brennan TA. Potential identifiability and preventability of adverse events using information systems. J Am Med Inform Assoc 1994;1:404–11.
20. Harrison JH Jr, Rainey PM. Identification of patients for pharmacologic review by computer analysis of clinical laboratory drug concentration data. Am J Clin Pathol 1995;103:710–7.
