
Editorials

and mean GCS (propensity matching not performed). Multivariable regression determined that β-blockers were associated with lower mortality (adjusted odds ratio, 0.35; p < 0.001), and propranolol was the superior agent (adjusted odds ratio, 0.51; p = 0.010). A Cox regression model using a time-dependent variable confirmed the mortality benefit.
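The time-dependent modeling is worth a brief illustration. Below is a minimal sketch, using the Python lifelines library on simulated data, of how an exposure that starts partway through an ICU stay can enter a Cox model as a time-varying covariate; the variable names, cohort, and effect size are invented for illustration and are not taken from the study under discussion.

```python
# Sketch: Cox regression with a time-varying beta-blocker exposure (lifelines).
# Simulated long-format data; not the dataset or effect size from the cited study.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for pid in range(300):
    start_bb = rng.integers(1, 6)                      # day beta-blockade starts (if ever)
    on_bb = rng.random() < 0.5                         # half the simulated cohort is exposed
    total = rng.integers(2, 15)                        # days of observation
    died = rng.random() < (0.25 if on_bb else 0.40)    # built-in protective effect
    if on_bb and start_bb < total:
        rows.append((pid, 0, start_bb, 0, 0))          # unexposed interval, no event
        rows.append((pid, start_bb, total, 1, int(died)))  # exposed interval, event at end
    else:
        rows.append((pid, 0, total, 0, int(died)))

intervals = pd.DataFrame(rows, columns=["id", "start", "stop", "beta_blocker", "death"])

ctv = CoxTimeVaryingFitter()
ctv.fit(intervals, id_col="id", event_col="death", start_col="start", stop_col="stop")
ctv.print_summary()   # hazard ratio < 1 for beta_blocker reflects the simulated benefit
```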
There are several questions that still need to be addressed in the design of such a trial. Besides decreasing HR, β-adrenergic blockade can also decrease cardiac contractility, alter metabolism, lead to decreased arterial resistance, and increase venous vascular resistance (12). These effects could be deleterious to a significant subgroup of patients with TBI. How would the treatment be monitored and what surrogates (for efficacy and safety) are appropriate? HR? Blood pressure? RPP? Serum catecholamines? Echocardiographic markers? Would there be monitoring of advanced hemodynamics and/or multimodality neuromonitoring? Which agent(s) should be tested and at what dosing? This could be a highly anticipated trial due to the relative procedural simplicity of the intervention, in conjunction with biologic plausibility, robust observational data, and low cost; as Derek Parfit said, “it is not irrational to have high hopes.”
REFERENCES
1. Maas AIR, Menon DK, Adelson PD, et al; InTBIR Participants and Investigators: Traumatic brain injury: Integrated approaches to improve prevention, clinical care, and research. Lancet Neurol 2017; 16:987–1048
2. Lazaridis C, Robertson CS: The role of multimodal invasive monitoring in acute traumatic brain injury. Neurosurg Clin N Am 2016; 27:509–517
3. Chesnut RM, Marshall LF, Klauber MR, et al: The role of secondary brain injury in determining outcome from severe head injury. J Trauma 1993; 34:216–222
4. Carney N, Totten AM, O’Reilly C, et al: Guidelines for the management of severe traumatic brain injury, fourth edition. Neurosurgery 2017; 80:6–15
5. Simard JM, Bellefleur M: Systemic arterial hypertension in head trauma. Am J Cardiol 1989; 63:32C–35C
6. Meyfroidt G, Baguley IJ, Menon DK: Paroxysmal sympathetic hyperactivity: The storm after acute brain injury. Lancet Neurol 2017; 16:721–729
7. Krishnamoorthy V, Vavilala MS, Chaikittisilpa N, et al: Association of early myocardial workload and mortality following severe traumatic brain injury. Crit Care Med 2018; 46:965–971
8. Barmparas G, Liou DZ, Lamb AW, et al: Prehospital hypertension is predictive of traumatic brain injury and is associated with higher mortality. J Trauma Acute Care Surg 2014; 77:592–598
9. Alali AS, Mukherjee K, McCredie VA, et al: Beta-blockers and traumatic brain injury: A systematic review, meta-analysis, and Eastern Association for the Surgery of Trauma guideline. Ann Surg 2017; 266:952–961
10. Grände PO: Critical evaluation of the Lund concept for treatment of severe traumatic head injury, 25 years after its introduction. Front Neurol 2017; 8:315
11. Ley EJ, Leonard SD, Barmparas G, et al; Beta Blockers TBI Study Group Collaborators: Beta blockers in critically ill patients with traumatic brain injury: Results from a multicenter, prospective, observational American Association for the Surgery of Trauma study. J Trauma Acute Care Surg 2018; 84:234–244
12. Magder SA: The ups and downs of heart rate. Crit Care Med 2012; 40:239–245

Mortality Prediction Gets a “Boost”*


David M. Maslove, MD, MS, FRCPC
Department of Critical Care Medicine
Queen’s University and Kingston Health Sciences Centre
Kingston, ON, Canada

*See also p. e481.
Key Words: critical care; machine learning; medical informatics; predictive value of tests; severity of illness index
Dr. Maslove has disclosed that he does not have any potential conflicts of interest.
Copyright © 2018 by the Society of Critical Care Medicine and Wolters Kluwer Health, Inc. All Rights Reserved.
DOI: 10.1097/CCM.0000000000003037

The last few decades have seen the creation and sequential refinement of a host of ICU mortality prediction models. Familiar models such as the Acute Physiology and Chronic Health Evaluation, Simplified Acute Physiology Score, and Mortality Probability Model use common clinical features and regression analyses to predict ICU outcomes, with the latest iteration of each of these displaying excellent overall performance (1). Other composite scores, such as the Sequential Organ Failure Assessment, can also be used to stratify patients by mortality risk (2, 3), and even some simple univariate models have been shown to be predictive of ICU mortality (4). All of which begs the question: In 2018, what remains to be done in the field of ICU mortality prediction?

Enter the study by Delahanty et al (5), published in this issue of Critical Care Medicine, which describes the development of the Risk of Inpatient Death (RIPD) score, a 17-feature model showing best-in-class performance in predicting in-hospital mortality among patients admitted to the ICU. The RIPD score was developed from a large dataset of more than 237,000 patients cared for in a diversity of ICUs within a 53-hospital network. Like its predecessors, this new construct relies on an assembly of patient indicators to generate a quantitative prediction. But the RIPD score differs from earlier models in important ways, providing a glimpse into where ICU mortality prediction may be heading in the era of big data analytics.

Four notable innovations show how the RIPD score uses data science to modernize mortality prediction. First, the use of machine learning algorithms enabled the authors to evaluate a much larger number of clinical and administrative features for inclusion in their final model. Whereas previous scores have looked only at a small collection of hand-curated elements—including vital sign measurements, laboratory values, and diagnostic details—the algorithm used by the RIPD developers was able to sift through a much larger selection of potentially predictive features. These included not only conventional clinical features but also novel constructs, such as the mean, median, or most recent value of a given indicator or even a change in that indicator’s value over time. The capacity to select from a larger pool of features opens the door to “engineered features” or “meta-features” that may show predictive utility beyond any obvious clinical utility.
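To make the notion of engineered features concrete, the sketch below derives a few such per-encounter summaries from time-stamped heart rate measurements with pandas; the column names and values are hypothetical and are not the RIPD feature set.

```python
# Sketch: deriving "engineered" summary features from time-stamped vitals with pandas.
# Column names and values are illustrative only, not the RIPD model's actual inputs.
import pandas as pd

vitals = pd.DataFrame({
    "encounter_id": [1, 1, 1, 2, 2],
    "charttime": pd.to_datetime(
        ["2018-01-01 08:00", "2018-01-01 14:00", "2018-01-02 08:00",
         "2018-01-03 09:00", "2018-01-03 21:00"]),
    "heart_rate": [88, 112, 96, 74, 81],
})

grouped = vitals.sort_values("charttime").groupby("encounter_id")["heart_rate"]
features = pd.DataFrame({
    "hr_mean":   grouped.mean(),
    "hr_median": grouped.median(),
    "hr_last":   grouped.last(),                    # most recent value
    "hr_delta":  grouped.last() - grouped.first(),  # change over the observation window
})
print(features)
```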


Second, while conventional mortality prediction models rely on regression analysis, the RIPD score was derived using a machine learning technique known as gradient boosting. This form of “ensemble learning” relies on combining a large number of simple prediction models to generate a single, more accurate model. Ensemble learning has garnered significant attention among data scientists for the striking prediction accuracy it has achieved in a variety of settings. The XGBoost algorithm used in developing the RIPD score is a form of gradient boosting that has been deployed to great effect in a number of machine learning competitions. Gradient boosting shows promise for use with big biomedical data (6, 7) and has recently been used to derive ICU mortality prediction models from the popular Medical Information Mart for Intensive Care database as well (8).
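As a rough sketch of how such a model is fit, the example below trains a gradient-boosted mortality classifier on synthetic data using XGBoost's scikit-learn interface; the hyperparameters and features are arbitrary and do not reflect the authors' actual pipeline.

```python
# Sketch: a gradient-boosted in-hospital mortality classifier via XGBoost.
# Synthetic data and arbitrary hyperparameters; not the RIPD developers' pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in for a matrix of clinical and administrative features (10% positive class).
X, y = make_classification(n_samples=5000, n_features=40, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Each boosting round adds a small tree that corrects the ensemble's current errors.
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

print(model.predict_proba(X_test[:5])[:, 1])  # predicted probability of death per patient
```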
Third is the RIPD score’s emphasis on usability. While some existing scores can be calculated automatically, the viability of this approach may be constrained by the cost of acquiring software or of ready access to electronic medical record (EMR) data. Manual approaches require trained personnel abstracting data from other sources, which may also incur substantial expense and opportunity costs. Ostensibly, the RIPD score is open source, which will increase its accessibility for some users. It also makes use of data that could be recovered from some EMR systems. Exceptions to this are the All Patient Refined Diagnosis Related Group (APR-DRG) elements that were the first and third most influential features in the model. The reliance on these administrative features undercuts the RIPD’s usability, as APR-DRG elements might also require software licenses and access to accurate postdischarge diagnostic coding. Acknowledging this potential limitation, the authors also developed the RIPD_reduced, a version of the model that functions without APR-DRG codes and that nonetheless maintains excellent performance.
Fourth, the RIPD achieves notable improvements over previous mortality predictors in terms of discrimination—how well the model distinguishes patients who survive from those who die. Breaking a potentially important (albeit arbitrary) barrier, the model surpasses an area under the receiver operating characteristic curve of 0.9. This is a notable milestone because even incremental gains are hard to come by at higher levels of model performance, a phenomenon evidenced by the Netflix Prize (9), in which the video streaming service awarded $1 million to a team that was able to improve the predictive performance of their algorithm by a mere 10%.
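For readers less familiar with the metric, the toy example below shows what an area under the receiver operating characteristic curve just above 0.9 looks like in practice; the predictions are invented and are not RIPD output.

```python
# Sketch: discrimination as AUROC -- the probability that a randomly chosen
# non-survivor is assigned a higher predicted risk than a randomly chosen survivor.
# Toy numbers only; not output from the RIPD model.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]          # 1 = died in hospital
y_risk = [0.05, 0.10, 0.20, 0.15, 0.30, 0.55,    # predicted risk for each patient
          0.40, 0.35, 0.85, 0.90]

print(roc_auc_score(y_true, y_risk))  # about 0.917: just over the 0.9 barrier
```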
Further comparisons of the RIPD with its predecessor scores may prove challenging. Previous mortality prediction models have used the Hosmer-Lemeshow test to assess model calibration—how well the model’s predictions compare with actual outcomes across the entire range of mortality risk (10). But, because this test may be overly pessimistic with larger cohorts (11), the authors used a modified Brier score for this purpose, making direct comparison with previous models difficult. They did, however, make assessing calibration easier and more intuitive with another innovative feature, an online data exploration tool that allows readers to evaluate model performance on customizable subsets of the overall cohort (http://shiny.cac.queensu.ca/CritCareMed/RIPD/). Like the online visualization tool published alongside another recent assessment of mortality prediction models by Badawi et al (12), this platform allows the reader to parse the RIPD dataset based on diagnosis, length of stay, and other variables, and directly assess model performance for a more homogeneous group of patients at a similar level of mortality risk.
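The standard, unmodified Brier score is simply the mean squared difference between predicted risk and the observed 0/1 outcome, as in the small sketch below; the authors' modified version is not reproduced here.

```python
# Sketch: the standard Brier score = mean squared difference between predicted
# risk and the 0/1 outcome (lower is better). Toy numbers only; the authors'
# modified Brier score is not reproduced here.
from sklearn.metrics import brier_score_loss

y_true = [0, 0, 0, 1, 1]                 # observed in-hospital mortality
y_risk = [0.10, 0.25, 0.05, 0.80, 0.60]  # model's predicted risk

# (0.10^2 + 0.25^2 + 0.05^2 + 0.20^2 + 0.40^2) / 5 = 0.055
print(brier_score_loss(y_true, y_risk))
```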
There are a few limitations to consider. The RIPD score still requires validation by different groups using different datasets, and although an increase in usability is touted as one of the main strengths of the RIPD score, it is possible that greater ease of use is on the horizon for legacy ICU mortality predictors as well, simply by virtue of the wider availability of EMR data. The RIPD might also face barriers to deployment in resource-limited settings or in ICUs that lack robust EMR systems. The RIPD score does, however, illustrate how the advances in data science yielding better predictions in other industries might be brought to bear on the ever-expanding troves of clinical and administrative data that characterize modern ICU care. Integrating these tools with existing health information systems stands to increase the uptake of benchmarking in critical care and make it easier for researchers and administrators to compare ICU cohorts.

REFERENCES
1. Vincent JL, Moreno R: Clinical review: Scoring systems in the critically ill. Crit Care 2010; 14:207
2. Timsit JF, Fosse JP, Troché G, et al; OUTCOMEREA Study Group, France: Calibration and discrimination by daily Logistic Organ Dysfunction scoring comparatively with daily Sequential Organ Failure Assessment scoring for predicting hospital mortality in critically ill patients. Crit Care Med 2002; 30:2003–2013
3. Pettilä V, Pettilä M, Sarna S, et al: Comparison of multiple organ dysfunction scores in the prediction of hospital mortality in the critically ill. Crit Care Med 2002; 30:1705–1711
4. Bazick HS, Chang D, Mahadevappa K, et al: Red cell distribution width and all-cause mortality in critically ill patients. Crit Care Med 2011; 39:1913–1921
5. Delahanty RJ, Kaufman D, Jones SS: Development and evaluation of an automated machine learning algorithm for in-hospital mortality risk adjustment among critical care patients. Crit Care Med 2018; 46:e481–e488
6. Torlay L, Perrone-Bertolotti M, Thomas E, et al: Machine learning-XGBoost analysis of language networks to classify patients with epilepsy. Brain Inform 2017; 4:159–169
7. Cobb AN, Daungjaiboon W, Brownlee SA, et al: Seeing the forest beyond the trees: Predicting survival in burn patients with machine learning. Am J Surg 2017 Nov 7. [Epub ahead of print]
8. Awad A, Bader-El-Den M, McNicholas J, et al: Early hospital mortality prediction of intensive care unit patients using an ensemble learning approach. Int J Med Inform 2017; 108:185–195
9. Netflix Prize. Available at: https://en.wikipedia.org/wiki/Netflix_Prize. Accessed January 16, 2018
10. Keegan MT, Gajic O, Afessa B: Severity of illness scoring systems in the intensive care unit. Crit Care Med 2011; 39:163–169
11. Kramer AA, Zimmerman JE: Assessing the calibration of mortality benchmarks in critical care: The Hosmer-Lemeshow test revisited. Crit Care Med 2007; 35:2052–2056
12. Badawi O, Liu X, Hassan E, et al: Evaluation of ICU risk models adapted for use as continuous markers of severity of illness throughout the ICU stay [Abstract 14]. Crit Care Med 2018; 46(Suppl 1):S7
