
Christie 2019

SOURCE OF DATA
Source of data (e.g., cohort, case-control, randomized trial participants, or registry data): Cohort study database (reported on p. 1494) - the Activation of Coagulation and Inflammation in Trauma study. Severely injured trauma patients from emergency department admission through the first 28 days of hospitalization or death.

PARTICIPANTS
Participant eligibility and recruitment method (e.g., consecutive participants, location, number of centers, setting, inclusion and exclusion criteria): Exclusion criteria included patient age less than 15 years, pregnancy, incarceration, and transfer from an outside hospital.
Participant description: As above.


Details of treatments received, if relevant: Fluid resuscitation; no specific treatment being investigated.
Study dates: February 2005 to April 2015.

OUTCOME(S) TO BE PREDICTED
Definition and method for measurement of outcome: Mortality, transfusion, massive transfusion, VTE, multi-organ failure, ARDS.
Was the same outcome definition (and method for measurement) used in all patients?: Yes.
Type of outcome (e.g., single or combined endpoints): Single outcome point (binary).
Was the outcome assessed without knowledge of the candidate predictors (i.e., blinded)?: No blinding.
Were candidate predictors part of the outcome (e.g., in panel or consensus diagnosis)?: No.
Time of outcome occurrence or summary of duration of follow-up: Clinical, laboratory, and interval outcomes data were collected at admission and at 2, 3, 4, 6, 12, 24, 48, 72, and 96 hours.

CANDIDATE PREDICTORS (OR INDEX TESTS)
Number and type of predictors (e.g., demographics, patient history, physical examination, additional testing, disease characteristics): Patient demographics, past medical history, substance use, and injury characteristics; physiologic factors (vital signs, laboratory monitoring including coagulation and inflammation markers, ventilator parameters); input/output data; and all fluid, colloid, blood product, and medication administration.
Definition and method for measurement of candidate predictors: N/A.
Timing of predictor measurement (e.g., at patient presentation, at diagnosis, at treatment initiation): 2, 3, 4, 6, 12, 24, 48, 72, 96, and 120 hours after injury.
Were predictors assessed blinded for outcome, and for each other (if relevant)?: No.
Handling of predictors in the modelling (e.g., continuous, linear, non-linear transformations, or categorised): Ensemble machine learning algorithm: logistic and linear regression, generalized additive models with various levels of smoothing, random forest, lasso, and systems based on sieves of parametric models (e.g., polyclass).

SAMPLE SIZE
Number of participants and number of outcomes/events: -
Number of outcomes/events in relation to the number of candidate predictors (Events Per Variable): -

MISSING DATA
Number of participants with any missing value (include predictors and outcomes): Not given.
Number of participants with missing data for each predictor: Not given.
Handling of missing data (e.g., complete-case analysis, imputation, or other methods): Data with missing outcomes were dropped from the analysis. To evaluate the role of missing data, all nominal variables were first converted to the appropriate dummy variables. Then a new set of basis functions was created for every variable with any missing observations: for a variable X, an indicator Δ that is 1 if X is observed and 0 otherwise, and a variable Δ*X that equals the actual observed value when X is observed and 0 otherwise.
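The indicator-augmentation scheme described above can be sketched as follows. This is a minimal illustration with pandas, not the study's code; the column name is hypothetical.

```python
import numpy as np
import pandas as pd

# Toy data with one missing lab value (column name is illustrative only).
df = pd.DataFrame({"lactate": [2.1, np.nan, 4.3]})

# Delta: 1 if the value is observed, 0 otherwise.
df["lactate_obs"] = df["lactate"].notna().astype(int)

# Delta * X: the observed value where present, 0 otherwise.
df["lactate_filled"] = df["lactate"].fillna(0.0)

print(df)
```

Both derived columns are then passed to the learners in place of the original variable, so the model can use observedness itself as a signal.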

MODEL DEVELOPMENT
Modelling method (e.g., logistic, survival, neural network, or machine learning techniques): Ensemble learning method.
Modelling assumptions satisfied: -
Method for selection of predictors for inclusion in multivariable modelling (e.g., all candidate predictors, pre-selection based on unadjusted association with the outcome): -
Method for selection of predictors during multivariable modelling (e.g., full model approach, backward or forward selection) and criteria used (e.g., p-value, Akaike Information Criterion): -
Shrinkage of predictor weights or regression coefficients (e.g., no shrinkage, uniform shrinkage, penalized estimation): Lasso regression was used.
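The stacked-ensemble idea behind this kind of modelling can be sketched in scikit-learn. The study itself used the R SuperLearner framework; the learners, data, and settings below are illustrative stand-ins, not the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic binary-outcome data standing in for the trauma cohort.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# A small library of base learners, echoing the mix described above
# (regression, random forest, an L1-penalized model).
base_learners = [
    ("logit", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("lasso", LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
]

# The meta-learner weights cross-validated base-learner predictions.
ensemble = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(), cv=5)
ensemble.fit(X, y)
print(round(ensemble.score(X, y), 2))
```

The key design point is that the meta-learner is fit on out-of-fold predictions, which is what lets the ensemble weight its components without overfitting to any single learner.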
MODEL PERFORMANCE
Calibration (calibration plot, calibration slope, Hosmer-Lemeshow test) and discrimination (C-statistic, D-statistic, log-rank) measures with confidence intervals: The area under the curve (AUC) of a receiver operating characteristic (ROC) curve was used to select learner combinations.
Classification measures (e.g., sensitivity, specificity, predictive values, net reclassification improvement) and whether a-priori cut points were used: -
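Selecting learner combinations by cross-validated AUC, as described above, can be sketched as follows. The candidates and data are illustrative only, not the study's learner library.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Score each candidate learner by 5-fold cross-validated ROC AUC
# and keep the best performer.
candidates = {
    "logit": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=50, random_state=0),
}
aucs = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
        for name, m in candidates.items()}
best = max(aucs, key=aucs.get)
print(best, round(aucs[best], 3))
```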
MODEL EVALUATION
Method used for testing model performance: development dataset only (random split of data, resampling methods, e.g., bootstrap or cross-validation, none) or separate external validation (e.g., temporal, geographical, different setting, different investigators): Cross-validation. For two high-performance outcomes, a variable importance measure available in the ensemble learning method random forest was applied.
In case of poor validation, whether model was adjusted or updated (e.g., intercept recalibrated, predictor effects adjusted, or new predictors added): As above.
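The random-forest variable importance measure mentioned above can be sketched as follows; this is a generic impurity-based importance from scikit-learn, on synthetic data, not the study's computation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based importances are normalized to sum to 1; a higher value
# means the feature contributed more to the forest's splits.
for i, imp in enumerate(rf.feature_importances_):
    print(f"feature_{i}: {imp:.3f}")
```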
RESULTS
Final and other multivariable models (e.g., basic, extended, simplified) presented, including predictor weights or regression coefficients, intercept, baseline survival, model performance measures (with standard errors or confidence intervals): -
Any alternative presentation of the final prediction models, e.g., sum score, nomogram, score chart, predictions for specific risk subgroups with performance: -
Comparison of the distribution of predictors (including missing data) for development and validation datasets: Non-random missingness was found repeatedly to contribute substantially to prediction but was not a top variable.

INTERPRETATION AND DISCUSSION
Interpretation of presented models (confirmatory, i.e., model useful for practice versus exploratory, i.e., more research needed): SuperLearner was able to establish near-perfect discrimination for death and MOF across the post-injury timecourse using large uncurated sets of potential predictors. SuperLearner fits demonstrated excellent cross-validated prediction of death (overall AUC 0.94–0.97), multi-organ failure (overall AUC 0.84–0.90), and transfusion (overall AUC 0.87–0.9) across multiple post-injury time points, and good prediction of Acute Respiratory Distress Syndrome (overall AUC 0.84–0.89) and venous thromboembolism (overall AUC 0.73–0.83).
Comparison with other studies, discussion of generalizability, strengths and limitations: Prior studies have demonstrated that algorithms such as SuperLearner are capable of generating superior discrimination of trauma death compared to conventional statistical approaches.

Outcomes with inferior data quality included coagulopathic trajectory (AUC 0.48–0.88).
