
THE USE OF INCIDENT AND PREVALENT-USER DESIGNS IN PHARMACOEPIDEMIOLOGY: A SYSTEMATIC REVIEW OF THE LITERATURE

PROTOCOL

Kim Luijken
Department of Clinical Epidemiology
Leiden University Medical Center
Leiden, 2333 ZA, the Netherlands
k.luijken@lumc.nl

Judith J. Spekreijse
Department of Pulmonary Diseases
St. Antonius Hospital
Nieuwegein, the Netherlands

Maarten van Smeden


Department of Clinical Epidemiology
Leiden University Medical Center
Leiden, the Netherlands

Helga Gardarsdottir
Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences,
Utrecht, the Netherlands
Department of Clinical Pharmacy, Division Laboratories,
Pharmacy and Biomedical Genetics, University Medical Center Utrecht, Utrecht, the Netherlands
Faculty of Pharmaceutical Sciences, University of Iceland, Reykjavik, Iceland

Rolf H. H. Groenwold
Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, the Netherlands
Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, the Netherlands

March 6, 2020

Version 3

Based on feedback from the science committee at the Department of Clinical Epidemiology at LUMC, the following changes were made compared to Version 2:

• Changed scoring options of item "Funding" to: "Pharmaceutical/ Non-pharmaceutical".


• Changed item "Incident/Prevalent design equal for treatment-of-interest and comparator arm" into: "Compara-
tor arm incident user or prevalent user"; scoring options "Incident/ Prevalent/ Unclear/ Irrelevant".
• Changed item "Causal contrast during follow-up explicitly stated" into: "Contrasted treatment groups explicitly
stated; A comparative effectiveness or safety study contrasts the effect of different treatment strategies on
a health outcome. When the article explicitly reports which treatment groups are compared in the index
analysis*, this item is scored ’yes’".
• Changed item "Covariate assessment window prior to start of follow-up" into: "Covariate assessment window
prior to start of treatment".

Version 2

• Uploaded correct Figure 1.



ABSTRACT
Guidelines for observational comparative effectiveness and drug safety research recommend implementing an incident-user design whenever possible, since it reduces the risk of selection bias in exposure
effect estimation compared to a prevalent-user design. Despite its theoretical appeal, implementation
of the incident-user design is not always feasible in practice. We aim to systematically review the literature about contemporary observational comparative effectiveness and safety studies to assess how frequently incident-user and prevalent-user designs are being implemented and to what extent considerations about the appropriateness of the design are reported. To provide an assessment of current practice in pharmacoepidemiological research, we will systematically review observational comparative effectiveness and drug safety studies published in six high-ranking pharmacoepidemiology
journals. Information will be extracted about the implemented design (incident versus prevalent-user)
and the reporting on the definition of time zero in the study, operationalized as alignment of the
moment of eligibility, treatment initiation, and start of follow-up. All articles will be screened for
eligibility in duplicate, and information from the included articles will be extracted independently by
two reviewers.

Keywords Incident-user design · Prevalent-user design · Observational research · Selection bias · Pharmacoepidemiology

1 Introduction
Health care decisions such as initiation of treatment are routinely supported by scientific evidence of the treatment
effect on a variety of health outcomes. Such evidence ideally stems from both randomized and non-randomized studies
[1, 2]. Whereas randomized controlled trials (RCTs) are the gold standard for making causal claims about the effects of
medicines, they may be limited in generalizability. Non-randomized or observational studies can provide information
about the safety and efficacy of medicines in daily clinical use once a medicine has received approval from regulatory
agencies [3]. However, observational studies are generally more susceptible to confounding and other sources of bias
affecting the treatment effect estimator. One potential source of bias is that treatment may be initiated prior to study
follow-up, meaning that some participants have been exposed to the treatment prior to being enrolled in the study.
Including such prevalent users of treatment can bias estimates of the treatment effect as a consequence of various
selection mechanisms [4, 5, 6, 7, 8].
An example of this can be found in an observational study into the effect of one (newer) type of oral contraceptive (OC)
that showed an apparent increased risk of venous thromboembolism, compared to other types of oral contraceptives.
This was a spurious finding, however, as users of the newer OC had started treatment more recently than users of older
agents and the risk of venous thromboembolism is greatest early in the course of treatment [9, 10]. The risk of venous
thromboembolism from using the older OCs was masked because early events were not observed in this treatment arm.
This form of selection bias can be prevented by observing all participants from their first use of OCs, also referred to as
implementing an incident-user design.
In pharmacoepidemiology, the incident-user design is currently considered the standard for studying the effectiveness and safety of treatments. In the incident-user design, follow-up time starts with the first prescription of the treatment of interest, while in a prevalent-user design both current and new users of a drug are included. Methodological discussions showing the favorable properties of the incident-user design [4, 5, 6] have led to it being the design recommended in guidelines, e.g., [3, 7, 11]. A strength of the incident-user design that has been emphasized is the
alignment of start of follow-up, start of treatment, and the moment of meeting eligibility criteria [5]. This ensures an
unambiguous definition of time zero in the study.
Despite its theoretical appeal, implementation of the incident-user design is not always feasible in practice [6, 8, 12].
The start of a course of treatment cannot always be defined unambiguously. For example, when a patient is exposed to
multiple treatment episodes, incident use can refer to the very first treatment episode only or to a restart of treatment,
depending on the clinical context. Sometimes researchers refrain from implementing an incident-user design for
practical considerations; exclusion of prevalent users may reduce follow-up time or sample size [8, 13]. It is not
well studied how often pharmacoepidemiological studies deviate from implementing an incident-user design and whether
this varies across clinical contexts.
Previous studies that explored how commonly incident and prevalent-user designs are being implemented suggest that not all studies strictly adhere to the incident-user design recommendation [refer to Hempenius, forthcoming]. A review of exclusion criteria applied in observational comparative effectiveness and safety studies by Perrio and colleagues found that around 32% of 200 included studies defined exclusion criteria based on exposure, mostly concerning the exclusion of previous users [14]. The review concerned studies published between 1999 and 2004, meaning that most of the included
articles were published before Ray’s landmark paper on the incident-user design in 2003 [4]. In 2015, Yoshida and
colleagues reviewed studies into biological disease-modifying antirheumatic drugs and found that approximately half of
the included studies implemented an incident-user design [15]. The definition of incident users in these studies was not
further specified in terms of alignment of start of follow-up, start of treatment, and moment of meeting eligibility criteria.
Furthermore, the choice of an incident-user or prevalent-user design has not been examined in relation to other study characteristics,
such as sample size, type of exposure or specification of a target estimand.
We aim to systematically review the literature about contemporary observational comparative effectiveness and safety
studies to assess how frequently incident-user and prevalent-user designs are being implemented and to what extent
considerations about the appropriateness of the design are reported. Specifically, we aim to assess how commonly and
unambiguously the start of follow-up, the start of treatment, and the moment of meeting eligibility criteria are being
reported and to what extent these appear to be aligned.

2 Study Design
We aim to provide a systematic assessment of the reporting practices in observational studies on treatment effects
regarding the choice for inclusion of incident versus prevalent users of treatment. To provide an assessment of the state
of the art in pharmacoepidemiology, we will systematically review observational studies of pharmacological treatments published in six high-ranking pharmacoepidemiology journals.

2.1 Journal selection and types of studies to be included

We aim to review the reporting of approximately 100 articles published before the 1st of July 2019. Study inclusion
criteria are: study described original pharmacoepidemiologic research into the relation between drug exposure and
a clinical outcome; data were collected for research purposes or obtained from routinely collected health data; the
data were gathered according to a cohort study design, since the definition of incident versus prevalent users is not as
straightforward in other designs, such as a cross-sectional, case-crossover or case-control design. Exclusion criteria
are: pharmacokinetic-pharmacodynamic studies; cost-effectiveness studies; the exposure was a vaccination, an antibiotic given for a single treatment episode (up to 10 days), chemotherapy, or an intravenously administered drug; data on exposure were collected through self-report. Six pharmacoepidemiology journals that were highly ranked in terms of 2018 H-index on the SCImago Journal & Country Rank [16] were included: Annals of Pharmacotherapy, British
Journal of Clinical Pharmacology, Drug Safety, European Journal of Clinical Pharmacology, Pharmacotherapy, and
Pharmacoepidemiology and Drug Safety.
KL will screen the title and abstract of all studies that result from the searches and include relevant articles based on
the eligibility criteria specified above (see Figure 1 for a preview of how this will be reported). We will apply a quota sampling strategy, screening articles from the most recent backwards until we have included the 100 most recent eligible articles published before July 1st, 2019.
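
To illustrate this quota sampling step, the following minimal Python sketch (illustrative only; the record structure, dates, and eligibility flags are hypothetical assumptions, not part of the protocol) orders screened records by publication date and retains the 100 most recent eligible articles published before the cutoff.

from datetime import date

# Hypothetical screening results: (PubMed ID, publication date, eligible after title/abstract screening).
records = [
    ("31000001", date(2019, 6, 14), True),
    ("30999887", date(2019, 6, 30), False),
    ("30990000", date(2019, 5, 2), True),
    # ... remaining records from the search
]

CUTOFF = date(2019, 7, 1)  # articles must be published before July 1st, 2019
QUOTA = 100                # quota sampling: the 100 most recent eligible articles

# Sort eligible records published before the cutoff from most recent to oldest and keep the first 100.
eligible_sorted = sorted(
    (r for r in records if r[1] < CUTOFF and r[2]),
    key=lambda r: r[1],
    reverse=True,
)
included = eligible_sorted[:QUOTA]
print(f"{len(included)} articles included")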

2.2 Search string (performed 16th January 2020; 2462 results)

(("Pharmacoepidemiology AND drug safety"[Journal]) OR "Pharmacotherapy"[Journal] OR "British journal of clinical


pharmacology"[Journal] OR "The Annals of pharmacotherapy"[Journal] OR "Drug safety"[Journal] OR "European
journal of clinical pharmacology"[Journal]) AND ("2016/01/01"[Date - Entrez] : "2019/07/01"[Date - Entrez]) NOT
(“case reports”[Publication Type] OR “comment”[Publication Type] OR "clinical trial, phase i"[Publication Type] OR
"clinical trial, phase ii"[Publication Type] OR "clinical trial, phase iii"[Publication Type] OR “dictionary”[Publication
Type] OR “editorial”[Publication Type] OR "historical article"[Publication Type] OR "interview"[Publication Type]
OR “lectures”[Publication Type] OR "legislation"[Publication Type] OR "letter"[Publication Type] OR "meta analysis"[Publication Type] OR "personal narratives"[Publication Type] OR "randomized controlled trial"[Publication Type]
OR "review"[Publication Type])

2.3 Data extraction: extraction of study characteristics and evaluation of reporting quality

Articles will be reviewed independently by KL and JS; results will be discussed between the two reviewers, and in case of disagreement a third reviewer (RG) will be consulted. Articles will be scored on a set of items derived from
guideline recommendations about elements that should be reported in protocols [2, 17] or articles [9, 11] of comparative
effectiveness and safety research using large observational databases, as well as two methodological articles that discuss
the definition of ‘time zero’ in observational studies of causal effects [5, 18]. The items are described in Table 1.


When multiple comparative effectiveness or safety analyses are described in a single article, only the first-reported
comparative analysis is scored. When subgroup analyses are performed, only the main analysis is scored. When
methods are discussed in an online protocol or referred to in a different article, we will review the referred material.
To ensure uniform interpretation of the listed items, six randomly chosen studies will be reviewed by JS and KL, and discrepancies in the scores will be discussed. If conclusions are not unanimous, a third reviewer (RG) will be consulted
to reach consensus. After consensus is reached, the remaining studies will be reviewed in duplicate.
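
The protocol resolves disagreements by discussion and, if needed, a third reviewer; it does not prescribe an agreement statistic. Purely as an illustration, inter-rater agreement on a categorical item from Table 1 could be quantified with Cohen's kappa, as in the sketch below (scikit-learn assumed to be available; the scores are hypothetical).

from sklearn.metrics import cohen_kappa_score

# Hypothetical scores by the two reviewers (KL, JS) on the item
# "Treatment-of-interest arm incident user or prevalent user" for the six calibration articles.
scores_kl = ["Incident", "Incident", "Prevalent", "Unclear", "Incident", "Prevalent"]
scores_js = ["Incident", "Prevalent", "Prevalent", "Unclear", "Incident", "Prevalent"]

kappa = cohen_kappa_score(scores_kl, scores_js)
print(f"Cohen's kappa: {kappa:.2f}")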

3 Strategy for data synthesis


Results will be presented as percentages of the total number of included studies. We plan to stratify the results by type of
funding source, type of data source, patient domain, sample size, and length of enrollment window. The other items
(exposure of interest, outcome, sample size calculation, length of follow-up, analytical unit) will be used in narrative
interpretation of results.
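
A minimal sketch of the planned tabulation is given below (pandas assumed; the column names and values are illustrative assumptions, not fixed by the protocol): percentages of the total number of included studies per design, overall and stratified by funding source.

import pandas as pd

# Toy extraction data; in the actual review, one row per included article.
extraction = pd.DataFrame({
    "design": ["Incident", "Prevalent", "Incident", "Unclear", "Incident"],
    "funding": ["Pharmaceutical", "Non-pharmaceutical", "Non-pharmaceutical",
                "Pharmaceutical", "Non-pharmaceutical"],
})

# Percentages of the total number of included studies.
overall = extraction["design"].value_counts(normalize=True).mul(100).round(1)
print(overall)

# Percentages within each stratum of the funding source.
by_funding = (
    extraction.groupby("funding")["design"]
              .value_counts(normalize=True)
              .mul(100)
              .round(1)
)
print(by_funding)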


Figure 1: The screening and inclusion of eligible articles. (Flow diagram adapted from the PRISMA 2009 Flow Diagram, reporting the number of records identified through database searching, records after duplicates removed, records screened, records excluded, full-text articles assessed for eligibility, full-text articles excluded with reasons, and studies included in the qualitative synthesis.)


Table 1: Items used to score included articles.

Item | Description of item | How to score item
First author | The first author of the article. | Open question.
Journal | The journal in which the article was published. | Score: Annals of Pharmacotherapy/ British Journal of Clinical Pharmacology/ Drug Safety/ European Journal of Clinical Pharmacology/ Pharmacoepidemiology and Drug Safety/ Pharmacotherapy.
Country | The country/countries in which the cohort data were collected. | Open question.
Year of publication | The year in which the article was published. | Open question.
Funding | The funding of the article could be pharmaceutical or non-pharmaceutical, where 'pharmaceutical' refers to a funding statement or one of the authors being employed by a pharmaceutical company. | Score: Pharmaceutical/ Non-pharmaceutical.
Data source type | The type of dataset from which information on the exposure of interest was gathered. | Score: Claims/ Prescription/ Dispensing/ Hospital data.
Calendar time period | The calendar time range of data collection. | Open question.
Exposure of interest | The name of the drug of central interest in the index analysis*. | Open question.
Comparator | The name of the drug to which the comparator arm in the index analysis* was exposed. | Open question.
Outcome | The outcome assessed in the index analysis*. | Open question.
Domain | The patient domain to which the study was aimed. | Open question.
Sample size | The total number of participants/treatment episodes in the index analysis*. | Open question.
Number of events | The total number of outcome events included in the index analysis*. | Open question.
Level of measurement of outcome | The level of measurement of the outcome in the index analysis*. | Score: Nominal/ Ordinal/ Continuous.
Sample size calculation | A description of a sample size calculation in the methods section. | Reported: Yes/ No.
If sample size calculation reported, size reached? | A reference to whether the actual sample size was sufficient according to the calculated required sample size. Mentioning of the actual sample size suffices. | Score: Yes/ No.
Length of follow-up | The actual duration of follow-up in the index analysis* [11, item E1]. | Open question: average (mean/median) follow-up (note: sometimes maximum).
Length of enrollment window | The time window prior to study entry date in which an individual was required to be contributing to the data source [11, item C4]. | Open question.
Cohort entry | The definition of 'cohort' that was applied in the study. Four different cohort definitions are distinguished (see [18]): time-based cohorts, event-based cohorts, exposure-based cohorts and multiple-event-based cohorts. Each definition entails a different risk of immortal time bias. | Score: Time-based/ Event-based/ Exposure-based/ Multiple-event-based.
Analytical unit | Person or episode level entry into the cohort [11, item C2]. | Score: Patient/ Treatment episode/ Other.
Cohort entry date | The date from which onwards it was possible to enroll participants (also called: index date) [11, item C1]. | Reported: Yes/ No.
Washout for exposure | The time prior to cohort entry during which there should be no exposure in order to assess whether exposure during the study represents new exposure [11, item C11]. | Reported: Yes/ No/ Irrelevant.
Treatment of interest: Start of follow-up coincided with meeting eligibility criteria | The time at which follow-up is started relative to the time at which eligibility criteria are met for the treatment of interest (recommended in [2], defined in [5]). | Score: Yes/ No/ Unclear.
Treatment of interest: Start of follow-up coincided with start of treatment | The time at which follow-up is started relative to the time at which treatment is started for the treatment of interest (recommended in [2], defined in [5]). | Score: Yes/ No/ Unclear.
Treatment of interest: Meeting eligibility criteria coincided with start of treatment | The time at which eligibility criteria are met relative to the time at which treatment is started for the treatment of interest (recommended in [2], defined in [5]). | Score: Yes/ No/ Unclear.
Comparator: Start of follow-up coincided with meeting eligibility criteria | The time at which follow-up is started relative to the time at which eligibility criteria are met for the comparator treatment (recommended in [2], defined in [5]). | Score: Yes/ No/ Unclear.
Comparator: Start of follow-up coincided with start of treatment | The time at which follow-up is started relative to the time at which treatment is started for the comparator treatment (recommended in [2], defined in [5]). | Score: Yes/ No/ Unclear.
Comparator: Meeting eligibility criteria coincided with start of treatment | The time at which eligibility criteria are met relative to the time at which treatment is started for the comparator treatment (recommended in [2], defined in [5]). | Score: Yes/ No/ Unclear.
Exposure risk window | The time between the minimum and maximum hypothesized induction time following ingestion of the drug of interest or comparator drug [11, item D2]. | Reported: Yes/ No.
Treatment-of-interest arm incident user or prevalent user | The type of use of the drug of interest that is captured or measured [11, item D1]. | Score: Incident/ Prevalent/ Unclear.
In case of prevalent user, was a rationale provided? | The rationale provided for including prevalent users (recommended in [2] for cohort study protocols). | Reported: Yes/ No/ Irrelevant.
Comparator arm incident user or prevalent user | The type of use of the comparator drug that is captured or measured [11, item D1]. | Score: Incident/ Prevalent/ Unclear/ Irrelevant.
Type of exposure comparator arm | The type of comparator population [2]: 1) Active comparator; 2) Unexposed (with 2a, No use; 2b, Recent use; 2c, Past use). | Score: Active comparator – Incident/ Active comparator – Prevalent/ Unexposed – No use/ Unexposed – Recent use/ Unexposed – Past use/ Combination/ Other.
Contrasted treatment groups explicitly stated | A comparative effectiveness or safety study contrasts the effect of different treatment strategies on a health outcome. When the article explicitly reports which treatment groups are compared in the index analysis*, this item is scored 'yes'. | Reported: Yes/ No.
Covariate assessment window | The time over which patient covariates are assessed [11, item G1]. | Reported: Yes/ No.
Covariate assessment window equal for all covariates | The time over which each of the patient covariates is assessed. | Score: Yes/ No/ Not reported/ Irrelevant (where 'irrelevant' is scored when there is no covariate assessment window).
Covariate assessment window prior to start of treatment | The time over which patient covariates are assessed relative to the time at which treatment starts. | Score: Yes/ No/ Not reported/ Irrelevant (score 'yes' also if at the same timepoint; only score 'yes' if all covariates score 'yes').
If exposure is time-varying: was confounder information gathered at multiple timepoints? | The time over which patient covariates are assessed in case of a time-varying exposure. | Score: Yes/ No/ Not reported/ Irrelevant.
If confounder information was gathered at multiple timepoints: was this used in the index analysis*? | The decision whether multiple measurements of confounders over time are incorporated in the analysis. Confounding adjustment in an analysis with time-varying exposure may be performed in various ways; for example, adjustment for confounding may be handled at different timepoints or not. | Score: Yes/ No/ Not reported/ Irrelevant + free text reporting of analysis procedure.

* Index analysis: the analysis described in the article which is scored in this review (selected on criteria in section 2.3).
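
To show how the items in Table 1 could translate into a structured data extraction record, the sketch below defines a hypothetical Python record covering a subset of the items; the field names and example values are illustrative assumptions, not part of the scoring form.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionRecord:
    first_author: str
    journal: str
    funding: str                                    # Pharmaceutical / Non-pharmaceutical
    treatment_arm_user_type: str                    # Incident / Prevalent / Unclear
    comparator_arm_user_type: str                   # Incident / Prevalent / Unclear / Irrelevant
    followup_coincided_with_treatment_start: str    # Yes / No / Unclear
    prevalent_user_rationale: Optional[str] = None  # free text, if prevalent users were included

# Example of a scored article (values are made up for illustration).
record = ExtractionRecord(
    first_author="Example",
    journal="Pharmacoepidemiology and Drug Safety",
    funding="Non-pharmaceutical",
    treatment_arm_user_type="Incident",
    comparator_arm_user_type="Incident",
    followup_coincided_with_treatment_start="Yes",
)
print(record)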


References
[1] Harold C Sox and Sheldon Greenfield. Comparative effectiveness research: a report from the institute of medicine.
Annals of internal medicine, 151(3):203–205, 2009.
[2] Priscilla Velentgas, Nancy A Dreyer, Parivash Nourjah, Scott R Smith, Marion M Torchia, et al. Developing a
protocol for observational comparative effectiveness research: a user’s guide. Government Printing Office, 2013.
[3] Wenying Yang, Alexey Zilov, Pradana Soewondo, Ole Molskov Bech, Fawzia Sekkal, and Philip D Home.
Observational studies: going beyond the boundaries of randomized controlled trials. Diabetes research and
clinical practice, 88:S3–S9, 2010.
[4] Wayne A Ray. Evaluating medication effects outside of clinical trials: new-user designs. American journal of
epidemiology, 158(9):915–920, 2003.
[5] Miguel A Hernán, Brian C Sauer, Sonia Hernández-Díaz, Robert Platt, and Ian Shrier. Specifying a target
trial prevents immortal time bias and other self-inflicted injuries in observational analyses. Journal of clinical
epidemiology, 79:70–75, 2016.
[6] Eric S Johnson, Barbara A Bartman, Becky A Briesacher, Neil S Fleming, Tobias Gerhard, Cynthia J Kornegay,
Parivash Nourjah, Brian Sauer, Glen T Schumock, Art Sedrakyan, et al. The incident user design in comparative
effectiveness research. Pharmacoepidemiology and drug safety, 22(1):1–6, 2013.
[7] Jennifer L Lund, David B Richardson, and Til Stürmer. The active comparator, new user study design in
pharmacoepidemiology: historical foundations and contemporary application. Current epidemiology reports,
2(4):221–228, 2015.
[8] Emily Cox, Bradley C Martin, Tjeerd Van Staa, Edeltraut Garbe, Uwe Siebert, and Michael L Johnson. Good
research practices for comparative effectiveness research: approaches to mitigate bias and confounding in the
design of nonrandomized studies of treatment effects using secondary data sources: the international society for
pharmacoeconomics and outcomes research good research practices for retrospective database analysis task force
report—part ii. Value in Health, 12(8):1053–1061, 2009.
[9] Jonathan AC Sterne, Miguel A Hernán, Barnaby C Reeves, Jelena Savović, Nancy D Berkman, Meera Viswanathan,
David Henry, Douglas G Altman, Mohammed T Ansari, Isabelle Boutron, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ, 355:i4919, 2016.
[10] Samy Suissa, Walter O Spitzer, B Rainville, J Cusson, M Lewis, and L Heinemann. Recurrent use of newer oral
contraceptives and the risk of venous thromboembolism. Human reproduction, 15(4):817–821, 2000.
[11] Shirley V Wang, Sebastian Schneeweiss, Marc L Berger, Jeffrey Brown, Frank de Vries, Ian Douglas, Joshua J
Gagne, Rosa Gini, Olaf Klungel, C Daniel Mullins, et al. Reporting to improve reproducibility and facilitate
validity assessment for healthcare database studies v1.0. Value in Health, 20(8):1009–1022, 2017.
[12] Andrew W Roberts, Stacie B Dusetzina, and Joel F Farley. Revisiting the washout period in the incident user study
design: why 6–12 months may not be sufficient. Journal of comparative effectiveness research, 4(1):27–35, 2015.
[13] Jan Vandenbroucke and Neil Pearce. Point: incident exposures, prevalent exposures, and causal inference: does
limiting studies to persons who are followed from first exposure onward damage epidemiology? American journal
of epidemiology, 182(10):826–833, 2015.
[14] Michael Perrio, Patrick C Waller, and Saad AW Shakir. An analysis of the exclusion criteria used in observational
pharmacoepidemiological studies. Pharmacoepidemiology and drug safety, 16(3):329–336, 2007.
[15] Kazuki Yoshida, Daniel H Solomon, and Seoyoung C Kim. Active-comparator design and new-user design in
observational studies. Nature Reviews Rheumatology, 11(7):437, 2015.
[16] SCImago (n.d.). SJR: SCImago Journal & Country Rank [portal]. http://www.scimagojr.com. Accessed: 2020-01-17.
[17] Miguel A Hernán and James M Robins. Using big data to emulate a target trial when a randomized trial is not
available. American journal of epidemiology, 183(8):758–764, 2016.
[18] Samy Suissa. Immortal time bias in pharmacoepidemiology. American journal of epidemiology, 167(4):492–499,
2007.
