DORA Indicators Guidance
Ginny Barbour
Rachel Bruce
Stephen Curry
Bodo Stern
Stuart King
Rebecca Lawrence
This content is available under a Creative Commons Attribution Share Alike License
(CC BY-SA 4.0).
1. While the term “metric” suggests a quantity that is being measured directly, “indicator” better reflects the fact that quantities used in research assessment are more often indirect proxies for quality. We therefore use “indicator” throughout this briefing document.
DORA metrics guidance 4
What is your rationale for using particular quantitative indicators in your research or researcher assessments? Is it grounded in good evidence?

How will you take account of the proxy and reductive nature inherent in any indicator? (e.g., citations are not a direct measure of quality; the h-index takes no account of age, discipline, or career breaks).
The Journal Impact Factor (JIF) is an indicator that can essentially be defined as the annual average number of citations to papers in any given journal in the two preceding years. (The actual calculation is more opaque than this.)

Criticisms of the JIF are laid out in some detail in the DORA declaration and elsewhere, but the critical issue for research assessment is that claims that the JIF is a signifier of the value or quality of an individual paper are not supported by a close examination of the evidence. The JIF is a measure of what might be termed the “average citation performance” of papers in a particular journal but, aside from its many technical shortcomings, it gives no indication of the variation in the distribution of citations from which it is calculated, which typically ranges over 2-3 orders of magnitude. Although it is tempting to rely on the law of averages and conclude that a paper from a high-JIF journal is likely to be better than one from a low-JIF journal, the evidence shows that JIFs are poor predictors of the citation performance of individual articles. Further, it is often stated that the quality of peer review is higher at high-JIF journals, but we know of no good evidence to support this.

So, when judging individual publications or their authors, one has to look closer. The individual citation performance of a paper can provide some insight but, as discussed below, needs to be considered in context. Assessment of the content is also critical, as is knowing the particular contribution of any author listing it in their CV. Narrative CVs are emerging as a useful tool for capturing this more qualitative information in concise and comparable forms.

The reservations noted here regarding the use of the Journal Impact Factor apply equally to other journal-based indicators, e.g., CiteScore, the Eigenfactor Score, and the Source Normalized Impact per Paper (SNIP).
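The point about averages and skewed distributions can be made concrete with a small Python sketch. The citation counts below are hypothetical, and the calculation is the simplified definition given above (the published JIF calculation involves further subtleties):

```python
import statistics

# Hypothetical citation counts received this year by the ten articles a
# journal published in the two preceding years; the heavy skew, with one
# highly cited outlier, is typical of real journals.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 6, 181]

# Simplified JIF: total citations divided by the number of citable items.
jif = sum(citations) / len(citations)
typical = statistics.median(citations)

print(f"JIF (mean): {jif}")       # JIF (mean): 20.0
print(f"Median paper: {typical}") # Median paper: 2.0
```

In this illustrative case the journal’s JIF of 20.0 is ten times the median of 2.0: most papers in the journal fall far below its “average citation performance”, which is why the JIF says little about any individual article.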
The h-index for individual authors is defined as the number h of their papers that have been cited at least h times; an author with an h-index of 10, for example, has ten papers, each with at least ten citations.

The h-index is commonly used by institutions and individuals to compare researchers or to monitor their “performance” over time. However, it is difficult to interpret meaningfully, not least because it can give inconsistent and counterintuitive readings of researcher impact. Moreover, the value of the h-index depends on the database used to derive it (e.g., Web of Science, Scopus, Google Scholar) and can be manipulated by gaming.

As a reductive aggregate indicator, the h-index also lacks crucial contextual information that should be included in responsible research assessment. For example, the h-index will usually be higher for researchers at later career stages, or who have not taken career breaks, or who work in disciplines that attract higher citation rates (e.g., medical sciences as compared with mathematics or the humanities); nor does it take account of the nature of the author’s contributions to each of their papers. In disciplines that rely increasingly on interdisciplinary, collaborative approaches, the h-index may thus reflect participation in large teams rather than individual contributions.

Any organization making use of the h-index in research assessment should be able to explain how it provides meaningful insight into individual research performance, and how account is taken of individual circumstances (e.g., academic age, career breaks, scholarly discipline).
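The definition above translates directly into a short Python sketch; the function and the example citation counts are illustrative, not part of the DORA guidance:

```python
def h_index(citations):
    """Largest h such that the author has h papers cited at least h times."""
    h = 0
    # Rank papers from most to least cited; a paper at rank r counts
    # toward the index only if it has at least r citations.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# An author with ten papers, each cited at least ten times, has h = 10.
print(h_index([10] * 10))          # 10
# A single landmark paper barely moves the index.
print(h_index([500, 9, 1, 0]))     # 2
```

The second example illustrates the reductive nature of the indicator noted above: an author with one paper cited 500 times and one cited 9 times gets the same h-index as an author with two papers cited twice each.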
The guidance given in this briefing note is neither exhaustive nor comprehensive, but it illustrates how the principles laid out in the DORA declaration can be applied when other metrics are considered for use in the assessment of research or researchers. The examples included here refer only to publication-based metrics, but other indicators should be treated in the same way (e.g., see the Metrics Toolkit and the challenges associated with making targets of metrics). For example, grant funding income is often assessed during researcher evaluation, since the ability to win competitive funding for ideas is a desirable attribute, but this information should always be contextualized: it is important to recognize that the requirement for funds differs markedly between fields (even within STEM disciplines), that biases still disfavor women and other under-represented groups, and that even the most rigorous funding decisions are attended by uncertainty and are poorly predictive of research productivity.
DORA
sfdora.org