
Emerging Standards

Editors: Ramaswamy Chandramouli, mouli@nist.gov
Tim Grance, tim.grance@nist.gov
Rick Kuhn, kuhn@nist.gov
Susan Landau, susan.landau@sun.com

Common Vulnerability Scoring System

Peter Mell and Karen Scarfone, US National Institute of Standards and Technology
Sasha Romanosky, Carnegie Mellon University

Historically, vendors have used their own methods for scoring software vulnerabilities, usually without detailing their criteria or processes. This creates a major problem for users, particularly those who manage disparate IT systems and applications. For example, should they first address a vulnerability with a severity of "5" or one with a severity of "high"?

The Common Vulnerability Scoring System (CVSS) is a public initiative designed to address this issue by presenting a framework for assessing and quantifying the impact of software vulnerabilities. Organizations currently generating CVSS scores include Cisco, the US National Institute of Standards and Technology (through the US National Vulnerability Database; NVD), Qualys, Oracle, and Tenable Network Security. CVSS offers the following benefits:

• Standardized vulnerability scores. CVSS is application-neutral, enabling an organization to score all of its IT vulnerabilities using the same scoring framework. This lets it apply a single vulnerability management policy to govern how quickly each vulnerability is validated and remediated.
• Contextual scoring. CVSS scores represent the actual risk a given vulnerability poses to a particular environment, helping organizations prioritize remediation efforts.
• Open framework. CVSS provides full details regarding the parameters used to generate each score. This helps organizations understand both the reasoning behind, and the differences among, vulnerability scores.

The goal is for CVSS to facilitate the generation of consistent scores that accurately represent the impact of vulnerabilities.

Introduction

CVSS was conceived by the US National Infrastructure Advisory Council (NIAC), a group of industry leaders that provides the US Department of Homeland Security with security recommendations for critical US infrastructures. Published in 2005 as a first-generation open scoring system,1 CVSS is currently under the custodial care of the international Forum for Incident Response and Security Teams (FIRST; www.first.org/cvss/), which manages the official mailing list and documentation. Many individuals and organizations have formed a special interest group (SIG) to promote and refine the framework. A current list of adopters is available at www.first.org/cvss/eadopters.html.

Several other vulnerability scoring systems exist. Examples include US-CERT's severity metric (www.kb.cert.org/vuls/html/fieldhelp#metric), the SANS Institute's Critical Vulnerability Analysis Scale (www.sans.org/newsletters/cva/), and the Microsoft Security Response Center Severity Rating System (www.microsoft.com/technet/security/bulletin/rating.mspx). As with these scoring systems, CVSS has several restrictions. For example, it doesn't provide a method for aggregating individual scores across systems or departments. By itself, it's also unsuitable for managing IT risk because it doesn't consider mitigation strategies such as firewalls and access control. Further, CVSS is neither a score repository (such as Bugtraq), nor a vulnerability database (such as the Open Source Vulnerability Database), nor a vulnerability classification system (such as Common Vulnerabilities and Exposures; CVE).

CVSS scores are composites derived from the following three categories of metrics:

• Base. This group represents the properties of a vulnerability that don't change over time, such as access complexity, access vector, the degree to which the vulnerability compromises the confidentiality, integrity, and availability of the system, and the requirement for authentication to the system.
• Temporal. This group measures the properties of a vulnerability that do change over time, such as the existence of an official patch or functional exploit code.
• Environmental. This group measures the properties of a vulnerability that are representative of users' IT environments, such as the prevalence of affected systems and the potential for loss.

We further describe these metric groups in the next section.


[Figure 1. Common Vulnerability Scoring System (CVSS) metric groups. The base metric group (access vector, access complexity, authentication, confidentiality impact, integrity impact, availability impact, and impact bias) feeds the base formula; the temporal metric group (exploitability, remediation level, and report confidence) feeds the temporal formula; and the environmental metric group (collateral damage potential and target distribution) feeds the environmental formula. The overall vulnerability score is determined by generating a base score and modifying it through the temporal and environmental formulas.]

CVSS metrics

As Figure 1 illustrates, CVSS uses simple metrics and formulas to create composite scores, derived from base, temporal, and environmental groups. You can find complete details regarding the algorithms at www.first.org/cvss/cvss-guide.html.

The base metrics represent the vulnerability's immutable characteristics (properties that are constant over time and across systems). They produce a score within the range of 0.0 to 10.0:

• Access vector. Can an attacker exploit the vulnerability remotely or locally?
• Authentication. Must an attacker authenticate to the operating system, application, or service after gaining access to the target to exploit the vulnerability? Scoring options are required or not-required.
• Access complexity. How difficult is it to exploit the vulnerability (for example, does it require action by the user, such as clicking on a malicious URL)? Options are high or low.
• Confidentiality impact. What is the potential impact of unauthorized access to the system's data? Options are none, partial, or complete.
• Integrity impact. What is the potential impact of unauthorized modification or destruction of the system's files or data? Options are none, partial, or complete.
• Availability impact. What is the potential impact if the system or data is unavailable? Again, the scoring options are none, partial, or complete.
• Impact bias. To which property (if any) should the score convey a greater weighting: confidentiality, integrity, or availability? For example, confidentiality might be the most important for encryption software, and so the scoring analyst would assign a confidentiality bias.

The temporal metrics represent the vulnerability's properties that might change over time. They modify the base score, lowering it by as much as one-third:

• Exploitability. What is the current state (or ease) of the vulnerability's exploitability? Options are unproven, proof-of-concept, functional, or high.
• Remediation level. What level of mitigating controls currently exists for the vulnerability? Scoring options are official-fix, temporary-fix, workaround, or unavailable.
• Report confidence. How credible are the vulnerability details? Options are unconfirmed, uncorroborated, or confirmed.

The environmental metrics modify this result to generate a final score, ranging from 0.0 (no affected systems) to 10.0 (most systems affected with a high risk of catastrophic damage). This score is most important because it provides the context for vulnerabilities within the organization:

• Collateral damage potential. What is the degree of loss to information, systems, or people? Options are none, low, medium, or high.
• Target distribution. What percentage of systems could the vulnerability affect? Options are none, low, medium, or high.

Having three scores for each vulnerability is efficient and flexible because all organizations can use the same base score for a vulnerability, yet each organization can customize the score for its own environment.
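The official equations live in the CVSS guide at www.first.org/cvss/cvss-guide.html; as a rough illustration only, the following Python sketch shows the general shape of a multiplicative base-score computation and a temporal adjustment. The function names are ours, and the coefficient values are approximations of the first-generation weights and should be treated as assumptions; consult the guide for the authoritative numbers.

# Illustrative sketch of a CVSS v1-style base and temporal computation.
# The coefficient values below are approximations for demonstration only;
# the authoritative equations and weights are in the CVSS guide at FIRST.

ACCESS_VECTOR = {"local": 0.7, "remote": 1.0}
ACCESS_COMPLEXITY = {"high": 0.8, "low": 1.0}
AUTHENTICATION = {"required": 0.6, "not-required": 1.0}
IMPACT = {"none": 0.0, "partial": 0.7, "complete": 1.0}
# Impact bias shifts weight toward one property and away from the other two;
# "normal" weights all three equally.
BIAS = {
    "normal": (0.333, 0.333, 0.333),
    "confidentiality": (0.5, 0.25, 0.25),
    "integrity": (0.25, 0.5, 0.25),
    "availability": (0.25, 0.25, 0.5),
}
EXPLOITABILITY = {"unproven": 0.85, "proof-of-concept": 0.9,
                  "functional": 0.95, "high": 1.0}
REMEDIATION_LEVEL = {"official-fix": 0.87, "temporary-fix": 0.90,
                     "workaround": 0.95, "unavailable": 1.0}
REPORT_CONFIDENCE = {"unconfirmed": 0.90, "uncorroborated": 0.95,
                     "confirmed": 1.0}


def base_score(vector, complexity, auth, conf, integ, avail, bias="normal"):
    """Multiplicative base score in the range 0.0 to 10.0."""
    wc, wi, wa = BIAS[bias]
    impact = IMPACT[conf] * wc + IMPACT[integ] * wi + IMPACT[avail] * wa
    score = (10 * ACCESS_VECTOR[vector] * ACCESS_COMPLEXITY[complexity]
                * AUTHENTICATION[auth] * impact)
    return round(score, 1)


def temporal_score(base, exploitability, remediation, confidence):
    """Temporal metrics can only lower the base score, by up to roughly one-third."""
    score = (base * EXPLOITABILITY[exploitability]
                  * REMEDIATION_LEVEL[remediation]
                  * REPORT_CONFIDENCE[confidence])
    return round(score, 1)


if __name__ == "__main__":
    # A remotely exploitable flaw with partial confidentiality and integrity impact.
    b = base_score("remote", "low", "not-required", "partial", "partial", "none")
    t = temporal_score(b, "functional", "official-fix", "confirmed")
    print(f"base={b}, temporal={t}")

The environmental formula would then adjust the temporal result using collateral damage potential and target distribution; we omit it here because its exact form is likewise defined in the official guide.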


Assessing CVSS score generation

In an effort to validate real-world CVSS scoring, NVD analysts calculated CVSS base scores for 1,291 vulnerabilities listed in the CVE vulnerability dictionary. Although 101 values are possible (0.0 to 10.0, in increments of 0.1), the experimental data produced only 25 distinct scores. Indeed, 79 percent of the vulnerabilities had one of two base scores:

• 2.3 typically reflected a locally exploitable vulnerability with user-level impact (40.3 percent of the vulnerabilities).
• 7.0 typically reflected a remotely exploitable vulnerability with user-level impact (38.9 percent of the vulnerabilities).

The seven most commonly assigned scores covered 94 percent of the vulnerabilities. In addition, we analyzed the base equation itself and found that most of the scores were in the lower half of the range. This is largely because of the equation's multiplicative nature: it can generate only 68 of the 101 possible base score values, and only 23 of those values are in the upper half of the range. (A small enumeration sketch at the end of this section illustrates this granularity effect.)

Moreover, the score distribution is highly bimodal. We expected a much more uniform distribution than for 79 percent of the vulnerabilities to map to just two scores. A primary reason for the distribution is that multiple sets of metric inputs can generate the same scores. For example, a remotely exploitable vulnerability that provides user-level access would produce the same score as a locally exploitable vulnerability that provides complete control of the same target (assuming all other metrics are set to the same values). In most environments, however, the former should have a higher score because it necessitates a faster response than the latter.

Proposed changes to the base metrics (described later) might add greater diversity to scores by increasing the range of values for individual metrics. However, this won't eliminate the highly bimodal distribution. Another concern is that users might mistakenly believe that the scoring is more accurate than it truly is, based on the scoring range's granularity. The CVSS SIG is reviewing the metrics and equations and considering what granularity level would be most useful to users.

The CVSS SIG has also observed discrepancies among analysts scoring particular types of vulnerabilities. To address these issues, the SIG created a separate task force and identified several problem areas and solutions. To date, the SIG has addressed the following issues through clarifications to the documentation and proposed modifications to the metrics.

Base metric: Access vector

This metric didn't differentiate between remote and local network (subnet) vulnerabilities. For example, attacks launched from hosts on the target's subnet received the same access vector score as attacks launched from hosts on the Internet. To capture the relative difficulty of exploitation, the SIG proposed creating three access vector values: remote, local-network accessible, and local.

Base metric: Authentication

This metric didn't differentiate between requiring one or multiple successful authentication steps. For example, a vulnerability that could be exploited only after authenticating to an operating system and to the application running on it received the same score as a vulnerability that could be exploited after simply authenticating to the operating system. To more accurately reflect the difference in authentication sophistication, the SIG proposed three authentication metric values: none, single, or multiple authentications required.

Documentation: Target privileges assumptions

Varying assumptions by scoring analysts resulted in scoring inconsistencies. For example, some analysts assumed that IT assets would be secured properly according to best practices, thus limiting the impact of exploitation; others assumed that targets would use weaker, default security configurations (end users running email clients or Web browsers with administrative-level privileges, for example), thus causing a greater impact.

To ensure greater scoring consistency, the SIG updated the documentation to specify that scores should reflect typical (most common) software deployments and configurations. If privileges aren't clear, the scorer should assume the default configuration.

Documentation: Scoring indirect vs. direct impact

Attackers exploit many cross-site scripting vulnerabilities by luring users to click on Web links containing malicious code. Some analysts scored such vulnerabilities based on the impact to the server because that's where the malicious data is located, whereas others scored them based on the impact to end users, which is generally more severe but also much harder to quantify. These conflicting approaches caused significant scoring discrepancies for vulnerabilities that differed in terms of their direct and indirect impact.

The SIG responded by updating the documentation to specify that vulnerabilities should be scored with respect to their impact to the vulnerable target or target service only. For cross-site scripting vulnerabilities, this typically means a partial impact on integrity and no impact on either confidentiality or availability.
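As a back-of-the-envelope check on the granularity observation made under "Assessing CVSS score generation," a small script in the same illustrative style can enumerate every combination of base-metric values and count how many distinct one-decimal scores a multiplicative equation of this shape can produce. The coefficients are again assumptions, so the exact counts will differ from the 68-of-101 figure reported for the official equation; the point is only that many input combinations collapse onto few output values, most of them in the lower half of the range.

# Enumerate all combinations of illustrative base-metric weights and count
# how many distinct one-decimal scores a multiplicative equation yields.
# Coefficients are demonstration values, not the official CVSS equation.
from itertools import product

ACCESS_VECTOR = (0.7, 1.0)          # local, remote
ACCESS_COMPLEXITY = (0.8, 1.0)      # high, low
AUTHENTICATION = (0.6, 1.0)         # required, not-required
IMPACT = (0.0, 0.7, 1.0)            # none, partial, complete
BIASES = ((0.333, 0.333, 0.333),    # normal
          (0.5, 0.25, 0.25),        # confidentiality weighted
          (0.25, 0.5, 0.25),        # integrity weighted
          (0.25, 0.25, 0.5))        # availability weighted

scores = set()
for av, ac, au, c, i, a, (wc, wi, wa) in product(
        ACCESS_VECTOR, ACCESS_COMPLEXITY, AUTHENTICATION,
        IMPACT, IMPACT, IMPACT, BIASES):
    score = round(10 * av * ac * au * (c * wc + i * wi + a * wa), 1)
    scores.add(score)

print(f"{len(scores)} distinct scores out of 101 possible values")
print(f"{sum(s > 5.0 for s in scores)} of them fall in the upper half of the range")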


Increasing adoption

The CVSS SIG's primary effort has been to create a scoring system that generates consistent and representative vulnerability scores, that is, scores that properly reflect the severity and impact of a given vulnerability to any user's computing environment. However, CVSS has numerous challenges.

User participation in environmental scoring

Most end users will access and use CVSS scores indirectly, through vulnerability-scanning tools, for example. Although such tools can determine base scores and possibly temporal scores, they probably won't be able to generate accurate environmental scores without user participation. A scanning tool with full visibility into a user's network could potentially deduce target distribution, but it couldn't infer a vulnerability's collateral damage potential. A proper solution therefore needs to allow users to score the environmental metrics (collateral damage potential and target distribution), although we recognize that they might often face difficulties in collecting this information. Disaster recovery and business continuity departments often prioritize assets according to their value to the organization. This might provide the necessary data, but political and procedural difficulties could make obtaining that information hard for vulnerability-scoring analysts in large enterprises. Environmental scoring thus represents a possible barrier to entry for CVSS.

End-user adoption

CVSS must be compatible with an organization's vulnerability management processes. Rather than a technical challenge of metrics or scoring algorithms, the concern is that the CVSS framework must satisfy a business need and make a valuable contribution to vulnerability management practices. Altering existing processes, procedures, and technologies could require considerable effort. For CVSS to make business sense, its payoff to an organization must be greater than the integration costs.

Vendor adoption

For CVSS to succeed, software and service vendors must identify the business case (just as with end users). Demand will likely be the greatest driver: vendors will offer CVSS scores when customers request them. Vendors might also choose to support CVSS as a way to differentiate themselves from their competition. Authoritative sources such as NVD make CVSS scores freely available to the public. (NVD currently offers XML feeds of CVSS base scores for all vulnerabilities.) These sources are particularly useful for vendors that don't wish to develop proprietary scoring mechanisms or perform their own scoring.

Security vs. usability

A vulnerability-scoring system that's accurate is wonderful; a scoring system that's accurate and usable is even better. For CVSS to become successful, the scoring mechanism must be clear, quick, and understandable. Complex scoring metrics might increase score accuracy and resolution, but if this comes at the expense of usability, analysts might score incorrectly, or end users might avoid incorporating temporal and environmental factors.

Scoring for IT misconfigurations

CVSS was developed to generate scores for vulnerabilities in the context of software flaws. Yet many organizations are also interested in scores for security misconfigurations, such as applications configured to permit blank passwords or operating systems that allow any user to modify critical system files. Such misconfigurations can be exploited, so it should be possible to use a similar or identical scoring system for both types of problems. The CVSS specification and its accompanying documentation don't currently address misconfigurations, but this opportunity certainly exists. CVSS would be even more valuable to users if it let them compare and prioritize both software flaws and misconfigurations on their systems, as the sketch below suggests.


CVSS is an emerging standard, but for it to succeed, we must ensure that it generates consistent scores, that they accurately represent the impact of the vulnerabilities assessed, and that users can easily provide temporal and environmental data. We can achieve these goals by refining the CVSS metrics and equations, maintaining clear documentation, and clearly defining what CVSS is and is not.

We believe CVSS is significant because it can eliminate duplicate efforts in IT vulnerability scoring and let organizations make more informed decisions when managing vulnerabilities. We've observed scoring discrepancies and responded by improving the documentation and proposing better metrics. We've seen strong industry adoption and are encouraged by the increasing participation, yet we continue to improve both the mechanics and usability of CVSS for the benefit of the information security community.

Acknowledgments
We thank the CVSS SIG members, the SIG
scoring analysts, and those who continue to use
and implement CVSS.

Reference
1. M. Schiffman et al., Common Vulnerability Scoring System, tech. report, US Nat'l Infrastructure Advisory Council, 2004; www.dhs.gov/interweb/assetlibrary/NIAC_CVSS_FinalRpt_12-2-04.pdf.

Peter Mell is a senior computer scientist in the Computer Security Division at the
US National Institute of Standards and
Technology (NIST). He is the creator and
project manager of the National Vulner-
ability Database. For this work, he was
awarded the 2006 Federal 100 award
and the US Commerce Department’s
bronze medal. He also wrote the NIST
Special Publications on patching, mal-
ware, intrusion detection, and the Com-
mon Vulnerabilities and Exposures (CVE)
standard. Mell has an MS in computer
science from the University of California,
Davis. Contact him at mell@nist.gov.

Sasha Romanosky is a PhD student at Carnegie Mellon University researching
the economics of information security. He
has worked with Internet and security
technologies for more than 10 years, pre-
dominantly within the financial and e-
commerce industries. Romanosky has a
BS in electrical engineering from the Uni-
versity of Calgary. He is coauthor of J2EE
Design Patterns Applied (WROX Press,
2002), developer of the FoxTor tool for
anonymous Web browsing, and co-
developer of the Common Vulnerability
Scoring System (CVSS). Contact him at
sromanos@cmu.edu.

Karen Scarfone is a computer scientist in the Computer Security Division at NIST.
She has coauthored NIST publications on
incident response, intrusion detection,
malware, and security configuration
checklists, and has contributed to several
books, including Inside Network Peri-
meter Security (Sams, 2005) and Intru-
sion Signatures and Analysis (Sams,
2001). Scarfone has an MS in computer
science from the University of Idaho. Con-
tact her at karen.kent@nist.gov.
