Common Vulnerability Scoring System

Peter Mell and Karen Scarfone, US National Institute of Standards and Technology
Sasha Romanosky, Carnegie Mellon University

Historically, vendors have used their own methods for scoring software vulnerabilities, usually without detailing their criteria or processes. This creates a major problem for users, particularly those who manage disparate IT systems and applications. For example, should they first address a vulnerability with a severity of "5" or one with a severity of "high"?

The Common Vulnerability Scoring System (CVSS) is a public initiative designed to address this issue by presenting a framework for assessing and quantifying the impact of software vulnerabilities. Organizations currently generating CVSS scores include Cisco, the US National Institute of Standards and Technology (through the US National Vulnerability Database; NVD), Qualys, Oracle, and Tenable Network Security. CVSS offers the following benefits:

• Standardized vulnerability scores. CVSS is application-neutral, enabling an organization to score all of its IT vulnerabilities using the same scoring framework. This lets it leverage a single vulnerability management policy to govern how quickly each vulnerability is validated and remediated.
• Contextual scoring. CVSS scores represent the actual risk a given vulnerability poses, helping organizations prioritize remediation efforts.
• Open framework. CVSS provides full details regarding the parameters used to generate each score. This helps organizations understand both the reasoning behind, and the differences among, vulnerability scores.

The goal is for CVSS to facilitate the generation of consistent scores that accurately represent the impact of vulnerabilities.

Introduction
CVSS was conceived by the US National Infrastructure Advisory Council (NIAC), a group of industry leaders that provides the US Department of Homeland Security with security recommendations for critical US infrastructures. Published in 2005 as a first-generation open scoring system,1 CVSS is currently under the custodial care of the international Forum for Incident Response and Security Teams (FIRST; www.first.org/cvss/), which manages the official mailing list and documentation. Many individuals and organizations have formed a special interest group (SIG) to promote and refine the framework. A current list of adopters is available at www.first.org/cvss/eadopters.html.

Several vulnerability scoring systems exist. Examples include US-CERT's (www.kb.cert.org/vuls/html/fieldhelp#metric), the SANS Institute's Critical Vulnerability Analysis Scale (www.sans.org/newsletters/cva/), and the Microsoft Security Response Center Severity Rating System (www.microsoft.com/technet/security/bulletin/rating.mspx). As with these scoring systems, CVSS has several restrictions. For example, it doesn't provide a method for aggregating individual scores across systems or departments. By itself, it's also unsuitable for managing IT risk because it doesn't consider mitigation strategies such as firewalls and access control. Further, CVSS is neither a score repository (Bugtraq), nor a vulnerability database (Open Source Vulnerability Database), nor a vulnerability classification system (Common Vulnerabilities and Exposures; CVE).

CVSS scores are composites derived from the following three categories of metrics:

• Base. This group represents the properties of a vulnerability that don't change over time, such as access complexity, access vector, the degree to which the vulnerability compromises the confidentiality, integrity, and availability of the system, and the requirement for authentication to the system.
• Temporal. This group measures the properties of a vulnerability that do change over time, such as the existence of an official patch or functional exploit code.
• Environmental. This group measures the properties of a vulnerability that are representative of users' IT environments, such as the prevalence of affected systems and the potential for loss.

We further describe these metric groups in the next section.
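The three metric groups can be pictured as a simple data structure. The sketch below is an illustration only; the field names are assumptions paraphrased from the descriptions above, not the official CVSS metric names.

```python
from dataclasses import dataclass

# Field names are assumptions paraphrased from the article's
# descriptions, not the official CVSS metric names.

@dataclass
class BaseMetrics:
    """Properties of a vulnerability that don't change over time."""
    access_vector: str            # e.g., "remote" or "local"
    access_complexity: str        # e.g., "low" or "high"
    authentication_required: bool
    confidentiality_impact: str   # e.g., "none", "partial", "complete"
    integrity_impact: str
    availability_impact: str

@dataclass
class TemporalMetrics:
    """Properties of a vulnerability that do change over time."""
    official_patch_available: bool
    functional_exploit_exists: bool

@dataclass
class EnvironmentalMetrics:
    """Properties specific to a user's IT environment."""
    target_distribution: float        # prevalence of affected systems, 0-1
    collateral_damage_potential: str  # potential for loss, e.g., "low"
```

A scoring tool would fill in the base group once per vulnerability, update the temporal group as patches and exploits appear, and leave the environmental group to each user organization.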
PUBLISHED BY THE IEEE COMPUTER SOCIETY ■ 1540-7993/06/$20.00 © 2006 IEEE ■ IEEE SECURITY & PRIVACY 85
Emerging Standards
increments of 0.1), the experimental data produced only 25 distinct scores. Indeed, 79 percent of the vulnerabilities had one of two base scores:

• 2.3 typically reflected a locally exploitable vulnerability with user-level impact (40.3 percent of the vulnerabilities).
• 7.0 typically reflected a remotely exploitable vulnerability with user-level impact (38.9 percent of the vulnerabilities).

The seven most commonly assigned scores covered 94 percent of the vulnerabilities. In addition, we analyzed the base equation itself and found that most of the scores were in the lower half of the range. This is largely because of the equation's multiplicative nature: it can generate only 68 of the 101 possible base score values, and only 23 of those values are in the upper half of the range.

Moreover, the score distribution is highly bimodal. We expected a much more uniform distribution, not one in which 79 percent of the vulnerabilities map to just two scores. A primary reason for the distribution is that multiple sets of metric inputs can generate the same score. For example, a remotely exploitable vulnerability that provides user-level access would produce the same score as a locally exploitable vulnerability that provides complete control of the same target (assuming all other metrics are set to the same values). In most environments, however, the former should have a higher score because it necessitates a faster response than the latter.

Proposed changes to the base metrics (described later) might add greater diversity to scores by increasing the range of values for individual metrics. However, this won't eliminate the highly bimodal distribution. Another concern is that users might mistakenly believe that the scoring is more accurate than it truly is, based on the scoring range's granularity. The CVSS SIG is reviewing the metrics and equations and considering what granularity level would be most useful to users.

The CVSS SIG has also observed discrepancies among analysts scoring particular types of vulnerabilities. To address these issues, the SIG created a separate task force and identified several problem areas and solutions. To date, the SIG has addressed the following issues through clarifications to the documentation and proposed modifications to the metrics.

Base metric: Access vector
This metric didn't differentiate between remote and local network (subnet) vulnerabilities. For example, attacks launched from hosts on the target's subnet received the same access vector score as attacks launched from hosts on the Internet. To capture the relative difficulty of exploitation, the SIG proposed creating three access vector values: remote, local-network accessible, and local.

Base metric: Authentication
This metric didn't differentiate between requiring one or multiple successful authentication steps. For example, a vulnerability that could be exploited only after authenticating to an operating system and the application running on it received the same score as a vulnerability that could be exploited after simply authenticating to the operating system. To more accurately reflect the difference in authentication sophistication, the SIG proposed three authentication metric values: none, single, or multiple authentications required.

Documentation: Target privileges assumptions
Varying assumptions among scoring analysts resulted in scoring inconsistencies. For example, some analysts assumed that IT assets would be secured properly according to best practices, thus limiting the impact of exploitation; others assumed that targets would use weaker, default security configurations (end users running email clients or Web browsers with administrative-level privileges, for example), thus causing a greater impact. To ensure greater scoring consistency, the SIG updated the documentation to specify that scores should reflect typical (most common) software deployments and configurations. If privileges aren't clear, the scorer should assume the default configuration.

Documentation: Scoring indirect vs. direct impact
Attackers exploit many cross-site scripting vulnerabilities by luring users to click on Web links containing malicious code. Some analysts scored such vulnerabilities based on the impact to the server because that's where the malicious data is located, whereas others scored them based on the impact to end users, which is generally more severe but also much harder to quantify. These conflicting approaches caused significant scoring discrepancies for vulnerabilities that differed in terms of their direct and indirect impact. The SIG responded by updating the documentation to specify that vulnerabilities should be scored with respect to their impact to the vulnerable target or target service only.
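The sparse, collision-prone score space discussed earlier is a direct consequence of the base equation's multiplicative form. The sketch below uses invented metric weights (not the official CVSS values) to show how a multiplicative equation maps many metric combinations onto far fewer distinct scores.

```python
from itertools import product

# Invented, CVSS-v1-style metric weights; the real specification
# defines the official values.
access_vector = {"local": 0.7, "remote": 1.0}
access_complexity = {"high": 0.8, "low": 1.0}
authentication = {"required": 0.6, "not_required": 1.0}
impact = {"none": 0.0, "partial": 0.7, "complete": 1.0}

def base_score(av, ac, au, c, i, a):
    # Multiplicative form: permuting the three impact values, or
    # zeroing any factor, collapses many inputs onto one output.
    return round(10 * av * ac * au * (c + i + a) / 3, 1)

scores = {
    base_score(av, ac, au, c, i, a)
    for av, ac, au, c, i, a in product(
        access_vector.values(),
        access_complexity.values(),
        authentication.values(),
        impact.values(), impact.values(), impact.values(),
    )
}
print(f"{len(scores)} distinct scores from 216 metric combinations")
```

Even this toy equation yields far fewer distinct scores than metric combinations, which mirrors the clustering observed in the NVD data.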
For cross-site scripting vulnerabilities, this typically means a partial impact on integrity and no impact on either confidentiality or availability.

Increasing adoption
The CVSS SIG's primary effort has been to create a scoring system that generates consistent and representative vulnerability scores: scores that properly reflect the severity and impact of a given vulnerability to any user's computing environment. However, CVSS faces numerous challenges.

User participation in environmental scoring
Most end users will access and use CVSS scores indirectly, through vulnerability-scanning tools, for example. Although such tools can determine base scores and possibly temporal scores, they probably won't be able to generate accurate environmental scores without user participation. A scanning tool with full visibility into a user's network could potentially deduce target distribution, but it couldn't infer a vulnerability's damage potential. A proper solution therefore needs to let users score the environmental metrics (collateral damage potential and target distribution), although we recognize that they might often face difficulties in collecting this information. Disaster recovery and business continuity departments often prioritize assets according to their value to the organization. This might provide the necessary data, but political and procedural difficulties could make obtaining that information difficult for vulnerability-scoring analysts in large enterprises. Environmental scoring thus represents a possible barrier to entry for CVSS.

End-user adoption
CVSS must be compatible with an organization's vulnerability management processes. Rather than a technical challenge of metrics or scoring algorithms, the concern is that the CVSS framework must satisfy a business need and make a valuable contribution to vulnerability management practices. Altering existing processes, procedures, and technologies could require considerable effort. For CVSS to make business sense, its payoff to an organization must be greater than the integration costs.

Vendor adoption
For CVSS to succeed, software and service vendors must identify the business case (just as with end users). Demand will likely be the greatest driver: vendors will offer CVSS scores when customers request them. Vendors might also choose to support CVSS as a way to differentiate themselves from their competition. Authoritative sources such as NVD make CVSS scores freely available to the public. (NVD currently offers XML feeds of CVSS base scores for all vulnerabilities.) These sources are particularly useful for vendors that don't wish to develop proprietary scoring mechanisms or perform their own scoring.

Security vs. usability
A vulnerability-scoring system that's accurate is wonderful; a scoring system that's accurate and usable is even better. For CVSS to become successful, the scoring mechanism must be clear, quick, and understandable. Complex scoring metrics might increase score accuracy and resolution, but if this comes at the expense of usability, analysts might score incorrectly, or end users might avoid incorporating temporal and environmental factors.

Scoring for IT misconfigurations
CVSS was developed to generate scores for vulnerabilities in the context of software flaws. Yet many organizations are also interested in scores for security misconfigurations, such as applications configured to permit blank passwords or operating systems that allow any user to modify critical system files. Such misconfigurations can be exploited, so it should be possible to use a similar or identical scoring system for both types of problems. The CVSS specification and its accompanying documentation don't currently address misconfigurations, but this opportunity certainly exists. CVSS would be even more valuable to users if it let them compare and prioritize both software flaws and misconfigurations on their systems.

CVSS is an emerging standard, but for it to succeed, we must ensure that it generates consistent scores, that they accurately represent the impact of the vulnerabilities assessed, and that users can easily provide temporal and environmental data. We can achieve these goals by refining the CVSS metrics and equations, maintaining clear documentation, and clearly defining what CVSS is and is not.

We believe CVSS is significant because it can eliminate duplicate efforts in IT vulnerability scoring and let organizations make more informed decisions when managing vulnerabilities. We've observed scoring discrepancies and responded by improving the documentation and proposing better metrics.
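To make the environmental scoring discussion concrete: if users supplied the two environmental metrics mentioned earlier (collateral damage potential and target distribution), a tool could adjust a published base score locally. The formula and weights below are invented for illustration and are not the official CVSS environmental equation.

```python
def environmental_adjust(base_score: float,
                         target_distribution: float,
                         collateral_damage: float) -> float:
    """Hypothetical adjustment (not the official CVSS equation):
    raise a 0-10 base score toward 10 in proportion to potential
    loss, then scale it by how widespread the affected systems are.
    Both environmental inputs are user-supplied fractions in [0, 1]."""
    raised = base_score + (10.0 - base_score) * collateral_damage
    return round(raised * target_distribution, 1)

# A 7.0 base score for a vulnerability present on only a quarter
# of an organization's systems, with moderate loss potential:
adjusted = environmental_adjust(7.0, target_distribution=0.25,
                                collateral_damage=0.3)
print(adjusted)
```

The point of such a sketch is that the environmental inputs can only come from the user organization; no scanner or vendor can supply them on its behalf.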
Acknowledgments
We thank the CVSS SIG members, the SIG
scoring analysts, and those who continue to use
and implement CVSS.
Reference
1. M. Schiffman et al., Common Vulnerability Scoring System, tech. report, US Nat'l Infrastructure Advisory Council, 2004; www.dhs.gov/interweb/assetlibrary/NIAC_CVSS_FinalRpt_12-2-04.pdf.