Lecture 5: Network Threat Management Part 1: Network Standards and Compliance - Dhanesh More
The Common Vulnerability Scoring System (CVSS) provides an open framework for communicating the
characteristics and impacts of IT vulnerabilities.
CVSS enables IT managers, vulnerability bulletin providers, security vendors, application vendors and
researchers to all benefit by adopting this common language of scoring IT vulnerabilities.
CVSS consists of three metric groups: Base, Temporal, and Environmental. Each group produces a numeric score ranging
from 0 to 10, and a vector, a compressed textual representation that reflects the values used to derive the score.
The Base group represents the intrinsic qualities of a vulnerability.
The Temporal group reflects the characteristics of a vulnerability that change over time.
The Environmental group represents the characteristics of a vulnerability that are unique to any user's
environment.
Common Vulnerability Scoring System (CVSS)
Open Framework: Users can be confused when a vulnerability is assigned an arbitrary score.
"Which properties gave it that score? How does it differ from the one released yesterday?"
With CVSS, anyone can see the individual characteristics used to derive a score.
Prioritized Risk: When the environmental score is computed, the vulnerability now becomes
contextual. That is, vulnerability scores are now representative of the actual risk to an
organization. Users know how important a given vulnerability is in relation to other
vulnerabilities.
How does CVSS work?
When the base metrics are assigned values, the base equation calculates a score ranging from 0 to 10, and
a vector is created.
The vector facilitates the "open" nature of the framework. It is a text string that contains the values assigned
to each metric, and it is used to communicate exactly how the score for each vulnerability is derived.
Therefore, the vector should always be displayed with the vulnerability score.
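As an illustration, a CVSS v2 base vector such as "AV:N/AC:L/Au:N/C:P/I:P/A:P" can be split back into its metric values with a few lines of code (the helper name below is our own, not part of the standard):

```python
# Parse a CVSS v2 base vector string into its metric/value pairs.
# The "Metric:Value" segments separated by "/" follow the CVSS v2
# vector format; the function name is our own.

def parse_cvss2_base_vector(vector: str) -> dict:
    """Split a CVSS v2 base vector into a {metric: value} mapping."""
    metrics = {}
    for part in vector.split("/"):
        name, _, value = part.partition(":")
        metrics[name] = value
    return metrics

print(parse_cvss2_base_vector("AV:N/AC:L/Au:N/C:P/I:P/A:P"))
```

Because the vector preserves every assigned value, anyone receiving it can recompute the score, which is exactly the "open" property described above.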
If desired, the base score can be refined by assigning values to the temporal and environmental metrics.
If a temporal score is needed, the temporal equation will combine the temporal metrics with the base score
to produce a temporal score ranging from 0 to 10.
Similarly, if an environmental score is needed, the environmental equation will combine the environmental
metrics with the temporal score to produce an environmental score ranging from 0 to 10.
FAQs
Who performs the scoring?
Generally, the base and temporal metrics are specified by vulnerability bulletin analysts, security product
vendors, or application vendors because they typically have better information about the characteristics of
a vulnerability than do users.
The environmental metrics, however, are specified by users because they are best able to assess the
potential impact of a vulnerability within their own environments.
Who owns CVSS?
CVSS is under the custodial care of the Forum of Incident Response and Security Teams (FIRST). However, it
is a completely free and open standard.
No organization "owns" CVSS and membership in FIRST is not required to use or implement CVSS.
Who is using CVSS?
Vulnerability Bulletin Providers
Software Application Vendors
User Organizations
Vulnerability Scanning and Management
Security (Risk) Management
Researchers
Base Metrics
Access Vector (AV): This metric reflects how the vulnerability is exploited. The possible values for this metric are listed
in Table 1. The more remote an attacker can be when attacking a host, the greater the vulnerability score.
Authentication (Au): This metric measures the number of times an attacker must authenticate to a target in order to
exploit a vulnerability.
Access Complexity (AC): This metric measures the complexity of the attack required to exploit the vulnerability once an attacker has gained access to the target host.

Medium (M): The access conditions are somewhat specialized; the following are examples:
- The attacking party is limited to a group of systems or users at some level of authorization, possibly untrusted.
- Some information must be gathered before a successful attack can be launched.
- The affected configuration is non-default and is not commonly configured (e.g., a vulnerability present when a server performs user account authentication via a specific scheme, but not present for another authentication scheme).
- The attack requires a small amount of social engineering that might occasionally fool cautious users (e.g., phishing attacks that modify a web browser's status bar to show a false link, or having to be on someone's buddy list before sending an IM exploit).

Low (L): Specialized access conditions or extenuating circumstances do not exist. The following are examples:
- The affected product typically requires access to a wide range of systems and users, possibly anonymous and untrusted (e.g., an Internet-facing web or mail server).
- The affected configuration is default or ubiquitous.
- The attack can be performed manually and requires little skill or additional information gathering.
- The race condition is a lazy one (i.e., it is technically a race but easily winnable).
Temporal Metrics
Exploitability (E)
This metric measures the current state of exploit techniques or code availability. Public availability of
easy-to-use exploit code increases the number of potential attackers by including those who are unskilled,
thereby increasing the severity of the vulnerability.
Metric Values: Unproven (U), Proof-of-Concept (POC), Functional (F), High (H), Not Defined (ND)
Remediation Level (RL)
The remediation level of vulnerability is an important factor for prioritization. The typical vulnerability is
unpatched when initially published. Workarounds or hotfixes may offer interim remediation until an official
patch or upgrade is issued. Each of these respective stages adjusts the temporal score downwards, reflecting
the decreasing urgency as remediation becomes final.
Metric Values: Official Fix (OF), Temporary Fix (TF), Workaround (W), Unavailable (U), Not Defined (ND)
Report Confidence (RC)
This metric measures the degree of confidence in the existence of the vulnerability and the credibility of the
known technical details. Sometimes, only the existence of a vulnerability is publicized, but without specific
details.
The vulnerability may later be corroborated and then confirmed through acknowledgement by the author or
vendor of the affected technology.
Metric Values: Unconfirmed (UC), Uncorroborated (UR), Confirmed (C), Not Defined (ND)
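The three temporal metrics feed a simple equation: the temporal score is the base score multiplied by the three temporal weights, rounded to one decimal. A minimal sketch, using the metric weights published in the CVSS v2 specification (the function name is our own):

```python
# CVSS v2 temporal equation:
#   TemporalScore = round_to_1_decimal(BaseScore * E * RL * RC)
# Weight tables below follow the CVSS v2 specification.

E = {"U": 0.85, "POC": 0.90, "F": 0.95, "H": 1.00, "ND": 1.00}   # Exploitability
RL = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.00, "ND": 1.00}  # Remediation Level
RC = {"UC": 0.90, "UR": 0.95, "C": 1.00, "ND": 1.00}             # Report Confidence

def temporal_score(base: float, e: str, rl: str, rc: str) -> float:
    """Combine a base score with the three temporal metric weights."""
    return round(base * E[e] * RL[rl] * RC[rc], 1)

# A base score of 7.5 with a functional exploit (F), an official fix (OF),
# and a confirmed report (C) drops to 6.2.
print(temporal_score(7.5, "F", "OF", "C"))  # 6.2
```

Note how every weight is at most 1.0, so the temporal score can only stay equal to or fall below the base score, matching the "decreasing urgency" described for Remediation Level.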
Scoring Tips
SCORING TIP #1: Vulnerability scoring should not take into account any interaction with other vulnerabilities. That is, each vulnerability
should be scored independently.
SCORING TIP #2: When scoring a vulnerability, consider the direct impact to the target host only.
SCORING TIP #3: Many applications, such as Web servers, can be run with different privileges, and scoring the impact involves making
an assumption as to what privileges are used. Therefore, vulnerabilities should be scored according to the privileges most commonly
used.
SCORING TIP #4: When scoring the impact of a vulnerability that has multiple exploitation methods (attack vectors), the analyst should
choose the exploitation method that causes the greatest impact, rather than the method which is most common, or easiest to perform.
SCORING TIP #5: When a vulnerability can be exploited both locally and from the network, the "Network" value should be chosen.
SCORING TIP #6: Many client applications and utilities have local vulnerabilities that can be exploited remotely, either through user-complicit actions or via automated processing.
SCORING TIP #7: If the vulnerability exists in an authentication scheme itself (e.g., PAM, Kerberos) or an anonymous service (e.g.,
public FTP server), the metric should be scored as "None" because the attacker can exploit the vulnerability without supplying valid
credentials.
SCORING TIP #8: It is important to note that the Authentication metric is different from Access Vector.
SCORING TIP #9: Vulnerabilities that give root-level access should be scored with complete loss of confidentiality, integrity, and
availability, while vulnerabilities that give user-level access should be scored with only partial loss of confidentiality, integrity, and
availability.
SCORING TIP #10: Vulnerabilities with a partial or complete loss of integrity can also cause an impact to availability.
Equations

BaseScore = round_to_1_decimal(((0.6 * Impact) + (0.4 * Exploitability) - 1.5) * f(Impact))

Impact = 10.41 * (1 - (1 - ConfImpact) * (1 - IntegImpact) * (1 - AvailImpact))

Exploitability = 20 * AccessVector * AccessComplexity * Authentication

f(Impact) = 0 if Impact = 0, 1.176 otherwise

AccessVector = case AccessVector of
  requires local access: 0.395
  adjacent network accessible: 0.646
  network accessible: 1.0

AccessComplexity = case AccessComplexity of
  high: 0.35
  medium: 0.61
  low: 0.71

Authentication = case Authentication of
  requires multiple instances of authentication: 0.45
  requires single instance of authentication: 0.56
  requires no authentication: 0.704

ConfImpact = case ConfidentialityImpact of
  none: 0.0
  partial: 0.275
  complete: 0.660

IntegImpact = case IntegrityImpact of
  none: 0.0
  partial: 0.275
  complete: 0.660

AvailImpact = case AvailabilityImpact of
  none: 0.0
  partial: 0.275
  complete: 0.660
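The base equation can be translated directly into runnable code. The weights below are copied from the CVSS v2 value tables; the worked example at the end is the common remotely exploitable, partial-impact profile AV:N/AC:L/Au:N/C:P/I:P/A:P:

```python
# CVSS v2 base score, implemented from the equations above.
# Weight tables match the CVSS v2 metric values.

AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
Au = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf/Integ/Avail impact

def base_score(av, ac, au, c, i, a):
    """Compute the CVSS v2 base score from the six base metric codes."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * Au[au]
    f = 0 if impact == 0 else 1.176  # f(Impact) from the equations
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# AV:N/AC:L/Au:N/C:P/I:P/A:P scores 7.5.
print(base_score("N", "L", "N", "P", "P", "P"))  # 7.5
```

With all three impacts raised to Complete, the same access profile yields the maximum score of 10.0, which is a quick sanity check on the weights.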
Common Weakness Scoring System (CWSS)
CWSS is part of the Common Weakness Enumeration (CWE) project, co-sponsored by the Software Assurance
program in the National Cyber Security Division (NCSD) of the US Department of Homeland Security (DHS).
The Common Weakness Scoring System (CWSS) provides a mechanism for scoring weaknesses in a consistent,
flexible, open manner while accommodating context for the various business domains.
It is a collaborative, community-based effort that is addressing the needs of its stakeholders across government,
academia, and industry.
CWSS provides
a common framework for prioritizing security errors ("weaknesses") that are discovered in software
applications
a quantitative measurement of the unfixed weaknesses that are present within a software
application
can be used by developers to prioritize unfixed weaknesses within their own software
in conjunction with the Common Weakness Risk Analysis Framework (CWRAF), can be used by consumers to
identify the most important weaknesses for their business domains, in order to inform their acquisition and
protection activities as one part of the larger process of achieving software assurance.
Stakeholders
Software developers: often operate within limited time frames, due to release cycles and limited resources.
As a result, they are unable to investigate and fix every reported weakness, and may choose to
concentrate on the worst problems or on those that are easiest to fix.
Software development managers: create strategies for prioritizing and removing entire classes of
weaknesses from the entire code base, or at least the portion that is deemed to be most at risk, by defining
custom "Top-N" lists. They must understand the security implications of integrating third-party software, which
may contain its own weaknesses.
Software acquirers: want to obtain third-party software with a reasonable level of assurance that the
software provider has performed due diligence in removing or avoiding weaknesses that are most critical to
the acquirer's business and mission. Related stakeholders include CIOs, CSOs, system administrators, and end
users of the software.
Code analysis vendors and consultants: want to provide a consistent, community-vetted scoring mechanism
for different customers.
Evaluators of code analysis capabilities: evaluate the capabilities of code analysis techniques (e.g., NIST
SAMATE). They could use a consistent weakness scoring mechanism to support sampling of reported findings,
as well as understanding the severity of these findings without depending on ad hoc scoring methods that
may vary widely by tool/technique.
Other stakeholders: may include vulnerability researchers, advocates of secure development, and
compliance-based analysts (e.g., PCI DSS).
Scoring Methods within CWSS
Targeted: Score individual weaknesses that are discovered in the design or implementation of a specific
("targeted") software package, e.g. a buffer overflow in the username of an authentication routine in line
1234 of vuln.c in an FTP server package.
Generalized: Score classes of weaknesses independent of any particular software package, in order to
prioritize them relative to each other (e.g. "buffer overflows are higher priority than memory leaks").
Context-adjusted: Modify scores in accordance with the needs of a specific analytical context that may
integrate business/mission priorities, threat environments, risk tolerance, etc. These needs are captured using
vignettes that link inherent characteristics of weaknesses with higher-level business considerations. This
method could be applied to both targeted and generalized scoring.
Aggregated: Combine the results of multiple, lower-level weakness scores to produce a single, overall score
(or "grade").
The current focus for CWSS is on the Targeted scoring method and a framework for context-adjusted
scoring.
CWSS 0.6 Scoring for Targeted Software
Scoring
In CWSS 0.6, the score for a weakness, or a weakness bug report ("finding") is calculated using 18
different factors, across three metric groups:
the Base Finding group, which captures the inherent risk of the weakness, confidence in the accuracy of the
finding, and strength of controls.
the Attack Surface group, which captures the barriers that an attacker must cross in order to exploit the
weakness.
the Environmental group, which includes factors that may be specific to a particular operational context,
such as business impact, likelihood of exploit, and existence of external controls.
Base Finding and Attack Surface Metric Groups
The Base Finding group includes Technical Impact (TI), Acquired Privilege (AP), and Acquired Privilege Layer (AL).
The Attack Surface group metrics and their weights, listed as Value (Code) Weight:

Deployment Scope (SC): All (All) 1; Moderate (Mod) 0.9; Rare (Rare) 0.5; Potentially Reachable (Pot) 0.1; Default (D) 0.7; Unknown (Unk) 0.5; Not Applicable (NA) 1; Quantified (Q)
Authentication Instances (AI): None (N) 1; Single (S) 0.8; Multiple (M) 0.5; Default (D) 0.8; Unknown (Unk) 0.5; Not Applicable (NA) 1
Required Privilege Layer (RL): System (S) 0.9; Application (A) 1; Network (N) 0.7; Enterprise (E) 1; Default (D) 0.9; Unknown (Unk) 0.5; Not Applicable (NA) 1
Authentication Strength (AS): Strong (S) 0.7; Moderate (M) 0.8; Weak (W) 0.9; None (N) 1; Default (D) 0.85; Unknown (Unk) 0.5; Not Applicable (NA) 1
Required Privilege (RP): None (N) 1; Guest (G) 0.9; Regular User (RU) 0.7; Partially-Privileged User (P) 0.6; Administrator (A) 0.1; Default (D) 0.7; Unknown (Unk) 0.5; Not Applicable (NA) 1
Level of Interaction (IN): Automated (Aut) 1; Limited/Typical (Ltd) 0.9; Moderate (Mod) 0.8; Opportunistic (Opp) 0.3; High (High) 0.1; No Interaction (NI) 0; Default (D) 0.55; Unknown (Unk) 0.5; Not Applicable (NA) 1
Access Vector (AV): Internet (I) 1; Intranet (R) 0.8; Private Network (V) 0.8; Adjacent Network (A) 0.7; Local (L) 0.5; Physical (P) 0.2; Default (D) 0.75; Unknown (U) 0.5; Not Applicable (NA) 1

The Attack Surface Sub-score is calculated as:
[ 20*(RequiredPrivilege + RequiredPrivilegeLayer + AccessVector) + 20*DeploymentScope + 10*LevelInteraction + 5*(AuthenticationStrength + AuthenticationInstances) ] / 100.0
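The Attack Surface sub-score formula is straightforward to compute once each metric's weight is chosen. A sketch, evaluated with every metric at its Default weight from the tables above (function and parameter names are our own):

```python
# CWSS 0.6 Attack Surface sub-score, as given on the slide:
# [ 20*(RP + RL + AV) + 20*SC + 10*IN + 5*(AS + AI) ] / 100.0
# Each argument is a metric weight, already looked up from its table.

def attack_surface_subscore(rp, rl, av, sc, interaction, as_, ai):
    """Combine the seven Attack Surface metric weights into one sub-score."""
    return (20 * (rp + rl + av)
            + 20 * sc
            + 10 * interaction
            + 5 * (as_ + ai)) / 100.0

# All metrics at their Default weights (RP 0.7, RL 0.9, AV 0.75, SC 0.7,
# IN 0.55, AS 0.85, AI 0.8):
score = attack_surface_subscore(rp=0.7, rl=0.9, av=0.75, sc=0.7,
                                interaction=0.55, as_=0.85, ai=0.8)
print(round(score, 4))  # 0.7475
```

The coefficients sum to 100 (20*3 + 20 + 10 + 5*2), so when every weight is 1 the sub-score is exactly 1.0; the division by 100.0 normalizes the result into the 0..1 range.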
Environmental Metric Group
Access Complexity (AC), Deployment Scope is indirectly covered by CVSS' Access Complexity, which combines multiple distinct factors into a
Deployment Scope
Target Distribution (TD) single item. It also has an indirect association with Target Distribution (TD).
Access Vector (AV) Access Vector The values are similar, but CWSS distinguishes between physical access and local (shell/account) access.
Required Privilege Required Privilege Level is indirectly covered by CVSS' Access Complexity, which combines multiple distinct factors
Access Complexity (AC)
Level into a single item.
Authentication This is not directly specified within CVSS, but scorers might consider the authentication strength when evaluating
N/A
Strength Access Complexity (AC).
Authentication
Authentication (Au)
Instances
Within many CVSS use-cases, the vulnerability has already been discovered and disclosed by another party when
Likelihood of CVSS scoring takes place. So there is no need to track the likelihood of discovery, as the likelihood is (effectively)
N/A
Discovery 1.0. However, within some CWSS use-cases, the issue is only known to the developer at the time of scoring, and the
developer may choose to increase the priority of issues that are most likely to be discovered.
Common Vulnerabilities and Exposures (CVE)
CVE® is a publicly available and free to use list or dictionary of standardized identifiers for common
computer vulnerabilities and exposures.
The Common Vulnerabilities and Exposures or CVE system provides a reference-method for publicly known
information-security vulnerabilities and exposures
MITRE Corporation maintains the system, with funding from the National Cyber Security Division of the
United States Department of Homeland Security. CVE is used by the Security Content Automation Protocol.
MITRE Corporation's documentation defines CVE Identifiers (also called "CVE names", "CVE numbers", "CVE-
IDs", and "CVEs") as unique, common identifiers for publicly known information security vulnerabilities.
CVE identifiers have a status of either "entry" or "candidate". Entry status indicates acceptance of a CVE
Identifier into the CVE List, while a status of "candidate" (for "candidates," "candidate numbers," or "CANs")
indicates an identifier under review for inclusion in the list.
If the Board accepts a candidate, its status is updated to "entry" on the CVE List. However, the assignment of
a candidate number is not a guarantee that it will become an official CVE entry.
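CVE Identifiers follow the pattern CVE-YYYY-NNNN, where the sequence number has four or more digits (variable-length sequence numbers were introduced with the 2014 syntax change). A small validator sketch:

```python
# Validate CVE identifier strings of the form CVE-YYYY-NNNN,
# where NNNN is four or more digits.

import re

CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_cve_id(s: str) -> bool:
    """Return True if s is a syntactically valid CVE identifier."""
    return CVE_RE.fullmatch(s) is not None

print(is_cve_id("CVE-2014-0160"))  # True  (the Heartbleed identifier)
print(is_cve_id("CVE-14-160"))     # False (year must be four digits)
```

Such a check only validates syntax; whether an identifier corresponds to an accepted entry or a candidate is a property of the CVE List itself.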
Common Event Expression (CEE)
CEE is an open, practical, extensible, and industry-driven event logging specification with the goal of unifying event
representation and classification.
It is developed as a coordinated industry initiative with participation from end-user groups, logging providers, SIEM vendors,
independent experts, and U.S. government organizations.
Development is facilitated by The MITRE Corporation as part of the Making Security Measurable initiative.
Common vocabulary and taxonomy for event reporting: One common problem in current log management
practice is that terms mean different things to different products, organizations, and communities. An IP address
may mean just an IPv4 address to one community while it might mean an IPv6 address to another; it may mean
only external addresses to one group and both external and internal addresses to another. Normalizing these terms
across communities of interest will allow for a common understanding of terms and support both ease of
implementation for log management vendors and ease of use for end users.
Log serialization: The most obvious problem in current log management is the lack of a common syntax for
reporting log data. This includes both the data formats used (XML vs. formatted text in many cases) as well as
common header fields (such as an event record ID).
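As a loose illustration of log serialization with common header fields, the sketch below emits each event as one JSON line carrying a record ID and an event type. This mimics the idea of a shared syntax; it is not the official CEE encoding, and the field names are our own:

```python
# Illustrative log serialization: every event record carries common
# header fields (a record ID and an event type) and is emitted as a
# single JSON line. NOT the official CEE syntax -- a sketch only.

import json

def serialize_event(record_id: int, event_type: str, fields: dict) -> str:
    """Serialize one event record as a JSON line with common headers."""
    record = {"id": record_id, "type": event_type, **fields}
    return json.dumps(record, sort_keys=True)

line = serialize_event(1, "auth.login", {"user": "alice", "src_ip": "203.0.113.7"})
print(line)
```

Because every producer would use the same field names and encoding, a consumer could correlate records from different products without per-product parsers, which is the interoperability goal described above.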
Log transport: Although syslog is a de facto standard in the log transport space, it is not supported across all
common operating systems and has several key technical weaknesses. Transition to a more feature-complete
transport is hindered by a lack of suitable substitutes and the implementation cost of changing existing
infrastructure.
Log requirements definition: Another common issue is correlating similar event records from different
products across a heterogeneous IT infrastructure. For example, large enterprises may run both Unix servers
and servers by commercial vendors. While such a company would wish for all events to be reported consistently
across their server installs, there are generally differences between products both in which events they report
and in which information is included in event records.
Common Configuration Enumeration
CCE™ provides unique identifiers to system configuration issues in order to facilitate fast and accurate
correlation of configuration data across multiple information sources and tools.
CCE Identifiers can be used to associate checks in configuration assessment tools with statements in
configuration best-practice documents and security guides, are the main identifiers used for the settings in the
U.S. Federal Desktop Core Configuration (FDCC) data file downloads, and are a key component for enabling
security content automation.
Why CCE? When dealing with information from multiple sources, use of consistent identifiers can improve data
correlation; enable interoperability; foster automation; and ease the gathering of metrics for use in situation
awareness, IT security audits, and regulatory compliance.
Currently, CCE is focused solely on software-based configurations. Recommendations for hardware and/or
physical configurations are not supported.
Each entry on the CCE List contains the following five attributes:
CCE Identifier Number - a unique identifier, e.g., “CCE-3243-3”
Description - a human-understandable description of the configuration issue
Conceptual Parameters - parameters that would need to be specified in order to implement a CCE on a system
Associated Technical Mechanisms - for any given configuration issue there may be one or more ways to implement the
desired result
References - pointers to the specific sections of the documents or tools in which the configuration issue is described in detail
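The five attributes above map naturally onto a simple record type. A sketch (the field names are our own; the description and mechanisms shown are illustrative, not an actual CCE entry):

```python
# A CCE List entry modeled as a record type with the five attributes
# described above. Example values are illustrative only.

from dataclasses import dataclass

@dataclass
class CCEEntry:
    identifier: str               # e.g. "CCE-3243-3"
    description: str              # human-understandable configuration issue
    conceptual_parameters: list   # values needed to implement the CCE
    technical_mechanisms: list    # ways to achieve the desired result
    references: list              # pointers into guides/tools

entry = CCEEntry(
    identifier="CCE-3243-3",
    description="Illustrative configuration issue description.",
    conceptual_parameters=["illustrative parameter"],
    technical_mechanisms=["local security policy", "group policy object"],
    references=["pointer to a security guide section"],
)
print(entry.identifier)  # CCE-3243-3
```

Keeping the identifier separate from the mechanisms mirrors the CCE design: one configuration issue, many possible ways to implement it.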
Common Platform Enumeration (CPE)
Background
Secure information systems depend on reliable, cost-effective Software Asset Management practices that support security
assessment. IT managers need highly reliable and automatable software inventory processes that provide accurate,
up-to-the-minute details about the operating systems, software applications, and hardware devices that are installed and
available for use.
Specification languages exist such as Common Vulnerabilities and Exposures (CVE®) for describing vulnerabilities, Open
Vulnerability and Assessment Language (OVAL®) for testing system state, and Extensible Configuration Checklist Description
Format (XCCDF) for expressing security checklists.
What these languages all have in common, however, is a need to refer to IT products and platforms in a standardized way
that is suitable for machine interpretation and processing. CPE satisfies that need.
Solutions
Developed specifically to work with specification languages, CPE provides:
A standard machine-readable format for encoding names of IT products and platforms.
A set of procedures for comparing names.
A language for constructing “applicability statements” that combine CPE names with simple logical operators.
A standard notion of a CPE Dictionary.
CPE™ is a standardized method of describing and identifying classes of applications, operating systems, and hardware
devices present among an enterprise’s computing assets.
CPE can be used as a source of information for enforcing and verifying IT management policies relating to these assets, such
as vulnerability, configuration, and remediation policies.
IT management tools can collect information about installed products, identify products using their CPE names, and use this
standardized information to help make fully or partially automated decisions regarding the assets.
Common Platform Enumeration (CPE) Dictionary