
LECTURE 5: NETWORK THREAT MANAGEMENT PART 1

Network Standards and Compliance - Dhanesh More


Agenda

 Common Vulnerability Scoring System (CVSS)


 Common Weakness Scoring System (CWSS)
 Common Vulnerabilities and Exposures (CVE)
 Common Event Expression (CEE)
 Common Configuration Enumeration (CCE)
 Common Platform Enumeration (CPE)
Common Vulnerability Scoring System (CVSS)

 The Common Vulnerability Scoring System (CVSS) provides an open framework for communicating the characteristics and impacts of IT vulnerabilities.
 CVSS enables IT managers, vulnerability bulletin providers, security vendors, application vendors and researchers to all benefit by adopting this common language for scoring IT vulnerabilities.
 CVSS consists of three metric groups: Base, Temporal and Environmental. Each group produces a numeric score ranging from 0 to 10, and a Vector, a compressed textual representation that reflects the values used to derive the score.
 The Base group represents the intrinsic qualities of a vulnerability.
 The Temporal group reflects the characteristics of a vulnerability that change over time.
 The Environmental group represents the characteristics of a vulnerability that are unique to a user's environment.
Common Vulnerability Scoring System (CVSS)

Prime benefits of using CVSS:

 Standardized Vulnerability Scores: When an organization normalizes vulnerability scores across all of its software and hardware platforms, it can leverage a single vulnerability management policy. This policy may be similar to a service level agreement (SLA) that states how quickly a particular vulnerability must be validated and remediated.

 Open Framework: Users can be confused when a vulnerability is assigned an arbitrary score.
"Which properties gave it that score? How does it differ from the one released yesterday?"
With CVSS, anyone can see the individual characteristics used to derive a score.

 Prioritized Risk: When the environmental score is computed, the vulnerability now becomes
contextual. That is, vulnerability scores are now representative of the actual risk to an
organization. Users know how important a given vulnerability is in relation to other
vulnerabilities.
How does CVSS work?

 When the base metrics are assigned values, the base equation calculates a score ranging from 0 to 10, and a vector is created.
 The vector facilitates the "open" nature of the framework. It is a text string that contains the values assigned to each metric, and it is used to communicate exactly how the score for each vulnerability is derived.
 Therefore, the vector should always be displayed with the vulnerability score.
 If desired, the base score can be refined by assigning values to the temporal and environmental metrics.
 If a temporal score is needed, the temporal equation will combine the temporal metrics with the base score
to produce a temporal score ranging from 0 to 10.
 Similarly, if an environmental score is needed, the environmental equation will combine the environmental
metrics with the temporal score to produce an environmental score ranging from 0 to 10.
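As a quick illustration of the vector format (a sketch, not official CVSS tooling), the base vector for a typical remotely exploitable flaw might look like "AV:N/AC:L/Au:N/C:P/I:P/A:P", and it can be split back into its metric/value pairs with a few lines of Python; the function name below is purely illustrative:

```python
# Minimal sketch (not an official parser): split a CVSS v2 vector such as
# "AV:N/AC:L/Au:N/C:P/I:P/A:P" into its metric/value pairs; no validation is done.
def parse_cvss_vector(vector: str) -> dict:
    return dict(part.split(":", 1) for part in vector.split("/"))

print(parse_cvss_vector("AV:N/AC:L/Au:N/C:P/I:P/A:P"))
# -> {'AV': 'N', 'AC': 'L', 'Au': 'N', 'C': 'P', 'I': 'P', 'A': 'P'}
```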
FAQs
Who performs the scoring?
 Generally, the base and temporal metrics are specified by vulnerability bulletin analysts, security product
vendors, or application vendors because they typically have better information about the characteristics of
a vulnerability than do users
 The environmental metrics, however, are specified by users because they are best able to assess the
potential impact of a vulnerability within their own environments.
Who owns CVSS?
 CVSS is under the custodial care of the Forum of Incident Response and Security Teams (FIRST). However, it is a completely free and open standard.
 No organization "owns" CVSS, and membership in FIRST is not required to use or implement it.
Who is using CVSS?
 Vulnerability Bulletin Providers
 Software Application Vendors
 User Organizations
 Vulnerability Scanning and Management
 Security (Risk) Management
 Researchers
Base Metrics

Access Vector (AV)

This metric reflects how the vulnerability is exploited. The more remote an attacker can be to attack a host, the greater the vulnerability score. The possible values are:

 Local (L): A vulnerability exploitable with only local access requires the attacker to have either physical access to the vulnerable system or a local (shell) account. Examples of locally exploitable vulnerabilities are peripheral attacks such as Firewire/USB DMA attacks, and local privilege escalations (e.g., sudo).
 Adjacent Network (A): A vulnerability exploitable with adjacent network access requires the attacker to have access to either the broadcast or collision domain of the vulnerable software. Examples of local networks include local IP subnet, Bluetooth, IEEE 802.11, and local Ethernet segment.
 Network (N): A vulnerability exploitable with network access means the vulnerable software is bound to the network stack and the attacker does not require local network access or local access. Such a vulnerability is often termed "remotely exploitable". An example of a network attack is an RPC buffer overflow.

Authentication (Au)

This metric measures the number of times an attacker must authenticate to a target in order to exploit a vulnerability. The possible values are:

 Multiple (M): Exploiting the vulnerability requires that the attacker authenticate two or more times, even if the same credentials are used each time. An example is an attacker authenticating to an operating system in addition to providing credentials to access an application hosted on that system.
 Single (S): The vulnerability requires an attacker to be logged into the system (such as at a command line or via a desktop session or web interface).
 None (N): Authentication is not required to exploit the vulnerability.
Base Metrics (Contd)..

Access Complexity (AC)

This metric measures the complexity of the attack required to exploit the vulnerability once an attacker has gained access to the target system. The possible values are:

 High (H): Specialized access conditions exist. For example:
   In most configurations, the attacking party must already have elevated privileges or spoof additional systems in addition to the attacking system (e.g., DNS hijacking).
   The attack depends on social engineering methods that would be easily detected by knowledgeable people. For example, the victim must perform several suspicious or atypical actions.
   The vulnerable configuration is seen very rarely in practice.
   If a race condition exists, the window is very narrow.
 Medium (M): The access conditions are somewhat specialized; the following are examples:
   The attacking party is limited to a group of systems or users at some level of authorization, possibly untrusted.
   Some information must be gathered before a successful attack can be launched.
   The affected configuration is non-default, and is not commonly configured (e.g., a vulnerability present when a server performs user account authentication via a specific scheme, but not present for another authentication scheme).
   The attack requires a small amount of social engineering that might occasionally fool cautious users (e.g., phishing attacks that modify a web browser's status bar to show a false link, having to be on someone's buddy list before sending an IM exploit).
 Low (L): Specialized access conditions or extenuating circumstances do not exist. The following are examples:
   The affected product typically requires access to a wide range of systems and users, possibly anonymous and untrusted (e.g., Internet-facing web or mail server).
   The affected configuration is default or ubiquitous.
   The attack can be performed manually and requires little skill or additional information gathering.
   The race condition is a lazy one (i.e., it is technically a race but easily winnable).
Base Metrics (Contd)..

Confidentiality Impact (C)


 This metric measures the impact on confidentiality of a successfully exploited
vulnerability.
 Values: None, Partial and Complete

Integrity Impact (I)


 This metric measures the impact to integrity of a successfully exploited vulnerability
 Values: None, Partial and Complete

Availability Impact (A)


 This metric measures the impact to availability of a successfully exploited
vulnerability
 Values: None, Partial and Complete
Temporal Metrics

Exploitability (E)
 This metric measures the current state of exploit techniques or code availability. Public availability of easy-to-
use exploit code increases the number of potential attackers by including those who are unskilled, thereby
increasing the severity of the vulnerability.
 Metric Values: Unproven, POC, Functional, High, Not Defined
Remediation Level (RL)
 The remediation level of a vulnerability is an important factor for prioritization. The typical vulnerability is unpatched when initially published. Workarounds or hotfixes may offer interim remediation until an official patch or upgrade is issued. Each of these respective stages adjusts the temporal score downwards, reflecting the decreasing urgency as remediation becomes final.
 Metric Values: Official Fix (OF), Temporary Fix (TF), Workaround (W), Unavailable (U), Not Defined (ND)
Report Confidence (RC)
 This metric measures the degree of confidence in the existence of the vulnerability and the credibility of the known technical details. Sometimes only the existence of a vulnerability is publicized, without specific details.
 The vulnerability may later be corroborated and then confirmed through acknowledgement by the author or
vendor of the affected technology.
 Metric Values: Unconfirmed (UC), Uncorroborated (UR), Confirmed (C), Not Defined (ND)
Environmental Metrics

Collateral Damage Potential (CDP)


This metric measures the potential for loss of life or physical assets through damage or theft of property or equipment. The metric may also measure economic loss of productivity or revenue.
Metric Values: None (N), Low (L), Low-Medium (LM), Medium-High (MH), High (H), Not Defined (ND)
Target Distribution (TD)
This metric measures the proportion of vulnerable systems. It is meant as an environment-specific indicator in order to approximate the percentage of systems that could be affected by the vulnerability.
Metric Values: None (N), Low (L), Medium (M), High (H), Not Defined (ND)
Security Requirements (CR, IR, AR)
These metrics enable the analyst to customize the CVSS score depending on the importance of the affected IT asset to a user's organization, measured in terms of confidentiality, integrity, and availability. That is, if an IT asset supports a business function for which availability is most important, the analyst can assign a greater value to availability, relative to confidentiality and integrity.
CVSS follows the general model of FIPS 199, but does not require organizations to use any particular system for assigning the low, medium, and high impact ratings.
Metric Values: Low (L), Medium (M), High (H), Not Defined (ND)
Before we get to scoring mechanism…

 SCORING TIP #1: Vulnerability scoring should not take into account any interaction with other vulnerabilities. That is, each vulnerability
should be scored independently.
 SCORING TIP #2: When scoring a vulnerability, consider the direct impact to the target host only.
 SCORING TIP #3: Many applications, such as Web servers, can be run with different privileges, and scoring the impact involves making
an assumption as to what privileges are used. Therefore, vulnerabilities should be scored according to the privileges most commonly
used.
 SCORING TIP #4: When scoring the impact of a vulnerability that has multiple exploitation methods (attack vectors), the analyst should
choose the exploitation method that causes the greatest impact, rather than the method which is most common, or easiest to perform.
 SCORING TIP #5: When a vulnerability can be exploited both locally and from the network, the "Network" value should be chosen.
 SCORING TIP #6: Many client applications and utilities have local vulnerabilities that can be exploited remotely either through user-complicit actions or via automated processing. In these cases, the Access Vector should be scored as "Network".
 SCORING TIP #7: If the vulnerability exists in an authentication scheme itself (e.g., PAM, Kerberos) or an anonymous service (e.g.,
public FTP server), the metric should be scored as "None" because the attacker can exploit the vulnerability without supplying valid
credentials
 SCORING TIP #8: It is important to note that the Authentication metric is different from Access Vector.
 SCORING TIP #9: Vulnerabilities that give root-level access should be scored with complete loss of confidentiality, integrity, and
availability, while vulnerabilities that give user-level access should be scored with only partial loss of confidentiality, integrity, and
availability.
 SCORING TIP #10: Vulnerabilities with a partial or complete loss of integrity can also cause an impact to availability.
Equations
Base Equation
BaseScore = round_to_1_decimal(((0.6*Impact) + (0.4*Exploitability) - 1.5) * f(Impact))
Impact = 10.41 * (1 - (1-ConfImpact)*(1-IntegImpact)*(1-AvailImpact))
Exploitability = 20 * AccessVector * AccessComplexity * Authentication
f(Impact) = 0 if Impact = 0, 1.176 otherwise

AccessVector = case AccessVector of
  requires local access: 0.395
  adjacent network accessible: 0.646
  network accessible: 1.0
AccessComplexity = case AccessComplexity of
  high: 0.35
  medium: 0.61
  low: 0.71
Authentication = case Authentication of
  requires multiple instances of authentication: 0.45
  requires single instance of authentication: 0.56
  requires no authentication: 0.704
ConfImpact / IntegImpact / AvailImpact = case Confidentiality / Integrity / Availability Impact of
  none: 0.0
  partial: 0.275
  complete: 0.660

Temporal Equation
TemporalScore = round_to_1_decimal(BaseScore * Exploitability * RemediationLevel * ReportConfidence)

Exploitability = case Exploitability of
  unproven: 0.85
  proof-of-concept: 0.90
  functional: 0.95
  high: 1.00
  not defined: 1.00
RemediationLevel = case RemediationLevel of
  official-fix: 0.87
  temporary-fix: 0.90
  workaround: 0.95
  unavailable: 1.00
  not defined: 1.00
ReportConfidence = case ReportConfidence of
  unconfirmed: 0.90
  uncorroborated: 0.95
  confirmed: 1.00
  not defined: 1.00

Environmental Equation
EnvironmentalScore = round_to_1_decimal((AdjustedTemporal + (10 - AdjustedTemporal)*CollateralDamagePotential) * TargetDistribution)
AdjustedTemporal = TemporalScore recomputed with the BaseScore's Impact sub-equation replaced with the AdjustedImpact equation
AdjustedImpact = min(10, 10.41*(1 - (1-ConfImpact*ConfReq)*(1-IntegImpact*IntegReq)*(1-AvailImpact*AvailReq)))

CollateralDamagePotential = case CollateralDamagePotential of
  none: 0
  low: 0.1
  low-medium: 0.3
  medium-high: 0.4
  high: 0.5
  not defined: 0
TargetDistribution = case TargetDistribution of
  none: 0
  low: 0.25
  medium: 0.75
  high: 1.00
  not defined: 1.00
ConfReq / IntegReq / AvailReq = case ConfReq / IntegReq / AvailReq of
  low: 0.5
  medium: 1.0
  high: 1.51
  not defined: 1.0
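To make these equations concrete, the following is an unofficial Python sketch of the base, temporal, and environmental calculations using the weights listed above. The dictionary and function names are invented for this example, and Python's round() stands in for round_to_1_decimal.

```python
# Unofficial sketch of the CVSS v2 equations shown above; names are illustrative.
AV  = {"L": 0.395, "A": 0.646, "N": 1.0}                       # Access Vector
AC  = {"H": 0.35, "M": 0.61, "L": 0.71}                        # Access Complexity
AU  = {"M": 0.45, "S": 0.56, "N": 0.704}                       # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}                       # C/I/A impact
E   = {"U": 0.85, "POC": 0.90, "F": 0.95, "H": 1.0, "ND": 1.0} # Exploitability
RL  = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0, "ND": 1.0} # Remediation Level
RC  = {"UC": 0.90, "UR": 0.95, "C": 1.0, "ND": 1.0}            # Report Confidence
CDP = {"N": 0, "L": 0.1, "LM": 0.3, "MH": 0.4, "H": 0.5, "ND": 0}
TD  = {"N": 0, "L": 0.25, "M": 0.75, "H": 1.0, "ND": 1.0}
REQ = {"L": 0.5, "M": 1.0, "H": 1.51, "ND": 1.0}               # CR / IR / AR

def base_score(av, ac, au, c, i, a, impact=None):
    if impact is None:
        impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

def temporal_score(base, e, rl, rc):
    return round(base * E[e] * RL[rl] * RC[rc], 1)

def environmental_score(av, ac, au, c, i, a, e, rl, rc, cdp, td, cr, ir, ar):
    # AdjustedImpact replaces the Impact sub-equation, then temporal is recomputed.
    adjusted_impact = min(10, 10.41 * (1 - (1 - CIA[c] * REQ[cr])
                                         * (1 - CIA[i] * REQ[ir])
                                         * (1 - CIA[a] * REQ[ar])))
    adjusted_temporal = temporal_score(
        base_score(av, ac, au, c, i, a, impact=adjusted_impact), e, rl, rc)
    return round((adjusted_temporal
                  + (10 - adjusted_temporal) * CDP[cdp]) * TD[td], 1)

# Example: the vector AV:N/AC:L/Au:N/C:P/I:P/A:P yields a base score of 7.5.
print(base_score("N", "L", "N", "P", "P", "P"))
```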
CWSS Overview

 CWSS is a part of the Common Weakness Enumeration (CWE) project, co-sponsored by the Software Assurance
program in the National Cyber Security Division (NCSD) of the US Department of Homeland Security (DHS).
 The Common Weakness Scoring System (CWSS) provides a mechanism for scoring weaknesses in a consistent, flexible, open manner while accommodating context for the various business domains.
 It is a collaborative, community-based effort that is addressing the needs of its stakeholders across government, academia, and industry.

 CWSS provides:
  a common framework for prioritizing security errors ("weaknesses") that are discovered in software applications
  a quantitative measurement of the unfixed weaknesses that are present within a software application
  a means for developers to prioritize unfixed weaknesses within their own software
  in conjunction with the Common Weakness Risk Analysis Framework (CWRAF), a way for consumers to identify the most important weaknesses for their business domains, in order to inform their acquisition and protection activities as one part of the larger process of achieving software assurance.
Stakeholders

 Software developers: often operate within limited time frames, due to release cycles and limited resources. As a result, they are unable to investigate and fix every reported weakness. They may choose to concentrate on the worst problems or on those that are easiest to fix.
 Software development managers: create strategies for prioritizing and removing entire classes of
weaknesses from the entire code base, or at least the portion that is deemed to be most at risk, by defining
custom "Top-N" lists. They must understand the security implications of integrating third-party software, which
may contain its own weaknesses.
 Software acquirers: want to obtain third-party software with a reasonable level of assurance that the
software provider has performed due diligence in removing or avoiding weaknesses that are most critical to
the acquirer's business and mission. Related stakeholders include CIOs, CSOs, system administrators, and end
users of the software.
 Code analysis vendors and consultants: want to provide a consistent, community-vetted scoring mechanism
for different customers.
 Evaluators of code analysis capabilities: evaluate the capabilities of code analysis techniques (e.g., NIST
SAMATE). They could use a consistent weakness scoring mechanism to support sampling of reported findings,
as well as understanding the severity of these findings without depending on ad hoc scoring methods that
may vary widely by tool/technique.
 Other stakeholders: may include vulnerability researchers, advocates of secure development, and
compliance-based analysts (e.g., PCI DSS).
Scoring Methods within CWSS

 Targeted: Score individual weaknesses that are discovered in the design or implementation of a specific
("targeted") software package, e.g. a buffer overflow in the username of an authentication routine in line
1234 of vuln.c in an FTP server package.
 Generalized: Score classes of weaknesses independent of any particular software package, in order to
prioritize them relative to each other (e.g. "buffer overflows are higher priority than memory leaks").
 Context-adjusted: Modify scores in accordance with the needs of a specific analytical context that may
integrate business/mission priorities, threat environments, risk tolerance, etc. These needs are captured using
vignettes that link inherent characteristics of weaknesses with higher-level business considerations. This
method could be applied to both targeted and generalized scoring.
 Aggregated: Combine the results of multiple, lower-level weakness scores to produce a single, overall score
(or "grade").
 The current focus for CWSS is on the Targeted scoring method and a framework for context-adjusted
scoring.
CWSS 0.6 Scoring for Targeted Software

 Scoring
 In CWSS 0.6, the score for a weakness, or for a weakness bug report ("finding"), is calculated using 18 different factors across three metric groups:
 the Base Finding group, which captures the inherent risk of the weakness, confidence in the accuracy of the
finding, and strength of controls.
 the Attack Surface group, which captures the barriers that an attacker must cross in order to exploit the
weakness.
 the Environmental group, which includes factors that may be specific to a particular operational context,
such as business impact, likelihood of exploit, and existence of external controls.
Base Finding Metric Group

Technical Impact (TI):
 Critical (C) 1.0; High (H) 0.9; Medium (M) 0.6; Low (L) 0.3; None (N) 0.0; Default (D) 0.6; Unknown (Unk) 0.5; Not Applicable (NA) 1.0; Quantified (Q)

Acquired Privilege (AP):
 Administrator (A) 1.0; Partially-Privileged User (P) 0.9; Regular User (RU) 0.7; Guest (G) 0.6; None (N) 0.1; Default (D) 0.7; Unknown (Unk) 0.5; Not Applicable (NA) 1.0

Acquired Privilege Layer (AL):
 Application (A) 1.0; System (S) 0.9; Network (N) 0.7; Enterprise (E) 1.0; Default (D) 0.9; Unknown (Unk) 0.5; Not Applicable (NA) 1.0

Internal Control Effectiveness (IC):
 None (N) 1.0; Limited (L) 0.9; Moderate (M) 0.7; Indirect (Defense-in-Depth) (I) 0.5; Best-Available (B) 0.3; Complete (C) 0.0; Default (D) 0.6; Unknown (Unk) 0.5; Not Applicable (NA) 1.0

Finding Confidence (FC):
 Proven True (T) 1.0; Proven Locally True (LT) 0.8; Proven False (F) 0.0; Default (D) 0.8; Unknown (Unk) 0.5; Not Applicable (NA) 1.0; Quantified (Q)

The Base Finding subscore (BaseFindingSubscore) is calculated as follows:

BaseFindingSubscore = [ (10 * TechnicalImpact + 5*(AcquiredPrivilege + AcquiredPrivilegeLayer) + 5*FindingConfidence) * f(TechnicalImpact) * InternalControlEffectiveness ] * 4.0

f(TechnicalImpact) = 0 if TechnicalImpact = 0; otherwise f(TechnicalImpact) = 1.
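As an illustration only (not official CWSS tooling), the Base Finding subscore formula can be sketched in Python; the function name is invented, and the weights plugged in below are taken from the value tables above.

```python
# Illustrative sketch of the Base Finding subscore; the function name is invented
# and the example weights come from the value tables above.
def base_finding_subscore(technical_impact, acquired_privilege,
                          acquired_privilege_layer,
                          internal_control_effectiveness, finding_confidence):
    f_ti = 0 if technical_impact == 0 else 1
    return ((10 * technical_impact
             + 5 * (acquired_privilege + acquired_privilege_layer)
             + 5 * finding_confidence)
            * f_ti * internal_control_effectiveness) * 4.0

# Example: TI=High (0.9), AP=Administrator (1.0), AL=Application (1.0),
# IC=None (1.0), FC=Proven True (1.0)
print(round(base_finding_subscore(0.9, 1.0, 1.0, 1.0, 1.0), 1))  # 96.0
```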
Attack Surface Metric Group

Deployment Scope (SC):
 All (All) 1.0; Moderate (Mod) 0.9; Rare (Rare) 0.5; Potentially Reachable (Pot) 0.1; Default (D) 0.7; Unknown (Unk) 0.5; Not Applicable (NA) 1.0; Quantified (Q)

Required Privilege (RP):
 None (N) 1.0; Guest (G) 0.9; Regular User (RU) 0.7; Partially-Privileged User (P) 0.6; Administrator (A) 0.1; Default (D) 0.7; Unknown (Unk) 0.5; Not Applicable (NA) 1.0

Required Privilege Layer (RL):
 System (S) 0.9; Application (A) 1.0; Network (N) 0.7; Enterprise (E) 1.0; Default (D) 0.9; Unknown (Unk) 0.5; Not Applicable (NA) 1.0

Access Vector (AV):
 Internet (I) 1.0; Intranet (R) 0.8; Private Network (V) 0.8; Adjacent Network (A) 0.7; Local (L) 0.5; Physical (P) 0.2; Default (D) 0.75; Unknown (U) 0.5; Not Applicable (NA) 1.0

Authentication Instances (AI):
 None (N) 1.0; Single (S) 0.8; Multiple (M) 0.5; Default (D) 0.8; Unknown (Unk) 0.5; Not Applicable (NA) 1.0

Authentication Strength (AS):
 Strong (S) 0.7; Moderate (M) 0.8; Weak (W) 0.9; None (N) 1.0; Default (D) 0.85; Unknown (Unk) 0.5; Not Applicable (NA) 1.0

Level of Interaction (IN):
 Automated (Aut) 1.0; Limited/Typical (Ltd) 0.9; Moderate (Mod) 0.8; Opportunistic (Opp) 0.3; High (High) 0.1; No Interaction (NI) 0.0; Default (D) 0.55; Unknown (Unk) 0.5; Not Applicable (NA) 1.0

The Attack Surface subscore is calculated as:

AttackSurfaceSubscore = [ 20*(RequiredPrivilege + RequiredPrivilegeLayer + AccessVector) + 20*DeploymentScope + 10*LevelOfInteraction + 5*(AuthenticationStrength + AuthenticationInstances) ] / 100.0
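Continuing the same illustrative sketch, the Attack Surface subscore formula translates directly; again, the function name is invented and the example weights come from the tables above.

```python
# Illustrative sketch of the Attack Surface subscore; the function name is
# invented and the example weights come from the value tables above.
def attack_surface_subscore(required_privilege, required_privilege_layer,
                            access_vector, deployment_scope,
                            level_of_interaction, authentication_strength,
                            authentication_instances):
    return (20 * (required_privilege + required_privilege_layer + access_vector)
            + 20 * deployment_scope
            + 10 * level_of_interaction
            + 5 * (authentication_strength + authentication_instances)) / 100.0

# Example: RP=None (1.0), RL=Application (1.0), AV=Internet (1.0), SC=All (1.0),
# IN=Automated (1.0), AS=None (1.0), AI=None (1.0)
print(attack_surface_subscore(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0))  # 1.0
```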
Environmental Metric Group

Business Impact (BI):
 Critical (C) 1.0; High (H) 0.9; Medium (M) 0.6; Low (L) 0.3; None (N) 0.0; Default (D) 0.6; Unknown (Unk) 0.5; Not Applicable (NA) 1.0; Quantified (Q)

Likelihood of Discovery (DI):
 High (H) 1.0; Medium (M) 0.6; Low (L) 0.2; Default (D) 0.6; Unknown (Unk) 0.5; Not Applicable (NA) 1.0; Quantified (Q)

Likelihood of Exploit (EX):
 High (H) 1.0; Medium (M) 0.6; Low (L) 0.2; None (N) 0.0; Default (D) 0.6; Unknown (Unk) 0.5; Not Applicable (NA) 1.0; Quantified (Q)

External Control Effectiveness (EC):
 None (N) 1.0; Limited (L) 0.9; Moderate (M) 0.7; Indirect (Defense-in-Depth) (I) 0.5; Best-Available (B) 0.3; Complete (C) 0.1; Default (D) 0.6; Unknown (Unk) 0.5; Not Applicable (NA) 1.0

Remediation Effort (RE):
 Extensive (E) 1.0; Moderate (M) 0.9; Limited (L) 0.8; Default (D) 0.9; Unknown (Unk) 0.5; Not Applicable (NA) 1.0; Quantified (Q)

Prevalence (P):
 Widespread (W) 1.0; High (H) 0.9; Common (C) 0.8; Limited (L) 0.7; Default (D) 0.85; Unknown (U) 0.5; Not Applicable (NA) 1.0; Quantified (Q)

The Environmental subscore (EnvironmentalSubscore) is calculated as:

EnvironmentalSubscore = [ (10 * BusinessImpact + 3*(LikelihoodOfDiscovery + LikelihoodOfExploit) + 3*Prevalence + RemediationEffort) * f(BusinessImpact) * ExternalControlEffectiveness ] / 20.0

f(BusinessImpact) = 0 if BusinessImpact = 0; otherwise f(BusinessImpact) = 1.
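A matching sketch for the Environmental subscore is below. The slides do not show how the three subscores are combined; in the CWSS specification they are multiplied together to give an overall score between 0 and 100, and the last lines of this sketch assume that combination.

```python
# Illustrative sketch of the Environmental subscore; the function name is
# invented and the example weights come from the value tables above.
def environmental_subscore(business_impact, likelihood_of_discovery,
                           likelihood_of_exploit, prevalence,
                           remediation_effort, external_control_effectiveness):
    f_bi = 0 if business_impact == 0 else 1
    return ((10 * business_impact
             + 3 * (likelihood_of_discovery + likelihood_of_exploit)
             + 3 * prevalence
             + remediation_effort)
            * f_bi * external_control_effectiveness) / 20.0

env = environmental_subscore(0.9, 1.0, 1.0, 1.0, 1.0, 1.0)
print(round(env, 2))  # 0.95

# Assumed combination (per the CWSS specification, not shown on these slides):
# overall score = BaseFindingSubscore * AttackSurfaceSubscore * EnvironmentalSubscore
base_finding, attack_surface = 96.0, 1.0   # values from the earlier sketches
print(round(base_finding * attack_surface * env, 1))  # 91.2
```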
Comparison: CVSS and CWSS

 CVSS: Confidentiality Impact (C), Integrity Impact (I), Availability Impact (A), Security Requirements (CR, IR, AR), Collateral Damage Potential (CDP)
  CWSS: Technical Impact, Business Value Context
  Notes: CWSS attempts to use a more fine-grained "Technical Impact" model than confidentiality, integrity, and availability. Business Value Context adjustments effectively encode the security requirements from the Environmental portion of CVSS. The CDP is indirectly covered within the BVC's linkage between business concerns and technical impacts.

 CVSS: Access Complexity (AC), Target Distribution (TD)
  CWSS: Deployment Scope
  Notes: Deployment Scope is indirectly covered by CVSS' Access Complexity, which combines multiple distinct factors into a single item. It also has an indirect association with Target Distribution (TD).

 CVSS: Access Vector (AV)
  CWSS: Access Vector
  Notes: The values are similar, but CWSS distinguishes between physical access and local (shell/account) access.

 CVSS: Access Complexity (AC)
  CWSS: Required Privilege Level
  Notes: Required Privilege Level is indirectly covered by CVSS' Access Complexity, which combines multiple distinct factors into a single item.

 CVSS: N/A
  CWSS: Authentication Strength
  Notes: This is not directly specified within CVSS, but scorers might consider the authentication strength when evaluating Access Complexity (AC).

 CVSS: Authentication (Au)
  CWSS: Authentication Instances

 CVSS: N/A
  CWSS: Likelihood of Discovery
  Notes: Within many CVSS use-cases, the vulnerability has already been discovered and disclosed by another party when CVSS scoring takes place, so there is no need to track the likelihood of discovery, as the likelihood is (effectively) 1.0. However, within some CWSS use-cases, the issue is only known to the developer at the time of scoring, and the developer may choose to increase the priority of issues that are most likely to be discovered.
Comparison: CVSS and CWSS (Continued)..

 CVSS: N/A
  CWSS: Likelihood of Exploit
  Notes: This is not covered in CVSS.

 CVSS: Access Complexity (AC)
  CWSS: Interaction Requirements

 CVSS: Access Complexity (AC), Remediation Level (RL)
  CWSS: Internal Control Effectiveness (IC)
  Notes: The presence (or absence) of controls/mitigations may affect the CVSS Access Complexity.

 CVSS: Access Complexity (AC)
  CWSS: External Control Effectiveness (EC)
  Notes: The presence (or absence) of controls/mitigations may affect the CVSS Access Complexity. However, a single CVE vulnerability could have different CVSS scores based on vendor-specific configurations.

 CVSS: Report Confidence (RC)
  CWSS: Finding Confidence

 CVSS: N/A
  CWSS: Remediation Effort (RE)

 CVSS: Exploitability (E)
  CWSS: N/A

 CVSS: Target Distribution (TD)
  CWSS: N/A
  Notes: There is no direct connection in CWSS 0.3 for target distribution; there is no consideration of how many installations may be using the software. This may be added to future versions of CWSS.
CVE- Common Vulnerabilities and Exposures

 CVE® is a publicly available and free to use list or dictionary of standardized identifiers for common
computer vulnerabilities and exposures.
 The Common Vulnerabilities and Exposures or CVE system provides a reference-method for publicly known
information-security vulnerabilities and exposures
 MITRE Corporation maintains the system, with funding from the National Cyber Security Division of the
United States Department of Homeland Security. CVE is used by the Security Content Automation Protocol.
 MITRE Corporation's documentation defines CVE Identifiers (also called "CVE names", "CVE numbers", "CVE-
IDs", and "CVEs") as unique, common identifiers for publicly known information security vulnerabilities.
 CVE identifiers have a status of either "entry" or "candidate". Entry status indicates acceptance of a CVE
Identifier into the CVE List, while a status of "candidate" (for "candidates," "candidate numbers," or "CANs")
indicates an identifier under review for inclusion in the list.
 If the Board accepts a candidate, its status is updated to "entry" on the CVE List. However, the assignment of
a candidate number is not a guarantee that it will become an official CVE entry
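As a small illustration (a hypothetical helper, not part of any official CVE tooling), CVE identifiers follow the pattern "CVE-YYYY-NNNN", where the sequence number has four or more digits since the 2014 syntax change, and can be checked with a simple regular expression:

```python
import re

# Hypothetical helper: checks that a string follows the CVE identifier pattern
# "CVE-YYYY-NNNN", with four or more digits in the sequence number.
CVE_ID = re.compile(r"^CVE-\d{4}-\d{4,}$")

for candidate in ["CVE-2014-0160", "CVE-2021-1234567", "cve-2014-160"]:
    print(candidate, bool(CVE_ID.match(candidate)))   # True, True, False
```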
CEE

 CEE is an open, practical, extensible, and industry-driven event logging specification with the goal of unifying event
representation and classification
 It's developed as a coordinated industry initiative with participation from end user groups, logging providers, SIEM vendors,
independent experts, and U.S. government organizations
 Development is facilitated by The MITRE Corporation as part of the Making Security Measurable initiative

Why is CEE important?

 Organizations today must support millions of different events described using hundreds of different formats. Although log management tools have been developed with adaptors for most of these formats, those tool vendors must spend development time supporting these adapters at the expense of more core capabilities (such as analysis, aggregation, and visualization).
 End users pay for these inconsistencies through the increased overhead costs of having to support multiple formats, as well as decreased awareness brought about by inconsistent information reported by different tools.
 CEE addresses these deficiencies by providing a common vocabulary and syntax that may be used to record, share, and interpret log data, so that the devices and applications writing log data can use a common format.
 For end users, this will lower the cost of log management and make the information provided more consistent and accurate.
 For event data producers and consumers, it will allow them to focus time on their core product capabilities rather than on defining logging formats and vocabularies.
The CEE Architecture
CEE Capabilities

 Common vocabulary and taxonomy for event reporting: One common problem in current log management practice is that terms mean different things to different products, organizations, and communities. An IP address may mean just an IPv4 address to one community while it might mean an IPv6 address to another. It may mean only external addresses to one group and both external and internal addresses to another. Normalizing these terms across communities of interest will allow for a common understanding of terms and support both ease of implementation for log management vendors and ease of use for end users.

 Log Serialization: The most obvious problem in current log management is the lack of a common syntax for reporting log data. This includes both the data formats used (XML vs. formatted text in many cases) as well as common header fields (such as an event record ID).

 Log Transport: Although syslog is a de facto standard in the log transport space, it is not supported across all common operating systems and has several key technical weaknesses. Transition to a more feature-complete transport is hindered by a lack of suitable substitutes and the implementation cost of changing existing infrastructure.

 Log Requirements Definition: Another common issue is correlating similar event records from different products across a heterogeneous IT infrastructure. For example, large enterprises may run both Unix servers and servers from commercial vendors. While such a company would want all events to be reported consistently across its server installs, there are generally differences between products in both which events they report and which information is included in event records.
Common Configuration Enumeration

 CCE™ provides unique identifiers to system configuration issues in order to facilitate fast and accurate
correlation of configuration data across multiple information sources and tools.

 CCE Identifiers can be used to associate checks in configuration assessment tools with statements in
configuration best-practice documents and security guides, are the main identifiers used for the settings in the
U.S. Federal Desktop Core Configuration (FDCC) data file downloads, and are a key component for enabling
security content automation.

 Why CCE? When dealing with information from multiple sources, use of consistent identifiers can improve data correlation; enable interoperability; foster automation; and ease the gathering of metrics for use in situation awareness, IT security audits, and regulatory compliance.

 Currently, CCE is focused solely on software-based configurations. Recommendations for hardware and/or
physical configurations are not supported.

 Each entry on the CCE List contains the following five attributes:
 CCE Identifier Number - an identifier in the current version's format, e.g., "CCE-3243-3"
 Description - a humanly understandable description of the configuration issue
 Conceptual Parameters - parameters that would need to be specified in order to implement a CCE on a system
 Associated Technical Mechanisms - for any given configuration issue there may be one or more ways to implement the
desired result
 References - pointers to the specific sections of the documents or tools in which the configuration issue is described in detail
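Purely as an illustration of these five attributes (the field contents below are invented placeholders, not a real CCE entry), a CCE record could be modeled as a simple data structure:

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: a record type mirroring the five CCE attributes listed above.
@dataclass
class CCEEntry:
    cce_id: str                       # CCE Identifier Number, e.g. "CCE-3243-3"
    description: str                  # human-understandable description of the issue
    conceptual_parameters: List[str]  # parameters needed to implement the setting
    technical_mechanisms: List[str]   # possible ways to implement the desired result
    references: List[str]             # pointers into documents/tools describing it

example = CCEEntry(
    cce_id="CCE-3243-3",
    description="Placeholder: a password-policy setting should be configured appropriately",
    conceptual_parameters=["desired setting value"],
    technical_mechanisms=["local security policy", "group policy object"],
    references=["section of a hypothetical configuration guide"],
)
print(example.cce_id)
```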
Common Platform Enumeration (CPE)

 Background
 Secure information systems depend on reliable, cost-effective Software Asset Management practices that support security
assessment. IT managers need highly reliable and automatable software inventory processes that provide accurate, up-to-the-
minute details about the operating systems, software applications and hardware devices that are installed and available for
use.
 Specification languages exist such as Common Vulnerabilities and Exposures (CVE®) for describing vulnerabilities, Open
Vulnerability and Assessment Language (OVAL®) for testing system state, and Extensible Configuration Checklist Description
Format (XCCDF) for expressing security checklists.
 What these languages all have in common, however, is a need to refer to IT products and platforms in a standardized way
that is suitable for machine interpretation and processing. CPE satisfies that need.
 Solutions
 Developed specifically to work with specification languages, CPE provides:
 A standard machine-readable format for encoding names of IT products and platforms.
 A set of procedures for comparing names.
 A language for constructing “applicability statements” that combine CPE names with simple logical operators.
 A standard notion of a CPE Dictionary.
 CPE™ is a standardized method of describing and identifying classes of applications, operating systems, and hardware
devices present among an enterprise’s computing assets.
 CPE can be used as a source of information for enforcing and verifying IT management policies relating to these assets, such
as vulnerability, configuration, and remediation policies.
 IT management tools can collect information about installed products, identify products using their CPE names, and use this
standardized information to help make fully or partially automated decisions regarding the assets.
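As a rough illustration of how such names look (a sketch that ignores the escaping and matching rules of the full specification; the helper function is invented), CPE 2.2 URIs take the form cpe:/{part}:{vendor}:{product}:{version}, where the part is "a" for applications, "o" for operating systems, or "h" for hardware:

```python
# Rough sketch of the CPE 2.2 URI form; the helper function is invented and the
# escaping/matching rules of the full specification are ignored.
def make_cpe22(part: str, vendor: str, product: str, version: str = "") -> str:
    # part is "a" (application), "o" (operating system), or "h" (hardware)
    return ("cpe:/" + ":".join([part, vendor, product, version])).rstrip(":")

print(make_cpe22("o", "microsoft", "windows_7"))     # cpe:/o:microsoft:windows_7
print(make_cpe22("a", "mozilla", "firefox", "3.6"))  # cpe:/a:mozilla:firefox:3.6
```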
Common Platform Enumeration (CPE) Dictionary

 Hosted by NIST, the "Official CPE Dictionary" currently includes 43,000+ unique CPE Names.
 Its main purposes are to:
 Provide a canonical source for all known CPE Names.
 Bind descriptive metadata (such as a title and notes) to a CPE Name.
 Bind diagnostic tests (such as an automated check to determine if a given
platform matches the name) to a CPE Name.
Thank you!!
