Metrics To Prove You CARE About Cybersecurity - 743959 - NDX
Overview
Key Findings
■ Security and risk management leaders struggle to demonstrate and communicate a
minimum standard of due care to customers, regulators, auditors and senior
management.
■ Security and risk management leaders often focus on operational metrics that
provide limited value to business stakeholders due to their technical nature.
Operational metrics are necessary to run the program, but they are not useful or
relevant to business leadership in their raw form.
■ Most metrics used by organizations have been developed in a specific and often
tactical context; they are not being used to prove a desired outcome has been
achieved.
Recommendations
Security and risk management leaders responsible for reporting on the effectiveness of an
organization’s security program should:
■ Develop a catalog of CARE metrics to help prove that cybersecurity controls are
consistent, adequate, reasonable and effective. These outcomes should be
contextualized further to create more effective stakeholder messaging.
Introduction
In The CARE Standard for Cybersecurity, Gartner introduced the CARE standard, which
focuses on achieving cybersecurity outcomes that are consistent, adequate, reasonable
and effective, rather than on driving greater investment in tools and processes
(see Figure 1).
Security and risk management (SRM) leaders can further increase the credibility and
defensibility of their cybersecurity program by developing metrics based on the CARE
standard.
Analysis
Develop Metrics to Prove You CARE
This research provides SRM leaders with clear guidance on what types of metrics (with
relevant examples) are useful for proving that cybersecurity controls are consistent,
adequate, reasonable and effective. There is, however, no prescriptive document or set of
metrics that an organization can follow to gain complete assurance that the standard of
due care has been met in a particular circumstance. Each organization must evaluate its
own circumstances and weigh a number of factors to make an informed judgment about
what is “good enough.”
With this in mind, the CARE framework should be used as a guide for organizations to
expand the catalog of metrics across all controls by identifying how they would apply
each metric type to a specific control within their environment. Using the CARE framework
to develop and structure metrics enables SRM leaders to translate their operational
metrics into categories that are easily understood by a nontechnical audience.
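A catalog of this kind can be kept as a simple structured dataset. The sketch below is a minimal illustration, not a Gartner artifact: the class name, fields and sample entries are our own, chosen to show how each operational metric can be tagged with a CARE category so it can be grouped for a nontechnical audience.

```python
from dataclasses import dataclass

# The four CARE outcome categories
CARE_CATEGORIES = {"consistent", "adequate", "reasonable", "effective"}

@dataclass
class CareMetric:
    """One entry in a CARE metrics catalog (illustrative fields only)."""
    name: str      # e.g., "Control coverage"
    category: str  # one of CARE_CATEGORIES
    control: str   # the control the metric applies to
    example: str   # how the metric is expressed to stakeholders

    def __post_init__(self):
        if self.category not in CARE_CATEGORIES:
            raise ValueError(f"unknown CARE category: {self.category}")

# A minimal catalog drawn from examples cited in this research
catalog = [
    CareMetric("Control coverage", "consistent",
               "Threat and vulnerability management",
               "Percentage of systems scanned for vulnerabilities"),
    CareMetric("Delays", "reasonable",
               "Identity management",
               "Average delay (in hours) when adding new access"),
    CareMetric("Incident prevalence", "effective",
               "Threat and vulnerability management",
               "Number of incidents per year related to unpatched vulnerabilities"),
]

def by_category(metrics, category):
    """Group catalog entries so each CARE outcome can be reported separately."""
    return [m for m in metrics if m.category == category]
```

Grouping by category rather than by tool or team is the point: it lets the same operational data answer the four questions stakeholders actually ask.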
Although the categories of CARE metrics can also be used to aggregate individual control
metrics across entities and similar controls, SRM leaders should exercise caution.
Aggregating metrics across different controls can create meaningless metrics. It is
important to remember that all metrics must do at least one of the following:
■ Inform and educate the audience about issues that are important to them.
However, CARE metrics can and should be further contextualized to the audience by
drilling into detail for specific business units and systems. It is not enough to give the
audience data; security and risk management leaders must embed context with the
metrics they report. Figure 2 provides an illustration of how adding context to metrics
creates more effective stakeholder messaging. The arrow indicates a progression toward
more precise and contextualized metrics.
Consistency metrics are the most popular and frequently used type of metric among SRM
leaders. This frequency of use may stem from a historical focus on deploying
technologies to address issues. As a result, only a small subset of the operational
outcomes that can be used to assess the consistency of cybersecurity controls are used in
practice, with a focus on the coverage and currentness of controls.
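A coverage metric of this kind can be computed directly from an asset inventory and scanner output. The sketch below is illustrative only; the system identifiers are hypothetical, and real inputs would come from your CMDB and vulnerability scanner.

```python
def coverage_pct(in_scope_systems, scanned_systems):
    """Consistency (coverage): share of the required scope the control actually reaches."""
    in_scope = set(in_scope_systems)
    if not in_scope:
        return 100.0  # nothing in scope means nothing is missed
    covered = in_scope & set(scanned_systems)  # out-of-scope scans don't count
    return round(100.0 * len(covered) / len(in_scope), 1)

# Illustrative data: four in-scope servers, three of which were scanned
print(coverage_pct(["srv1", "srv2", "srv3", "srv4"],
                   ["srv1", "srv2", "srv4", "laptop9"]))  # 75.0
```

Note that the denominator is the required scope, not the number of systems the scanner happens to know about; using the latter silently inflates the metric.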
Reasonable controls are:
■ In line with stakeholders’ expectations in terms of balancing the need for protection
with the need to run the business.
■ Not far from what are normally considered acceptable results under similar conditions.
Every organization requires the flexibility to meet its unique needs and circumstances and
to respond to risks and threats based on its budget and size. Gartner recommends
adopting a risk, value and cost (RVC) optimization approach. The RVC approach can
demonstrate to key stakeholders that the organization has the right priorities and
investments in place to balance the need to address risk with the need to achieve its
desired business outcomes. Organizations can demonstrate reasonableness through an
assessment of risk (to both the organization itself and to third parties) and robust
governance that reduces risk to within tolerable levels.
■ The organization’s maturity benchmarks (such as Gartner’s IT Score for Security and
Risk Management) match or exceed its peers’.
It is important to note that none of these approaches in isolation is a good way to decide
whether your controls are reasonable. Each is imprecise, highly dependent on
circumstances and subject to change over time, but each can identify immediate concerns
that should be assessed in more detail. In addition, these approaches ignore the
nonfinancial impact of excessive or overly restrictive security.
Table 3 provides a set of operational outcomes to measure the friction created by your
security program and assess the reasonableness of your security controls.
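One way to operationalize such a friction measure is the identity management example cited later in this research: average delay, in hours, when adding new access. The sketch below uses a hypothetical request log; real data would come from your IAM or ticketing system.

```python
# Reasonableness (friction) sketch: average delay when granting new access.
# Timestamps are expressed as hours on a common clock for simplicity.
access_requests = [
    {"user": "alice", "requested_hr": 0,  "granted_hr": 4},   # 4-hour delay
    {"user": "bob",   "requested_hr": 10, "granted_hr": 58},  # 48-hour delay
    {"user": "carol", "requested_hr": 20, "granted_hr": 24},  # 4-hour delay
]

def avg_delay_hours(requests):
    """Average hours between an access request and the grant."""
    delays = [r["granted_hr"] - r["requested_hr"] for r in requests]
    return round(sum(delays) / len(delays), 1)

print(avg_delay_hours(access_requests))  # 18.7
```

An average alone can hide outliers (one 48-hour grant amid quick ones), so pairing it with a maximum or a percentile gives stakeholders a fuller picture of the friction users actually experience.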
Demonstrating that your controls are successful in producing the desired or intended
levels of protection requires an understanding of what the desired levels of protection are.
The levels of protection vary depending on the type of control and its intended purpose.
Gartner recommends developing outcome-driven metrics (ODM) that provide a direct line
of sight to protection levels in a business context. At a high level, this typically translates
to reducing both the number of incidents and their impact by either reducing the time to
detect and respond or being able to effectively recover.
Table 4 provides a set of operational outcomes on which to base the development of your
ODM and to assess the effectiveness of your security controls against the desired levels
of protection.
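As a sketch of what such an ODM might look like in practice, the following computes the share of critical vulnerabilities patched within an agreed window, alongside the worst-case time to patch. The patch log and the 14-day protection-level agreement are hypothetical, chosen purely for illustration.

```python
from datetime import date

# Illustrative records: (vulnerability id, date detected, date patched)
patch_log = [
    ("CVE-A", date(2021, 5, 1), date(2021, 5, 10)),  # 9 days
    ("CVE-B", date(2021, 5, 3), date(2021, 5, 30)),  # 27 days
    ("CVE-C", date(2021, 6, 1), date(2021, 6, 8)),   # 7 days
]

PLA_DAYS = 14  # hypothetical protection-level agreement for critical patches

def within_pla_pct(log, pla_days):
    """Effectiveness ODM: percentage of critical vulnerabilities patched within the PLA."""
    met = sum(1 for _, found, fixed in log if (fixed - found).days <= pla_days)
    return round(100.0 * met / len(log), 1)

def max_days_to_patch(log):
    """Worst-case time to patch, a companion timeliness metric."""
    return max((fixed - found).days for _, found, fixed in log)

print(within_pla_pct(patch_log, PLA_DAYS))  # 66.7
print(max_days_to_patch(patch_log))         # 27
```

The value of framing this as an ODM is the direct line of sight: the PLA is the agreed protection level, and the metric reports performance against it rather than raw scanner counts.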
Evidence
Gartner received over 2,000 inquiries on the topic of security and risk metrics and reporting
between July 2019 and June 2021.
This analysis is based on Gartner analysts’ experience in working with clients to improve
their security and risk metrics. It is also based on Gartner observations of the most
common security metrics and analysis of the assertions being made by the metrics.
Most countries’ laws have similarly vague language. A further complication is that the
meaning of the term “due care” depends on the country you are in and on how that
country’s legal system was derived. The term of art “standard of due care” comes from
U.K. common law and has a specific meaning in countries whose law is based on that
system. It is a judgment made by a court and is used to allocate or assign liability.
Other legal systems, such as the Napoleonic Code, define the role of the judiciary very
differently, and thus may use the term differently.
Table 1. Consistency Metrics
■ Control coverage: These metrics measure how consistently the control is deployed
throughout the environment compared to the required scope. They demonstrate that the
control covers the areas where it is needed and intended. Example (threat and
vulnerability management): percentage of systems scanned for vulnerabilities.
■ Control currentness: These metrics measure how consistently the … Example (threat
and vulnerability management): percentage of …
■ Control strength: These metrics measure how consistently the control is configured to
provide an expected level of protection. They demonstrate that the control is configured
at the highest setting suitable for your organization. Example (threat and vulnerability
management): percentage of systems using comprehensive vulnerability scanning
(authenticated scans).
■ Control availability: These metrics measure how consistently the control can be used.
They demonstrate that the control is available when it is needed. Example (all
technology-based controls): percentage of hours the control was unavailable due to an
unplanned outage.
■ Control accuracy: These metrics measure how consistently the control detects what it
is required to. They demonstrate that the control provides the required level of precision
and can be relied upon. Example (detection and response): percentage of false positives.
Table 2. Adequacy Metrics
■ Internal policy compliance: These metrics measure how adequate the control is when
compared to an internal policy. They demonstrate that the control meets the
expectations of stakeholders as defined in an approved policy. Example (all
compliance-related controls): percentage of compliance with a specific policy
requirement (e.g., percentage of systems in scope compliant with PCI DSS 5.1, “Deploy
antivirus software on all systems commonly affected by malicious software [particularly
personal computers and servers]”).
■ Protection-level agreement achievement: These metrics measure how adequate the
control is when compared to defined outcomes. They demonstrate that the control meets
the expectations of stakeholders as defined in an agreement. Example (threat and
vulnerability management): percentage of assets regularly patched within the
protection-level agreement (PLA).
■ Business case benefit realization: These metrics measure how adequate the control is
when compared to the projected benefits, usually documented in a business case. They
demonstrate that the control meets the expectations stakeholders set when a project or
business case was approved. Example: depends on the business case.
Table 3. Reasonableness Metrics
■ Delays: These metrics measure how reasonable the control is when compared to the
delays it causes the business. They demonstrate that the control does not unnecessarily
impact the business. Example (identity management): average delay (in hours) when
adding new access.
■ Incidents (security-caused): These metrics measure how reasonable the control is
when compared to the incidents caused by the control. They demonstrate that the
control does not unnecessarily impact the business. Example (all controls): number of
incidents caused by the security control.
■ Complaints: These metrics measure how reasonable the control is when compared to
the complaints about the control. They demonstrate that the control does not
unnecessarily impact the business. Example (all controls): number of complaints caused
by the security control.
Table 4. Effectiveness Metrics
■ Control timeliness: These metrics measure how effective the control is when compared
to its objective. They demonstrate that the control is effective in preventing, detecting
or recovering from impacts to the business. Example (threat and vulnerability
management): average or maximum number of days required to patch critical security
vulnerabilities.
■ Incident prevalence: These metrics measure how effective the control is when
compared to the types of incidents it should address. They demonstrate that the control
is effective in reducing the volume of incidents that could be prevented. Example (threat
and vulnerability management): number of incidents per year related to unpatched
vulnerabilities.
■ Risk reduction: These metrics measure how effective the control is when compared to
the inherent risk it is intended to reduce. They demonstrate that the control is effective
in reducing the risk or impact to the business. Example (all controls): demonstrable
decrease in risks or business impact.