
CIS Benchmark Development Guide

V02 - 2/6/2019
1 TABLE OF CONTENTS
2 Introduction ............................................................................................................................... 3
3 The Benchmark Development Team Roles .................................................................................. 3
4 Benchmark Overview ................................................................................................................. 4
4.1 Benchmark Technology Scope .........................................................................................................4
4.2 BDP Overview .................................................................................................................................5
4.2.1 CIS Workbench ....................................................................................................................................................... 5
4.2.2 Initial Benchmark Creation .................................................................................................................................... 5
4.2.3 Recommendation Creation .................................................................................................................................... 6
4.2.4 Consensus Review .................................................................................................................................................. 6
4.2.5 Last Call .................................................................................................................................................................. 6
4.2.6 Publishing the Final Benchmark ............................................................................................................................. 7

5 The Basics .................................................................................................................................. 7


5.1 All Community Members (General Contributors)..............................................................................7
5.1.1 Getting a Workbench Account ............................................................................................................................... 7
5.1.2 Joining a Community .............................................................................................................................................. 8
5.1.3 Viewing Community Benchmarks ........................................................................................................................ 11
5.1.4 Discussions ........................................................................................................................................................... 14
5.1.5 Tickets .................................................................................................................................................................. 17

6 The Details ............................................................................................................................... 21


6.1 More Advanced Contributors......................................................................................................... 21
6.2 Benchmark Editors ........................................................................................................................ 24
6.2.1 Benchmark Structure ........................................................................................................................................... 25
6.2.2 Recommendation Purposes ................................................................................................................................. 25
6.2.3 Recommendation Scoring .................................................................................................................................... 26
6.2.4 Process for Creating a Recommendation ............................................................................................................. 27
6.2.4.1 Scoring Status, Profiles, and CIS Control fields............................................................................................ 27
6.2.4.2 Title field ..................................................................................................................................................... 27
6.2.4.3 Description field .......................................................................................................................................... 28
6.2.4.4 Rationale Statement field ........................................................................................................................... 28
6.2.4.5 Audit Procedure field .................................................................................................................................. 28
6.2.4.6 Remediation Procedure field ...................................................................................................................... 30
6.2.4.7 Impact Statement field................................................................................................................................ 31
6.2.4.8 Default Value field ....................................................................................................................................... 31
6.2.4.9 References field ........................................................................................................................................... 31
6.2.4.10 Notes field ................................................................................................................................................... 32
6.2.5 Recommendation Formatting .............................................................................................................................. 32
6.2.6 Recommendation Organization ........................................................................................................................... 33

6.3 CIS Technology Community Leads (TCLs) ........................................................................................ 34


7 How Should I Proceed? ............................................................................................................. 34

CIS Benchmark Development Guide V02 Page 2 of 35


2 INTRODUCTION
Welcome to the Center for Internet Security (CIS) Benchmark Development Process (BDP). Organizations of all
sizes rely on CIS Benchmarks every day for secure system configuration guidance, and as a contributor you are
playing a critically important role in helping organizations worldwide better protect their ongoing computer
operations.

You have volunteered to be part of a community of individuals from all types of organizations on the journey to
create and maintain the Current Best Practice in security in your area of expertise. The term “current best practice”
is one that explicitly recognizes the only constant in our industry: change.

All CIS Benchmarks are developed by the consensus of a given Technology Community. A Technology Community is a
diverse group of people actively interested in the security of a given technology area (MS Windows, Linux, Postgres,
etc.) and the development of related benchmarks. They contribute testing and feedback on the recommendations
being forwarded. An active Technology Community is of extreme importance to the successful development of a
benchmark, since it is the consensus of the community that ultimately determines the final recommendations
that are in a given benchmark release.

The goal of every benchmark is to provide practical, security-specific guidance for a given technology that
concisely:

1. Describes a generally applicable baseline for all organizations


2. Recognizes the need to securely maintain operational effectiveness

This guide will provide an overview of the BDP, the roles and responsibilities of various participants in the BDP,
and an introduction to the environments and tools available to you. We are excited to have you with us!

3 THE BENCHMARK DEVELOPMENT TEAM ROLES


To successfully develop a benchmark, a focused team is drawn from the overall Technology Community to
spearhead the effort on behalf of the overall community. In general, this focused team has the following roles
represented:

 CIS Technology Community Leader (TCL): A CIS employee who is responsible for shepherding the given
Technology Community and resulting benchmark through the development process and ultimately
publishing the result.
 Editor: An individual or individuals who have been given editing rights to the underlying benchmark source.
These individuals have generally been contributors to other benchmarks and have been sufficiently vetted
to allow this level of trust. Editors are typically leaders in the given Technology Community and are a great
resource for new members of the community.
 Subject Matter Experts (SMEs): In general, there are two types of SMEs involved, contributing their
security and/or technical expertise to the development of the detailed recommendations:
o Technology Subject Matter Experts (T-SMEs): An individual or individuals who are actively
contributing their expertise in the development and testing of the detailed technical
recommendations of a given benchmark.

o Security Subject Matter Experts (S-SMEs): An individual or individuals who are actively
contributing their expertise in the security goals and ramifications of recommendations of a given
benchmark.

These roles do not always have to be held by unique individuals. Sometimes two or more roles can be embodied
in a single individual depending on that individual’s expertise and availability, and the overall community makeup.
The Technology Community is always actively involved in the process, monitoring and providing feedback, and as
needed taking on the previously described roles. In the end, the community provides consensus-based approval of
the recommendations in the given benchmark.

4 BENCHMARK OVERVIEW
Before discussing the details of the BDP, let’s take a closer look at benchmarks and their components. This
understanding is important for all roles, since it ensures a high level of consistency across benchmarks and within
each benchmark.

4.1 BENCHMARK TECHNOLOGY SCOPE


The first step in developing a benchmark is to define the scope of the technology it will address. Benchmarks cover
a range of technologies, including operating systems, server software, desktop/client software, mobile devices,
network devices, multifunction printers, and cloud providers. A given benchmark defines a set of
recommendations for securing a specific technology, such as Microsoft Windows Server, Red Hat Linux, or Apple
iOS. A benchmark may also include recommendations for platforms supporting that technology, if applicable; for
example, the IBM DB2 benchmark has recommendations for securing the IBM DB2 software itself and the
Windows or Linux host OS the IBM DB2 software runs on top of.

A benchmark may address a single version or multiple versions of a particular technology. In general, technology
versions which are secured the same way should be covered by a single benchmark, and technology versions with
significantly different audit and remediation instructions should be covered in separate benchmarks. When
considering which technology versions a benchmark should cover, ask yourself which versions of the technology
have adequate documentation available and/or instances available to you for developing and verifying audit and
remediation instructions. It’s not necessary to test a benchmark against every version of the technology it covers,
as long as you’re confident there are no significant differences among those technology versions.

Once you have identified the technology versions and any supporting platforms (e.g., operating systems running
below a client or server application), you will need to define a profile. Each benchmark must have one or more
profiles defined. A profile is a collection of recommendations for securing a technology or a supporting platform.
For example, the IBM DB2 benchmark has profiles for the IBM DB2 software itself, the Windows host OS platform,
and the Linux host OS platform.

Currently, CIS Benchmarks have at least one profile defined. The basic profile for each technology or supporting
platform is called a Level 1 profile. Level 1 profiles contain recommendations that:

 Are practical and prudent


 Provide a clear security benefit
 Do not inhibit the utility of the technology beyond acceptable means


Level 2 profiles may optionally be defined to extend Level 1 profiles. Generally, Level 2 profiles contain all Level 1
recommendations plus additional recommendations that:

 Are intended for environments or use cases where security is paramount


 Act as defense in depth measures
 May negatively impact the utility or performance of the technology

Each recommendation is assigned to one or more profiles. For example, a single IBM DB2 benchmark
recommendation may be assigned to both a Level 1 Windows host OS profile and a Level 1 Linux host OS profile
because it applies to both platforms, even though the audit and remediation steps on each platform may differ.

Someone who wants to use a benchmark would select the profile that best suits their platforms, security
requirements, operational considerations, etc. Any recommendations not included in a selected profile would be
omitted for remediation and auditing purposes.

NOTE: It is entirely possible and valid for a benchmark user to create their own “profile” that is a
combination of recommendations in the given published benchmark (Level 1 and Level 2). This
is up to the user. In general, third-party compliance organizations stick with the predefined Level
1 and Level 2 recommendation sets in the published benchmarks.
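The relationship between Level 1, Level 2, and user-defined profiles can be sketched as simple sets of recommendations. This is an illustrative model only; the recommendation titles below are invented for the example, not taken from a real benchmark:

```python
# Illustrative model only: profiles treated as sets of recommendation titles.
# The titles below are invented examples, not from an actual CIS Benchmark.

level1 = {
    "Verify all application software is current",
    "Enable auto-update",
    "Disable Bluetooth if no paired devices exist",
}

# Level 2 adds stricter, defense-in-depth recommendations on top of Level 1.
level2_extras = {
    "Enable full-disk encryption",
    "Enable the application firewall",
}
level2 = level1 | level2_extras

# A benchmark user can also assemble a custom "profile" from the published
# recommendation sets, as the note above describes.
custom = {"Enable auto-update", "Enable full-disk encryption"}

assert level1 <= level2   # Level 2 is a superset of Level 1
assert custom <= level2   # the custom profile draws only from published items
```

The set model mirrors the prose: Level 2 contains everything in Level 1 plus stricter extras, and a user-built profile is any subset of the published recommendations.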

4.2 BDP OVERVIEW


This section will give an overview of the BDP, the tools involved, and how the various roles fit into it. More details
on each part of the BDP are given later in this document.

4.2.1 CIS Workbench


CIS Workbench is a web-based tool used by the Technology Community to develop and maintain benchmarks in
their technology area. The primary aspects of the tool are:

 Access it from any browser – Anyone with an Internet connection and a web browser can potentially join
and contribute to a community and a benchmark. No software needs to be installed locally.
 Reach the Technology Communities – Users can join communities for the benchmarks they are interested
in and use CIS Workbench to interact with these communities via threaded discussions.
 Create, edit, and maintain benchmark documents – Certain community members can use CIS Workbench
to create and revise recommendations and accompanying prose for a given benchmark. This capability is
generally limited to a few people in the roles of CIS TCL, Editor, and key SMEs.
 Suggest changes to benchmarks – Anyone in the community can suggest changes to a benchmark via the
Workbench ticket process, creating and submitting tickets against a recommendation in a benchmark.
These tickets are viewed and discussed by the community and can result in changes to a benchmark if
consensus is reached.

Details on using CIS Workbench are covered later in this document. For now, it is important to understand that
CIS Workbench is how technology communities and their benchmarks are managed and developed.

4.2.2 Initial Benchmark Creation


For any new benchmark being developed, the TCL for the given community will put the initial framework in place.
This new benchmark could be for a new technology, in which case the initial framework in CIS Workbench could
be quite sparse, or it could be a new benchmark derived from an existing one (new Ubuntu release, new Windows
release, etc.) where the recommendations and prose from previous releases can be used as a starting point. In
any case, the TCL will get the ball rolling and create the initial version of the new benchmark.

4.2.3 Recommendation Creation


Recommendations are the key component of any benchmark. They specify the security settings that system
administrators need to locate, check the current value of, and ultimately ensure are set correctly as
specified in the prose of the given recommendation. One important aspect of any CIS-approved benchmark is that
every recommendation meets strict criteria, including:

 Profile (Level 1, Level 2, etc.) – the profile(s) this recommendation is assigned to


 Mapping to CIS Controls – the CIS Control(s) this recommendation maps to
 Description – detailed information pertaining to the security setting
 Rationale – the reason the recommendation is being made
 Audit Procedure – a discrete set of manual steps used to determine whether a target system is in
compliance with the recommendation
 Remediation Procedure – a discrete set of manual steps used to bring a target system into compliance
with the recommendation
 Impact Statement – any non-obvious adverse consequences likely to be experienced by an enterprise
putting this recommendation into effect
 References – additional external information relevant to this recommendation (URLs to vendor
documentation, articles, etc.)

This level of detail has helped make CIS benchmarks the industry standard for quality and ease of use. This detail
involves additional work during the creation process, but is well worth the effort to create a deliverable that can
be applied by the broadest user base.
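As a rough illustration, the required fields above can be modeled as a record with a completeness check an editor might run before review. This is a hypothetical sketch, not CIS's actual schema; all field and function names here are invented:

```python
# Hypothetical sketch (not CIS's actual data model) of the fields every
# recommendation carries, with a simple completeness check.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    title: str
    profiles: list          # e.g. ["Level 1"]; one or more profiles
    cis_controls: list      # the CIS Control(s) this maps to
    description: str
    rationale: str
    audit_procedure: str    # manual steps to check compliance
    impact_statement: str   # non-obvious adverse consequences; may be empty
    references: list = field(default_factory=list)

    def is_complete(self) -> bool:
        # A recommendation needs at least one profile, a CIS Control mapping,
        # and non-empty descriptive prose before Consensus Review.
        return bool(self.profiles and self.cis_controls
                    and self.description and self.rationale
                    and self.audit_procedure)

rec = Recommendation(
    title="Ensure 'Host Name' is set",
    profiles=["Level 1"],
    cis_controls=["CIS Control 4 (illustrative mapping)"],
    description="A host name identifies the device on the network.",
    rationale="A unique host name aids auditing and log correlation.",
    audit_procedure="Inspect the running configuration for a hostname entry.",
    impact_statement="",
)
assert rec.is_complete()
```

The point of the sketch is simply that each recommendation is a structured record, and the strict criteria amount to requiring every key field to be populated.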

4.2.4 Consensus Review


After a draft benchmark has reached the point that the community and the TCL feel it is ready to be published,
the benchmark will go through Consensus Review. Consensus Review starts with an announcement to the
community: the TCL or an Editor creates a new discussion thread in the CIS Workbench tool stating that a draft
version of the benchmark is now available in MS Word format (MS Word format is generally considered easier to
read through than using Workbench for this purpose).

NOTE: The announcement will be sent to everyone who has joined this community and has the proper
notification settings (more details on joining a community and notifications are covered in
section 5.1.2).

During this process community members are encouraged to review the draft benchmark and comment on, or
create new, tickets and discussions with feedback for the development team on the recommendations in the
benchmark. Anyone can contribute and all feedback is welcome.

Generally, Consensus Review lasts for as long as there are unresolved tickets against the benchmark. A TCL may
decide to define a specific review period timeframe (such as three weeks) to keep the process moving. Tickets are
created for a number of reasons, but each will ideally be some form of change proposal. Tickets will be discussed
via comments in CIS Workbench and during the community's recurring open “Community Call” meetings until a
conclusion is reached and action is taken (a ticket can be resolved, rejected, or deferred).
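The ticket lifecycle described above can be sketched as a simple check. This is illustrative only; the status names are modeled on the prose ("resolved, rejected, or deferred"), not on Workbench's actual ticket fields:

```python
# Illustrative sketch only: status names follow the prose above, not
# Workbench's actual API or ticket schema.

ACTIONED = {"resolved", "rejected", "deferred"}

def ready_for_last_call(ticket_statuses):
    """Consensus Review continues while any ticket still requires action;
    once none do, the benchmark can move on to Last Call."""
    return all(status in ACTIONED for status in ticket_statuses)

assert not ready_for_last_call(["open", "resolved"])      # review continues
assert ready_for_last_call(["resolved", "deferred", "rejected"])
```

This captures the exit condition for Consensus Review: the phase ends only when the count of tickets still requiring action reaches zero.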

4.2.5 Last Call


When the quantity of tickets requiring action reaches zero, it's time for a "Last Call" for review. Last Calls are very
similar to Consensus Review (announced via community discussion, eliciting feedback, etc.). The primary
difference is that a Last Call has a fixed duration (generally two weeks) and occurs just prior to publishing the benchmark. Last Calls
are important for two reasons. First, not everyone in a community chimes in during benchmark development, and
the last call period can sometimes elicit additional review and feedback. Second, the last call period is the ideal
time for quality control and completion of the development of derivative products by the TCL, or the mapping of
the CIS Controls.

4.2.6 Publishing the Final Benchmark


Once Last Call is complete, the TCL will go through the process of using CIS Workbench to create the PDF version
of the document and publishing it on the CIS Benchmark website for anyone to access
(https://www.cisecurity.org/cis-benchmarks/).

The TCL primarily guides the benchmark through the development process, but it is the community members who
actively develop, review, test, and approve each recommendation that is ultimately included in the final
benchmark release.

5 THE BASICS
This section will go over the CIS Workbench tool in more detail and the capabilities it provides to individuals in the
various roles.

5.1 ALL COMMUNITY MEMBERS (GENERAL CONTRIBUTORS)


This section gives an overview of the CIS Workbench environment from the perspective of a new user. It is not
comprehensive, since other tutorials serve that purpose; instead, it covers the basics and encourages the new user to explore the
tool, the Technology Communities available, and the benchmarks being worked on. The overall goal is to give the
new user enough information and encouragement to pick at least one Technology Community to get involved in
and hopefully contribute to a benchmark in active development.

5.1.1 Getting a Workbench Account


The first step in getting involved with the BDP is to get a free account for CIS Workbench
(https://workbench.cisecurity.org/). This process is shown in Figure 1 and Figure 2. Once your request is
approved, you will be given basic access to CIS Workbench and the various Technology Communities and
benchmarks.


Figure 1: CIS Workbench Login Page

Figure 2: Registering for a CIS Workbench Account

Once approved, you will receive an email at your registered email address, telling you that your account is ready.
At this point you can log in to CIS Workbench with your credentials.

5.1.2 Joining a Community


Once your account is created and you log in to the CIS Workbench site, you can find a Technology Community that
you are interested in joining. You can do that by pressing the “Join A Community Today!” button shown in Figure
3. This will present you with a list of possible communities to join. Browse the list and select one or more
Technology Communities you are interested in, as shown in Figure 4.

Figure 3: User Home Page


Figure 4: Finding a Technology Community

From the list shown, you can get an indication of how active a given community is by looking at the numbers in
the displayed columns:

 # of Benchmarks: The number of benchmarks maintained by this community


 # of Milestones: The number of benchmark project milestones in this community
 # of Discussions: The number of discussions that have occurred in this community

In general, the more discussions in a community, the more “active” it is. You can learn more about a given
community prior to joining by clicking on the community name in the above list. For example, clicking on “CIS
Apple OS Benchmark” takes you to this community’s home page, as shown in Figure 5.


Figure 5: CIS Apple OS Benchmark Technology Community Homepage

From this page, you can see the CIS TCL’s name and the activities this community is currently involved in.

 Benchmarks: These are the most recent benchmarks being maintained by this community.
 Milestones: These are the most recent project milestones for the community’s benchmarks in
development.
 Tickets: These are the most recent change requests for the benchmarks maintained by this community.
 Discussions: These are the most recent posts to the community’s discussions on various topics related to
the benchmarks in this community.
 Community Activity: This is a timeline showing the most recent activity in this community.

NOTE: You can view any of the above by clicking on the item name in the given list, but to actually
contribute (create or add a comment to a discussion or ticket, etc.) you must join the community.

Joining a community basically means you are interested in the activities of this Technology Community and the
benchmarks they are creating, as in Figure 6. Practically, this means you will receive notifications about activities
in this community (Discussion items, Ticket items, etc.) via email, and you are able to create and comment on
tickets and discussions.


Figure 6: Technology Community Joining Acknowledgment Dialog

NOTE: Details on how you get notified can be modified via the “My Subscription Preferences” page
available by clicking on your username in the upper right corner of the screen, as shown in Figure
7. On an actively developed benchmark you can get a number of notifications per day, so setting
up a filter on your email client might be advisable.

Figure 7: My Subscription Preferences

5.1.3 Viewing Community Benchmarks


Since benchmark publishing is the primary reason the Technology Communities and CIS Workbench exist, let’s
look at a benchmark in more detail. We’ll pick the CIS Apple macOS 10.12 Benchmark v1.1.0 from the Benchmarks
list on the CIS Apple OS Benchmark Technology Community Homepage from Figure 5, as shown more closely in
Figure 8.

Figure 8: Selecting a Benchmark


This brings you to the overview page of the benchmark, which can be considered the title page of the benchmark,
as shown in Figure 9. Before we go any further, let’s discuss the three major areas displayed in the browser.

Figure 9: Benchmark Overview Page

The leftmost pane is used for navigation within the benchmark’s recommendations and other parts of the CIS
Apple OS Benchmark Technology Community site (Related Files, Related Tickets, etc.). The center pane is generally
the primary working area for whatever is selected in the navigation pane. The rightmost pane also changes based
on what is selected in the navigation pane, but is restricted to displaying information based on the tabbed
categories at the top of this pane (Tickets, Discussions, and Changes).

In the navigation pane, the lower section is dedicated to navigation of the specific benchmark being displayed.
This is basically a “Recommendation Tree” for this benchmark, and each item listed is one of these:

 A section/subsection: This is a set of subsections and/or recommendations. Sections and subsections are
used for logically grouping related recommendations.
 A recommendation: This contains the detailed prose for a security setting or closely related settings of
interest.

As an example, in Figure 10 we have selected recommendation 1.3 (Enable app update installs), the prose of which
is now displayed in the center pane. The right pane shows there are no tickets for this recommendation.


Figure 10: Sections/Subsections and Recommendations

Also, in the left pane you can see that Section 1 (Install Updates, Patches and Additional Security Software) is
made up of five recommendations. Section 2 (System Preferences) is made up of eight subsections and at least
one recommendation is in view (in fact, there are more recommendations not in view). Subsection 2.1 (Bluetooth)
is made up of three recommendations. Viewing different recommendations in the benchmark can be done by
selecting the recommendation of interest in the left pane, which will display the corresponding prose in the center
pane.

NOTE: “< Prev” and “Next >” buttons are also available to move between recommendations within a
given section/subsection (below the recommendation title in the center pane).

This process certainly works for viewing a given benchmark, but many people would rather view the benchmark
in a more standardized form (pdf, MS Word, or MS Excel). This can be done for published benchmarks by going to
the “Files Area”, as shown in Figure 11. This brings you to the files selection page, as shown in Figure 12.


Figure 11: Benchmark Files Area

Figure 12: Files Selection Page

Select and download a pdf version of this benchmark if available, as shown in Figure 13.

Figure 13: pdf Version of Benchmark

NOTE: The pdf version of a benchmark contains all the prose details from the original CIS Workbench
form, but is generally easier to read and does not require a special application (CIS Workbench) to
view.

NOTE: The pdf and MS Word versions will always be available for published benchmarks but may not
be available for benchmarks being actively developed. When in development, it is best to view
the benchmark in CIS Workbench itself.

5.1.4 Discussions
Discussions are used by the community to talk about various subjects and are a good way to start getting involved
in a community. Let’s use the CIS Apple macOS 10.14 Benchmark v1.0.0 as an example.


Figure 14: Discussions for Apple macOS 10.14 Benchmark v1.0.0

As can be seen in Figure 14, there is currently one discussion for this benchmark. Clicking on the title of the
discussion (Why importing 10.12 and not 10.13?) displays the discussion details in the right pane, as Figure 15
shows.

Figure 15: Discussion Detail


The original topic description (from the topic creator – Daniel Cassana) and any current comments on this
discussion topic are listed. You can add a comment to this discussion by typing in the lower text box and pressing
Add Comment. Your comments will become part of this discussion topic and will be viewable by all the community
members. Members joined to this community will be notified of any new topic, or comments added to an existing
topic, based on their notification settings.

In general, discussions can be linked to a specific object in the community (something that the discussion is
referencing). For example, objects can be:

 Technology Community: Linking a discussion here is generally done for community announcement and
other discussions of broader interest.
 A Given Benchmark: Linking a discussion here is done for general discussions about a given benchmark.
 A Specific Section in a Benchmark: Linking a discussion here is done for discussions about a given section
of a benchmark.
 A Specific Recommendation in a Benchmark: Linking a discussion here is done for discussions about a
given recommendation of a benchmark.

Linking to objects can be done via the Links portion of the Create or Edit discussion screen as shown in Figure 16.

Figure 16: Adding Linked Objects to a Discussion

You can also create a new discussion on a topic. The right pane button to do this looks a little different depending
on whether there are existing discussions on the object selected, as in Figure 17 and Figure 18.

Figure 17: New Discussion on a Recommendation with an Existing Discussion

Figure 18: First Discussion on a Recommendation

Either way will bring you to the same discussion entry screen, as shown in Figure 19. Here is the screen for entering
a new discussion for recommendation 1.2.2 (Ensure 'Host Name' is set) on a test version of the CIS Cisco Firewall
Benchmark v4.1.0. At this point you simply fill in the appropriate fields, describing in detail the topic you want to
discuss, and click Submit.


Figure 19: New Discussion Entry Right Pane

It is generally easier to select the object of interest in the left navigation pane as in Figure 20, and then use the
“New Discussion” button on the right pane. This will link the selected object to the created discussion.

Figure 20: Left Pane Selection for the New Discussion in Figure 19.

5.1.5 Tickets
The concept of a “Ticket” will be familiar to many people who have used other types of issue tracking software
(Jira, etc.). In general, the idea is that there is a specific issue that needs to be addressed and tracked to completion.

NOTE: Tickets are the most common form of community communication on benchmarks in active
development, and progress on tickets is tracked using the Milestone tool in Workbench. Most
editors and TCLs use tickets as a sort of “To Do” list for the changes and additions to a benchmark.

Figure 21: Tickets for Apple macOS 10.14 Benchmark v1.0.0

As shown in Figure 21, there is currently one ticket for this benchmark. Clicking on the title of the ticket
(ntpdate is no longer applicable in Mojave) displays the ticket details in the right pane, as in Figure 22.

Figure 22: Ticket Detail

The original ticket description is listed (from the ticket creator – Peter Loobuyck), along with who it is currently
assigned to (the TCL – Eric Pinnell) and its status and priority. Any current comments on this ticket topic are listed
as well. You can add a comment to this ticket by typing in the lower text box and pressing Add Comment. Your
comments will become part of this ticket and will be viewable by all community members. Members of the
community will be notified of new tickets or comments added to existing tickets based on their notification
settings.

In general, tickets can be linked to a specific object in the community just like with discussions (Community,
Benchmark, Section and Recommendation). Linking to objects can be done via the Links portion of the Create or
Edit ticket screen, as shown in Figure 23.

Figure 23: Adding Linked Objects to a Ticket

You can also create a new ticket on a topic. The right pane button to do this looks slightly different depending
on whether there are existing tickets on the selected object, as in Figure 24 and Figure 25.

Figure 24: New Ticket on a Recommendation with an Existing Ticket

Figure 25: First Ticket on a Recommendation

Either way will bring you to the same ticket entry screen, as shown in Figure 26. It is the screen for entering a new
ticket for recommendation 2.2.1 (Enable "Set time and date automatically") on the CIS Apple macOS 10.14
benchmark v1.0.0.

Figure 26: New Ticket Entry Right Pane

It is generally easier to select the object of interest in the left navigation pane, as in Figure 27, and then use the
“New Ticket” button on the right pane. This will link the selected object to the created ticket.

Figure 27: Left Pane Selection for the New Ticket in Figure 26

With the information covered thus far, you can fully contribute to any benchmark using discussions and tickets. If
you are ready to get started, please jump to Section 7 (How Should I Proceed?). Otherwise, please read on and
learn more about making benchmarks and using the Workbench tool.

6 THE DETAILS
This section goes over some of the advanced capabilities of the CIS Workbench tool and some of the activities
typically done by individuals in more advanced roles.

6.1 MORE ADVANCED CONTRIBUTORS


This section gives an overview of the Proposed Change capability of Workbench. This is a more advanced feature
that allows any contributor to make “proposed” changes directly to the benchmark prose. These changes can be
viewed, partially accepted, fully accepted, or rejected by the leaders of the benchmark development (individuals
with editor rights).

NOTE: Although this feature can be used by anyone to propose a change to a benchmark, it gets into
the editing capabilities and conventions of the Workbench tool, which can add complications.
For this reason, we suggest most users utilize the discussion and ticket capabilities discussed
previously for most issues.

Let’s walk through a Proposed Change on a test version of the CIS Cisco Firewall Benchmark v4.1.0,
recommendation 1.9.1.1 (Ensure 'NTP authentication' is enabled). You can see the initial recommendation prose
in Figure 28.

Figure 28: Initial Recommendation Prose and Proposed Change Selection Highlighted

Selecting Propose Change in the left pane starts the process (see Figure 28). This will display the Edit Proposed
Recommendation screen, as shown in Figure 29.

NOTE: Figure 29 is not very legible. It is included for navigational reference and to give an idea of what
the screen looks like. A more detailed description of the various fields will follow.

Figure 29: Proposed Recommendation (Change) Screen – For Overall Navigational Reference

This screen is essentially the same screen benchmark editors use to create and modify the benchmarks directly,
and it is made up of a number of areas:

• Artifacts: This area is primarily used by the CIS TCL to develop automated assessment content (AAC) that
corresponds to the specific test(s) described in the text for this recommendation. A full discussion of
artifacts and AAC is beyond the scope of this document but, in general, AAC consists of XML files following the
Security Content Automation Protocol (SCAP) set of standards, specifically the Extensible
Configuration Checklist Description Format (XCCDF) and Open Vulnerability and Assessment Language
(OVAL) standards. AAC can be read by CIS (CIS-CAT) and a wide variety of third-party assessment tools to
analyze systems for compliance to the given CIS benchmark.
• Recommendation Properties: In general, this is the area where most changes are focused. The following
sub-properties are described briefly here; for a more detailed explanation, please see Section 6.2.4.
o Title: Short descriptive title for this recommendation
o Scoring Status: In general, Scored means a value that can be manually or automatically collected
and definitively compared to a standard.
o Profiles: Predefined groups of recommendations for a given purpose
o CIS Controls: The CIS Controls this recommendation addresses
o Description: A detailed description of how this recommendation affects the target or target
environment’s functionality
o Rationale Statement: The specific reasons this recommendation is being made
o Audit Procedure: Step-by-step instructions for determining if the target system is in compliance
o Remediation Procedure: Step-by-step instructions for applying this recommendation to the
target (bringing it into compliance)
o Impact Statement: Any non-obvious adverse security, functionality, or operational
consequences from following this recommendation
o Default Value: The default value for the given setting in this recommendation, if known
o References: URLs to additional documentation on this issue, if applicable
o Notes: Supplementary information that does not correspond to any other field

In the following example, we are going to make some proposed changes to the Audit Procedure field above.
Figure 30 shows the original recommendation text. Figure 31 shows we have made three modifications:

1. In Step 1: deleted the word “following”
2. In Step 2: replaced “a finding” with “an issue”
3. Step 3 is new

Figure 30: Original Recommendation

Figure 31: Modified Recommendation

NOTE: Due to limitations with the Proposed Change capability, users should avoid using the formatting
toolbar since in many cases it will cause a confusing result. This does not happen in all cases but
is a good general rule.

Now, if we reselect this recommendation in the left pane navigation, we see the result in Figure 32.

Figure 32: The Recommendation After Submitting the Proposed Change

The recommendation text in the center pane looks the same, but there is now some additional information and
a Show Diff button on the right pane. When the Show Diff button is pressed, the changes are highlighted in Red
(Deletions) and Green (Additions), as in Figure 33.

Figure 33: Proposed Change “Diff” Highlighting

When a benchmark editor sees a Proposed Change, they will have some additional capabilities. They can Accept
or Reject a given change, and they can modify the suggested change further if needed since they have full editing
rights to the benchmark.

NOTE: As can be seen from the example above, there is some “interpretation” required in some cases.
In the above case, it shows duplication of much of Step 2’s text, when only the highlighted text
actually changed. The Proposed Change capability will improve over time, but for now should
only be used for relatively straightforward changes, and by users familiar with the quirks
(generally benchmark editors).

6.2 BENCHMARK EDITORS


Benchmark editors is a shorthand term for community members of all backgrounds who have editing rights on the
benchmark source. These individuals have been involved as active contributors on previous benchmarks, have proven
their commitment to the BDP, and have been vetted by CIS for this higher level of access. In general, benchmark editors
take a leadership role in developing a given benchmark. They propose and draft new recommendations for review
by the community. In many ways, benchmark editors are similar to maintainers in open source projects in that
they can change the underlying source of the benchmark based on community submissions.

This section covers details of recommendation development that editors typically perform or oversee.

NOTE: This section covers items that are useful for benchmark editors and are not necessary for the
general contributor. Of course, if you are interested in learning more about what benchmark
editors typically do, or the details of what makes a good benchmark, feel free to read this section.

6.2.1 Benchmark Structure


The structure of benchmarks varies slightly from one benchmark to another, but the typical high-level components
in order are:

• Front Matter
o Cover Page
o Terms of Use
o Table of Contents
• Overview
o Untitled introductory paragraph
o Intended Audience
o Consensus Guidance
o Typographical Conventions
o Scoring Information
o Profile Definitions
o Acknowledgements
• Recommendations
• Appendix: Summary Table
• Appendix: Change History

Most of these components are either automatically generated (cover page, terms of use, table of contents, etc.)
or are mostly the same for every benchmark, with minor customizations (for example, the introductory paragraph
for the overview should state which technology versions the benchmark covers and which versions it was tested
against).

Nearly all effort put into developing and maintaining a benchmark involves the Recommendations section, and
the rest of this chapter covers it exclusively.

6.2.2 Recommendation Purposes


The key to creating a useful benchmark is fully understanding the type of content each recommendation should
contain. Writing recommendations is easy; writing recommendations that people find clear and useful is more
difficult. Always remember that each recommendation is intended to be used in some way, usually to remediate
a target asset so it conforms to the recommendation, or to audit a target asset to confirm compliance with the
recommendation. Each recommendation should provide a goal state for the target asset, such as having the
operating system’s full disk encryption capability enabled or having a disaster recovery plan in place. Once you’ve
identified the goal state, you then write the recommendation to explain one or more methods for reaching the
goal state on the target asset (remediation) and confirming the target asset’s state (auditing).

In most cases, the goal state involves one or more configuration items, also known as attributes. The
recommendation would explain how to remediate the target asset’s configuration to reach the goal state—for
example, by using the asset’s administrative GUI to change the configuration, or by editing a configuration file
with a text editor. The recommendation would also explain how to audit the asset to confirm its configuration
complies with the recommendation—for example, by visually checking the value displayed in the asset’s
administrative GUI, or by viewing the contents of a configuration file. To the extent feasible, the remediation and
auditing information should be step-by-step instructions.

In some cases, the goal state doesn’t define specific configuration items, but rather involves processes, policies,
and other non-technical elements of asset security. For these, recommendations speak at a higher level, such as
ensuring there’s a disaster recovery plan for the target asset and creating a disaster recovery plan if one doesn’t
exist. There’s no expectation of the benchmark containing details for how to do these things, since they will
necessarily vary so much among assets and environments.

CIS benchmarks should focus mainly on recommendations specific to the benchmark’s target asset. Generic
recommendations, such as having a backup policy and physically protecting backup media, are usually not as
helpful as asset-specific recommendations. That being said, however, many benchmarks already contain generic
recommendations you could easily reuse in your benchmark, instead of writing new recommendations from
scratch.

6.2.3 Recommendation Scoring


Each recommendation has a scoring status of either scored or not scored. Benchmark conformance is measured
by enumerating all scored recommendations and assessing a target against them. From time to time, it’s difficult
to ascertain what a recommendation's scoring status should be. In general, the status should be:

• Scored – If the recommendation is for a technical control—something for which an actual value can be
automatically or manually collected from a target and compared against an expected value.
• Not Scored – For all cases where the recommendation doesn’t involve an attribute of the target (for
example, a recommendation to ensure backups are centrally available).

In a few cases, our consensus process can’t provide guidance on what a compliant state is. For example, a setting
might have a distinct set of possible values from which the consensus team is unable to make an explicit
recommendation. We could still create a recommendation that an enterprise take the setting under consideration,
but we can’t state a precise recommendation for the setting. Under such circumstances, the recommendation
would be set to “Not Scored”.

Some people have argued that "not scored" recommendations should be omitted from benchmarks, but there
are two compelling reasons to include them.

• First, CIS's mission is to positively affect "best practices in cyber security" for all organizations, including
those with less mature security programs. If benchmarks do not include recommendations advising users
to consider performing regular security audits, monitoring the technology's use, etc., we are missing an
opportunity to positively affect best practices.
• Second, in cases where we cannot define the compliant state, benchmark users will be responsible for
providing the compliant state to complete the recommendation and change its scoring status from “not
scored” to “scored”.

6.2.4 Process for Creating a Recommendation
Creating a recommendation is a twofold process. First, you must identify the goal state for the recommendation,
understand why that goal state is recommended, and determine how to remediate and audit the goal state.
Usually this involves one or more methods, such as reviewing the product’s documentation and existing third-
party security guidelines or experimenting with the product in a test environment. Second, you must document
the recommendation using several standard fields.

You may prefer to do all the research first and then document everything, or to document the recommendation
while you conduct your research. Either way is fine. However, be aware that each recommendation should cover
a single attribute or an integrated set of attributes (for example, a set of access control lists for a file). Your
research may identify multiple attributes that should be remediated and audited separately, in which case you
should write one recommendation for each attribute. It will save you time if you identify the need for multiple
recommendations before documenting them.

Each recommendation contains several mandatory fields and may also contain additional fields. The following
subsections describe each field and provide advice on how to populate it.

6.2.4.1 Scoring Status, Profiles, and CIS Control fields


These three fields are all selection-based; you choose one or more values from already-populated lists.

• Scoring Status: Each recommendation must have a scoring status of either Scored or Not Scored. See the
discussion in Section 6.2.3 for more information on scoring.
• Profiles: Each recommendation must reference one or more configuration profiles. This field has a
checkbox for each defined profile, and you may select as many of the checkboxes as needed. See the
discussion in Section 4.1 for more information on profiles.
• CIS Controls: Each recommendation should be linked to all applicable CIS Controls (which are listed and
defined at https://www.cisecurity.org/controls/). For example, a recommendation for enabling the use
of authoritative time sources for clock synchronization should be linked to CIS v6 Critical Security Control
(CSC) 6.1 (“Include at least two synchronized time sources from which all servers and network equipment
retrieve time information on a regular basis so that timestamps in logs are consistent.”)
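As an illustrative sketch of such a linkage, an audit fragment supporting CSC 6.1 might count the configured time sources on a target. The chrony-style file format and all paths below are assumptions for illustration only, not taken from any published benchmark:

```shell
# Hypothetical audit sketch: count time sources in a chrony-style
# configuration. A temporary file stands in for the real configuration file.
conf=$(mktemp)
printf 'pool 0.pool.ntp.org iburst\nserver time.example.net iburst\n' > "$conf"
grep -Ec '^(server|pool) ' "$conf"
# CSC 6.1 calls for at least two synchronized time sources, so the count
# printed above should be 2 or greater.
```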

6.2.4.2 Title field


The Title field must contain a concise summary of the recommended outcome or result, while being specific
enough that the recommendation won’t easily be confused with any other recommendation in the benchmark.
Here are examples of possible titles:

• Ensure ‘Login Banner’ is set
• Ensure ‘Minimum Length’ is greater than or equal to 12
• Ensure WMI probing is disabled
• Ensure there is a backup policy in place
• Ensure the ‘MYSQL_PWD’ environment variable is not in use
• Ensure a Zone Protection Profile with an enabled SYN Flood Action of SYN Cookies is attached to all
untrusted zones

You may notice that some older benchmarks use a different construction for titles. For example, instead of saying
“Ensure ‘Minimum Length’ is greater than or equal to 12,” a benchmark might say “Set the ‘Minimum Length’ to
12 or greater.” This construction should not be used for new recommendations.

In terms of format, the title should mimic the examples above. The first word of the title should be capitalized,
and the names of specific settings and other proper nouns should be capitalized. All other words should be in
lower case. The title should be written as a phrase, not a complete sentence (e.g., no period at the end of the
text).

6.2.4.3 Description field


The Description field must explain in some detail how the recommendation affects the target or target
environment’s functionality. This usually includes providing basic information about the target’s state or potential
state before the recommendation is implemented. Here is an example of a Description:

Tomcat listens on TCP port 8005 to accept shutdown requests. By connecting to this port and sending the
SHUTDOWN command, all applications within Tomcat are halted. The shutdown port is not exposed to the
network as it is bound to the loopback interface. It is recommended that a nondeterministic value be set
for the shutdown attribute in $CATALINA_HOME/conf/server.xml.

In this example, the first three sentences explain the undesirable state, and the last sentence states the
recommendation to change from the undesirable state to a more secure state.

The Description field should not be overly detailed. For example, there is no need for it to provide step-by-step
instructions for auditing or remediation or to specify the recommended value for the setting, since those will be
included in the Audit Procedure and Remediation Procedure fields.

6.2.4.4 Rationale Statement field


Each recommendation must have a Rationale Statement field which clearly articulates the specific reasons the
recommendation is being made. Statements that rely on phrases like "doing this is best practice" are
unacceptable. The Rationale Statement should provide clear supporting evidence for the security benefits to be
achieved by implementing the recommendation.

It can be hard to differentiate the Description and the Rationale Statement fields. Keep in mind that the
Description field explains what implementing the recommendation is going to do to the target in terms of changing
its functionality, and the Rationale Statement field explains why implementing the recommendation is beneficial
to security. The Description is what will be done, and the Rationale Statement is why it needs to be done. Here is
an example of a Rationale Statement corresponding to the Description example above:

Setting the shutdown attribute to a nondeterministic value will prevent malicious local users from shutting
down Tomcat.

6.2.4.5 Audit Procedure field


The Audit Procedure must provide specific instructions—step-by-step whenever feasible—for determining if a
target is in compliance with the recommendation. Whenever applicable, this should include explicitly stating the
recommended and acceptable values for the setting. Here’s an example of a relatively simple Audit Procedure
field:

Verify the shutdown attribute in $CATALINA_HOME/conf/server.xml is not set to SHUTDOWN.
$ cd $CATALINA_HOME/conf
$ grep 'shutdown[[:space:]]*=[[:space:]]*"SHUTDOWN"' server.xml
The above command should not yield any output.

The beginning of the Audit Procedure should state what is to be done through “verify” language. The term “verify”
is preferred because it indicates the auditor must take action to confirm compliance.

If the Audit Procedure is very simple, such as verifying a particular policy exists, one sentence may be sufficient.
However, in most cases, more instructions will be needed. In the example above, the second and third lines specify
commands the auditor can use to verify the configuration, and the fourth line explains how to interpret the output
of the commands. Whenever feasible, provide commands, regular expressions, short scripts or code examples,
and other practical information auditors can reuse or adapt for reuse.
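As a sketch of this style of reusable check (the file and the 600 mode are purely illustrative and not drawn from any published benchmark; the stat invocation uses GNU syntax):

```shell
# Hypothetical audit sketch: verify a configuration file has restrictive
# permissions. A temporary file stands in for the real target, and the
# stat invocation uses GNU (Linux) syntax.
conf=$(mktemp)
chmod 600 "$conf"
stat -c '%a' "$conf"    # the printed mode should be 600 (or stricter)
```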

Here’s an example of an Audit Procedure field with several items:

Perform the following to verify that the recommended state is implemented:

1. Check to see if the ScoreBoardFile is specified in any of the Apache configuration files. If it is not
present, the configuration is compliant.
2. Find the directory in which the ScoreBoardFile would be created. The default value is the
ServerRoot/logs directory.
3. Verify that the scoreboard file directory is not a directory within the Apache DocumentRoot.
4. Verify that the ownership and group of the directory is root:root (or the user under which Apache
initially starts up if not root).
5. Verify that the scoreboard file directory is on a locally mounted hard drive rather than an NFS mounted
file system.

Although this example is detailed, it does not provide step-by-step instructions. For example, item 1 does not
explain how to find the Apache configuration files or how to check each of them for ScoreBoardFile.
Providing that level of detail would make the instructions extremely long, and most readers probably wouldn’t
need them, so omitting them is acceptable.
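For readers who do want that level of detail, one way an auditor might perform the check in item 1 can be sketched as follows. The configuration directory is an assumption (Apache's layout varies by platform), and a temporary directory stands in for it here:

```shell
# Hypothetical sketch: recursively search an Apache configuration directory
# for the ScoreBoardFile directive; no output from grep means the directive
# is absent, which item 1 treats as compliant.
confdir=$(mktemp -d)
printf 'ServerRoot "/etc/httpd"\n' > "$confdir/httpd.conf"
grep -ri 'ScoreBoardFile' "$confdir" || echo "ScoreBoardFile not present"
```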

For some Audit Procedures, a single set of instructions isn’t sufficient. There is more than one way to perform the
audit, or there is more than one set of conditions that can be met to demonstrate compliance with the
recommendation. Here’s an example of the latter:

Perform the following to verify that the recommended state is implemented:

1. Search the Apache configuration files to find all <Directory> elements.


2. Ensure that either one of the following two methods is configured.
a. For the Deprecated Order/Deny/Allow method:
i. Verify there is a single Order directive with the value of Deny,Allow for each.
ii. Verify that the Allow and Deny directives have values that are appropriate for the purposes
of the directory.
b. For the Require method:
i. Verify that the Order/Deny/Allow directives are NOT used for the directory.
ii. Verify that the Require directives have values that are appropriate for the purposes of the
directory.

An Audit Procedure often combines these approaches, such as having step-by-step instructions that include
commands. Here is an example of prose instructions and commands together:

Perform the following to verify that the recommended state is implemented:

1. Use the httpd -M option as root to check which auth* modules are loaded.
httpd -M | egrep 'auth._'

2. Also use the httpd -M option as root to check for any LDAP modules which do not follow the same
naming convention.
httpd -M | egrep 'ldap'

An Audit Procedure should also list any prerequisites needed to verify compliance. For example, the auditor may
need administrator-level privileges on the target, or a particular tool may need to be installed in order to view the
setting value. Prerequisites should be specified before instructions and commands; otherwise, an auditor may
attempt to follow the instructions and issue the commands before seeing the prerequisites.

Note that all these examples use full sentences with terminating punctuation. Sentence fragments should only be
used in cases where options are being listed, such as the example above introducing instructions for each method
by naming the method.

6.2.4.6 Remediation Procedure field


The Remediation Procedure is similar to the Audit Procedure, except that the Remediation Procedure provides
instructions for implementing a recommendation for a non-compliant target. The Remediation Procedure should
cover how to implement the recommended value and may also cover how to implement other acceptable values.

Here is a simple example of a Remediation Procedure:

To set a nondeterministic value for the shutdown attribute, update it in
$CATALINA_HOME/conf/server.xml as follows:
<Server port="8005" shutdown="NONDETERMINISTICVALUE">

Note: NONDETERMINISTICVALUE should be replaced with a sequence of random characters.
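One possible way to generate and apply such a value can be sketched as follows. This is a sketch only: it assumes OpenSSL is available and uses GNU sed syntax, the benchmark does not mandate any particular method, and a temporary file stands in for the real server.xml:

```shell
# Hypothetical remediation sketch: replace the default shutdown value with
# a random string. A temporary file stands in for $CATALINA_HOME/conf/server.xml.
serverxml=$(mktemp)
printf '<Server port="8005" shutdown="SHUTDOWN">\n' > "$serverxml"
rand=$(openssl rand -hex 16)                   # assumes OpenSSL is installed
sed -i "s/shutdown=\"SHUTDOWN\"/shutdown=\"$rand\"/" "$serverxml"
grep -c 'shutdown="SHUTDOWN"' "$serverxml" || true   # prints 0 once remediated
```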

The Remediation Procedure should make it clear that the target’s state is to be changed through one or more
actions. Terms such as “set,” “update,” “create,” “change,” and “perform” indicate actions.

Another example of a Remediation Procedure indicates which steps can be skipped if the specified conditions are
met:

Perform the following to implement the recommendation:

1. If the apache user and group do not already exist, create the account and group as a unique system
account.
groupadd -r apache
useradd apache -r -g apache -d /var/www -s /sbin/nologin

2. Configure the apache user and group in the Apache configuration file httpd.conf.
User apache
Group apache
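A follow-up verification in the same spirit might confirm the account was created as intended. The helper function below is our own invention for illustration (a hedged sketch assuming a Linux host with getent), not part of the benchmark:

```shell
# Hypothetical verification helper: confirm a service account exists and
# uses a non-interactive shell.
check_service_account() {
  entry=$(getent passwd "$1") || { echo "missing: $1"; return 1; }
  shell=${entry##*:}                       # last field is the login shell
  case "$shell" in
    */nologin|*/false) echo "ok: $1 uses $shell" ;;
    *)                 echo "warn: $1 uses login shell $shell" ;;
  esac
}
check_service_account apache || true       # expect an "ok" line after remediation
```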

6.2.4.7 Impact Statement field


An impact statement should only be used if there are likely adverse security, functionality, or operational
consequences from following the recommendation. For example, if making a new setting take effect will require
rebooting a host, the impact statement should state this. Another example is a recommendation that makes one
aspect of security stronger but as a side effect weakens another aspect of security.

Impact statements should focus on non-obvious impacts. Many recommendations have obvious impacts; for
example, disabling a service means the service will no longer be available. The intent of the impact statement is
to identify the impacts that are less likely to be recognized.

6.2.4.8 Default Value field


This field is used to record the default value for a setting, if it is known. If the default state is that the value isn’t
set, enter “Not set” for this field. If a default value is not applicable—for example, a recommendation that does
not involve a particular setting—just leave this field blank.

If the default value is not straightforward, it’s acceptable to have a verbose explanation here. One example is if
the default value varies based on the underlying platform, in which case you may need to list each possible
underlying platform and the associated default value.

6.2.4.9 References field


References can include, but are not limited to, the following categorical items:

• Common Configuration Enumeration (CCE) identifiers for the setting addressed by the recommendation
• URLs to documentation or articles pertaining to the recommendation. URL references should prefer
vendor sources of information; reputable third-party sources should be used only exceptionally.

The references must be numbered for usability. Here is a sample of a reference list:

1. CCE-ID XXXXX
2. https://tomcat.apache.org/tomcat-5.5-doc/config/server.html
3. https://httpd.apache.org/docs/2.4/programs/configure.html

There is no particular order for the references, but if there are numerous references, they should be grouped by
type at a minimum (for example, all CCE IDs, then vendor URLs, and finally third-party URLs).

6.2.4.10 Notes field
A recommendation may include a Notes field with supplementary information that doesn’t correspond to any of
the other fields. The Notes field is rarely used. One possible use is to mention other recommended actions that
fall outside the scope of the recommendation—for example, deploying Kerberos for organizational use. Another
possible use is to define possible values for a setting, especially if many such values have been defined, each
needing its own explanation. Having such lengthy, detailed material within another field would disrupt the flow
of the recommendation, so placing it in the Notes field keeps it out of the way.

6.2.5 Recommendation Formatting


The following recommendation fields offer the set of formatting options depicted in the bar below: Description,
Rationale Statement, Audit Procedure, Remediation Procedure, Impact Statement, Default Value, and Notes.

Figure 34: Formatting Toolbar for Editing Recommendations and Proposed Changes

Starting on the far left, the first set of three buttons is for Bold, Italic, and Heading:

• Bold is to be used sparingly to indicate caveats or other particularly important information, such as a note
about prerequisites for issuing a command.
• Italic can be used in two ways. First, it can denote the title of a book, article, or other publication. Second,
italicized text set in angle brackets <> denotes a variable for which a real value needs to be substituted.
• Heading is rarely used.

The next set of two buttons is URL/Link and URL/Image. These buttons are used to add a pointer to a webpage,
graphic, or other element with additional information.

The next set of three buttons is for Unordered List, Ordered List, and Quote:

• An Unordered List is better known as a bulleted list. It should be used when there are two or more items
and they are options (look for any of the following values, etc.). It may be used when there are multiple
required items that can be performed in any sequence, but an Ordered List is generally preferred for those
cases.
• An Ordered List is a numbered list. It should be used whenever you are providing step-by-step instructions
where sequence is important. For usability reasons, an Ordered List is generally recommended for any
instructions with more than one step or item.
• The Quote button is used to indicate quoted text. Most benchmarks do not use Quote formatting.

NOTE: There is no such thing as a list (either Ordered or Unordered) with just one item. No such lists
should be used in benchmarks.

The next button is Preview. This can be used to view a non-editable rendering of what the resulting text will look
like. Push the Preview button again to return to editing.

The last two buttons are for a Code Block and Inline Code:

- The Code Block button is used to denote a block of contiguous text as code, commands, or scripts by displaying it in a monospace font and a grey background. See the examples throughout Section 6.2.4 for text formatted as Code Blocks.

CIS Benchmark Development Guide V02 Page 32 of 35


- The Inline Code button is used to mark part of a sentence as “Code” (monospace font). This is generally used to indicate configuration setting names, file and directory names, parameter values, and other similar pieces of text within a sentence.
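As an illustration (assuming Markdown-style backticks for Inline Code, with the parameter and path names invented for the example), Inline Code within a sentence might look like:

```markdown
Ensure the `PermitRootLogin` parameter in `/etc/ssh/sshd_config` is set to `no`.
```

Each backticked span renders in a monospace font, making the setting name, file path, and value visually distinct from the surrounding prose.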

NOTE: Due to limitations in the Proposed Change capability, users should avoid using the formatting toolbar when authoring Proposed Changes, since in many cases it produces a confusing result. This does not happen in all cases, but it is a good general rule.

6.2.6 Recommendation Organization


At a minimum, each benchmark's recommendations are organized into multiple sections. The sections are unique to each benchmark, but they often include the following:

- Planning and Installation. This encompasses any recommendations that apply to preparation before installation or options for the installation itself, such as not installing unnecessary modules, or installing additional modules to provide more security features. Configuration options available both during and after installation should not be placed in this section.
- Hardening. This section is for actions that reduce the target's functionality, remove weak default values, prevent the use of inadequately secure options, delete sample content, etc.
- Access Control. This includes user and group identities, and ownership and permissions for resources (e.g., files, folders/directories, shares, processes, media).
- Communications Protection. This encompasses the cryptographic settings, protocol options, and other security features involved in protecting network communications. Examples include SSL/TLS options, certificate requirements, cryptographic key protection, and restrictions on which versions of network and application protocols may be used.
- Operations Support. This covers security recommendations for typical security operations, such as configuring logging, patching, security monitoring, and vulnerability scanning.
- Attack Prevention and Detection. This addresses the recommendations intended to stop or detect attacks, ranging from the use of features that prevent sensitive information leakage or mitigate denial of service attacks to the use of technologies for detecting malware.

In each of these examples, the name of the section indicates the purpose of the settings. Older benchmarks may
have inconsistent section names involving types of threats or attacks, types of vulnerabilities, etc. Use of such
names should be avoided.

Grouping recommendations into sections makes the benchmark much easier for people to understand, but it has
additional benefits. For example, if most or all access control-related recommendations require the benchmark
user to have administrator-level privileges, that can be stated at the access control section level as a prerequisite
instead of having to list it within each individual access control recommendation.

Most benchmarks have a large enough number of recommendations that they have subsections within most
sections. For example, an Access Control section might have subsections for identities, ownership, file permissions,
process permissions, etc. The general rule of thumb is to use subsections when the number of recommendations
within the section is unwieldy (e.g., dozens) or when the recommendations naturally fall into two or more
categories.

Each benchmark section should have an introductory paragraph or two. It should indicate the overall intent of the
section’s recommendations and point out any significant cautions about the recommendations. For example, a
section on hardening that includes disabling unneeded modules might include text like this in its introduction:

“This section covers specific modules that should be reviewed and disabled if not required for business
purposes. However, it's very important that the review and analysis of which modules are required for
business purposes not be limited to the modules explicitly listed.”

6.3 CIS TECHNOLOGY COMMUNITY LEADS (TCLS)


As previously discussed, the TCL's primary role is to shepherd the various technology communities they lead by growing, supporting, and guiding them in the development of benchmarks. The best way to think of the TCL is not as an expert in all the technologies they lead, but as a skilled project manager who brings together the resources needed to develop a given benchmark in a reasonable period of time. The resources the TCL draws upon always include technology community members but can also include key contractors and other CIS employees with appropriate skills and expertise.

All TCLs have editing rights like benchmark editors and at times fill that role on a given benchmark. Also like benchmark editors, they are similar to maintainers in open source projects in that they can change the underlying source of a benchmark based on community submissions. Every TCL leads multiple technology communities at the same time and generally has multiple benchmarks in development simultaneously. Due to the diversity of technologies involved, no TCL can be an expert in all of them, so they depend heavily on the technology community's editors, other contributors, and the overall consensus process to develop successful benchmarks.

TCLs also have roles beyond those available to the community in general. These include:

- Set up new communities and benchmarks in the CIS Workbench tools
- Finalize and publish completed benchmarks from Workbench to the CIS public website
- Answer community questions and help contributors new to the benchmark development process
- Promote the CIS benchmark development process publicly and encourage involvement
- Recruit additional qualified contributors to their technology communities
- Work with technology providers for early access to releases and/or assist directly with benchmark development
- Develop and test appropriate artifacts and create AAC for use by CIS and various third parties
- Work with third parties to certify their tools to ensure compliance with the appropriate CIS Benchmarks
- Schedule and hold public calls on the status of their communities and benchmarks under development

The bottom line is that the TCL is the overall “glue” that holds the benchmark development process together and gets a result in a reasonable period of time. There are many facets to this job, and TCLs are quite busy, but their primary job is always to help the communities they lead by providing the resources and guidance they need to succeed. Please feel free to reach out to the TCL in any technology community with questions or feedback on a benchmark or the BDP in general. They love getting feedback!

7 HOW SHOULD I PROCEED?


Now that we have covered some of the basics of the Workbench tool, what is the next step? Get involved! Here is a simple process to get started:

1) Join a community of interest: Find one that you have some expertise and interest in, join it, and set your
notifications accordingly.
2) Get involved:

a. Option 1: Dive in immediately and create a new discussion in the community announcing yourself, your expertise, and your availability. The TCL or other community leaders will soon reach out to you to discuss how you can help.
b. Option 2: If you want to start out more slowly, comment on an existing discussion or ticket, and help resolve an issue or clarify a topic.

Feel free to contribute as much or as little as you can, since we value contributions of all sizes. For example, we have contributors who do one of the following:

- Provide spelling and grammar changes to benchmarks. This is indispensable for the creation of a professional result.
- Test proposed recommendations and provide feedback via tickets. This is indispensable for the creation of a reliable and widely applicable result.
- Provide a starting point for a new benchmark that was initially developed outside of the CIS benchmark process by a given company or set of individuals. This then forms the basis of an initial benchmark and a community around it.
- Provide a detailed analysis of the variations in security configuration items from one operating system release to another. This is an essential contribution and helps focus the community to work efficiently on the changes that matter out of potentially hundreds of possibilities.

Diversity of expertise and viewpoints in the community is key to creating a widely applicable and used benchmark,
and any contribution you can provide is valuable and appreciated.

We look forward to your contribution!
