
Understand your Org’s Mission 

Develop your understanding of your organization’s mission. What does it do?


How does it create value? What are its success conditions? Who are its
competitors? What is its vertical? Market Impact? Geographic placement?
Geopolitical considerations for all of the above (as applicable)? What we’re
doing here is thinking like an attacker would, essentially targeting your
organization. There’s something about Sun Tzu in here, I just know it.

Understand Your Environment


Ask yourself: From a technical perspective, what are we testing against? What
will make this strange? What are the idiosyncrasies of your org’s service, data,
transport, and security architectures? Use this time in the schedule to ensure
you have the most recent and accurate documentation possible on all testable
facets of your enterprise.

Know the threats to the mission

Terrain Analysis
Turn the screws on your IT architecture review to more fully understand how
it supports your organizational mission. Why was it built the way it is?
Prioritize assets based on business outcome and recurse into the business
process ← capability ← asset ← infrastructure chains that support them; this
enables threat picture development and actor assessments by helping you
understand probable attack paths and targets. IT Ops should be able to help
here, if not hand you something that answers most of it.
Threat Selection
From your understanding of the mission, architecture, and the interaction
between them, turn the tables and ask “how would I attack this?” and
“who would attack this?” This answer should be informed by the self-
targeting you did 2 steps back. Consider APTs, consider commodity malware,
and consider the tools various actors are known to use and their capabilities.
There will be A LOT. Based on your prioritization of business-critical assets
and/or controls, narrow it down to no more than 2 actors, mixed in phasing and
tempo, to train both Ops and Intelligence functions.

Know Your Controls


Ask yourself: “What is happening where security intersects with infrastructure
at critical points in the architecture? Do my controls work against baseline
threats (i.e., the dirty dozen)? What is the full list of controls and capabilities
operating in the enterprise? Are they enabled?” The output of this step will
later combine with that from threat selection to produce your emulation plan.

Scope The Exercise

Establish Goals
Begin planning in earnest by deciding what you want to achieve: Baseline (or
better yet, up-gun) your tools, procedures, and team? Validate controls in the
wake of a major reorg or infrastructure update? Test new capabilities?

Establish Emulation Control Measures


Control measures fence off areas, assets, identities, and people whose
criticality or sensitivity is such that the risk incurred by testing them directly is
unacceptable to management. Risk is management business and it’s the job of
the infosec and IT ops teams to present them with the data needed to make
informed risk decisions. Speak plainly with the best available analysis and
avoid overstating risk, just qualify it and, where possible, quantify it. Control
measures can be as simple as lists of subnets, hosts, services, identities, or
people.

Determine Controls Under Evaluation


Based on the time and resources available, you may need to limit the number
of controls being tested. Remember that every control, regardless of test
outcome, needs to be documented and debriefed.

Set Timing, Sequencing, and Flow Control

Timing And Schedule:


Planning factors: 3-4 weeks for prep, 1 week for execution. Plan for 4 days’
worth of work per shift. Plan for 1 more day of execution than you think
you’ll need to complete all of your emulations. Shift, daily, and final
reporting cadences should be specified. Phase I threat selections become Master
Scenario Event List items (assuming approved budget and personnel).

Establish the Battle Rhythm


This is where you make money. Don’t skip this part. Bananas. I’m going to
ask a question about fruit, later on, to make sure you read this. Note: the
critical element of purple teaming is in continuous interaction between red and
blue, regardless of whether or not red is automated. Exercise Control should
be leading debriefs of effects, detects, and protects at least twice daily with all
Do-ers in the room.

Effect (test)-based time constraints and debriefs


Set time gates for the blue team to detect and action each effect. If they blow a
gate, advise the red team to move to the next OR provide “threat intel” to
point blue in the right direction. It’s EXCON’s responsibility to understand
the relative value of each scenario and keep the exercise moving. Both a
blown gate and immediate alert have training value and need a debrief.

Empower Trusted Agents

ID and in-brief trusted agents


Senior stakeholders and leadership of red and blue should have full
knowledge of the exercise scenario, specifically red actions and their timing.
NDA them as needed, but be even more certain to impress on them the importance
of limiting what the Do-ers know, as a matter of training value. From the
perspective of safety, TAs will know that something is happening and will
deconflict confusion on the analyst floors when reality pokes its nose in.
Establish Deconfliction Procedures
The exercise controller should have quick access and a close relationship with
IT Ops leaders, and at least two IT Ops techs (one per shift) should be TAs in
order to effect quick deconfliction of emulation effects which may impact production.
Be sure that everyone who has cease-fire authority can contact the red team on
a moment’s notice and that the red team knows who they are.

Create the Emulation Plan

Align Emulations to Controls


Every emulated adversary technique should align to a control or set of
controls to test—this is the core of the emulation plan. There will be A LOT to
choose from, so narrow it down to about 4 days of work for each shift
involved in the exercise. As you think about what those detections are and
how they will look, consider the sigma project as a reference point for
designing rules: https://github.com/Neo23x0/sigma
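To make this alignment concrete, below is a minimal sketch (in Python, purely illustrative) of how an emulation plan entry might record the mapping; the control names and expected signals are placeholders for whatever actually exists in your environment.

    # Minimal sketch of technique-to-control alignment for an emulation plan.
    # ATT&CK technique IDs are real; control and rule names are placeholders.
    emulation_plan = [
        {
            "technique": "T1059.001",              # PowerShell
            "controls": ["EDR script-block policy", "AppLocker"],
            "success_criteria": "detect",          # detect, prevent, or both
            "expected_signal": "sigma: suspicious encoded command",
            "shift": 1,
        },
        {
            "technique": "T1547.001",              # registry run-key persistence
            "controls": ["EDR registry monitoring"],
            "success_criteria": "both",
            "expected_signal": "sigma: run-key modification",
            "shift": 2,
        },
    ]

    # Sanity check: every emulation must map to at least one control to test.
    for item in emulation_plan:
        assert item["controls"], f"{item['technique']} has no control to test"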

Success Criteria
Determine your standard of success. This is generally detection, prevention, or
both.

Prepare a Hint Bank


There’s going to be more than one time when the blue team is stumped—this
is ok and actually good. A blown gate is worth more in training value than an
immediate detection, just be ready to keep the action moving with specifically
crafted “threat intel” notes and packages that can put them back on the right
track or help slide the last piece into place.

Execute the Emulation Plan
…and make sure it counts. By now you’ll have found a way to get your emulations
executed professionally; ensuring the debriefs happen is paramount.

Manage the Ebb and Flow


This is the iterative and on-call portion of the exercise. You’ll quickly see
where SOC teams and red teams alike find their friction points and the art to
this Purple stuff is in nudging the schedule and emulation timing to take
advantage of it. EXCON should be everywhere at once, assessing processes,
information flows, and general competency on both sides.

Exercise Judgement
Safety, Exercise Flow, and PRODUCTION are all subject to a degree of risk
when emulating badness. EXCON should be an experienced practitioner-
leader who knows Red, Blue, and Intel as fluently as IT architecture (which is to say, very fluently).

…and remember, No Discomfort, No Expansion


Debrief in Detail and Report

Hot Wash and Deliver the Initial Outbrief


Every day gets a rundown of catches and misses with both red and blue in the
room. Address the how and why of each, be candid, call out individual
successes and failures constructively.

Produce Audience-Appropriate Reports


Every stakeholder has both a boss and a job to handle; produce reports
accordingly. Some technical reports will require extra time and analysis to
make useful with compensating controls and mitigation plans. Some EXSUMs
will need savvy VPs to weigh in and executize© things into the language of
risk as opposed to vulnerabilities in libc. Talk to people about the things they
care about.

Mitigate and Revalidate Control Gaps

Assess and Enact Mitigations


Ask yourself and your team: Wherever the pipeline failed, how do we fix it
and what are the best compensating controls to stand between now and that
fix? Where do controls so repeatedly overlap as to lose value in maintaining
both rather than dropping one and compensating somewhere else? Security
Architecture analysis comes back into play as red and blue refine both failed
processes and tech. A mitigation is anything that closes a gap or reduces its
impact; the Risk Mitigation Plan is a framework for describing and prioritizing
exercise outputs.

Revalidate Updated Controls


Start up whatever Red capability you used to execute the emulation plan and
throw it at your fresh mitigations to see how they took.

Plan for future iterations

Identify Persistent Gaps


There will still be holes, but they shouldn’t be so big or numerous as before,
and you’ve stepped up your team’s capabilities to the point that the ones you
filled are matters of policy and procedure to cover rather than intense effort.
The ones left over are the subject of compensating controls, longer-term
investments, and the starting point for the next round.

Level Up The Next Exercise


A successful Purple Teaming exercise so plainly demonstrates value that
every stakeholder is going to want more. This is a process that finds maximum
ROI when executed in a spiral of increasing scenario complexity. Any Blue
team becomes purple with the proper measure of Red capabilities mixed in.
Before we start talking about BAS, or Breach and Attack Simulation (you’ll hear me
use the terms interchangeably throughout the course), I want to introduce you to the
concept of threat informed defense.

A threat informed defense is a proactive approach to cybersecurity that utilizes three
elements to provide an evolving feedback loop to your security team.

Those elements are:

 Cyber threat intelligence analysis
 Defensive engagement of the threat
 Focused sharing and collaboration

Threat intelligence analysis is taking existing intelligence data like TTPs, malware
hashes, or domain names and applying human intelligence to harden cyber defenses and
improve ways to anticipate, prevent, detect, and respond to cyber-attacks.

MITRE CRITS

Let’s look at CRITS as an example of what goes into cyber threat intelligence analysis.
CRITS is a tool developed by MITRE and stands for Collaborative Research Into
Threats. It’s open-source and freely available on GitHub. CRITS does a handful of things that
assist with intelligence analysis such as:

 Collecting and archiving attack artifacts
 Associating artifacts with stages of the cyber attack lifecycle
 Conducting malware reverse engineering
 Tracking environmental influences
 Connecting all of this together to shape and prioritize defenses and react to incidents
CRITS itself is outside of the scope of this course, but it gives us a good illustration of
some of the features of cyber threat intelligence.

Defensive engagement of the threat takes what you’ve discovered from intelligence
analysis and allows you to look for indicators of a pending, active, or successful cyber
attack. Breach and attack simulation tools fit in well here because we can take the
behavioral models uncovered during intel analysis and use BAS to automate testing and
reporting on what those behavior patterns look like in our enterprise.

These simulation results can feed back into your threat intelligence analysis and into the
next element we’re going to talk about, which is focused sharing and collaboration.
By sharing threat actor TTPs through standards such as STIX and TAXII, the security
community benefits together. If you are part of a large organization with different
security groups, information shared between groups in a standard format can help your
enterprise build a threat informed defense.
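As a small, hedged illustration of what “a standard format” looks like in practice, here is a sketch using the python-stix2 library (pip install stix2); the indicator itself is made up, and the TAXII publishing step will depend on your server and client.

    # Sketch: package an indicator as STIX 2.1 for sharing. The domain below
    # is a made-up example, not real intelligence.
    from stix2.v21 import Indicator, Bundle

    indicator = Indicator(
        name="Suspected C2 domain",
        description="Observed during purple team intel analysis",
        pattern="[domain-name:value = 'c2.example.com']",
        pattern_type="stix",
    )

    # A bundle is the unit you would publish to a TAXII collection or hand
    # to another security group.
    bundle = Bundle(objects=[indicator])
    print(bundle.serialize(pretty=True))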

Groups like MITRE’s Center for Threat Informed Defense (CTID) bring together
sophisticated security teams from leading organizations around the world to expand the
global understanding of adversary behaviors by creating focus, collaboration, and
coordination to accelerate innovation in threat-informed defense, building on the
MITRE ATT&CK framework.

Now that we’ve talked about the methodology of a threat informed defense we can
begin to talk about Breach and Attack Simulation as a way to operationalize and take a
lot of the manual work out of implementing a threat informed defense.
The general idea behind breach and attack simulation tools is similar:

 Organizations can choose attacker behaviors they want to see executed in their
environment.
 Behaviors are executed by the BAS tool.
 Operators observe the response from security controls.

How these ideas are implemented and additional features provided vary from vendor to
vendor. We’re going to talk about things to consider when you’re investigating BAS
solutions in a little bit, but before we do that I want to talk about why BAS has become
important.

Before breach and attack simulation tools existed, there were still plenty of
organizations implementing or at least partially implementing a threat informed defense.
This work was originally done through purple teaming activities where red teams and
blue teams would work together to improve their security posture. Purple teams still
exist and are beginning to become more popular, but BAS tools can be used to help with
some deficiencies of a manual process.

Time/FTE

 Red Team members are generally highly skilled individuals whose time could be better
spent innovating instead of running scripts and building reports.
 Coordination and sharing of information between red teams and blue teams consumes
time that could be spent implementing projects and defending the enterprise.

Documentation

 Documentation during manual efforts is often lacking because of the time commitment
or lack of resources to document what was done, how it was done, when it was done,
and by whom.
Safety

 Without tight collaboration or understanding between red teams and blue teams on what
exercises are run by whom, against what assets, and when, the idea of testing the security
of your network begins to feel more like a liability than an asset.

When we get into Breach and Attack Simulation use cases later in this course we will
explore in more detail how BAS tools help alleviate at least some of these burdens.
When considering the use of a breach and attack simulation tool for your team, there are a few
different ways that deployment can be done.

Agent-Based Deployment Approach

Agent-based deployments utilize individual assets in your environment to execute tests.


Generally, the agent on the host will be controlled by your BAS console. The agent
executes tests on or from the host and then reports data back to the BAS server on the
success or failure of those tests. Agents are flexible because they allow you to deploy
quickly and into specific areas of your environment.

If you are deploying agents in a production environment, you’ll want to have a good
understanding of how safe this is from your vendor. We’re going to go into testing and
transparency approaches in a bit, and having this understanding will allow you to better
understand the safety of running a BAS tool in your production environment.

The main reason you would choose to deploy in production instead of a lab is that it will
give you more accurate results to measure against.

One of the limitations of an agent-based approach can be proper coverage.

 Understand what your use cases are before investigating BAS tools. This will give you
an understanding of how many hosts, VLANs, operating systems, departments, and
security domains you will test on.
 Do you need just a sample from your enterprise or do you want to be able to execute on
any host in your environment?

Having the answers to these questions in mind, talk to your vendors about how they
license and scale so that they can fit your needs.
Virtual Based Deployment Approach

A virtual based deployment can be executed in a multitude of different ways. This could
be a deployment where agents are being used but as part of an OVA. This could also be
an agentless deployment where packets are replayed to see how the environment
responds.

The main theme across a virtual deployment is that it involves lab components and
should be designed to simulate your production network.

Although this type of deployment allows you to execute actual malicious activity in a
safe manner, it does have some limitations.

Some of the limitations of a virtual based deployment include:

 Accuracy – The accuracy of the tests is only as reliable as the environment the tests are
executing in. If you are executing in a virtual or lab environment and not a production
environment, you risk not having an accurate measure of your production enterprise.
 Complexity – The complexity of a virtual environment can definitely provide you with
testing flexibility in the future. However, the complexity of virtual based deployment
can often add time and expense to BAS projects.

Services Based Deployment Approach

A services-based deployment method often conducts tests by simulating or replaying
attacker behavior from a cloud service against a target or range of targets. This type of
testing is often used as a form of external to internal penetration test, usually focusing
on exploitation activities.

The deployment for services based BAS tools is easy because there usually isn’t
anything to deploy.

One of the limitations of a services-based BAS deployment is that they are often limited
in how robust the testing can be.
Let’s talk about four of the main approaches BAS tools take in how they execute
testing.

It’s important to remember that some BAS tools may incorporate more than one of
these approaches in how they run their tests, so you need to understand what is
important to your use cases and how that works with the BAS tools you are
investigating.

Behavior Emulation Testing Approach

Behavior Emulation is taking specific behaviors of attackers and re-creating them as
unit tests in the BAS platform. This is generally a production-safe approach because you
are able to focus on specific behaviors instead of payloads. If you believe a behavior to
be a risk to your production network, you can choose other behaviors that may occur
before or after the exploitation phase of an attack.
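As an illustration, here is a minimal sketch of what a behavior-emulation unit test can look like for run-key persistence (ATT&CK T1547.001) on a Windows lab host, written in Python; the value name and payload are harmless placeholders, and you should only run something like this where you are authorized.

    # Sketch: emulate run-key persistence, then clean up. Windows only.
    import winreg

    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
    VALUE_NAME = "bas_unit_test"   # illustrative marker value

    def execute_behavior():
        # The attacker behavior under test: write a run-key value.
        key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, RUN_KEY)
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ, "calc.exe")
        winreg.CloseKey(key)

    def clean_up():
        # Remove the artifact so the test is safe and repeatable.
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                             winreg.KEY_SET_VALUE)
        winreg.DeleteValue(key, VALUE_NAME)
        winreg.CloseKey(key)

    execute_behavior()
    # ...confirm your EDR/SIEM produced the expected detection, then:
    clean_up()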

Behavior Emulation generally focuses on pre- or post-exploitation activities. If your use
cases are focused on exploitation activities only, this may be a limitation to consider.

Behavior Replay Testing Approach

Behavior Replay is generally done by replaying packet captures of actual attacks.

 It allows you to replicate the actual behavior of an attacker, including actual exploitation.
 Behavior replay allows for more robust network-based testing.

Testing actual behavior with actual exploitation can also be a drawback if you desire to
test your production assets since these tests are much harder to make safe.
Malware Detonation Testing Approach

Malware detonation is similar to sandboxing, but with a focus on how efficiently your
security controls respond instead of understanding how the malware operates. Malware
detonation is essentially taking known malware samples and executing them in your test
environment. This is good if you have a targeted use case for understanding how your
security controls stand up to the exploitation phase in a very real way. Obviously, this
carries a large risk of impacting the environment it is run in and is not safe for
production.

Services Based Testing Approach

Services based testing approaches vary widely and can use a combination of all of these
testing approaches. They may even include human elements that analyze or assist in the
operation of the test.

Because services based testing can be so different from provider to provider, it’s
important to have a grasp of what is in scope and out of scope for testing and how often
testing will be done.
How transparent the actual tests you are executing are can vary from solution to solution.
Some BAS solutions may even take multiple approaches, or vary the degree of transparency
they offer into their content.

Blackbox Approach

A blackbox approach leaves little visibility to the operator. Limited flexibility of testing
and the uncomplicated nature of a blackbox approach may be valuable to less mature
security organizations looking to put some sort of security control validation project in
place. However, larger or more experienced organizations may experience difficulty in
the lack of detail offered by a blackbox approach.

This type of approach can also limit red team involvement, leaving their experience out
of the validation project.

Glassbox Approach

A glass box approach is a much more open approach than a blackbox approach. In a
glass box approach, operators can view details of how the test is being run. They can get
a deeper understanding and in some cases make changes to the configuration of how
tests are executed. An example of a glassbox approach would be packet capture replay
solutions. In this case, you are able to see the traffic being used as part of the test, but
there is little to no modification available.

A glassbox approach is useful for larger or more mature organizations that would like to
implement a breach and attack simulation tool but may not have the resources or desire
to manage an openbox approach, though this may also become a scalability limitation. If
your organization does have the resources and expertise to achieve more control over how
tests are executed, a glassbox approach may be somewhat limiting.
Openbox Approach

An openbox approach takes the same approach as a glassbox approach; however, the
source code of the tests is made available to operators. This allows for full transparency
and customization of how the tests are executed.

An openbox approach provides mature security organizations a ton of flexibility.
However, this testing approach may be dangerous if operators don’t have the experience
necessary to properly write tests and the proper guardrails are not in place in the BAS
tool.

Although there are a few frameworks you could lay on top of BAS testing tools, the
most prevalent is the MITRE ATT&CK Framework. Along with many defensive tools,
breach and attack simulation tools often align themselves with the MITRE ATT&CK
Framework.

This makes sense for organizations that are trying to find a way to match security
controls to offensive tactics.

MITRE has organized attacker techniques into multiple categories along the attack
chain. On the MITRE ATT&CK website, you can drill into techniques under each
category to get a better understanding of how a technique works, threat groups known to
use the technique, how to mitigate and detect the technique, and references to articles on
the technique.

Some breach and attack simulation tools allow you to understand where your defensive
gaps may lie in the context of MITRE ATT&CK.

 If the tool aligns to ATT&CK, you should be able to design your test based on
techniques that are used by known threat actors.
 If the tool doesn’t have direct MITRE ATT&CK alignment, you can use a freely
available online tool like the MITRE ATT&CK Navigator to understand the attack
patterns of known threat actors and then find tests within your BAS tool that align to
those techniques.
As a reminder, labs are restarted and all data in the labs will be lost every Monday,
Wednesday, and Friday between 7 pm - 10 pm PDT.

 It is not advised for you to continue if you are close to the restart window;
rather, wait until the restart window has closed.
 It is not advised for you to continue if you do not have the next 45 - 90 minutes
available to work on the labs.
Continuous security validation is the process of taking your existing individual security
controls, creating unit tests for those controls, executing those tests, and analyzing the
results.

For example:

 You have a DLP solution or a Firewall, and you are using it to block a specific
rule or action.
 For every rule or action you create, you should also design a test for that rule.
 If I’m blocking a specific domain or URL, I would create a test that tries to
reach that domain or URL.

If I’m blocking a specific text pattern in my DLP, I would create a test that would try
and mimic that pattern and exfiltrate data.

Let’s keep it simple and stick to the Firewall example with a blocked URL. We will call
it www.blockme.com.

Once my rule has been created and the policy has been pushed to block
www.blockme.com on the firewall, I create a test using a BAS tool or even scripting to
try to make a connection from my network through the firewall and out to
www.blockme.com.

Now I execute the test and make sure that the results come back that it could not
connect.

It’s important to remember to execute this sort of testing against all firewalls to validate
that the policy you pushed was deployed correctly.
Once we’ve validated that our test to www.blockme.com is actually being blocked, we
need to schedule this test to occur regularly so that we can be certain that the rule we put
in place continues to work as desired.
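If you don’t have a BAS tool yet, even a small script can serve as this unit test. Here is a minimal sketch in Python using the requests library; note that a firewall that serves a block page rather than dropping traffic would need a check on the response content instead.

    # Sketch: validate that www.blockme.com is actually blocked.
    import requests

    BLOCKED_URL = "http://www.blockme.com"

    def url_is_blocked(url=BLOCKED_URL, timeout=5):
        try:
            requests.get(url, timeout=timeout)
        except requests.exceptions.RequestException:
            return True    # connection failed: the rule appears to work
        return False       # connection succeeded: the rule is not working

    if url_is_blocked():
        print("PASS: www.blockme.com is blocked")
    else:
        print("FAIL: www.blockme.com was reachable")

Scheduled through cron, a CI job, or the BAS tool itself, this becomes the recurring validation described above.
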
Alright, so you’ve identified your deficiencies while performing gap analysis. It’s
time to put your plan into action and start selecting tools you will purchase to cover
those gaps. Here’s the problem – you want to be as certain as possible that the tool you
are about to spend a lot of money on actually follows through on the promises to fill
those gaps.

By taking a scientific approach that is measured and repeatable with each solution to be
tested, you can make sure that you are choosing the best tool to meet your needs. BAS
tools fit in well here because they allow you to take a lot of the manual process and
documentation out of the equation.

Here are some suggestions I’ve given security teams in the past:
 Make sure your testing scope only includes tests that make sense for the solution
you are evaluating. It doesn’t make sense to run credential theft testing against a
network firewall solution, and doing so can skew results.
 If possible, execute your testing in production to get the most accurate picture of
how the product will perform in your environment.
 Use a control – For example: If you are testing endpoint solutions, make sure
that one of the hosts you are testing does not have that endpoint solution
installed. This allows you to see where there may already be some overlap in
coverage or a false reading in your testing.

Another side benefit of performing testing this way is that when you do choose a
solution to purchase and implement, you will already have the test designed that will
help to verify that your implementation is correct. You can also use this same test plan
continuously with that security control to ensure that environmental changes to your
enterprise do not affect how the security control operates.
Red Teams are expensive and highly specialized. They should be innovating, not
playing gotcha! Blue Teams are overworked and spread too thinly. They should be
hunting, not maintaining.

Purple Teaming is an organizational concept by which red and blue functions occur
simultaneously, continuously, tightly coupled, and with full knowledge of each other’s
capabilities, limitations, and intent at any given time.

Given reliable access to red capabilities, this methodology allows security teams to
iteratively increase program maturity as a product of continuously clearing low-effort
attacks from the board.

Let’s take a look at the workflow of a purple team.

1. Red Team executes iterative attacks against friendly cyberspace, tuned to
replicate adversary capabilities and prevent irrecoverable disruption
2. Stopped attacks generate reports of detection and mitigation details back to the
Red Team
3. Successful attacks generate reports of the attack method and exposure details
back to the Blue Team.
4. Red and Blue Teams jointly debrief all actions in coordination with IT Ops;
mitigations emplaced, attack techniques refined, attack surface reduced
5. Continuous testing and improvement refines detection capabilities and enables
ever-more difficult scenario execution, which in turn refines them further.

 Breach and Attack simulation tools can help with Red Team execution by
providing a platform to make sure test procedures are safe, controlled, and
documented.
 Integrations with other defensive security tools like EDR, Firewalls, AV, and
IDS/IPS can allow BAS tools to provide instant feedback in a centralized
manner to the Red Team.
 Those same integrations can provide instant feedback and centralization for Blue
Team members as well. Some BAS platforms will also provide mitigation
information to the Blue Team.
 During the joint debrief, data collected by the BAS tool can be analyzed by both
Blue and Red team members. This data can be used as suggestions for both sides
on the next piece, which is
 Continuous testing and improvement. Breach and attack simulation tools allow
you to begin automating many of the low-level tasks the red team is doing so
that they can continue to innovate. Blue teams are also provided with a way to
run those lower-level red team tasks themselves to validate that the measures
taken to resolve red team discoveries are always working.
Quality Assurance testing can utilize BAS tools to help make sure security
configuration on golden images or new server deployments is correct. Testing your
golden image with a BAS tool can greatly decrease the risk of deploying new
workstations with improper configuration.

Here are a few things to keep in mind:

 Design your tests to match the security controls you put on the host. This may
include things like bypassing UAC, privilege escalation, registry modification,
or credential theft.
 Don’t just focus on security tool testing. Consider testing operating system
policy and other native controls.
 Utilizing a BAS tool with RBAC features can allow Desktop QA engineers to
execute testing without having access to results for separation of duties.
 Utilizing a BAS tool with an API can allow the process of testing to be baked
into QA automation tools, as in the sketch after this list.
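Because every vendor’s API is different, the following is only a shape to aim for: a hedged Python sketch of a QA pipeline step that runs a pre-built golden-image assessment through a hypothetical BAS REST API. The URL, endpoints, and field names are all made up for illustration.

    # Sketch: fail the QA pipeline if the golden-image assessment fails.
    # Every identifier below (URL, endpoints, fields) is hypothetical.
    import requests

    BAS_API = "https://bas.example.internal/api/v1"      # hypothetical
    HEADERS = {"Authorization": "Bearer <api-token>"}    # hypothetical

    def validate_golden_image(host_id):
        # Kick off the assessment against the freshly imaged host.
        run = requests.post(f"{BAS_API}/assessments/golden-image/run",
                            json={"host": host_id}, headers=HEADERS).json()
        # Fetch results and fail the pipeline on any failed test.
        results = requests.get(f"{BAS_API}/runs/{run['id']}/results",
                               headers=HEADERS).json()
        failed = [t for t in results["tests"] if not t["passed"]]
        if failed:
            raise SystemExit(f"{len(failed)} security tests failed on {host_id}")

    validate_golden_image("new-workstation-042")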

In a world where we are seeing more and more automated deployment of servers, it
makes sense that security teams are becoming more and more involved with the quality
assurance of these servers. Breach and Attack simulation tools can allow security teams
and server deployment teams to feel confident in the configuration and setup of new
assets.

Some things to consider when using BAS in conjunction with server deployment:

 Don’t forget about a threat informed defense – keep tests lightweight and fast by
only testing what you’ve discovered from intel analysis.
 Utilize a BAS tool with an API to automate the process and test rapidly.
 Using a BAS tool that integrates with your security stack can help security
operations teams quickly pinpoint what failed if a test does not pass.
Assessment Design Theory
Before opening any sort of breach and attack tool, you should have a plan. Put this plan
on paper first so that everyone involved knows what is involved in testing. You will
eventually have multiple test plans that will each translate into different assessments.

Each test plan should include:

 Questions To Be Answered
 Assets To Be Tested
 Scenarios To Run
 Testing Schedule

Questions To Be Answered

It’s pointless to run any sort of testing if you don’t know what you are testing for. If you
are still in that “don’t know what you don’t know” phase, that’s fine. Here are some
thought starters that might help you out.

 Which threat groups are known to target my industry?
 Which techniques do they commonly use?
 Which areas of my enterprise are essential for business continuity?
 Which security controls do I have that are questionable?
 What security controls do I have?

If you work for a law firm, you may be concerned about groups like APT19. Try
understanding the techniques that are commonly used by APT19 by reviewing the threat
intelligence data provided by MITRE ATT&CK.

With the understanding that APT19 will often use PowerShell, it may help to dig deeper
into the procedures used in the PowerShell sub-technique. Even in a general sense, you
now may have some questions about how well your PowerShell deployment is secured.
Creating A Test Statement

Combine your questions with a hypothesis to make a test statement. I suggest starting
by being more specific with test statements in the beginning. Eventually, you may find
that you don’t need to be as specific. The idea is to generate as many test statements as
you can on the first run.

Question: Can encoded PowerShell commands execute in our environment?

Hypothesis: I believe that only administrators can execute PowerShell commands in
our environment, but not encoded PowerShell commands.

Test Statement: Any user with rights less than a local administrator cannot execute
encoded PowerShell commands in our environment.
-OR-

Test Statement: Any user with rights less than a local administrator cannot execute
ANY PowerShell commands in our environment.

-OR-

Test Statement: Any administrator cannot execute encoded PowerShell commands in
our environment.
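For context, the “encoded PowerShell commands” in these statements are typically built by Base64-encoding a UTF-16LE string and passing it to powershell.exe -EncodedCommand. A quick Python sketch (with a harmless placeholder command) shows the mechanics a test scenario would exercise:

    # Sketch: build an encoded PowerShell command for a test scenario.
    import base64

    command = "Get-Process"    # harmless placeholder command
    encoded = base64.b64encode(command.encode("utf-16-le")).decode()

    # A scenario would run this as the user under evaluation:
    print(f"powershell.exe -EncodedCommand {encoded}")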

Assets To Be Tested

Based on the test statements you’ve created, you will want to identify all of the assets in
your environment that should be involved in testing. Do your statements involve the
entire enterprise, a specific business unit, or a specific technology? With each test
statement, add in assets that would prove or disprove the statement. Remember, all of
these guidelines are flexible as you are creating your plans.

For example:

Test Statement: Any user with rights less than a local administrator cannot execute
encoded PowerShell commands in our environment.

Assets: 2 Workstations from each business unit, one with policy A and one with Policy
B

-OR-

Assets: All critical Windows assets


Scenarios To Run

Deciding which scenarios to run for each test statement may seem daunting. There’s a
lot to choose from. The good news is this is iterative. That’s right, if you combine your
test planning with Purple Teaming, you are only going to get better. Here are some
things to consider when planning which scenarios to run for your test statement:

 Do my test statements align at all with any ATT&CK techniques? What about tactics?
 Which security tools do my test statements include?
 What types of security controls do my test statements include?
 What operating systems are included in my assets to be tested?
 Are there any special considerations on the assets to be tested?
 Does the scenario prove or disprove the test statement?
For each test statement, you will need to find at least one scenario that proves or
disproves the test statement. Add these scenarios to your test plan.

For example:

Assets: 2 Workstations from each business unit, one with policy A and one with Policy
B

Scenarios: Execute Encoded PowerShell Command


Testing Schedule

The schedule of when assessments are executed may seem like a minor consideration.
However, when an assessment is scheduled to run can ultimately impact your results.
Here are some things to think about when determining when and how often to run an
assessment:

 Are the assets to be tested remote or local? Always available? Business-critical?
 Will the scenarios have any impact on local users?
 How often could my test results change?
 How does this testing fit in with the overall IT/Sec Ops schedule?

Add the testing schedule to each test statement in your testing plan.

For example:

Test Statement: Any user with rights less than a local administrator cannot execute
encoded PowerShell commands in our environment.

Assets: 2 Workstations from each business unit, one with policy A and one with Policy
B

Scenarios: Execute Encoded PowerShell Command

Schedule: Execute at least twice a month, no closer than 10 days apart
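Written as data, that complete entry might look like the sketch below (Python, purely illustrative); keeping plans in this form makes them easy to version-control and feed to automation.

    # Sketch: one test plan entry, using the values from the example above.
    test_plan_entry = {
        "test_statement": ("Any user with rights less than a local "
                           "administrator cannot execute encoded "
                           "PowerShell commands in our environment."),
        "assets": ["2 workstations from each business unit",
                   "one with Policy A and one with Policy B"],
        "scenarios": ["Execute Encoded PowerShell Command"],
        "schedule": {"frequency": "at least twice a month",
                     "minimum_spacing_days": 10},
    }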


Antivirus/Signature-based Testing
Anti-virus is one of the oldest methods of security prevention. Although a lot of new
technologies have been thrown on top, at the heart of that tech you will still often find
signature-based detections. Operationally, AV is supposed to be a set-it-and-forget-it
technology with regular signature updates being applied. In reality, complicated policies
and deployments can often hinder updates or be misconfigured.
EDR Testing
EDR, or Endpoint Detection and Response testing looks at many more behavioral
aspects of an attack. Many EDR tools even align with MITRE ATT&CK. Testing these
tools involves an understanding of how the EDR tool works with a combination of how
an attacker behaves. In the lab for this course, we looked at persistence techniques as an
easy way to start testing your Endpoint Detection and Response tools.

Content Filter Testing


Content filters may be simplistic in concept but can quickly become unreliable if not
configured properly. Creating test statements for content filtering should take into
account what types of content should and should not be filtered and any policies that
may differ from a default corporate policy.
What is ATT&CK® Navigator?
According to the GitHub page for the project, “ATT&CK® Navigator is designed to
provide basic navigation and annotation of ATT&CK® matrices...” To put it simply,
Navigator is a robust tool that allows for interaction and visualization of the
ATT&CK® matrix.

The ability to import and export to and from other tools provides interactivity with the
ATT&CK® matrix for security teams, and features like risk scoring and coloring allow
security teams to better understand how their organization maps to the ATT&CK®
matrix.
Who is ATT&CK® Navigator for?
ATT&CK® Navigator has features that can help with nearly any job role in Information
Security. For example:

 CISOs will find the visualization, scoring, and reporting features useful when trying to
calculate and explain risk to other executives.
 Red Teams, Blue Teams, or the combination of the two in a Purple Team will find the
visualization, scoring, commenting, imports, exports, and pretty much all of the features
of ATT&CK® Navigator to be incredibly useful.
 Cyber Threat Intelligence (CTI) Analysts and teams can use ATT&CK® Navigator to
map and synchronize their intelligence reports to other departments so that they are
actionable.

Before beginning labs:

1. Log in to your lab machine with the following credentials:


Username: student
Password: academy123

2. Open the terminal application from the launch bar on the left-hand side of the screen
to enter the ‘Home’ directory.

3. Change directories to the ‘Navigator’ directory in your home directory.

 cd Navigator/

4. Clone the Navigator Git repository to the Navigator folder.

 git clone https://github.com/mitre-attack/attack-navigator.git . 

5. Change directories to the new nav-app directory.


 cd nav-app/

6. Install the local Navigator package.

 npm install

NOTE: Warnings about “SKIPPING OPTIONAL DEPENDENCY” are normal.

7.  Ask Angular to build and serve the Navigator application.

 ng serve

 8. Open the Firefox web browser and go to http://localhost:4200.

In July 2020 the MITRE ATT&CK® team released sub-techniques. Sub-techniques
were introduced to address the levels of granularity in the techniques. Some of the
original techniques were very broad and covered a lot of activity, while some were
narrow.

What Is a Sub-Technique?

According to MITRE, sub-techniques are a way to describe a specific implementation
of a technique in more detail.
ATT&CK® Navigator is organized into layers. You have the ability to create multiple
layers, adjusting them to suit your needs. Within each layer, you have the ability to
select multiple techniques or sub-techniques and color code them based on a risk score.

Determining how to assign risk is ultimately up to your organization.

Navigator Layers

Layers can be used to organize your Navigator, think of each layer as a fresh worksheet
that already has the Navigator template waiting for you to fill in.  Layers are modular in
the sense that they can be imported and exported, independent of each other.

Layers can also be combined to help better align risk scoring for your organization to
the MITRE ATT&CK® Framework.

Create New Layer

The Create New Layer option allows you to create a layer using ATT&CK® versions 4
through 8; it also provides the option of choosing the domain.

 Enterprise - Focuses on traditional enterprise security.
 Mobile - Focuses on mobile security.
 ICS - Focuses on industrial control system security.

Open Existing Layer

ATT&CK® Navigator allows you to upload JSON formatted files that contain layer
information from your local hard drive or a remote URL. Importing from a remote URL
is handy when you are doing things like using the Center for Threat Informed Defense’s
ATT&CK® to NIST 800-53 control mappings.
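For reference, a hedged sketch of the kind of JSON a simple layer file contains is below; the field names follow the Navigator layer format, but check the layer format documentation in the attack-navigator repository for the exact spec of the version you are running.

    # Sketch: write a minimal Navigator layer file. Scores and comments
    # here are illustrative; version fields are omitted for brevity.
    import json

    layer = {
        "name": "Purple team priorities",
        "domain": "enterprise-attack",
        "description": "Techniques scored during exercise planning",
        "techniques": [
            {"techniqueID": "T1059.001", "score": 80,
             "comment": "APT29 overlap"},
            {"techniqueID": "T1547.001", "score": 50},
        ],
    }

    with open("purple_team_layer.json", "w") as f:
        json.dump(layer, f, indent=2)

The resulting file can then be loaded through Open Existing Layer.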

Create Layer From Other Layers

 Domain - this is the domain you are choosing to use for the new layer you are creating
from your existing layers. You can also choose the ATT&CK® version.
 Score Expression - can be used to combine scores formulaically from multiple layers
into a single layer.
 Gradient - allows you to choose the layer the color gradient for heat mapping will be
assigned from.
 Coloring - allows you to choose which imported layer you will use manual coloring
from when creating your new layer.
 Comments - allows you to choose which layer you would like to import comments from.
 States - allows you to choose if and where enabled and disabled states need to be
imported from a layer.
 Filters - allows you to choose the layer filters will be applied from.
 Legend - allows you to choose which layer to import the legend from.

Create Customized Navigator

Default Layers

The Add a layer link allows you to enter the URLs of layers hosted on the web. The custom
navigator will open these layers by default. If the layers you wish to display are hosted
in different locations, add the URL of each layer you want displayed by default on your
customized Navigator.

Navigator Features
The options in this section allow you to enable and disable features such as:

 Tabs
 Technique selection
 The ‘MITRE ATT&CK® Navigator’ header
 Subtechniques

Selection Controls
The options in this section allow you to enable and disable features such as:

 Search panel
 Multiselect panel
 Deselect all button

Layer Controls
The options in this section allow you to enable and disable features such as:

 Layer info panel


 Download layer button
 Export render button
 Export to Excel button
 Filters panel
 Sorting button
 Color setup panel
 Hide disabled techniques button
 The ability to change the current matrix layout
 Legend panel

Technique Controls
The options in this section allow you to enable and disable features such as:

 The ability to disable techniques


 The ability to assign manual colors to techniques
 The ability to score techniques
 The ability to add comments to techniques
 Button to clear all annotations.
The toolbar is broken down into three sections:

 Selection Controls - These controls are used to select the different techniques or sub-
techniques in ATT&CK® Navigator that you wish to work with.
 Layer Controls - These controls apply to the entire layer that you are working on.
 Technique Controls - This set of controls allows you to work with the different
techniques you have selected with selection controls.

Selection Controls
Selection Behavior

Selection behavior allows you to select the same technique across all tactics by only
selecting it under a single tactic.

The other option available under selection behaviors is to select sub-techniques with
parent. This allows you to select all sub-techniques under a parent technique by only
selecting the parent technique, or vice versa.

Search

The search panel allows you to search across all of the techniques in the MITRE
ATT&CK® matrix by:

 Name
 ATT&CK® ID
 Description
 Data Sources

Multi-Select

The multi-select button allows you to select multiple techniques based on:
 Threat groups
 Software
 Mitigations

Deselect

The deselect button clears out any of the techniques you currently have selected.

Layer Controls
Layer Information

Information about the layer such as a name and description can be set in this panel.

Additionally, you will find the domain type and ATT&CK® version information on this
panel.

Custom metadata name and value pairs can also be added by clicking the add more
metadata button.

An example of this could be:

name: nistcontrol
value: AC-2

Utilizing the additional metadata function can be helpful in organizing your ATT&CK®
Navigator layers.

Download Layer as JSON

This button gives you the ability to download your Navigator mappings as
a JSON formatted file.

Export to Excel

This button gives you the ability to download your Navigator mappings as an xlsx
compatible file.

Render Layer to SVG

This button allows you to download your Navigator mappings as a vector graphic file.

Filters

The filter button allows you to toggle between showing/hiding the techniques related to
the following platforms:

 Linux
 macOS
 Windows
 Office 365
 Azure AD
 AWS
 GCP
 Azure
 SaaS
 PRE
 Network

Sorting

This button allows you to sort (ascending or descending) techniques in alphabetical
order or in order of score.

Color Setup

The color setup panel allows you to select your color palette for scoring techniques.
You can assign low and high values along with colors that match those values to build a
color gradient into your mappings when applying scores.

This panel also comes with some pre-built color palettes to make changes and
modifications easier.

Show/Hide Disabled

The show/hide disabled button allows you to toggle between showing and hiding
techniques whose state you have set to disabled through the use of the toggle state
button.

Expand Sub-techniques

The expand sub-techniques button allows you to expand sub-techniques from under the
parent techniques.

For example, the Malicious Link and Malicious File sub-techniques are shown as
expanded from under their parent technique of User Execution.

Collapse Sub-techniques

The collapse sub-techniques button allows you to collapse the sub-techniques back
under the parent techniques if they are already expanded.

Continuing with the example from the Expand Sub-techniques section, you can see that
both the Malicious Link and Malicious File sub-techniques have been collapsed back
under the parent technique of User Execution.

Matrix Layout

The matrix layout panel gives you a few options when setting up the layout of your
Navigator ATT&CK® Matrix. It allows you to toggle the display of both technique ID
and/or technique name. It also gives you three options, chosen by drop-down, for how
the matrix is actually laid out in Navigator.
 side layout - This is the default layout for ATT&CK® Navigator. It sets up the parent
techniques to have their sub-techniques expanded from the side.
 flat layout - This layout sets up the parent techniques to have their sub-techniques
expanded from the bottom.
 mini layout - This layout sets up the entire matrix to be minimal, relying on
tooltips that pop up when you hover your mouse over a technique. Very light visual
cues, such as light and dark box outlines, show a parent technique and how many sub-
techniques it contains.

Technique Controls

Technique controls are used to manipulate and add context to selected techniques.

Toggle State

The toggle state button sets the selected technique(s) as disabled or enabled. The view
of disabled techniques can be toggled with the Show/Hide Disabled button.

Background Color

The background color button opens a panel that allows you to choose a background
color to apply to a selected technique(s).

Scoring

This button allows you to apply a score to a selected technique(s). When combined with
the options in the color setup panel, this feature brings strong visualizations when it
comes to prioritizing ATT&CK® techniques.

Comment

The comment button allows you to apply comments to selected techniques.

Clear Annotations on Selected

This button clears all annotations (state, color, score, and comments) on selected techniques.

Select Section
1. Click on Create New Layer
2. Expand More Options
3. Set the version to ATT&CK v8
4. Click the Enterprise button
5. Use the search button under selection controls and search for the term Password
6. Click the select all button
7. Using the background color button under technique controls, change the
background color of your selection from steps 5 and 6 to red.
8. Click the toggle state button to disable the selected techniques.
9. Click the show/hide disabled button to hide the disabled techniques from view.
10. Click the x on the tab at the top of your layer to close it out.
11. Click on the color selection icon under the layer controls menu. On the presets
drop-down, at the bottom of the menu, click on blue to red to change the score
gradient to blue for a low score and red for a high score.
12. Select Multiple Techniques For A Threat Group
13. Under selection controls, choose the multi-select icon.
14. Find APT29 under Threat Groups, and click the select button.
15. Assign A Threat Score
16. Click on the scoring button under technique controls, and give the selected
techniques a score of 80.
17. Assign More Scores
18. Click the deselect button under selection controls to unselect the 20 techniques that
were chosen in step 14.
19. Under selection controls, choose the multi-select icon.
20. Under software, locate DarkComet and click view to open the MITRE ATT&CK web
page for DarkComet.
21. Record the names of the two threat groups listed who use the DarkComet
Software. You will need this information for your final assessment.
22. Return to the ATT&CK Navigator and click the select button for DarkComet.
23. Give the selected DarkComet techniques a score of 50.
24. Use the deselect button to unselect all of the DarkComet techniques.
25. Use what you’ve learned so far to add a score of 25 to techniques that can be
mitigated with Active Directory Configuration.
1. Return to your open Navigator page and click on the + sign next to the Sable
Bluff Tab. This creates a new layer.
2. Click on Open Existing Layer.
3. Paste the following URL into the Load from URL text box: https://raw.githubusercontent.com/center-for-threat-informed-defense/attack-control-framework-mappings/master/frameworks/nist800-53-r5/layers/by_family/AC/AC-2.json
4. Click on the little arrow next to where you pasted the URL to import the layers.

Exercise 5 - Exporting an Image

1. From the Sable Bluff layer, click on the render layer to SVG icon under layer controls.
2. Remove the filters display from the report. Click on the display settings button and
uncheck the box for show filters.
3. Click the download svg icon.
4. Choose to save the file and then click the Ok button.

Exercise 6 - Exporting To A Spreadsheet

1. From the Sable Bluff layer, click the export to excel icon under layer controls.


2. Choose to Save File and then click the OK button.
Let’s review the file. First we need LibreOffice. These steps are
OPTIONAL and only apply to students running Navigator locally.

3. Click the new tab icon in the terminal window


4. On the new terminal tab, run the command:
sudo apt-get install libreoffice
5. Enter the password academy123 when prompted.
6. When the installation completes, open the file explorer in the GUI and go to the
downloads folder.
7. Open the Sable_Bluff.xlsx to see what it looks like.
8. Close the program, and hold on to the file. You will need it for your final assessment for
this class.
