

Standardised penetration testing? Examining the
usefulness of current penetration testing methodologies.
A case study of the Dutch penetration testing industry.

22 August 2019

Niek Jan van den Hout


PUBLIC VERSION

Supervisor
Prof. dr. Paul Dorey
Information Security Group

Submitted as part of the requirements for the award of the


MSc in Information Security
at Royal Holloway, University of London.
Acknowledgements

I would like to thank my project supervisor prof. dr. Paul Dorey for his guidance and
valuable feedback. In addition I would like to thank all testing providers for their
insights and their willingness to be interviewed and participate in the survey.

Table of Contents

Acknowledgements ...................................................................................................2

List of Abbreviations ................................................................................................4

Executive Summary .................................................................................................5

1. Introduction ......................................................................................................7
1.1 Introduction................................................................................................................. 7
1.2 Methodology ..............................................................................................................11

2. Introduction to security testing ...................................................................... 12


2.1 Software testing..........................................................................................................13
2.2 Security engineering and the development lifecycle ....................................................16
2.3 Security testing...........................................................................................................18
2.4 Standardisation within the British and Dutch security testing industries .......................22

3. Penetration testing business process .............................................................. 26


3.1 Describing a business process .....................................................................................26
3.2 The penetration testing business process .....................................................................27

4. Penetration testing methodologies ................................................................. 30


4.1 Open Source Security Testing Methodology Manual (OSSTMM) ...............................31
4.2 Penetration Testing Execution Standard (PTES)..........................................................35
4.3 NIST Special Publication 800-115 ..............................................................................38
4.4 Open Web Application Security Project (OWASP) Testing Guide...............................41
4.5 Comparison of the methodologies ...............................................................................43
4.6 Conclusion .................................................................................................................49

5. Implementation of the methodologies ............................................................ 50


5.1 Data gathering ............................................................................................................50
5.2 Implementation of the methodologies .........................................................................52
5.3 Achievement of the goals ...........................................................................................61

6. Conclusion and discussion .............................................................................. 63

Bibliography ........................................................................................................... 66

List of Abbreviations
ASVS Application Security Verification Standard
BS British Standards
BSI British Standards Institute
BPA Business Process Analysis
CBS Centraal Bureau voor de Statistiek (Dutch Central
Agency for Statistics)
CHECK IT Health Check Service
CREST Council for Registered Ethical Security Testers
IBD Informatiebeveiligingsdienst (Information Security
Service for Dutch Municipalities)
IoT Internet of Things
ISO International Organisation for Standardization
ISSAF Information Systems Security Assessment Framework
NCSC National Cyber Security Centre
NHS National Health Service
NIST National Institute for Standards and Technology
NIST SP 800 – 115 NIST Special Publication 800 – 115
OpenSAMM Open Software Assurance Maturity Model
OpSec Operational Security
OSSTMM Open Source Security Testing Methodology Manual
OWASP Open Web Application Security Project
PCI DSS Payment Card Industry Data Security Standards
PTES Penetration Testing Execution Standard
RAV Risk Assessment Value
SANS Escal Institute of Advanced Technology (SANS Institute)
SDLC Software Development Life Cycle
STAR Security Test Audit Report
SUT System Under Test

Executive Summary

Penetration testing is one of the main approaches taken by companies for gaining
assurance in the security of their information technology. A penetration test involves simulating an attacker and testing negative security requirements (essentially testing whether something is impossible to compromise), which can make producing reliable results very difficult, if not impossible.
published in order to aid testers with conducting high quality penetration tests. These
methodologies, however, seem to be used to a very limited extent. It is unclear why this
is the case and how this affects the quality of penetration tests.

With this study the author has attempted to clarify whether the four main publicly
available standard methodologies, the Open Source Security Testing Methodology
Manual (OSSTMM), the Penetration Testing Execution Standard (PTES), the NIST
Special Publication 800-115 and the OWASP Testing Guide, improve the quality of
penetration tests and why these methodologies do or do not provide this improvement.

The author attempted this by examining how the current implementation of the
methodologies within the Dutch testing industry leads to the achievement of the goals
of the methodologies. This examination included introducing the topic of security
testing and introducing the approaches to standardisation within the British and Dutch
markets. It also included examining the methodologies themselves and conducting in-
depth interviews with seven testing providers operating within The Netherlands on their
use of the methodologies.

Based on this examination it can be concluded that current methodologies do not provide much improvement to the quality of penetration tests and that the current implementation does not lead to the achievement of the goals of the methodologies.

The main reason for this is that most methodologies are only implemented partially or are not implemented at all. This is the case because some methodologies are complex to use and impractical to implement, and some methodologies are not well known within the industry. Furthermore, some providers describe that each test requires a different approach and that they value creativity over a strictly defined methodology.

Another conclusion from this study is that the quality of penetration tests in the Dutch market varies widely depending on the provider but is in general described as low.

A potential solution, the creation of an industry association (like CREST in the British
market) tasked with maintaining a certain quality within the market, is regarded as
something that might help but also something that would be difficult to properly set up.

In the final chapter the author provides six recommendations on how, based on the focus
and results of this study, the quality of penetration tests within the Dutch industry could
be improved.

1. Introduction

1.1 Introduction

Our society is digitalising; there is an increasing dependence on information technology. These technologies include, among others, smartphones, corporate networks,
applications such as Facebook, and a large amount of newly developed IoT devices.
Recent research by the European Statistical Office (Eurostat) shows that in 2017 more
than 84% of all individuals in the EU, aged between 16 and 74 years, used the internet
[1]. A study conducted in 2018 by the Dutch Central Agency for Statistics (CBS) even
showed that 91.8% of all people in The Netherlands had used the internet in the last
three months [2] and 98% of all people had access to the internet [3]. In projections
made by Business Insider Intelligence it is estimated that there will be more than 65
billion IoT devices worldwide by 2025, up from about 10 billion devices in 2018 [4].
Due to this increasing use of and dependence on these technologies, the security risks
associated with threats to the confidentiality, integrity and availability of information
used by these technologies have never been higher [5, 6]. A breach of security can lead
to severe negative consequences such as data leaks [7], material damage [8], significant
financial damage [9, 10] and even injury or death.

This last risk is illustrated by an outage of the Dutch national emergency number, which caused emergency services to be unreachable and which in turn might have resulted in the death of a woman who suffered from cardiac arrest during the outage [11]. Another situation where information security risks might have led to serious injury or deaths was the failure of computer systems of the UK National Health Service (NHS),
specifically as a result of the WannaCry malware that infected NHS systems and led to
the cancellation of 20,000 appointments in 2017. The cancellation of these
appointments might have caused an estimated 900 deaths [12].

The industry trying to manage these relatively new risks, the information security industry, is growing [13]. One of the approaches taken to manage the risks is
testing or assessing the security of an information system. In this case software [14],
hardware [15] or human behaviour [16] is tested or assessed in order to gain information
about vulnerabilities or deficiencies present in a system. The results of such a test or
assessment can be used to determine which corrective actions should be taken.

There are many different kinds of tests or assessments that can be conducted [17, 18].
Tests or assessments which include testing human behaviour often use social
engineering in order to breach security [16]. These tests can include the use of
manipulation techniques to convince employees to perform certain unwanted actions,
such as clicking on a malicious link in a simulated phishing mail. Although such tests are very important, this thesis will not focus on this kind of security test. In addition, tests or assessments which focus purely on hardware testing are also outside the scope of this thesis. Instead, the focus of this thesis is on tests or assessments which mainly aim to test or assess the security of software, and more specifically on tests that simulate an attacker and have so-called negative test requirements (the exact meaning of this will be described later). In practice these tests are often described as penetration tests [14].

During the last 20 years several efforts have been made to improve the overall quality
of penetration tests and the maturity of the industry. One particular approach to
improving the quality and reliability of penetration tests is the introduction of various
public frameworks and methodologies, since a methodology is regarded as necessary
for any activity that requires repeatable and verifiable results [19, 20]. The
methodologies aim to increase the value of a test [21], by “providing a common
language and scope” [22] and “a consistent, repeatable and defined approach” [23].
These methodologies include the Open Source Security Testing Methodology Manual (OSSTMM), the Penetration Testing Execution Standard (PTES), the NIST Special
Publication 800-115 [24] and the Open Web Application Security Project (OWASP)
Testing Guide [25].

A market survey conducted in 2015 by Lancaster University in cooperation with the British Standards Institution (BSI) found that current penetration testing providers
operating in the British market create and use their own internal methodology [26],
instead of using the available public standard methodologies. These internal
methodologies are only “influenced by other community standards” [26]. To what
extent the public standards influence the internal methodologies and why the public
standards are not used in their entirety remains unclear. The situation within the Dutch market seems similar: some public methodologies are recommended [27], such as the OSSTMM, OWASP and NIST SP 800-115 [28]. In addition to these methodologies, the IBD (Information Security Service for Dutch Municipalities) recommends performing
a penetration test in line with the ISSAF or SANS [29]. But according to some security
practitioners these standards have not been widely adopted [30] or are limited in use [31].

Because of the fragmentation in the use of standard penetration testing methodologies, these methodologies might not provide the benefits, such as a consistent, high-quality approach and a common language, that they were created to provide. Furthermore, the lack of use of these standard methodologies could therefore have a negative effect on
the quality and reliability of penetration tests in general.

Considering the fact that societal dependence on digital technologies is increasing, and that the need for high-quality and reliable penetration tests is therefore as high as ever, the author decided to investigate this topic further.

With the research presented in this thesis the author attempts to clarify whether
four selected standard methodologies indeed improve the quality of penetration
tests and more specifically why these methodologies do or do not provide this
improvement.

The author attempts to provide this clarification by examining how the penetration
testing methodologies are implemented in business processes and whether this
implementation leads to the achievement of the goals of the methodologies. The
resulting information could help testing providers to improve their implementation of
the methodologies and it could help the authors of the methodologies to increase the
effectiveness of the methodologies.

The author has chosen to focus on the Dutch penetration testing industry to gain this
practical insight, because of the country's high dependence on information technology, characterised by its extremely high internet use and penetration rate [32] and its role as a worldwide internet exchange [33]. Furthermore, the author's previous professional experience allowed him to arrange in-depth interviews with several penetration testing
providers operating within this market.

The main research question of this study can be defined as follows:

How does the current implementation of the OSSTMM, NIST SP800-115, PTES
and the OWASP Testing Guide within the Dutch penetration testing industry lead
to the achievement of the goals of these methodologies?

This question will be answered by answering three sub-questions:

QA What is penetration testing and how can the typical penetration testing
business process be described?

QB What are the OSSTMM, NIST SP800-115, PTES and OWASP Testing
Guide and what are their goals?

QC How are the OSSTMM, NIST SP800-115, PTES and OWASP Testing
Guide implemented in penetration testing business processes and why are they
implemented this way?

1.2 Methodology

Due to the rather exploratory nature of the research question, this study will take a qualitative approach to answering the research questions. The data needed to answer the research
questions will be gathered in two different ways.

In order to describe what penetration testing is and to describe the typical penetration
testing business process (QA), relevant literature will be reviewed.

For describing the OSSTMM, NIST SP800-115, PTES and OWASP Testing Guide
(QB), literature, including the methodologies themselves, will be reviewed.

Finally, to describe whether the selected methodologies are implemented, how they are implemented and why they are implemented as they are (QC), literature will be reviewed and, in addition, semi-structured in-depth interviews with penetration
testing providers will be conducted.

The data gathered through the interviews will be analysed using the grounded theory
(GT) methodology. This methodology “involves the progressive identification and
integration of categories of meaning from data” [34]. This process, which involves
labelling particular text segments with codes, is very useful for systematically analysing interview transcriptions. The methodology provides “an explanatory
framework with which to understand the phenomenon under investigation” [34]. As a
result of using this methodology, conclusions based on the data gathered during the interviews can be drawn consistently and are traceable throughout the process. In the
final chapter the outcomes of the interviews will be used to determine how the
methodologies are implemented and whether this results in the achievement of the goals
of the methodologies.
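To illustrate the coding step described above, the sketch below shows, in Python and with entirely hypothetical interview fragments and code labels, how text segments can be labelled with codes and then grouped into categories of meaning.

from collections import defaultdict

# Hypothetical interview fragments, each paired with the code a researcher assigns to it.
coded_segments = [
    ("We mostly follow our own internal checklist.", "internal methodology"),
    ("OWASP is a reference, but we never follow it step by step.", "partial implementation"),
    ("Every engagement is different, so we improvise a lot.", "creativity over method"),
    ("Clients rarely ask which methodology we use.", "client awareness"),
]

# Group the coded segments into higher-level categories of meaning.
categories = defaultdict(list)
for segment, code in coded_segments:
    categories[code].append(segment)

for code, segments in categories.items():
    print(f"{code}: {len(segments)} segment(s)")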

2. Introduction to security testing
In order to situate this research within the broader body of security testing literature and
to understand security testing and its industry, this chapter will introduce the concepts
of software testing, security testing and the different types of security tests. It will also
introduce the most important features of the security testing industry, including the
current role of standardisation within this industry.

The reason to approach this topic with a focus on the industry is twofold. Firstly,
because the security testing industry is still relatively young and changing rapidly, and
as a consequence there is limited fundamental research available which accurately
describes security testing and is widely accepted [26]. Secondly, because the author agrees with the belief that “any attempt to characterize or standardize an industry
must be grounded in the realities of industry practices and client experiences” [26].

In order to understand what role methodologies play within a penetration testing business process, it is important to have an understanding of what exactly a penetration
test is and how a penetration test is different from other types of security tests.

One could dedicate an independent research project solely to creating a taxonomy of all approaches to security testing, especially since, as mentioned earlier, there is much uncertainty about how the wide variety of approaches should be defined. A preliminary literature review made clear that exactly this was attempted in 2015 by a group of academics from several European universities. Felderer et al [14] provide a very helpful introduction to the topics of software testing, security
engineering and security testing. They also provide a model for classifying the different
types of tests. For the purpose of this thesis their introduction of these topics will be
summarised and supplemented with some extra sources and information that is relevant
for this thesis. In addition to this their model of classification will be presented.

2.1 Software testing

Felderer et al start by introducing software testing and its link with security testing [14].
They provide several definitions of software testing but their final definition could be
described as follows: the static or dynamic verification that a program behaves as
expected on a set of test cases, selected from the execution domain. In order to correctly
understand this definition some terms need to be explained. Firstly, the difference between static and dynamic testing: static testing involves testing a software program without executing it, whereas dynamic testing involves executing the software program and verifying its behaviour. An example of static
testing is analysing the source code of a program for undesired instructions. Since
software is often too complex to perform all possible tests (test all possible behaviour),
a selection of test cases is made from the execution domain, which is a term that
describes the collection of all possible test cases.

The group of researchers then continues to explain the process of testing. They describe
that “after running a test case, the observed and intended behaviours of a SUT (System
Under Test) are compared with each other, which then results in a verdict” [14]. If the
tested software behaves as intended the verdict is a pass, if the behaviour is not as
intended the verdict is a fail. Failures are caused by a fault, which Felderer et al describe
as “a static defect in the software, usually caused by human error in the specification,
design or coding process” [14]. In some cases it is unknown whether the behaviour is
intended, so the verdict will be inconclusive [14]. Furthermore, the researchers present
several models that can be utilised to classify the different types of tests. One of these
models can be seen in figure 1. They present three different dimensions useful for
classifying a test, namely its objective, scope and accessibility level.
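As a minimal illustration of this vocabulary (test case, observed versus intended behaviour, pass or fail verdict), the sketch below uses Python's unittest module; the price function is a hypothetical SUT invented purely for this example.

import unittest

def price_with_vat(net: float) -> float:
    """Hypothetical SUT: adds 21% VAT to a net price."""
    return round(net * 1.21, 2)

class PriceTests(unittest.TestCase):
    def test_vat_is_added(self):
        observed = price_with_vat(100.0)      # dynamic testing: the SUT is executed
        intended = 121.0                      # intended behaviour taken from the specification
        self.assertEqual(observed, intended)  # equal: pass verdict; unequal: fail verdict

if __name__ == "__main__":
    unittest.main()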

Figure 1: Test classification model by Felderer et al [14].

On the vertical axis two main objectives for conducting a test are described. Functional
testing focusses on assessing the actual functional behaviour of a test object, whereas
non-functional testing focusses on testing more abstract quality properties, such as
security, safety, reliability or performance [14]. On the horizontal axis the amount of
access to the software code is described. The different levels of access to code can be
defined as white-box, black-box or grey-box. During white-box testing, “test cases are
derived based on information about how the software has been designed or coded”
[14]. And during black-box testing “test cases rely only on the input/output behaviour
of the test object” [14].

Although grey-box testing is not specifically presented in the model, this approach is
frequently mentioned and used within the security testing industry [35, 36]. In this case
the tester has some information (such as login credentials) but not access to the
complete source code. The different points within the graph represent the different types
of scope of a test. A test can focus on components, in which case very small components
of a software product [14], such as classes in an object-oriented implementation, are
tested. Another focus can be on integration, which “combines components with each
other and tests those as a subsystem” [14]. Finally, a test can focus on testing the complete system. Some types of system testing are acceptance testing, “where it is checked whether a solution works for the user of a system” [14], and regression testing,
which involves retesting some parts of a system to verify that any modifications have
not caused the SUT to deviate from the specified requirements [14].

After presenting and describing the model Felderer et al describe the main activities
that are part of a testing process. Since the third chapter of this thesis will focus extensively on analysing the penetration testing business process, for now only a short description of the main activities within a testing process will be introduced.

The testing process comprises five core activities [14], namely planning, design,
implementation, execution and evaluation. During the planning phase a test plan,
including the test objectives, scope, resources and schedule, is created [14]. During the
design phase the general objectives are transformed into “concrete test conditions and
abstract test cases” [14]. The test implementation phase includes making the “abstract
test cases executable” [14], which can include the creation of test scripts in order to automate a test [14]. During the execution phase, the test cases are “executed and all relevant details are logged and monitored” [14]. Finally, during the test evaluation phase the exit criteria are evaluated and the test results are reported [14]. After a
description of the main activities of a testing process the group of researchers introduces
the basics of security engineering and the role of a secure software development
lifecycle. Both topics are relevant for this thesis as well, since vulnerabilities and
corresponding risks are introduced when software is being developed. It is important
that security testing is part of the development process.

2.2 Security engineering and the development lifecycle

Security engineering is about “building systems to remain dependable in the face of malice, error, or mischance” [37]. The dependability of a system can be characterised
on the basis of six security properties [14], namely: confidentiality, integrity,
availability, authentication, access control (or authorisation) and non-repudiation. The
ISO27000:2017 standard defines these properties as follows: [38]

- Confidentiality: “property that information is not made available or disclosed to unauthorised individuals, entities, or processes.”

- Integrity: “property of accuracy and completeness.”

- Availability: “property of being accessible and usable upon demand by an authorised entity.”

- Authentication: “provision of assurance that a claimed characteristic of an entity is correct.”

- Access control: “means to ensure that access to assets is authorised and restricted based on business and security requirements.”

- Non-repudiation: “ability to prove the occurrence of a claimed event or action and its originating entities.”

A security test attempts to validate a program's security requirements, based on one or more of the previously mentioned properties [14]. These security requirements can
be formulated as positive requirements or as negative requirements [14]. Positive
requirements describe how a security mechanism should function, so a positive test is
“verifying that a feature properly performs a specific task” [39]. An example of a
positive requirement related to the property access control could be “User accounts are
disabled after three unsuccessful login attempts” [14]. On the other hand, negative
requirements describe what an application should not do [23]. An example of this could
be “the application should not be compromised or misused for unauthorized financial
transactions by a malicious user” [14]. When either a positive or a negative requirement is not met, this results in a fail verdict. As mentioned earlier, this is the result
of a fault in the software. When a “fault is related to security properties it is called a
vulnerability” [14]. Vulnerabilities are always associated with the mechanism that
protects an asset [14]. A vulnerability means “that (1) the responsible security
mechanism is completely missing, or (2) the security mechanism is in place but is
implemented in a faulty way” [14].
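The positive requirement quoted above, that user accounts are disabled after three unsuccessful login attempts, can be verified with a classical functional test case. The sketch below is only an illustration; the Account class is a hypothetical implementation invented for this example, and the corresponding negative requirement (that the account cannot be misused) has no comparably finite set of test cases.

import unittest

class Account:
    """Hypothetical security mechanism: locks the account after three failed logins."""

    def __init__(self, password: str):
        self._password = password
        self._failed_attempts = 0
        self.disabled = False

    def login(self, password: str) -> bool:
        if self.disabled:
            return False
        if password == self._password:
            self._failed_attempts = 0
            return True
        self._failed_attempts += 1
        if self._failed_attempts >= 3:
            self.disabled = True
        return False

class LockoutTest(unittest.TestCase):
    def test_account_disabled_after_three_failures(self):
        account = Account("correct horse")
        for _ in range(3):
            account.login("wrong guess")
        # Positive requirement: after three failures the account must be disabled.
        self.assertTrue(account.disabled)
        # Even the correct password must now be rejected.
        self.assertFalse(account.login("correct horse"))

if __name__ == "__main__":
    unittest.main()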

As we have seen before there is a wide variety of different (security) tests that can be
conducted, among other things depending on the specific objective, the scope and the
level of accessibility to the source code. It is therefore no surprise that the different
types of tests can be used to achieve different kinds of goals. Unfortunately, only a
limited variety of tests is used to achieve these specific goals. One of the main problems
within the software and security industry is that testing is often started late in the
software development process, shortly before the developed application is deployed
[14]. This is ineffective since it results in a limited choice of which tests to conduct and significant time pressure to conduct tests and repair the vulnerabilities that are found. It also does not follow widely accepted principles such as Security by Design [14], which can be defined as “an approach to information security which is at once holistic, creative, anticipatory, interdisciplinary, robust, accountable and embedded into systems” [40]. More practically, following this principle means that security is considered during all phases of a system's lifecycle, with emphasis on the early design phases.

In order to improve the software development process, a software development lifecycle can be constructed [14, 41, 42, 43]. This is a process that defines the structure of the phases involved in the development of an application [41]. It can range from an initial feasibility study, through deployment, to maintenance [41].

The software development lifecycle can be divided into five separate phases, namely:
analysis, design, development, deployment and maintenance [23]. To implement
security within the development process a secure software development lifecycle can
be constructed. “Such a model takes security aspects into account in each phase of
software development” [14]. Some major models that are used today are the Security Development Lifecycle (SDL) [42] from Microsoft and the Open Software Assurance
Maturity Model (Open-SAMM) [43] from OWASP.

2.3 Security testing

In the previous paragraphs the different types of software tests and the secure
development lifecycle have been introduced. Security tests can be classified by the
same types as software tests and security tests ideally play an important role in all
phases of the secure development lifecycle [14, 42, 43]. In this chapter security testing
will be introduced, the different types of security tests will be described and ultimately
the different types will be taxonomized based on their type and their place within the secure
development lifecycle. This will result in a model that provides a useful overview and
which will aid in understanding the next chapters of this thesis.

Security testing is defined in multiple ways; Felderer et al define security testing as follows: “Security testing is testing of security requirements related to security
properties like confidentiality, integrity, availability, authentication, authorization, and
non-repudiation. Security testing identifies whether the specified or intended security
properties are, for a given set of assets of interests, correctly implemented” [14]. This
definition corresponds with other definitions given by Tian-yang et al [18] and Potter
& McGraw [44]. In order to test if a program fulfils the defined security requirements
two main approaches can be differentiated [14]. The most straightforward way to test
if a program fulfils the requirements is to perform conformance or requirements-based
testing [14]. What this kind of test looks like depends on whether the security
requirements are positive and functional or negative and non-functional [14]. For
“positive security requirements classical testing techniques can be applied” [14] and
for “negative security requirements (a combination of) additional measures like risk
analysis, penetration testing, or vulnerability knowledgebases are essential” [14]. This
distinction put forth by Felderer et al is supported by Tian-yang et al, who distinguish between “security functional testing” and “security vulnerability testing”
[18]. Potter & McGraw distinguish between “testing security mechanisms to ensure
that their functionality is properly implemented and performing risk-based security
testing motivated by understanding and simulating the attacker’s approach” [44].

It should be noted that tests which are supposed to validate if a program has one or more
security properties are very difficult to carry out. Firstly, because it is difficult or even impossible to determine all functional requirements and test cases that relate to a property, since the properties are abstract in nature and “describe all executions of a system” [14]. Secondly, because the opposite (negative) approach, testing for violations of the properties, also has limitations: it can never confirm the absence of
faults [45]. These limitations have “resulted in the development of specific testing
techniques like penetration testing” [14], the testing technique that is the main topic of
this thesis.

Felderer et al describe a penetration test as a test “that simulates attacks to exploit vulnerabilities” [14]. Since penetration testing is the main focus of this thesis, it has to
be defined carefully.

Many different definitions of penetration testing exist. The UK National Cyber Security
Centre (NCSC) defines penetration testing as “a method for gaining assurance in the
security of an IT system by attempting to breach some or all of that system's security,
using the same tools and techniques as an adversary might” [46]. This definition is both broader and more specific than the description given by Felderer et al. The definition of CREST (Council of Registered Ethical
Security Testers) makes the distinction between manual and automated techniques,
namely “Penetration testing involves the use of a variety of manual and automated
techniques to simulate an attack on an organisation’s information security
arrangements.” [47]. The definition given by the NIST in the NIST Special Publication
800-115 is quite long, and it emphasizes the sophistication of a penetration test,
specifically vulnerability chaining. It defines penetration testing as “Security testing in
which evaluators mimic real-world attacks in an attempt to identify ways to circumvent
the security features of an application, system, or network. Penetration testing often
involves issuing real attacks on real systems and data, using the same tools and
techniques used by actual attackers. Most penetration tests involve looking for
combinations of vulnerabilities on a single system or multiple systems that can be used
to gain more access than could be achieved through a single vulnerability” [24]. The
definition of the OWASP highlights the black-box and negative approach of a
penetration test: “Penetration testing is essentially the “art” of testing a running application remotely to find security vulnerabilities, without knowing the inner
workings of the application itself” [23]. For the purpose of this thesis the definition of
the UK NCSC will be used, since this definition includes the black-box approach, the sophistication of an attack and the use of manual and automated techniques, as these elements can all be summarised as “using the same tools and techniques as an adversary
might” [46].

During a penetration test “testers build a mental model of security properties, security
mechanisms, and possible attacks against the systems and its environment” [14]. The
use of models results in a model-based security testing approach. In this kind of
approach security test models “provide guidance for the systematic specification and
documentation of security test objectives and security test cases, as well as for their
automated generation and evaluation” [14]. Model-based security testing approaches
can be applied in several phases of a security software development lifecycle [14], not
just as part of a penetration test. Felderer et al describe that black-box vulnerability scanners are often used as part of a penetration test [14]. During vulnerability scanning “the
scanner queries the application’s interfaces with a set of predefined attack payloads
and analyses the application’s responses for indicators if the attack was successful”
[14].
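As a very small sketch of this scanning idea, the snippet below sends a few predefined attack payloads to a single parameter and looks for indicator strings in the responses. The target URL, the payloads and the indicators are hypothetical; a real scanner covers far more interfaces, payloads and response analysis.

import urllib.parse
import urllib.request

TARGET = "http://testapp.example/search?q="             # hypothetical, in-scope test application
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>"]  # predefined attack payloads
INDICATORS = ["SQL syntax", "<script>alert(1)</script>"] # response strings suggesting the attack worked

def scan() -> None:
    for payload in PAYLOADS:
        url = TARGET + urllib.parse.quote(payload)
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                body = response.read().decode(errors="ignore")
        except OSError as error:
            print(f"{payload!r}: request failed ({error})")
            continue
        hits = [indicator for indicator in INDICATORS if indicator in body]
        print(f"{payload!r}: {'possible finding ' + str(hits) if hits else 'no indicator found'}")

if __name__ == "__main__":
    scan()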

Because of the use of vulnerability scanners within a penetration testing process, some
confusion within the industry exists about a vulnerability scan as a service and a
penetration test as a service [48]. From a practical and commercial perspective there is
a significant difference between a penetration test and a vulnerability scan. For
understanding what a penetration test is, it is useful to describe these differences. The
Payment Card Industry Data Security Standards (PCI DSS) describes the major
differences in depth [48], which can be summarised as follows: a vulnerability scan has the purpose of identifying, ranking and reporting vulnerabilities that, if exploited, may result in a compromise of a system, while a penetration test has the purpose of identifying ways to exploit a particular vulnerability. Usually a vulnerability scan takes a relatively short amount of time, such as several minutes per scanned host, while a penetration test may last
days or weeks, depending on the scope of the test [48].

Penetration tests and vulnerability scans are not the only techniques that are used to test
non-functional negative requirements [14]. Other techniques that Felderer et al mention
are (dynamic) taint analysis and fuzzing [14]. While both are outside the scope of this
thesis, fuzzing can be used within a penetration test as well [22]. It can be described as
a “form of black-box random testing, which randomly mutates well-formed program
inputs and then tests the program with those modified inputs, hoping to trigger a bug”
[49].
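A minimal sketch of this idea: start from a well-formed input, randomly mutate one character at a time, and treat any unhandled exception in the program under test as a triggered bug. The parse_record function is a deliberately fragile, hypothetical example.

import random

def parse_record(data: str) -> int:
    """Hypothetical, deliberately fragile parser that expects input of the form 'name:age'."""
    name, age = data.split(":")
    return int(age)

def mutate(seed: str) -> str:
    """Randomly replace one character of a well-formed input."""
    characters = list(seed)
    position = random.randrange(len(characters))
    characters[position] = chr(random.randrange(32, 127))
    return "".join(characters)

def fuzz(seed: str, iterations: int = 100) -> None:
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception as error:  # any unhandled exception counts as a triggered bug
            print(f"bug triggered by {candidate!r}: {error!r}")

if __name__ == "__main__":
    fuzz("alice:42")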

After describing the different types of security tests Felderer et al present a model in
which the different types of security tests are embedded in the five phases of the secure
software development lifecycle [14]. They made this classification based on the
OpenSAMM (Open Software Assurance Maturity Model) [43], the OWASP Testing
Guide [23], the OWASP Code Review Guide [50] and in particular the different
practices described in the Security Development Lifecycle (SDL) from Microsoft [14].
This model provides an overview of the different types of tests, which artefacts they
test and in which phase of the development lifecycle they are conducted.

Figure 2: Security Testing Techniques in the Secure Software Development Lifecycle by Felderer et al. [14]

The first phases of the secure software development lifecycle include the analysis and
design of requirements [14], including security requirements for a program. During this
phase it is important that the “requirements are as unambiguous as possible” [23].
Model-based security tests are “grounded on requirements and design models created
during the analysis and design phase” [14]. During the development phase source and
byte code is tested by manual reviews or static analysis [14]. After development and
during deployment penetration and other dynamic tests are conducted [14]. And finally,
as part of maintaining a program, security regression testing is performed [14], which
is a type of test that takes place after software has changed, in order to confirm that the software still functions securely.

2.4 Standardisation within the British and Dutch security testing
industries

In order to not only have a theoretical understanding of what security testing should be,
but also an understanding of what security testing looks like in practice, this chapter
will describe the security testing industry and examine the approach to standardisation
within this industry. It will do this by highlighting the major considerations that must be made when standardising security testing and by comparing the Dutch and British approaches to making these considerations.

Standardising penetration testing by using a systematic penetration testing methodology is very difficult. As described earlier, a penetration test has negative security requirements; ideally it would prove that a particular vulnerability does not exist or that an application cannot be compromised. This, however, is impossible from a true
scientific point of view, since science depends on falsifiable hypotheses [45]. It is
therefore no surprise that within the industry penetration testing is sometimes seen as
more of an art than (a) science [45]. A penetration test requires thinking and behaving
like an attacker, an entity which is not bound by a methodology or other rules. In
addition to this the hacking ideology and subculture promotes five principles, including
the mistrust of authority and the pursuit of decentralization [51, 52]. These principles
seem to be at odds with the creation of a centralised and standardised penetration testing
methodology. Furthermore, some commercial reasons against standardising a
methodology exist as well, the main reason being that a customised internal
methodology can give a penetration testing provider a competitive advantage over other
providers. These factors all result in an aversion to standardisation and in particular to
a prescribed methodology limiting the creativity and actions of a penetration tester. It
is therefore no surprise that the use of current publicly available penetration testing
methodologies seems to be limited and fragmented.

The problem with a non-structured and non-standardised approach is the limited assurance a test can provide. These types of tests might not provide consistent results,
their quality depends heavily on the expertise of the tester and it can be unclear why
certain scenarios have been tested. In addition to this, Duffy notes that “when standard exploits do not work, testers can have tunnel vision; sticking to a methodology will
prevent that” [53]. It is clear that the right balance between a methodological approach
and a creative or ‘out-of-the-box’ approach has to be found. Wilhelm describes: “what
we need in our industry is a repeatable process that allows for verifiable findings, but
which also allows for a high degree of flexibility on the part of the pentest analyst to
perform ‘outside-the-box’ attacks” [20].

Another problem with the lack of standardisation within the industry in general, not just
the methodology, is that it can make it hard for clients to find the right provider. A client
interviewed by Knowles et al described: “it’s very difficult as a person wishing to
purchase penetration testing and IT Health Check services… to assess the marketplace
and find out whether or not your potential vendors will satisfy what you require, other
than them being able to say that they’re CREST or CHECK registered” [26]. Knowles
et al write that there was “a predominant sense of confusion and frustration amongst
stakeholders about the ambiguity in what constitutes a penetration testing service” [26].
The situation within the Dutch market is less clear, mainly since no study has focussed
on interviewing clients of penetration tests. Some interviewed providers within the
Dutch market do mention the lack of knowledge on the side of the client. What also
became clear when interviewing providers in both the British and the Dutch market,
was that the current penetration tests are of varying quality. Most providers within the
Dutch market, when asked to give their opinion about the overall quality of penetration
tests within the Dutch market, answered that the quality was average to low and that
there is room for improvement. In addition to this, clients interviewed by Knowles et al
“highlighted a significant perceived variability in the quality of reports from providers”
[26].

One big difference between the British and the Dutch market in tackling these issues
and improving the overall quality of penetration tests is the creation of an industry and
certification body. An organisation that plays a central role within the British security testing market is the Council of Registered Ethical Security Testers (CREST). This not-for-profit organisation was established in 2006 in response to “the need for regulated and
professional security to serve the global information security marketplace”. Its mission
is to “represent the information security testing industry and offer a provable level of
assurance as to the competency of organisations and individuals within those organisations”. CREST offers certifications for penetration testing providers as well as
for individual testers [54]. Testing providers can become a registered CREST member
by meeting a set of requirements, which include requirements related to the quality and
methodology of a security test [54].

In contrast to the British market the Dutch market does not have an organisation like
CREST which maintains a certain level of quality of services and represents the
industry. The absence of this kind of organisation results in a lack of concrete market
related information, such as the number of (registered) testing providers and the quality
or maturity of the different providers.

While within the British market several standard qualifications for testers and
standards for companies have been introduced, such as IT Health Check Service
(CHECK) by CESG (Communications-Electronic Security Group), four tiers of
qualifications by CREST, Tigerscheme by the University of South Wales and Cyber
Scheme in cooperation with the UK NCSC, within the Dutch market no standards for
providers and only a limited number of standard qualifications for testers have been
introduced. These standard qualifications include Certified Information Systems
Security Professional (CISSP) by ISC2, Certified Penetration Testing Specialist (CPTS,
CPTE, CPTM) by Mile2, Offensive Security Certified Professional (OSCP) by
Offensive Security, Certified Ethical Hacker (CEH) and Licensed Penetration Tester
(LPT) by EC-Council [28, 29]. Some of these qualifications are also used within the
UK market, but the earlier mentioned market survey by Knowles, Baron & McGarr
(BSI and Lancaster University) found that within the UK market there is a “greater
emphasis on UK qualifications for recruitment” [26].

It is unclear why the Dutch and British industries have a different approach to
standardisation and whether one approach is significantly better for providing high
quality penetration tests than the other. What became clear during the survey of BSI is
that British providers and practitioners are satisfied with the current number of
standards for individual testers and are opposed to any new standards. They describe
that “existing consortia have done an exemplary job of raising and assessing the
competence of penetration testers and that the British industry is ahead of the rest of
the world in this regard” [26]. When introducing the topic of an industry and certification body to Dutch providers, reactions are mixed. Some providers think the
introduction of such a body can stimulate the overall maturity and quality of penetration
tests within the Dutch market and limit the amount of low-quality providers. Other
providers, however, were sceptical since it could be difficult to set up such a body,
mainly because of possible misuse by competitors and because certification could
distract providers from delivering actual high-quality penetration tests.

Since we now have a high-level understanding of both the theoretical and industrial dimensions of security and penetration testing, we will focus on more specific aspects of
penetration testing, such as the typical penetration testing business process, in the next
chapter.

3. Penetration testing business process

In order to be able to understand, describe and examine how and why penetration testing
methodologies are implemented within penetration testing business processes, a
conceptual framework of the typical penetration testing business process would be very
useful. This chapter will present such a framework by defining the main elements of a
business process and by summarising key elements described in the literature focussed
on the process of conducting a penetration test.

3.1 Describing a business process

A business process can be defined in many ways [55, 56]. According to the well-known
business academic T. H. Davenport a business process is “simply a structured,
measured set of activities designed to produce a specified output for a particular
customer or market” [55]. He adds “it implies a strong emphasis on how work is done
within an organization, in contrast to a product focus’s emphasis on what” [55].
Although a process and a product view might occasionally overlap, for the purpose of
this thesis examining the typical penetration testing business process with a process
focus will enhance our understanding of how methodologies are implemented. In order
to systematically describe a process, several business process analysis (BPA) and
business process modelling techniques can be used [57]. One of the core, and relatively
simple concepts of business process modeling and design is, according to Periera et al,
that “a business process can be represented through the identification of the concepts
that are associated to the following six dimensions: who, what, where, when, why and how,
i.e. the classic 5W1H dimensions” [58]. They provide a more specific definition of a
business process, namely “a set of connected activities (how) which consumes and
produces tangible or intangible artefacts (what), is performed by people or systems
(who), contributes to achieving goals (why), takes place in a specific location (where)
and during a specific period of time (when)” [58]. This definition is presented visually
in figure 3.
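As a simple illustration of representing a process along these six dimensions, the sketch below captures a penetration test engagement as a 5W1H record; the field values are hypothetical and chosen only to show the idea.

from dataclasses import dataclass

@dataclass
class BusinessProcess:
    """A business process described along the six 5W1H dimensions."""
    how: list     # the connected activities
    what: list    # artefacts consumed and produced
    who: list     # people or systems performing the activities
    why: str      # the goal the process contributes to
    where: str    # the location where it takes place
    when: str     # the period of time in which it takes place

# Hypothetical penetration test engagement expressed along these dimensions.
pentest = BusinessProcess(
    how=["scoping", "reconnaissance", "vulnerability analysis", "exploitation", "reporting"],
    what=["scope document", "raw tool output", "final report"],
    who=["lead tester", "junior tester", "client contact"],
    why="gain assurance in the security of the client's web application",
    where="remote, from the provider's office",
    when="two weeks in the third quarter",
)

print(pentest.why)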

Figure 3: Core concepts of a business process by Periera et al [58].

The six core concepts and their related questions presented by Periera et al have been
used to design survey and interview questions and to examine the answers given by the
interviewed providers.

3.2 The penetration testing business process

In addition to the earlier mentioned public methodologies, many books and articles [20,
47, 59, 60, 53, 61, 62] have been published that describe how a penetration test could
or should be conducted. Most of the authors divide the testing process into a number of
distinguishable consecutive phases. The rest of this chapter will summarise the relevant
literature and attempt to provide an overview and description of the typical
distinguishable phases. This final overview will provide a conceptual framework by
which the data from the survey and the interviews can be properly interpreted and
analysed.

Most books that have been written on how to conduct a penetration test describe a process that consists of different consecutive phases. A simple way to describe the
different phases of a penetration testing business process is provided by Engebretson
[62]. He presents four steps, namely ‘reconnaissance, scanning, exploitation and post
exploitation (or maintaining access)’ [62]. These steps are mostly in line with the steps
provided by Whitaker & Newman, namely ‘reconnaissance, scanning, obtaining access,
maintaining access and erasing evidence’ [63].

Engebretson describes that the first phase “deals with information gathering about the
target” [62] and that the goal is to gather as much information as possible because “the
more time you spend collecting information on your target, the more likely you are to
be successful in the later phases” [62]. Both Engebretson and Whitaker & Newman
make the distinction between passive and active information gathering. Active
information gathering requires interaction with the target while passive information
gathering could include using Open-Source Intelligence (OSINT) [62, 63]. The
scanning phase includes “scanning open ports using tools such as Nmap” [63] in order
to “determine services that are running on target hosts” [63]. Engebretson adds another
activity in addition to port scanning, namely vulnerability scanning. Engebretson
describes this phase as “the process of locating and identifying specific weaknesses in
the software and services of our targets” [62]. In the third phase the tester “tries to
exploit those weaknesses” [63] and to gain “access (complete control) over the target
machine” [62]. The fourth phase, ‘maintaining access’, includes creating “a more
permanent backdoor to the system” [62]. The final phase, ‘erasing evidence’, according
to Whitaker & Newman, is important since “ethical hackers want to see if they are able
to erase log files that might record their access on the target network” [63]. Although
not specifically included in a phase, Engebretson describes that the “final and arguably
the most important activity of a penetration test is the report” [62].
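The scanning phase described above, determining open ports and the services running on them (typically with a dedicated tool such as Nmap), can be illustrated with a very small TCP connect scan. The host and port list below are placeholders; such a scan should only ever be run against systems one is authorised to test.

import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3306: "mysql"}

def tcp_connect_scan(host: str) -> None:
    """A successful TCP connection marks a port as open; anything else as closed or filtered."""
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            result = sock.connect_ex((host, port))
        state = "open" if result == 0 else "closed/filtered"
        print(f"{host}:{port} ({service}) {state}")

if __name__ == "__main__":
    tcp_connect_scan("127.0.0.1")  # placeholder target; replace with an in-scope host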

In addition to these phases Whitaker & Newman recommend using a methodology for
devising a “methodical plan on how you are to perform your test” [63]; they mention
the OSSTMM specifically [63]. Other authors also base their description of a
penetration testing business process on the OSSTMM [21] or the PTES [64].

CREST also provides a very detailed guide on how to perform penetration tests and
how to design an effective penetration testing programme. They describe a penetration
testing programme as a three-stage approach [47]. The first stage involves preparing for
penetration testing, and is called “Preparation” [47]. This stage includes activities to
establish a penetration testing governance structure, define the purpose of penetration
tests and produce requirements specifications for selecting a suitable supplier. The
second stage includes the execution of penetration tests enterprise-wide, and is called
“Testing” [47]. This stage includes different phases, largely corresponding to the phases
described by Engebretson and Whitaker & Newman. CREST also describes some methodologies that can be used, namely the OSSTMM, OWASP, NIST SP800-115,
ISSAF and PTES. The third and final stage is described as carrying out appropriate
follow up activities, and is called “Follow up” [47]. This stage includes remediating
weaknesses, addressing the root causes of weaknesses, initiating an improvement
programme and evaluating the effectiveness of the penetration test. The fact that the guide
describes not only the actual ‘technical’ test, but also the preparation and follow up
activities required, makes this guide very valuable in general but also for the purpose
of this thesis. As described in more detail in chapter four, the earlier mentioned
standards also often have a broader focus than just the actual ‘technical’ test.

Knowles, Baron & McGarr present a schematic overview of the phases of a typical
penetration testing (business) process [26] based on their market survey. The model is
divided into three broad phases [26], namely ‘pre-engagement’, ‘practical assessment’
and ‘post-engagement’. The ‘pre-engagement’ phase includes initial interaction with
clients, determining the scope and signing of a project. The ‘practical assessment’ phase
includes information gathering, performing a threat analysis, performing vulnerability
analysis and eventually exploiting vulnerabilities. The final, ‘post-engagement’ phase
includes writing and delivering a report and interaction with clients. In figure 4 a visual representation of the penetration testing phases and activities can be found.

Figure 4: Penetration testing phases and activities by Knowles et al [26].

In order to systematically analyse the different methodologies and to understand how the methodologies are implemented within actual business processes, the three broad
testing phases, the pre-engagement phase, the practical assessment phase and the post-
engagement phase, presented by Knowles, Baron & McGarr will be used.

4. Penetration testing methodologies

In this chapter the selected methodologies for penetration testing will be introduced,
examined and compared. Firstly, their background and specific aims will be described
and secondly an overview of phases and special characteristics will be presented. One
of the earlier mentioned and referenced methodologies is the Information Systems
Security Assessment Framework (ISSAF); this methodology, however, is outdated [20, 65, 66]. In addition to this, Knowles, Baron & McGarr do
not mention the use of this methodology among their interviewed stakeholders within
the British market [26], which could suggest this methodology is not used that much
anymore. Because of these two reasons the ISSAF is not one of the standard
methodologies that will be examined in this thesis.

Knowles, Baron & McGarr mention the use of four different publicly available
penetration testing methodologies within the British market, namely the Open Source
Security Testing Methodology Manual (OSSTMM), Open Web Application Security
Project (OWASP) Testing Guide, Penetration Testing Execution Standard (PTES) and
National Institute of Standards and Technology (NIST) Special Publication 800-115
[26]. The existing guidelines from the Dutch government mention most of these
standards, with the exception of the PTES. Because the limited documentation available
makes it unclear whether the PTES is in fact unused within the Dutch market, the author
chose to include this methodology as well. As a result, the methodologies that will be
examined in this chapter are the OSSTMM, the OWASP Testing Guide, the PTES, and
the NIST Special Publication 800-115.

4.1 Open Source Security Testing Methodology Manual (OSSTMM)

One publicly available penetration testing methodology that is used within the Dutch
and British markets is the OSSTMM. [67, 68, 28] The OSSTMM was created by Pete
Herzog and Marta Barcelo and developed by the Spain and United States based Institute
for Security and Open Methodologies (ISECOM) [21]. The first version of the
OSSTMM was created in 2000 and its current version is version 3, which was published
in 2010 [21]. As of April 2019, version 4 of the manual is being developed but not yet
officially released [69]. The manual consists of 213 pages and is divided into 14 main
chapters.

Barcelo & Herzog start by stating the need for verifiable results of a security test, by
saying “when a security test is an art then the result is unverifiable and that undermines
the value of a test” [21]. They propose a solution, namely: “one way to assure a security
test has value is to know the test has been properly conducted. For that you need to use
a formal methodology. The OSSTMM aims to be it” [21]. They describe a thorough
security test as an “OSSTMM audit” [21]. They describe an OSSTMM audit as “an
accurate measurement of security at an operational level that is void of assumptions
and anecdotal evidence” and that it is “designed to be consistent and repeatable” [21].
Barcelo & Herzog describe the primary aim of the OSSTMM as “to provide a scientific
methodology for the accurate characterization of operational security (OpSec) through
examination and correlation of test results in a consistent and reliable way” [21]. They
describe that the manual can be used for almost any audit type, including ‘penetration
tests, ethical hacking, security assessments, vulnerability assessments, red-teaming and
blue-teaming’ [21] and that it is designed for “factual security verification and
presentation of metrics on a professional level” [21]. In addition to this primary purpose
they define a secondary purpose, which is to “provide guidelines which, when followed
correctly, will allow the analyst to perform a certified OSSTMM audit. These guidelines
exist to assure the following:
1. The test was conducted thoroughly
2. The test included all necessary channels
3. The posture for the test complied with the law
4. The results are measurable in a quantifiable way
5. The results are consistent and repeatable
6. The results contain only facts as derived from the tests themselves

An indirect benefit of this manual is that it can act as a central reference in all security
tests regardless of the size of the organization, technology, or protection” [21].

To have a basic understanding of the OSSTMM, many terms need to be defined. In
chapter 1 ‘What You Need to Know’, the idea behind the OSSTMM is explained and
the important terms are defined [21]. The OSSTMM is about “operational security
(OpSec). It is about measuring how well security works” [21]. Barcelo & Herzog
continue to describe OpSec as “a combination of separation and controls” [21]. They
explain: “for a threat to be effective, it must interact either directly or indirectly with
the asset. To separate the threat from the asset is to avoid a possible interaction” [21].
They define security as follows: “under the context of operational security, we call
security the separation of an asset and a threat” [21]. The degree of separation is
characterised as ‘porosity’ [21]. The higher the porosity, the lower the separation
between an asset and a threat.

One of the main objectives of an OSSTMM audit is to describe the ‘Actual Security’
of one (or more) asset, by calculating its Risk Assessment Value (RAV) [21]. A RAV
of 100 represents the perfect balance between possible interactions by, for example, an
attacker and the amount of implemented controls [21]. The authors call this “Perfect
Security” [21]. When the RAV is below 100, the implemented controls are insufficient.
When the RAV is above 100 it means too many and unnecessary controls are
implemented.

The RAV of an asset is calculated by using several variables [21], all independent of a
specific threat agent. The main variables include porosity (as a result of Operational
Security), implemented controls and existing limitations. Porosity is calculated by
adding all possibilities of access to an asset. The controls variable is calculated based
on the implemented controls that protect an asset. And the limitations variable is
calculated based on vulnerabilities or weaknesses existing in these controls. To
comprehend at least the basics of this approach, it could, purely for the purpose of
comprehension, be summarised as follows:

RAV = Porosity − (Controls − Limitations)

RAV = 0 means Perfect Security
RAV > 0 means insufficient Controls
RAV < 0 means too many Controls

Note that this simplified form expresses the balance point as zero, whereas the OSSTMM
scale described above places the same balance point at a RAV of 100.

In reality the formula for calculating the RAVs is more complicated, but for the purpose
of this thesis it suffices to understand the basic approach and that this approach is
different from the ones described in the other methodologies.
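Purely as an illustration of the simplified relationship above (and not of the actual OSSTMM calculation, which is considerably more elaborate), the summary could be expressed as the following hedged Python sketch; all values in the example are invented.

```python
def simplified_rav(porosity: float, controls: float, limitations: float) -> float:
    """Simplified RAV as summarised above; not the full OSSTMM formula.

    porosity    -- sum of all possibilities of access to the asset
    controls    -- value attributed to the implemented controls
    limitations -- value attributed to the weaknesses in those controls
    """
    return porosity - (controls - limitations)


def interpret(rav: float) -> str:
    """Interpret the simplified value, where 0 is the balance point."""
    if rav == 0:
        return "Perfect Security: controls exactly balance the possibilities of interaction"
    if rav > 0:
        return "Insufficient controls for the measured porosity"
    return "More controls implemented than necessary"


# Hypothetical example: porosity of 10, offset by controls of 12 with limitations of 2
print(interpret(simplified_rav(porosity=10, controls=12, limitations=2)))  # balance point
```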

In addition to describing how to calculate the RAVs, the OSSTMM describes seven
steps which should be followed to conduct a properly defined security test [21].

The first step includes defining which assets should be protected. Barcelo & Herzog
describe that “the protection mechanisms for these assets are the Controls you will test
to identify Limitations” [21]. Identifying the right assets is important, since it will
consequently identify and select the controls that will be tested during the penetration
test.

During the second step the ‘engagement zone’ should be identified, which is described
as “the area around the assets which includes the protection mechanisms and the
processes or services built around the assets” [21].

In the third step the scope should be identified [21]. This includes “everything outside
the engagement zone that you need to keep your assets operational. This may include
things you may not be able to directly influence” [21]. Some examples are given, such
as information, legislation, regulations and colleagues [21].

Step four includes defining the interactions within and with entities outside of the scope.
The interactions are called “vectors” [21] and could be interactions such as
“department A to department B” [21], but also interactions between a client and a
department.

The fifth step involves identifying over which of the earlier described channels the test
will be conducted and what equipment will be needed for each test [21].

The sixth step includes determining what type of test should be conducted; the six
commonly identified types are “Blind, Double Blind, Gray Box, Double Gray Box,
Tandem and Reversal” [21], some of which have been explained in chapter 1 of this
thesis.

The final step includes assuring that the test is in compliance with the Rules of
Engagement presented in the OSSTMM [21]. These rules include 42 guidelines which
“define the operational guidelines of acceptable practices in marketing and selling
testing, performing testing work, and handling the results of testing engagement” [21].
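Purely to illustrate how the outcome of these seven steps could be recorded before testing starts, a minimal Python sketch is given below. The field names mirror the terms used above, but the structure itself is not an OSSTMM artefact and all example values are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OSSTMMTestDefinition:
    """Illustrative record of the seven scoping steps described above (not an official OSSTMM artefact)."""
    assets: List[str]            # step 1: what must be protected
    engagement_zone: List[str]   # step 2: protection mechanisms and processes around the assets
    scope: List[str]             # step 3: everything outside the zone needed to keep assets operational
    vectors: List[str]           # step 4: interactions within and with entities outside the scope
    channels: List[str]          # step 5: channels over which the test will be conducted
    test_type: str               # step 6: Blind, Double Blind, Gray Box, Double Gray Box, Tandem or Reversal
    roe_confirmed: bool = False  # step 7: compliance with the Rules of Engagement confirmed

# Hypothetical example values, for illustration only
test = OSSTMMTestDefinition(
    assets=["customer database"],
    engagement_zone=["web application firewall", "authentication service"],
    scope=["hosting provider", "applicable privacy legislation"],
    vectors=["internet to web application", "department A to department B"],
    channels=["data networks"],
    test_type="Double Blind",
    roe_confirmed=True,
)
print(test.test_type, test.assets)
```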

For reporting the results of a test the OSSTMM describes the STAR (Security Test
Audit Report), whose purpose is to “serve as an executive summary of precise
calculation stating the Attack Surface of the targets tested within a particular scope”
[21]. This calculation is based mainly on the calculated RAVs during a test.

4.2 Penetration Testing Execution Standard (PTES)

Another earlier mentioned penetration testing methodology is the Penetration Testing
Execution Standard (PTES). The development of the PTES was started in early 2009
by a group of about six people and is now being developed by a group of around 20
senior information security practitioners based primarily in the United States [64]. The
‘standard’ is described as “a new standard designed to provide both businesses and
security service providers with a common language and scope for performing
penetration testing (i.e. Security Evaluations)” [64]. It was started “following a
discussion that sparked between some of the founding members over the value (or lack
of) of penetration testing in the industry” [64]. The intended audience for the PTES
includes “two main communities: businesses that require the service, and service
providers” [64]. More detail is provided on the audience, namely: “for businesses the
goal is to enable them to demand a specific baseline of work as part of a pentest. For
service providers the goal is to provide a baseline for the kinds of activities needed,
what should be taken into account as part of the pentest from scoping through reporting
and deliverables” [64].

The PTES consists of seven sections and an extra guide, consisting of technical
guidelines. The seven sections include: ‘Pre-engagement Interactions, Intelligence
Gathering, Threat Modelling, Vulnerability Analysis, Exploitation, Post Exploitation,
Reporting’ [22].

The aim of the ‘pre-engagement interactions’ section is “to present and explain the
tools and techniques available which aid in a successful pre-engagement step of a
penetration test” [22]. Within this section 19 different topics are described, including
how a penetration test should be scoped, which questions to ask, how to deal with third
parties, how to define payment terms, how to define the goals, how to establish lines of
communication and how to test according to specified rules of engagement [22].

The ‘intelligence gathering’ section “defines the Intelligence Gathering activities of a
penetration test. The purpose of this document is to provide a standard designed
specifically for the pentester performing reconnaissance against a target (typically
corporate, military, or related). The document details the thought process and goals of
pentesting reconnaissance, and when used properly, helps the reader to produce a
highly strategic plan for attacking a target” [22]. Within the ‘intelligence gathering’
section seven topics are described, which include: how to select a target, how to use
Open Source Intelligence (OSINT), how to perform footprinting and how to identify
protection mechanisms [22].
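As a minimal, hedged illustration of the kind of footprinting activity this section refers to (not a procedure taken from the PTES itself), the Python sketch below resolves a hostname to its aliases and addresses using only the standard library; the target name is a placeholder.

```python
import socket

def basic_footprint(hostname: str) -> dict:
    """Tiny footprinting sketch: resolve a hostname to aliases and addresses.

    This only illustrates the idea of information gathering; a real PTES-style
    reconnaissance phase would combine OSINT sources, DNS records, WHOIS data,
    search engines and much more.
    """
    info = {"hostname": hostname}
    try:
        canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
        info.update({"canonical": canonical, "aliases": aliases, "addresses": addresses})
    except socket.gaierror as exc:
        info["error"] = str(exc)
    return info

if __name__ == "__main__":
    # "example.org" is used purely as a placeholder target
    print(basic_footprint("example.org"))
```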

The ‘threat modelling’ section “defines a threat modeling approach as required for a
correct execution of a penetration testing” [22]. This section includes information
about how to perform a business asset analysis, a business process analysis, a threat
agent and community analysis, a threat capability analysis and how to model motivation
of threat agents and communities and finally how to find relevant news of comparable
organizations being compromised [22].
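To illustrate the kind of output such a threat modelling exercise might produce, a small hedged sketch mapping hypothetical business assets to threat communities is shown below; every asset and threat name is invented and is not taken from the PTES.

```python
# Hypothetical asset-to-threat mapping in the spirit of the PTES threat modelling
# section; every name and value below is illustrative only.
THREAT_MODEL = {
    "customer records (primary asset)": {
        "threat_communities": ["organised cybercrime", "malicious insider"],
        "capability": "medium to high",
        "motivation": "financial gain",
    },
    "public website (secondary asset)": {
        "threat_communities": ["hacktivists", "opportunistic attackers"],
        "capability": "low to medium",
        "motivation": "publicity",
    },
}

for asset, profile in THREAT_MODEL.items():
    print(f"{asset}: {', '.join(profile['threat_communities'])} ({profile['motivation']})")
```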

The ‘vulnerability analysis’ section focusses on “the process of discovering flaws in
systems and applications which can be leveraged by an attacker” [22]. This section
includes a description of active testing techniques, passive testing techniques,
validation techniques and what research should be performed when searching for and
validating vulnerabilities [22].

The ‘exploitation’ section describes that “the exploitation phase of a penetration test
focuses solely on establishing access to a system or resource by bypassing security
restrictions” [22]. The section includes information on how to evade countermeasures,
a description of typical approaches, information on how to use tailored exploits,
information on how to use zero-days and some examples of attack avenues [22].

The purpose of the next section, the ‘post-exploitation’ section is “to determine the
value of the machine compromised and to maintain control of the machine for later use”
[22]. This value is mainly based on “the sensitivity of the data stored on it and the
machines usefulness in further compromising the network” [22]. The section includes
information on how to protect clients and providers, how to analyse the infrastructure
using a compromised machine, how to obtain sensitive information from compromised
machines, how to identify high value targets, how to exfiltrate data, how to gain
persistent presence on compromised machines and how to clean up after a test [22].

The final section focusses on reporting and “intends to define the base criteria for
penetration testing reporting” [22]. Within this section information is presented on how
to structure a report, including the executive summary and the technical report [22].

The technical guidelines attached to the PTES “help define certain procedures to follow
during a penetration test” [22] and are, as the name suggests, mainly focussed on the
use of techniques and tools during a penetration test [22].

4.3 NIST Special Publication 800-115

A penetration testing methodology that is, among others, mentioned in the survey by
Knowles, Baron & McGarr [26] and recommended by the Dutch government [28] is
the NIST Special Publication 800-115 [26].

The NIST SP 800-115 is titled, ‘Technical Guide to Information Security Testing and
Assessment’, and was developed and published in 2008 by the National Institute of
Standards and Technology [24], part of the United States Department of Commerce.
The guide defines an information security assessment as “the process of determining
how effectively an entity being assessed (e.g., host, system, network, procedure, person
– known as the assessment object) meets specific security objectives” [24]. The guide
describes three types of assessment methods: “testing, examination, and interviewing”
[24]. ‘Testing’ is defined as “the process of exercising one or more assessment objects
under specified conditions to compare actual and expected behaviors”, [24] this is
largely in line with the definition given by Felderer et al in chapter 1. ‘Examination’ is
defined as “the process of checking, inspecting, reviewing, observing, studying, or
analysing one or more assessment objects to facilitate understanding, achieve
clarification, or obtain evidence” [24]. And ‘interviewing’ is defined as “the process
of conducting discussions with individuals or groups within an organization to facilitate
understanding, achieve clarification, or identify the location of evidence” [24].

The guide recommends using a “repeatable and documented security assessment
methodology” [24], because it can “provide consistency and structure to security
testing, which can minimize testing risks”, “expedite the transition of new assessment
staff” and “address resource constraints associated with security assessments” [24].
The guide recommends a phased information security assessment methodology, which
should include at minimum the following phases: ‘planning’, ‘execution’ and ‘post-
execution’ [24]. The NIST SP 800-115 describes several “accepted” methodologies for
this purpose, namely the Information Design Assurance Red Team (IDART), NIST SP
800-53A, National Security Agency (NSA) Information Assessment Methodology
(IAM), Open Source Security Testing Methodology Manual (OSSTMM) and Open
Web Application Security Project (OWASP) Testing Project [24]. They describe the
role of the NIST SP 800-115 as “this publication offers recommendations for technical
testing and examination techniques that can be used for many assessment
methodologies and leveraged for many assessment purposes” [24]. Following the
NIST SP 800-115 therefore does not preclude the use of another methodology. The guide
continues to describe the three main technical assessment techniques: ‘review
techniques’, ‘target identification and analysis techniques’ and ‘target vulnerability
validation techniques’ [24]. According to the guide ‘Target Vulnerability Validation
Techniques’ include “password cracking, penetration testing, social engineering, and
application security testing” [24]. The focus of the guide is described as “this
publication focuses on explaining how these different technical techniques can be
performed, and does not specify which techniques should be used for which
circumstances” [24].

After describing some preliminaries, such as testing viewpoints, the guide describes
(passive) review techniques, including network sniffing [24]. Chapter 4 of the guide
addresses technical target identification and analysis techniques, “which focus on
identifying active devices and their associated ports and services, and analysing them
for potential vulnerabilities” [24]. The guide focusses on ‘network discovery’
techniques, ‘network port and service identification’, ‘vulnerability scanning’ and
‘wireless scanning’. Chapter 5.2 of the guide specifically focusses on penetration
testing, in which it is defined as “security testing in which assessors mimic real-world
attacks to identify methods for circumventing the security features of an application,
system, or network” [24]. The guide describes that penetration testing can be useful for
determining: “how well the system tolerates real world-style attack patterns” [24], “the
likely level of sophistication an attack needs to successfully compromise the system”,
[24] “additional countermeasures that could mitigate threats against the system” [24]
and “defenders’ ability to detect attacks and respond appropriately” [24]. The guide
presents four phases of penetration testing, namely ‘planning’, ‘discovery’, ‘attack’,
‘reporting’ [24]. The relationships between the different phases are displayed in figure
5.

Figure 5: Penetration testing phases by Scarfone et al [24].

During the ‘planning’ phase, “rules are identified, management approval is finalized
and documented, and testing goals are set” [24]. The discovery phase includes two
parts, the first part “is the start of actual testing, and covers information gathering and
scanning” [24]. The second part includes “vulnerability analysis, which involves
comparing the services, applications, and operating systems of scanned hosts against
vulnerability databases and the testers’ own knowledge of vulnerabilities” [24]. The
third phase, executing an attack, is “at the heart of any penetration test” [24]. The guide
describes the four steps that make up the ‘attack’ phase, namely: ‘gaining access’,
‘escalating privileges’, ‘system browsing’ and ‘install additional tools’ [24]. These four
steps are represented in figure 6.

Figure 6: Attack phase steps by Scarfone et al [24]

The final, ‘reporting’ phase, occurs “simultaneously with the other three phases of the
penetration test” [24]. At the end of this phase, “a report is generally developed to
describe identified vulnerabilities, present a risk rating, and give guidance on how to
mitigate the discovered weaknesses” [24].
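Returning to the discovery phase described above, the techniques it mentions (network discovery and port and service identification) can be reduced, purely for illustration, to the following Python sketch, which attempts TCP connections to a handful of well-known ports. This is a hedged teaching sketch rather than a recommended tool; real assessments rely on dedicated scanners, and the target address is a placeholder from a reserved documentation range.

```python
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 25: "smtp", 80: "http", 443: "https"}

def identify_open_ports(host: str, timeout: float = 0.5) -> dict:
    """Tiny illustration of port and service identification during discovery.

    Checks a few well-known TCP ports with a plain connect() attempt; it is a
    sketch only and not a substitute for a real scanner.
    """
    open_ports = {}
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            try:
                sock.connect((host, port))
                open_ports[port] = service
            except OSError:
                pass  # closed, filtered or unreachable
    return open_ports

if __name__ == "__main__":
    # 192.0.2.10 lies in a reserved documentation range and is a placeholder only
    print(identify_open_ports("192.0.2.10"))
```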

4.4 Open Web Application Security Project (OWASP) Testing Guide

The fourth penetration testing methodology is the OWASP Testing Guide. This
methodology is mentioned in the survey by Knowles, Baron & McGarr [26] and also
in the CREST guide on how to set up a penetration testing programme [47]. In addition
to this the Dutch Information Security Service for Dutch Municipalities (IBD)
recommends using this methodology [29].

The OWASP is “a worldwide free and open community focused on improving the
security of application software”. It describes its mission as “to make application
security ‘visible’, so that people and organizations can make informed decisions about
application security risks” [23]. The OWASP includes many projects, including the
OWASP Application Security Verification Standard Project, OWASP Top 10 and
OWASP Zed Attack Proxy (ZAP). In the survey by Knowles, Baron & McGarr three
providers specifically mention the use of the OWASP Testing Guide [26]. This guide
is described as “the standard de-facto guide to perform Web Application Penetration
Testing” [23] and aims to “help people understand the what, why, when, where and
how of testing web applications” [23]. The current version of the OWASP Testing
Guide is version 4, which was published in 2014 [23]. The OWASP Testing Guide
specifically integrates security testing within the software development life cycle
(SDLC) [23]. After an introduction the second chapter of the guide describes what
testing is, when and what to test and some of the basic testing principles [23]. In addition
to this, some testing techniques, including their advantages and disadvantages, are
explained, such as ‘manual inspections & reviews’, ‘threat modeling’, ‘source code
review’ and ‘penetration testing’ [23]. Also, more information on how to derive security
test requirements, both functional and non-functional, is presented.

In chapter 4 the “Web Application Penetration Testing Methodology” [23] is described.
It defines a ‘web application security test’ as “a method of evaluating the security of a
computer system or network by methodically validating and verifying the effectiveness
of application security controls” [23]. The goal of the methodology is to “collect all
the possible testing techniques, explain these techniques, and keep the guide updated”
[23]. It is based on a black box approach [23]. The described test is divided into two phases,
a ‘passive mode’ and an ‘active mode’. In the passive mode “the tester tries to
understand the application’s logic and plays with the application” [23]. The ‘active
mode’ consists of a set of active tests that “have been split into 11 sub-categories for a
total of 91 controls” [23]. These eleven sub-categories include: ‘information gathering’,
‘configuration and deployment management testing’, ‘identity management testing’,
‘authentication testing’, ‘authorization testing’, ‘session management testing’, ‘input
validation testing’, ‘error handling’, ‘cryptography’, ‘business logic testing’ and ‘client
side testing’ [23]. Most of the remaining chapters of the guide present in-depth technical
guidelines on how to perform the tests within the 11 sub-categories [23]. The final
chapter, chapter 5, describes how a test should be reported [23]. This is important
since “performing the technical side of the assessment is only half of the overall
assessment process” [23] according to the guide. The OWASP describes that the report
“needs to have three major sections”, namely an executive summary, the ‘test
parameters’ and the findings [23]. The executive summary “sums up the overall
findings of the assessment and gives business managers and system owners a high-level
view of the vulnerabilities discovered” [23]. The second section includes the ‘test
parameters’, which could include the following headings: ‘project objective’, ‘project
scope’, ‘project schedule’, ‘targets’, ‘limitations’, ‘findings summary’ and ‘remediation
summary’. The final section “includes technical information about the vulnerabilities
found and the actions needed to resolve them” [23]. The section is “aimed at a technical
level” [23] and should include, amongst others, the severity rating of the vulnerability
[23].
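As a hedged illustration of one very small check within the ‘configuration and deployment management testing’ sub-category mentioned above (not a literal procedure from the Testing Guide), the following Python sketch requests a page and reports which common security-related response headers appear to be missing; the URL is a placeholder.

```python
from urllib.request import urlopen

SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(url: str) -> list:
    """Report common security response headers absent from the given URL.

    A small sketch in the spirit of the OWASP Testing Guide's configuration and
    deployment management tests; real testing covers far more than headers.
    """
    with urlopen(url) as response:
        present = {name.lower() for name in response.headers.keys()}
    return [header for header in SECURITY_HEADERS if header.lower() not in present]

if __name__ == "__main__":
    # "https://example.org/" is used purely as a placeholder target
    print(missing_security_headers("https://example.org/"))
```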

4.5 Comparison of the methodologies

As can be read in the previous sub-chapters, the methodologies take different approaches
to performing a penetration test or security test. In order to understand these differences
this chapter will compare the different methodologies by examining their goals, the
corresponding scope and the activities during pre-engagement, practical assessment and
post-engagement phases described within the methodologies. These three phases are
based on the analysis of the business process in chapter 3. At the end of every sub-
chapter a table is presented in which the findings are summarised.

4.5.1 Comparison of goals


In this sub-chapter the goals of the different methodologies will be compared and the main
differences highlighted.

The methodology that claims to be the most ‘scientific’ is the OSSTMM. The authors
of this methodology take a clear position in the ‘art vs science’ debate mentioned in
chapter 2.4. They introduce their standard as follows: “in art, the end result is a thing
of beauty, whereas in science, the means of reaching the end result is a thing of beauty.
When a security test is an art then the result is unverifiable and that undermines the
value of a test. One way to assure a security test has value is to know the test has been
properly conducted. For that you need to use a formal methodology. The OSSTMM
aims to be it” [21]. This is a clear aim and they define a security test performed using
the OSSTMM as an OSSTMM audit. As earlier mentioned the primary aim of the
OSSTMM is described by the authors as “to provide a scientific methodology for the
accurate characterization of operational security (OpSec) through examination and
correlation of test results in a consistent and reliable way” [21], which is mainly done
by calculating RAVs. The focus on the scientific nature of a test differentiates the
OSSTMM from the PTES, NIST SP800-115 and the OWASP Testing Guide.

The PTES instead focusses on providing “business and security service providers
with a common language and scope” and on enabling businesses to “demand a specific
baseline of work”. The PTES provides this baseline of activities by listing practical
activities, such as how to determine a scope, communicate with clients, gather
information, test for vulnerabilities and more. It does not provide a ‘scientific’ and
quantifiable approach to calculate actual security, or RAVs, like the OSSTMM does.

The NIST SP800-115, in turn, has a different focus: it “offers
recommendations for technical testing and examination techniques that can be used for
many assessment methodologies and leveraged for many assessment purposes”. It
provides a description of three main technical assessment techniques, but also
recommends using a phased information security methodology (amongst others the
OSSTMM and OWASP Testing Guide). The NIST SP800-115 can be seen as a
preliminary guideline for providers and clients, but not as a replacement for a
penetration testing methodology such as the OSSTMM, PTES and OWASP Testing Guide.

The OWASP Testing Guide also has a different aim compared to the OSSTMM.
Although one of the main differences is the scope (this will be discussed later), the
OWASP describes the goals of their Web Application Penetration Testing
Methodology as to “collect all the possible testing techniques, explain these techniques,
and keep the guide updated”. The authors try to achieve a Testing Methodology that is
“consistent, reproducible, rigorous and under quality control” [23]. Of the four
methodologies, the OWASP Testing Guide is arguably the most practically focussed. It
provides guidelines to integrate security testing into the SDLC and lists a large number
of possible tests that can be conducted and how they can be practically executed. In
table 1 the goals of the methodologies are summarised.

Table 1: Summary of the goals of the methodologies


OSSTMM: Providing a scientific methodology for measuring actual security.
PTES: Providing business and security service providers with a common language and scope and a baseline of work.
NIST SP 800-115: Providing recommendations for testing that can be used for many methodologies and assessment purposes.
OWASP TG: Providing a collection and explanation of all testing techniques and providing a consistent and reproducible methodology.

Although some overarching aims can be identified, such as the use of a structured
approach to allow verification of test results, it is clear that the goals of the
methodologies are not the same.

4.5.2 Comparison of scope
Another element that differentiates the methodologies is their scope.
In this sub-chapter the scope of the methodologies will be compared and the differences
highlighted.

The methodology with the broadest scope is the OSSTMM. The authors of the
OSSTMM describe that the manual can be used for almost any audit type, including
penetration tests, ethical hacking, security assessments, vulnerability assessments, red-
teaming and blue-teaming [21] and for five ‘channels’, namely human security, physical
security, wireless security, telecommunications security and data networks [21]. This
scope is very different from, for example, the OWASP Testing Guide. This guide
focusses purely on performing penetration tests against web applications. The NIST
SP800-115 is somewhat in the middle of the spectrum, it focusses on technical
information security tests and assessments, which include all types of information
technology, but not on physical and human security. For the PTES it is the other way
around: it focusses exclusively on performing penetration tests, but this includes testing
for human and physical security. Graph 1 below indicates the relationship between the
different scopes of the methodologies.

Graph 1: Relationship between the scope of the four methodologies

4.5.3 Comparison of pre-engagement guidelines
Ultimately, what differentiates the methodologies the most are the practical guidelines for
each phase of a penetration test. All four methodologies describe activities to be
conducted before the engagement starts. As described in the previous chapter the
OSSTMM presents seven steps which should be followed to conduct a properly defined
security test. These steps mostly focus on how to determine the scope of the test in
accordance with the OSSTMM approach and on ensuring that the test complies with the
Rules of Engagement. The PTES, however, gives a more practical description of the
activities that should be performed, for example how to interact with the client and deal
with third parties and how to set up payment terms. Like the OSSTMM, the PTES
describes how to perform a test in accordance with Rules of Engagement; again, these
guidelines are of a more practical nature, as can be seen in, for example, the
introduction of planning tools, how to deal with evidence and what time to test. The
NIST SP800-115 only briefly mentions a ‘planning’ phase, in which “rules are
identified, management approval is finalized and documented, and testing goals are
set”, without giving more concrete information [24]. Lastly, the pre-engagement
guidelines within the OWASP Testing Guide focus mostly on the place of a test within
the SDLC and deriving the right test requirements, something the other methodologies
don’t do.

Table 2: Summary of the pre-engagement guidelines of the methodologies


OSSTMM: Determination of scope and attack surface following the OSSTMM approach and presentation of the Rules of Engagement.
PTES: Very practical guidelines, including how to test according to Rules of Engagement, interact with clients, set up payments and other practical activities.
NIST SP 800-115: Limited guidelines, including getting management approval, identifying rules and setting goals.
OWASP TG: Placing testing within the SDLC and deriving the right test requirements.

4.5.4 Comparison of practical assessment guidelines


The practical assessment guidelines of the methodologies differ significantly. In this
sub-chapter the main high-level differences will be highlighted. The OSSTMM
describes practical assessment guidelines for performing a software security test or
penetration test in chapter 11 ‘Data Networks Security Testing’. This chapter is divided
into 17 sub-chapters which include practical activities to be conducted during an
OSSTMM penetration test. The main focus of the activities is to gather information and
perform vulnerability analysis on the target network and to determine OSSTMM
variables, such as the Risk Assessment Values.

The activities described in this chapter are less concrete than those in, for example, the OWASP
Testing Guide and the PTES. A typical example of this is the activity described in
chapter 11.15.2 (‘Authorization’) of the OSSTMM, namely “examine and verify any
means for gaining fraudulent authorization to gain privileges similar to that of other
personnel” [21]. This is a very broadly described activity and does not specify which
technical tools or techniques should be used to examine and verify the means for
gaining fraudulent authorization. The PTES on the other hand is very specific about
what tools and techniques could or should be used to examine the vulnerability of the
test object. Another characteristic of the PTES is the recommendation to perform a
threat analysis. During this analysis primary and secondary assets are identified and
corresponding threats and threat communities are mapped against them. The OWASP
describes the importance of threat modelling and risk analysis but does not go into detail
on how to perform this analysis. The OSSTMM specifically recommends not analysing
the threat as part of the security analysis [21], since this belongs to a risk analysis.
The NIST SP 800-115 only mentions that threat modelling could be done, but does not
provide any more in-depth information, and it does not mention this as part of its
penetration testing process (chapter 5.2). Another difference between the practical
assessment guidelines concerns the recommendations on what to do after a target has been
exploited. Chapter 6 of the PTES, ‘Post Exploitation’, describes activities to “determine
the value of the machine compromised” and to “maintain control of the machine for
later use” [22]. Both the OSSTMM and the OWASP Testing Guide do not provide any
recommendations on this topic. The NIST SP 800-115 does provide some
recommendations related to this topic as part of the attack phase [24].

Table 3: Summary of the practical assessment guidelines of the methodologies
OSSTMM: Semi-practical activities, with a main focus on information gathering, performing vulnerability analysis and determining RAVs, operational security, limitations and controls.
PTES: Very practical and specific activities, including what tools and techniques to use. Also includes performing a threat analysis.
NIST SP 800-115: High-level description of activities, including recommending the use of other methodologies.
OWASP TG: Very practical activities, including how to test for a wide variety of vulnerability types.

4.5.5 Comparison of post-engagement guidelines


All methodologies have guidelines that focus on how to report the findings of a test.
The OSSTMM is the only methodology that recommends its own reporting framework.
The purpose of this framework, called the STAR (Security Test Audit Report), is to
“serve as an executive summary of precise calculation stating the Attack Surface of the
targets tested within a particular scope” [21]. This precise calculation requires specific
values, related to operational security, limitations and controls. The other
methodologies offer simple recommendations on how findings can be reported most
effectively. The PTES provides high-level guidance on reporting findings. It
advises to use two separate sections within the report, namely an executive summary
and a section presenting the technical findings. The OWASP Testing Guide
recommends the same. The NIST SP 800-115 also includes this recommendation and
in addition focusses on how to create a ‘Plan of Action and Milestones (POA&M)’ for
fixing and mitigating the vulnerabilities and corresponding risks that were discovered
during the test. This focus on remediation and mitigation is more elaborate than both
the OWASP Testing Guide and the PTES.

Table 4: Summary of the post-engagement guidelines of the methodologies
OSSTMM: Describes how to report with the STAR, including how to calculate values related to operational security.
PTES: High-level guidelines on how to report, including the use of two separate sections in a report.
NIST SP 800-115: High-level guidelines on how to report and, in addition, the creation of a mitigation plan for vulnerabilities.
OWASP TG: High-level guidelines on how to report, including the use of two separate sections in a report.

4.6 Conclusion

By comparing the methodologies, it becomes clear that all four methodologies present
a different approach to performing a penetration test and that they aren’t necessarily
compatible. For the purpose of standardisation and creating a consistent approach it is
therefore remarkable that interviewed providers in both the British and Dutch markets
say they use different elements from different methodologies to create their internal
methodology. The next chapter describes exactly how the different methodologies are
used and combined, and whether this supports the achievement of the goals of the
methodologies.

5. Implementation of the methodologies

5.1 Data gathering

In order to find out how the methodologies are implemented in practice, why they are
implemented this way and whether this implementation supports the achievement of
the goals of the methodologies and as a result improves the quality of penetration tests,
the author decided to interview penetration testing providers operating within the Dutch
market. Due to the explorative nature of the main research question and in particular
question C (QC) the author decided to follow a qualitative approach and conduct semi-
structured in-depth interviews with seven providers.

Based on the reviewed literature and the research questions, 11 interview questions
were created which are presented in a summarized manner in table 5.

Ultimately six telephone interviews were conducted and in addition one digital
questionnaire was filled in. The interview results have been coded according to the
grounded theory methodology. During this process the interviews were transcribed and
a codebook was created using the qualitative data analysis tool ATLAS.TI. Because of
the presence of potentially confidential or commercially sensitive information, the
answers are not traceable back to a particular provider or person.

Table 5: Summarised version of the interview questions
Introduction
1. What is the size of your organisation?
2. Is conducting penetration tests the primary service your organisation provides?
Internal approach
3. Do you have, and do you follow, within your organisation, a formalised approach,
when conducting a penetration test?
Use of methodologies
4. Do you use, or is the approach used within your organisation, based on one of the
mentioned standards or methodologies?
4.1 OSSTMM, if yes, how much is it based on this methodology and why?
4.2 PTES, if yes, how much is it based on this methodology and why?
4.3 NIST SP 800-115, if yes, how much is it based on this methodology and why?
4.4 OWASP Testing Guide, if yes, how much is it based on this methodology and
why?
4.5 In case of no methodology, on what is your internal approach based?
Quality
5. What do you think of the general quality of penetration tests that are conducted
within the Dutch market?
Standardisation
6. Do you think an industry organisation such as CREST could be beneficial for
improving the quality within the Dutch market?
7. Do you think the use of a public standardised methodology when conducting a
penetration test can increase the quality of penetration test within the Dutch market?

5.2 Implementation of the methodologies

During early July six in-depth interviews were conducted with penetration testing
providers operating within the Dutch market and in addition one survey was filled in.

5.2.1 Interviewed providers


The seven providers that were interviewed differed in size. One provider had between
10 and 20 employees. Two providers had between 100 and 1000 employees and the four
other providers had more than 1000 employees, three of which had more than 10,000
employees.

Table 6: Size of interviewed providers (chart: number of providers per size category; 10 to 20 employees: 1, 100 to 1000: 2, 1000 to 10000: 1, 10000 or more: 3)

The providers were also asked whether penetration testing was the main service they
provide. Three providers explained it was their primary service, although one large
provider added that they provided many other services as well. Four other providers
described it wasn’t their main activity, but two of those providers made clear that it was
a daily activity for them.

Table 7: Importance of testing services (chart: number of providers per category; primary service: 3, not primary service but daily activity: 2, not primary service and not daily activity: 2)

5.2.2 Internal approach
When asked whether they had a formalised internal approach, all providers
answered that they had indeed created such an approach.

5.2.3 Use of methodologies


The providers were also asked if they used one of the four methodologies as part of
their internal approach. Four providers described they had created their own internal
methodology based on several publicly available methodologies, such as OSSTMM,
NIST SP800-115, PTES and the OWASP Testing Guide. One provider described this
process as “we looked at them and we took what we thought was logical”. Another
provider explains “we took the OWASP Testing Guide, which we simplified a bit and
then threw in some other standards that are out there”. This finding is in line with the
findings of the British market survey by Knowles et al [26]. Two providers said they
would use a methodology if this was required by the client. One of these providers
clarified “in practice, the IT landscapes are so fragmented that also the people, in-
house knowledge and wishes are fragmented, so it is a unique assignment per client”.
One provider pointed out that they limited the use of standards, in general, to avoid
licensing issues and because they thought their internal standards were better than the
public ones.

When asked specifically about their use of the OSSTMM, providers gave different
answers. All providers were aware of the existence of the OSSTMM, but three
providers said they did not use the OSSTMM at all. One of those providers explained
that in their opinion the OSSTMM was too complicated. Two out of three of the other
providers who used the OSSTMM as an inspiration for their internal methodology
agreed with this and therefore did not base their entire methodology on the OSSTMM.
One of those providers described “the OSSTMM is a very scientific method for
conducting penetration tests. In order to fully implement this employees need to be
trained and it will make the duration of a test much longer, this is why we chose our
own method”. Two providers did say, however, that while the OSSTMM indeed was
complicated, it was also very thorough and ‘scientific’, which could be useful in some
instances. Another provider said they were using the Rules of Engagement of the
OSSTMM and some of its definitions for the different types of testing. Finally, the last
provider said they were looking into using the OSSTMM in the near future, mainly
because of its structured approach and its broad focus.

Table 8: Implementation of the OSSTMM (chart: number of providers per degree of implementation: not implemented, used as inspiration, partially implemented, fully implemented)

The second methodology, the PTES, was less known than the OSSTMM. Three
providers said they did not know this methodology and therefore didn’t implement it.
Three other providers said they used the PTES as an inspiration for their internal
methodology. One of those providers said they didn’t use the PTES in a more elaborate
way because it wasn’t a very scientific approach. One provider was more positive about
the PTES and said they had based their internal methodology partly on the PTES. The
main benefit of the PTES was that is wasn’t written too concrete so the creativity of the
tester wouldn’t be limited, this provider did however mention that the use of the
methodology was dependent on the exact question of the client.

Table 9: Implementation of the PTES (chart: number of providers per degree of implementation; not implemented: 3, used as inspiration: 3, partially implemented: 1, fully implemented: 0)

When asked about the NIST SP 800-115, reactions were mixed. Three providers looked
at the NIST SP800-115 when developing their internal methodology. One of these
providers said the NIST SP 800-115 was a good standard and useful for sharing results.
Another provider described that the NIST SP800-115 was limited to penetration tests
focussed on infrastructure, but that they used it for this purpose. Other providers did not
use this methodology, without giving any clear reason for this other than that they
preferred other methodologies.

Table 10: Implementation of the NIST SP800-115 (chart: number of providers per degree of implementation: not implemented, used as inspiration, partially implemented, fully implemented)

The fourth methodology, the OWASP Testing Guide, is well known and widely used
by the interviewed providers. Six out of seven providers said they use the Testing Guide
during their penetration tests. Only one provider didn’t use the OWASP Testing Guide,
because it is “too much and too detailed”. However, this provider did mention the use
of the OWASP Application Security Verification Standard (ASVS) as a replacement.
One of the six providers who mentioned the use of the OWASP Testing Guide said the
guide is used in many organisations and is well known within the industry. Two
providers said the OWASP Testing Guide is a good methodology because it is “simple
and understandable”.

However, even though the methodology is used by many providers, all of those
providers reported they only used chapter 4 of the guide, instead of the guide as a whole.
One provider explained “we have divided our methodology into topics: authentication,
authorisation, session management, communication security, this kind of high-level
topics. We have adopted this common thread and continued to substantiate some topics
further”. The earlier chapters, which include the OWASP Testing Framework and place
testing within the SDLC, aren’t used by any of the providers. In addition to the
guide, the OWASP Top 10 was mentioned by two providers; one provider described “I
can say to a client, look, the OWASP Testing Guide includes the OWASP Top 10
vulnerabilities and so you won’t have any basic vulnerabilities anymore”.

Table 11: Implementation of the OWASP Testing Guide (chart: number of providers per degree of implementation: not implemented, used as inspiration, partially implemented, fully implemented)

5.2.4 Quality of penetration testing

In addition to the use of the methodologies, providers were asked about their opinion
on the current quality of penetration tests delivered within the Dutch market. The
majority of the providers characterised the overall quality as low or very low, but most
of them said there was a significant difference in quality between providers. One of the
main issues all providers highlighted was the sale of vulnerability scans as penetration
tests. As described in chapter 2.3 of this thesis these two activities are very different
from each other and produce different results and levels of assurance. One provider
clarifies “we see many small parties […] who buy Nessus and just scan a website with
Nessus and copy the results one-on-one to a report and say this is a pentest report” and
continues “this is very bad for the market […] for a client [...] it is very difficult to
understand the difference in quality”. He explains that in one instance, a client
discovered a flaw in their website just one week after they had done such a test and that
the “client was really pissed” and that it is bad for the “feeling of security”. Other
providers agreed that this is bad for the market and industry in general. Although
this issue seems to be one of the main reasons the current quality within the Dutch market
is low, more mature providers do offer better testing services. Two interviewed
providers said that from a technical point of view some providers provide high quality
services, mainly because they had the right people working for them. One provider
elaborated on this by saying: “the penetration testers I know are all training very well
and use their spare time to improve their skills, they take the OSCP exam to show they
have a certain level of quality and they actively think about how to transfer their
message in a technical and non-technical way. So I do see a certain sense of
responsibility with pentesters themselves”.

Table 12: Quality of penetration testing services (chart: overall quality of testing services according to the interviewed Dutch providers, on a scale from very low to very high)

When comparing the quality of the Dutch penetration testing market to that of the rest
of Europe and more specifically the British market, the opinions were mixed. Some
providers weren’t aware of the quality of services in countries outside The Netherlands
and the providers who were aware mainly described the British market as being quite
similar to the Dutch market in terms of quality. One provider thought the technical
expertise available within the Dutch market was slightly higher. Another provider
added that the British market might be more mature from a quality perspective because
“they have CREST and people trust it”.

5.2.5 Standardisation

The final topic discussed in the interviews was the approach to standardisation within
the Dutch market, the use of standard methodologies and the creation of an industry
association fulfilling a role such as CREST fulfils within the British market. Almost all
providers agreed that the use of publicly available methodologies can increase the
quality of penetration tests. One provider explains: “in this case a penetration test is
conducted according to a recognized methodology. This should make it more clear for
clients if all providers use a similar method”. Another provider describes “if we all use
a standard and share knowledge with each other, the level will increase throughout the
industry, I think this is better.” A third provider explains “yes, you need a baseline
quality, always […] for many parties it will mean they increase their quality”. Most
providers did note, however, that the use of a standardised methodology shouldn’t limit
the tester’s creativity.

Table 13: Opinion on the use of standard methodologies (chart: number of providers answering yes or no to whether the use of standard methodologies for penetration tests can increase quality)

When asked about setting up an industry association like CREST the reactions were
mixed (one provider did not answer this question since this question wasn’t part of the
digital survey). First of all, three providers reported never having heard of CREST.
After introducing the topic and explaining its role, all providers saw positive and
negative sides of this type of industry association. Some providers said they thought it
was a good idea to have an organisation “that ensures all tests that are conducted have
a certain quality” because “we are all responsible for the cybersecurity in The
Netherlands”; one provider added “I think some requirements should be set for this”. Although
the majority of providers were in general terms supportive of the establishment of this
type of organisation, many potential issues were raised as well. Two providers warned
that becoming certified should not be a goal in itself; one provider elaborated: “what
we have to prevent is that everyone wants to get this stamp and doesn’t care about the
actual quality”. Another provider agreed and described “the moment you have to
become CREST certified, obtaining a CREST certification becomes more interesting
than actually performing the right quality things”. Another issue that was raised by two
providers was that certification requirements could be misused in order to limit or
disadvantage competition. More specifically, one provider explained “if CREST
requires to have a minimum of 10 employees who can provide a particular service, to
provide continuity. It means small companies immediately are disadvantaged”. A third
issue was raised by one provider who found the technical quality of personal CREST
certifications quite low, saying “I looked at the content of the certification, because I
was thinking to obtain the CREST certification, but I wasn’t very impressed”. Another
remark regarding the creation of an industry association was that the Dutch NCSC
should be more involved in the market.

5.3 Achievement of the goals

Based on the data gathered during the interviews it is clear that the majority of the goals
of the methodologies are not entirely met and that the methodologies only provide
limited improvement of quality within the industry.

Table 14: Goals of methodologies being achieved

All goals1
0.8
Most goals
0.6
Some goals
0.4
Few goals
0.2
No goals
0
Goals being achieved based on current implementation

OSSTMM PTES NIST SP800-115 OWASP Testing Guide

As described in chapter 4, the goal of the OSSTMM is “to provide a scientific
methodology for the accurate characterization of operational security (OpSec) through
examination and correlation of test results in a consistent and reliable way”. The
OSSTMM uses its own method including calculation of the RAVs and reporting via the
STAR. The current implementation of the OSSTMM is very limited and most often
does not include using the methodology provided by the OSSTMM, mainly because
providers use other methodologies and find the OSSTMM too complicated to
implement. The STAR framework is also not used by any of the providers. This results
in the ambitious goals of the OSSTMM not being met at all.

The main goals of the PTES are “providing business and security service providers with
a common language and scope” and to enable clients to “demand a specific baseline
of work”. The current use and implementation of this methodology suggests that these
goals aren’t met. Some providers didn’t know about the PTES and therefore didn’t
implement it, and for most others it was only used as inspiration for their internal
methodology. No specific parts of the methodology were consistently used, resulting
in the lack of a clear baseline of work and a common scope and language. The
one provider who said they partly based their internal methodology on the PTES, also
said they don’t follow the complete standard, which again might result in the lack of a
clear baseline of work and common language.

Due to its flexible nature, the NIST SP 800-115 seems to achieve its goals at least partly.
The aim of this methodology is described as follows: it “offers recommendations for
technical testing and examination techniques that can be used for many assessment
methodologies and leveraged for many assessment purposes”. Although most
providers described that they did not fully implement this methodology and used it in
combination with other methodologies, this could actually contribute to reaching one
of the goals of the NIST SP 800-115. The NIST SP 800-115 recommends using another
testing methodology, such as the OSSTMM and the OWASP Testing Guide, so using one of these
methodologies in combination with the NIST SP 800-115 could be in line with the
NIST SP 800-115 itself. Unfortunately, the NIST SP 800-115 requires the methodologies to
be followed in their entirety, so in the end the goal of the NIST SP 800-115 is only met
partially.

The main goal of the OWASP Testing Guide is described as to “collect all the possible
testing techniques, explain these techniques, and keep the guide updated” in order to
“help people understand the what, why, when, where and how of testing web
applications”. A web application test using the Web Application Penetration Testing
Methodology is described as “a method of evaluating the security of a computer system
or network by methodically validating and verifying the effectiveness of application
security controls”. Based on the conducted interviews it becomes clear that the
OWASP Testing Guide is arguably the most useful methodology, since it was known
by all providers and used in nearly all internal penetration testing methodologies. Most
providers benefited from the explanations in the guide and especially described the
practical information, in chapter 4, regarding the different types of tests as useful.
Unfortunately, the earlier chapters, which include the OWASP Testing Framework and
recommendations on when to test, aren’t used by any providers. In addition to this the
OWASP Testing Guide recommendations on how to report findings aren’t used either.

Based on the current implementation of the methodologies and the extent to which the
goals are met, it can be concluded that the quality of penetration tests has only improved
marginally since the introduction of the methodologies. Although the different
methodologies do provide some help to providers, the level of implementation has to
change in order to fully reach the goals of the methodologies and benefit from the
associated improvements in quality of tests.

6. Conclusion and discussion
With this thesis the author attempted to clarify whether the four main publicly available
standard methodologies for penetration testing, the OSSTMM, the PTES, NIST SP 800-
115 and the OWASP Testing Guide, indeed improve the quality of penetration tests, and
why the methodologies do or do not provide this improvement. The author attempted
this by examining how the current implementation of the methodologies within the
Dutch testing industry supports the achievement of their goals. First, the topic of
penetration testing, its industry, its typical business process and issues related to
standardisation were introduced. Second, the current public methodologies were
analysed by describing and comparing their goals, scope and assessment guidelines.
Third, the current implementation of the standard methodologies, and the reasons for
that implementation, were examined by interviewing providers operating within the
Dutch market. Finally, the results of the interviews were used to determine whether the
goals of the methodologies were indeed achieved and whether the methodologies
improve the quality of penetration tests.

To gather the data required for this examination, the author reviewed the literature and
interviewed seven providers. Because this number of providers could be regarded as
limited, the author chose a primarily qualitative approach to answering the research
questions. This study should therefore primarily be seen as exploratory rather than
descriptive, and it aims to provide an answer as to how and why the current
implementation of the methodologies improves (or does not improve) the quality of
penetration tests.

Based on this examination it can be concluded that some methodologies do contribute,
to a limited extent, to an improved quality of penetration tests within the Dutch
industry. However, most methodologies do not provide much improvement and most of
their goals are not achieved. The main reason for this is the limited and partial
implementation of the methodologies.

More specifically, the OWASP Testing Guide seems to be useful in providing an
understanding of how to conduct a penetration test and, more concretely, of how to test
for certain vulnerabilities. However, its recommendations for a structured approach that
takes the Software Development Lifecycle into account are not followed in practice.

NIST SP 800-115 also provides some help to a small group of providers, but most
providers chose not to use this methodology, for no clear reason other than a preference
for other methodologies.

The PTES was relatively unknown within the industry and, although one provider did
use it, its goals are not met because that would require the PTES to be followed in full.
In general it does not contribute much to improving the quality of penetration tests
within the Dutch market.

The final methodology, the OSSTMM, recommends the most ‘scientific’ approach
and could in theory have been the most useful in improving the overall quality of
penetration tests. However, due to its complexity, no provider implemented this
methodology in full, and it therefore does not contribute much to the improvement of
the quality of penetration tests within the Dutch industry.

In addition to determining the usefulness of the methodologies themselves, some
insights on the standardisation of penetration testing in general and within the Dutch
market were gathered. It can be concluded that providers are, in general terms, positive
about the use of publicly available standard methodologies. Providers describe the
quality of penetration tests in the Dutch market as varying widely per provider but as
generally low, and they are aware that problems exist within the Dutch market. One
clear problem is the sale of simple automated vulnerability assessments as penetration
tests, a practice that could lead to distrust among clients purchasing a penetration test. A
potential solution, the creation of an industry association (like CREST in the British
market) tasked with maintaining a certain level of quality within the market, was met
with mixed opinions. Most providers were positive in general terms, but were cautious
about how, for example, requirements would be defined; they also warned that
becoming certified could become more important than delivering genuinely high-quality
penetration tests and that the requirements could be used to disadvantage competitors.

Based on this study the author would like to make six recommendations to the
organisations involved.

1. ISECOM could investigate how the OSSTMM can be made less complex
and be more practical to implement while maintaining their thorough
scientific approach as much as possible.

2. The PTES team could investigate how the goals of their methodology can
be reached and how adoption of the methodology could be improved.

3. Although some of the goals of the NIST SP 800-115 are met, NIST could
evaluate if current adoption of the standard is satisfactory.

4. Although the OWASP Testing Guide is widely used and contributes to an
improved quality of penetration tests, OWASP could investigate how
providers can be made more aware of the OWASP Testing Framework and
the place of testing within the SDLC.

5. Industry leaders within The Netherlands could conduct (further) research on
how to increase the overall quality of penetration tests within The
Netherlands.

6. CREST could cooperate with industry leaders in The Netherlands and
examine whether a Dutch CREST chapter would be beneficial for the
overall quality of penetration tests within the Dutch market.

Bibliography

[1] Eurostat, "Digital economy and society statistic - households and individuals,"
June 2019. [Online]. Available: https://ec.europa.eu/eurostat/statistics-
explained/index.php/Digital_economy_and_society_statistics_-
_households_and_individuals#Internet_usage. [Accessed 28 July 2019].
[2] StatLine, "Internet; toegang, gebruik en faciliteiten," 31 October 2018. [Online].
Available:
https://opendata.cbs.nl/statline/#/CBS/nl/dataset/83429NED/table?dl=16DA3.
[Accessed 28 July 2019].
[3] Centraal Bureau voor de Statistiek (Statistics Netherlands), "The Netherlands
leads Europe in internet access," 3 February 2018. [Online]. Available:
https://www.cbs.nl/en-gb/news/2018/05/the-netherlands-leads-europe-in-
internet-access. [Accessed 28 July 2019].
[4] P. Newman, "The Internet of Things 2019," Business Insider Intelligence,
London, 2019.
[5] Department for Digital, Culture, Media & Sport, "Cyber Security Breaches
Survey 2018: Statistical Release," Department for Digital, Culture, Media and
Sport, London, 2018.
[6] Nationaal Coördinator Terrorismebestrijding en Veiligheid, "Cybersecuritybeeld
Nederland," Ministry of Justice and Security, The Hague, 2019.
[7] Autoriteit Persoonsgegevens - Dutch Data Protection Authority, "Meldplicht
datalekken: facts & figures," Autoriteit Persoonsgegevens, The Hague, 2018.
[8] J. R. Lindsay, "Stuxnet and the Limits of Cyber Warfare," Security Studies, vol.
22, no. 3, pp. 365-404, 2013.
[9] BBC, "Equifax finds more victims of 2017 breach," 1 March 2018. [Online].
Available: https://www.bbc.co.uk/news/technology-43241939. [Accessed 22
February 2019].
[10] Ponemon Institute, "Cost of Cyber Crime Study," Accenture, Traverse City,
2017.
[11] Nederlandse Omroep Stichting, "Vrouw uit Breda tijdens 112-storing
overleden," 26 June 2019. [Online]. Available: https://nos.nl/artikel/2290812-
vrouw-uit-breda-tijdens-112-storing-overleden.html. [Accessed 28 July 2019].
[12] The Telegraph, "More than 900 NHS deaths yearly may be caused by IT
failings," 6 February 2018. [Online]. Available:
https://www.telegraph.co.uk/news/2018/02/06/900-nhs-deaths-yearly-may-
caused-failings/. [Accessed 28 July 2019].
[13] Statista, "Size of the information security technology market from 2016 to 2022
(in billion U.S. dollars)," 2019. [Online]. Available:
https://www.statista.com/statistics/640141/worldwide-information-security-
market-size/. [Accessed 22 February 2019].
[14] M. Felderer, M. Buchler, M. Johns, A. D. Brucker, R. Breu and A. Pretschner,
"Security Testing: A Survey," in Advances in Computers, vol. 101, Cambridge,
Elsevier, 2015.

[15] M. Majzoobi, F. Koushanfar and M. Potkonjak, "Testing Techniques for
Hardware Security," in 2008 IEEE International Test Conference, Santa Clara,
CA, 2008.
[16] T. Dimkov, A. van Cleeff, W. Pieters and P. Hartel, "Two methodologies for
physical penetration testing using social engineering," in Proceedings of the
26th Annual Computer Security Applications Conference, Austin, TX, 2010.
[17] M. Felderer, P. Zech, R. Breu, M. Buchler and A. Pretschner, "Model-based
security testing: a taxonomy and systematic classification," SOFTWARE
TESTING, VERIFICATION AND RELIABILITY, no. 26, pp. 119-148, 2015.
[18] G. Tian-yang, S. Yin-sheng and F. You-yuan, "Research on Software Security
Testing," International Journal of Computer and Information Engineering, vol.
4, no. 9, 2010.
[19] J. Frankland, The importance of standardising methodology in penetration
testing, USA: Corsaire, 2009.
[20] T. Wilhelm, Professional Penetration Testing, Rockland, MA: Syngress Media
Incorporated, 2013.
[21] P. Herzog, "OSSTMM 3 The Open Source Security Testing Methodology
Manual," ISECOM, Catalonia, 2010.
[22] PTES, "Penetration Testing Execution Standard," 2014. [Online]. Available:
http://www.pentest-standard.org/index.php/FAQ. [Accessed 24 February 2019].
[23] Open Web Application Security Project, "Testing Guide 4.0," OWASP,
Maryland, 2013.
[24] K. Scarfone, M. Souppaya, A. Cody and A. Orebaugh, "Technical Guide to
Information Security Testing and Assessment," National Institute of Standards
and Technology (NIST), Gaithersburg, MD, 2008.
[25] Open Web Application Security Project (OWASP), "Application Security
Verification Standard 3.0," OWASP, Maryland, 2015.
[26] W. Knowles, A. Baron and T. McGarr, "Analysis and recommendations for
standardization in penetration testing and vulnerability assessment," British
Standards Institution, 2015.
[27] De Nederlandsche Bank, "Open Boek Toezicht - Pentesten," 23 January 2014.
[Online]. Available: https://www.toezicht.dnb.nl/2/50-229818.jsp#. [Accessed
28 July 2019].
[28] GOVCERT.NL (NCSC), "Pentesten doe je zo - Een handleiding voor
opdrachtgevers," GOVCERT.NL (NCSC), The Hague, 2010.
[29] Informatiebeveiligingsdienst voor gemeenten (IBD), "Handreiking
Penetratietesten - Een operationeel kennisproduct ter ondersteuning van de
implementatie van de Baseline Informatiebeveiliging Overheid (BIO),"
Vereniging van Nederlandse Gemeenten, The Hague, 2019.
[30] P. Rietveld, "Security is te belangrijk om aan ethische hackers over te laten,"
Testnet Nieuws, no. 18, 2015.
[31] M. Buijs, "Een methodiek voor preventieve, testgerichte beveiligingsaudits
(penetratietesten)," De EPD-Auditor, 2007.
[32] Centraal Bureau voor de Statistiek - Statistics Netherlands, "The Netherlands
leads Europe in internet access," 3 February 2018. [Online]. Available:
https://www.cbs.nl/en-gb/news/2018/05/the-netherlands-leads-europe-in-
internet-access. [Accessed 28 July 2019].

[33] AMS-IX Amsterdam, [Online]. Available: https://www.ams-ix.net/ams.
[Accessed 28 July 2019].
[34] C. Willig, Introducing Qualitative Research in Psychology, Maidenhead:
McGraw-Hill Education (UK), 2013.
[35] M. E. Khan and F. Khan, "A Comparative Study of White box, Black Box and
Grey Box Testing Techniques," in International Journal of Advanced Computer
Science and Applications (IJACSA), Bradford, 2012.
[36] J. D. DeMott, R. J. Enbody and W. F. Punch, "Revolutionizing the Field of
Grey-box Attack Surface Testing with Evolutionary Fuzzing," Las Vegas, 2007.
[37] R. J. Anderson, "What is Security Engineering?," in Security Engineering: A
Guide to Building Dependable Distributed Systems, Hoboken, NJ, John Wiley
& Sons, 2008.
[38] The British Standards Institution, "BS EN ISO/IEC27000:2017 - Information
technology. Security Techniques. Information security management systems.
Overview and vocabulary.," BSI, London, 2017.
[39] B. Arkin, S. Stender and G. McGraw, "Software Penetration Testing," IEEE
Security & Privacy, 2005.
[40] A. Cavoukian and M. Dixon, Privacy and Security by Design: An Enterprise
Architecture Approach, Ontario: Information and Privacy Commissioner, 2013.
[41] N. B. Ruparelia, "Software Development Lifecycle Models," in ACM SIGSOFT
Software Engineering Notes, New York City, NY.
[42] M. Howard and S. Lipner, The Security Development Lifecycle, Redmond,
WA: Microsoft Press, 2006.
[43] Open Web Application Security Project, "Software Assurance Maturity Model,"
OWASP, Maryland, 2009.
[44] B. Potter and G. McGraw, "Software Security Testing," IEEE Security &
Privacy, no. 2, 2004.
[45] D. Geer and J. Harthorne, "Penetration Testing: A Duet," in Proceedings of the 18th
Annual Computer Security Applications Conference (ACSAC'02), New York
City, NY, 2002.
[46] National Cyber Security Centre, "Penetration Testing," 9 August 2017. [Online].
Available: https://www.ncsc.gov.uk/guidance/penetration-testing. [Accessed 24
February 2019].
[47] CREST, "A guide for running an effective Penetration Testing programme,"
CREST, Slough, 2017.
[48] Penetration Test Guidance Special Interest Group PCI Security Standards
Council, "Information Supplement: Penetration Testing Guidance," PCI Security
Standards Council, Wakefield, MA, 2015.
[49] P. Godefroid, M. Y. Leven and D. Molnar, "SAGE: Whitebox Fuzzing for
Security Testing," Communications of the ACM, vol. 55, no. 3, 2012.
[50] OWASP Foundation, OWASP Code Review Guide, Bel Air, MD: OWASP
Foundation, 2008.
[51] S. Levy, Hackers - Heroes of the Computer Revolution, New York, NY: Dell
Publishing, 1984.
[52] M. Romagna and N. J. van den Hout, "Hacktivism and website defacement:
motivations, capabilities and potential threats," Madrid, 2017.

[53] C. Duffy, Learning Penetration Testing with Python, Birmingham: Packt
Publishing, 2015.
[54] Council of Registered Ethical Security Testers (CREST), "CREST," [Online].
Available: https://www.crest-approved.org/. [Accessed 29 July 2019].
[55] T. H. Davenport, Process Innovation: Reengineering Work through Information
Technology, Cambridge, MA: Harvard Business School Press, 1992.
[56] M. Hammer and J. Champy, "Reengineering the corporation: A manifesto for
business revolution," Business Horizons, vol. 36, no. 5, 1993.
[57] K. Vergidis, A. Tiwari and B. Majeed, "Business Process Analysis and
Optimization: Beyond Reengineering," vol. 38, no. 1, 2008.
[58] C. M. Pereira, A. Caetano and P. Sousa, "Using a Controlled Vocabulary to
Support Business Process Design," London, 2011.
[59] L. Allen, S. Ali and T. Heriyanto, Kali Linux - Assuring Security by Penetration
Testing, Birmingham: Packt Publishing, 2014.
[60] G. Johansen, Kali Linux 2 - Assuring Security by Penetration Testing - Third
Edition, Birmingham: Packt Publishing , 2016.
[61] G. Weidman, Penetration Testing: A Hands-On Introduction to Hacking, San
Francisco, CA: No Starch Press, 2014.
[62] P. Engebretson, The Basics of Hacking and Penetration Testing, Waltham, MA:
Syngress/Elsevier, 2013.
[63] A. Whitaker and D. Newman, "Penetration Testing and Cisco Network
Defense," Cisco Press, Indianapolis, IN, 2005.
[64] PTES Group, "Penetration Testing Execution Standard - the FAQ," 14 January
2017. [Online]. Available: http://www.pentest-standard.org/index.php/FAQ.
[Accessed 28 March 2019].
[65] Open Information Systems Security Group, "Information Systems Security
Assessment Framework," OISSG, 2006.
[66] A. Shanley and M. N. Johnstone, "Selection of penetration testing
methodologies: A comparison and evaluation," Perth, 2015.
[67] M. Felderer, B. Agreiter, P. Zech and R. Breu, "A Classification for Model-
Based Security Testing," Barcelona, 2011.
[68] ISECOM, Hacking Exposed Linux: Linux Security Secrets & Solutions, New
York: McGraw-Hill, 2008.
[69] ISECOM, "Open Source Security Testing Methodology Manual (OSSTMM),"
2017. [Online]. Available: http://www.isecom.org/research/. [Accessed 28
March 2019].
[70] Office for National Statistics, "Internet users UK: 2018," Office for National
Statistics, London, 2018.
[71] BBC, Equifax finds more victims of 2017 breach, London, 2018.
[72] The Guardian, "Hacking risk leads to recall of 500,000 pacemakers due to
patient death fears," 31 August 2017. [Online]. Available:
https://www.theguardian.com/technology/2017/aug/31/hacking-risk-recall-
pacemakers-patient-death-fears-fda-firmware-update. [Accessed 22 February
2019].
[73] WIRED, "After Jeep Hack, Chrysler Recalls 1.4M Vehicles for Bug Fix," 24 July
2015. [Online]. Available: https://www.wired.com/2015/07/jeep-hack-chrysler-recalls-1-4m-vehicles-bug-fix/.
[Accessed 22 February 2019].
[74] Markets & Markets Research Private Ltd, "Markets to Markets," 2016. [Online].
Available: https://www.marketsandmarkets.com/Market-Reports/security-
testing-market-150407261.html. [Accessed 24 February 2019].
[75] Stratistics MRC, "Global Security Testing Market Share, Size, Estimates,
Trends and Forecast 2023," 22 January 2018. [Online]. Available:
https://www.reuters.com/brandfeatures/venture-capital/article?id=25506.
[Accessed 24 February 2019].
[76] Mordor Intelligence, "Security Testing Market - Growth, Trends, and Forecast
(2019 - 2024)," 2018. [Online]. Available:
https://www.mordorintelligence.com/industry-reports/global-security-testing-
market-industry. [Accessed 24 February 2019].
[77] CREST, "Accredited Companies - Regions and Services," [Online]. Available:
https://www.crest-approved.org/accredited-companies/index.html. [Accessed 24
February 2019].
[78] A. Basta, N. Basta and M. Brown, Computer Security and Penetration Testing,
Second Edition, Stamford, CT: Cengage Learning, 2014.
