
Intro to DGIT Security Testing

The Applicable Definition

Security ↔ Limit, Restriction, Predictability

Limit: A quantifiable boundary, e.g. maximum input length, maximum number of users per minute, etc.

Restriction: A non-quantifiable boundary, e.g. restricting users’ access to particular features of Telflow, restricting network access from outside the VPN, etc.

Predictability: Expected behaviours/responses, e.g. getting the appropriate response webpage, a correct login, or a correct denial of a login request.
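
To make these concrete, a limit and its predictable response can be checked with a short script. Below is a minimal sketch in Python, assuming a hypothetical host, endpoint, and documented maximum input length (none of these names are Telflow’s real API):

    # Minimal sketch: verify that input beyond an assumed documented
    # maximum length is rejected cleanly and predictably.
    import requests

    BASE_URL = "https://telflow.example.com"  # hypothetical host
    MAX_INPUT_LENGTH = 255                    # assumed documented limit

    def test_input_length_limit():
        oversized = "A" * (MAX_INPUT_LENGTH + 1)
        resp = requests.post(f"{BASE_URL}/api/search", data={"query": oversized})
        # Predictability: an over-limit input should be rejected cleanly,
        # not crash the server or silently succeed.
        assert resp.status_code in (400, 413), f"Unexpected status {resp.status_code}"

    test_input_length_limit()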
Security Testing: The Analogy

Security Testing = Human Trials?


On Web App | On Human
Test the amount of input the web app can handle | Test the amount of food a human can take
Restrict a normal user’s access to the full features of Telflow | Restrict a human from colonizing other regions
Requesting a webpage | Asking somebody a question
Entering malformed input | Feeding poisonous food

All of which lead us to…

Enter Fuzzing, a.k.a. Messing Around

 Discover hidden URLs
 Enter restricted sections
 Exceed the limits
 Modify the parameters
 Tamper with the cookie
 Enter malformed input
 Append an extra payload
 Insert code to execute
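
In practice, a single fuzzing pass over one parameter can be as simple as the minimal Python sketch below; the host, path, and payload list are illustrative assumptions, not Telflow’s actual interface:

    # Minimal fuzzing sketch: send a handful of malformed inputs and flag
    # any response that suggests the application mishandled them.
    import requests

    BASE_URL = "https://telflow.example.com"  # hypothetical host

    PAYLOADS = [
        "' OR '1'='1",                # SQL-style metacharacters
        "<script>alert(1)</script>",  # HTML/JS injection probe
        "../../etc/passwd",           # path traversal probe
        "A" * 10000,                  # oversized input
    ]

    for payload in PAYLOADS:
        resp = requests.get(f"{BASE_URL}/search", params={"q": payload})
        if resp.status_code >= 500:
            print(f"Server error for payload {payload!r}: {resp.status_code}")
        elif payload in resp.text:
            print(f"Payload reflected unescaped: {payload!r}")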
Objectives of Security Testing

1. Identify the input capacity of Telflow

2. Identify the vulnerabilities that generate unsafe responses from Telflow

3. Verify the correctness of policy implementations

4. Patch the discovered vulnerabilities


Categories of Vulnerabilities

Visible | Invisible
Fuzzing feasible via the portal’s input fields | Fuzzing feasible via the URL or via request packet manipulation
Vulnerabilities to Hunt

 Hunting down vulnerabilities as specified by the Open Web Application Security Project (OWASP) and the Web Application Security Consortium (WASC)

 Details: Refer to the ‘Security Testing’ hierarchy on Confluence


Methods of Testing/Discovery

 Active Scan: Auto-fuzzing of URLs for multiple vulnerabilities

 Automated Request Transmission: Auto-sending of fuzzed requests to test a specific vulnerability; especially suitable for vulnerabilities that take too much time to test manually, e.g. brute force of a login (see the sketch after this list)

 Manual Checklist: Manually executing fuzzing procedures or policy reviews; suitable for vulnerability tests that cannot be automated, e.g. policy reviews, analysis of sequencing patterns, decoding patterns, etc.
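
For example, an automated request transmission for a login brute-force test could look like the minimal Python sketch below; the login URL, field names, and wordlist are hypothetical:

    # Minimal sketch of an automated request transmission: replay fuzzed
    # login requests and watch for throttling or lockout of repeated failures.
    import requests

    LOGIN_URL = "https://telflow.example.com/login"  # hypothetical endpoint
    USERNAME = "testuser"
    WORDLIST = ["password", "123456", "letmein", "telflow"]  # tiny sample

    session = requests.Session()
    for candidate in WORDLIST:
        resp = session.post(LOGIN_URL, data={"user": USERNAME, "pass": candidate})
        print(candidate, resp.status_code)
        if resp.status_code == 429:
            print("Rate limiting detected: brute force is mitigated.")
            break
    else:
        print("No throttling observed across the sample; worth investigating.")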
Suitable Tools

 Selection based on the accuracy benchmark by the Web Application Vulnerability Scanner Evaluation Project (WAVSEP)

 Concerns: usability, accuracy, and completeness of fuzzing features

 The winner: Burp Suite, after practical comparisons against other WAVSEP top performers (ZAP, IronWASP, Arachni, & Vega)

 Honorable mention: ZAP, but it lacks sequencing and decoding features and is assumed not to provide a ‘Force SSL’ feature
Current State of the Telflow 10.0 System

 Vulnerability Discovery Rate from Active Scan: 45.45%

 Vulnerability Discovery Rate from Automated Request Transmissions: 50%

 Vulnerability Discovery Rate from Manual Checklists: 35%

 Note:
 The percentages reflect the no. of discovered vulnerabilities out of the total no. of tested vulnerabilities
 Good news: the discovered vulnerabilities are mitigable
Current State of the Telflow 10.0 System

 Vulnerability discovered from Active Scan, e.g. Cross-Site Scripting (Reflected)
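
As an illustration of how such a finding can be re-verified, a minimal reflected-XSS probe in Python might look like this (the endpoint and parameter are hypothetical):

    # Minimal reflected-XSS probe: submit a marker payload and check
    # whether it comes back unescaped in the HTML response.
    import requests

    payload = "<script>alert('xss-probe')</script>"
    resp = requests.get("https://telflow.example.com/search",  # hypothetical
                        params={"q": payload})
    if payload in resp.text:
        print("Payload reflected unescaped: likely reflected XSS.")
    else:
        print("Payload escaped or filtered.")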
Current State of the Telflow 10.0 System

 Vulnerabilities discovered from Automated Request Transmissions, e.g. Session Fixation, Web Tampering
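
A minimal session-fixation probe, assuming a hypothetical login flow and an assumed JSESSIONID cookie name, could look like this in Python:

    # Minimal session-fixation probe: capture the pre-login session ID and
    # confirm the server issues a new one after authentication.
    import requests

    session = requests.Session()
    session.get("https://telflow.example.com/login")  # hypothetical endpoint
    pre_login_id = session.cookies.get("JSESSIONID")  # assumed cookie name

    session.post("https://telflow.example.com/login",
                 data={"user": "testuser", "pass": "testpass"})
    post_login_id = session.cookies.get("JSESSIONID")

    if pre_login_id and pre_login_id == post_login_id:
        print("Session ID unchanged after login: session fixation risk.")
    else:
        print("New session ID issued on login.")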
Current State of the Telflow 10.0 System

 Vulnerability discovered from Manual Checklists, e.g. Insufficient Session Expiration
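
A minimal session-expiration probe, assuming hypothetical login, logout, and account endpoints, could look like this in Python:

    # Minimal session-expiration probe: log in, log out, then replay the
    # old session cookie; a protected page that still loads means the
    # session was not properly invalidated.
    import requests

    session = requests.Session()
    session.post("https://telflow.example.com/login",  # hypothetical endpoint
                 data={"user": "testuser", "pass": "testpass"})
    saved_cookies = session.cookies.copy()

    session.get("https://telflow.example.com/logout")

    replay = requests.get("https://telflow.example.com/account",
                          cookies=saved_cookies)
    if replay.status_code == 200:
        print("Old session still accepted after logout: insufficient expiration.")
    else:
        print("Session invalidated on logout.")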
Thank you!

Questions?
