
WEB APPLICATION PENETRATION TESTING

An Industry Oriented Mini Project report submitted


in partial fulfillment of requirements
for the award of degree of

Bachelor of Technology
In
Information Technology

By
BALLA SASIKALA (RegNo:16131A1208)
DANDUPROLU MOUNIKA (RegNo:16131A1226)
KODAMANCHILI APOORVA (RegNo:16131A1250)
KOPANATHI GAYATHRI SAI PRATYUSHA (RegNo:16131A1255)

Under the esteemed guidance of

Mr. Ch. Srikanth Varma

M.Tech, Assistant Professor

Department of Information Technology


GAYATRI VIDYA PARISHAD COLLEGE OF ENGINEERING
(AUTONOMOUS)
(Affiliated to JNTU-K, Kakinada)
VISAKHAPATNAM
2019 - 2020.
Gayatri Vidya Parishad College of Engineering
(Autonomous) Visakhapatnam

CERTIFICATE

This report on “WEB APPLICATION PENETRATION TESTING” is a bonafide record


of the mini project work submitted

By

BALLA SASIKALA (RegNo:16131A1208)


DANDUPROLU MOUNIKA (RegNo:16131A1226)
KODAMANCHILI APOORVA (RegNo:16131A1250)
KOPANATHI GAYATHRI SAI PRATYUSHA (RegNo:16131A1255)

in their VII semester in partial fulfillment of the requirements for the award of the Degree of
Bachelor of Technology
In
Information Technology
During the academic year 2019-2020

Mr. Ch. Srikanth Varma                          Dr. K. B. Madhuri

Project Guide                                   Head of Department
                                                Department of Information Technology

External Examiner
DECLARATION

We hereby declare that this industry oriented mini project entitled “WEB APPLICATION

PENETRATION TESTING” is a bonafide work done by us and submitted to the Department of

Information Technology, G.V.P. College of Engineering (Autonomous), Visakhapatnam, in

partial fulfilment of the requirements for the award of the degree of B.Tech. It is our own work

and has not been submitted to any other university or published at any time before.

PLACE: VISAKHAPATNAM BALLA SASIKALA(16131A1208)

DATE : DANDUPROLU MOUNIKA(16131A1226)

KODAMANCHILI APOORVA (16131A1250)

KOPANATHI GAYATHRI SAI PRATYUSHA(16131A1255)


ACKNOWLEDGEMENT

We thank our faculty at the department, particularly Sri Ch. Srikanth Varma, for his kind

suggestions and guidance throughout our in-house mini project work. We also thank him for his

guidance as our project guide, which enabled the successful completion of our project work.

We wish to express our deep gratitude to Dr. K. B. Madhuri, Head of the Department of

IT, GVPCOE, for giving us an opportunity to do the project in the college.

We wish to express our deep gratitude to the Principal, GVPCOE, for giving us the opportunity

to do the project work and a chance to explore and learn new technologies in the form

of a mini project.

Finally, we would like to thank all those people who helped us in many ways in completing

this project.

BALLA SASIKALA (16131A1208)

DANDUPROLU MOUNIKA (16131A1226)

KODAMANCHILI APOORVA (16131A1250)

KOPANATHI GAYATHRI SAI PRATYUSHA (16131A1255)


ABSTRACT

Cybercrime is a global problem that has been dominating the news cycle. It poses a threat to

individual security and an even bigger threat to large international companies, banks, and

governments. With so much data out there to exploit, cyber security has become essential to

protect a website from these threats; along with malware protection, it is crucial to have

comprehensive internet security. Penetration testing is the most commonly used security

testing technique for web applications. Web application penetration testing is done by

simulating unauthorized attacks, internally or externally, to gain access to sensitive data. A

web application penetration test helps the end user find out how possible it is for a hacker to

access data from the internet, learn about the security of their servers, and get to know how

secure the web hosting site and servers are. Vulnerabilities present in the website are listed,

and a report is generated stating the severity of each vulnerability.


INDEX
1. INTRODUCTION

1.1 Objective

1.2 About the application

1.3 Purpose

2. SYSTEM ANALYSIS

2.1 Existing System

2.2 Proposed system

2.3. Modules involved

3. SRS

3.1 Functional requirements

3.2 Non Functional requirements

4. DESIGN

4.1. Process

4.2. Importance of UML in software development.

4.3. UML diagrams.

5. TOOLS DESCRIPTION

5.1. Nmap

5.2. Recon-ng

5.3. Nikto

5.4. Vega

5.5. Burp suite

6. IMPLEMENTATION

6.1. Information Gathering

6.2. Scanning
6.3. Vulnerability Scanning

6.4. Vulnerability Assessment.

7. COUNTER MEASURES

8. CONCLUSION

9. BIBLIOGRAPHY
INTRODUCTION

Cyber security refers to a set of techniques used to protect the integrity of an
organization’s security architecture and safeguard its data against attack, damage,
or unauthorized access. In order to safeguard the data, different testing methods are
used. There are mainly three types of testing: black box testing, white box testing,
and grey box testing. In our project we consider black box penetration testing, in
which information about the system is gathered from the outside and the system is
then tested without prior knowledge of its internals.

1.1. OBJECTIVE
Our main objective is to check the severity of vulnerabilities present in a website
using automated and manual methods. A report is generated, and suggestions are
given on where patches are to be applied.

1.2. ABOUT THE APPLICATION


The application is based on the OWASP Top 10 vulnerabilities. They are:

1. INJECTION:

An injection of code happens when an attacker sends invalid data to the web application
with the intention of making it do something different from what the application was
designed/programmed to do.
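As a concrete illustration, the sketch below uses Python's standard sqlite3 module with a hypothetical `users` table; the table, credentials, and payload are illustrative only, not part of the tested application:

```python
import sqlite3

# Set up a throwaway in-memory database with one hypothetical user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # UNSAFE: user input is concatenated directly into the SQL string.
    query = ("SELECT * FROM users WHERE username = '%s' AND password = '%s'"
             % (username, password))
    return conn.execute(query).fetchone() is not None

def login_safe(username, password):
    # SAFE: parameterized query; the driver treats the input as data, not SQL.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

# The classic ' OR '1'='1 payload bypasses the vulnerable check...
print(login_vulnerable("alice", "' OR '1'='1"))  # True
# ...but not the parameterized one.
print(login_safe("alice", "' OR '1'='1"))        # False
```

Parameterized queries (the `?` placeholders) are the standard countermeasure: the database driver never interprets the bound values as SQL.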

2. BROKEN AUTHENTICATION:

A broken authentication vulnerability can allow an attacker to use manual and/or automatic
mediums to try to gain control over any account he/she wants in a system – or even worse –
to gain complete control over the system.
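One common defence against the automated credential guessing described above is an account-lockout counter. A toy sketch follows; the threshold of five attempts and the in-memory store are hypothetical choices, not a prescribed design:

```python
MAX_ATTEMPTS = 5   # hypothetical lockout threshold
_failed = {}       # username -> consecutive failed login attempts

def record_failed_login(username):
    _failed[username] = _failed.get(username, 0) + 1

def record_successful_login(username):
    _failed.pop(username, None)  # reset the counter on success

def is_locked(username):
    # Once the threshold is reached, further guesses are refused
    # regardless of whether the supplied password is correct.
    return _failed.get(username, 0) >= MAX_ATTEMPTS

for _ in range(5):
    record_failed_login("victim")
print(is_locked("victim"), is_locked("someone-else"))  # True False
```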

3. SENSITIVE DATA EXPOSURE:

Applications and APIs that don’t properly protect sensitive data such as financial data,
usernames and passwords, or health information could enable attackers to access such
information to commit fraud or steal identities. Encryption of data at rest and in transit can
help you comply with data protection regulations.
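As one sketch of protecting data at rest, the example below salts and stretches a password with PBKDF2 from Python's standard hashlib before storage; the iteration count is an illustrative value, not a recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def hash_password(password, salt=None):
    # A fresh random salt per user defeats precomputed (rainbow) tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```
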
4. XML EXTERNAL ENTITY:

Poorly configured XML processors evaluate external entity references within XML
documents. Attackers can use external entities for attacks including remote code execution,
and to disclose internal files and SMB file shares.

5. BROKEN ACCESS CONTROL:

Improperly configured or missing restrictions on authenticated users allow them to access


unauthorized functionality or data, such as accessing other users’ accounts, viewing sensitive
documents, and modifying data and access rights. Penetration testing is essential for detecting
non-functional access controls; other testing methods only detect where access controls are
missing.

6. SECURITY MISCONFIGURATION:

This risk refers to improper implementation of controls intended to keep application data
safe, such as misconfiguration of security headers, error messages containing sensitive
information (information leakage), and not patching or upgrading systems, frameworks, and
components. Dynamic application security testing (DAST) can detect misconfigurations,
such as leaky APIs.

7. CROSS SITE SCRIPTING:

Cross-site scripting (XSS) flaws give attackers the capability to inject client-side scripts into
the application, for example, to redirect users to malicious websites. Developer training
complements security testing to help programmers prevent cross-site scripting with secure
coding best practices, such as encoding output data and validating input.
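The "encoding data" countermeasure can be sketched with Python's standard html.escape; the `render_comment` helper is a hypothetical rendering step, not part of any framework:

```python
from html import escape

def render_comment(user_input):
    # Encode user-controlled data before embedding it in an HTML response,
    # so the browser renders it as text instead of executing it.
    return "<p>" + escape(user_input) + "</p>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```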

8. INSECURE DESERIALIZATION:

Insecure deserialization flaws can enable an attacker to execute code in the application
remotely, tamper or delete serialized (written to disk) objects, conduct injection attacks, and
elevate privileges. Application security tools can detect deserialization flaws but penetration
testing is frequently needed to validate the problem.
9. USING COMPONENTS WITH KNOWN VULNERABILITIES:

Developers frequently don’t know which open source and third-party components are in their
applications, making it difficult to update components when new vulnerabilities are
discovered. Attackers can exploit an insecure component to take over the server or steal
sensitive data. Software composition analysis conducted at the same time as static analysis
can identify insecure versions of components.

10. INSUFFICIENT LOGGING AND MONITORING:

The time to detect a breach is frequently measured in weeks or months. Insufficient logging
and ineffective integration with security incident response systems allow attackers to pivot to
other systems and maintain persistent threats.

1.3. PURPOSE

A cyberattack may steal, alter, or destroy a specified target by hacking into a susceptible
system. Cyberattacks can range from installing spyware on a personal computer to attempting
to destroy the infrastructure of entire nations. There are different types of cyber attacks, such
as DDoS attacks, password cracking, SQL injection attacks, man-in-the-middle attacks, and
XSS injection attacks. In order to protect a website from hackers/intruders it is essential to
perform web application penetration testing.
SYSTEM ANALYSIS

2.1 EXISTING SYSTEM

In the existing system, we consider the website www.testfire.net. This website may contain

vulnerabilities. These vulnerabilities are to be tested using automated and manual methods.

The testfire.net website is published by IBM Corporation for the sole purpose of

demonstrating the effectiveness of IBM products in detecting web application vulnerabilities

and website defects. This site is not a real banking site. Similarities, if any, to third party

products and/or websites are purely coincidental. This site is provided "as is" without

warranty of any kind, either express or implied.

Drawbacks of existing system

Nowadays there is a rapid increase in cybercrime. The existing system may contain

vulnerabilities, which are the key to cyber attacks, so in order to avoid cyber attacks those

vulnerabilities must be found. The following are the drawbacks of the present system:

 Data insecurity

 Privacy insecurity

 Possibility of DDoS attacks, malware attacks, ransomware attacks, and password attacks

 Unauthorized login by malicious users

2.2 PROPOSED SYSTEM

The proposed system performs web application penetration testing on the target website using
automated and manual methods, lists the vulnerabilities found, and reports their severity along
with countermeasures, in order to avoid different cyber attacks.

2.3 MODULES INVOLVED:


1. INFORMATION GATHERING:

• Finding out the target IP address and determining network range.

• Identifying whether the host is active or not, DNS records, subdomains

• Information gathering can be done using two methods, that is, the active method
and the passive method.

• Active method involves:

• Ping: Used to check whether the host is active or not. It involves sending
ICMP echo requests to a host. If the host is live, it will return an ICMP
echo reply.

• Trace route: The tracert/traceroute command is used to show several details
about the path that a packet takes from the computer or device you're on to
whatever destination you specify.
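The active checks above boil down to invoking the system ping. A hedged sketch in Python follows; `build_ping_cmd` and `is_alive` are hypothetical helpers, and the flag names follow the common Windows/Unix conventions:

```python
import platform
import subprocess

def build_ping_cmd(host, count=1):
    # Windows ping takes -n for the echo-request count; Unix-likes take -c.
    flag = "-n" if platform.system() == "Windows" else "-c"
    return ["ping", flag, str(count), host]

def is_alive(host):
    # Exit code 0 means at least one ICMP echo reply came back.
    result = subprocess.run(build_ping_cmd(host),
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

print(build_ping_cmd("testfire.net"))
```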

• Passive method involves:

• Nslookup: Nslookup is a network administration command-line tool
available in many computer operating systems for querying the Domain Name
System (DNS) to obtain domain-name-to-IP-address mappings and other DNS
records.

• DNS Enumeration: It is done to find large amounts of information. The


DNS Enumeration is the process of locating all the DNS servers and
corresponding records of an organisation.

• DNSRecon package: Checks all NS records for zone transfers, enumerates
general DNS records for a given domain (MX, SOA, NS, A, AAAA, SPF and
TXT), and performs common SRV record enumeration.

• Fierce: It is meant specifically to locate likely targets both inside and outside
a corporate network. Initially it tries a DNS zone transfer and, if that fails, it
performs a brute-force or dictionary attack.
• Whois info: It is used to retrieve information about the DNS records and
name servers of the given domain.

• Reverse domain IP: A reverse IP domain check takes a domain name


or IP address pointing to a web server and searches for other sites known to be
hosted on that same web server. 

• Recon-ng: Recon-ng is a full-featured web reconnaissance framework
written in Python. Complete with independent modules, database interaction,
built-in convenience functions, interactive help, and command completion,
Recon-ng provides a powerful environment in which open-source web-based
reconnaissance can be conducted quickly and thoroughly.

• Wafw00f: Wafw00f detects and fingerprints the web application firewall
(WAF), if any, in front of a website. Web application firewalls play an
important role in the security of websites, as they can mitigate risks and offer
protection against a large scale of vulnerabilities. That is the reason many
companies nowadays are implementing a web application firewall solution in
their existing infrastructure.
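Several of the passive lookups above (nslookup's default query and the reverse-IP check) map onto two standard-library calls; a minimal sketch, with `resolve` and `reverse_lookup` as hypothetical helper names:

```python
import socket

def resolve(domain):
    # Forward lookup: domain name -> IPv4 address,
    # like nslookup's default query.
    return socket.gethostbyname(domain)

def reverse_lookup(ip):
    # Reverse lookup: IP address -> primary hostname,
    # the basis of reverse-IP domain checks.
    return socket.gethostbyaddr(ip)[0]

print(resolve("localhost"))
```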

2. SCANNING :

• Port scanning: Port scanning refers to the surveillance of computer ports, most
often by hackers for malicious purposes. Hackers conduct port-scanning
techniques in order to locate holes within specific computer ports.

• Nmap ping scan

• Nmap complete port scan

• SYN (stealth) scan

• Connect Scan
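Of the scans listed, the connect scan is the one that needs no special privileges: it simply attempts a full TCP handshake on each port, the unprivileged equivalent of `nmap -sT`. A minimal sketch (scan only hosts you are authorized to test):

```python
import socket

def port_is_open(host, port, timeout=0.5):
    # Attempt a full TCP three-way handshake; connect_ex returns 0 on success.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def connect_scan(host, ports):
    return [p for p in ports if port_is_open(host, p)]

# Probe a few common web ports on the local machine.
print(connect_scan("127.0.0.1", [22, 80, 443, 8080]))
```

SYN/stealth scans, by contrast, require raw sockets and elevated privileges, which is why tools like Nmap implement them natively.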
3. VULNERABILITY SCANNING:

• A vulnerability scan detects and classifies system weaknesses in computers, networks,
and communications equipment and predicts the effectiveness of countermeasures.
Vulnerability scanning is done by:

• Using Vega

• Using Nikto

A. Vega: Vega is a free and open-source web security scanner and testing platform for
assessing the security of web applications. Vega can help you find and validate SQL
injection, cross-site scripting (XSS), inadvertently disclosed sensitive information, and
other vulnerabilities. It is written in Java, GUI-based, and runs on Linux, OS X, and
Windows. Vulnerabilities it can find include reflected cross-site scripting, stored
cross-site scripting, blind SQL injection, remote file include, and shell injection.

B. Nikto: Nikto is a free, open-source command-line vulnerability scanner that scans web
servers for dangerous files, outdated server software, and other problems. It
performs generic and server-type-specific checks.
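One simple class of check such scanners perform is flagging missing security response headers. A toy, offline version of that idea follows; the header list and the `missing_security_headers` helper are illustrative, not a reproduction of either tool's checks:

```python
# Security-related response headers a scanner commonly looks for.
EXPECTED_HEADERS = [
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Content-Security-Policy",
    "Strict-Transport-Security",
]

def missing_security_headers(response_headers):
    # HTTP header names are case-insensitive, so compare lowercased.
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

sample = {"Server": "Apache/2.2.8", "X-Frame-Options": "DENY"}
print(missing_security_headers(sample))
```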

SQL INJECTION:

• SQL Injection (SQLi) refers to an injection attack wherein an attacker can execute


malicious SQL statements (also commonly referred to as a malicious payload) that
control a web application's database server.

SENSITIVE DATA EXPOSURE:

• Sensitive Data Exposure occurs when an application does not adequately protect
sensitive information. The data can vary and anything from passwords, session
tokens, credit card data to private health data and more can be exposed.
CROSS SITE SCRIPTING VULNERABILITY:

• In an XSS attack, a web application is sent a script that activates when it is read
by an unsuspecting user's browser or by an application that has not protected itself
against cross-site scripting.

BURP SUITE:

• Burp Suite is an integrated platform for performing security testing of web
applications. Its various tools work seamlessly together to support the entire testing
process, from initial mapping and analysis of an application's attack surface through
finding and exploiting security vulnerabilities.
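From a script's point of view, testing through Burp just means routing HTTP traffic through its intercepting proxy. A sketch with the standard urllib; 127.0.0.1:8080 is Burp's default listener, so adjust to your setup:

```python
import urllib.request

# Point both schemes at the local intercepting proxy.
proxy = urllib.request.ProxyHandler({
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
})
opener = urllib.request.build_opener(proxy)

# With Burp running, a request made via this opener shows up in its
# HTTP history for inspection, replay, and tampering, e.g.:
# opener.open("http://testfire.net/")
```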

BROKEN AUTHENTICATION AND SESSION MANAGEMENT:

• Application functions related to authentication and session management are often not
implemented correctly, allowing attackers to compromise passwords, keys, or session
tokens, or to exploit other implementation flaws to assume other users’ identities.
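A frequent root cause of such flaws is predictable session tokens. Python's secrets module draws from a cryptographically secure RNG, unlike the general-purpose random module; a minimal sketch, with an illustrative token length:

```python
import secrets

def new_session_token():
    # 32 bytes of CSPRNG output, URL-safe Base64 encoded (43 characters).
    return secrets.token_urlsafe(32)

t1, t2 = new_session_token(), new_session_token()
print(len(t1), t1 != t2)  # 43 True
```
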
SOFTWARE REQUIREMENTS SPECIFICATION

3.1 FUNCTIONAL REQUIREMENTS

A functional requirement defines a function of a system or its component. A function

is described as a set of inputs, the behavior, and outputs.

3.1.1 SOFTWARE REQUIREMENTS

 Operating system: Windows 10/8.1/7, Kali Linux

 VMware

3.1.2 HARDWARE REQUIREMENTS

 Processor: Intel/AMD processor

 RAM:8GB

 Disk space: 500GB

3.2 NON FUNCTIONAL REQUIREMENTS

A non-functional requirement is a requirement that specifies criteria that can be used to judge the

operation of a system, rather than specific behaviors. Non-functional requirements are often

called qualities of a system; they are as follows:

Privacy

Privacy is the primary concern of our project. Internet privacy is the privacy and security level

of personal data published via the internet. Through this project, the privacy of an individual is

maintained.
Reporting

Different tools are used in the project. Each tool generates a different report; these reports are

listed and analyzed.

Certification

An SSL certificate is a type of digital certificate that provides authentication for a website and

enables an encrypted connection. This project ensures the SSL certification of a website.

Maintainability

This project requires high maintenance, as the vulnerabilities on the website cannot be predicted.

Vulnerabilities in the website should be checked at regular intervals to ensure security.

Data Security

Data security refers to protective digital privacy measures that are applied to prevent

unauthorized access to computers, databases and websites. Data security also protects data from

corruption. Web application penetration testing helps in ensuring data security.

Readability
Readability is the ease with which a developer can understand where the patches are to be

updated. Readability of countermeasures mentioned in the project must be high so that the

patches will be updated successfully.


SYSTEM DESIGN

4.1 INTRODUCTION TO UML

The system to be developed is best designed using UML, i.e., the Unified Modeling Language. The

Unified Modeling Language includes a set of graphic notation techniques to create visual models of

object-oriented software-intensive systems. UML is a visual language for specifying, constructing, and

documenting the artifacts of software-intensive systems.

Complex software designs that are difficult to describe with text alone can readily be

conveyed through diagrams using UML. You can use UML with all processes throughout the

development lifecycle and across different implementations.

A model is a simplification of reality. We build models so that we can better understand the

system we are developing. Through modeling, we achieve four aims. They are:

1. Models help us to visualize a system as it is or as we want it to be.

2. Models permit us to specify the structure or behavior of a system.

3. Models give us a template that guides us in constructing a system.

4. Models document the decisions we have made.

We build models of complex systems because we cannot comprehend such a System in its entirety.

PRINCIPLES OF MODELING
There are four basic principles common to designing any kind of system. They are:

1. The choice of what models to create has a profound influence on how a problem is attacked

and how a solution is shaped. The right models will brilliantly illuminate the most wicked

development problems, offering insight that you simply could not gain otherwise; the wrong

models will mislead you, causing you to focus on irrelevant issues.

2. Every model may be expressed at different levels of precision.

3. The best models are connected to reality.

4. No single model is sufficient. Every nontrivial system is best approached through a small set

of nearly independent models.

To understand the architecture of a system, we need several complementary and

interlocking views: a use case view (exposing the requirements of the system), a design view

(capturing the vocabulary of the problem space and the solution space), a process view (modeling

the distribution of the system's processes and threads), an implementation view (addressing the

physical realization of the system), and a deployment view (focusing on system engineering issues).

Each of these views may have structural, as well as behavioral, aspects. Together, these views

represent the blueprints of software.

Object-Oriented Modeling:

In software, there are several ways to approach a model. The two most common ways are

from an algorithmic perspective and from an object-oriented perspective.

The traditional view of software development takes an algorithmic perspective. In this

approach, the main building block of all software is the procedure or function. This view leads

developers to focus on issues of control and the decomposition of larger algorithms into smaller
ones. As requirements change (and they will) and the system grows (and it will), systems built with

an algorithmic focus turn out to be very hard to maintain.

The contemporary view of software development takes an object-oriented perspective. In this

approach, the main building block of all software systems is the object or class. Simply put, an object

is a thing, generally drawn from the vocabulary of the problem space or the solution space; a class is

a description of a set of common objects. Every object has identity (you can name it or otherwise

distinguish it from other objects), state (there's generally some data associated with it), and

behavior (you can do things to the object, and it can do things to other objects, as well). Object-

oriented development provides the conceptual foundation for assembling systems out of

components using technology such as Java Beans or COM+.

A Conceptual Model of the UML:

To understand the UML, we need to form a conceptual model of the language, and this

requires learning three major elements: the UML's basic building blocks, the rules that dictate how

those building blocks may be put together, and some common mechanisms that apply throughout

the UML.

4.2 BUILDING BLOCKS OF UML:

The vocabulary of the UML encompasses three kinds of building blocks:

1. Things

2. Relationships

3. Diagrams
Things are the abstractions that are first-class citizens in a model; relationships tie these things

together; diagrams group interesting collections of things.

 Things in the UML:

There are four kinds of things in the UML:

1. Structural things

2. Behavioral things

3. Grouping things

4. Annotational things

I. Structural Things: Structural things are the nouns of UML models. These are the mostly static parts

of a model, representing elements that are either conceptual or physical. In all, there are seven kinds

of structural things.

 A class is a description of a set of objects that share the same attributes, operations,

relationships, and semantics. A class implements one or more interfaces. Graphically, a

class is rendered as a rectangle, usually including its name, attributes, and operations.

 An interface is a collection of operations that specify a service of a class or component.

An interface therefore describes the externally visible behavior of that element. An

interface might represent the complete behavior of a class or component or only a part

of that behavior.

 A collaboration defines an interaction and is a society of roles and other elements that

work together to provide some cooperative behavior that's bigger than the sum of all

the elements. Therefore, collaborations have structural, as well as behavioral,


dimensions. Graphically, a collaboration is rendered as an ellipse with dashed lines,

usually including only its name.

 A use case is a description of a set of sequences of actions that a system performs that

yields an observable result of value to a particular actor. A use case is used to structure

the behavioral things in a model. Graphically, a use case is rendered as an ellipse with

solid lines, usually including only its name.

 An active class is a class whose objects own one or more processes or threads and

therefore can initiate control activity. An active class is just like a class except that its

objects represent elements whose behavior is concurrent with other elements.

Graphically, an active class is rendered just like a class, but with heavy lines, usually

including its name, attributes, and operations.

 A component is a physical and replaceable part of a system that conforms to and

provides the realization of a set of interfaces. Graphically, a component is rendered as a

rectangle with tabs, usually including only its name.

 A node is a physical element that exists at run time and represents a computational

resource, generally having at least some memory and, often, processing capability. A set

of components may reside on a node and may also migrate from node to node.

Graphically, a node is rendered as a cube, usually including only its name.

II. Behavioral Things: Behavioral things are the dynamic parts of UML models. These are the verbs of

a model, representing behavior over time and space. In all, there are two primary kinds of behavioral

things.

 An interaction is a behavior that comprises a set of messages exchanged among a set of

objects within a particular context to accomplish a specific purpose.


 A state machine is a behavior that specifies the sequences of states an object or an

interaction goes through during its lifetime in response to events, together with its

responses to those events. A state machine involves a number of other elements, including

states, transitions (the flow from state to state), events (things that trigger a transition), and

activities (the response to a transition). Graphically, a state is rendered as a rounded

rectangle, usually including its name and its substates, if any.

III. Grouping Things: Grouping things are the organizational parts of UML models. These are the

boxes into which a model can be decomposed. In all, there is one primary kind of grouping thing,

namely, packages.

 A package is a general-purpose mechanism for organizing elements into groups. Structural

things, behavioral things, and even other grouping things may be placed in a package. Unlike

components, a package is purely conceptual. Graphically, a package is rendered as a tabbed

folder, usually including only its name and, sometimes, its contents.

IV. Annotational Things: Annotational things are the explanatory parts of UML models. These are

the comments you may apply to describe, illuminate, and remark about any element in a model.

There is one primary kind of annotational thing, called a note.

 A note is simply a symbol for rendering constraints and comments attached to an element or

a collection of elements. Graphically, a note is rendered as a rectangle with a dog-eared

corner, together with a textual or graphical comment.

 Relationships in the UML:

There are four kinds of relationships in the UML:


1. Dependency

2. Association

3. Generalization

4. Realization

These relationships are the basic relational building blocks of the UML. We use them to write

well-formed models.

 A dependency is a semantic relationship between two things in which a change to one thing

may affect the semantics of the other thing (the dependent thing).

 An association is a structural relationship that describes a set of links, a link being a

connection among objects.

 A generalization is a specialization/generalization relationship in which objects of the

specialized element (the child) are substitutable for objects of the generalized element (the

parent).

 A realization is a semantic relationship between classifiers, wherein one classifier specifies a

contract that another classifier guarantees to carry out.

 Diagrams in the UML

A diagram is the graphical presentation of a set of elements, most often rendered as a

connected graph of vertices (things) and arcs (relationships). You draw diagrams to visualize a

system from different perspectives, so a diagram is a projection into a system. UML includes nine

diagrams.

 A class diagram shows a set of classes, interfaces, and collaborations and their relationships.

These diagrams are the most common diagrams found in modeling object-oriented systems.
Class diagrams address the static design view of a system. Class diagrams that include active

classes address the static process view of a system.

 An object diagram shows a set of objects and their relationships. Object diagrams represent

static snapshots of instances of the things found in class diagrams. These diagrams address

the static design view or static process view of a system as do class diagrams, but from the

perspective of real or prototypical cases.

 A use case diagram shows a set of use cases and actors (a special kind of class) and their

relationships. Use case diagrams address the static use case view of a system. These

diagrams are especially important in organizing and modeling the behaviors of a system.

 Both sequence diagrams and collaboration diagrams are kinds of interaction diagrams. An

interaction diagram shows an interaction, consisting of a set of objects and their

relationships, including the messages that may be dispatched among them. Interaction

diagrams address the dynamic view of a system. A sequence diagram is an interaction

diagram that emphasizes the time-ordering of messages; a collaboration diagram is an

interaction diagram that emphasizes the structural organization of the objects that send and

receive messages. Sequence diagrams and collaboration diagrams are isomorphic, meaning

that we can take one and transform it into the other.

 A statechart diagram shows a state machine, consisting of states, transitions, events, and

activities. Statechart diagrams address the dynamic view of a system.

 An activity diagram is a special kind of a statechart diagram that shows the flow from activity

to activity within a system. Activity diagrams address the dynamic view of a system. They are

especially important in modeling the function of a system and emphasize the flow of control

among objects.

 A component diagram shows the organizations and dependencies among a set of

components. Component diagrams address the static implementation view of a system.


 A deployment diagram shows the configuration of run-time processing nodes and the

components that live on them. Deployment diagrams address the static deployment view of

an architecture.

UML DIAGRAMS

1. Use case diagram

A use case diagram comprises use cases and actors, with various kinds of relationships

among them. A use case diagram shows all the actions that a particular actor needs to

perform throughout the system at any point of time. There is only one use case diagram

per system.

Unauthorized user trying to login – Without sql Injection


Unauthorized user trying to login – With sql Injection

2. Sequence diagram

This diagram, as the name suggests, contains the sequence of the flow of actions that are processed

through a system and the lifelines of the entities, showing when and how they are accessed. It also

captures constraints such as which entity can interact with which entity and which ones are visible.

There can be many sequence diagrams, one for each activity being performed.


Client requests through proxy (Without Burp Suite)
5. TOOLS DESCRIPTION

5.1 Nmap – The Network Mapper Tool:

Nmap is a free and open-source network scanner created by Gordon Lyon. Nmap is used to

discover hosts and services on a computer network by sending packets and analyzing the

responses. Nmap provides a number of features for probing computer networks, including

host discovery and service and operating system detection.

Nmap ("Network Mapper") is a free and open source licensed utility for network discovery

and security auditing. Many systems and network administrators also find it useful for tasks

such as network inventory, managing service upgrade schedules, and monitoring host or

service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are

available on the network, what services (application name and version) those hosts are

offering, what operating systems (and OS versions) they are running, what type of packet

filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan

large networks, but works fine against single hosts. Nmap runs on all major computer

operating systems, and official binary packages are available for Linux, Windows, and Mac

OS X. In addition to the classic command-line Nmap executable, the Nmap suite includes an

advanced GUI and results viewer (Zenmap), a flexible data transfer, redirection, and

debugging tool (Ncat), a utility for comparing scan results (Ndiff), and a packet generation

and response analysis tool (Nping).

Nmap was named “Security Product of the Year” by Linux Journal, Info World,

LinuxQuestions.Org, and Codetalker Digest.


Nmap is ...

 Flexible: Supports dozens of advanced techniques for mapping out networks filled

with IP filters, firewalls, routers, and other obstacles. This includes many port

scanning mechanisms (both TCP & UDP), OS detection, version detection, ping

sweeps, and more. See the documentation page.

 Powerful: Nmap has been used to scan huge networks of literally hundreds of

thousands of machines.

 Portable: Most operating systems are supported, including Linux, Microsoft

Windows, FreeBSD, OpenBSD, Solaris, IRIX, Mac OS X, HP-UX, NetBSD, Sun

OS, Amiga, and more.

 Easy: While Nmap offers a rich set of advanced features for power users, you can

start out as simply as "nmap -v -A targethost". Both traditional command line and

graphical (GUI) versions are available to suit your preference. Binaries are available

for those who do not wish to compile Nmap from source.

 Free: The primary goal of the Nmap Project is to help make the Internet a little more

secure and to provide administrators/auditors/hackers with an advanced tool for

exploring their networks. Nmap is available for free download, and also comes with

full source code that you may modify and redistribute under the terms of the license.

 Well Documented: Significant effort has been put into comprehensive and up-to-date

man pages, whitepapers, tutorials, and even a whole book! They are available in

multiple languages.

 Supported: While Nmap comes with no warranty, it is well supported by a vibrant

community of developers and users. Most of this interaction occurs on the Nmap


mailing lists. Most bug reports and questions should be sent to the nmap-dev list, but

only after you read the guidelines. We recommend that all users subscribe to the

low-traffic nmap-hackers announcement list. You can also find Nmap

on Facebook and Twitter. For real-time chat, join the #nmap channel

on Freenode or EFNet.

 Acclaimed: Nmap has won numerous awards, including "Information Security

Product of the Year" by Linux Journal, Info World and Codetalker Digest. It has been

featured in hundreds of magazine articles, several movies, dozens of books, and one

comic book series. Visit the press page for further details.

 Popular: Thousands of people download Nmap every day, and it is included with

many operating systems (Redhat Linux, Debian Linux, Gentoo, FreeBSD, OpenBSD,

etc). It is among the top ten (out of 30,000) programs at the Freshmeat.Net repository.

This is important because it lends Nmap its vibrant development and user support

communities.

Features

Nmap features include:

 Host discovery – Identifying hosts on a network. For example, listing the hosts that

respond to TCP and/or ICMP requests or have a particular port open.

 Port scanning – Enumerating the open ports on target hosts.

 Version detection – Interrogating network services on remote devices to determine

application name and version number.

 OS detection – Determining the operating system and hardware characteristics of

network devices.
 Scriptable interaction with the target – using Nmap Scripting Engine (NSE)

and Lua programming language.

Nmap can provide further information on targets, including reverse DNS names, device

types, and MAC addresses.
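As a rough illustration of the port-scanning feature listed above, the sketch below implements the simplest technique, a TCP connect scan: a full connection attempt to each port, reporting those that accept. This is only a toy version of what Nmap does (Nmap also offers SYN scans, UDP scans, timing controls, and much more), and it must only be pointed at hosts you are authorized to test.

```python
import socket

def scan_port(host, port, timeout=1.0):
    """TCP connect scan of one port: connect_ex() returns 0 when the port accepts."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan_ports(host, ports):
    """Return the subset of `ports` that are open on `host`."""
    return [p for p in ports if scan_port(host, p)]
```

For example, `scan_ports("127.0.0.1", [22, 80, 443])` lists which of those ports are open on the local machine.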

Typical uses of Nmap:

 Auditing the security of a device or firewall by identifying the network connections

which can be made to, or through it.

 Identifying open ports on a target host in preparation for auditing.

 Network inventory, network mapping, maintenance and asset management.

 Auditing the security of a network by identifying new servers.

 Generating traffic to hosts on a network, response analysis and response time

measurement.

 Finding and exploiting vulnerabilities in a network.

 DNS queries and subdomain search

User interfaces

NmapFE, originally written by Zach Smith, was Nmap's official GUI for Nmap versions 2.2

to 4.22. For Nmap 4.50 (originally in the 4.22SOC development series) NmapFE was

replaced with Zenmap, a new official graphical user interface based on UMIT, developed by

Adriano Monteiro Marques.

Various web-based interfaces allow controlling Nmap or analysing Nmap results from a web

browser. These include Nmap-CGI, and IVRE.


Microsoft Windows specific GUIs exist, including NMapWin, which has not been updated

since June 2003 (v1.4.0), and NMapW by Syhunt.

Output

Nmap provides four possible output formats. All but the interactive output is saved to a file.

Nmap output can be manipulated by text processing software, enabling the user to create

customized reports.

Interactive

presented and updated in real time when a user runs Nmap from the command line.

Various options can be entered during the scan to facilitate monitoring.

XML

a format that can be further processed by XML tools. It can be converted into

a HTML report using XSLT.

Grepable

output that is tailored to line-oriented processing tools such as grep, sed or awk.

Normal

the output as seen while running Nmap from the command line, but saved to a file.

Script kiddie

meant to be an amusing way to format the interactive output, replacing letters with

their visually similar number representations. For example, "Interesting

ports" becomes "Int3rest1ng p0rtz".
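Because grepable output is line-oriented, it is easy to post-process, as noted above. The sketch below parses one `Host:` line of `-oG` output into the host's IP and a list of (port, state, protocol, service) tuples; the sample line in the usage note is hypothetical, and the parser covers only the common `Ports:` field layout.

```python
def parse_grepable(line):
    """Parse one 'Host:' line of Nmap -oG output.

    Returns (ip, ports) where ports is a list of
    (port, state, protocol, service) tuples.
    """
    ip = line.split()[1]          # second whitespace-separated token is the IP
    ports = []
    if "Ports:" in line:
        for entry in line.split("Ports:")[1].split(","):
            # Each entry looks like: 22/open/tcp//ssh///
            fields = entry.strip().split("/")
            ports.append((int(fields[0]), fields[1], fields[2], fields[4]))
    return ip, ports
```

For instance, `parse_grepable("Host: 192.168.0.5 ()\tPorts: 22/open/tcp//ssh///")` yields `("192.168.0.5", [(22, "open", "tcp", "ssh")])`.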

Legal issues

Nmap is a tool that can be used to discover services running on Internet connected systems.

Like any tool, it could potentially be used for black hat hacking, as a precursor to attempts to
gain unauthorized access to computer systems; however, Nmap is also used by security and

systems administrators to assess their own networks for vulnerabilities (i.e. white hat

hacking).

System administrators can use Nmap to search for unauthorized servers, or for computers that

do not conform to security standards.

In some jurisdictions, unauthorized port scanning is illegal.

License

Nmap was originally distributed under the GNU General Public License (GPL). In later releases,

Nmap's authors added clarifications and specific interpretations to the license where they felt

the GPL was unclear or lacking. For instance, Nmap 3.50 specifically revoked the license

of SCO Group to distribute Nmap software because of their views on the SCO-Linux

controversies.

In academia

Nmap is an integral part of academic activities. It has been used for research involving

the TCP/IP protocol suite and networking in general. As well as being a research tool, Nmap

has become a research topic.

5.2 Recon-ng:

Package Description:
Recon-ng is a full-featured Web Reconnaissance framework written in Python. Complete

with independent modules, database interaction, built in convenience functions, interactive

help, and command completion, Recon-ng provides a powerful environment in which open

source web-based reconnaissance can be conducted quickly and thoroughly.

Recon-ng has a look and feel similar to the Metasploit Framework, reducing the learning

curve for leveraging the framework. However, it is quite different. Recon-ng is not intended

to compete with existing frameworks, as it is designed exclusively for web-based open source

reconnaissance. If you want to exploit, use the Metasploit Framework. If you want to Social

Engineer, use the Social Engineer Toolkit. If you want to conduct reconnaissance, use Recon-

ng! See the Usage Guide for more information.

Recon-ng is a completely modular framework and makes it easy for even the newest of

Python developers to contribute. Each module is a subclass of the “module” class. The

“module” class is a customized “cmd” interpreter equipped with built-in functionality that

provides simple interfaces to common tasks such as standardizing output, interacting with the

database, making web requests, and managing API keys. Therefore, all the hard work has

been done. Building modules is simple and takes little more than a few minutes. See the

Development Guide for more information.

Tools included in the recon-ng package

recon-ng – Web Reconnaissance framework written in Python

root@kali:~# recon-ng --help

usage: recon-ng [-h] [-v] [-w workspace] [-r filename] [--no-check]

                [--no-analytics]

recon-ng - Tim Tomes (@LaNMaSteR53) tjt1980[at]gmail.com


optional arguments:

  -h, --help      show this help message and exit

  -v, --version   show program's version number and exit

  -w workspace    load/create a workspace

  -r filename     load commands from a resource file

  --no-check      disable version check

  --no-analytics  disable analytics reporting

5.3 Nikto:

Nikto is a free software command-line vulnerability scanner that scans webservers for

dangerous files/CGIs, outdated server software and other problems. It performs generic and

server type specific checks. It also captures and prints any cookies received. The Nikto code

itself is free software, but the data files it uses to drive the program are not.

Features

Nikto can detect over 6700 potentially dangerous files/CGIs, checks for outdated versions of

over 1250 servers, and version specific problems on over 270 servers. It also checks for

server configuration items such as the presence of multiple index files and HTTP server

options, and will attempt to identify installed web servers and software. Scan items and

plugins are frequently updated and can be automatically updated.

Variations

There are some variations of Nikto, one of which is MacNikto. MacNikto is an AppleScript

GUI shell script wrapper built in Apple's Xcode and Interface Builder, released under the
terms of the GPL. It provides easy access to a subset of the features available in the

command-line version, installed along with the MacNikto application.

5.4 Vega:

Vega helps you find and fix cross-site scripting (XSS), SQL injection, and more. Vega is a

free and open source web security scanner and web security testing platform to test the

security of web applications. The Vega scanner finds XSS (cross-site scripting), SQL

injection, and other vulnerabilities.

1. Using the Vega Scanner

Introduction

When you start Vega for the first time, you will be in the scanner perspective. Vega has two

perspectives: The scanner, and the proxy. We'll start the introduction with the scanner. The

Vega scanner is an automated security testing tool that crawls a website, analyzing page

content to find
links and form parameters. Vega finds injection points, referred to as path state nodes, and

runs modules written in Javascript to analyze them. Vega also runs Javascript modules on all

responses sent back from the server during the scan.

The Scanner Perspective:

The screenshot above shows the complete Vega scanner perspective. The parts that comprise

it, such as "Website View", "Scan Info", etc., are moveable. To restore to the original layout,

click on the "Window" menu item and select "Reset Perspective". This will reassemble the UI

parts into this arrangement. This does not affect the data or operation of any current scan.
Workspaces

Vega stores information about the current and past scans in a "workspace". Clearing the

workspace will remove all scan data, including alerts and saved requests/responses. To do so,

select the "File" menu item and click on "Reset Current Workspace".

Preferences

Vega scans websites recursively, building an internal representation of the site in a tree-like

data structure comprised of entities known as "path state nodes". Path state nodes can be

directories, files, or files with POST or GET parameters. Complex websites can result in long

scans and large path state data structures, so Vega offers configurable parameters that limit

the scan scope in the scanner preferences. To access these parameters, click on the Window

menu item and choose "Preferences". There are two sets of preferences associated with the

scanner: Scanner preferences and Scanner debugging. Select Scanner debugging.

Scanner Preferences

The scan limits are set in the scanner preferences. These include the following parameters:

 Total number of path descendants

This is the total number of children of a node plus all of their descendants. Children of a path state node

could be its subdirectories, or its parameters, with one node for each in a set of

parameters.

 Total number of child paths for a single node

Limits on the number of children per node (subdirectories + files + parameters).

 Maximum path depth

The limit on the hierarchy of path state nodes (e.g. /level1/level2/level3/level4…)


 Maximum number of duplicate path elements

The maximum number of permitted duplicate, adjacent path nodes. For example:

/images/images/images.

 Maximum length of strings to display in alert reports

The alerts can include text from the module, such as the response body. The level of

permitted module verbosity can be configured here by the user.

 Maximum number of requests to send per second

This setting regulates the speed at which Vega scans.

Scanner Debugging

The scanner debugging preferences contain settings intended for use during module

development or debugging.

 Log all scanner requests

By default, Vega only saves the requests and responses that generate alerts within its

database. Enabling this will result in all requests and responses being saved. They will

be accessible from the message viewer.

 Display debug output in console

Enabling this will cause Vega to output verbose logging to the console.

Starting the first scan

To start a scan, click the new scan icon at the top left corner. Alternatively, you can select the

"Scan" menu bar item and click on "Start new Scan".

This will cause the New Scan wizard to open.


The user can either supply a base URI as the target, or edit a target scope. Target scopes

allow multiple base URIs and exclusions that will not be scanned by Vega. Another way to

add or remove resources from a target path is via the web view.

For this tutorial, we will just enter a base URI. Clicking next will advance to the next wizard

page.

Modules

Modules are units of extended functionality written in Javascript (the Vega engine is written

in Java, but includes the Rhino JS interpreter). Vega supports two kinds of modules:

Basic Modules

These run on path state nodes and perform active fuzzing, including:

 URIs that are known to be files or directories

 URIs with parameters, with each parameter being a distinct path state node

Response Processing Modules

These run on all responses that are returned from the server. They can be considered "grep"

modules.
Both types of modules can store information in the shared knowledge base and generate

XML-based alerts.

Remember: experiment with Vega on servers that belong to you and are not in production

use.

Authentication and cookies

Vega supports the configuration of credentials for performing automated scans while

authenticated to the application or server. These credentials include:

 Basic HTTP

 Digest HTTP

 NTLM

 Macro (form based authentication)

Credentials must be configured using Identities. The Vega Identities feature has its own Wiki

page. There is also a short video tutorial here:

https://www.youtube.com/watch?v=Yw2UbKivkgQ.

Vega also permits the configuration of cookies that will be sent with all scanner requests.

These can be added individually through the Wizard UI.

Running a Scan

Vega will start crawling the target web application. Vega sends many requests. This is

because in addition to analyzing the page content, the crawling engine does several tests on

each potential path, trying to determine if it is a file or a directory. Vega also compares pages
to each other, and tries to figure out what the 404 page looks like. Vega modules also send

their own requests.

The scan progress will be indicated with a progress bar. Note that the total number of links to

crawl will grow as Vega discovers new ones and generates variations to perform the above

described tests, so the finish time will be a moving target. The preferences described at the

start of this tutorial control the parameters that limit scope of the scan.

To stop an active scan, click the red icon with an "x" next to the new scan button.

Website View

Vega will build a list in the top right corner of the paths crawled and seen. The greyed-out

paths are those that have not been accessed. Vega will not crawl links on other websites.

Alerts

As the scan progresses, instances of alerts will appear in the summary box shown in the

previous screenshot. The alerts that correspond to each instance can be found in the box to

the lower right.

Opening up the scan results will reveal a tree of alerts, with severity at the highest level,

followed by type, and then path instance. Both current and previous scan results for the

workspace are listed. The target icon representing the current scan will be blinking until it is

finished.
Clicking on an alert will open it in the central pane. To return to the scan summary, click on

the top-level item in the alerts tree in the Scan Alerts view, in the bottom right corner.

The alert incorporates both dynamic content from the module and static content from a

corresponding XML file. One great feature about alerts is the link to the saved request and

response. To slide open a fastview with the message editor, click on the request link towards

the bottom of the alert.

Viewing saved requests and responses from within alerts

Clicking a request link like the one shown above will pop open the message viewer, with the

associated request and response already selected:

Here the details of the request and response can be viewed. The request can also be replayed

by right clicking it in the request list just above the message content boxes. Doing this will

open up the request editor, which is documented more extensively in the proxy tutorial.

Fastview Icons

Another way to get to the request viewer is to click on the icon in the status bar, in the bottom

left corner. This will open up the fast view in a manner similar to when the request link is

clicked on in an alert. There is also a fastview link to the console, which blinks when there is

error output that has not been seen.

5.5 Burp suite:


Burp or Burp Suite is a graphical tool for testing Web application security. The tool is

written in Java and developed by PortSwigger Web Security.


The tool has three editions. A Community Edition that can be downloaded free of charge, a

Professional Edition and an Enterprise edition that can be purchased after a trial period. The

Community edition has significantly reduced functionality. It was developed to provide a

comprehensive solution for web application security checks. In addition to basic

functionality, such as proxy server, scanner and intruder, the tool also contains more

advanced options such as a spider, a repeater, a decoder, a comparer, an extender and a

sequencer.

The company behind Burp Suite has also developed a mobile application containing similar

tools compatible with iOS 8 and above.

PortSwigger was founded in 2004 by Dafydd Stuttard, a leading expert in web security, who

also authored a popular manual on web application security. 

Burp Suite Tools

Burp Suite contains various tools for performing different testing tasks. The tools operate

effectively together, and you can pass interesting requests between tools as your work

progresses, to carry out different actions.


Tools:

 HTTP Proxy - It operates as a web proxy server, and sits as a man-in-the-middle

between the browser and destination web servers. This allows the interception, inspection

and modification of the raw traffic passing in both directions.

 Scanner - A web application security scanner, used for performing automated

vulnerability scans of web applications.

 Intruder - This tool can perform automated attacks on web applications. The tool

offers a configurable algorithm that can generate malicious HTTP requests. The intruder

tool can test and detect SQL Injections, Cross Site Scripting, parameter manipulation and

vulnerabilities susceptible to brute-force attacks.

 Spider - A tool for automatically crawling web applications. It can be used in

conjunction with manual mapping techniques to speed up the process of mapping an

application's content and functionality.

 Repeater - A simple tool that can be used to manually test an application. It can be

used to modify requests to the server, resend them, and observe the results.

 Decoder - A tool for transforming encoded data into its canonical form, or for

transforming raw data into various encoded and hashed forms. It is capable of

intelligently recognizing several encoding formats using heuristic techniques.

 Comparer - A tool for performing a comparison (a visual "diff") between any two

items of data.

 Extender - Allows the security tester to load Burp extensions, to extend Burp's

functionality using the security tester's own or third-party code (BApp Store).


 Sequencer - A tool for analyzing the quality of randomness in a sample of data items.

It can be used to test an application's session tokens or other important data items that are

intended to be unpredictable, such as anti-CSRF tokens, password reset tokens, etc.
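The Decoder's heuristic recognition of encodings, described above, can be approximated in a few lines. The sketch below is only an approximation of the idea, not Burp's actual algorithm: it tries to undo one layer of URL, hex, or base64 encoding and returns the input unchanged when nothing matches.

```python
import base64
import binascii
import urllib.parse

def smart_decode(data):
    """Heuristically undo one layer of URL, hex, or base64 encoding."""
    if "%" in data:                       # URL-encoded data contains % escapes
        return urllib.parse.unquote(data)
    try:                                  # try hex first (stricter alphabet)
        return bytes.fromhex(data).decode()
    except ValueError:
        pass
    try:                                  # then strict base64
        return base64.b64decode(data, validate=True).decode()
    except (binascii.Error, ValueError):
        pass
    return data                           # nothing recognized: leave as-is
```

For example, `smart_decode("hello%20world")` gives `hello world`, and `smart_decode("aGVsbG8=")` gives `hello`.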

The Burp Target tool contains detailed information about your target applications, and lets

you drive the process of testing for vulnerabilities. 

Burp Proxy is an intercepting web proxy that operates as a man-in-the-middle between the

end browser and the target web application. It lets you intercept, inspect and modify the raw

traffic passing in both directions.


IMPLEMENTATION

INFORMATION GATHERING:

Footprinting is an ethical hacking process of gathering information about the
target and its environment: finding the target IP address, determining the network
range, and identifying active machines, DNS records and subdomains. This is a
pre-attack stage, and maximum effort is made to ensure that the operations are
executed under stealth so that the target cannot trace them back to you.
Footprinting is the first and most important step, because after it a penetration
tester knows how a hacker sees the network. Good information gathering can make the
difference between a successful pentest and one that fails to provide maximum
benefit to the client. It includes:
 Registration details of the website, contact details.
 Email harvesting.
 Finding out the target IP address and determine network range
 Identify active machine, DNS record , subdomains.
 Operating system fingerprinting.
 Finding login pages, sensitive directory.
 Find out any known vulnerability for that particular version.
Information gathering can be done in two methods:
1. Active method.
2. Passive method.
ACTIVE METHODS:
PING:
Ping is used to check whether a host is active, i.e. to verify that a device can
communicate with another on a network. Ping is used diagnostically to ensure that
the host computer the user is trying to reach is actually operating. It works by
sending an Internet Control Message Protocol (ICMP) Echo Request to a specified
interface on the network and waiting for a reply. Ping can be used in
troubleshooting to test connectivity and to determine response time.

IP address of the target: 65.61.137.117.


OS of the target: Windows, as the TTL (time to live) value is 110, which is close to
the Windows default of 128.
Host is live.
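The TTL reasoning above can be written as a tiny helper. The sketch below encodes the common default initial TTLs (64 for Linux/Unix, 128 for Windows, 255 for many network devices) and picks the smallest default that is at least the observed value; it is only a heuristic, since initial TTLs can be changed and hop counts vary.

```python
def guess_os_from_ttl(observed_ttl):
    """Guess the sender's OS family from a ping reply's TTL.

    The observed TTL is the sender's initial TTL minus the number of hops,
    so we pick the smallest common initial TTL >= the observed value.
    """
    for initial, name in [(64, "Linux/Unix"), (128, "Windows"), (255, "Network device")]:
        if observed_ttl <= initial:
            return name
    return "Unknown"
```

For the reply above, `guess_os_from_ttl(110)` returns `Windows`.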
TRACEROUTE:

The tracert command shows several details about the path that a packet takes from the
computer or device you're on to whatever destination you specify. Traceroute is a very useful
tool for determining the response delays and routing loops present in a network pathway
across packet switched nodes. It also helps to locate any points of failure encountered while
en route to a certain destination.

Traceroute uses ICMP messages and TTL fields in the IP header for its operations, and
transmits packets with small TTL values. Every hop that handles the packet subtracts "1"
from the packet's TTL. If the TTL reaches zero, the packet has expired and is discarded.
Traceroute depends on the common router practice of sending an ICMP time-exceeded
message back to the sender when the TTL expires.

• It measures the speed and route data takes to a destination server.
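The TTL mechanic described above can be modelled in a few lines. The sketch below is a toy simulation (no real packets are sent): each probe's TTL is decremented at every hop, and the hop where it reaches zero is the one that would answer with an ICMP time-exceeded message, revealing itself.

```python
def traceroute_sim(path):
    """Toy model of traceroute over a list of router names.

    Probes are sent with TTL = 1, 2, 3, ...; the hop that decrements
    the TTL to zero is the one that replies, so each probe discovers
    one more router along the path.
    """
    discovered = []
    for ttl in range(1, len(path) + 1):
        remaining = ttl
        for hop in path:
            remaining -= 1
            if remaining == 0:          # TTL expired here: this hop replies
                discovered.append(hop)
                break
    return discovered
```

With a hypothetical path `["r1", "r2", "r3"]`, the probes discover the routers in order, exactly as traceroute prints them.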


PASSIVE METHODS:

NSlookup:

 Nslookup is a network administration command-line tool available in many computer


operating systems for querying the Domain Name System (DNS) to obtain domain
name or IP address mapping, or other DNS records.

 Domain Name System (DNS) is a collection of databases that


translate hostnames to IP addresses. DNS is often referred to as the internet's phone
book because it converts easy-to-remember hostnames like www.google.com to IP
addresses like 216.58.217.46. This takes place behind the scenes after you type
a URL into a web browser's address bar. Without DNS (and especially search engines
like Google), navigating the internet wouldn't be easy, since we'd have to enter the IP
address of each website we want to visit.

 NsLookup queries the specified DNS server and retrieves the requested records that
are associated with the domain name you provided. These records contain information
like the domain name’s IP addresses.

The following types of DNS records are especially useful:

 A: the IPv4 address of the domain.


 AAAA: the domain’s IPv6 address.
 CNAME: the canonical name — allowing one domain name to map on to another.
This allows more than one website to refer to a single web server.
 MX: the server that handles email for the domain.
 NS: one or more authoritative name server records for the domain.
 TXT: a record containing information for use outside the DNS server. The content
takes the form name=value. 
 Finding the nameservers can give us some information about the hosting provider of
the domain.
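Tools like nslookup and dig print answers in a zone-file-style layout of name, TTL, class, type, and data. As a small illustration of the record types listed above, the sketch below parses one such line into a dict; the sample records in the test are hypothetical.

```python
def parse_record(line):
    """Split a zone-file-style answer line into name, TTL, class, type, and data."""
    name, ttl, rclass, rtype, rdata = line.split(None, 4)
    return {
        "name": name.rstrip("."),   # drop the trailing dot of the FQDN
        "ttl": int(ttl),
        "class": rclass,
        "type": rtype,
        "data": rdata,              # for MX this includes the priority
    }
```

For example, `parse_record("example.com. 3600 IN A 93.184.216.34")` yields an A record whose data is the IPv4 address.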

DNS Enumeration:

• DNS enumeration is done to find large amounts of information. DNS enumeration
is the process of locating all the DNS servers and corresponding records of an
organisation. DNS is like a map or an address book.

• In the information gathering phase, DNS enumeration is one of the most critical
steps. When we mention DNS enumeration, we are referring to all the techniques we
use to gather as much information as possible by querying the DNS server of a
website or host. We will use both automatic and manual techniques to achieve our
goal, and we will see their differences. Essentially, the techniques covered are:

• Nslookup and dig tools

• Zone transfer attack with dig

• Fierce

• DNSenum and DNSrecon

• Finding subdomains by using Search Engines
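One of the techniques above, subdomain discovery, is often done by brute-forcing a wordlist against DNS. The sketch below shows the idea; the resolver is passed in as a function so the same logic works with a real lookup (e.g. socket.gethostbyname) or a stub, and the wordlist and results shown are hypothetical.

```python
def enumerate_subdomains(domain, wordlist, resolve):
    """Brute-force subdomains of `domain` using `wordlist`.

    `resolve` maps a hostname to an IP string, or returns None when the
    name does not exist. Returns a dict of discovered name -> IP.
    """
    found = {}
    for word in wordlist:
        candidate = f"{word}.{domain}"
        ip = resolve(candidate)
        if ip:
            found[candidate] = ip
    return found
```

With a real resolver this would be, for example, `enumerate_subdomains("testfire.net", ["www", "mail"], lambda h: socket.gethostbyname(h))`, wrapped to return None on lookup failure.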

DNSENUM PACKAGE OPERATIONS:

• Get the host IP address

• Get the name servers (threaded).

• Get the MX record (threaded).

• Get extra names and subdomains via Google scraping.


DNSRECON PACKAGE DESCRIPTION :

1. Check all NS records for zone transfers.

2. Enumerate general DNS records for a given domain (MX, SOA, NS, A, AAAA, SPF
and TXT).

3. Perform common SRV record enumeration.

4. Top-level domain (TLD) expansion.
FIERCE:
• Fierce is not an IP scanner, it is not a DDoS tool, it is not designed to scan the
whole Internet or perform any un-targeted attacks. It is meant specifically to locate
likely targets both inside and outside a corporate network. Only those targets are
listed (unless the -nopattern switch is used).
TECHNICALINFO.NET TOOL:
Technicalinfo.net features a collection of white papers on a number of topics in security. It
was created by Gunter Ollmann, director of security strategy for
IBM Internet Security Systems. The useful section of Technical Info is Tools, where
Ollmann has provided Web interfaces to a number of tools that can scan open-source
information about Internet domains and IP addresses. As explained in his paper on “Passive
Information Gathering Techniques,” organizations should routinely monitor the details of
information about their networks by scanning the net and ensuring that what’s actually
available is only what should be available.
DOMAIN DOSSIER :

The Domain Dossier tool generates reports from public records about domain names and
IP addresses to help solve problems, investigate cybercrime, or just better understand how
things are set up. These reports may show you :

 Owner’s contact information


 Registrar and registry information
 The company that is hosting a Web site
 Where an IP address is geographically located
 What type of server is at the address and upstream networks of a site
Go to “technicalinfo.net” and click on the TOOLS link, or enter
“technicalinfo.net/tools/” directly in the browser. Then the below window appears.

Steps:
 We should not give any text in the Address textbox
 Then check all the checkboxes as shown
 Then click on Submit button

 Now it is redirected to Domain Dossier page

 Give the domain as “testfire.net” or we can give the “IP address” in the domain
or IP address textbox as shown above
 Again check all the checkboxes and click on Go button
 Then we get the reports as discussed
WHOIS INFO:
Whois gives information about the name servers and DNS records of the target
system.

 Above we get the subdomains of the akam.net website


 Time-to-live (TTL) value: It is a value in an Internet Protocol (IP) packet that tells
a network router whether or not the packet has been in the network too long and
should be discarded.
 Email and other information of the DNS records.
Domain Whois record:

Observation:

 The domain “testfire.net” has been registered with an entity called “CSC Corporate
Domains, Inc”
 Any queries should be conducted against “whois.corporatedomains.com”
 The “testfire.net” domain was initially registered (i.e. created) on 23rd July 1999, and
must be renewed before 23rd July 2019 (i.e. the expiry date).
Reverse IP DomainCheck:
A reverse IP domain check takes a domain name or IP address pointing to a web server and
searches for other sites known to be hosted on that same web server. Data is gathered from
search engine results, which are not guaranteed to be complete. IP-Address.org provides an
interesting visual lookup tool.
 It gives information about the host server: the target system is not listed in any
blacklist, and 2 live websites are using the IP of the host server.

 These are the 3 web servers hosted on the target system.

RECON-NG:
Recon-ng provides a powerful environment in which open source web-based reconnaissance
can be conducted quickly and thoroughly.
Now to check the modules available type
Command: show modules
SUBDOMAINS:
We will use this module by typing use ‘module name’. Type run to execute this module.

When we type run it starts finding all the sub-domains of that particular domain. 

TO CHECK XSSED VULNERABILITY:


If we want to run the XSSED vulnerabilities test module, we can do so in the following
manner. The tool runs the defined module and displays the results on the screen as shown
below.
WAFW00F TOOL:
• Web application firewalls play an important role in the security of websites as they
can mitigate risks and they can offer protection against a large-scale
of vulnerabilities.
• That is the reason why many companies nowadays are implementing a web
application firewall solution in their existing infrastructure.
 Attackers want to exploit flaws in your applications; for a website administrator, the
best way to detect an attacker's footprints on a website is a web application firewall.

 A WAF detects and blocks specific attack patterns on the web application.

Pentesters will identify the presence of a firewall in the web application.
