
Lab 2: Perform passive information gathering techniques to gather information about the target.

A penetration test is about breaking into a system and taking ownership of it. To break into a system, we need to identify its possible entry points and any vulnerabilities in those entry points. To identify this information, we need to gather information about our targets. The more we know about the targets, the better our chances of successfully penetrating the system.

Information gathering can be broken into two main logical steps.

1. Passive Information Gathering

2. Active Information Gathering


Passive Information Gathering

In the passive information gathering process, we collect information about the targets using publicly available resources, such as search engine results and WHOIS information. The goal is to find as much information as possible about the target.
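
Beyond manual searches, WHOIS records can be collected with a short script. The sketch below is a minimal example, assuming the standard whois command-line client is installed; the domain example.com is only a placeholder.

    import subprocess

    def whois_lookup(domain):
        # Run the system "whois" client and capture its text output.
        result = subprocess.run(["whois", domain], capture_output=True, text=True)
        return result.stdout

    # Placeholder target; replace with the domain being investigated.
    print(whois_lookup("example.com"))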

We need a web browser and an internet connection to try the Google search operators.
Google Search Operators

Site:

If we include [site:] in our query, Google will restrict the results to those
websites in the given domain.

For example, with the query site:lk we will find pages within the .lk domain.
Intitle:

If we include [intitle:] in our query, Google will restrict the results to those
websites that mention the search word in the title.

For example, with the query intitle:google we will get websites that mention the word “google” in their title.
Inurl:

If we include [inurl:] in our query, Google will restrict the results to those
websites that mention the search word in the URL.

For example, for the query inurl:google, Google will return websites that mention the word “google” in their URL.
Info:

The query [info:] will present some information that Google has about that web
page.

For example, the query info:google.com will show information about the Google homepage.
Filetype:

If we include [filetype:] in our query, Google will restrict the results to files with the extension specified by the type.

For example, the search results for the query filetype:pdf hacking will contain hacking-related .pdf files. By changing the file extension, we can easily find relevant movies, songs, research papers, and documents.
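
These operators can also be scripted. The sketch below is a minimal example, assuming the third-party googlesearch-python package; its search() helper and num_results parameter are assumptions about that package, not an official Google API.

    # Assumes: pip install googlesearch-python (unofficial, third-party package).
    from googlesearch import search

    # Print the first few results for the filetype: example above.
    for url in search("filetype:pdf hacking", num_results=10):
        print(url)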

References:- https://www.exploit-db.com/google-hacking-database
Shodan Search Engine

Shodan is a specialized search engine that lets users find sensitive information about unprotected internet-connected devices, such as computers, baby monitors, printers, webcams, routers, home automation systems, smart appliances, and servers, using various filters. Any device that is not protected is potentially vulnerable to anyone, including hackers, who can use Shodan to find it.

With Shodan, anyone can find devices that use default login details, a serious, and all too common, security misconfiguration.
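
Shodan can also be queried from its official Python library. The sketch below is a minimal example, assuming a valid Shodan API key; the key string and the query are placeholders.

    # Assumes: pip install shodan, and a valid Shodan API key.
    import shodan

    api = shodan.Shodan("YOUR_API_KEY")        # placeholder API key
    results = api.search("default password")   # example query

    print("Results found:", results["total"])
    for match in results["matches"][:5]:
        # Each match describes one exposed device (IP, port, organization).
        print(match["ip_str"], match.get("port"), match.get("org"))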

References:- https://www.shodan.io/
Censys Search Engine

Censys is a search engine that scans the Internet searching for devices and returns aggregate reports on how resources, devices, websites, and certificates are configured and deployed. Censys regularly probes every public IP address and popular domain names, curates and enriches the resulting data, and makes it intelligible through an interactive search engine and API.
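
The Censys API can also be reached through its Python library. The sketch below is a minimal example, assuming the censys package with credentials set in the CENSYS_API_ID and CENSYS_API_SECRET environment variables; the query string is only an example.

    # Assumes: pip install censys, with CENSYS_API_ID / CENSYS_API_SECRET exported.
    from censys.search import CensysHosts

    hosts = CensysHosts()
    # Example query: hosts exposing an HTTP service.
    for page in hosts.search("services.service_name: HTTP", per_page=5, pages=1):
        for host in page:
            print(host["ip"])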

References:- https://censys.io/
Netcraft Site Report

Netcraft is an Internet monitoring company that monitors uptime and provides server operating system detection as well as a number of other services. Its site report service provides us with useful information about our target, including the IP address and operating system of the web server as well as the DNS server.
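
Netcraft's site report is a web tool rather than an API; the snippet below simply opens the report for a placeholder target in the default browser. The url query parameter used here is an assumption based on the reference link below.

    import webbrowser

    # Open the Netcraft site report for a placeholder target.
    # The "url" query parameter is assumed from the site_report link format.
    webbrowser.open("https://toolbar.netcraft.com/site_report?url=example.com")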

References:- https://toolbar.netcraft.com/site_report
Wayback Machine

The Wayback Machine has catalogued more than 370 billion web pages from as far back as 1996, so there is a good chance that the website we want to see can be found on the Wayback Machine, as long as the site allows crawlers and isn't password protected or blocked.
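
Archived copies can also be looked up programmatically through the Wayback Machine availability API. The sketch below is a minimal example using the requests library; example.com is only a placeholder.

    # Assumes: pip install requests.
    import requests

    resp = requests.get("https://archive.org/wayback/available",
                        params={"url": "example.com"})
    snapshot = resp.json().get("archived_snapshots", {}).get("closest")
    if snapshot:
        # URL and timestamp of the closest archived copy.
        print(snapshot["url"], snapshot["timestamp"])
    else:
        print("No archived snapshot found.")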

References:- https://archive.org/web/
Robtex

Robtex is one of the world’s largest network tools, with millions of unique daily visitors. At robtex.com, we can find everything we need to know about domains, DNS, IP addresses, and routes.
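
Robtex also advertises a free JSON API. The sketch below is only a rough illustration: the freeapi.robtex.com endpoint, the pdns/forward path, and the line-delimited response format are assumptions and should be checked against the current Robtex documentation.

    # Assumes: pip install requests. The endpoint below is an assumption; verify
    # it against the current Robtex API documentation before relying on it.
    import requests

    resp = requests.get("https://freeapi.robtex.com/pdns/forward/example.com")
    # The free API is believed to return one JSON record per line.
    for line in resp.text.splitlines():
        print(line)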

References:- https://www.robtex.com/
