Unit I

1. a) Discuss the need for computer forensics

Computer forensics is also known as digital or cyber forensics. It is a branch of digital forensic science. Using technology and investigative techniques, computer forensics helps identify, collect, and store evidence from an electronic device. Computer forensics can be used by law enforcement agencies in a court of law or by businesses and individuals to recover lost or damaged data.
b) Discuss computer forensics fundamentals

 Identification: Identifying what evidence is present, where it is stored, and


how it is stored (in which format). Electronic devices can be personal
computers, Mobile phones, PDAs, etc.
 Preservation: Data is isolated, secured, and preserved. It includes
prohibiting unauthorized personnel from using the digital device so that
digital evidence, mistakenly or purposely, is not tampered with and making a
copy of the original evidence.
 Analysis: Forensic lab personnel reconstruct fragments of data and draw
conclusions based on evidence.
 Documentation: A record of all the visible data is created. It helps in
recreating and reviewing the crime scene. All the findings from the
investigations are documented.
 Presentation: All the documented findings are produced in a court of law for
further investigations.

PROCEDURE:
The procedure starts with identifying the devices used and collecting the preliminary evidence at the crime scene. A court warrant is then obtained for the seizure of the evidence, which leads to the seizure of the evidence. The evidence is then transported to the forensics lab for further investigation; the documented process of transporting and handling the evidence from the crime scene to the lab is called the chain of custody. The evidence is then copied for analysis and the original evidence is kept safe, because analysis is always done on the copied evidence and never on the original evidence.

The analysis is then done on the copied evidence for suspicious activities and
accordingly, the findings are documented in a nontechnical tone. The
documented findings are then presented in a court of law for further
investigations.
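Because analysis is done only on the copy, investigators typically verify that the working copy is a faithful duplicate of the original. Below is a minimal sketch of that check in Python using cryptographic hashes; the file names are hypothetical examples, not part of any specific tool.

# Minimal sketch: verifying that a working copy of evidence matches the original
# by comparing cryptographic hashes. File paths are hypothetical examples.
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash a file in chunks so large disk images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("original_evidence.img")   # hypothetical original image
working = sha256_of("working_copy.img")         # hypothetical analysis copy

if original == working:
    print("Hashes match: the analysis copy is a faithful duplicate of the original.")
else:
    print("Hash mismatch: the copy must not be used for analysis.")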
Some Tools Used for Investigation:
Tools for Laptop or PC –
 COFEE – A suite of tools for Windows developed by Microsoft.
 The Coroner’s Toolkit – A suite of programs for Unix analysis.
 The Sleuth Kit – A library of tools for both Unix and Windows.
Tools for Memory:
 Volatility
 WindowsSCOPE
Tools for Mobile Devices:
 Micro Systemation XRY/XACT

2. a) Discuss the steps of digital forensics

Digital forensics entails the following steps:

 Identification
 Preservation
 Analysis
 Documentation
 Presentation
 Identification
 It is the first step in the forensic process. The identification
process mainly includes things like what evidence is present,
where it is stored, and lastly, how it is stored (in which format).
 Electronic storage media can be personal computers, Mobile
phones, PDAs, etc.
 Preservation
 In this phase, data is isolated, secured, and preserved. It
includes preventing people from using the digital device so that
digital evidence is not tampered with.
 Analysis
 In this step, investigation agents reconstruct fragments of data
and draw conclusions based on evidence found. However, it
might take numerous iterations of examination to support a
specific crime theory.
 Documentation
 In this process, a record of all the visible data must be created. It helps in recreating the crime scene and reviewing it. It involves proper documentation of the crime scene along with photographing, sketching, and crime-scene mapping.
 Presentation
 In this last step, the process of summarization and explanation of
conclusions is done.
 However, it should be written in a layperson’s terms using
abstracted terminologies. All abstracted terminologies should
reference the specific details.
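In practice, documentation and chain-of-custody records are kept as structured, timestamped entries. The sketch below shows one plausible way to append such an entry to a log file; every field name and value here is a hypothetical example, not a prescribed standard.

# Minimal sketch of a documentation / chain-of-custody entry kept during an
# investigation. All field names and values are hypothetical examples.
import json
from datetime import datetime, timezone

entry = {
    "case_id": "CASE-0001",                       # hypothetical case number
    "item": "Laptop hard drive, 500 GB",
    "collected_by": "Investigator A",
    "collected_at": datetime.now(timezone.utc).isoformat(),
    "action": "Seized at scene and transported to forensics lab",
    "evidence_hash": "sha256:<hash of forensic image>",
}

# Appending each handling step to a log file preserves the order of custody events.
with open("chain_of_custody.jsonl", "a") as log:
    log.write(json.dumps(entry) + "\n")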

b) Discuss computer crimes

COMPUTER CRIMES

The growth and advances in digital technology create a whole new platform for criminal activity. Since the advancement of technology, any crime that involves using a computer or network is generally referred to as a cybercrime or computer crime. The penalties differ for each crime, depending on whether it violated state or federal laws. In general, they include fines, imprisonment, probation, or all of the above.

Hacking
The computer crime hacking refers to the practice of gaining unauthorized
access to another person’s computer, database, or network where private
information can be stored. It is a felony in the U.S. to hack a computer system,
whether it is a single personal computer or an organizational computer network.
However, not all types of “hacking” refer to crimes. Some organizations perform
“Ethical hacking” on their own systems or with permission to explore the systems
of others to look for vulnerabilities. This is treated differently than “malicious
hacking,” the act of entering someone’s computer without their knowledge to take
data or leave viruses.

Piracy
Piracy is a computer crime that occurs when a person distributes copyrighted
material without gaining permission from the original owner. The shared material
can be different types of media, including music, software, movies, images, and
books. There are many sharing websites that practice internet piracy by offering
free, downloadable versions of products. In many jurisdictions, it is only the
sharing of materials that is illegal, and being in receipt may not be illegal.
However, many peer-to-peer (p2p) systems require users to share material with
others as it is being downloaded, resulting in a form of collaborative piracy. The
charges for piracy differ from case to case, so it is important to contact an attorney to ensure that you are correctly informed about the laws regarding your specific situation.

Cyber Stalking/Harassment
The victim of cyber stalking is subjected to an excessive number of online
messages, whether through social media, web forums, or email. It is common for
the perpetrator to have real world contact with the victim but use the internet to
stalk them instead of stalking them in the physical sense. It could progress into
traditional stalking if the perpetrator feels they need to make more of an impact
on their victim’s lives.

Identity theft
Identity theft in the world of computer crimes involves a form of hacking where
the perpetrator accesses the victim’s sensitive information such as their Social
Security Number (SSN), bank account information, or credit card numbers. They
then use this information to spend their victim’s money for online shopping or
simply to steal the money through fraudulent transfers.

Child Pornography/Abuse
This cybercrime can involve the perpetrator looking to create or distribute sexual
images of children. In some cases, the accused seeks out minors on the internet,
whether that be via social media or chatrooms with the objective of producing
child pornography. The government monitors a large number of chat rooms in hopes of reducing and preventing this type of exploitation, and also maintains databases of existing child pornographic content that may be shared. Convictions
for these charges typically mean long prison sentences. It is important to contact
an attorney in the case of any accusations of these crimes because the
punishments are so severe.

3. a) Discuss types of digital forensic evidence

Types of Digital Evidence


In this article, we’ll talk strictly about digital evidence available on the PC or, more
precisely, on the computer’s hard drive and live memory dumps. This leaves the
entire domain of mobile forensics aside, for a good reason: mobile forensics has
its own techniques, approaches, methods and issues.

Types of digital evidence include all of the following, and more:

 Address books and contact lists


 Audio files and voice recordings
 Backups to various programs, including backups to mobile devices
 Bookmarks and favorites
 Browser history
 Calendars
 Compressed archives (ZIP, RAR, etc.) including encrypted archives
 Configuration and .ini files (may contain account information, last access
dates etc.)
 Cookies
 Databases
 Documents
 Email messages, attachments and email databases
 Events
 Hidden and system files
 Log files
 Organizer items
 Page files, hibernation files and printer spooler files
 Pictures, images, digital photos
 Videos
 Virtual machines
 System files
 Temporary files
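To make the list above concrete, the following sketch groups the files in a folder into rough evidence categories by extension. The category map and the folder name are illustrative assumptions only, not an exhaustive forensic classification.

# Minimal sketch: triaging a folder by grouping files into rough evidence
# categories by extension. The category map is an illustrative assumption.
from pathlib import Path
from collections import defaultdict

CATEGORIES = {
    ".jpg": "pictures", ".png": "pictures",
    ".mp3": "audio", ".wav": "audio",
    ".mp4": "videos", ".avi": "videos",
    ".doc": "documents", ".pdf": "documents",
    ".zip": "compressed archives", ".rar": "compressed archives",
    ".log": "log files", ".ini": "configuration files",
}

def triage(folder):
    found = defaultdict(list)
    for path in Path(folder).rglob("*"):
        if path.is_file():
            category = CATEGORIES.get(path.suffix.lower(), "other")
            found[category].append(path.name)
    return found

for category, files in triage("./seized_files").items():   # hypothetical folder
    print(category, ":", len(files), "file(s)")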

b) Discuss legal aspects of digital forensics

Unit II

1. Understanding computer components


a) Discuss input/output devices

An input/output device, often known as an IO device, is any hardware that


allows a human operator or other systems to interface with a computer.
Input/output devices, as the name implies, are capable of delivering data
(output) to and receiving data from a computer (input). An input/output (I/O)
device is a piece of hardware that can take, output, or process data. It receives
data as input and provides it to a computer, as well as sends computer data to
storage media as a storage output.
Input Devices
Input devices are the devices that are used to send signals to the computer for performing tasks. The receiver at the other end is the CPU (Central Processing Unit), which then sends signals to the output devices. Some of the classifications of input devices are:
 Keyboard Devices
 Pointing Devices
 Composite Devices
 Game Controller
 Visual Devices
 Audio Input Devices
Some of the input devices are described below.

Keyboard

The keyboard is the most frequent and widely used input device for entering
data into a computer. Although there are some additional keys for performing
other operations, the keyboard layout is similar to that of a typical typewriter.
Generally, keyboards come in two sizes: 84 keys or 101/102 keys but currently
keyboards with 104 keys or 108 keys are also available for Windows and the
Internet.

Types of Keys
 Numeric Keys: These are used to enter numeric data or move the cursor. They usually consist of a set of 17 keys.
 Typing Keys: The letter keys (A-Z) and number keys (0-9) are among these keys.
 Control Keys: These keys control the pointer and the screen. They include the four directional arrow keys as well as Home, End, Insert, Delete, Alternate (Alt), Control (Ctrl), and Escape (Esc).
 Special Keys: Enter, Shift, Caps Lock, NumLk, Tab, etc., and Print Screen
are among the special function keys on the keyboard.
 Function Keys: The 12 keys from F1 to F12 are on the topmost row of the
keyboard.
Mouse
The most common pointing device is the mouse. The mouse is used to move a
little cursor across the screen while clicking and dragging. The cursor will stop if
you let go of the mouse. The computer is dependent on you to move the
mouse; it won’t move by itself. As a result, it’s an input device.
A mouse is an input device that lets you move the mouse on a flat surface to
control the coordinates and movement of the on-screen cursor/pointer.
The left mouse button can be used to select or move items, while the right
mouse button when clicked displays extra menus.

Joystick

A joystick is a pointing device that is used to move the cursor on a computer


screen. A spherical ball is attached to both the bottom and top ends of the stick.
In a socket, the lower spherical ball slides. You can move the joystick in all four
directions.

The joystick’s function is comparable to that of a mouse. It is primarily used in
CAD (Computer-Aided Design) and playing video games on the computer.

Track Ball

A trackball is an accessory for notebooks and laptops which works in place of a mouse. It has a structure similar to a mouse: a half-inserted ball that we move with our fingers to control the cursor. Different shapes are used for this, like balls, buttons, or squares.
Light Pen
A light pen is a type of pointing device that looks like a pen. It can be used to
select a menu item or to draw on the monitor screen. A photocell and an optical
system are enclosed in a tiny tube. When the tip of a light pen is moved across
a monitor screen while the pen button is pushed, the photocell sensor element
identifies the screen location and provides a signal to the CPU.


Scanner
A scanner is an input device that functions similarly to a photocopier. It’s
employed when there’s information on paper that needs to be transferred to the
computer’s hard disc for subsequent manipulation. The scanner collects images
from the source and converts them to a digital format that may be saved on a
disc. Before they are printed, these images can be modified.

Optical Mark Reader (OMR)

An Optical Mark Reader is a device that is generally used in educational


institutions to check the answers to objective exams. It recognizes the marks
present by pencil and pen.
Optical Character Reader (OCR)
OCR stands for optical character recognition, and an OCR device reads printed text. It optically scans the text character by character, turns it into machine-readable code, and saves it to the system memory.

Magnetic Ink Character Reader (MICR)

It is a device that is generally used in banks to deal with the cheques given to
the bank by the customer. It helps in reading the magnetic ink present in the
code number and cheque number. This process is very fast compared to any
other process.
Bar Code Reader
A bar code reader is a device that reads data that is bar-coded (data represented by light and dark lines). Bar-coded data is commonly used to mark items, number books, and so on. It could be a handheld scanner or part of a stationary scanner. A bar code reader scans a bar code image, converts it to an alphanumeric value, and then sends it to the computer to which it is connected.
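As an illustration of the kind of alphanumeric value a bar code reader hands to the computer, the sketch below validates the check digit of an EAN-13 bar code number. This is only an illustrative example of bar code arithmetic, not a description of any particular reader.

# Illustrative sketch: validating the check digit of an EAN-13 bar code number.
def is_valid_ean13(code: str) -> bool:
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Weights alternate 1, 3, 1, 3, ... over the first 12 digits.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    check = (10 - total % 10) % 10
    return check == digits[12]

print(is_valid_ean13("4006381333931"))  # True: a commonly cited example EAN-13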

Web Camera
Because a web camera records a video image of the scene in front of it, a
webcam is an input device. It is either built inside the computer (for example, a
laptop) or attached through a USB connection. A webcam is a computer-
connected tiny digital video camera. It’s also known as a web camera because
it can take images and record video. These cameras come with software that
must be installed on the computer in order to broadcast video in real-time over
the Internet. It can shoot images and HD videos, however, the video quality isn’t
as good as other cameras (In Mobiles or other devices or normal cameras).

Digitizer

A digitizer is a device that is used to convert analog signals to digital signals; it converts signals into numeric values. An example of a digitizer is the graphic tablet, which is used to convert graphics to binary data.

Microphone

The microphone is an input device that receives voice input and converts it to digital form. It is a very common device and is present in almost every device related to music.
Output Devices
Output devices are the devices that show us the result after giving input to a computer system. Output can take many different forms, like images, graphics, audio, video, etc. Some of the output devices are described below.
Monitor
Monitors, also known as Visual Display Units (VDUs), are a computer’s primary output device. A monitor creates images by arranging small dots, known as pixels, in a rectangular pattern. The number of pixels determines the image’s sharpness.
The two kinds of viewing screens used for monitors are described below.
 Cathode-Ray Tube (CRT) Monitor: Pixels are minuscule visual elements
that make up a CRT display. The higher the image quality or resolution, the
smaller the pixels.
 Flat-Panel Display Monitor: In comparison to the CRT, a flat-panel
display is a type of video display with less volume, weight, and power
consumption. They can be hung on the wall or worn on the wrist.
Flat-panel displays are currently used in calculators, video games, monitors,
laptop computers, and graphical displays.

Television

Television is one of the common output devices present in nearly every house. It portrays video and audio files on the screen as the user operates the television. Nowadays, flat-panel displays (such as LCD, LED, and plasma) are used instead of the CRT screens used earlier.
Printer
Printers are output devices that allow you to print information on paper. There
are certain types of printers which are described below.
 Impact Printers
 Character Printers
 Line Printers
 Non-Impact Printers
 Laser Printers
 Inkjet Printers

Impact Printer
In impact printers, characters are printed by striking a ribbon, which is then pressed against the paper. The following are the characteristics of impact printers:
 Exceptionally low consumable cost.
 Quite noisy
 Because of its low cost, it is ideal for large-scale printing.
 To create an image, there is physical contact with the paper.
Character Printers
Character Printer has the capability to print only one character at a time. It is of
two types.
 Dot Matrix Printer
 Daisy Wheel
Line Printers
Line Printers are printers that have the capability to print one line at a time. It is
of two types.
 Drum Printer
 Chain Printer
Non-Impact Printers
Characters are printed without the need for a ribbon in non-impact printers.
Because these printers print a full page at a time, they’re also known as Page
Printers. The following are the characteristics of non-impact printers:
 Faster
 They don’t make a lot of noise.
 Excellent quality
 Supports a variety of typefaces and character sizes
Laser Printers
Laser printers use laser light to produce dots that form characters on the page.
Inkjet Printers
Inkjet printers are printers that use spray technology to print on paper. An inkjet printer produces high-quality output and can also print in color.

Speakers

Speakers are devices that produce sound after getting a command from a
computer. Nowadays, speakers come with wireless technology also like
Bluetooth speakers.

Projector

Projectors are optical devices that project visuals onto a screen, whether stationary or moving. They help in displaying images on a big screen. Projectors are generally used in theatres, auditoriums, etc.

Plotter

A plotter is a device that helps in producing graphics or other drawings with a realistic, precise appearance. A graphics card is required to use these devices. Plotters use pen-like heads to generate exact designs from the computer.

Braille Reader

A Braille reader is a very important device used by blind users. It helps people with low vision or no vision recognize data by running their fingers over the device. It is a very important device for blind persons as it gives them the comfort of understanding letters, alphabets, etc., which helps them in their studies.

Video Card

A video card is a device that is fitted into the motherboard of the computer. It helps in improving the quality of digital content shown on output devices. It is an important tool that allows people to use multiple display devices.

Global Positioning System (GPS)


The Global Positioning System helps the user with directions, as it uses satellite technology to track the geographical location of the user. With continuous latitudinal and longitudinal calculations, GPS gives accurate results. Nowadays, all smart devices have inbuilt GPS.

Headphones

Headphones are just like a speaker, but they are generally used by a single person and are not suited to large areas. They are also called headsets and play sound at a lower volume suited to one listener.
Both the Input and Output Devices of the Computer
There are so many devices that contain the characteristics of both input and
output. They can perform both operations as they receive data and provide
results. Some of them are mentioned below.

USB Drive

USB Drive is one of the devices which perform both input and output operations
as a USB Drive helps in receiving data from a device and sending it to other
devices.

Modem

Modems are important devices that help in transmitting data over telephone lines.

CD and DVD

CDs and DVDs are common storage media that can save data from one computer in a particular format (working as an output device) and supply that data to another computer (working as an input device).

Headset

The headset consists of a speaker and microphone where a speaker is an


output device and a microphone works as an input device.

Facsimile
A facsimile is a fax machine that consists of a scanner and printer, where the
scanner works as an input device and the printer works as an output device.

b) Discuss the CPU

The CPU is responsible for all the major tasks like processing data and instructions inside the computer system. But all this is possible only because of the components present inside the CPU, which divide the work among themselves and process it at a fast pace to produce the desired result. We will study each of these components in the subsequent parts.

1. Control Unit (CU)

The control unit controls the way the input and output devices, the Arithmetic and Logic Unit, and the computer’s memory respond to the instructions sent to the CPU. It fetches the input, decodes it, and then sends it to the processor, where the desired operation is performed. There are two types of control units – the hardwired CU and the microprogrammable CU.

Functions of Control Unit:

 It controls the sequence in which instructions move in and out of the processor and also the way the instructions are performed.
 It is responsible for fetching the input, converting it into signals, and storing it for further processing.
2. Arithmetic Logic Unit (ALU)

The Arithmetic and Logical Unit is responsible for arithmetical and logical calculations as well as
taking decisions in the system. It is also known as the mathematical brain of the computer. The ALU
makes use of registers for the calculations. It takes input from input registers, performs operations on
the data, and stores the output in an output register.

Functions of ALU:

 It performs arithmetic and logical operations and supports decision-making in the system.
 It acts as a bridge between the computer’s primary memory and the secondary memory. All
information that is exchanged between the primary and secondary memory passes through
the ALU.
3. Registers

Registers are part of a computer’s memory that is used to store the instructions temporarily to
provide the processor with the instructions at times of need. These registers are also known as
Processor registers as they play an important role in the processing of data. These registers store data
in the form of memory address and after the processing of the instruction present at that memory
address is completed, it stores the memory address of the next instruction. There are various kinds of
registers that perform different functions.

Functions of Registers:

 Input registers are used to carry the input.


 Output registers are used to carry the output.
 Temporary registers store data temporarily.
 Address registers store the address of the memory.
 The program counter stores the address of the instructions.
 Data registers hold the memory operand.
 Instruction registers hold the instruction codes.
4. Cache

The cache is a type of Random Access Memory that stores small amounts of data and instructions
temporarily which can be reused as and when required. It reduces the amount of time needed to fetch
the instructions as instead of fetching it from the RAM, it can be directly accessed from Cache in a
small amount of time.

Functions of Cache:

 They reduce the amount of time needed to fetch and execute instructions.
 They store data temporarily for later use.
5. Buses

A bus is a link between the different components of the computer system and the processor. Buses are used to send signals and data from the processor to different devices and vice versa. There are three types of buses – the address bus, which is used to send memory addresses from the processor to other components; the data bus, which is used to send actual data from the processor to the components; and the control bus, which is used to send control signals from the processor to other devices.

Functions of Bus:

 It is used to share data between different devices.


 It supplies power to different components of the system.
6. Clock

As the name suggests, the clock controls the timing and speed of the functions of different
components of the CPU. It sends out electrical signals which regulate the timing and speed of the
functions.

Functions of Clock:

 It maintains the synchronization of the components of the computer system.


 It keeps track of the current date and time.
So, this is all about the major components of the CPU which are responsible for the smooth
processing of instructions and data in the computer system.
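To see how the control unit, registers, and ALU cooperate, here is a toy sketch of the fetch-decode-execute cycle. The instruction format is purely illustrative and does not correspond to any real instruction set.

# Toy sketch (not a real instruction set) of the fetch-decode-execute cycle.
def alu(op, a, b):
    # The ALU performs the arithmetic/logical operation requested by the control unit.
    return {"ADD": a + b, "SUB": a - b}[op]

program = [("LOAD", "R0", 5), ("LOAD", "R1", 7), ("ADD", "R2", ("R0", "R1"))]
registers = {}
pc = 0  # program counter: holds the address of the next instruction

while pc < len(program):
    instruction = program[pc]            # fetch
    opcode, dest, operand = instruction  # decode
    if opcode == "LOAD":
        registers[dest] = operand        # execute: move a value into a register
    else:
        a, b = (registers[r] for r in operand)
        registers[dest] = alu(opcode, a, b)  # execute: ALU operation
    pc += 1

print(registers)  # {'R0': 5, 'R1': 7, 'R2': 12}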

c) Discuss digital media

Digital media

Hard drives store information in binary form and so are considered a type of physical digital media.

In mass communication, digital media is any communication media that operate in conjunction with various encoded machine-readable data formats. Digital content can be created, viewed, distributed, modified, listened to, and preserved on a digital electronics device, including digital data storage media (in contrast to analog electronic media) and digital broadcasting. Digital refers to any data represented by a series of digits, and media refers to methods of broadcasting or communicating this information. Together, digital media refers to mediums of digitized information broadcast through a screen and/or a speaker.[1] This also includes text, audio, video, and graphics that are transmitted over the internet for viewing or listening to on the internet.[2]
Digital media platforms, such as YouTube, Vimeo, and Twitch, accounted for viewership
rates of 27.9 billion hours in 2020.[3] A contributing factor to its part in what is commonly
referred to as the digital revolution can be attributed to the use of interconnectivity.[4]

Examples
Examples of digital media include software, digital images, digital video, video
games, web pages and websites, social media, digital data and databases, digital
audio such as MP3, electronic documents and electronic books. Digital media often
contrasts with print media, such as printed books, newspapers and magazines, and
other traditional or analog media, such as photographic film, audio tapes or video tapes.
Digital media has had a significantly broad and complex impact on society and culture.
Combined with the Internet and personal computing, digital media has
caused disruptive innovation in publishing, journalism, public relations, entertainment,
education, commerce and politics. Digital media has also posed new challenges
to copyright and intellectual property laws, fostering an open content movement in
which content creators voluntarily give up some or all of their legal rights to their work.
The ubiquity of digital media and its effects on society suggest that we are at the start of
a new era in industrial history, called the Information Age, perhaps leading to
a paperless society in which all media are produced and consumed on computers.[5]
However, challenges to a digital transition remain, including outdated copyright
laws, censorship, the digital divide, and the spectre of a digital dark age, in which older
media becomes inaccessible to new or upgraded information systems.[6] Digital media
has a significant, wide-ranging and complex impact on society and culture.[5]

2. a) Discuss system software and the operating system

1. System Software:
System software is a type of computer program that is designed to run a computer’s hardware and application programs. It controls a computer’s internal functioning, chiefly through an operating system. It also controls peripheral devices such as monitors, printers, and storage devices.
2. Operating System:
An operating system or OS is system software that manages computer
hardware, and software resources, and provides common services for computer
programs. All operating systems are system software. Every desktop computer,
tablet, and smartphone includes an operating system that provides basic
functionality for the device.
Difference between System Software and Operating System:

1. Definition: The software which manages the resources and makes the interaction between the user and the machine possible is system software. An operating system is software that communicates with the computer hardware and provides a place to run applications.
2. Scope of management: System software manages the system. The operating system manages the system as well as other system software.
3. Types: Types of system software include operating systems (MS Windows, Mac OS, Linux, etc.), device drivers (display drivers, USB drivers, etc.), firmware (BIOS, embedded systems, etc.), language translators (compilers, interpreters, assemblers), and utilities (antivirus software, WinRAR, etc.). Types of operating system include batch, multiprocessing, multiprogramming, multi-tasking, time-sharing, real-time, distributed, and network operating systems.
4. When it runs: System software runs only when required. The operating system runs all the time.
5. Residence in memory: System software loads into main memory whenever required and is loaded by the operating system. The operating system resides in main memory all the time while the system is on.
6. Examples: Examples of system software are macOS, Android, and Microsoft Windows. Examples of operating systems are Windows, OS X, and Linux.
System software and operating system are two related concepts but
are not the same thing. Here are some of the main differences:

1. Scope: Operating system is a type of system software, but system software


is not limited to operating system only. System software includes various
other programs that help to manage and run the computer system, such as
device drivers, language translators, utility programs, and more.
2. Function: Operating system is the main program that manages and controls
all the other software and hardware components of the computer. It provides
a platform for other software applications to run on top of it. System
software, on the other hand, provides tools and services to help developers
create and manage software applications.
3. Importance: Operating system is an essential component of any computer
system. Without an operating system, a computer cannot function. System
software, on the other hand, is important but not essential. Some
applications can run without system software, but they may not be able to
utilize the full power of the computer system.
4. Complexity: Operating systems are usually more complex than other types
of system software. They have to manage hardware resources, provide
security, and support various software applications. System software, on the
other hand, can be relatively simple in comparison, depending on its
purpose.
Overall, while both operating systems and system software are important
components of a computer system, they serve different purposes and have
different levels of importance and complexity.
2. b) Discuss operating system architecture

Architecture of Operating System


Overview

The operating system provides an environment for the users to execute computer programs. Operating systems come pre-installed on the computers you buy; for example, personal computers have Windows, Linux, or macOS, mainframe computers have z/OS, z/VM, etc., and mobile phones have operating systems such as Android and iOS. The architecture of an operating system consists of four major components – hardware, kernel, shell, and application – and we shall explore all of them in detail one by one.

Scope

 In this article we'll learn how the operating system acts as an intermediary for the users.
 We'll go through the components of the operating system, including process management, memory management, security, error detection, and I/O management.
 We'll also learn about the four architectures of operating systems: monolithic, layered, microkernel, and hybrid.
 We'll learn how the hybrid architecture of operating systems includes all of the previously mentioned architectures.

Architecture of an Operating System

Highlights:

 Operating system gives an environment for executing programs by the users.


 Kernel is the most central part of the operating systems.
 Software running on any operating system can be system software and application software.
The operating system as we know is an intermediary and its functionalities include file
management, memory management, process management, handling input and output, and
peripheral devices as well.


 The operating system handles all of the above tasks for the system as well as application
software. The architecture of an operating system is basically the design of its software and
hardware components. Depending upon the tasks or programs we need to run users can use the
operating system most suitable for that program/software.


 Before explaining various architectures of the operating systems, let's explore a few terms first
which are part of the operating system.

1) Application: The application represents the software that a user is running on an operating system; it can be either system or application software, e.g., Slack, the Sublime Text editor, etc.

2) Shell: The shell represents software that provides an interface for the user where it serves to launch or start some program for which the user gives instructions.

It can be of two types: the first is a command-line interface and the other is a graphical user interface, e.g., MS-DOS Shell, PowerShell, csh, ksh, etc.

3) Kernel: Kernel represents the most central and crucial part of the operating system where it is
used for resource management i.e. it provides necessary I/O, processor, and memory to the
application processes through inter-process communication mechanisms and system calls. Let's
understand the various types of architectures of the operating system.
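To make the idea of system calls concrete, here is a minimal sketch: on most systems, Python's os.open, os.write, and os.close map closely to the kernel's open/write/close system calls, which is how an application asks the kernel for I/O services. The file name is a hypothetical example.

# Minimal sketch: requesting kernel I/O services through (near-)system calls.
import os

fd = os.open("example.txt", os.O_CREAT | os.O_WRONLY, 0o644)  # system call: open
os.write(fd, b"written via system calls\n")                   # system call: write
os.close(fd)                                                  # system call: close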

Types of Architectures of Operating System

Highlights:
 Architectures of operating systems can be of four types monolithic, layered, microkernel, and
hybrid.
 Hybrid architecture is the combination of all architectures. There are major four types of
architectures of operating systems.

1) Monolithic Architecture

In monolithic architecture, each component of the operating system is contained in the kernel, i.e. it works in kernel space, and the components of the operating system communicate with each other using function calls.
Examples of this type of architecture are OS/360, VMS, and Linux.

Advantages:

1. The main advantage of having a monolithic architecture of the operating system is that it provides CPU scheduling, memory management, etc. through system calls.
2. In a single address space, the entire large process is running.
3. It is a single static binary file.

Disadvantages:
1. The main disadvantage is that all components are interdependent and when one of them fails
the entire system fails.
2. In case the user has to add a new service or functionality the entire operating system needs to
be changed.

2) Layered architecture

In layered architecture, components with similar functionalities are grouped to form a layer and in this way, a total of n+1 layers are constructed, counted from 0 to n, where each layer has a different set of functionalities and services. Example: the THE operating system; Windows XP and Linux also implement some level of layering.

The layers are implemented according to the following rule:

1. Each layer can communicate with all of its lower layers but not with its upper layer i.e.
any ith layer can communicate with all layers from 0 to i-1 but not with the i+1th layer.
2. Each layer is designed in such a way that it will only need the functionalities which are present in
itself or the layers below it.

There are 6 layers in layered architecture as shown below:

Let's explain the layers one by one


1) Hardware: This layer is the lowest layer in the layered operating system architecture, this
layer is responsible for the coordination with peripheral devices such
as keyboards, mice, scanners etc.

2) CPU scheduling: This layer is responsible for process scheduling; multiple queues are used for scheduling. Processes entering the system are kept in the job queue, while those which are ready to be executed are put into the ready queue. It manages which processes are to be kept in the CPU and which are to be kept out of the CPU.

3) Memory Management: This layer handles memory management, i.e. moving processes from secondary to primary memory for execution and vice versa. There are memories like RAM and ROM. RAM is the memory where our processes run: they are moved to RAM for execution, and when they exit they are removed from RAM.

4) Process Management: This layer is responsible for managing the various processes i.e.
assigning the CPU to those processes on a priority basis for their execution. Process management
uses many scheduling algorithms for prioritizing the processes for execution such as the Round-
Robin algorithm, FCFS(First Come First Serve), SJF(Shortest Job First), etc.

5) I/O Buffer: Buffering is the temporary storage of data and I/O Buffer means that the data
input is first buffered before storing it in the secondary memory. All I/O devices have buffers
attached to them for the temporary storage of the input data because it cannot be stored directly
in the secondary storage as the speed of the I/O devices is slow as compared to the processor.

6) User Programs: This is the application layer of the layered architecture of the operating system; it deals with all the application programs running, e.g. games, browsers, word processors, etc. It is the highest layer of the layered architecture.

Advantages:

1) The layered architecture of the operating system provides modularity because each layer is programmed to perform its own tasks only.
2) Since the layered architecture has independent components, changing or updating one of them will not affect the other components or stop the entire operating system from working; hence it is easy to debug and update.
3) The user can access the services of the hardware layer but cannot access the hardware layer itself because it is the innermost layer.
4) Each layer has its own functionalities, is concerned with itself only, and other layers are abstracted from it.

Disadvantages:

1. Layered architecture is complex in implementation because one layer may use the services of
the other layer and therefore, the layer using the services of another layer must be put below
the other one.
2. In a layered architecture, if one layer wants to communicate with another it has to send a
request which goes through all layers in between which increases response time causing
inefficiency in the system.
3) Microkernel Architecture

In this architecture, the components like process management, networking, file system
interaction, and device management are executed outside the kernel while memory management
and synchronization are executed inside the kernel. The processes inside the kernel have
relatively high priority, the components possess high modularity hence even if one or more
components fail the operating system keeps on working.

Example: Linux and Windows XP contain Modular components.

Advantages:
1. Microkernel operating systems are modular and hence, disturbing one of the components will
not affect the other component.
2. The architecture is compact and isolated and hence relatively efficient.
3. New features can be added without recompilation.

Disadvantages:

1. Implementing drivers as procedures requires a function call or context switch.


2. In microkernel architecture, providing services is costlier than monolithic operating systems.

4) Hybrid Architecture

Hybrid architecture as the name suggests consists of a hybrid of all the architectures explained so
far and hence it has properties of all of those architectures which makes it highly useful in
present-day operating systems.

The hybrid-architecture consists of three layers

1) Hardware abstraction layer: It is the interface between the kernel and hardware and is
present at the lowest level.

2) Microkernel Layer: This is the old microkernel that we know and it consists of CPU
scheduling, memory management, and inter-process communication.

3) Application Layer: It acts as an interface between the user and the microkernel. It contains
the functionalities like a file server, error detection, I/O device management, etc.
Example: Microsoft Windows NT kernel implements hybrid architecture of the operating
system.

Advantages:

1. Since it is a hybrid of other architectures it allows various architectures to provide their services
respectively.
2. It is easy to manage because it uses a layered approach.
3. The number of layers is relatively small.
4. Security and protection are relatively improved.

Disadvantage:

1) Hybrid architecture of the operating system keeps certain services in the kernel space while moving less critical services to the user space.

Conclusion

 We conclude that the operating system has various architectures with which we can describe
the functionality of various components.
 The components of the operating system are process management, memory management, I/O
management, Error Detection, and controlling peripheral devices.
 These architectures include monolithic, layered, microkernel, and hybrid architectures classified
on the basis of the structure of components.
 Hybrid architecture is the most efficient and useful architecture as it implements the
functionalities of all other architectures.
 Hybrid architecture is better in terms of security as well.

Frequently Asked Questions

1) How is the Hybrid Architecture of Operating Systems Better Than Other Architectures?

The hybrid architecture of operating systems, as we know, is a hybrid of other architectures like the monolithic, layered, and microkernel architectures, so it has the functionalities of all of them. Since hybrid architecture contains all the functionalities of monolithic, layered, and microkernel architecture, it is better than all of them.

2) What are the Key Differences Between Monolithic and Layered Architecture
of Operating Systems?

The key differences between the monolithic and layered architecture of the operating system are:
1. In a monolithic operating system, the entire set of operating system functionalities operates in kernel space, while in a layered architecture there are several layers where each layer has a specific set of functionalities.
2. In a monolithic operating system there are mainly three layers, while a layered architecture has multiple layers.

3. Discuss application software

What is an Application Software?


The term “application software” refers to software that performs specific functions for a
user. When a user interacts directly with a piece of software, it is called application
software. The sole purpose of application software is to assist the user in doing specified
tasks. Microsoft Word and Excel, as well as popular web browsers like Firefox and
Google Chrome, are examples of application software. It also encompasses the category
of mobile apps, which includes apps like WhatsApp for communication and games like
Candy Crush Saga. There are also app versions of popular services, such as weather or
transportation information, as well as apps that allow users to connect with
businesses. Global Positioning System (GPS), Graphics, multimedia, presentation
software, desktop publishing software, and so on are examples of such software.
Functions of Application Software
Application software programs are created to help with a wide range of tasks. Here are a
few examples:
 Information and data management
 Management of documents (document exchange systems)
 Development of visuals and video
 Communication via emails, text messaging, audio and video conferencing, and collaboration
 Management of accounting, finance, and payroll
 Management of resources (ERP and CRM systems)
 Management of a project
 Management of business processes
 Software for education (LMS and e-learning systems)
 Software for healthcare applications

Need for Application Software


End-users can use “application software” to conduct single or many tasks. Following are
a few reasons to need application software in computers:
 Helps the user in completing specified tasks: Application software is designed with
the user in mind. They help the end-user with specialized tasks in a variety of
industries, including education, business, and entertainment. Microsoft Word, for
example, is popular application software that allows users to create, edit, delete, and
do other tasks with Word documents.
 Manages and manipulates data: Business companies utilize application software to
manage and manipulate employees, customers, and other databases. Enterprise
resource management systems and customer relationship management systems are
two common examples of application software.
 Allows users to effectively organize information: Individual users can use
application software to efficiently create and handle large amounts of data. Microsoft
Excel, for example, is popular application software that allows users to manage
datasheets.
Types of Application Software
Application software can also be categorized based on its charge ability and accessibility.
Here is some application software:
 Freeware: It is offered for free, as the name implies. You can utilize freeware
application software that you can obtain from the Internet. This software, on the other
hand, does not allow you to change it or charge a fee for sharing it. Examples include
Adobe PDF, Mozilla Firefox, and Google Chrome.
 Shareware: This is given away to users for free as a trial, usually with a limited-time
offer. If consumers want to keep using this application software, they will have to
pay. WinZip, Anti-virus, and Adobe Reader are instances of shareware.
 Open-source: This type of application software comes with the source code, allowing
you to edit and even add features to it. These could be offered for free or for a fee.
Open-source application software includes Moodle and Apache Web Server.
 Closed source: This category includes the majority of the application software programs used nowadays. These are normally paid, and the source code is usually protected by intellectual property rights or patents. It usually comes with a set of restrictions. Microsoft Windows, Adobe Flash Player, WinRAR, and macOS are examples.
 Word Processing Software: Word Processing Software can be explained as software
that has the functionalities of editing, saving, and creating documents with Word
Processor Software like Microsoft Word.
 Spreadsheet Software: Spreadsheet Software is a kind of software that deals with the
worksheet where it works on some automated version to perform numeric functions.
For Example, Microsoft Excel.
 Presentation Software: It is a type of application software that is used to present
some applications like newly launched functions, products, etc. For Example,
Microsoft Powerpoint.
 Multimedia Software: Multimedia refers to the mixture of audio, video, image, text,
etc., and can be displayed or used with the help of multimedia software. There are so
many media players that do this kind of work.
 Web Browsers: A web browser is one of the most used applications worldwide; it gives you access to the internet. You can use it on your desktop, mobile, etc.
 Educational Software: With the growth of the Internet, a great deal of educational software is on the market. It includes language learning software, classroom management software, etc.
 Graphics Software: Graphics software is also used very widely. There are many applications where it is used; some of them include Canva and Adobe Photoshop.
 Simulation Software: Simulation software models the behaviour of a real-world system or process so that different designs or products can be compared and evaluated.
Examples of Application Software
Some examples of application software are:
 System for Hotel Management: It relates to the hotel industry’s management strategies, covering hotel administration, accounting, billing, marketing, housekeeping, and the front office or front desk.
 System for Payroll Management: It is a term used by all modern businesses to refer
to every employee who receives a regular salary or another form of payment. The
payroll software calculates all different payment options and generates the relevant
paychecks. Employee salary slips can also be printed or sent using this software.
 System for Human Resources Management: It describes the systems and activities
that exist at the nexus of Human Resource Management (HRM) and Information
Technology (IT). The HR department’s role is primarily administrative and is found
in all businesses.
 Attendance Recording System: It’s a piece of software that tracks and optimizes a
person’s or student’s presence in an organization or school. Nowadays, customers’
existing time/attendance recording devices, such as biometrics/access cards, can be
connected with attendance systems. Attendance management can be accomplished in
two ways: Integration of biometrics & Integration of manual attendance
 System of Billing: It is the billing software that is utilized to complete the billing
process. It keeps track of marked products and services given to a single consumer or
a group of customers.
Business Application Software
Many Application Software is used in Business. Some of them are mentioned below.
 Customer Relationship Management (CRM): CRM is a type of technology that can manage customers, customer transactions, future transactions, etc. It is very important nowadays. It helps in taking a business to the next level by staying connected with customers, retaining more revenue, and reducing friction.

 Enterprise Resource Planning (ERP): Enterprise Resource Planning is a type of


Software that handles some basic parts of any operation, resource management, etc.
 Project Management Software: Project Management Software is also a useful
application software that helps in the planning of the project, and allocation of
resources. It helps in effectively managing the project from a single place.
 Database: DBMS (Database Management System) is a way to keep data in an
automatic system. Here, various types of operations can also be performed in the
database.
 Business Process Management: Business process management software helps automate repetitive tasks by applying specific techniques.
 Resource Management Software: Resource Planning Software is a simple software
used to maintain the capital of the organization. It also helps in the allocation of
projects.
Advantages of Application Software
 It meets the client’s particular requirements. Because it is designed explicitly for one purpose, the client knows they can use one specific program to complete the task.
 Businesses that are associated with particular applications can restrict access and consider ways to monitor their operations.
 Regular updates from the developers of licensed application software can be obtained, keeping the software healthy and secure.
Disadvantages of Application Software
 Developing application software to achieve certain goals can be quite expensive for developers. This can have an impact on their financial plan and income stream, especially if an excessive amount of time is spent on a product that does not do well.
 Application software that is frequently used by many of us and then published on the internet poses a genuine risk of infection by a bug or other malicious programs.
Difference between System Software and Application Software
The Windows Operating System is a good example of system software. MS Office,
Photoshop, and CorelDraw are some well-known examples of application software.

1. Purpose: The main purpose of system software is to manage the resources of the system; it acts as a platform for the execution of application software. Application software is created to execute a certain set of tasks.
2. Language: System software is written in a low-level programming language like machine code or assembly language. Application software is written in a high-level language like Java, C++, .NET, or PHP.
3. When it runs: When the computer is turned on, system software begins to run, and it stops when the computer is turned off. Application software runs when a user asks, according to the task.
4. Necessity: Without system software, a computer system cannot even start. User-specific application software is not required to run the system as a whole.
5. Scope: System software has a wide range of capabilities. The objective of application software is to accomplish a certain task.
6. Examples: System software includes language processors (interpreters, compilers, and assemblers), operating systems, and so on. Payroll software, accounting software, MS Office, and so on are examples of application software.

For more, refer to Difference Between System Software and Application Software
FAQs on Application Software

1. How can you Differentiate Between an App and an Application?

Answer:
Apps can be used generally for mobile devices whereas Applications can be termed as a
software program for doing a preferred task.

2. State the Difference Between System Software and Application Software?

Answer:
System Software has the capability to run on its own whereas Application Software is
software that is dependent on the System Software.
For more, refer to the Difference Between System Software and Application Software.

3. How to Choose the Best Application Software?

Answer:
The best application software can be chosen based on the user’s requirements. If it fulfils your requirements, then it is perfect for you.

4. State the Difference Between ‘On-Premise’ and ‘Hosted’ Application


Software?

Answer:
On-premise application software basically runs on a data server inside the organization, whereas hosted application software manages data externally.
4. Discuss

a) Memory organization concept

Memory Organization in Computer Architecture


A memory unit is the collection of storage units or devices together.
The memory unit stores the binary information in the form of bits.
Generally, memory/storage is classified into 2 categories:

 Volatile Memory: This loses its data when power is switched off.

 Non-Volatile Memory: This is a permanent storage and does


not lose any data when power is switched off.

Memory Hierarchy
The total memory capacity of a computer can be visualized as a hierarchy of components. The memory hierarchy system consists of all storage devices contained in a computer system, from the slow auxiliary memory to the fast main memory and the smaller cache memory.

Auxiliary memory access time is generally 1000 times that of the main memory, hence it is at the bottom of the hierarchy.

The main memory occupies the central position because it is


equipped to communicate directly with the CPU and with auxiliary
memory devices through Input/output processor (I/O).

When programs not residing in main memory are needed by the CPU, they are brought in from auxiliary memory. Programs not currently needed in main memory are transferred into auxiliary memory to provide space in main memory for other programs that are currently in use.

The cache memory is used to store program data which is currently


being executed in the CPU. Approximate access time ratio between
cache memory and main memory is about 1 to 7~10
Memory Access Methods

Each memory type is a collection of numerous memory locations. To access data from any memory, it must first be located and then the data is read from the memory location. Following are the methods to access information from memory locations:

1. Random Access: Main memories are random access memories,


in which each memory location has a unique address. Using this
unique address any memory location can be reached in the same
amount of time in any order.

2. Sequential Access: This method allows memory access in a


sequence or in order.

3. Direct Access: In this mode, information is stored in tracks, with


each track having a separate read/write head.

Main Memory

The memory unit that communicates directly with the CPU, auxiliary
memory and Cache memory, is called main memory. It is the central
storage unit of the computer system. It is a large and fast memory
used to store data during computer operations. Main memory is made
up of RAM and ROM, with RAM integrated circuit chips holding the
major share.

 RAM: Random Access Memory

o DRAM: Dynamic RAM, is made of capacitors and


transistors, and must be refreshed every 10~100 ms. It is
slower and cheaper than SRAM.
o SRAM: Static RAM, has a six transistor circuit in each cell
and retains data, until powered off.

o NVRAM: Non-Volatile RAM, retains its data, even when


turned off. Example: Flash memory.

 ROM: Read Only Memory, is non-volatile and is more like a


permanent storage for information. It also stores the bootstrap
loader program, to load and start the operating system when
computer is turned on. PROM(Programmable
ROM), EPROM(Erasable PROM) and EEPROM(Electrically
Erasable PROM) are some commonly used ROMs.

Auxiliary Memory

Devices that provide backup storage are called auxiliary memory. For
example: Magnetic disks and tapes are commonly used auxiliary
devices. Other devices used as auxiliary memory are magnetic drums,
magnetic bubble memory and optical disks.

It is not directly accessible to the CPU, and is accessed using the


Input/Output channels.

Cache Memory

The data or contents of the main memory that are used again and
again by CPU, are stored in the cache memory so that we can easily
access that data in shorter time.
Whenever the CPU needs to access memory, it first checks the cache
memory. If the data is not found in cache memory then the CPU
moves onto the main memory. It also transfers block of recent data
into the cache and keeps on deleting the old data in cache to
accommodate the new one.

Hit Ratio

The performance of cache memory is measured in terms of a quantity


called hit ratio. When the CPU refers to memory and finds the word in
the cache, it is said to produce a hit. If the word is not found in the cache
but is in main memory, it counts as a miss.

The ratio of the number of hits to the total CPU references to memory
is called hit ratio.
Hit Ratio = Hit/(Hit + Miss)
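
As a rough illustration (an addition to these notes, not from the source), the Python sketch below counts hits and misses for a small simulated cache and computes the hit ratio with the formula above; the access pattern and cache size are made-up values.

# Minimal sketch: count cache hits and misses for a stream of memory
# accesses using a tiny LRU-style cache, then compute the hit ratio.
def hit_ratio(accesses, cache_size=4):
    cache = []                       # most recently used address is kept at the end
    hits = misses = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.remove(addr)       # re-inserted below so it becomes most recent
        else:
            misses += 1
            if len(cache) == cache_size:
                cache.pop(0)         # evict the least recently used address
        cache.append(addr)
    return hits / (hits + misses)    # Hit Ratio = Hit / (Hit + Miss)

print(hit_ratio([1, 2, 1, 3, 1, 4, 5, 1, 2]))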

Associative Memory

It is also known as content addressable memory (CAM). It is a


memory chip in which each bit position can be compared. In this the
content is compared in each bit cell which allows very fast table
lookup. Since the entire chip can be compared, contents are randomly
stored without considering addressing scheme. These chips have less
storage capacity than regular memory chips.

b)Data storage concepts



The ubiquity of data formats, record types and volumes has served as the
springboard for different approaches to storing it. From personal data storage
devices to limitless data center and cloud-based data repositories, you now
have many options to organize and store your corporate data.

Direct-attached storage (DAS)

Direct-attached storage stands for all types of physical data storage devices
you can connect to a computer. Portable and affordable — yet only accessible
by one computer at a time — DAS is a standard solution for keeping small-
scale records data backups or for transferring data between devices.

Popular types of direct-attached storage include external hard drives or solid-


state drives (SSD), flash drives (USB sticks) and, although now dwindling in
popularity, CD/DVD disks and other older methods.

Network-attached storage (NAS)

Network-attached storage (NAS) is a special hardware unit, featuring file-level


architecture that can be accessed by more than one device as long as all
users are connected to the internal network. In essence, a NAS unit features
several storage disks or hard drives, processors, RAM, and lightweight
operating systems (OS) for managing access requests.

NAS architecture is relatively simple to establish, and this is why many


organizations rely on it to set up local storage systems for several users.
Offering high accessibility, NAS units can be used for both data storage and
local file sharing. You can also configure it as a data backup site by adding
support for replicated disks or a redundant array of independent disks. NAS
storage is often used as a synonym for unstructured data protocols like
network file system (NFS), server message block (SMB) and object storage.

Storage area networks (SAN)

Storage area networks (SANs) help assemble an even more complex on-
premises data management architecture that features two components:

 A dedicated network for data exchanges with network switches for load
balancing
 Data storage system, consisting of on-premises hardware
The purpose of SAN is to act as a separate "highway" for transmitting data
between servers and storage devices across the organization, in a bypass of
local area networks (LANs) and wide-area networks (WANs). Featuring a
management layer, SANs can be configured to speed up and strengthen
server-to-server, storage-to-server and storage-to-storage connections.

For instance, you can set up a dedicated low-latency data exchange lane
between a server running big data analytics workloads and a storage system
(i.e., data warehouse) hosting the processed data. Doing so helps prevent
bottlenecks and delays for other users on the LANs/WANs. The type of data
storage that makes use of a dedicated SANs is typically defined as "block
storage."

Software-defined storage (SDS)

A software-defined storage (SDS) system is a hardware-independent,


software-based storage architecture that can be used on any computing
hardware platform. While NAS and SAN storage systems require you to use
vendor-supplied OS and supporting software, SDS lets you bring your own
license to any type of x86 server.

The two major benefits of SDS include:

 Scalability: You can reconfigure available hardware devices to


accommodate the required data types and formats better and configure
access for different types of networks or applications, using APIs.
 Total cost of ownership (TCO): Instead of purchasing another NAS unit
compatible with your network infrastructure, you can re-deploy an
existing one, saving on the TCO of your investment and avoid vendor
lock in on the hardware.

Hyperconverged storage (HCS)

Hyperconverged storage allows you to virtualize on-premises storage


resources to create a shared storage pool. Each hardware unit (node) gets
virtualized and then connected in a cluster, which you then manage as a
unified system.

The big boon of HCS is hardware consolidation and associated capital


expenses (CapEx). Typically, 80 percent of data storage spending is tied to
hardware. By using hyperconverged storage architecture and virtualizing
some of your legacy servers, you can significantly reduce maintenance costs.
Additionally, many HCS solutions come with advanced file-management
systems for optimizing data formats and volumes. This means your files can
be packaged in more compact blocks to be dispatched over long distances. In
that sense, HCS can also offer some cost benefits for WANs.

Cloud data storage

Unlike other options, cloud data storage assumes you will (primarily) use
offsite storage of data in public, private, hybrid or multicloud environments that
are managed and maintained by cloud services providers such as Amazon,
Microsoft and Google, among others.

Unlike NAS or SAN, public cloud data storage doesn't require a separate
internal network — all data is accessible via the internet. Also, there are
virtually no limits on scalability since you are renting storage resources from a
third party that effectively offers an endless supply of servers.

But while cloud data storage assumes only operational expenses (OpEx),
these too can add up without proper monitoring and optimization.

As with all the data storage options available, this approach works perfectly in
some applications and presents drawbacks in others.

5. Network

a) discuss topology

Types of Network Topology



In a computer network, there are various ways through which different components can be
connected to one another. Network Topology defines the structure of the network and
how these components are connected to each other.
Types of Network Topology
The arrangement of a network that comprises nodes and connecting lines via sender and
receiver is referred to as Network Topology. The various network topologies are:
 Point to Point Topology
 Mesh Topology
 Star Topology
 Bus Topology
 Ring Topology
 Tree Topology
 Hybrid Topology

Point to Point Topology

Point-to-Point Topology is a type of topology that works on the functionality of the


sender and receiver. It is the simplest communication between two nodes, in which one is
the sender and the other one is the receiver. Point-to-Point provides high bandwidth.

Point to Point Topology

Mesh Topology

In a mesh topology, every device is connected to another device via a particular channel.
In Mesh Topology, the protocols used are AHCP (Ad Hoc Configuration Protocols),
DHCP (Dynamic Host Configuration Protocol), etc.

Mesh Topology

Figure 1: Every device is connected to another via dedicated channels. These channels
are known as links.
 Suppose, the N number of devices are connected with each other in a mesh topology,
the total number of ports that are required by each device is N-1. In Figure 1, there are
5 devices connected to each other, hence the total number of ports required by each
device is 4. The total number of ports required = N * (N-1).
 Suppose, N number of devices are connected with each other in a mesh topology, then
the total number of dedicated links required to connect them is NC2 i.e. N(N-1)/2. In
Figure 1, there are 5 devices connected to each other, hence the total number of links
required is 5*4/2 = 10.
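As a small illustration (added to these notes, not from the source article), the Python sketch below evaluates the two formulas above for any number of devices N.

# Sketch: ports per device and dedicated links needed for a full mesh of n devices.
def mesh_requirements(n):
    ports_per_device = n - 1         # each device connects to every other device
    total_links = n * (n - 1) // 2   # NC2 dedicated point-to-point links
    return ports_per_device, total_links

print(mesh_requirements(5))          # prints (4, 10), matching the 5-device example above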
Advantages of Mesh Topology
 Communication is very fast between the nodes.
 Mesh Topology is robust.
 The fault is diagnosed easily. Data is reliable because data is transferred among the
devices through dedicated channels or links.
 Provides security and privacy.
Drawbacks of Mesh Topology
 Installation and configuration are difficult.
 The cost of cables is high as bulk wiring is required, hence suitable for less number of
devices.
 The cost of maintenance is high.
A common example of mesh topology is the internet backbone, where various internet
service providers are connected to each other via dedicated channels. This topology is
also used in military communication systems and aircraft navigation systems.
For more, refer to the Advantages and Disadvantages of Mesh Topology.

Star Topology

In Star Topology, all the devices are connected to a single hub through a cable. This hub
is the central node and all other nodes are connected to the central node. The hub can be
passive in nature, i.e., a non-intelligent hub that simply broadcasts, or it can be
intelligent, known as an active hub. Active hubs have repeaters in
them. Coaxial cables or RJ-45 cables are used to connect the computers. In Star
Topology, popular Ethernet LAN protocols such as CSMA/CD (Carrier Sense Multiple
Access with Collision Detection) are used.
Star Topology

Figure 2: A star topology having four systems connected to a single point of connection
i.e. hub.
Advantages of Star Topology
 If N devices are connected to each other in a star topology, then the number of cables
required to connect them is N. So, it is easy to set up.
 Each device requires only 1 port i.e. to connect to the hub, therefore the total number
of ports required is N.
 It is robust. If one link fails, only that link is affected; the rest of the network keeps working.
 Fault identification and fault isolation are easy.
 Star topology is cost-effective as it uses inexpensive coaxial cable.
Drawbacks of Star Topology
 If the concentrator (hub) on which the whole topology relies fails, the whole system
will crash down.
 The cost of installation is high.
 Performance is based on the single concentrator i.e. hub.
A common example of star topology is a local area network (LAN) in an office where all
computers are connected to a central hub. This topology is also used in wireless networks
where all devices are connected to a wireless access point.
For more, refer to the Advantages and Disadvantages of Star Topology.

Bus Topology

Bus Topology is a network type in which every computer and network device is
connected to a single cable. It is bi-directional. It is a multi-point connection and a non-
robust topology because if the backbone fails the topology crashes. In Bus Topology,
various MAC (Media Access Control) protocols are followed by LAN ethernet
connections like TDMA, Pure Aloha, CDMA, Slotted Aloha, etc.
Bus Topology

Figure 3: A bus topology with shared backbone cable. The nodes are connected to the
channel via drop lines.
Advantages of Bus Topology
 If N devices are connected to each other in a bus topology, then the number of cables
required to connect them is 1, known as backbone cable, and N drop lines are
required.
 Coaxial or twisted pair cables are mainly used in bus-based networks that support up
to 10 Mbps.
 The cost of the cable is less compared to other topologies, but it is used to build small
networks.
 Bus topology is familiar technology as installation and troubleshooting techniques are
well known.
 CSMA is the most common method for this type of topology.
Drawbacks of Bus Topology
 A bus topology is quite simpler, but still, it requires a lot of cabling.
 If the common cable fails, then the whole system will crash down.
 If the network traffic is heavy, it increases collisions in the network. To avoid this,
various protocols are used in the MAC layer known as Pure Aloha, Slotted Aloha,
CSMA/CD, etc.
 Adding new devices to the network would slow down networks.
 Security is very low.
A common example of bus topology is the Ethernet LAN, where all devices are
connected to a single coaxial cable or twisted pair cable. This topology is also used in
cable television networks. For more, refer to the Advantages and Disadvantages of Bus
Topology.

Ring Topology

A Ring Topology forms a ring by connecting each device to exactly two neighboring
devices. A number of repeaters are used for Ring topology with a large number of nodes,
because if someone wants to send some data to the last node in the ring topology with
100 nodes, then the data will have to pass through 99 nodes to reach the 100th node.
Hence to prevent data loss repeaters are used in the network.
The data flows in one direction, i.e. it is unidirectional, but it can be made bidirectional
by having 2 connections between each Network Node, it is called Dual Ring Topology.
In Ring Topology, the token passing protocol is used by the workstations to
transmit the data.

Ring Topology

Figure 4: A ring topology comprises 4 stations connected with each forming a ring.
The most common access method of ring topology is token passing.
 Token passing: It is a network access method in which a token is passed from one
node to another node.
 Token: It is a frame that circulates around the network.
Operations of Ring Topology
1. One station is known as a monitor station which takes all the responsibility for
performing the operations.
2. To transmit the data, the station has to hold the token. After the transmission is done,
the token is to be released for other stations to use.
3. When no station is transmitting the data, then the token will circulate in the ring.
4. There are two types of token release techniques: Early token release releases the
token just after transmitting the data and Delayed token release releases the token
after the acknowledgment is received from the receiver.
Advantages of Ring Topology
 The data transmission is high-speed.
 The possibility of collision is minimum in this type of topology.
 Cheap to install and expand.
 It is less costly than a star topology.
Drawbacks of Ring Topology
 The failure of a single node in the network can cause the entire network to fail.
 Troubleshooting is difficult in this topology.
 The addition of stations in between or the removal of stations can disturb the whole
topology.
 Less secure.
For more, refer to the Advantages and Disadvantages of Ring Topology.
Tree Topology
This topology is the variation of the Star topology. This topology has a hierarchical flow
of data. In Tree Topology, protocols like DHCP and SAC (Standard Automatic
Configuration ) are used.

Tree Topology

Figure 5: In this, the various secondary hubs are connected to the central hub which
contains the repeater. This data flow from top to bottom i.e. from the central hub to the
secondary and then to the devices or from bottom to top i.e. devices to the secondary hub
and then to the central hub. It is a multi-point connection and a non-robust topology
because if the backbone fails the topology crashes.
Advantages of Tree Topology
 It allows more devices to be attached to a single central hub thus it decreases the
distance that is traveled by the signal to come to the devices.
 It allows parts of the network to be isolated and prioritized from different computers.
 We can add new devices to the existing network.
 Error detection and error correction are very easy in a tree topology.
Drawbacks of Tree Topology
 If the central hub fails, the entire system fails.
 The cost is high because of the cabling.
 If new devices are added, it becomes difficult to reconfigure.
A common example of a tree topology is the hierarchy in a large organization. At the top
of the tree is the CEO, who is connected to the different departments or divisions (child
nodes) of the company. Each department has its own hierarchy, with managers
overseeing different teams (grandchild nodes). The team members (leaf nodes) are at the
bottom of the hierarchy, connected to their respective managers and departments.
For more, refer to the Advantages and Disadvantages of Tree Topology.

Hybrid Topology

This topological technology is the combination of all the various types of topologies we
have studied above. Hybrid Topology is used when the nodes are free to take any form. It
means these can be individuals such as Ring or Star topology or can be a combination of
various types of topologies seen above. Each individual topology uses the protocol that
has been discussed earlier.

Hybrid Topology

Figure 6: The above figure shows the structure of the Hybrid topology. As seen it
contains a combination of all different types of networks.
Advantages of Hybrid Topology
 This topology is very flexible.
 The size of the network can be easily expanded by adding new devices.
Drawbacks of Hybrid Topology
 It is challenging to design the architecture of the Hybrid Network.
 Hubs used in this topology are very expensive.
 The infrastructure cost is very high as a hybrid network requires a lot of cabling and
network devices.
A common example of a hybrid topology is a university campus network. The network
may have a backbone of a star topology, with each building connected to the backbone
through a switch or router. Within each building, there may be a bus or ring topology
connecting the different rooms and offices. The wireless access points also create a mesh
topology for wireless devices. This hybrid topology allows for efficient communication
between different buildings while providing flexibility and redundancy within each
building.
For more, refer to the Advantages and Disadvantages of Hybrid Topology.


b)Devices

Switch, Router, Gateways and Bridge



Network Devices: Network devices, also known as networking hardware, are physical
devices that allow hardware on a computer network to communicate and interact with
one another. For example Repeater, Hub, Bridge, Switch, Routers, Gateway, Brouter, and
NIC, etc.
1. Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal
over the same network before the signal becomes too weak or corrupted to extend the
length to which the signal can be transmitted over the same network. An important point
to be noted about repeaters is that they not only amplify the signal but also regenerate it.
When the signal becomes weak, they copy it bit by bit and regenerate it at
the original strength. It is a 2-port device.
2. Hub – A hub is a basically multi-port repeater. A hub connects multiple wires coming
from different branches, for example, the connector in star topology which connects
different stations. Hubs cannot filter data, so data packets are sent to all connected
devices. In other words, the collision domain of all hosts connected through Hub remains
one. Also, they do not have the intelligence to find out the best path for data packets
which leads to inefficiencies and wastage.
Types of Hub
 Active Hub:- These are the hubs that have their power supply and can clean, boost,
and relay the signal along with the network. It serves both as a repeater as well as a
wiring center. These are used to extend the maximum distance between nodes.
 Passive Hub:- These are the hubs that collect wiring from nodes and power supply
from the active hub. These hubs relay signals onto the network without cleaning and
boosting them and can’t be used to extend the distance between nodes.
 Intelligent Hub:- It works like an active hub and includes remote management
capabilities. They also provide flexible data rates to network devices. It also enables
an administrator to monitor the traffic passing through the hub and to configure each
port in the hub.
3. Bridge – A bridge operates at the data link layer. A bridge is a repeater, with add on
the functionality of filtering content by reading the MAC addresses of the source and
destination. It is also used for interconnecting two LANs working on the same protocol. It
has a single input and single output port, thus making it a 2 port device.
Types of Bridges
 Transparent Bridges:- These are the bridge in which the stations are completely
unaware of the bridge’s existence i.e. whether or not a bridge is added or deleted from
the network, reconfiguration of the stations is unnecessary. These bridges make use of
two processes i.e. bridge forwarding and bridge learning.
 Source Routing Bridges:- In these bridges, routing operation is performed by the
source station and the frame specifies which route to follow. The host can discover
the frame by sending a special frame called the discovery frame, which spreads
through the entire network using all possible paths to the destination.
4. Switch – A switch is a multiport bridge with a buffer and a design that can boost its
efficiency (a large number of ports implies less traffic) and performance. A switch is a data
link layer device. The switch can perform error checking before forwarding data, which
makes it very efficient as it does not forward packets that have errors and forward good
packets selectively to the correct port only. In other words, the switch divides the
collision domain of hosts, but the broadcast domain remains the same.
Types of Switch
1. Unmanaged switches: These switches have a simple plug-and-play design and do not
offer advanced configuration options. They are suitable for small networks or for use
as an expansion to a larger network.
2. Managed switches: These switches offer advanced configuration options such as
VLANs, QoS, and link aggregation. They are suitable for larger, more complex
networks and allow for centralized management.
3. Smart switches: These switches have features similar to managed switches but are
typically easier to set up and manage. They are suitable for small- to medium-sized
networks.
4. Layer 2 switches: These switches operate at the Data Link layer of the OSI model and
are responsible for forwarding data between devices on the same network segment.
5. Layer 3 switches: These switches operate at the Network layer of the OSI model and
can route data between different network segments. They are more advanced than
Layer 2 switches and are often used in larger, more complex networks.
6. PoE switches: These switches have Power over Ethernet capabilities, which allows
them to supply power to network devices over the same cable that carries data.
7. Gigabit switches: These switches support Gigabit Ethernet speeds, which are faster
than traditional Ethernet speeds.
8. Rack-mounted switches: These switches are designed to be mounted in a server rack
and are suitable for use in data centers or other large networks.
9. Desktop switches: These switches are designed for use on a desktop or in a small
office environment and are typically smaller in size than rack-mounted switches.
10. Modular switches: These switches have modular design, which allows for easy
expansion or customization. They are suitable for large networks and data centers.

5. Routers – A router is a device like a switch that routes data packets based on their IP
addresses. The router is mainly a Network Layer device. Routers normally connect LANs
and WANs and have a dynamically updating routing table based on which they make
decisions on routing the data packets. The router divides the broadcast domains of hosts
connected through it.
6. Gateway – A gateway, as the name suggests, is a passage to connect two networks that
may work upon different networking models. They work as messenger agents that take
data from one system, interpret it, and transfer it to another system. Gateways are also
called protocol converters and can operate at any network layer. Gateways are generally
more complex than switches or routers. A gateway is also called a protocol converter.
7. Brouter – It is also known as the bridging router is a device that combines features of
both bridge and router. It can work either at the data link layer or a network layer.
Working as a router, it is capable of routing packets across networks and working as the
bridge, it is capable of filtering local area network traffic.
8. NIC – NIC or network interface card is a network adapter that is used to connect the
computer to the network. It is installed in the computer to establish a LAN. It has a
unique id that is written on the chip, and it has a connector to connect the cable to it. The
cable acts as an interface between the computer and the router or modem. NIC card is a
layer 2 device which means that it works on both the physical and data link layers of the
network model.
c) protocols and port

40 Network Protocols with Port Numbers, Transport Protocols and Meanings

40 Network Protocol Names And Port Numbers With Their Transport Protocols And Meanings
tabulated by Precious Ocansey (HND, Network Engineer).

Before going straight to the table.

Firstly, what are Network Protocols?

Network protocols are the languages and rules used during communication in a computer
network. There are two major transport protocols namely;

TCP and UDP


TCP which stands for “Transmission Control Protocol”, is a suite of communication protocols
used to interconnect network devices on a local network or a public network like the internet.
TCP is known as a “connection-oriented” protocol as it ensures each data packet is delivered as
requested. Therefore, TCP is used for transferring most types of data such as webpages and files
over the Internet.

UDP which stands for “User Datagram Protocol” is part of the TCP/IP suite of protocols used for
data transferring. UDP is known as a “connectionless” protocol, meaning it doesn’t
acknowledge that the packets being sent have been received. For this reason, the UDP protocol is
typically used for streaming media. While you might see skips in video or hear some fuzz in
audio clips, UDP transmission prevents the playback from stopping completely.

Furthermore, TCP includes built-in error checking, which means TCP has more overhead and is
therefore slower than UDP, but it ensures accurate delivery of data between systems. Therefore TCP
is used for transferring most types of data such as webpages and files over the local network or
Internet. UDP is ideal for media streaming which does not require all packets to be delivered.
Port Numbers: They are unique identifiers given to the services that run over these transport
protocols so that the services can be addressed easily.
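
As a hedged illustration (not part of the source article), the Python sketch below looks up a few well-known port numbers using the standard socket module; the results come from the operating system's services database, so availability of each name can vary by system.

import socket

# Sketch: look up well-known TCP port numbers by service name.
# The values come from the OS services database (e.g. /etc/services).
for service in ("ftp", "ssh", "smtp", "http", "https"):
    print(service, socket.getservbyname(service, "tcp"))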

Below, as written by Precious Ocansey, are the 40 network protocols with their port numbers and
transport protocols.

Protocol (Service Name) | Port Number(s) | Transport Protocol | Meaning
1. File Transfer Protocol (FTP) | 20 and 21 | TCP | A protocol that carries data and guarantees that the data will be delivered properly.
2. Secure Shell (SSH) | 22 | TCP and UDP | A cryptographic network protocol used to secure data communication.
3. Telnet | 23 | TCP | A remote management protocol used for managing network devices.
4. Simple Mail Transfer Protocol (SMTP) | 25 | TCP | A communication protocol used to transmit email messages over the internet to the destination server.
5. Domain Name System (DNS) | 53 | TCP and UDP | Performs the simple task of converting IP addresses to domain names that everyone can easily understand.
6. Trivial File Transfer Protocol (TFTP) | 69 | UDP | Typically used by devices, including Cisco devices, to upgrade software and firmware.
7. Hyper Text Transfer Protocol (HTTP) | 80 | TCP | Defines how data is transmitted and formatted; used by the WWW as a channel for communication.
8. Dynamic Host Configuration Protocol (DHCP) | 67 and 68 | UDP | A service used in the client and server model.
9. Post Office Protocol 3 (POP3) | 110 | TCP | A protocol used by e-mail clients to retrieve e-mail from servers.
10. Network News Transport Protocol (NNTP) | 119 | TCP | An application protocol used for transporting USENET news articles between news servers and end-user clients.
11. Network Time Protocol (NTP) | 123 | UDP | Synchronizes time between network devices in the network.
12. NetBIOS | 135 and 139 | TCP and UDP | NetBIOS itself is not a protocol but is typically used in combination with IP via the NetBIOS over TCP/IP protocol.
13. Simple Network Management Protocol (SNMP) | 161 and 162 | TCP and UDP | Has the ability to monitor, configure and control network devices.
14. Lightweight Directory Access Protocol (LDAP) | 389 | TCP and UDP | Provides a mechanism for accessing and maintaining distributed directory information.
15. Transport Layer Security (TLS) | 443 | TCP | A secure socket layer protocol that uses asymmetric keys to transfer data over a network.
16. Real-Time Transport Protocol (RTP) | 1023 to 65535 | UDP | Used for delivering audio and video data over an IP network.
17. Hyper Text Transfer Protocol Secure (HTTPS) | 443 | TCP | Provides authentication and encryption for secure communication with the use of the secure socket layer.
18. Internet Message Access Protocol (IMAP4) | 143 | TCP and UDP | An application layer protocol and an internet standard for e-mail retrieval.
19. Address Resolution Protocol (ARP) | None (operates below the transport layer) | None | Used to resolve a network layer address into a link layer address.
20. Border Gateway Protocol (BGP) | 179 | TCP | Used to maintain very large routing tables and traffic processing.
21. Internet Relay Chat (IRC) | 194 | TCP | An application layer protocol that facilitates communication in the form of text.
22. Session Initiation Protocol (SIP) | (not listed in the source) | TCP and UDP | Used to establish, modify, and terminate multimedia communication sessions such as VoIP.
23. Session Description Protocol (SDP) | (not listed in the source) | TCP | Describes the content of multimedia communication.
24. Remote Desktop Protocol (RDP) | 3389 | TCP | Provides a user with a graphical interface to connect to another computer over a network connection.
25. Server Message Block (SMB) | (not listed in the source) | TCP | An application layer protocol that helps in accessing network resources, such as shared files and printers.
26. Secure File Transfer Protocol (SFTP) | 22 | TCP and UDP | Uses the SSH protocol to access and transfer files over the network.
27. Internet Group Management Protocol (IGMP) | 2 (IP protocol number) | Carried directly over IP | Used by hosts and adjacent routers on IPv4 networks to establish multicast group membership.
28. Route Access Protocol (RAP) | 38 | TCP | (no meaning given in the source)
29. Resource Location Protocol (RLP) | 39 | TCP | Used for determining the location of higher-level services from hosts on a network.
30. Host Name Server Protocol (HNSP) | 42 | TCP | (no meaning given in the source)
31. Internet Control Message Protocol (ICMP) | None (no port) | Carried directly over IP | Used by the ping utility to check the reachability of a device in a network.
32. Remote Directory Access Protocol (RDAS) | (not listed in the source) | TCP | Used to retrieve information about domain names from a central registry.
33. Lightweight Presentation Protocol (LPP) | (not listed in the source) | TCP and UDP | Describes an approach for providing streamlined support of OSI application services on top of TCP/IP-based networks for some constrained environments.
34. Remote Procedure Call Protocol (RPC) | (not listed in the source) | TCP and UDP | A protocol for requesting a service from a program located on a remote computer through a network.
35. Network Address Translation (NAT) | 3022 | TCP and UDP | The method by which IP addresses are mapped from one group to another, transparently to end users.
36. Microsoft Active Directory Protocol (MADP) | 445 | TCP | Used by Microsoft server operating systems for client/server access and file and printer sharing.
37. Calendar Access Protocol (CAP) | 1026 | TCP | Used by Novell GroupWise for its calendar access protocol and also used by the Windows Task Scheduler.
38. Layer Two Tunneling Protocol (L2TP) | 1701 | TCP | Used to connect two private business networks together over an internet connection to create a virtual network.
39. Point To Point Tunneling Protocol (PPTP) | 1723 | TCP | A tunneling and encryption standard used to connect two private business networks together over an internet connection to create a virtual network.
40. Remote Procedure Call (RPC) | 135 | TCP | Holds information regarding which ports and IP addresses the services are currently running on.

d) Communication media

What is Communication Media?

Definition: Communication media is defined as means of delivering or


receiving a message, information, or data. The means through which the
information is passed can be verbal or non-verbal. There has to be a
common language known by both the sender and the receiver to transfer
information successfully.

Different means are used for transmitting data from one source to another.
These two forms of communication media are-

1. Analog
Some of the common examples of analog media are conventional radios,
land-line telephones, VCRs, television transmissions, etc.

2. Digital

Common examples of digital media can be computer networking,


smartphones, computer-mediated communication, e-mail, website,
application, etc.

All in all, such communication mediums act as channels as they help in linking
various sources to pass the information, message, or data. Let us now go
through the types of communication media based upon the methods of
communication

Examples of Popular Communication Media

Given below are some of the types of communication media;

1. Television

Television is a medium of one-way communication where a viewer is shown


information in the form of audiovisual. It can be monochrome or colored. It is
one of the popular sources of spreading information.


2. Radio

Radio is a communication medium where the information is passed on the


audio form. The radio receives signals by modulation of electromagnetic
waves. Its frequencies are said to be below those of visible light.

3. Print

A print is a hard copy of a picture used in a magazine, books, newspaper, etc.


With the help of print, an audience can connect better with the content matter.
4. Internet

The Internet is the largest and the most popular type of communication media.
Almost everything can be searched on the internet. The internet has access to
all the relevant information sought by the audience.

5. Outdoor Media

Such forms of mass media revolve around signs, placards, billboards, etc that
are used inside or outside of vehicles, shops, commercial buildings, stadiums,
etc.

1. Guided Media: It is also referred to as Wired or Bounded transmission


media. Signals being transmitted are directed and confined in a narrow pathway
by using physical links.
Features:
 High Speed
 Secure
 Used for comparatively shorter distances
There are 3 major types of Guided Media:
(i) Twisted Pair Cable –
It consists of 2 separately insulated conductor wires wound about each other.
Generally, several such pairs are bundled together in a protective sheath. They
are the most widely used Transmission Media. Twisted Pair is of two types:
 Unshielded Twisted Pair (UTP):
UTP consists of two insulated copper wires twisted around one another. This
type of cable has the ability to block interference and does not depend on a
physical shield for this purpose. It is used for telephonic applications.

Advantages:
⇢ Least expensive
⇢ Easy to install
⇢ High-speed capacity
Disadvantages:
⇢ Susceptible to external interference
⇢ Lower capacity and performance in comparison to STP
⇢ Short distance transmission due to attenuation
Applications:
Used in telephone connections and LAN networks
 Shielded Twisted Pair (STP):
This type of cable consists of a special jacket (a copper braid covering or a
foil shield) to block external interference. It is used in fast-data-rate Ethernet
and in voice and data channels of telephone lines.

Advantages:
⇢ Better performance at a higher data rate in comparison to UTP
⇢ Eliminates crosstalk
⇢ Comparatively faster
Disadvantages:
⇢ Comparatively difficult to install and manufacture
⇢ More expensive
⇢ Bulky
Applications:
The shielded twisted pair type of cable is most frequently used in extremely cold
climates, where the additional layer of outer covering makes it perfect for
withstanding such temperatures or for shielding the interior components.
(ii) Coaxial Cable –
It has an outer plastic covering containing an insulation layer made of PVC or
Teflon and 2 parallel conductors each having a separate insulated protection
cover. The coaxial cable transmits information in two modes: Baseband
mode(dedicated cable bandwidth) and Broadband mode(cable bandwidth is
split into separate ranges). Cable TVs and analog television networks widely
use Coaxial cables.
Advantages:
 High Bandwidth
 Better noise Immunity
 Easy to install and expand
 Inexpensive
Disadvantages:
 Single cable failure can disrupt the entire network
Applications:
Radio frequency signals are sent over coaxial wire. It can be used for cable
television signal distribution, digital audio (S/PDIF), computer network
connections (like Ethernet), and feedlines that connect radio transmitters and
receivers to their antennas.
(iii) Optical Fiber Cable –
It uses the concept of refraction of light through a core made up of glass or
plastic. The core is surrounded by a less dense glass or plastic covering called
the cladding. It is used for the transmission of large volumes of data.
The cable can be unidirectional or bidirectional. The WDM (Wavelength Division
Multiplexer) supports two modes, namely unidirectional and bidirectional mode.

Advantages:
 Increased capacity and bandwidth
 Lightweight
 Less signal attenuation
 Immunity to electromagnetic interference
 Resistance to corrosive materials
Disadvantages:
 Difficult to install and maintain
 High cost
 Fragile
Applications:
 Medical Purpose: Used in several types of medical instruments.
 Defence Purpose: Used in transmission of data in aerospace.
 For Communication: This is largely used in formation of internet cables.
 Industrial Purpose: Used for lighting purposes and safety measures in
designing the interior and exterior of automobiles.
(iv) Stripline
Stripline is a transverse electromagnetic (TEM) transmission line medium
invented by Robert M. Barrett of the Air Force Cambridge Research Centre in
the 1950s. Stripline is the earliest form of the planar transmission line. It uses a
conducting material to transmit high-frequency waves; it is also called a
waveguide. This conducting material is sandwiched between two layers of the
ground plane which are usually shorted to provide EMI immunity.
(v) Microstripline
In this, the conducting material is separated from the ground plane by a layer of
dielectric.
2. Unguided Media:
It is also referred to as Wireless or Unbounded transmission media. No physical
medium is required for the transmission of electromagnetic signals.
Features:
 The signal is broadcasted through air
 Less Secure
 Used for larger distances
There are 3 types of Signals transmitted through unguided media:
(i) Radio waves –
These are easy to generate and can penetrate through buildings. The sending
and receiving antennas need not be aligned. Frequency Range:3KHz – 1GHz.
AM and FM radios and cordless phones use Radio waves for transmission.
Further Categorized as (i) Terrestrial and (ii) Satellite.
(ii) Microwaves –
It is a line of sight transmission i.e. the sending and receiving antennas need to
be properly aligned with each other. The distance covered by the signal is
directly proportional to the height of the antenna. Frequency Range:1GHz –
300GHz. These are majorly used for mobile phone communication and
television distribution.

Microwave Transmission

(iii) Infrared –
Infrared waves are used for very short distance communication. They cannot
penetrate through obstacles. This prevents interference between systems.
Frequency Range:300GHz – 400THz. It is used in TV remotes, wireless mouse,
keyboard, printer, etc.

6. IP address

a) types
What is an IP Address?
An IP address is a numerical label assigned to the devices connected
to a computer network that uses the IP for communication. An IP address
acts as an identifier for a specific machine on a particular network. It
also helps you to develop a virtual connection between a destination
and a source.

Types of IP address
There are mainly four types of IP addresses:

 Public,
 Private,
 Static
 Dynamic.

Among them, public and private addresses are based on their location
relative to the network: a private IP should be used inside a network while
a public IP is used outside of a network.

Let us see all these types of IP address in detail.

Public IP Addresses
A public IP address is an address where one primary address is
associated with your whole network. In this type of IP address, each of
the connected devices has the same IP address.

This type of public IP address is provided to your router by your ISP.
Private IP Addresses
A private IP address is a unique IP number assigned to every device
that connects to your home internet network, including devices
like computers, tablets, and smartphones used in your
household.

It also likely includes all types of Bluetooth devices you use, like
printers, smart devices like TVs, etc. With a rising industry of
internet of things (IoT) products, the number of private IP addresses
you are likely to have in your own home is growing.
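
As an illustrative addition (not part of the original text), the Python sketch below uses the standard ipaddress module to check whether an address falls in the private ranges; the sample addresses are arbitrary.

import ipaddress

# Sketch: classify sample addresses as private (internal) or public.
for addr in ("192.168.1.10", "10.0.0.5", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")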

Dynamic IP address:
Dynamic IP addresses always keep changing. They are temporary and are
allocated to a device every time it connects to the web. Dynamic IPs
can trace their origin to a collection of IP addresses that are shared
across many computers.

Dynamic IP addresses are another important type of internet protocol


addresses. It is active for a specific amount of time; after that, it will
expire.

Static IP Addresses
A static IP address is an IP address that cannot be changed. In
contrast, a dynamic IP address will be assigned by a Dynamic Host
Configuration Protocol (DHCP) server, which is subject to change.
Static IP address never changes, but it can be altered as part of
routine network administration.

A static IP address is consistent: it is assigned once and stays


the same over the years. This type of IP also helps you procure a lot
of information about a device.
Types of Website IP Addresses
Two types of website IP Addresses are 1) Share IP Address 2)
Dedicated IP Address

Shared IP Addresses:
Shared IP address is used by small business websites that do not yet
get many visitors or have many files or pages on their site. The IP
address is not unique and it is shared with other websites.

Dedicated IP Addresses:
Dedicated IP address is assigned uniquely to each website. Dedicated
IP addresses help you avoid potential blacklists caused by bad
behavior from others on your server. The dedicated IP address also
gives you the option of pulling up your website using the IP address
alone, instead of your domain name. It also helps you to access your
website when you are waiting on a domain transfer.

Version of IP address
Two types of IP addresses are 1)IPV4 and 2) IPV6.

IPV4
IPv4 was the first version of IP. It was deployed for production in the
ARPANET in 1983. Today it is the most widely used IP version. It is
used to identify devices on a network using an addressing system.

IPv4 uses a 32-bit address scheme, allowing it to store 2^32


addresses, which is more than 4 billion addresses. To date, it is
considered the primary Internet Protocol and carries 94% of Internet
traffic.

IPV6
It is the most recent version of the Internet Protocol. The Internet
Engineering Task Force (IETF) initiated it in early 1994. The design and development of
that suite is now called IPv6.
This new IP address version is being deployed to fulfill the need for
more Internet addresses. It was aimed to resolve issues which are
associated with IPv4. With 128-bit address space, it allows 340
undecillion unique address space.
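
To make the address-space sizes concrete, here is a small arithmetic sketch (added for illustration) comparing the 32-bit and 128-bit schemes.

# Sketch: number of addresses allowed by 32-bit (IPv4) and 128-bit (IPv6) schemes.
ipv4_space = 2 ** 32      # roughly 4.3 billion addresses
ipv6_space = 2 ** 128     # roughly 3.4 x 10^38 (about 340 undecillion) addresses
print(f"IPv4: {ipv4_space:,}")
print(f"IPv6: {ipv6_space:,}")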

b) classes

What is an IP Address?
An IP (Internet Protocol) address is a numerical label assigned to the
devices connected to a computer network that uses the IP for
communication.

An IP address acts as an identifier for a specific machine on a particular


network. It also helps you to develop a virtual connection between a
destination and a source. The IP address is also called IP number or
internet address. It helps you to specify the technical format of the
addressing and packets scheme. Most networks combine TCP with
IP.

An IP address consists of four numbers, each of which contains one to
three digits, with a single dot (.) separating each number or set of
digits.

Parts of IP
address
IP Address is divided into two parts:
 Prefix: The prefix part of the IP address identifies the physical
network to which the computer is attached. Prefix is also known
as a network address.
 Suffix: The suffix part identifies the individual computer on the
network. The suffix is also called the host address.
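
As an illustrative sketch (not from the original text), the Python snippet below splits an address into its prefix (network) and suffix (host) parts using the standard ipaddress module; the address and the /24 mask are assumed example values.

import ipaddress

# Sketch: separate the network prefix from the host suffix of an address,
# assuming an example address with a /24 subnet mask.
iface = ipaddress.ip_interface("192.168.178.1/24")
print("Prefix (network):", iface.network)                           # 192.168.178.0/24
print("Suffix (host part):", int(iface.ip) & int(iface.hostmask))   # 1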

Types of IPv4 Classes

IP Header Classes:
Class | Address Range | Subnet Mask | Example IP | Leading bits | Max number of networks | Application
Class A | 1 to 126 | 255.0.0.0 | 1.1.1.1 | 8 | 128 | Used for a large number of hosts.
Class B | 128 to 191 | 255.255.0.0 | 128.1.1.1 | 16 | 16384 | Used for medium-size networks.
Class C | 192 to 223 | 255.255.255.0 | 192.1.11. | 24 | 2097152 | Used for local area networks.
Class D | 224 to 239 | NA | NA | NA | NA | Reserved for multicasting.
Class E | 240 to 254 | NA | NA | NA | NA | Reserved for research and development purposes.
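
As a small illustration of the table above (added to these notes), the Python sketch below derives the class of an IPv4 address from its first octet.

# Sketch: determine the classful class of an IPv4 address from its first octet,
# using the ranges listed in the table above.
def ip_class(address):
    first_octet = int(address.split(".")[0])
    if 1 <= first_octet <= 126:
        return "A"
    if first_octet == 127:
        return "loopback (reserved)"
    if 128 <= first_octet <= 191:
        return "B"
    if 192 <= first_octet <= 223:
        return "C"
    if 224 <= first_octet <= 239:
        return "D"
    return "E"

print(ip_class("102.168.212.226"))   # A
print(ip_class("192.168.178.1"))     # C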

How does IP address work?


IP address works in an IP network like a postal address. For example,
a postal address combines two addresses: your area address and your
house address.

The area address is a group address for all houses that belong
to a specific area. The house address is the unique address of your
home in that area. Here, your area is represented by a PIN code
number.

In this example, the network address comprises all hosts which belong
to a specific network. The host address is the unique address of a
particular host in that network.
What is Classful Addressing?
Classful addressing is the network addressing scheme used in the Internet’s architecture
from 1981 until Classless Inter-Domain Routing (CIDR) was introduced in 1993.

This addressing method divides the IP address into five separate
classes based on the first four address bits.

Here, classes A, B, and C offer addresses for networks of three distinct
network sizes. Class D is only used for multicast, and class E is
reserved exclusively for experimental purposes.

Let’s see each of the network classes in detail:

Class A Network
This IP address class is used when there are a large number of hosts.
In a Class A type of network, the first 8 bits (also called the first octet)
identify the network, and the remaining have 24 bits for the host into
that network.

An example of a Class A address is 102.168.212.226. Here, “102”


helps you identify the network and 168.212.226 identifies the host.

Class A addresses 127.0.0.0 to 127.255.255.255 cannot be used and


are reserved for loopback and diagnostic functions.

Class B Network
In a B class IP address, the binary addresses start with 10. In this IP
address, the class decimal number that can be between 128 to 191.
The number 127 is reserved for loopback, which is used for internal
testing on the local machine. The first 16 bits (known as two octets)
help you identify the network. The other remaining 16 bits indicate the
host within the network.
An example of a Class B IP address is 168.212.226.204, where 168.212
identifies the network and 226.204 helps you identify the host within the
network.

Class C Network
Class C is a type of IP address that is used for the small network. In
this class, three octets are used to identify the network. This IP ranges
between 192 to 223.

In this type of network addressing method, the first two bits are set to
be 1, and the third bit is set to 0, which makes the first 24 bits of the
address the network portion and the remaining bits the host address. Mostly, local
area networks use Class C IP addresses to connect with the network.

Example for a Class C IP address:

192.168.178.1

Class D Network
Class D addresses are only used for multicasting applications. Class
D is never used for regular networking operations. This class
has its first three bits set to “1” and its fourth bit set to
“0”. Class D addresses are 32-bit network addresses. All the values
within the range are used to identify multicast groups uniquely.

Therefore, there is no requirement to extract the host address from the


IP address, so Class D does not have any subnet mask.

Example for a Class D IP address:


227.21.6.173
Class E Network
Class E IP address is defined by including the starting four network
address bits as 1, which allows it to incorporate addresses from
240.0.0.0 to 255.255.255.255. However, E class is reserved, and its
usage is never defined. Therefore, many network implementations
discard these addresses as undefined or illegal.

Example for a Class E IP address:

243.164.89.28

Limitations of classful IP addressing


Here are the drawbacks/ cons of the classful IP addressing method:

 Risk of running out of address space soon


 Class boundaries did not encourage efficient allocation of
address space

Rules for assigning Network ID:


The network ID will be assigned based on the below-given rules:

 The network ID cannot start with 127 because 127 belongs to


class A address and is reserved for internal loopback functions.
 All bits of network ID set to 1 are reserved for use as an IP
broadcast address and cannot be used.
 All bits of network ID are set to 0. They are used to denote a
particular host on the local network and should not be routed.

Summary:
 An IP (Internet Protocol) address is a numerical label assigned to
the devices connected to a computer network that uses the IP for
communication.
 IP Address is divided into two parts: 1) Prefix 2)Suffix
 IP address works in a network like a postal address. For
example, a postal address combines two addresses: your area
address and your house address.
 In a class A type of network, the first 8 bits (also called the first
octet) identify the network, and the remaining have 24 bits for the
host into that network.
 In class B type of network, the first 16 bits (known as two octets)
help you identify the network. The other remaining 16 bits
indicate the host within the network.
 In class C, three octets are used to identify the network. This IP
ranges between 192 to 223.
 Class D addresses are 32-bit network addresses. All the values
within the range are used to identify multicast groups uniquely.
 Class E IP address is defined by including the starting four
network address bits as 1.
 The major drawback of IP address classes is the risk of running
out of address space soon.
 An important rule for assigning a network ID is that the network ID
cannot start with 127, as this number belongs to the class A address range
and is reserved for internal loopback functions.
Unit 3

1. Discuss
a)Basic principles of digital forensics

Basic principles of Digital Forensics


Digital forensics is a critical component of modern investigations. Learn
about the basic principles of digital forensics and its importance in this
informative article.

Digital forensics is a field that involves the recovery and investigation
of digital data for use in legal proceedings. The data may be
recovered from a computer system, a mobile device, or any other
digital storage media.


Digital forensics is essential in modern-day investigations, and its


importance continues to grow as technology advances. In this article,
we will discuss the basic principles of digital forensics.
Principle 1: Preservation of Evidence
The first principle of digital forensics is the preservation of evidence.
The preservation of evidence is critical in any investigation. The digital
forensic analyst must ensure that the evidence collected is not altered,
tampered with, or destroyed. This principle ensures that the evidence
collected is admissible in court and that it has not been tampered with.

To preserve digital evidence, the forensic analyst must create an


exact copy of the original data. This copy, also known as a forensic
image, is an exact replica of the original data and includes all hidden
and deleted files. The forensic analyst must also ensure that the copy
is not altered in any way. This can be achieved by using a write
blocker, which prevents any changes from being made to the original
data during the copying process.
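
As a hedged illustration of this principle (not a prescribed tool or procedure), the Python sketch below compares SHA-256 hashes of the original evidence and the forensic image to confirm the copy has not been altered; the file paths are hypothetical.

import hashlib

# Sketch: verify a forensic image against the original by comparing hashes.
# The paths below are hypothetical placeholders.
def sha256_of(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original_hash = sha256_of("/evidence/original_drive.dd")
image_hash = sha256_of("/evidence/forensic_image.dd")
print("Integrity verified" if original_hash == image_hash else "Image does not match original")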
Principle 2: Identification of Evidence
The second principle of digital forensics is the identification of
evidence. The identification of evidence involves identifying potential
sources of digital evidence that may be relevant to the investigation.
The digital forensic analyst must be able to identify the type of data
that is relevant to the investigation and where it may be located.

There are several sources of digital evidence, including computer


systems, mobile devices, social media accounts, and cloud-based
storage. The forensic analyst must be able to identify potential
sources of evidence and then obtain and analyze that data.
Principle 3: Analysis of Evidence
The third principle of digital forensics is the analysis of evidence. The
analysis of evidence involves examining the data collected to
determine its relevance to the investigation. The forensic analyst must
be able to analyze the data collected to identify any relevant
information, such as file names, timestamps, and user accounts.

The analysis of evidence may involve the use of specialized tools and
techniques, such as file carving, data recovery, and metadata
analysis. The forensic analyst must also be able to interpret the data
collected and provide a report on their findings.
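As a small, hedged example of the metadata analysis mentioned above, the sketch below walks a directory tree (a hypothetical mount point for a working copy of the evidence, never the original) and records file names, sizes and timestamps using only the Python standard library:

# Sketch only: list basic file metadata from a mounted working copy of an image.
import csv
import os
from datetime import datetime, timezone


def collect_metadata(root: str, out_csv: str) -> None:
    """Write path, size and modified/accessed timestamps for every file under root."""
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["path", "size_bytes", "modified_utc", "accessed_utc"])
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                writer.writerow([
                    path,
                    st.st_size,
                    datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat(),
                    datetime.fromtimestamp(st.st_atime, tz=timezone.utc).isoformat(),
                ])


if __name__ == "__main__":
    collect_metadata("/mnt/evidence_copy", "file_metadata.csv")  # hypothetical paths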
Principle 4: Documentation of Evidence
The fourth principle of digital forensics is the documentation of
evidence. The documentation of evidence involves documenting the
steps taken during the investigation and the results obtained. The
forensic analyst must keep a detailed record of their findings and
provide a report on their analysis of the data collected.

The documentation of evidence is important because it provides a


record of the investigation, which can be used in court. The
documentation should include information such as the date and time
of the investigation, the tools used, and the results obtained.
Principle 5: Presentation of Evidence
The fifth principle of digital forensics is the presentation of evidence.
The presentation of evidence involves presenting the findings of the
investigation in a clear and concise manner. The forensic analyst must
be able to present their findings in a way that is easily understood by
both technical and non-technical audiences.

The presentation of evidence may involve the use of charts, graphs,


and other visual aids to help illustrate the findings of the investigation.
The forensic analyst must also be able to explain their findings in plain
language and provide an objective opinion on the evidence collected.

Digital forensics is a rapidly evolving field that is essential in modern-


day investigations. The basic principles of digital forensics, including
the preservation, identification, analysis, documentation, and
presentation of evidence, are critical to the success of any
investigation. Digital forensic analysts must have a strong
understanding of these principles and be able to apply them effectively
in various investigations.

In addition to the five basic principles discussed above, there are


several other important considerations in digital forensics. These
include the legal and ethical considerations of digital forensics, the
importance of keeping up-to-date with new technology, and the need
for ongoing training and education.
Legal and Ethical Considerations
Digital forensics must be conducted in a legally and ethically sound
manner. The forensic analyst must adhere to laws and regulations
governing the collection, analysis, and presentation of digital
evidence. Failure to comply with these laws and regulations can result
in the evidence being thrown out of court or even criminal charges
being filed against the forensic analyst.
Ethical considerations are also important in digital forensics. The
forensic analyst must maintain objectivity and integrity throughout the
investigation, avoiding any bias or personal opinions that could
compromise the investigation. The forensic analyst must also respect
the privacy of individuals and ensure that any personal information
collected is not disclosed unless necessary for the investigation.
Keeping Up-to-Date with New Technology
Digital forensics is a field that is constantly evolving, and forensic
analysts must keep up-to-date with new technology and techniques.
New technologies and devices are constantly being developed, and
forensic analysts must be able to recognize and analyze them.

Staying up-to-date with new technology also helps forensic analysts to


remain competitive in the field and to provide better services to their
clients.
Ongoing Training and Education
To maintain proficiency in digital forensics, forensic analysts must
engage in ongoing training and education. This can include attending
seminars and workshops, participating in online training courses, and
obtaining professional certifications.

Ongoing training and education help forensic analysts to stay up-to-


date with new technologies and techniques and to improve their skills
and knowledge.

Conclusion

Digital forensics is an essential component of modern-day


investigations, and its importance continues to grow as technology
advances. The five basic principles of digital forensics – preservation,
identification, analysis, documentation, and presentation of evidence –
are critical to the success of any investigation.

In addition to these principles, forensic analysts must also be aware of


legal and ethical considerations, keep up-to-date with new technology,
and engage in ongoing training and education. By adhering to these
principles and considerations, forensic analysts can provide reliable
and effective digital forensics services to their clients
b) Methodologies of digital forensics

Digital forensics is the method of analyzing a computer system after an attack has taken
place and looking for evidence. The digital forensic process consists of five steps:

 Identification: finding evidence on electronic devices and saving the data to a safe drive.

 Preservation: isolating, securing, and preserving the data.

 Analysis: reconstructing fragments of data and drawing conclusions based on the evidence.

 Documentation: creating a record of all the visible data.

 Presentation: presenting the digital evidence to the police or a court of law.

2. Discuss design systems with forensic needs in mind


2-1 Design Systems with Forensic Needs in Mind

Tools that are designed for detecting malicious activity on computer networks are rarely designed
with evidence collection in mind. Some organizations are attempting to supplement their existing
systems with forensic tools in order to address authentication issues that arise in court. Other
organizations are implementing additional systems specifically designed to secure digital evidence,
popularly called Network Forensic Analysis Tools (NFATs). The purpose of designing such systems is
to enable digital detectives to monitor and acquire relevant data from a suspect system that can be
treated as digital evidence.

A digital system may be a computer, a network, a mobile device, and so on. All of this equipment can
potentially be used as a tool to run a digital attack against victims, such as a denial-of-service attack
or hacking other computers; on the other hand, it can be used to threaten others, for example by
writing blackmail (a simple definition of blackmail: the crime of threatening to tell secret information
about someone unless the person being threatened gives you money or does what you want) and so
on. Forensic tools can help investigators establish many facts that can be used legally to prosecute a
criminal, provided the evidence is collected in real time.

Because digital data is volatile, it can be removed quickly in a way that makes it hard to trace or to
collect while investigators are gathering data. On a PC, for instance, the data in RAM that reflects the
currently running processes disappears when the computer is powered off; in this scenario,
pre-installing forensic software tools on digital devices can help collect such sensitive data at the
critical time. Other digital systems can be monitored using suitable software; for example, networks
can be supplied with an IDS or Wireshark to monitor and record most activities that can help track
the suspect's actions. When these tools are installed prior to the crime, they help detectives gain a
lot of information about the crime and the attack.

One of the main benefits of designing any system to be forensically minded is that evidence is
collected in a way that helps digital detectives collect, identify and analyze the electronic evidence so
that it is admissible in court. Even so, the rules of electronic evidence must still be followed to
persuade the judge to accept this evidence.
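As one hedged illustration of the kind of lightweight, pre-installed monitoring described above, the sketch below uses the third-party psutil package (an assumption; any comparable API would do) to snapshot running processes and network connections to a log file, the sort of volatile data that would vanish if the machine were powered off:

# Sketch only: a minimal volatile-data snapshot. Assumes the psutil package
# is installed and the script runs with sufficient privileges.
import json
import time

import psutil  # third-party: pip install psutil


def snapshot_volatile_state(out_path: str) -> None:
    """Record running processes and network connections to a JSON log file."""
    state = {
        "captured_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "processes": [
            p.info for p in psutil.process_iter(attrs=["pid", "name", "username"])
        ],
        "connections": [
            {
                "laddr": f"{c.laddr.ip}:{c.laddr.port}" if c.laddr else None,
                "raddr": f"{c.raddr.ip}:{c.raddr.port}" if c.raddr else None,
                "status": c.status,
                "pid": c.pid,
            }
            for c in psutil.net_connections(kind="inet")
        ],
    }
    with open(out_path, "w") as fh:
        json.dump(state, fh, indent=2)


if __name__ == "__main__":
    snapshot_volatile_state("volatile_snapshot.json")  # hypothetical output file

A full NFAT or IDS does far more than this, of course; the point is only that evidence-minded collection is much easier when such instrumentation exists before an incident occurs.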
3. Discuss phases of digital forensics
The Phases of Digital Forensics

Digital forensics is a branch of forensic science that focuses on digital


devices and cybercrime. Through a process of identifying, preserving,
analyzing and documenting digital evidence, forensic investigators
recover and investigate information to aid in the conviction of
criminals. The digital forensic process is extensive, and a secure
environment is necessary to retrieve and preserve digital evidence.
The Nine phases of digital forensics
4. Give a brief introduction to digital forensics tools

Juniper researchers state that cybercrime will cost businesses over 2 trillion USD by
2019. As costs go up, the demand for digital forensic experts will increase in
tandem. Tools are a forensic examiner's best friend – using the right tool helps to move
things faster, improve productivity and gather all the evidence.
Whether it's for an internal human resources case, an investigation into unauthorized
access to a server, or if you just want to learn a new skill, these suites and utilities will
help you conduct memory forensic analysis, hard drive forensic analysis, forensic
image exploration, forensic imaging and mobile forensics. As such, they all provide the
ability to bring back in-depth information about what's 'under the hood' of a system.

Here are my top 10 free tools to become a digital forensic wizard:

1. SIFT Workstation
SIFT (SANS investigative forensic toolkit) Workstation is a freely-available virtual appliance
that is configured in Ubuntu 14.04. SIFT contains a suite of forensic tools needed to perform a
detailed digital forensic examination. It is one of the most popular open-source incident
response platforms.

Download SIFT Workstation

2. Autopsy

Autopsy is a GUI-based open-source digital forensic programme to analyse hard drives and
smartphones efficiently. Autopsy is used by thousands of users worldwide to investigate what
happened on a computer.

Autopsy was designed to be an end-to-end platform, with modules that come out-of-the-box
and others that are available from third parties. Some of the modules provide timeline analysis,
keyword searching, data carving, and Indicator of Compromise using STIX.

Download Autopsy

3. FTK Imager
FTK Imager is a data preview and imaging tool used to acquire data (evidence) in a
forensically sound manner by creating copies of data without making changes to the original
evidence. It saves an image of a hard disk, in one file or in segments, which may be
reconstructed later on. It calculates MD5 hash values and confirms the integrity of the data
before closing the files.

Download FTK Imager
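Since FTK Imager can store an image in segments and verifies it with MD5 hashes, a hedged sketch of the same verification idea in Python is shown below; the segment naming pattern and the recorded hash are hypothetical placeholders:

# Sketch only: recombine hypothetical image segments (image.001, image.002, ...)
# and compute an MD5 to compare against the value recorded at acquisition time.
import glob
import hashlib


def md5_of_segments(pattern: str) -> str:
    digest = hashlib.md5()
    for segment in sorted(glob.glob(pattern)):
        with open(segment, "rb") as fh:
            for chunk in iter(lambda: fh.read(1024 * 1024), b""):
                digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    recorded_md5 = "0123456789abcdef0123456789abcdef"  # placeholder from the acquisition log
    actual_md5 = md5_of_segments("evidence/image.0*")   # hypothetical segment files
    print("verified" if actual_md5 == recorded_md5 else "hash mismatch")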

4. DEFT
DEFT is a household name when it comes to digital forensics and intelligence activities. The
Linux distribution DEFT is made up of a GNU/Linux system and DART (Digital Advanced Response
Toolkit), a suite dedicated to digital forensics and intelligence activities. On boot, the system
does not use the swap partitions on the system being analysed. During system startup, there are
no automatic mount scripts.

Download DEFT

5. Volatility
Also built into SIFT, Volatility is an open-source memory forensics framework for incident
response and malware analysis. It is written in Python and supports Microsoft Windows, Mac
OS X, and Linux (as of version 2.5).

Forensic analysis of a raw memory dump can be performed on a Windows platform. The
Volatility tool can be used to determine whether the PC is infected or not. Subsequently, the
malicious programme can be extracted from the running processes captured in the memory dump.

Download Volatility
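A hedged example of driving Volatility from a script is sketched below; the interpreter, vol.py location, memory image name and profile are all assumptions that differ per installation (Volatility 2.x syntax shown, in line with the version mentioned above):

# Sketch only: run Volatility 2.x's pslist plugin on a memory image via
# subprocess. Paths, image name and profile are assumptions.
import subprocess

cmd = [
    "python2", "vol.py",
    "-f", "memdump.raw",
    "--profile=Win7SP1x64",
    "pslist",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)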

6. LastActivityView
LastActivityView is a tool for the Windows operating system that collects information from
various sources on a running system, and displays a log of actions made by the user and events
that occurred on this computer.
The activity displayed by LastActivityView includes: Running an .exe file, opening open/save
dialog-box, opening file/folder from Explorer or other software, software installation, system
shutdown/start, application or system crash and network connection and disconnection.

Download LastActivityView

7. HxD
HxD is a carefully designed and fast hex editor which, in addition to raw disk editing and
modifying of main memory (RAM), handles files of any size. The easy-to-use interface offers
features such as searching and replacing, exporting, checksums/digests, insertion of byte
patterns, a file shredder, concatenation or splitting of files, statistics and much more.

Download HxD
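For a feel of what a hex editor such as HxD displays, here is a minimal, illustrative Python hex-dump sketch (the file name is a hypothetical example):

# Sketch only: print the first bytes of a file in a hex-dump layout.
def hex_dump(path: str, length: int = 256, width: int = 16) -> None:
    with open(path, "rb") as fh:
        data = fh.read(length)
    for offset in range(0, len(data), width):
        row = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in row)
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in row)
        print(f"{offset:08x}  {hex_part:<{width * 3}} {ascii_part}")


if __name__ == "__main__":
    hex_dump("suspect_file.bin")  # hypothetical file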

8. CAINE
CAINE offers a complete forensic environment that is organised to integrate existing software
tools as software modules and to provide a friendly graphical interface. This is a digital
forensics platform and graphical interface to the Sleuth Kit and other digital forensics tools.

Download CAINE

9. Redline
Redline is a free endpoint security tool that provides host investigative capabilities to users to
find signs of malicious activity through memory and file analysis and the development of a
threat assessment profile.

Redline can help audit and collect all running processes and drivers from memory, file-system
metadata, registry data, event logs, network information, services, tasks and web history; and
analyse and view imported audit data, including the ability to filter results around a given
timeframe.

Download Redline
10. PlainSight
PlainSight is a versatile computer forensics environment that allows you to perform forensic
operations such as: getting hard disk and partition information, extracting user and group
information, examining Windows firewall configuration, examining physical memory dumps,
extracting LanMan password hashes and previewing a system before acquiring it.

Download PlainSight

This is by no means an extensive list and may not cover everything you need for an
investigation, but it's a great starting point to becoming a forensic examiner. If you find any
other tool useful, please leave a comment below.

5. Life of a digital forensics investigator – outline

A day in the life of a Digital Forensics Analyst


The cases investigated by a Digital Forensics Analyst can vary from minor
breaches to multi-million-dollar lawsuits! No matter the size of the
investigation, each one must be carried out as accurately, thoroughly and
responsibly as possible.
While each case is different, Digital Forensics Analysts often use a similar
process to carry out their investigations successfully.
Here’s an example of the step-by-step process you may use as a Digital
Forensics Investigator. During a single day, you could be focused on any part
of this investigation process.

1. Preparation & Prioritisation


The first step in any investigation is to make a plan! You need to think about
your approach and priorities for a particular case.
Gathering legitimate evidence for court is often the top priority, but speed can
sometimes be the more critical factor, even if that means some evidence will not be
admissible in court. Your priorities shape how you will carry out your analysis!
2. Identification & Preservation
Once you’ve planned your investigation, it’s now time to identify the evidence
and preserve the information on it!
How do you preserve the data in a forensics investigation? There are a few
methods to ensure you analyse the information without manipulating and
invalidating the original data.

1. Make copies of the relevant data so you can work from them rather than
the original.
2. Consider using a write blocker. A write blocker is a hardware restriction
that allows forensics analysts to read data without changing it.
Once you have identified the evidence, the next crucial step is to
preserve the information on it.

3. Analysis
Now that you’ve preserved the evidence, you can begin analysing it. You’ll
use this information to determine how the cyber criminal breached the system
and what data they stole, modified or wiped.
Examples of the methods a Digital Forensics Analyst may use to conduct
their analysis:

 Steganography – find hidden messages and passwords!


 Event logs and log files – uncover hardware and software actions and
different types of logins.
 File integrity and hashes - identify if any unauthorised changes have been
made to a file and prove that you haven’t changed it.
 Memory captures – take a snapshot of a system’s RAM to review processes
that were running and data available in memory.

4. Documentation
As you document your findings, avoid assumptions and ensure your
conclusions are verifiable and accurate. If you make a mistake during
documentation, it could be used to prove that your evidence is untrustworthy!
Here are some precautions Digital Forensics Analysts take to ensure the
evidence holds up in a court case!

 Use a non-ring-bound notepad when writing conclusions to identify later if


pages are ripped out.
 Any physical evidence such as hard drives or USB sticks goes into sealable
named and dated bags. Take all necessary precautions to ensure the
evidence is not tampered with or affected.
 Consider the chain of custody for each piece of evidence. When evidence is
passed to other investigative bodies, the date and who it was given to should
be noted. Recording the chain of custody ensures that the evidence can be
accounted for.
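The chain-of-custody record described in the last point can be kept as a simple structured log; the sketch below is one hypothetical way of recording a transfer in Python, with field names chosen for illustration rather than prescribed by any standard:

# Sketch only: a minimal chain-of-custody entry. Field names are illustrative.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class CustodyEvent:
    evidence_id: str   # e.g. exhibit or bag number
    action: str        # "seized", "transferred", "analysed", ...
    handed_from: str
    handed_to: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    notes: str = ""


def sha256_of(path: str) -> str:
    """Hash an evidence file so later handlers can confirm it is unchanged."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    event = CustodyEvent(
        evidence_id="EXHIBIT-001",
        action="transferred",
        handed_from="First responder",
        handed_to="Forensic lab",
        notes=f"image sha256={sha256_of('forensic_image.dd')}",  # hypothetical file
    )
    print(event)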

5. Presentation
Presenting your investigation’s findings is the final step of the process.
You should present findings without bias and in chronological order. Be
thorough, accurate and verify your conclusions throughout the investigation. If
you are presenting in court, your evidence is more likely to hold up and crack
the case!
Does the life of a Digital Forensics Analyst sound interesting? Why not try
your hand at some of the skills needed to analyse a piece of evidence. From
steganography to file hashes, play through real-world simulations in
the CyberStart Forensics base!
6. Data acquisition

a) Discuss principles of data acquisition

What Is Data Acquisition in Digital Forensics?


The gathering and recovery of sensitive data during a digital forensic investigation is
known as data acquisition. Cybercrimes often involve the hacking or corruption of data.
Digital forensic analysts need to know how to access, recover, and restore that data as
well as how to protect it for future management. This involves producing a forensic
image from digital devices and other computer technologies.
Digital forensic analysts must be fully trained in the process of data acquisition.
However, they are not the only ones who should understand how data acquisition
works. Other IT positions that require knowledge of data acquisition include data
analyst, penetration tester, and ethical hacker.
Moreover, the entire organization should understand the basics of how cybercrime
works, including the importance of not intruding into hacked computer systems. Just as
in a real-life crime scene, a “civilian” who stumbles into a digital crime scene can
inadvertently destroy evidence or otherwise corrupt the crime scene, impeding later
investigation. This speaks to the need to ensure that an entire business operation has
cybersecurity training that covers the basics of proper information technology use, anti-
phishing techniques, and network security (EC-Council, 2020).
What Are the Most Commonly Used Data Acquisition Methods?
Certified computer examiners receive training in a range of data acquisition methods, as
different situations may call for the use of different techniques. Digital forensic
investigators need to be aware of the various types of data acquisition methods and
should know when to choose one method over another. Above all, they need to make
sure that the methods they choose do not damage the evidence in question. The most
commonly used techniques are described below (Nelson et al., 2010). While other
forms of data acquisition may also be used in some instances, this usually occurs only
in highly specialized cases.
Bit-stream disk-to-image files
This is the most common data acquisition method in the event of a cybercrime. It
involves cloning a disk drive, which allows for the complete preservation of all
necessary evidence. Programs used to create bit-stream disk-to-image files include
FTK, SMART, and ProDiscover, among others.
Bit-stream disk-to-disk files
When it is not possible to create an exact copy of a hard drive or network, different tools can be
used to create a disk-to-disk copy. While certain parameters of the hard drive may be changed,
the files will remain the same.

Logical acquisition
Logical acquisition involves collecting files that are specifically related to the case under
investigation. This technique is typically used when an entire drive or network is too large to be
copied.
Sparse acquisition
Sparse acquisition is similar to logical acquisition, but it also collects fragments of
unallocated (deleted) data. It is typically used when only a small portion of a very large
drive needs to be examined.

b) Discuss evidence handling standards

Principle 1

In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner
safeguarding the accuracy and reliability of the evidence, law enforcement and forensic
organizations must establish and maintain an effective quality system. Standard Operating
Procedures (SOPs) are documented quality-control guidelines that must be supported by proper
case records and use broadly accepted procedures, equipment, and materials.

Standards and Criteria 1.1


All agencies that seize and/or examine digital evidence must maintain an appropriate SOP
document. All elements of an agency’s policies and procedures concerning digital evidence must be
clearly set forth in this SOP document, which must be issued under the agency’s management
authority.

Discussion. The use of SOPs is fundamental to both law enforcement and forensic science.
Guidelines that are consistent with scientific and legal principles are essential to the acceptance of
results and conclusions by courts and other agencies. The development and implementation of
these SOPs must be under an agency’s management authority.

Standards and Criteria 1.2


Agency management must review the SOPs on an annual basis to ensure their continued suitability
and effectiveness.

Discussion. Rapid technological changes are the hallmark of digital evidence, with the types,
formats, and methods for seizing and examining digital evidence changing quickly. In order to
ensure that personnel, training, equipment, and procedures continue to be appropriate and effective,
management must review and update SOP documents annually.

Standards and Criteria 1.3


Procedures used must be generally accepted in the field or supported by data gathered and
recorded in a scientific manner.

Discussion. Because a variety of scientific procedures may validly be applied to a given problem,
standards and criteria for assessing procedures need to remain flexible. The validity of a procedure
may be established by demonstrating the accuracy and reliability of specific techniques. In the digital
evidence area, peer review of SOPs by other agencies may be useful.

Standards and Criteria 1.4


The agency must maintain written copies of appropriate technical procedures.

Discussion. Procedures should set forth their purpose and appropriate application. Required
elements such as hardware and software must be listed and the proper steps for successful use
should be listed or discussed. Any limitations in the use of the procedure or the use or interpretation
of the results should be established. Personnel who use these procedures must be familiar with
them and have them available for reference.

Standards and Criteria 1.5


The agency must use hardware and software that is appropriate and effective for the seizure or
examination procedure.

Discussion. Although many acceptable procedures may be used to perform a task, considerable
variation among cases requires that personnel have the flexibility to exercise judgment in selecting a
method appropriate to the problem.

Hardware used in the seizure and/or examination of digital evidence should be in good operating
condition and be tested to ensure that it operates correctly. Software must be tested to ensure that it
produces reliable results for use in seizure and/or examination purposes.

Standards and Criteria 1.6

All activity relating to the seizure, storage, examination, or transfer of digital evidence must be
recorded in writing and be available for review and testimony.

Discussion. In general, documentation to support conclusions must be such that, in the absence of
the originator, another competent person could evaluate what was done, interpret the data, and
arrive at the same conclusions as the originator.

The requirement for evidence reliability necessitates a chain of custody for all items of evidence.
Chain-of-custody documentation must be maintained for all digital evidence.

Case notes and records of observations must be of a permanent nature. Handwritten notes and
observations must be in ink, not pencil, although pencil (including color) may be appropriate for
diagrams or making tracings. Any corrections to notes must be made by an initialed, single strikeout;
nothing in the handwritten information should be obliterated or erased. Notes and records should be
authenticated by handwritten signatures, initials, digital signatures, or other marking systems.

Standards and Criteria 1.7


Any action that has the potential to alter, damage, or destroy any aspect of original evidence must
be performed by qualified persons in a forensically sound manner.

Discussion. As outlined in the preceding standards and criteria, evidence has value only if it can be
shown to be accurate, reliable, and controlled. A quality forensic program consists of properly trained
personnel and appropriate equipment, software, and procedures to collectively ensure these
attributes.

Comments

SWGDE’s proposed standards for the exchange of digital evidence will be posted on the National
Forensic Science Technology Center, Law Enforcement Online, and IOCE Web sites in the near
future.
Comments and questions concerning the proposed standards may be forwarded to
whitcomb@mail.ucf.edu or mpollitt.cart@fbi.gov

c) Discuss processing of digital data

There are five steps in a digital forensics investigation, the first two of which are the
most critical during data acquisition (EC-Council, 2021b):
 Identification
 Preservation
 Analysis
 Documentation
 Presentation
The first stage involves ensuring that all files and evidence related to the ongoing
investigation have been properly identified. This involves conducting an appropriate
examination of the device or network in question as well as interviewing the individuals
involved in the network breach. These individuals may have guidance for your
investigation or other useful information and may be able to tell you how the breach in
question occurred.
The second stage is preservation of evidence: maintaining the data in the state in which
it is found for later examination and analysis. No one else should be able to access the
information in question. After completing these steps, you can move on to copying,
examining, and analyzing the evidence.
Properly identifying and preserving evidence ensures that it can be analyzed.
Accurately identified and preserved evidence can help digital forensic investigators
understand how the data damage occurred, what hacking methods were used, and how
individuals and organizations can prevent similar cyberattacks in the future. These
conclusions must be supported by the evidence, which is confirmed in the
documentation step. All evidence is then placed into a presentation that can be given to
others.
Proper management of data acquisition is critical in any investigation. However, it’s only
the first step in properly conducting digital forensics and protecting your clients’
information. At EC-Council, we offer internationally recognized trainings and
certifications, such as the Certified Hacking Forensic Investigator (C|HFI) program. By
completing this course and earning your C|HFI certification, you’ll learn about a variety
of aspects of the field of digital forensics, including digital data acquisition. Start your
certification journey with EC-Council today!

unit 4

1.Evidence collection
a) discuss rules of evidence

There are five rules of collecting electronic evidence. These relate to five properties that
evidence must have to be useful.

1. Admissible

2. Authentic

3. Complete

4. Reliable

5. Believable

Admissible

Admissibility is the most basic rule: the evidence must be able to be used, in court or
otherwise. Failure to comply with this rule is equivalent to not collecting the evidence in the
first place, except that the cost is higher.

Authentic

If you can’t tie the evidence positively with the incident, you can’t use it to prove anything.
You must be able to show that the evidence relates to the incident in a relevant way.

Complete

It’s not enough to collect evidence that just shows one perspective of the incident. Not only
should you collect evidence that can prove the attacker’s actions, but also evidence that
could prove their innocence. For instance, if you can show the attacker was logged in at the
time of the incident, you also need to show who else was logged in, and why you think they
didn’t do it. This is called exculpatory evidence, and is an important part of proving a case.

Reliable

The evidence you collect must be reliable. Your evidence collection and analysis
procedures must not cast doubt on the evidence's authenticity and veracity.

Believable

The evidence you present should be clearly understandable and believable by a jury.
There’s no point presenting a binary dump of process memory if the jury has no idea what it
all means. Similarly, if you present them with a formatted, human-understandable version,
you must be able to show the relationship to the original binary, otherwise there’s no way for
the jury to know whether you’ve faked it.
Using the preceding five rules, you can derive some basic do’s and don’ts:

o Minimize handling/corruption of original data

o Account for any changes and keep detailed logs of your actions

o Comply with the five rules of evidence

o Do not exceed your knowledge

o Follow your local security policy

o Capture as accurate an image of the system as possible

o Be prepared to testify

o Ensure your actions are repeatable

o Work fast

o Proceed from volatile to persistent evidence

o Don’t shut down before collecting evidence

o Don’t run any programs on the affected system

Minimize Handling/Corruption of Original Data

Once you’ve created a master copy of the original data, don’t touch it or the original itself—
always handle secondary copies. Any changes made to the originals will affect the outcomes of
any analysis later done to copies. You should make sure you don’t run any programs that modify
the access times of all files (such as tar and xcopy). You should also remove any external
avenues for change and, in general, analyze the evidence after it has been collected.

Account for Any Changes and Keep Detailed Logs of Your Actions

Sometimes evidence alteration is unavoidable. In these cases, it is absolutely essential that the
nature, extent, and reasons for the changes be documented. Any changes at all should be
accounted for—not only data alteration but also physical alteration of the originals (i.e., the
removal of hardware components).

Comply with the Five Rules of Evidence


The five rules are there for a reason. If you don’t follow them, you are probably wasting your
time and money. Following these rules is essential to guaranteeing successful evidence
collection.

Do Not Exceed Your Knowledge

If you don’t understand what you are doing, you can’t account for any changes you make and
you can’t describe what exactly you did. If you ever find yourself “out of your depth,” either go
and learn more before continuing (if time is available) or find someone who knows the territory.
Never soldier on regardless—you’re just damaging your case.

Follow Your Local Security Policy

If you fail to comply with your company’s security policy, you may find yourself with some
difficulties. Not only may you end up in trouble (and possibly fired if you’ve done something
really against policy), but also you may not be able to use the evidence you’ve gathered. If in
doubt, talk to those who know.

Capture as Accurate an Image of the System as Possible

Capturing an accurate image of the system is related to minimizing the handling or corruption of
original data—differences between the original system and the master copy count as a change to
the data. You must be able to account for the differences.

Be Prepared to Testify

If you’re not willing to testify to the evidence you have collected, you might as well stop before
you start. Without the collector of the evidence being there to validate the documents created
during the evidence-collection process, the evidence becomes hearsay, which is inadmissible.
Remember that you may need to testify at a later time.

Ensure That Your Actions Are Repeatable

No one is going to believe you if they can’t replicate your actions and reach the same results.
This also means that your plan of action shouldn’t be based on trial-and-error.

Work Fast

The faster you work, the less likely the data is going to change. Volatile evidence may vanish
entirely if you don’t collect it in time. This is not to say that you should rush—you must still be
collecting accurate data. If multiple systems are involved, work on them in parallel (a team of
investigators would be handy here), but each single system should still be worked on
methodically. Automation of certain tasks makes collection proceed even faster.

Proceed from Volatile to Persistent Evidence


Some electronic evidence is more volatile than other evidence. Because of this, you
should always try to collect the most volatile evidence first.

Don’t Shut Down before Collecting Evidence

You should never, ever shut down a system before you collect the evidence. Not only do you lose
any volatile evidence, but the attacker may also have trojaned (trojan horse) the startup and
shutdown scripts, Plug-and-Play devices may alter the system configuration and temporary file
systems may be wiped out. Rebooting is even worse and should be avoided at all costs. As a
general rule, until the compromised disk is finished with and restored, it should never be used as
a boot disk.

Don’t Run Any Programs on the Affected System

Because the attacker may have left trojaned programs and libraries on the system, you may
inadvertently trigger something that could change or destroy the evidence you’re looking for.
Any programs you use should be on read-only media (such as a CD-ROM or a write-protected
floppy disk), and should be statically linked.

b) Jurisdiction

Jurisdiction In Cyberspace

A fast-paced world, and surprisingly fitting in one’s hand. The world is in the era of
“internet and cyberspace”, and it seems faster and better than ever. But it all comes
with a price, that mankind is still in the exploration of. Just as in the real and physical
world, the virtual space created by humans also sees a plethora of criminal activities
on a day to day basis where the data of millions of people acts as valuable assets. It
has the power to instigate a civil war or to destroy nations altogether, steal data for
ransom, or even rob millions from a bank in seconds. It becomes quite a challenge to
map out a conclusive set of applicable laws to contain this mass virtual force. The
major obstacle is how personal jurisdiction is to be applied when these offences
are prosecuted.

This article breaks down how the legal principles have evolved while determining
personal jurisdiction in cyberspace.

Cyberspace- The Virtual Universe

Cyberspace is an imaginary area or a virtual space where a connection can be


established between two computers at any two points in the world, with absolutely no
limits.
The word ‘cyberspace’ was first used in 1984 in William Gibson’s science-fiction
novel ‘Neuromancer’, where it is described as an interaction between the human
mind and computers.

While cyberspace and the internet share very similar connotations, cyberspace can be
defined as anything that is done using the internet, while the internet is a network of
networks.

In layman terms “cyberspace” is a virtual universe made up of the widely spread and
interconnected digital gadgets and technology, enabling one to create, modify, share,
exchange, extract and destroy the physical resources floating all over the internet.

The world we live in is possibly at its simplest yet most sophisticated point in time,
and we can only hope it continues to bring innovative new changes. The world seems
so much smaller at our fingertips, and lives have collectively become easier.
Education, e-commerce, shopping, banking, and almost every other essential has
taken its spot on the internet. In fact, some of the richest multinational companies,
such as Google and Facebook, are empires built virtually on nothing but data. The
huge number of users are the customers, and their personal information is the asset.
Each of these businesses runs on nothing but loads of information, some private, some
not, and it becomes necessary to be hyper-vigilant about providing our personal
information, because of the immense threats that tag along with this mighty tool.

With business transactions moving online, the conventional methods of dealing with
legal complications are also in need of remoulding to fit into the present, needful
circumstances.

It is often very ambiguous to decipher what place holds jurisdiction over disputes that
arise in the vast cyberspace. In her paper “Principles of Jurisdiction”, Betsy
Rosenblatt states that “a court must first decide “where” the internet conduct takes
place, and what it means for internet activity to have an “effect” within a state or a
nation”. [1]

The concept of national borders and distance stands irrelevant in cyberspace. By


setting up a website from a home computer, here in India, one can grant access to
anybody around the world, making communication a piece of cake. While
communication is easier, the legal threats posed are quite drastic.
Threats To Cyberspace
With the amount of information being constantly exchanged, the threats in cyberspace
are equally large. It is also important to register the intensity of changes the
cyberspace is constantly subjected to, which concurrently aids in the advancement of
the cyberattacks.

Cyberattacks can range from personal data breaches to mass frauds, each of which is
equally dangerous and harmful, putting one’s usage of cyberspace at risk.

Cyberattacks occur when internet users use malicious manoeuvres to steal, destroy,
expose, or gain unauthorized access to the personal information of a person, a
company, military databases, and so on.

Cyberattacks are a part of cyber warfare, where cyberspaces containing classified
military information are attacked for war and other military purposes, and of cyber
terrorism, where cyberspaces are used to conduct violent criminal activities.

Some of these common cyberattacks include phishing, identity theft, ransomware,
hacking, child pornography, malware, credit or debit card frauds, and disinformation,
all of which can harm an individual, property or a nation.

What Is Personal Jurisdiction?


Personal jurisdiction refers to the jurisdiction exerted by law, over a person in
deciding a particular lawsuit. It also operates along with the due procedure of law
established by the constitution of that country. Personal jurisdiction in cyberspace has
evolved, one case law at a time, like cyberspace itself. The advancements are
constant; hence it poses a challenge for the laws to keep up with it.

Due to its versatile and inconsistent nature, absence of physical boundaries and
dynamic space structures, containing cyberspace in the bounds of a few specific laws
and assigning jurisdiction becomes quite a task.

To break it down, a “cyberspace” is created by a computer, and this virtual space


“holds” all information. All physical transactions and all the legal connotations attached
to them go into overdrive in cyberspace.

“A transaction in cyberspace fundamentally involves three parties. The user, the


server host and the person with whom the transaction is taking place with the need to
be put within one jurisdiction.” [2]
In terms of personal jurisdiction, to separate cyberspace disputes into domestic or
international ones, it is important to distinguish them based on (i) what has happened,
(ii) where it has happened, and (iii) why it happened.

Hence, a resident shall inevitably be tried under municipal laws, but there persists
ambiguity while dealing with non-residents.

Traditionally, jurisdiction is exerted by a court in specific matters by terms of


territory, subject matter, or the applicable law.

Often involving multiple countries in one single transaction on cyberspace, it is


challenging to dissect the disputes arising into the laws of one particular country. One
of the ultimate recourses could be sought under Public International Law, to eliminate
jurisdictional clashes between countries and conflicts of law arising out of it, using the
principles of “personal jurisdiction”. Jurisdiction, under International Law is of three
types: (1) jurisdiction to prescribe; (2) jurisdiction to enforce; and (3) jurisdiction to
adjudicate. To replicate these into cyberspace, one can consider the ‘law of the
server’, that is, the physical position of the server or where the webpage is located and
claim the jurisdiction of that country. However, these principles are of no use when
cyberspace is used to commit terrorist activities while maintaining the anonymity
of its servers.

Personal Jurisdiction in Cyberspace Around the


World
The United States, having one of the strongest cyberspace laws in force, while
formulating principles to deal with cases of cyberspaces, stood by the concept of
‘minimum contacts’, a standard that was outlined by the Court in International Shoe v
Washington, 1945. The Court ruled that a non-resident of a state may be sued in that
state if the party has ‘certain minimum contacts with [the state] such that maintenance
of the suit does not offend traditional notions of fair play and substantial justice.’[3]

A US federal district court later laid down the “Zippo test” or the “sliding scale test”,
holding that “In the absence of general jurisdiction, specific jurisdiction permits a court to
exercise personal jurisdiction over a non-resident defendant for forum-related
activities where the relationship between the defendant and the forum falls within the
‘minimum contacts’ framework”, and classified websites as (i) passive, (ii) interactive
and (iii) integral to the defendant’s business.

The difficulty experienced with the application of the Zippo sliding scale test paved
the way for the application of “the effects test”. The courts have thus moved from a
‘subjective territoriality’ test to an ‘objective territoriality’ or ‘the effects test’ in
which the forum court will exercise jurisdiction if it is shown that effects of the
defendant’s website are felt in the forum state. In other words, it must have resulted in
some harm or injury to the plaintiff within the territory of the forum state- as
pronounced primarily in Calder v. Jones.

The recent lawsuit by the International League Against Racism and Anti-Semitism
and the Union of French Law Students against Yahoo!, (Yahoo! Inc., v La Ligue
Contre Le Racisme Et L’Antisémitisme), which has received a lot of attention in the
popular press, summarizes the difficulties that remain in resolving both the
prescriptive and enforcement jurisdictional issues in cyberspace.

It appears that courts and legislatures have found legitimate grounds for asserting
prescriptive jurisdiction over defendants based upon actions taken in cyberspace, but
that may have little importance when the plaintiff seeks a restorative remedy.
Enforcement jurisdiction, which requires the injured party to attach either the
defendant or his tangible assets, becomes an issue of comity or state’s recognition of
its obligation to enforce a law. [4]
“In sum, under U.S. law. if it is reasonable to do so, a court in one state will exercise
jurisdiction over a party in another state or country whose conduct has substantial
effects in the state and whose conduct constitutes sufficient contacts with the state to
satisfy due process. Because this jurisdictional test is ambiguous, courts in every state
of the U.S. may be able to exercise jurisdiction over parties anywhere in the world,
based solely on Internet contacts with the state.”[5]

In European countries, the jurisdiction of cyberspace is determined by the Brussels


Regulations by extending its operations to online disputes and states that “subject to
the provisions of this Regulation, persons domiciled in a contracting state shall,
whatever their nationality, be sued in the courts of that state”- thus eliminating the
ambiguity of jurisdiction.

Germany has passed a law that subjects any Web site accessible in Germany to
German law, holding Internet service providers (ISPs) liable for violations of German
content laws if the providers were aware of the content and were reasonably able to
remove the content. [6]
Malaysia’s new cyberspace law also extends well beyond the borders of Malaysia.
The bill applies to offenses committed by a person in any place, inside or outside of
Malaysia, if at the relevant time the computer, program, or data was either (i) in
Malaysia or (ii) capable of being connected to or sent to or used by or with a computer
in Malaysia. The offender is liable regardless of his nationality or citizenship. [7]
Personal Jurisdiction in Cyberspace- The Indian
Mechanism
In Casio India Co. Limited v. Ashita Tele Systems Pvt. Limited, the Delhi High Court held
that the fact “that the website of Defendant can be accessed from Delhi is sufficient to invoke
the territorial jurisdiction of this Court”. [8]

In India TV Independent News Service Pvt. Limited v. India Broadcast Live Llc &
Ors., it was held that “the Defendant is carrying on activities within the jurisdiction of
this court; has sufficient contacts with the jurisdiction of the court and the claim of the
Plaintiff has arisen as a consequence of the activities of Defendant, within the
jurisdiction of this court”.

In Banyan Tree Holding (P) Limited v. A. Murali Krishna Reddy, The Division Bench
of the Delhi High Court, while answering the referral order of the learned Single
Judge, affirmed the ruling in India TV, and overruled the Casio Judgement. [9]

Various laws in India can be deemed applicable to today’s scenario of cyberspace and
everything that is involved with it. It is fascinating to notice how some of these laws,
though decades old, stand accurate to today’s circumstances.

Based on Sections 15 to 20 of the Code of Civil Procedure, 1908, which stipulate the
Indian approach to determining jurisdiction, jurisdiction shall be determined by the
location of the immovable property, or the place of residence or work of
the defendant, or the place where the cause of action has arisen. These provisions stand
inapplicable for cyberspace disputes.

The provisions of the Code of Criminal Procedure, 1973 prescribe multiple places
of jurisdiction based on the place of commission of the crime or the occurrence of the
consequence of a crime in cases of a continuing crime, which, in the case of
cyberspace, stands accurate.

The persisting laws relating to cyberspace are dealt under the Information Technology
Act, 2000, in India. The objective of the Act is to provide legal recognition to e-
commerce and to facilitate storage of electronic records with the Government.

The Act provides various definitions and instances of cybercrimes, prescribing the
punishment for those crimes and also provides laws for trial of cyber law cases in and
out of the country.
Sec 1 of the IT Act states that the Act extends to the whole of India and, unless
otherwise provided, also applies to any offence or contravention committed
outside India by any person.

Sec 75 of the IT Act provides for the application of the Act to offences or
contraventions committed outside India by any person, irrespective of nationality,
if the act or conduct constituting the offence or contravention involves a computer,
computer system or computer network located in India.

Sec 46 of the IT Act gives the power to adjudicate contraventions of any
provision of the Act and provides for an Adjudicating Officer who is vested with the
powers of a civil court; these powers are also conferred on the Cyber Appellate Tribunal.

As much as the Information Technology Act 2000 seems inclusive, it still does pose
ambiguity in jurisdiction when the offence has been committed outside of India or by
a non-citizen, while also following the principle of Lex Fori, meaning the law of the
country. [10]

Apart from the IT Act 2000, there is other relevant legislation under Indian law that gives
Indian courts the authority to adjudicate matters related to cyber-crimes, such as:

Sections 3 and 4 of the Indian Penal Code, 1860, which deal with the extra-territorial jurisdiction of
Indian courts.

Section 188 of the Code of Criminal Procedure, 1973 provides that even if a citizen of
India outside the country commits the offence, the same is subject to the jurisdiction
of courts in India.

And Section 178 deals with the crime or part of it committed in India and Section 179
deals with the consequences of crime in Indian Territory.

Conclusion
Cyberspace- a word gathered from fiction, yet has made its presence felt in reality, to
almost everyone. It is not a luxury, but a bare necessity for people of all ages and has
the entire world to offer, one simple click away.

Cyberspace may feel artful and simple, and technology keeps evolving to give
mankind its best, but crimes relating to cyberspace are also rising drastically. The
more advanced cyberspace gets, the more advanced its disputes. The existence of
a virtual space creates a cape of invisibility for those who wish to misuse this
innovation. It is important to observe that since cyberspace and the internet are ever-
evolving entities, the laws to deal with mishaps occurring in cyberspace are
formulated long after the damage is done. Because of this constant
development and ease of change, laws and lawmakers often struggle to
prosecute these offences accurately. Oftentimes, these offences turn
grave and brutal, bringing threat to people’s safety, their lives, important military
information costing a nation’s security or frauds that steal money from a large number
of people- among others.

It is crucial to understand that creating a well-developed set of consistent laws is the only
solution to battle the bigger evils that could arise out of cyberspace and its invisible
pursuit.

FAQs
1. Why Is There A Need For Personal Jurisdiction In Cyberspace?

Personal jurisdiction provides a clearer view of which laws apply when prosecuting an
offence that has occurred in the vast expanse of cyberspace, making it easier for courts
to decide the dispute and for accurate laws to be applied or formulated accordingly.

2. Do Cyberspace Laws Need Constant Modifications?

It is evident that cyberspace is a dynamic entity, and it advances by the second.
There is a newer innovation every day, making technology easier. But it comes with
its own set of innovative problems. The newer the innovations, the newer the ways there
are to commit offences in cyberspace. This calls for close inspection of cyberspace and
for regulating and reforming dispute resolution in effective new ways with the help of
effective, newer laws.

References
https://blog.ipleaders.in/cyber-security-and-its-legal-implications/

c) Techniques and standards for preserving data


Standards and best practice


Introduction

The use and development of reliable standards has long been a cornerstone of the information
industry. They facilitate the access, discovery and sharing of digital resources, as well as their long-
term preservation. There are both generic standards applicable to all sectors that can support digital
preservation, and industry-specific standards that may need to be adhered to. Using standards that
are relevant to the digital institutional environment helps with organisational compliance and
interoperability between diverse systems within and beyond the sector. Adherence to standards also
enables organisations to be audited and certified.

Operational standards

There are a number of standards which can help with the development of an operational model for
digital preservation.

Taking custodial control of digital materials requires a set of procedures to govern their transfer into
a digital preservation environment. This can include identifying and quantifying the materials to be
transferred, assessing the costs of preserving them and identifying the requirements for future
authentication and confidentiality. ISO 20652: Space Data and Information Transfer Systems -
Producer-Archive Interface - Methodology Abstract Standard (ISO, 2006) is an international standard
that provides a methodological framework for developing procedures for the formal transfer of digital
materials from the creator into the digital preservation environment. Objectives, actions and the
expected results are identified for four phases - initial negotiations with the creator (Preliminary
Phase), defining requirements (Formal Definition Phase), the transfer of digital materials to the
digital preservation environment (Transfer Phase) and ensuring the digital materials and their
accompanying metadata conform to what was agreed (Validation Phase).
ISO 14721:2012 Space Data and Information Transfer Systems - Open Archival Information System
- Reference Model (OAIS) (ISO, 2012b) provides a systematic framework for understanding and
implementing the archival concepts needed for long-term digital information preservation and
access, and for describing and comparing architectures and operations of existing and future
archives. It describes roles, processes and methods for long-term preservation. Developed by the
Consultative Committee for Space Data Systems (CCSDS) OAIS was first published in 1999 and
has had an influence upon many digital preservation developments since the early 2000s. A useful
introductory guide to the standard is available as a DPC Technology Watch Report (Lavoie, 2014).

An OAIS is ‘an archive, consisting of an organization of people and systems that has accepted the
responsibility to preserve information and make it available for a defined ‘Designated Community’.
An ‘OAIS archive’ could be distinguished from other uses of the term ‘archive’ by the way that it
accepts and responds to a series of specific responsibilities. OAIS defines these responsibilities as:

 Negotiate for and accept appropriate information from information producers;


 Obtain sufficient control of the information in order to meet long-term preservation objectives;
 Determine the scope of the archive’s user community;
 Ensure that the preserved information is independently understandable to the user community, in the
sense that the information can be understood by users without the assistance of the information
producer;
 Follow documented policies and procedures to ensure the information is preserved against all
reasonable contingencies, and that there are no ad hoc deletions.
 Make the preserved information available to the user community, and enable dissemination of
authenticated copies of the preserved information in its original form, or in a form traceable to the
original. (Lavoie, 2014)

OAIS also defines the information model that needs to be adopted. This includes not only the digital
material but also any metadata used to describe or manage the material and any other supporting
information called Representation Information.

The OAIS functional model is widely used to establish workflows and technical implementations. It
defines a broad range of digital preservation functions including ingest, access, archival storage,
preservation planning, data management and administration. These provide a common set of
concepts and definitions that can assist discussion across sectors and professional groups and
facilitate the specification of archives and digital preservation systems.
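To make the ingest and archival storage functions more concrete, the following Python sketch packages a submitted file and its descriptive metadata into a simple Archival Information Package (AIP) directory and records a fixity value. The directory layout, function name and metadata fields are hypothetical simplifications for illustration only; they are not prescribed by the OAIS standard.

    import hashlib, json, shutil
    from pathlib import Path
    from datetime import datetime, timezone

    def ingest(submission: Path, metadata: dict, archive_root: Path) -> Path:
        """Package one submitted file into a minimal, hypothetical AIP directory."""
        stamp = f"{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}"
        aip = archive_root / f"AIP-{submission.stem}-{stamp}"
        (aip / "data").mkdir(parents=True)
        stored = aip / "data" / submission.name
        shutil.copy2(submission, stored)                      # archival storage copy
        metadata = dict(metadata)
        metadata["fixity_sha256"] = hashlib.sha256(stored.read_bytes()).hexdigest()
        metadata["ingest_date"] = datetime.now(timezone.utc).isoformat()
        (aip / "metadata.json").write_text(json.dumps(metadata, indent=2))  # preservation description info
        return aip

    # Example use (hypothetical paths):
    # ingest(Path("report.pdf"), {"title": "Annual report", "producer": "Finance"}, Path("/var/archive"))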

OAIS provides a high level framework and a useful shared language for digital preservation but for
many years the concept of ‘OAIS conformance/compliance’ remained hard to pin down. Though the
term was frequently used in the years immediately following the publication of the standard, it relied
on the ability to measure up to just six mandatory but high level responsibilities. A more detailed
discussion about ‘OAIS compliance’ can be found in the Technology Watch Report.

ISO/TR 18492:2005 Long-term preservation of electronic document-based information (ISO/TR, 2005) provides a practical methodology for the continued preservation and retrieval of authentic
electronic document-based information, which includes technology-neutral guidance on media
renewal, migration, quality, security and environmental control. The guidance is developed to ensure
authenticity of records beyond the lifetime of original information keeping systems.

ISO 15489:2001 Information and documentation -- Records management (ISO, 2001) can also be a
useful standard for defining the roles, processes and methods for a digital preservation
implementation where the focus is the long-term management of records. This standard outlines a
framework of best practice for managing business records to ensure that they are curated and
documented throughout their lifecycle while remaining authoritative and accessible.

ISO 16175:2011 Principles and functional requirements for records in electronic office environments
(ISO, 2011) relates to electronic document and records management systems as well as enterprise
content management systems. While it does not include specific requirements for digital
preservation, it does acknowledge the need to maintain records over time and that format
obsolescence issues need to be considered in the specification of these electronic systems.

There are international standards that are generic to good business management that may also be
relevant in the digital preservation domain.

 Certification against ISO 9001 Quality management systems (ISO, 2015) demonstrates an
organisation’s ability to provide and improve consistent products and services.
 Certification against ISO/IEC 27001 Information technology -- Security techniques -- Information
security management systems (ISO/IEC, 2013) demonstrates that digital materials are securely
managed ensuring their authenticity, reliability and usability.
 ISO/IEC 15408 The Common Criteria for Information Technology Security Evaluation (ISO/IEC,
2009) provides a standardised framework for specifying functional and assurance requirements for IT
security and a rigorous evaluation of these.

There are a number of routes through which a digital preservation implementation can be certified.
These range from light touch peer review certification methods such as the Data Seal of Approval,
through the more extensive internal methods of DIN 31644 Information and documentation - Criteria
for trustworthy digital archives (DIN, 2012), to the comprehensive international standard ISO
16363:2012 Audit and certification of trustworthy digital repositories (ISO, 2012a) (see Audit and
certification).

Technical standards

There are specific advantages to using standards for the technical aspects of a digital preservation
programme, primarily in relation to metadata and file formats.

In conjunction with relevant descriptive metadata standards, PREMIS and METS are de facto
standards which will enhance a digital preservation programme. PREMIS (PREservation Metadata:
Implementation Strategies) is a standard hosted by the Library of Congress and first published in
2005. The data dictionary and supporting tools have been specifically developed to support the
preservation of digital material. METS (Metadata Encoding and Transmission Standard) is an XML
encoding standard which enables digital materials to be packaged with archival information
(see Metadata and documentation).
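To illustrate the packaging idea, the sketch below uses Python's standard xml.etree.ElementTree module to produce a heavily simplified METS-like wrapper around one file and its checksum. It is an illustration only: element and attribute choices are simplified, and a real METS document must conform to the published schema and an agreed profile.

    import xml.etree.ElementTree as ET

    METS_NS = "http://www.loc.gov/METS/"
    ET.register_namespace("mets", METS_NS)

    def simple_wrapper(file_name: str, sha256: str) -> bytes:
        """Return a highly simplified METS-like package description (illustrative only)."""
        mets = ET.Element(f"{{{METS_NS}}}mets")
        file_sec = ET.SubElement(mets, f"{{{METS_NS}}}fileSec")
        file_grp = ET.SubElement(file_sec, f"{{{METS_NS}}}fileGrp", USE="preservation")
        f = ET.SubElement(file_grp, f"{{{METS_NS}}}file",
                          ID="FILE0001", CHECKSUM=sha256, CHECKSUMTYPE="SHA-256")
        ET.SubElement(f, f"{{{METS_NS}}}FLocat", href=file_name)  # simplified; real METS uses xlink:href
        return ET.tostring(mets, encoding="utf-8", xml_declaration=True)

    print(simple_wrapper("report.pdf", "ab12...").decode())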

There are also standards relating to file formats. Choosing file formats that are non-proprietary and
based on open format standards gives an organisation a good basis for a digital preservation
programme. ISO/IEC 26300-1:2015 Open Document Format for Office Applications (ISO/IEC, 2015)
provides an XML schema for the preservation of widely used documents such as text documents,
spreadsheets, presentations. ISO 19005 Electronic document file format for long-term preservation
(ISO, 2005) prescribes elements of valid PDF/A which ensures that they are self-contained and
display consistently across different devices. Aspects of JPEG-2000 and TIFF are also covered by
ISO standards. (see File formats and standards).
Barriers to using standards

A standards based approach to digital preservation is important, but there are also factors which
inhibit their use as a digital preservation strategy:

 The pace of change is so rapid that standards which have reached the stage of being formally endorsed
- a process which usually takes years - will inevitably lag behind developments and may even be
superseded.
 Competitive pressures between suppliers encourage the development of proprietary extensions to, or
implementations of standards which can dilute the advantages of consistency and interoperability for
preservation.
 The standards themselves adapt and change to new technological environments, leading to a number
of variations of the original standard which may or may not be interoperable in the long-term even if
they are backwards compatible in the short-term.
 Standards can be intimidating to read and resource intensive to implement.
 In such a changeable and highly distributed environment, it is impossible to be completely
prescriptive.

These factors mean that standards will need to be seen as part of a suite of preservation strategies
rather than the key strategy itself. The digital environment is not inclined to be constrained by rigid
rules and a digital preservation programme can often be a blend of standards and best practice that
is sufficiently flexible and adapted to suit the needs of the organisation, its circumstances and the
digital materials being managed.

Standards, best practice and good practice

In recent years best practice guidance and case studies have been published by national archives,
national libraries and other cultural organisations. Digital preservation is also a topic well discussed
on blogs and social media which can often provide real time information in relation to theory and
practice from around the world. Papers at conferences such as iPRES, the International Digital
Curation Conference (IDCC) and the Preservation and Archiving Special Interest Group (PASIG)
can be a useful source of up to date thinking from academics and practitioners in digital
preservation.

Standards should be understood as a formal description and recognition of what a community of experts might term best practice. Standards, and the best practice from which they derive, can be
intimidating and there is a risk for those starting in digital preservation that the ‘best becomes the
enemy of the good’. So in adopting or recommending standards it should always be understood that
some action is almost always better than no action. Digital preservation is a messy business which
throws up unexpected challenges. So it is almost always the case that a poorly implemented
standard is preferable to waiting for perfection.

Sector specific requirements

Specific industries have become active in the development of preservation standards, and particular
types of content and use cases have emerged that overlap and extend a number of standards.
There is considerable benefit in digital preservation standards being embedded in sector-specific
standards since this will greatly assist their adoption, although this may present a challenge to
coordination of activities. Three examples are given below:
1. Audio visual materials present a special case for digital preservation (see Moving pictures and
sounds). Recommendations for audio recordings and video recordings exist under the auspices of the
International Association of Sound and Audio-visual Archives (such as IASA- TC04, 2009), while a
range of industry bodies and content holders including the BBC, RAI, ORF and INA have formed the
PrestoCentre to progress research and development of preservation standards in this
field. https://www.prestocentre.org/
2. The aerospace industry has particular requirements in product lifecycle management and information
exchange which have given rise to a series of industry wide initiatives to standardise approaches to
aligning and sharing CAD drawings for engineering. The membership body PROSTEP created the
ISO 10303 ‘Standard for Exchange of Product Model Data’ which has developed into the LOTAR
standard (http://www.lotar-international.org/lotar-standard/overview-on-parts.html). LOTAR is
not incompatible with OAIS, but because it fits within a data exchange protocol important to the
industry, aerospace engineers are more likely to encounter LOTAR than OAIS.
3. The Storage Network Industry Association has also begun to make progress on the development of a
series of standards. A SNIA working group on long-term data retention has responsibility for both
physical and logical preservation, and the creation of reference architectures, services and interfaces
for preservation. In addition, a working group on Cloud Storage is likely to become particularly
influential in relation to preservation. Cloud architectures change how organizations view repositories
and how they access services to manage them. For example, it is unclear how one would measure the
success of a ‘trusted digital repository’ that was based in a cloud provider.

Resources

Seeing Standards; A visualisation of the metadata universe

http://jennriley.com/metadatamap/

The sheer number of metadata standards in the cultural heritage sector is overwhelming, and their
inter-relationships further complicate the situation. This visual map of the metadata landscape is
intended to assist planners with the selection and implementation of metadata standards. Each of
the 105 standards listed here is evaluated on its strength of application to defined categories in each
of four axes: community, domain, function, and purpose. (2010, 1 page).

Dlib Magazine

http://www.dlib.org/dlib.html

Dlib Magazine publishes on a regular basis a wide range of papers and case studies on the practical
implementation of digital preservation standards and best practice.
Core Trust Seal

https://www.coretrustseal.org/

PREMIS

http://www.loc.gov/standards/premis/

Library of Congress, 2015

The Digital Curation Centre

http://www.dcc.ac.uk/

The Digital Curation Centre makes available research and case studies in relation to the
preservation of research data. It also publishes recordings of its annual International Digital Curation Conference proceedings.

The Signal

http://blogs.loc.gov/digitalpreservation/

The Signal is a digital preservation blog published by the Library of Congress

IPRES

http://www.ipres-conference.org/

iPRES, the International Conference on Digital Preservation, publishes a website and proceedings from its annual event, which looks at different themes within the digital preservation landscape.
The Digital Preservation Coalition Wiki

http://wiki.dpconline.org/index.php?title=Main_Page

The Digital Preservation Coalition Wiki provides a collaborative space for users of OAIS, the British
Library’s file format assessments as well as other resources.

Digital Preservation Matters

http://preservationmatters.blogspot.co.uk/
The Digital Preservation Matters blog is a personal account of experiences from working with digital preservation.

2.Evidence Analysis

a) Os/File system forensics

Computer forensics: Media & file system forensics [updated 2019]
July 5, 2019, by Infosec

A file system in a computer is the manner in which files are named and
logically placed for storage and retrieval. It can be considered as a
database or index that contains the physical location of every single
piece of data on the respective storage device, such as hard disk, CD,
DVD or a flash drive. This data is organized in folders, which are called
directories. These directories further contain folders and files.

For storing and retrieving files, file systems make use of metadata,
which includes the date the file was created, the date it was modified, the file size,
and so on. They can also restrict users from accessing a particular file
by using encryption or a password.
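As a quick illustration of the kind of metadata a file system keeps, the following standard-library Python sketch prints the size and timestamp metadata for a given path. The file name is hypothetical, and the exact timestamp semantics differ between file systems and operating systems.

    import os
    from datetime import datetime

    def show_metadata(path: str) -> None:
        """Print basic file system metadata for one file."""
        st = os.stat(path)
        print("size (bytes):", st.st_size)
        print("last modified:", datetime.fromtimestamp(st.st_mtime))
        print("last accessed:", datetime.fromtimestamp(st.st_atime))
        # st_ctime is the metadata-change time on Unix but the creation time on Windows
        print("created/changed:", datetime.fromtimestamp(st.st_ctime))

    show_metadata("example.txt")   # hypothetical file name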

Files are stored on storage media in “sectors”. Unused sectors can be utilized for storing data, typically done in sector groups known as
blocks. The file system identifies the file size and position and the
sectors that are available for storage. If a structure for organizing files
did not exist, it would not be possible to delete or retrieve files, or to
keep two files with the same name since all the files would exist in the
same folder. For example, it is because of folders that we are able to
name two different image files with the same name, as both exist in
two different folders. But if two files are in the same directory, they
cannot have the same name.
Most applications need a file system to work, hence every
partition needs to have one. Programs are also dependent on file
systems, which means that if a program is built to be used in Mac OS,
it will not run on Windows.


Some commonly used file systems

FAT File System

FAT or File Allocation Table is a file system used by operating systems for locating files on a disk. Due to fragmentation, files may be
scattered around and divided into sections. The FAT system keeps track
of all parts of the file. FAT has existed as a file system since the
advent of personal computers.
Features

 File Name

o FAT system in MS DOS allows file names of 8 characters only

o FAT file system in Windows supports long file name, with full file path being
as long as 255 characters

o File name should start with alphanumeric characters

o File names cannot contain characters such as " / = [ ] , ? ^

o File names can have more than one period and spaces. Characters that come
after the last period in full file name are considered as the file extension.
 FAT file system does not support folder and local security. This means users
logged into a computer locally will gain complete access to folders and files
that lie in FAT partitions.

 It provides fast access to files. The rate depends upon the size of partition,
file size, type of file and number of files in the folder.

FAT 32 File System

This is an advanced version of the FAT File system and can be used on
drives ranging from 512 MB to 2 TB.

Features

 It is more storage-efficient and supports volumes of up to 2 TB

 Provides a better usage of disk space

 Easier access of files in partitions less than 500 MB or greater than 2GB in
size
(Figure not reproduced: the original article illustrates the partitioning layout in FAT and FAT 32 file systems.)
NTFS File System

The NTFS File System stands for New Technology File System.

Features

 Naming

o File name can be as long as 255 characters

o File names can have any character other than \ / : * ? " < > |

o They are not case sensitive


 It provides folder and file security. This is done by passing on NTFS
permission to files and folders. Security works at local as well as network
level. Every file and folder in the list has an Access Control List that includes
the users, security identifier, and the access privileges that are granted to
the users.

 Files and partition sizes are larger in NTFS than in FAT. An NTFS partition can theoretically be as large as 16 exabytes, but in practice it is limited to 2 TB. File sizes can range from 4 GB to 64 GB.

 It provides up to 50% file compression


 It is a reliable and recoverable file system which makes use of transaction
logs for updating files and folders automatically.

 It provides bad-cluster mapping. This means that it can detect bad clusters
or erroneous space in the disk, retrieve the data in those clusters, and then
store it in another space. To avoid further data storage in those areas, bad
clusters are marked for errors.

EXT File Systems

Extended file system (EXT), Second Extended file system (EXT2) and
Third Extended file system (EXT3) are designed and implemented on
Linux. The EXT is an old file system that was used in pioneer Linux
systems. EXT2 is probably one of the most widely used Linux file
systems. EXT3 includes the same features as EXT2 but also adds journaling.
Here we will talk about the most commonly used EXT2. With the
optimizations in kernel code, it provides robustness along with good
performance whilst providing standard and advanced Unix file
features.

Features

 Supports standard file types in Unix i.e. regular files, device special files,
directories, symbolic links
 Can manage file systems created on huge partitions. Originally, file system
size was restricted to 2 GB, but with recent work in VFS layer, this limit has
now increased to 4 TB.

 Reserves about 5 percent of blocks for administrator use, allowing admins to recover in situations where user processes have filled the file system.

 Allows for secure deletion of files. Once data is deleted, the space is
overwritten with random data to prevent malicious users from gaining
access to the previous data.

What is a file format?

A file format is a layout and organization of data within the file. If a file
is to be used by a program, it must be able to recognize and have
access to the data in the file. For instance, a text document can be
recognized by a program such as Microsoft Word that is designed to open text documents, but not by a program that is designed to play audio or video files.

A file format is indicated along with the file name in the form of a file
extension. The extension contains three or four letters identifying the
format and is separated from the file name by a period.
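Because an extension is only a naming convention, forensic tools commonly compare it with the file's signature (magic bytes) at the start of the file. The sketch below checks a few well-known signatures; the table is deliberately small and illustrative, and the file name used is hypothetical.

    # A few widely documented file signatures (magic bytes)
    SIGNATURES = {
        b"\xFF\xD8\xFF": "jpg",
        b"\x89PNG\r\n\x1a\n": "png",
        b"%PDF": "pdf",
        b"PK\x03\x04": "zip",
    }

    def detect_format(path: str) -> str:
        """Guess a file's format from its leading bytes rather than its extension."""
        with open(path, "rb") as fh:
            head = fh.read(16)
        for magic, fmt in SIGNATURES.items():
            if head.startswith(magic):
                return fmt
        return "unknown"

    print(detect_format("holiday.jpg"))   # prints "jpg" only if the content really is JPEG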

Some common types of files

There are many types of file formats that have their respective
programs for processing the files. Some of the common file formats
are:

 Word files or documents (.doc)

 Images (.jpg, .gif, .png, etc.)

 Executable files (.exe)

 Multimedia (.mp3, .mp4 and others)


 Acrobat reader files (.pdf)

 Web page files (.html or .htm)

 Notepad or wordpad files (.txt)

 Powerpoint files (.ppt)

 Disk image file containing all the files and folders on a disk (.iso)

 Dynamic Link Library Files (.dll)

 Compressed files that combine a number of files into one single file (.zip
and .rar)

Steps in the file system forensics process

Carrying out a forensic analysis of file systems is a tedious task and requires expertise every step of the way. Following are the steps that
can help analyze a file system for data that may provide evidence in a
forensic investigation.

Acquisition

The system should be secured to ensure that all data and equipment
stays safe. In other words, all media required for forensic analysis
should be acquired and kept safe from any unauthorized access. Find
out all files on the computer system including encrypted, password-
protected, hidden and deleted (but not overwritten) files. These files
must be acquired from all storage media that include hard drive and
portable media. Once acquired, forensic investigators have to make a
copy of them so that the original files are kept intact without the risk
of alteration.

This can be done in four ways:


 Disk-to-Image: This is the most common method as it provides more flexibility and allows investigators to create multiple copies.

 Disk-to-Disk: Used where disk-to-image is not possible.

 Logical: it captures only the files that are of interest to the case. Used when
time is limited.

 Sparse: It gathers fragments of deleted or unallocated data.

Validation and discrimination

Before you analyze an image, you need to validate it to ensure the integrity of the data.

Hashing algorithms help forensic investigators determine whether a forensic image is an exact copy of the original volume or disk. This validates the integrity of the evidence and supports its admissibility in court.
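In practice, this usually means computing a cryptographic hash of the original media and of the forensic image and checking that they match. A minimal sketch using Python's hashlib is shown below; the device and image paths are hypothetical, and real acquisitions would normally rely on the hashing built into the imaging tool and record the values in the case notes.

    import hashlib

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 digest of a (possibly very large) file in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    original = sha256_of("/dev/sdb")          # hypothetical source device
    image    = sha256_of("evidence_01.dd")    # hypothetical forensic image
    print("image verified" if original == image else "MISMATCH - image is not an exact copy")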

Extraction

Next comes data extraction, which involves retrieving unstructured or deleted data that needs to be processed for the forensic investigation. Many computer users think that a file, once deleted, will
disappear forever from the hard disk. However, this is not true.
Deleting a file only removes its entry from the disk's contents table. In FAT
systems it is called the File Allocation Table, while in NTFS it is called
the Master File Table. Data is stored in clusters on the hard disc and
consists of a certain number of bits. Parts of files are mostly scattered
throughout the disc, and deleting the files makes it difficult to
reconstruct them, but not impossible. With increased disk capacity, it
now takes longer for all fragments of a file to be overwritten.
In many cases, the criminals may have hidden the data that can turn
out to be useful for forensic investigation. Criminals with basic
technical knowledge have many options available for hiding data such
as disk editor, encryption, steganography, and so on. Recovering and
reconstructing this data can be time consuming, but generally it
produces fruitful evidence.

Extracting data from unallocated space is called file carving. It is a helpful technique in digital forensics that finds deleted or hidden files on the media. A hidden file can lie in areas such as slack space,
unallocated clusters or lost clusters of the digital media or disk. For
using file carving, a file should have a header which can be located by
performing a search which continues till the file footer is located. Data
that lies between these two points is extracted and then analyzed for
file validation.
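The header/footer search described above can be sketched in a few lines. The example below carves JPEG candidates from a raw image by locating the documented JPEG start marker (FF D8 FF) and end marker (FF D9); the image file name is hypothetical, and production carvers handle fragmentation, validation and many more formats.

    JPEG_HEADER = b"\xFF\xD8\xFF"
    JPEG_FOOTER = b"\xFF\xD9"

    def carve_jpegs(raw_image_path: str, out_prefix: str = "carved") -> int:
        """Very small header/footer carver for JPEG data in a raw disk image."""
        # Reads the whole image into memory; only suitable for small test images.
        data = open(raw_image_path, "rb").read()
        count, pos = 0, 0
        while True:
            start = data.find(JPEG_HEADER, pos)
            if start == -1:
                break
            end = data.find(JPEG_FOOTER, start)
            if end == -1:
                break
            with open(f"{out_prefix}_{count:04d}.jpg", "wb") as out:
                out.write(data[start:end + len(JPEG_FOOTER)])
            count, pos = count + 1, end + len(JPEG_FOOTER)
        return count

    print(carve_jpegs("evidence_01.dd"), "candidate JPEG files carved")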

Reconstruction

Extracted data can be reconstructed using a variety of available software tools that are based on various reconstruction algorithms
software tools that are based on various reconstruction algorithms
such as bottom-up tree reconstruction and inference of partition
geometry. Reconstructed data is thoroughly analyzed for further
evidence and put forth in the form of a report.

Reporting

In order to keep a track record of every step of the investigation, document every procedural step. Evidence presented without proper
documentation may not be admissible in court. This documentation
should not only include the recovered files and data, but also the
physical layout of the system along with any encrypted or
reconstructed data.

Forensic analysis of time-based metadata can help investigators correlate distinct information quickly and find notable times and
dates of activities related to improper computer usage, spoliation and
misappropriation.
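One simple form of such correlation is to build a timeline from the modification times of files in a directory tree, so that activity around a known incident time stands out. A standard-library sketch follows; the mount point is a hypothetical path to an already-mounted evidence image.

    import os
    from datetime import datetime

    def timeline(root: str):
        """Return (modified-time, path) pairs for every file under root, oldest first."""
        events = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    events.append((datetime.fromtimestamp(os.path.getmtime(path)), path))
                except OSError:
                    continue   # unreadable entries are skipped, not fatal
        return sorted(events)

    for when, path in timeline("/mnt/evidence"):   # hypothetical mounted image
        print(when.isoformat(), path)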


b)Application forensics

APPLICATION FORENSICS
Balancing the need for speed and careful, methodical steps.

When you suspect that an application has been compromised or a data breach has occurred, it’s
important to act quickly to preserve evidence and identify the root cause. Application forensics
involves forensic examination of applications and their contents (such as logs, security event
monitoring, databases, and config files) to trace the origin of the attack.

There are a number of specific attributes that make application forensics a specialized
discipline:

o Modern-day applications are often distributed across multiple servers, or even multiple
datacenters or cloud providers.

o Applications are often mission-critical for the business, and cannot be taken offline for image
capture and investigation.

o Applications are often backed by large databases that reside on a complex storage layer
with many physical disks.

o Application attacks don’t naturally leave a trail of evidence like other forms of attack.

o Application forensics and incident response requires a comprehensive understanding of application security issues — this is a specialized knowledge base.

c) Web forensics
Web Forensics

Web forensics relates to any sort of crime committed over the Internet. With proper knowledge and expert skills, criminal activities like child pornography, hacking/cracking and identity theft may be traced back to their perpetrators. Criminals can only be successfully
punished if a sufficient amount of conclusive evidence against them is
found. In this case, Internet history, cache and server logs are of
immense value. You might be surprised by the number of offenders
who search the Internet for advice on how to conduct a crime.

This leaves a trail of evidence both on the client side (e.g., registry
entries, temporary files, index.dat, cookies, favorites, a list of visited
sites or partial website data downloaded to the local browser cache)
and also on the server side (e.g., during log analysis on a server, you
may save precious registers such as the perpetrator IP Address, a
timestamp for each visit, what information was posted, etc.). Again, if
you have the proper tools and knowledge, once you gather this sort of
evidence, it is a great step towards building a strong case.
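On the server side, much of this evidence lives in access logs. The sketch below parses lines in the common Apache/NGINX "combined" log format and counts requests per client IP address; the log path and the exact format are assumptions that would need adjusting to the server actually being examined.

    import re
    from collections import Counter

    # Typical combined-format line: IP - user [timestamp] "METHOD /path HTTP/1.1" status size ...
    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3})')

    def requests_per_ip(log_path: str) -> Counter:
        """Count requests per client IP in an Apache/NGINX combined-format log."""
        counts = Counter()
        with open(log_path, errors="replace") as fh:
            for line in fh:
                m = LOG_LINE.match(line)
                if m:
                    counts[m.group(1)] += 1
        return counts

    for ip, n in requests_per_ip("/var/log/apache2/access.log").most_common(10):
        print(f"{ip:15s} {n} requests")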

d) Network forensics

Network forensics overview


April 14, 2020, by Dimitar Kostadinov

Most attacks move through the network before hitting the target and
they leave some trace. According to Locard’s exchange principle,
“every contact leaves a trace,” even in cyberspace.

Network forensics is a science that centers on the discovery and retrieval of information surrounding a cybercrime within a networked
environment. Common forensic activities include the capture,
recording and analysis of events that occurred on a network in order
to establish the source of cyberattacks.

Network forensics can be particularly useful in cases of network leakage, data theft or suspicious network traffic. It focuses
predominantly on the investigation and analysis of traffic in a network
that is suspected to be compromised by cybercriminals (e.g., DDoS
attacks or cyber exploitation).

Accessing internet networks to perform a thorough investigation may be difficult. Most internet networks are owned and operated outside of
the network that has been attacked. Investigation is particularly
difficult when the trace leads to a network in a foreign country.

Data enters the network en masse but is broken up into smaller pieces
called packets before traveling through the network. In order to
understand network forensics, one must first understand internet
fundamentals like common software for communication and search,
which includes emails, VOIP services and browsers. One must also
know what ISP, IP addresses and MAC addresses are.


Identification of attack patterns requires investigators to understand application and network protocols. Applications and protocols include:

 Web protocols (e.g., http and https)

 File transfer protocols (e.g., Server Message Block/SMB and Network File
System/NFS)

 Email protocols, (e.g., Simple Mail Transfer Protocol/SMTP)

 Network protocols (e.g., Ethernet, Wi-Fi and TCP/IP)


Investigators more easily spot traffic anomalies when a cyberattack
starts because the activity deviates from the norm.

Methods

There are two methods of network forensics:

 “Catch it as you can” method: All network traffic is captured. It guarantees that there is no omission of important network events. This process is time-consuming and reduces storage efficiency as storage volume grows.

 “Stop, look and listen” method: Administrators watch each data packet that
flows across the network but they capture only what is considered
suspicious and deserving of an in-depth analysis. While this method does not
consume much space, it may require significant processing power

Primary sources

Investigators focus on two primary sources:


 Full-packet data capture: This is the direct result of the “Catch it as you can”
method. Large enterprises usually have large networks and it can be
counterproductive for them to keep full-packet capture for prolonged periods
of time anyway

 Log files: These files reside on web servers, proxy servers, Active Directory
servers, firewalls, Intrusion Detection Systems (IDS), DNS and Dynamic Host Configuration Protocol (DHCP) servers. Unlike full-packet capture, logs do not take up so
much space
Log files provide useful information about activities that occur on the
network, like IP addresses, TCP ports and Domain Name Service
(DNS). Log files also show site names which can help forensic experts
see suspicious source and destination pairs, like if the server is
sending and receiving data from an unauthorized server somewhere in
North Korea. In addition, suspicious application activities — like a
browser using ports other than port 80, 443 or 8080 for communication
— are also found on the log files. Log analysis sometimes requires both
scientific and creative processes to tell the story of the incident.
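A concrete example of this kind of anomaly hunting is to scan a connection log for web traffic on ports other than 80, 443 or 8080. The sketch below assumes a simple whitespace-separated log with the destination IP and port in fixed columns, which is a simplification of real firewall or proxy log formats; the file name and column positions are assumptions.

    EXPECTED_WEB_PORTS = {80, 443, 8080}

    def unusual_web_connections(log_path: str, dst_ip_col: int = 4, dst_port_col: int = 5):
        """Report log lines whose destination port is outside the expected web ports."""
        suspicious = []
        with open(log_path) as fh:
            for line in fh:
                fields = line.split()
                if len(fields) <= dst_port_col:
                    continue
                try:
                    port = int(fields[dst_port_col])
                except ValueError:
                    continue
                if port not in EXPECTED_WEB_PORTS:
                    suspicious.append((fields[dst_ip_col], port, line.rstrip()))
        return suspicious

    for dst, port, raw in unusual_web_connections("firewall.log"):   # hypothetical log file
        print(f"non-standard port {port} to {dst}: {raw}")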

Network forensics is also dependent on event logs which show time-sequencing. Investigators determine timelines using information and
communications recorded by network control systems. Analysis of
network events often reveals the source of the attack.

Tools

Free software tools are available for network forensics. Some are
equipped with a graphical user interface (GUI). Most, though, only have
a command-line interface and many only work on Linux systems.

Here are some tools used in network forensics:

 EMailTrackerPro shows the location of the device from which the email is
sent
 Web Historian provides information about the upload/download of files on
visited websites

 Wireshark can capture and analyze network traffic between devices


According to “Computer Forensics: Network Forensics Analysis and
Examination Steps,” other important tools include NetDetector,
NetIntercept, OmniPeek, PyFlag and Xplico. The same tools used for
network analysis can be used for network forensics.
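For full-packet captures, the same kind of analysis can be scripted. The sketch below uses the third-party scapy library (assumed to be installed) to read a pcap file and count packets per source/destination IP pair, which quickly surfaces the noisiest conversations; the capture file name is hypothetical.

    from collections import Counter
    from scapy.all import rdpcap, IP   # third-party: pip install scapy

    def top_conversations(pcap_path: str, n: int = 10):
        """Count packets per (source IP, destination IP) pair in a capture file."""
        pairs = Counter()
        for pkt in rdpcap(pcap_path):
            if IP in pkt:
                pairs[(pkt[IP].src, pkt[IP].dst)] += 1
        return pairs.most_common(n)

    for (src, dst), count in top_conversations("capture.pcap"):   # hypothetical capture
        print(f"{src} -> {dst}: {count} packets")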

It is interesting to note that network monitoring devices are hard to manipulate. For that reason, they provide a more accurate image of an
organization’s integrity through the recording of their activities.

Legal considerations

Privacy and data protection laws may pose some restrictions on active
observation and analysis of network traffic. Without explicit
permission, using network forensics tools must be in line with the
legislation of a particular jurisdiction. Permission can be granted by a
Computer Security Incident Response Team (CSIRT) but a warrant is
often required.

Conclusion: How does network forensics compare to computer forensics?

Network forensics is a subset of digital forensics. Compared to computer/disk forensics, network forensics is more difficult because of volatile data which
is lost once transmitted across the network. Network forensics
focuses on dynamic information and computer/disk forensics works
with data at rest.

Similarly to Closed-Circuit Television (CCTV) footage, a copy of the network flow is needed to properly analyze the situation. Due to the
dynamic nature of network data, prior arrangements are required to
record and store network traffic. The deliberate recording of network
traffic differs from conventional digital forensics where information
resides on stable storage media. Also, logs are far more important in
the context of network forensics than in computer/disk forensics.

As part of the entire digital forensic investigation, network forensics helps assemble missing pieces to show the investigator the whole
picture. It can support root-cause analysis by showing initial method
and manner of compromise.

e) Mobile device forensics

Mobile forensics, a subtype of digital forensics, is concerned with retrieving data from an electronic source. The recovery of evidence from mobile devices such
as smartphones and tablets is the focus of mobile forensics. Because
individuals rely on mobile devices for so much of their data sending, receiving,
and searching, it is reasonable to assume that these devices hold a significant
quantity of evidence that investigators may utilize.
Mobile devices may store a wide range of information, including phone records
and text messages, as well as online search history and location data. We
frequently associate mobile forensics with law enforcement, but they are not the
only ones who may depend on evidence obtained from a mobile device.
Uses of Mobile Forensics:
The military uses mobile devices to gather intelligence when planning military
operations or terrorist attacks. A corporation may use mobile evidence if it fears
its intellectual property is being stolen or an employee is committing fraud.
Businesses have been known to track employees’ personal usage of business
devices in order to uncover evidence of illegal activity. Law enforcement, on the
other hand, may be able to take advantage of mobile forensics by using
electronic discovery to gather evidence in cases ranging from identity theft to
homicide.
Process of Mobile Device Forensics:
 Seizure and Isolation: According to digital forensics, evidence should
always be adequately kept, analyzed, and accepted in a court of law. Mobile
device seizures are followed by a slew of legal difficulties. The two main
risks linked with this step of the mobile forensic method are lock activation
and network / cellular connectivity.
 Identification: The identification purpose is to retrieve information from the
mobile device. With the appropriate PIN, password, pattern, or biometrics, a
locked screen may be opened. Passcodes are protected, but fingerprints are
not. Apps, photos, SMSs, and messengers may all have comparable lock
features. Encryption, on the other hand, provides security that is difficult to
defeat on software and/or hardware level.
 Acquisition: Controlling data on mobile devices is difficult since the data
itself is movable. Once messages or data are transmitted from a
smartphone, control is gone. Despite the fact that various devices are
capable of storing vast amounts of data, the data itself may be stored
elsewhere. For example, data synchronization across devices and apps may
be done either directly or via the cloud. Users of mobile devices commonly
utilize services such as Apple’s iCloud and Microsoft’s One Drive, which
exposes the possibility of data harvesting. As a result, investigators should
be on the lookout for any signs that data may be able to transcend the
mobile device from a physical object, as this might have an impact on the
data collecting and even preservation process.
 Examination and analysis: The acquired data is examined with appropriate forensic tools to uncover relevant artifacts such as call logs, messages, application data and location history; the findings are then analyzed and correlated to reconstruct the events under investigation.
 Reporting: The document or paper trail that shows the seizure, custody,
control, transfer, analysis, and disposition of physical and electronic
evidence is referred to as forensic reporting. It is the process of verifying
how any type of evidence was collected, tracked, and safeguarded.
Principles of Mobile Forensics:
The purpose of mobile forensics is to extract digital evidence or relevant data
from a mobile device while maintaining forensic integrity. To accomplish so, the
mobile forensic technique must develop precise standards for securely seizing,
isolating, transferring, preserving for investigation, and certifying digital
evidence originating from mobile devices.
The process of mobile forensics is usually comparable to that of other fields of
digital forensics. However, it is important to note that the mobile forensics
process has its own unique characteristics that must be taken into account. The
use of proper methods and guidelines is a must if the investigation of mobile
devices is to give positive findings.

unit 5

1. Attacks
a) Computer attacks
What is a cyber attack?
A cyber attack is any attempt to gain unauthorized access to a computer, computing
system or computer network with the intent to cause damage. Cyber attacks aim to
disable, disrupt, destroy or control computer systems or to alter, block, delete, manipulate
or steal the data held within these systems.

Any individual or group can launch a cyber attack from anywhere by using one or more
various attack strategies.

People who carry out cyber attacks are generally regarded as cybercriminals. Often
referred to as bad actors, threat actors and hackers, they include individuals who act
alone, drawing on their computer skills to design and execute malicious attacks. They can
also belong to a criminal syndicate, working with other threat actors to find weaknesses
or problems in the computer systems -- called vulnerabilities -- that they can exploit for
criminal gain.

Government-sponsored groups of computer experts also launch cyber attacks. They're identified as nation-state attackers, and they have been accused of attacking the
information technology (IT) infrastructure of other governments, as well as
nongovernment entities, such as businesses, nonprofits and utilities.

Why do cyber attacks happen?


Cyber attacks are designed to cause damage. They can have various objectives, including
the following:

Financial gain. Cybercriminals launch most cyber attacks, especially those against
commercial entities, for financial gain. These attacks often aim to steal sensitive data,
such as customer credit card numbers or employee personal information, which the
cybercriminals then use to access money or goods using the victims' identities.


Other financially motivated attacks are designed to disable computer systems, with
cybercriminals locking computers so owners and authorized users cannot access the
applications or data they need; attackers then demand that the targeted organizations pay
them ransoms to unlock the computer systems.

Still, other attacks aim to gain valuable corporate data, such as propriety information;
these types of cyber attacks are a modern, computerized form of corporate espionage.

Disruption and revenge. Bad actors also launch attacks specifically to sow chaos,
confusion, discontent, frustration or mistrust. They could be taking such action as a way
to get revenge for acts taken against them. They could be aiming to publicly embarrass
the attacked entities or to damage the organizations' reputations. These attacks are often
directed at government entities but can also hit commercial entities or nonprofit
organizations.

Nation-state attackers are behind some of these types of attacks. Others, called hacktivists, might launch these types of attacks as a form of protest against the
targeted entity; a secretive decentralized group of internationalist activists known
as Anonymous is the most well known of such groups.

Insider threats are attacks that come from employees with malicious intent.

Cyberwarfare. Governments around the world are also involved in cyber attacks, with
many national governments acknowledging or suspected of designing and executing
attacks against other countries as part of ongoing political, economic and social disputes.
These types of attacks are classified as cyberwarfare.

How do cyber attacks work?


Threat actors use various techniques to launch cyber attacks, depending in large part on
whether they're attacking a targeted or an untargeted entity.

In an untargeted attack, where the bad actors are trying to break into as many devices or
systems as possible, they generally look for vulnerabilities in software code that will
enable them to gain access without being detected or blocked. Or, they might employ
a phishing attack, emailing large numbers of people with socially engineered messages
crafted to entice recipients to click a link that will download malicious code.

In a targeted attack, the threat actors are going after a specific organization, and the
methods used vary depending on the attack's objectives. The hacktivist group
Anonymous, for example, was suspected in a 2020 distributed denial-of-service (DDoS)
attack on the Minneapolis Police Department website after a Black man died while being
arrested by Minneapolis officers. Hackers also use spear-phishing campaigns in a
targeted attack, crafting emails to specific individuals who, if they click included links,
would download malicious software designed to subvert the organization's technology or
the sensitive data it holds.

Cyber criminals often create the software tools to use in their attacks, and they frequently
share those on the so-called dark web.

Cyber attacks often happen in stages, starting with hackers surveying or scanning for
vulnerabilities or access points, initiating the initial compromise and then executing the
full attack -- whether it's stealing valuable data, disabling the computer systems or both.

In fact, most organizations take months to identify an attack underway and then contain
it. According to the "2022 Cost of a Data Breach" report from IBM, organizations with
fully deployed artificial intelligence and automation security tools took an average of 181
days to identify a data breach and another 68 days to contain it, for a total of 249 days.
Organizations with partially deployed AI and automation took a total of 299 days to
identify and contain a breach, while those without AI and automation took an average of
235 days to identify a breach and another 88 days to contain it, for a total of 323 days.

What are the most common types of cyber attacks?


Cyber attacks most commonly involve the following:

1. Malware is malicious software that attacks information systems. Ransomware, spyware and Trojans are examples of malware. Depending on the type of malicious
code, malware could be used by hackers to steal or secretly copy sensitive data,
block access to files, disrupt system operations or make systems inoperable.

2. Phishing occurs when hackers socially engineer email messages to entice recipients
to open them. The messages trick recipients into downloading the malware within
the email by either opening an attached file or embedded link. The "2022 State of
the Phish" report from cybersecurity and compliance company Proofpoint found
that 83% of survey respondents said their organization experienced at least one
successful phishing attack in 2021, up 46% over 2020. Moreover, the survey also
revealed that 78% of organizations saw an email-based ransomware attack in 2021.

3. SMiShing (also called SMS phishing or smishing) is an evolution of the phishing attack methodology via text (technically known as Short Message Service, or SMS).
Hackers send socially engineered texts that download malware when recipients click
on them. According to the Proofpoint report, 74% of organizations experienced
smishing attacks in 2021, up from 61% in 2020.

4. Man-in-the-middle, or MitM, attacks occur when attackers secretly insert themselves between two parties, such as individual computer users and their financial institutions. Depending on the actual attack details, this type of attack may be more specifically classified as a man-in-the-browser attack, monster-in-the-middle attack or machine-in-the-middle attack. MitM is also sometimes called an eavesdropping attack.
5. DDoS attacks take place when hackers bombard an organization's servers with large volumes of simultaneous data requests, thereby making the servers unable to handle any legitimate requests.

6. SQL injection occurs when hackers insert malicious code into servers using
the Structured Query Language programming language to get the server to reveal
sensitive data.

7. Zero-day exploit happens when hackers first exploit a newly identified vulnerability
in IT infrastructure. For example, a series of critical vulnerabilities in a widely used
piece of open source software, the Apache Log4j Project, was reported in December
2021, with the news sending security teams at organizations worldwide scrambling
to address them.

8. Domain name system (DNS) tunneling is a sophisticated attack in which attackers establish and then use persistently available access -- or a tunnel -- into their targets' systems.

9. Drive-by, or drive-by download, occurs when an individual visits a website that, in turn, infects the unsuspecting individual's computer with malware.

10. Credential-based attacks happen when hackers steal the credentials that IT workers
use to access and manage systems and then use that information to illegally access
computers to steal sensitive data or otherwise disrupt an organization and its
operations.

11. Credential stuffing takes place when attackers use compromised login credentials
(such as an email and password) to gain access to other systems.

12. Brute-force attacks occur when hackers employ trial-and-error methods to crack login credentials such as usernames, passwords and encryption keys, hoping that the multiple attempts eventually pay off with a right guess.
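Brute-force and credential-stuffing attempts usually leave a very visible trail of failed logins. As a defensive illustration, the sketch below counts failed SSH password attempts per source IP in a Linux auth.log and flags addresses above a threshold; the log path, message format and threshold are assumptions that vary between systems.

    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    def brute_force_suspects(log_path: str, threshold: int = 20):
        """Return source IPs with more failed SSH logins than the threshold."""
        failures = Counter()
        with open(log_path, errors="replace") as fh:
            for line in fh:
                m = FAILED.search(line)
                if m:
                    failures[m.group(1)] += 1
        return [(ip, n) for ip, n in failures.most_common() if n >= threshold]

    for ip, n in brute_force_suspects("/var/log/auth.log"):
        print(f"{ip}: {n} failed logins - candidate brute-force source")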
How can you prevent a cyber attack?
There is no guaranteed way for any organization to prevent a cyber attack, but there are
numerous cybersecurity best practices that organizations can follow to reduce the risk.
Reducing the risk of a cyber attack relies on using a combination of skilled security
professionals, processes and technology.

Reducing risk also involves three broad categories of defensive action:

1. preventing attempted attacks from actually entering the organization's IT systems;

2. detecting intrusions; and

3. disrupting attacks already in motion -- ideally, at the earliest possible time.

Best practices include the following:

 implementing perimeter defenses, such as firewalls, to help block attack attempts and to block access to known malicious domains;

 adopting a zero trust framework, which requires every attempt to access an organization's network or systems -- whether it comes from an internal user or from another system -- to be verified before it is trusted;

 using software to protect against malware, namely antivirus software, thereby adding another layer of protection against cyber attacks;

 having a patch management program to address known software vulnerabilities that could be exploited by hackers;

 setting appropriate security configurations, password policies and user access controls;

 maintaining a monitoring and detection program to identify and alert to suspicious activity;

 instituting a threat hunting program, in which security teams use automation, intelligent tools and advanced analyses to actively look for suspicious activity and the presence of hackers before they strike;

 creating incident response plans to guide reaction to a breach; and

 training and educating individual users about attack scenarios and how they as individuals have a role to play in protecting the organization.

(Figure from the original article: tips for security professionals and employees on how to improve cybersecurity.)
What are the most well-known cyber attacks?
Cyber attacks have continued to increase in sophistication and have had significant
impacts beyond just the companies involved.

For example, JBS S.A., an international meat-processing company, suffered a successful ransomware attack on May 30, 2021. The attack shut down facilities in the United States
as well as Australia and Canada, forcing the company to pay an $11 million ransom.

That came just weeks after another impactful cyberattack. Hackers hit Colonial Pipeline
in May 2021 with a ransomware attack. The attack shut down the largest fuel pipeline in
the United States, leading to fuel shortages along the East Coast.

Several months before that, the massive SolarWinds attack breached U.S. federal
agencies, infrastructure and private corporations in what is believed to be among the
worst cyberespionage attacks inflicted on the U.S. On Dec. 13, 2020, Austin-based IT
management software company SolarWinds was hit by a supply chain attack that
compromised updates for its Orion software platform. As part of this attack, threat actors
inserted their own malware, now known as Sunburst or Solorigate, into the updates,
which were distributed to many SolarWinds customers.

The first confirmed victim of this backdoor was cybersecurity firm FireEye, which
disclosed on Dec. 8 that it was breached by suspected nation-state hackers. It was soon
revealed that SolarWinds attacks affected other organizations, including tech giants
Microsoft and VMware, as well as many U.S. government agencies. Investigations
showed that the hackers -- believed to be sponsored by the Russian government -- had
been infiltrating targeted systems undetected since March 2020.

Here is a rundown of some of the most notorious breaches, dating back to 2009:

 a July 2020 attack on Twitter, in which hackers were able to access the Twitter
accounts of high-profile users;

 a breach at Marriott's Starwood hotels, announced in November 2018, with the personal data of upward of 500 million guests compromised;

 the Feb. 2018 breach at Under Armour's MyFitnessPal (Under Armour has since sold
MyFitnessPal), which exposed email addresses and login information for 150 million
user accounts;

 the May 2017 WannaCry ransomware attack, which hit more than 300,000
computers across various industries in 150 nations, causing billions of dollars of
damage;

 the September 2017 Equifax breach, which saw the personal information of 145
million individuals compromised;

 the Petya attacks in 2016, which were followed by the NotPetya attacks of 2017,
which hit targets around the world, causing more than $10 billion in damage;

 another 2016 attack, this time at FriendFinder, which said more than 20 years' worth
of data belonging to 412 million users was compromised;
 a data breach at Yahoo in 2016 that exposed personal information contained within
500 million user accounts, which was then followed by news of another attack that
compromised 1 billion user accounts;

 a 2014 attack against entertainment company Sony, which compromised both personal data and corporate intellectual property, including yet-to-be-released films,
with U.S. officials blaming North Korea for the hack;

 eBay's May 2014 announcement that hackers used employee credentials to collect
personal information on its 145 million users;

 the 2013 breach suffered by Target Corp., in which the data belonging to 110 million
customers was stolen; and

 the Heartland Payment Systems data breach, announced in January 2009, in which
information on 134 million credit cards was exposed.
Cyber attack trends
The volume, cost and impact of cyber attacks continue to grow each year, according to
multiple reports.

Consider the figures from one 2022 report. The "Cybersecurity Solutions for a Riskier
World" report from ThoughtLab noted that the number of material breaches suffered by
surveyed organizations jumped 20.5% from 2020 to 2021. Yet, despite executives and
board members paying more attention -- and spending more -- on cybersecurity than ever
before, 29% of CEOs and CISOs and 40% of chief security officers said their
organization is unprepared for the ever-evolving threat landscape.

The report further notes that security experts expect the volume of attacks to continue
their climb.

The types of cyber attacks, as well as their sophistication, also grew during the first two
decades of the 21st century -- particularly during the COVID pandemic when, starting in
early 2020, organizations enabled remote work en masse and exposed a host of potential
attack vectors in the process.
Consider, for example, the growing number and type of attack vectors -- that is, the
method or pathway that malicious code uses to infect systems -- over the years.

The first virus was invented in 1986, although it wasn't intended to corrupt data in the
infected systems. Cornell University graduate student Robert Tappan Morris created the
first worm distributed through the internet, called the Morris worm, in 1988.

Then came Trojan horse, ransomware and DDoS attacks, which became more destructive
and notorious with names such as WannaCry, Petya and NotPetya -- all ransomware
attack vectors.

The 2010s then saw the emergence of cryptomining malware -- also called cryptocurrency mining malware or cryptojacking -- where hackers use malware to
illegally take over a computer's processing power to use it to solve complex mathematical
problems to earn cryptocurrency, a process called mining. Cryptomining malware
dramatically slows down computers and disrupts their normal operations.

Hackers also adopted more sophisticated technologies throughout the first decades of the
21st century, using machine learning and artificial intelligence, as well as bots and other
robotic tools, to increase the velocity and volume of their attacks.

And they developed more sophisticated phishing and spear-phishing campaigns, even as
they continued to go after unpatched vulnerabilities; compromised credentials, including
passwords; and misconfigurations to gain unauthorized access to computer systems.

b) Network attacks

What is a Network Attack?

Network attacks are unauthorized actions on the digital assets within an organizational
network. Malicious parties usually execute network attacks to alter, destroy, or steal
private data. Perpetrators in network attacks tend to target network perimeters to gain
access to internal systems.
There are two main types of network attacks: passive and active. In passive network
attacks, malicious parties gain unauthorized access to networks, monitor, and steal
private data without making any alterations. Active network attacks involve modifying,
encrypting, or damaging data.

Upon infiltration, malicious parties may leverage other hacking activities, such as
malware and endpoint attacks, to attack an organizational network. With more
organizations adopting remote working, networks have become more vulnerable to data
theft and destruction.


Types of Network Attacks

Modern organizations rely on the internet for communication, and confidential data is
often exchanged between networks. Remote accessibility also provides malicious
parties with vulnerable targets for data interception. These may violate user privacy
settings and compromise devices connected to the internet.

Network attacks occur in various forms. Enterprises need to ensure that they maintain
the highest cybersecurity standards, network security policies, and staff training to
safeguard their assets against increasingly sophisticated cyber threats.

DDoS

DDoS (distributed denial of service) attacks involve deploying sprawling networks of botnets — malware-compromised devices linked to the internet. These bombard and
overwhelm enterprise servers with high volumes of fraudulent traffic. Malicious attackers
may target time-sensitive data, such as that belonging to healthcare institutions,
interrupting access to vital patient database records.

Man-in-the-middle Attacks

Man-in-the-middle (MITM) network attacks occur when malicious parties intercept traffic conveyed between networks and external data sources or within a network. In
most cases, hackers achieve man-in-the-middle attacks via weak security protocols.
These enable hackers to convey themselves as a relay or proxy account and
manipulate data in real-time transactions.

Unauthorized Access
Unauthorized access refers to network attacks where malicious parties gain access to
enterprise assets without seeking permission. Such incidents may occur due to weak
account password protection, unencrypted networks, insider threats that abuse role
privileges, and the exploitation of inactive roles with administrator rights.

Organizations should prioritize and maintain the least privilege principle to avoid the
risks of privilege escalation and unauthorized access.

SQL Injection

Unmoderated user data inputs can place organizational networks at risk of SQL
injection attacks. In this network attack method, external parties manipulate input forms
by submitting malicious SQL code in place of expected data values, compromising the
network and exposing sensitive data such as user passwords.

There are various SQL injection types, such as probing a database to retrieve details
about its version and structure, or subverting application-layer logic and disrupting its
normal sequence of operations.

Network users can reduce the risk of SQL injection attacks by implementing
parameterized queries (prepared statements), which ensure that untrusted input is
treated as data rather than as executable SQL, as shown in the sketch below.
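
The following is a minimal sketch of the idea using Python's built-in sqlite3 module; the
table, column names and payload are purely illustrative, not taken from any real system.

[plain]
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"   # a classic injection payload

# Vulnerable pattern: the payload becomes part of the SQL statement itself.
# rows = conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()

# Parameterized pattern: the driver passes the value separately, so it is
# treated purely as data and the injection attempt simply matches no rows.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # []
[/plain]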

Recent Network Attacks

Network attacks remain a lingering issue for organizations as they transition to remote
operations with increased reliance on confidential network communications. Recent
network attacks demonstrate that malicious parties may strike at the least expected
moment. So, cyber vigilance and security should be a priority across all industries.

Social Engineering

According to ISACA’s State of Cybersecurity 2020 Report, social engineering is the most
popular network attack method, with 15 percent of compromised parties reporting the
technique as the vehicle of infiltration. Social engineering relies on elaborate deception
and trickery, such as phishing, that leverages users’ trust and emotions to gain access to
their private data.

Advanced Persistent Threats

Some network attacks may involve advanced persistent threats (APTs) mounted by teams
of expert hackers. APT groups prepare and deploy complex, multi-stage attack campaigns
that exploit multiple network vulnerabilities while remaining undetected by network
security measures such as firewalls and antivirus software.

Ransomware
In ransomware attacks, malicious parties encrypt an organization's data while
withholding the decryption keys, a model that enables hackers to extort affected
organizations. Payment channels usually involve hard-to-trace cryptocurrency accounts.
While cybersecurity authorities discourage paying off malicious parties, some
organizations continue to do so as a quick way to regain access to their data.

Protection from Network Attacks

Evolving network attacks require a modern and proactive network security solution.
Forcepoint’s NGFW (Next Generation Firewall) provides modern organizations with a
suite of sophisticated features necessary to detect and respond to the most insidious
threats within a network.

The NGFW’s real-time monitoring interface enables users to react quickly to the
slightest network anomalies without delay, with a clear breakdown of ongoing
processes. NGFW prioritizes critical networks and devices while identifying the most
evasive network attacks that bypass conventional firewalls.

Additionally, Forcepoint’s next-gen firewall solution safeguards user privacy while
operating decryption functions that effectively spot potentially stolen or compromised
data within SSL and TLS traffic.

Avoid camouflaged network attacks with a firewall solution built to close the evasion
gap. Experience the Forcepoint method to optimize your enterprise data security
standards through its digital transformation.

c) System attacks

What are the different types of attacks on a system?


Many approaches exist for gaining unauthorized access to a system, each representing a
different type of attack. One common requirement for all such approaches is that the
attacker finds and exploits a weakness or vulnerability in the system.

Types of attacks on a system

1. Operating System Attacks

Today’s Operating Systems (OS) are loaded with features and are increasingly complex.
While users take advantage of these features, the added complexity also introduces more
vulnerabilities, enticing attackers. Operating systems run many services, such as graphical
user interfaces (GUIs), that support applications and system tools and enable Internet access.
Extensive tweaking is required to lock them down. Attackers constantly look for OS
vulnerabilities that allow them to exploit and gain access to a target system or network.
To stop attackers from compromising the network, the system or network
administrators must keep abreast of various new exploits and methods adopted by
attackers, and monitor the networks regularly.

By default, most operating systems’ installation programs install a large number of
services and open ports. This situation leads attackers to search for vulnerabilities.
Applying patches and hot fixes is not easy with today’s complex
networks. Most patches and fixes tend to solve an immediate issue. In order to protect
the system from operating system attacks in general, it is necessary to remove and/or
disable any unneeded ports and services.

Some OS vulnerabilities include:

– Buffer overflow vulnerabilities

– Bugs in the operating system

– An unpatched operating system

Attacks performed at the OS level include:

– Exploiting specific network protocol implementations


– Attacking built-in authentication systems

– Breaking file-system security

– Cracking passwords and encryption mechanisms

2. Misconfiguration Attacks

Security misconfiguration or poorly configured security controls might allow attackers to
gain unauthorized access to the system, compromise files, or perform other unintended
actions. Misconfiguration vulnerabilities affect web servers, application platforms,
actions. Misconfiguration vulnerabilities affect web servers, application platforms,
databases, networks, or frameworks that may result in illegal access or possible system
takeover. Administrators should change the default configuration of the devices before
deploying them in the production network. To optimize the configuration of the
machine, remove any unneeded services or software. Automated scanners detect
missing patches, misconfigurations, use of default accounts, unnecessary services, and
so on.


3. Application-Level Attacks

Software developers are often under intense pressure to meet deadlines, which can
mean they do not have sufficient time to completely test their products before shipping
them, leaving undiscovered security holes. This is particularly troublesome in newer
software applications that come with a large number of features and functionalities,
making them more and more complex. An increase in the complexity means more
opportunities for vulnerabilities. Attackers find and exploit these vulnerabilities in the
applications using different tools and techniques to gain unauthorized access and steal
or manipulate data.

Security is not always a high priority for software developers, who often handle it as an
“add-on” component after release. This means that not all instances of the software will
have the same level of security. Error checking in these applications can be very poor (or
even nonexistent), which leads to:

 Buffer overflow attacks
 Sensitive information disclosure
 Denial-of-service attacks
 SQL injection attacks
 Cross-site scripting
 Phishing
 Session hijacking
 Parameter/form tampering
 Man-in-the-middle attacks
 Directory traversal attacks

4. Shrink-Wrap Code Attacks

Software developers often use free libraries and code licensed from other sources in
their programs to reduce development time and cost. This means that large portions of
many pieces of software will be the same, and if an attacker discovers vulnerabilities in
that code, many pieces of software are at risk.

Attackers exploit default configuration and settings of the off-the-shelf libraries and
code. The problem is that software developers leave the libraries and code unchanged.
They need to customize and fine-tune every part of their code in order to make it not
only more secure, but different enough so that the same exploit will not work.

An attack can be active or passive. An “active attack” attempts to alter system resources or affect
their operation. A “passive attack” attempts to learn or make use of information from the system
but does not affect system resources (e.g., wiretapping).

5. Man-in-the-middle (MitM) attack

A MitM attack occurs when a hacker inserts itself between the communications of a
client and a server. Here are some common types of man-in-the-middle attacks:

Session hijacking

In this type of MitM attack, an attacker hijacks a session between a trusted client and a
network server. The attacking computer substitutes its IP address for that of the trusted
client while the server continues the session, believing it is communicating with the
client. As an example, the attack might unfold like this:

1. A client connects to a server.

2. The attacker’s computer gains control of the client.

3. The attacker’s computer disconnects the client from the server.

4. The attacker’s computer replaces the client’s IP address with its own IP address and
spoofs the client’s sequence numbers.

5. The attacker’s computer continues dialog with the server and therefore the server
believes it’s still communicating with the client.

IP Spoofing

IP spoofing is used by an attacker to convince a system that it’s communicating with a
known, trusted entity and provide the attacker with access to the system. The attacker
sends a packet with the IP source address of a known, trusted host rather than its own IP
source address to a target host. The target host might accept the packet and act upon it.

Replay

A replay attack occurs when an attacker intercepts and saves old messages and then tries
to send them later, impersonating one of the participants. This sort of attack can easily be
countered with session timestamps or a nonce (a random number or string that changes
with time).

Currently, there is no single technology or configuration that stops all MitM attacks.

Generally, encryption and digital certificates provide an effective safeguard against MitM
attacks, assuring both the confidentiality and integrity of communications. But a man-in-
the-middle attack can be injected into the middle of communications in such a way that
encryption will not help. For instance, attacker “A” intercepts the public key of person “P”
and substitutes it with his own public key. Then, anyone wanting to send an encrypted
message to P using P’s public key is unknowingly using A’s public key. A can therefore
read the message intended for P and then forward it to P, encrypted with P’s real public
key, and P will never notice that the message was compromised. Additionally, A could
also modify the message before resending it to P. P is using encryption and thinks that his
information is protected, but it is not, because of the MitM attack.

So, how can you confirm that P’s public key belongs to P and not to A? Certificate
authorities and hash functions were created to solve this problem. When person 2 (P2)
wants to send a message to P, and P wants to be sure that A will neither read nor modify
the message and that the message actually came from P2, the following method must be
used (a code sketch of the same flow follows the list):
1. P2 creates a symmetric key and encrypts it with P’s public key.
2. P2 sends the encrypted symmetric key to P.
3. P2 computes a hash of the message and digitally signs it.
4. P2 encrypts his message and the message’s signed hash using the symmetric key and
sends the whole thing to P.
5. P is able to recover the symmetric key from P2 because only he has the private key
needed to decrypt it.
6. P, and only P, can decrypt the symmetrically encrypted message and signed hash
because he has the symmetric key.
7. P can verify that the message has not been altered because he can compute the hash of
the received message and compare it with the digitally signed one.
8. P is also able to convince himself that P2 was the sender, because only P2 could have
produced a signature that verifies with P2’s public key.
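
Below is a minimal sketch of steps 1–8 using the third-party Python "cryptography"
package. The key sizes, algorithm choices (RSA-OAEP, RSA-PSS, AES-GCM) and the message
are illustrative assumptions; a real system would obtain P's public key through a
certificate authority rather than generating both key pairs locally.

[plain]
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Long-term key pairs (in practice, P's public key comes from a certificate)
p_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
p2_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
p_public, p2_public = p_private.public_key(), p2_private.public_key()

message = b"meet at noon"                     # illustrative plaintext
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Steps 1-2: P2 creates a symmetric key and encrypts it with P's public key
sym_key = AESGCM.generate_key(bit_length=256)
wrapped_key = p_public.encrypt(sym_key, oaep)

# Step 3: P2 signs a hash of the message with his own private key
signature = p2_private.sign(message, pss, hashes.SHA256())

# Step 4: P2 encrypts message + signature with the symmetric key
nonce = os.urandom(12)
ciphertext = AESGCM(sym_key).encrypt(nonce, message + signature, None)

# Steps 5-6: P unwraps the symmetric key and decrypts the bundle
recovered_key = p_private.decrypt(wrapped_key, oaep)
bundle = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
recovered_msg, recovered_sig = bundle[:-256], bundle[-256:]   # 2048-bit RSA -> 256-byte signature

# Steps 7-8: P verifies integrity and origin (verify() raises if either fails)
p2_public.verify(recovered_sig, recovered_msg, pss, hashes.SHA256())
print("verified message from P2:", recovered_msg)
[/plain]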

6. Phishing and spear phishing attacks

A phishing attack is the practice of sending emails that appear to come from trusted
sources with the goal of gaining personal information or influencing users to do
something. It combines social engineering and technical trickery. It could involve an
attachment to an email that loads malware onto your computer, or a link to an
illegitimate website that tricks you into downloading malware or handing over your
personal information.

Spear phishing is a very targeted form of phishing activity. Attackers take the time to
research their targets and craft messages that are personal and relevant. Because of this,
spear phishing can be very hard to spot and even harder to defend against. One of the
simplest ways a hacker can conduct a spear-phishing attack is email spoofing, where the
information in the “From” section of the email is falsified, making it appear to come from
someone you know, such as your management or a partner company. Another technique
scammers use to add credibility to their story is website cloning: they copy legitimate
websites to fool you into entering personally identifiable information (PII) or login
credentials.

To reduce the danger of being phished, you can use these techniques:

 Critical thinking — don’t accept that an email is the real deal just because you’re busy or
stressed, or because you have 150 other unread messages in your inbox. Stop for a
moment and analyze the email.
 Hovering over the links — move your mouse over a link, but don’t click it! Just let the
cursor hover over the link and see where it would actually take you. Apply critical
thinking to decipher the URL.
 Analyzing email headers — email headers define how an email got to your address. The
“Reply-to” and “Return-Path” parameters should lead to the same domain as is stated in
the email (see the sketch after this list).
 Sandboxing — you can test email content in a sandbox environment, logging activity
from opening the attachment or clicking the links inside the email.
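
As a minimal sketch of the header-analysis idea, the snippet below uses Python's standard
email module to compare the domains of the sender-related headers of a saved message;
the file name "suspect.eml" is purely illustrative.

[plain]
from email import message_from_string
from email.utils import parseaddr

msg = message_from_string(open("suspect.eml").read())   # hypothetical saved message

def domain(header):
    # extract "example.com" from headers like "Alice <alice@example.com>"
    return parseaddr(msg.get(header, ""))[1].split("@")[-1].lower()

domains = {domain(h) for h in ("From", "Reply-To", "Return-Path") if msg.get(h)}

# On a legitimate message these domains normally agree; a mismatch is a
# common (though not conclusive) spoofing indicator.
if len(domains) > 1:
    print("WARNING: sender-related headers point to different domains:", domains)
[/plain]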

7. Drive-by attack

Drive-by download attacks are a common method of spreading malware. Hackers search
for insecure websites and plant a malicious script in the HTTP or PHP code of one of the
pages. This script might install malware directly onto the computer of somebody who
visits the site, or it might redirect the victim to a site controlled by the hackers. Drive-by
downloads can happen when visiting a website or viewing an email message or a pop-up
window. Unlike many other types of cyber security attacks, a drive-by doesn’t rely on the
user doing anything to actively enable the attack — you don’t need to click a download
button or open a malicious email attachment to become infected. A drive-by download
can take advantage of an app, operating system or web browser that contains security
flaws due to missed or failed updates.

To protect yourself from drive-by attacks, you need to keep your browsers and operating
systems up to date and avoid websites that may contain malicious code. Stick with the
sites you normally use — although keep in mind that even these sites can be hacked.
Don’t keep too many unnecessary programs and apps on your device. The more plug-ins
you have, the more vulnerabilities there are that can be exploited by drive-by attacks.

Read More: https://www.info-savvy.com/what-are-different-types-of-attacks-on-a-system/

2. Discuss attack detection and investigation

What is Threat Detection and Response?

Threat detection and response is the practice of identifying any malicious activity
that could compromise the network and then composing a proper response to
mitigate or neutralize the threat before it can exploit any present vulnerabilities.

Within the context of an organization's security program, the concept of "threat


detection" is multifaceted. Even the best security programs must plan for worst-case
scenarios: when someone or something has slipped past their defensive and
preventative technologies and becomes a threat.
Detection and response is where people join forces with technology to address a
breach. A strong threat detection and response program combines people,
processes, and technology to recognize signs of a breach as early as possible, and
take appropriate actions.

Detecting Threats

When it comes to detecting and mitigating threats, speed is crucial. Security programs
must be able to detect threats quickly and efficiently so attackers don’t have enough time
to root around in sensitive data. A business’s defensive programs can ideally stop a
majority of previously seen threats, meaning they should know how to fight them.

These threats are considered "known" threats. However, there are additional
“unknown” threats that an organization aims to detect. This means the organization
hasn't encountered them before, perhaps because the attacker is using new
methods or technologies.

Known threats can sometimes slip past even the best defensive measures, which is
why most security organizations actively look for both known and unknown threats
in their environment. So how can an organization try to detect both known and
unknown threats?

Leveraging Threat Intelligence

Threat intelligence is a way of looking at signature data from previously seen attacks and
comparing it to enterprise data to identify threats. This makes it particularly effective at
detecting known threats, but not unknown ones. Known threats are those that are
recognizable because the malware or attacker infrastructure has been identified as
associated with malicious activity.

Unknown threats are those that haven't been identified in the wild (or are ever-
changing), but threat intelligence suggests that threat actors are targeting a swath
of vulnerable assets, weak credentials, or a specific industry vertical. User behavior
analytics (UBA) are invaluable in helping to quickly identify anomalous behavior -
possibly indicating an unknown threat - across your network. UBA tools establish a
baseline for what is "normal" in a given environment, then leverage analytics (or in
some cases, machine learning) to determine and alert when behavior is straying
from that baseline.
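
As a toy illustration of the baseline idea, the sketch below flags a user whose failed-login
count for the day deviates strongly from a (hypothetical, hard-coded) historical baseline;
a real UBA product would build such baselines from log data automatically.

[plain]
import statistics

# Hypothetical per-user baseline: failed logins per day over recent history
baseline = {"alice": [1, 0, 2, 1, 0, 1], "bob": [0, 1, 1, 0, 2, 1]}
today = {"alice": 2, "bob": 19}

for user, history in baseline.items():
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid division by zero
    if (today[user] - mean) / stdev > 3:        # simple 3-sigma anomaly rule
        print(f"ALERT: {user} deviates from baseline ({today[user]} failed logins today)")
[/plain]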

Attacker behavior analytics (ABA) can expose the various tactics, techniques, and
procedures (TTPs) by which attackers can gain access to your corporate network.
TTPs include things like malware, cryptojacking (using your assets to mine
cryptocurrency), and confidential data exfiltration.
During a breach, every moment an attacker is undetected is time for them to tunnel
further into your environment. A combination of UBAs and ABAs offer a great
starting point to ensure your security operations center (SOC) is alerted to potential
threats as early as possible in the attack chain.

Responding to Security Incidents

One of the most critical aspects of implementing a proper incident response framework is
stakeholder buy-in and alignment, prior to launching the framework. No one likes
surprises or questions-after-the-fact when important work is waiting to be done.
Fundamental incident response questions include:

 Do teams know who is responsible at each phase of incident response?
 Is the proper chain of communications well understood?
 Do team members know when and how to escalate issues as needed?

A great incident response plan and playbook minimizes the impact of a breach and
ensures things run smoothly, even in a stressful breach scenario. If you're just
getting started, some important considerations include:

 Defining roles and duties for handling incidents: These responsibilities, including
contact information and backups, should be documented in a readily accessible
channel.
 Considering who to loop in: Think beyond IT and security teams to document which
cross-functional or third-party stakeholders – such as legal, PR, your board, or
customers – should be looped in and when. Knowing who owns these various
communications and how they should be executed will help ensure responses run
smoothly and expectations are met along the way.

What Should a Robust Threat Detection Program Employ?

 Security event threat detection technology to aggregate data from events across the
network, including authentication, network access, and logs from critical systems.
 Network threat detection technology to understand traffic patterns on the network
and monitor network traffic, as well as to the internet.
 Endpoint threat detection technology to provide detailed information about possibly
malicious events on user machines, as well as any behavioral or forensic information to
aid in investigating threats.
 Penetration tests, in addition to other preventative controls, to understand detection
telemetry and coordinate a response.
A Proactive Threat Detection Program

To add a bit more to the element of telemetry and being proactive in threat
response, it’s important to understand there is no single solution. Instead, a
combination of tools acts as a net across the entirety of an organization's attack
surface, from end to end, to try and capture threats before they become serious
problems.

Setting Attacker Traps with Honeypots

Some targets are just too tempting for an attacker to pass up. Security teams know
this, so they set traps in hopes that an attacker will take the bait. Within the context
of an organization's network, an intruder trap could include a honeypot target that
may seem to house network services that are especially appealing to an attacker.
These “honey credentials” appear to have user privileges an attacker would need in
order to gain access to sensitive systems or data.

When an attacker goes after this bait, it triggers an alert so the security team knows
there is suspicious activity in the network they should investigate. Learn more about
the different types of deception technology.
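
As a minimal sketch of the trap idea, the listener below binds to a port that no legitimate
service uses and treats any connection as suspicious; the port number and alert handling
are illustrative assumptions only.

[plain]
import socket
from datetime import datetime

# Nothing legitimate should ever connect here, so every hit is worth an alert.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as trap:
    trap.bind(("0.0.0.0", 2222))      # arbitrary unused port
    trap.listen()
    while True:
        conn, addr = trap.accept()
        print(f"[{datetime.now().isoformat()}] ALERT: probe from {addr[0]}:{addr[1]}")
        conn.close()                  # a real honeypot might also log any banner sent
[/plain]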

Threat Hunting

Instead of waiting for a threat to appear in the organization's network, a threat hunt
enables security analysts to actively go out into their own network, endpoints, and
security technology to look for threats or attackers that may be lurking as-yet
undetected. This is an advanced technique generally performed by veteran security
and threat analysts.

By employing a combination of these proactively defensive methods, a security team can
monitor the security of the organization's employees, data, and critical assets. They’ll also
increase their chances of quickly detecting and mitigating a threat.

3. Discuss Anti Forensics

Anti-forensics – Part 1

What anti-forensics is about

As already mentioned, anti-forensics aims to make investigations on digital media more
difficult and therefore more expensive. Usually, it is possible to distinguish anti-forensic
techniques in specific
is possible to distinguish anti-forensic techniques in specific
categories, each of which is particularly meant to attack one or more
steps that will be performed by analysts during their activity. All
forensic analysts in fact, from either private or public laboratories, like
that of police for example, will take specific steps during each phase
of the analysis of a new case.

Knowing these steps, generally summarized as “Identification,” “Acquisition,” “Analysis”
and “Reporting,” is the first measure toward understanding the benefits and limitations of
each anti-forensic technique. As in many other branches of information security, a good
level of security is achieved through a layered approach to the problem. This means that
attacking only one of the steps taken by the investigators often does not lead to the
desired result. Furthermore, an expert analyst will, in the best of cases, still be able to
demonstrate that they dealt with some evidence, even without knowing the content of
that evidence.

Attacking the identification, acquisition and analysis phases of evidence-gathering
together, on the other hand, goes a long way toward ensuring the opposite.

These are the general anti-forensic categories discussed within this document:

 Data Hiding, Obfuscation and Encryption

 Data Forgery

 Data Deletion and Physical Destruction

 Analysis Prevention

 Online Anonymity
Note that not all anti-forensic methods are covered by this document,
due to space reasons or because some are very easy to detect. Details
on some of these techniques have been deliberately omitted, always
for space reasons and because of the large amount of information
already present online.

– Data Hiding, obfuscation and encryption

Obviously, the great advantage of hiding data is that it remains available when needed.
Regardless of the operating
system, using the physical disk for data hiding is a widely used
technique, but those related to the OS or the file system in use are
quite common. In the use of physical disk for data hiding, these
techniques are made feasible due to some options implemented during
their production that are intended to facilitate their compatibility and
their diffusion, while other concealment methods take advantage of
the data management property of the operating system and/or file
system. At this stage, we are going to attack, as we can imagine, the
first phase of an investigation: “Identification.”

If evidence cannot be found, in fact, it will be neither analyzed nor reported.

Unused space in MBR

Most hard drives have, at the beginning, some space reserved for the MBR
(Master Boot Record). This contains the code needed to begin loading an OS as well as
the partition table. The MBR also defines the location and size of each partition, up to a
maximum of four, and it requires only a single sector. Between the MBR and the first
partition we can therefore find 62 unused sectors (sector 63 is traditionally considered
the start of cylinder 1, and for a classic DOS-style partition table the first partition needs
to start there).

This results in 62 unused sectors where we can hide data. Although the size of data that
we can “hide” in this area is limited, an expert investigator will definitely look at its
contents to search for compromising material.
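
A quick way to see whether this gap is in use is simply to read those sectors and check for
non-zero content. The sketch below assumes a raw disk device at /dev/sda (an assumption;
adjust for the system at hand) and requires root privileges.

[plain]
# Read the 62 sectors between the MBR and a first partition starting at sector 63
# and report whether they contain anything other than zeros.
with open("/dev/sda", "rb") as disk:
    disk.seek(512)               # skip sector 0, the MBR itself
    gap = disk.read(62 * 512)

print("non-zero bytes in MBR gap:", sum(b != 0 for b in gap))
[/plain]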

HPA Area

The most common technique to hide data at the hardware level is to use the HPA (Host
Protected Area) of the disk. This is generally an area not accessible by the OS and is
usually used only for recovery operations. This area is also invisible to certain forensic
tools and is therefore ideal for hiding data that we do not want to be found easily. The
following image shows a representation of the HPA within a physical medium.

The starting address of this area is the sector immediately following the last user-
addressable sector set with the ATA command SET MAX ADDRESS (i.e., SET MAX
ADDRESS + 1 sector). Some forensic tools such as EnCase, The Sleuth Kit, and ATA
Forensics are able to counter the use of the HPA for hiding data. For example, the
“disk_stat” utility of The Sleuth Kit can detect this area, and a temporary reset can be
performed with the “disk_sreset” utility.

Example:

[plain]# disk_stat /dev/hdb


Maximum Disk Sector: 120103199
Maximum User Sector: 118006047
** HPA Detected (Sectors 118006048 – 120103199) **
[/plain]

Another method to find HPA areas is to compare the ATA commands IDENTIFY_DEVICE,
which reports the maximum number of user-addressable sectors, and
READ_NATIVE_MAX, which returns the maximum number of sectors on the disk
regardless of a possible HPA. If the values returned by these two commands differ, there
is a high probability that an HPA is in place. We can also proceed with its removal by
using the SET_MAX_ADDRESS command to update the number of addressable sectors.

DCO area

The use of the DCO (Device Configuration Overlay) is another good way to hide
potentially incriminating data. It was introduced as an optional feature in the ATA-6
standard. This technique is stealthier than the use of the HPA and is also less known. The
following image shows a representation of the DCO within a physical medium.

The most effective way to detect DCO areas remains the ATA command DEVICE
CONFIGURATION IDENTIFY, which is able to show the real size of a disk. Comparing the
output of this command with that of the READ_NATIVE_MAX_ADDRESS command
makes it easy to find any hidden areas. It is also important to note that “The ATA Forensic
Tool” is able to find hidden areas of this kind as well.

Use of Slack space

The “Slack Space,” in a nutshell, is the unused space between the end
of a stored file, and the end of a given data unit, also known as cluster
or block. When a file is written into the disk, and it doesn’t occupy the
entire cluster, the remaining space is called slack space. It’s very
simple to imagine that this space can be used to store secret
information.
The image below shows a graphical representation of how the slack
space can appear within a cluster:

The use of this technique is quite widespread, and is more commonly known as “file
slack.” However, there are many other places to hide
data through the “slack space” technique, such as the so-called
“Partition Slack.” A file system usually allocates data in clusters or
blocks as already mentioned, where a cluster represents more
consecutive sectors. If the total number of sectors in a partition is not
a multiple of the cluster size, there will be some sectors at the end of
the partition that cannot be accessed by the OS, and that could be
used to hide data.
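
As a tiny worked example of how much room file slack offers, assume a 4,096-byte cluster
(a typical NTFS default) and a 10,500-byte file; the numbers are purely illustrative.

[plain]
CLUSTER = 4096                                     # bytes per cluster (assumed)
file_size = 10_500                                 # hypothetical stored file

clusters_used = -(-file_size // CLUSTER)           # ceiling division
file_slack = clusters_used * CLUSTER - file_size   # unused bytes in the last cluster
print(clusters_used, "clusters,", file_slack, "bytes of file slack")   # 3 clusters, 1788 bytes
[/plain]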

Another common technique is to mark some fully usable sectors as “bad” in such a way
that these will no longer be accessible by the OS.
By manipulating file system metadata that identifies “bad blocks”
like $BadClus in NTFS, it’s possible to obtain blocks that will contain
hidden data. It’s also possible to get a similar result with a more low-
level manipulation (see Primary Defect List, Grown Defect
List and System Area).

Another technique worth mentioning in this category is the use of additional clusters
when a file is stored on disk. For example, if we use 10 clusters to allocate the content of
a file that should take only 5, we have 5 clusters that we can use to store hidden data. For
this technique, files that do not change their size over time are usually chosen, to avoid
overwriting the hidden data; OS files are therefore often used for this operation.
Alternate data stream and extended file attributes

The content of a file, in NTFS, is stored in the $Data attribute of its MFT record. By
default, all data of a file is stored in a single $Data attribute. In a nutshell, and leaving
aside the immense amount of information on the web about ADS, a file may have more
than one $Data attribute, offering a simple way to hide data in an NTFS file system (a
small illustration follows below). Similar to Alternate Data Streams in NTFS, Linux
supports a feature called “xattr” (Extended Attributes) that can also be used to associate
more than one data stream with a file. Common countermeasures to these hiding
techniques include paying special attention to less-used storage areas and basing results
on several forensic tools, not just one.
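
On a Windows/NTFS volume, an alternate data stream can be created simply by opening
"<file>:<stream name>"; the sketch below is illustrative and will only work on NTFS.

[plain]
# Runs on Windows/NTFS: the ADS does not change the file's reported size
# and is invisible to a plain "dir" listing ("dir /r" reveals it).
with open("report.txt", "w") as f:
    f.write("visible contents")

with open("report.txt:hidden", "w") as f:
    f.write("contents stored in the alternate data stream")
[/plain]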

It may also be helpful to check the hardware parameters of the media offered by the
manufacturer. Finally, it is also possible, with enough time, to perform a static analysis of
the slack space.

Steganography / background noise

In information security, steganography is a form of security through obscurity.
Steganographic algorithms, unlike cryptographic ones, aim to preserve the “plausible”
form of the data they are intended to protect, so that no suspicion is raised about the
actual secret content. The steganographic technique currently most widespread is the
Least Significant Bit, or LSB. It is based on the fact that a high-resolution image will not
change its overall appearance if we change some minor bits inside it.

For example, consider the 8-bit binary number 11111111 (1 byte): the right-most bit is
considered the least significant because it is the one that, if changed, has the least effect
on the value of the number. Taking a carrier image, therefore, the idea is to break the
message down into its binary form and place it in the LSBs of each pixel of the image.
Steganography, obviously, may be used with many types of file formats, such as audio,
video, binary and text. Other steganographic techniques that should be mentioned are
Bit-Plane Complexity Segmentation (BPCS), Chaos Based Spread Spectrum Image
Steganography (CSSIS) and Permutation Steganography (PS). Going into the details of all
steganographic algorithms, however, is beyond the scope of this document and should be
dealt with in a dedicated paper.
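
As a minimal sketch of LSB embedding, the function below hides a short text message in
the lowest bit of each RGB channel of a cover image using the Pillow library; function and
file names are illustrative, and a lossless output format is required so the LSBs survive.

[plain]
from PIL import Image

def embed_lsb(cover_path, message, out_path):
    img = Image.open(cover_path).convert("RGB")
    # 4-byte length prefix so a matching extractor knows when to stop reading
    payload = len(message).to_bytes(4, "big") + message.encode()
    bits = iter((byte >> i) & 1 for byte in payload for i in range(7, -1, -1))

    pixels = []
    for r, g, b in img.getdata():
        try:
            r = (r & ~1) | next(bits)   # overwrite only the least significant bit
            g = (g & ~1) | next(bits)
            b = (b & ~1) | next(bits)
        except StopIteration:
            pass                        # message fully embedded; copy the rest as-is
        pixels.append((r, g, b))

    out = Image.new("RGB", img.size)
    out.putdata(pixels)
    out.save(out_path, "PNG")           # lossless, so the hidden bits are preserved

embed_lsb("holiday.jpg", "meet at noon", "holiday_stego.png")
[/plain]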

Steganography, alone, cannot guarantee the confidentiality of the


hidden data, although it remains difficult to go back to an original
hidden message without knowing the generation algorithm. However,
if these techniques are associated with cryptographic algorithms, the
security level of the hidden message increases significantly, adding
also the variable of “plausible deniability” about the information
hidden. Finally, it is worth mentioning one of the most widely used pieces of
steganography software over the years: S-Tool. It is dated but still effective, and a lot of
other software designed for this purpose is available online.

Encryption

Encryption is one of the most effective techniques for mitigating forensic analysis; we
might call it the nightmare of every analyst. As just mentioned, using a strong
cryptographic algorithm such as AES-256 together with the techniques described above
adds a further, fundamental level of anti-forensic protection for the data that we want to
hide. In addition, the type and content of the protected or hidden information can never
be compared to anything already known, because the ciphertext produced by a good
cryptographic algorithm is computationally indistinguishable from a random data
stream, adding so-called “plausible deniability” on top of all our encrypted documents.

The most widely used tool for anti-forensics encryption is certainly TrueCrypt, an open
source tool that is able to create and mount virtual encrypted disks for Windows, Linux
and OS X systems.

Generally, in the presence of a mounted encrypted volume, a forensic analyst will
certainly try to capture its contents before the volume is unmounted. If the machine is
turned off, the only option for acquiring the content of a dismounted encrypted drive is a
brute-force password-guessing attack. (“Rubber-hose” cryptanalysis is not covered by this
document :>)

A noteworthy feature of TrueCrypt is that when using it for full disk encryption, it leaves
a “TrueCrypt Boot Loader” string in its boot loader that can help a forensic analyst in the
recognition of a TrueCrypt encrypted disk, as shown in the image below:

Using a good Hex Editor however, it’s possible to modify this string
with something random or misleading. Because of the large amount of
information about both the tools and symmetric cryptographic
algorithms, these will not be discussed in detail in this document, but,
generally, it is very important to always maintain a reasonable doubt
on the use of cryptographic algorithms within our systems, especially
to avoid an analyst being able to prove the presence of an encrypted
area.

In principle, the countermeasures to such techniques are quite limited. Besides the
already mentioned brute-force password guessing, the rest is limited to attempting a
“live” analysis of the volume and an “entropy test” designed to determine whether the
data has been encrypted using a known algorithm. Another possibility is the exploitation
of vulnerabilities in the encryption algorithm, especially in the case of customized ones.
Otherwise, good luck.
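
A simple entropy test of the kind mentioned above can be sketched in a few lines of
Python: Shannon entropy close to 8 bits per byte over a large region is a strong hint of
encrypted (or compressed) content. The file path is illustrative.

[plain]
import collections
import math

def shannon_entropy(path, block=1024 * 1024):
    counts, total = collections.Counter(), 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            counts.update(chunk)
            total += len(chunk)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Values close to 8.0 suggest encrypted or compressed data; plain text and
# most documents usually sit well below that.
print(round(shannon_entropy("suspect.bin"), 3))
[/plain]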

Rootkits

Rootkits are often used to mask files, directories, registry keys and
active processes, and lend themselves to in-depth considerations.
They are, of course, effective only in the course of a live analysis of
the system under investigation.

Usually, we can divide them into two main categories, closely related
to the area in which they work: UserSpace Rootkit (Ring 3)
and KernelSpace Rootkit (Ring 0). Because they are able to alter the
resulting output of standard system function calls, they can,
consequently, also alter the results of forensics tools.

There are many ways to deliberately implement rootkit techniques within our own
system in order to hide information, the details of which would need a dedicated paper.
Generally, they make use of code or DLL injection, at least for those at Ring 3, through
which to perform hooking or patching of commonly used API functions. Ring 0 rootkits,
on the other hand, are usually implemented using the operating system’s support for
kernel-mode device drivers. Although their use may seem unlikely in systems subject to
investigation, they can be present, especially in cases related to cyber crime where we
suspect good technical skill on the part of the owner.

In order to show that it can be relatively easy to implement these techniques, we can
create a simple global DLL injection by changing the value of the registry key
“HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs” so
that an arbitrary library is loaded into all processes that use User32.dll, which only a few
do not.

The piece of code presented below shows the main instructions for changing the value of
this key:

[plain]
// Open the AppInit_DLLs key and register our library so it is injected into
// every process that loads User32.dll.
RegOpenKeyEx(HKEY_LOCAL_MACHINE,
    "Software\\Microsoft\\Windows NT\\CurrentVersion\\Windows",
    0, KEY_SET_VALUE, &hKey);
GetWindowsDirectory(PATH_SISTEMA, sizeof(PATH_SISTEMA));
strcat_s(PATH_SISTEMA, sizeof(PATH_SISTEMA), "\\my_rootkit_library.dll");
RegSetValueEx(hKey, "AppInit_DLLs", 0, REG_SZ,
    (const unsigned char *)PATH_SISTEMA, sizeof(PATH_SISTEMA));
[/plain]

Next, this is the main code of our custom-prepared DLL, which hides a specific process
in the system through the use of the Mhook library:

[plain]
// Fragment of the hooked NtQuerySystemInformation handler: walk the returned
// list of SYSTEM_PROCESS_INFORMATION entries and unlink the one whose image
// name matches the process we want to hide.
do
{
    pCurrent = pNext;
    pNext = (PMY_SYSTEM_PROCESS_INFORMATION)((PUCHAR)pCurrent +
                                              pCurrent->NextEntryOffset);

    if (!wcsncmp(pNext->ImageName.Buffer, L"notepad.exe",
                 pNext->ImageName.Length / sizeof(WCHAR)))
    {
        if (0 == pNext->NextEntryOffset)
        {
            // Hidden entry is the last one: terminate the list at pCurrent.
            pCurrent->NextEntryOffset = 0;
        }
        else
        {
            // Skip over the hidden entry by extending pCurrent's offset.
            pCurrent->NextEntryOffset += pNext->NextEntryOffset;
        }

        pNext = pCurrent;
    }
} while (pCurrent->NextEntryOffset != 0);
[/plain]

Once the value of the registry key is modified, the result is that the
process “notepad.exe” will no longer be shown in software like Task
Manager or Process Explorer.

Nothing new here, but it is a useful example to show that certain checks have to be made
during the analysis of systems belonging to people with good technical skills. These
checks usually translate into “Integrity Check,” “Signature Based Rootkit Detection,”
“Hook Detection” and “Cross-Checks.”

Often, however, for more complex, custom-built threats these checks may not be
sufficient. By simultaneously using different rootkit techniques in userspace and
kernelspace, it is possible to modify an operating system so deeply that the standard
detection algorithms of common anti-rootkit tools fail to find the hidden active
processes. The following image shows, in fact, how a custom and powerful rootkit (the
details of which will not be discussed in this document) can alter the final “Process List”
output of GMER, a well-known anti-rootkit tool, while the rootkit hides the process
“cmd.exe”…

No hidden process has been detected, while “cmd.exe” is running.

In this case, there are standard procedures to be undertaken when we suspect such
concealment.

These procedures include the acquisition of volatile memory and the capture of any
network traffic, in addition, of course, to the static analysis of the media.

AV / AR scans must also necessarily be included among these countermeasures.

Data Forgery

Data forgery is also a practice aimed at avoiding the identification of incriminating
material. In addition to changing file extensions, there are other methods that can
significantly falsify the true nature of information.

Even in this case, the technique relies on the fact that if information cannot be identified,
it cannot be analyzed.

Transmogrification

The easiest way to implement this technique is to modify the header of a file so that it
can no longer be associated with any known file type. For example, a PE executable file
always starts with the two-byte value shown below:

HEX -> \x4D \x5A / ASCII -> MZ


Many forensic tools for recovering files within the analyzed systems refer to these
parameters, sometimes only the header and sometimes both header and footer.
Obviously, by changing these values, and restoring them only when necessary, it is
possible to avoid detection of a hypothetical compromising document (a minimal sketch
follows below). This approach is adopted by “Transmogrify,” an anti-forensic tool
developed by the MAFIA (Metasploit Anti-Forensic Investigation Arsenal). The technique
basically aims to deceive the signature-based scan engine of these tools.
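
The idea can be illustrated in a few lines of Python: overwrite the two-byte “MZ”
signature so signature-based carvers no longer recognise the file as a PE executable,
keeping a note of the original bytes so the change can be reversed. The file name is
illustrative only.

[plain]
path = "tool.exe"                  # hypothetical executable to disguise
with open(path, "r+b") as f:
    original = f.read(2)           # b"MZ" (\x4D\x5A) for a PE file
    f.seek(0)
    f.write(b"\x00\x00")           # clobber the signature

print("saved original header bytes:", original.hex())   # restore them later to reverse
[/plain]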

Deceive Exe strings analysis

When an investigator comes across a suspicious executable, he usually limits his analysis
to an inspection of its internal strings, to get an idea of its general purpose and origin. To
make this analysis less useful, it is possible to use exe compressors or encryptors. This
technique usually makes the reconstruction of the operations performed by our files
much more difficult, depending on the level of protection put in place and on how the
file interacts with virtual environments and dynamic analysis.

There are many packers and encryptors available for this purpose, but
most of them are easily identifiable and, usually, it is very easy to
recover the code protected through them. In contrast, a custom
“homemade” packer definitely adds a new level of security in this
sense, as it will almost certainly require the assistance of a
specialized software reverse engineer (increasing the cost and time of
analysis) to analyze it in depth.

Even an executable protected with a custom packer, however, often has very few strings
embedded and functions imported. This immediately catches the attention and could
lead one to think about some kind of malicious software.
Assuming that the reader is already familiar with the general
functioning of exe packers, a way to address this issue is to wrap the
stub program with valid code, so that it looks like a legitimate
application. In addition, we can also “decorate” the stub program with
some misleading string so that the analyst has the opportunity to read
something.

Another good way to make life difficult for analysts is to employ a cryptor that protects
some important routines of our software.
But then we have the problem of where to store the decryption key
within the software. One solution is to use many keys for the
decryption of our routine by dividing our program into multiple
segments, each of which will make use of a different key.

Of course, each key will not be stored in the clear in our program (even if compressed),
but will be generated at runtime from an initial value of our choosing. To conclude, it is
important to remember that sooner or later our program will have to run, so this software
protection should be considered a temporary countermeasure against analysis: a good
reverser will still be able to recover the original protected instructions.

Timestamp alterations / MACB scrambling

In a few words that summarize this sub-chapter, the purpose of these activities is to
prevent a reliable reconstruction of the operations performed by a user or during the
breach of a system.

Usually, these events are reconstructed in a “timeline,” primarily through the use of the
MACB timestamp parameters of the file system, where MACB stands for “Modified,
Accessed, Changed, Birth.”

It’s important to note that not all file systems record the same
information about these parameters and not all operating systems
take advantage of the opportunity given by the file system to record
this information.

A simple way to do so is described here: http://www.forensicswiki.org/wiki/Timestomp.
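
As a minimal sketch of the idea, the snippet below pushes a file's modified and accessed
times back to an arbitrary date using only the Python standard library; the file name and
date are illustrative. Note that the “changed” and “birth” timestamps are file-system
dependent and are not reachable through os.utime.

[plain]
import os
from datetime import datetime

target = "report.docx"                              # hypothetical file
fake = datetime(2019, 3, 1, 9, 30).timestamp()      # arbitrary past moment

os.utime(target, (fake, fake))                      # (access time, modification time)
[/plain]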

Log files

There’s not much to say about the log files. Every computer
professional knows of their existence and the ease with which they
can be altered. Specifically, in contrast to a forensic analysis, the log
files can be altered in order to insert dummy, misleading or malformed
data. Simply, they can also be destroyed. However, the latter case is
not recommended, because a forensic analyst expects to find some
data if he goes to look for them in a specific place, and, if he doesn’t
find them, will immediately think that some manipulation is in place,
which of course could also be demonstrated. The best way to deal
with log files is to allow the analyst to find what he is looking for, but
of course making sure that he will see what we want him to see.

It’s good to know that the first thing that a forensic analyst will do if
he suspects a log alteration, will be to try to find as many alternative
sources as possible, both inside and outside of the analyzed system.
So it is good to pay attention to any log files replicated or redundant
(backups?!).

Data deletion

The first mission of a forensic examiner is to find as much information as possible (files)
relating to the current investigation. For this purpose, he will do anything to try to
recover as many files as possible from among those deleted or fragmented. However,
there are some practices that prevent or hinder this process very efficiently.
Wiping

If you want to irreversibly delete your data, you should consider the
adoption of this technique. When we delete a file in our system, the
space it formally occupied is in fact marked only as free. The content
of this space, however, remains available, and a forensics analyst
could still recover it. The technique known as “disk wiping” overwrites
this “space” with random data or with the same data for each sector of
disk, in such a way that the original data is no longer recoverable.
Generally, in order to counter advanced file-recovery techniques, multiple passes over
each sector and specific overwriting patterns are adopted.

“Data wiping” can be performed at the software level, with dedicated programs that are
able to overwrite entire disks or only the specific areas occupied by individual files.
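
A minimal sketch of per-file wiping in Python is shown below; it simply overwrites the
file's contents in place before unlinking it. Note the caveat in the comments: on SSDs and
on journaling or copy-on-write file systems the old blocks may survive elsewhere, so this
is best-effort only, and the number of passes is an arbitrary choice.

[plain]
import os

def wipe_file(path, passes=3):
    """Overwrite a file's contents with random data, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite to the device
    os.remove(path)

wipe_file("draft_notes.txt")       # hypothetical file
[/plain]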

Meta-data shredding

The technique of “Meta-Data Shredding,” in addition to overwriting the bytes that
constitute a file, also destroys all metadata associated with it.

Physical destruction

The technique of physical destruction of media is certainly self-explanatory. However, we
should focus on the most effective and clean of these methods: disk degaussing.

“Degaussing” refers to the process of reducing or eliminating a magnetic field. For hard
drives, floppy disks or magnetic tape, this means a total cancellation of the data they
contain.
Although it’s very effective, degaussing is a technique rarely used
because of the high costs of the equipment needed to put it into
practice. In view of modern magnetic media, to use this technique
means to make the media totally unusable for future writings.

Another spectacular and certainly “hot” technique for media destruction is surely this:
http://www.youtube.com/watch?v=k-ckechIqW0

4. elaborate case studies on File system

CASE STUDY: File Analysis - Real World Stories

It has been estimated that 80-90% of all business data is unstructured, and 80% of this is
never even retrieved again once it has served its purpose.

With the recent rapid introduction of remote collaboration tools, information is
becoming even less structured, harder to find across ecosystems (multiple repositories
and applications), and more difficult to manage. Further, with the availability of features
such as the Teams recording option to transcribe your meetings, meeting information is
being indexed and becoming much more historically valuable (but rarely managed).

While eDRMS tools such as Content Manager allow you to structure your information
and manage it appropriately, we have seen a number of organisations still struggle to
contain all of their corporate information to a single managed system, which can then
introduce security, searchability and governance risks.

Users will be users, and if they can save information somewhere more convenient, they
will!

There is, however, a solution that does not require businesses to radically change the
way they do everything.

File analysis tools allow organisations to connect to multiple sets of different types of
data, understand exactly what they are holding, and action it. This includes file shares,
emails, archive systems, SharePoint, eDRMS and many more.
ControlPoint and Data Discovery, two leading file analysis products from Micro Focus,
have been used by WyldLynx both internally and extensively with a number of clients
over the past 5 years.

With very similar file analysis functionality, the main difference between the two
products is that ControlPoint is run as a locally installed product, where Data Discovery
is cloud based. Click the links to learn more about these two powerful and helpful tools.

The introduction of these tools has allowed for a vast number of wins relating to legacy
data clean up and locating key data, including:

 Identifying duplicates, trivial files, and information no longer required, and cleaning up
this information to create more space
 Scouring repositories for key project/sensitive information and relocating it to the
managed information system
 Responding to Information Requests with certainty that all information stores have been
assessed
 Responding to potential security breaches with confidence that you know what is being
held
 Actioning Commission of Inquiry or Royal Commission disposal freezes by locating and
placing a hold on in-scope information
 Understanding the history and story of data growth and analysing it based on a range of
filters

Throughout these processes, we have seen both tangible and non-tangible benefits for
all of our analytics customers. Here are a few highlights from some of our experiences
with ControlPoint and Data Discovery on actual, real world business data:

Save Money in Storage Alone

This customer had 20 terabytes of unstructured data, of which 50%(!) was duplicated
across their environment. By de-duplicating the data they estimated a saving of $2.5
million over 5 years in storage costs alone.

Respond to a Disposal Freeze

The Federal and QLD government issued a disposal freeze on records relating to
vulnerable people. File analytics was used to locate information in scope, and then apply
a hold to these records so that they could not be deleted or edited until the disposal
freeze was lifted.

Respond to Requests for Information


One customer had a multitude of ways to obtain information from a range of systems
when an information request was submitted. However, they were not always confident
that they could produce everything within the scope of the request. By scanning their
multiple systems through one interface, they were able to successfully and confidently
locate the information using content and metadata key words.

Locate sensitive data such as credit card information

WyldLynx always reiterates to our customers that no matter what policies they have in
place for storing sensitive information (such as credit card details), users will be users
and may not actually consider the policy while in the process of doing their work. After
undertaking a pattern matching exercise across their data stores, we found that 1% of a
customer's unstructured (and unsecured) data contained a pattern matching a credit
card pattern. This allowed the customer to relocate and secure this information into
Content Manager with the correct security applied.

Sensitive Government Information

Sensitive documents relating to Board and Executive meetings were held in an
unstructured file system, allowing anyone with access the ability to delete them, as well
as view or alter information contributing to senate briefs. The customer was able to
locate these and secure them into the relevant repository, reducing the risk of a breach of
confidentiality and providing confidence in due process.

Cloud migration

A customer needed to make sure they did not transfer their 'junk' from their on-premise
storage over to SharePoint during their cloud transition. They were able to relocate their
transient information (non-records) from the file system into an archive drive, and
relocate only what was relevant to the cloud (using time and content-based rules and
policies). The transient information was then due to be destroyed automatically after
two years of not being accessed.

As you can see there is a large range of use cases for file analytics, and the list is
continuing to grow.

Customers are seeking more automated information management options,
and ControlPoint and Data Discovery from Micro Focus allow organisations to both
define their needs and actually apply policies that meet those needs, all from a single
interface.
5. elaborate case studies on network storage

Five case studies of interest to corporate investigators

Attorneys, forensic professionals and e-discovery providers have become very
comfortable working with traditional types of digital evidence (e.g., email, text messages,
spreadsheets, word processing files). There is a lot to be learned there, but technology
evolves rapidly. As our world becomes increasingly digitalized, there are ever more
creative ways to find information. Here we look at five cases that show the potential in
underused and emerging types of digital evidence.

This article assumes that most investigations attorneys are at least passingly familiar
with the basics of digital forensics [i] and that most investigations attorneys are also
experienced with the review of traditional types of digital evidence such as email, text
messages and standard PC files [ii] to develop an understanding of underlying facts. [iii]

Case study one: Locational data – geotags

The facts: Following the Russian annexation of Crimea in February 2014, international
tensions built over allegations that Russian troops were operating in other parts of
Ukraine. Russian officials repeatedly denied these allegations. [iv] Starting in late June
2014, Alexander Sotkin, a sergeant in the Russian Army, posted a month-long series of
selfies taken from his cell phone to his public Instagram account. The press picked the
story up when it was discovered that the posted jpeg files included geotag metadata, and
that the geotags and pictures showed the sergeant moving on duty from a military base
in Russia into eastern Ukraine and then back to the base. [v]

The takeaway: Geotags, such as those embedded in Sotkin’s pictures, are a form of
locational metadata. Geotags generated by smartphones tend to be very accurate and are
associated with other types of file metadata, like date- and timestamps. Combine these
attributes with the conventional wisdom that a picture is worth a thousand words and
reports showing that smartphone users take over 150 pictures per month, and you have a
treasure trove of data to pin down who/what/when/where details during an
investigation.

Geotags and other types of locational data can also be embedded in other types of files,
such as video files and SMS text messages. Other cell phone locational data can be drawn
from routes stored in mapping applications, Wi-Fi connections, cell towers in call history
and applications like weather or real estate tools.
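
As a minimal sketch of how such geotags can be pulled from a photo, the snippet below
reads the GPSInfo block from a JPEG's EXIF data with the Pillow library; the file name is
illustrative, and real tools would also convert the degree/minute/second values into
decimal coordinates.

[plain]
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPSINFO_TAG = 34853   # standard EXIF tag id for the GPSInfo block

def gps_from_jpeg(path):
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(GPSINFO_TAG, {})
    # translate numeric sub-tags (1, 2, 3, ...) into readable names
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

print(gps_from_jpeg("selfie.jpg"))   # e.g. {'GPSLatitude': (...), 'GPSLongitude': (...)}
[/plain]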

Case study two: Wearable sensors

The facts: Connie Dabate was murdered in her home in 2015. According to his arrest warrant, her husband Richard provided an elaborate explanation of the day’s events, claiming that he returned home after receiving an alarm alert. Richard went on to claim that, upon entering his house, he was immobilized and tortured by an intruder. He told police that the intruder then shot and killed Connie when she returned home from the gym. Relying on evidence collected from Connie’s Fitbit, police were able to show that she had been in the house at the time Richard said she was at the gym. According to the Fitbit’s data, Connie stopped moving one minute before the home alarm went off.vi

The takeaway: Wearable devices like Fitbits monitor location via GPS
and activities like distance traveled, steps taken, sleep time and heart
rate. The devices are configured to synchronize data to applications
on smartphones and personal computers or to cloud or social media
sites. Evidentiary collections can be made from any of these sources using standard digital forensics tools and techniques.
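A minimal sketch of how synchronized wearable data can be turned into a timeline is shown below. It assumes a hypothetical JSON export of per-minute step counts (the file name and field names are invented for illustration, not an actual vendor schema) and simply reports the last minute in which movement was recorded, the kind of fact that mattered in the Dabate case.

# Sketch: find the last recorded movement in a hypothetical wearable export.
# The JSON layout is assumed for illustration, not an actual vendor schema.
import json

def last_movement(export_path):
    with open(export_path) as f:
        # e.g. [{"time": "2015-12-23T09:01:00", "steps": 12}, ...]
        samples = json.load(f)
    moving = [s for s in samples if s["steps"] > 0]
    moving.sort(key=lambda s: s["time"])
    return moving[-1]["time"] if moving else None

print("Last recorded movement:", last_movement("fitbit_steps.json"))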

Case study three: Data from asset trackers – sensors and IoT devices

The facts: The case of Howze v. Western Express, Inc.vii revolved around injuries caused when a tractor-trailer forced a motorcycle off the road. The truck in question could not be definitively identified by an eyewitness, although the witness recalled that the trailer logo read “Western Express.” The defendant’s trucks were equipped with asset trackers which included a GPS feature. Data from the trackers was collected and retained in a centralized database. The defendant claimed that a search of the database showed that it had no trucks on the road in question on the night of the accident. To counter that claim, the plaintiff cited Western Express’ six-month GPS data retention policy, and challenged the validity of the defendant’s search, which was conducted 27 months after the accident. The judge decided that there was a question of material fact that needed to be sorted out by a jury.

The takeaway: Asset trackers take advantage of GPS, Wi-Fi and Bluetooth technology to allow organizations to monitor their moveable assets. They may collect basic locational data or may have expanded features that capture other information like diagnostics, messaging, weather conditions, or compliance data. They are used to track high-value, moveable assets (e.g., fleet vehicles, construction equipment, medical devices) and are starting to show up in the growing array of consumer IoT devices.viii Howze helps demonstrate that asset tracker evidence is highly probative. It is also highly available to investigators who are working for an organization that owns or finances the asset. As in Howze, the client’s database can be searched or the data can be extracted to a better platform to help understand and preserve the who/what/when/where details in a controlled manner. The investigators can avoid having to examine the asset itself or involve the asset custodian in their inquiry.

Howze also demonstrates the need to handle structured data (i.e., records stored in a database) in a defensible manner. Structured data should be collected and validated early in the investigation to avoid spoliative events like a regularly-scheduled database purge. Handling of the structured data should be defensibly documented. If the dataset is large or if queries are complex, a forensic consultant who understands structured data should be retained. Structured data analytics is a complex discipline and is not included in the standard forensic examiner’s toolkit. Specialists will be needed.
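For example, a defensible extraction from a tracker database might look roughly like the sketch below, which queries location records for a date range and hashes the exported file so the dataset can be validated later in the matter. The SQLite file, table and column names are assumptions made for illustration only.

# Sketch: export GPS records for a date range from a hypothetical tracker
# database copy and hash the export for chain-of-custody documentation.
import csv
import hashlib
import sqlite3

DB = "asset_tracker.db"     # assumed local, preserved copy of the database
QUERY = """
    SELECT vehicle_id, recorded_at, latitude, longitude
    FROM gps_records
    WHERE recorded_at BETWEEN ? AND ?
    ORDER BY recorded_at
"""

def export_with_hash(start, end, out_path):
    conn = sqlite3.connect(DB)
    rows = conn.execute(QUERY, (start, end)).fetchall()
    conn.close()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["vehicle_id", "recorded_at", "latitude", "longitude"])
        writer.writerows(rows)
    with open(out_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

print(export_with_hash("2019-06-01", "2019-06-02", "gps_export.csv"))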

Case study four: Network data reveals theft of trade secrets

The facts: Xiaolang Zhang worked as an engineer for Apple’s autonomous car division. He had been with the company 2 ½ years when he announced that he would be resigning and returning to China to take care of his elderly mother. He told his manager that he would be working for an electric car manufacturer in China. The conversation left the manager suspicious. Company security started an investigation. They searched Zhang’s two work phones and laptop—but were most alarmed when they reviewed Zhang’s network activity. The story the network data told was that Zhang’s activity had spiked to a two-year high in the days leading up to his resignation. It consisted of “bulk searches and targeted downloading copious pages of information” taken from secret databases he could access. When confronted, Zhang admitted to taking company data. The matter was referred to the FBI, and Zhang was indicted for theft of trade secrets.ix

The takeaway: Network forensics is a sub-specialty of digital forensics. It involves analysis of log data from servers and other networking tools (e.g., firewalls, routers, intrusion detection applications) in order to trace or monitor network activity. Attorneys with cyber law practices have become very familiar with network forensics, as it is one of the go-to tools for intrusion and breach detection. Network forensics can involve retroactive analysis or live-stream traffic monitoring. The volume of data collected can be enormous, so data analytics techniques are used heavily.

It used to be the case that network forensics was seldom practiced. To reduce the need for storage hardware, few organizations had their network logging features turned on. Fewer still retained their logs long enough to be of value when investigators came calling. Practices have changed as companies have become more sophisticated and diligent about cyber security. The Zhang case demonstrates that the availability of network data presents opportunities to investigate user activity in non-cyber cases (e.g., a theft of trade secrets matter). As in the Zhang case, network logs can be analyzed to identify mass movements or deletions of data and other suspect user activity.
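A toy illustration of that kind of analysis appears below. It totals bytes downloaded per user per day from a hypothetical proxy log in CSV form and flags days far above a user’s own baseline; real network forensics work relies on dedicated log analytics platforms, but the underlying idea is the same.

# Sketch: flag unusually large per-user download days in a hypothetical
# proxy log. Assumed CSV columns: date, user, bytes_out.
import csv
from collections import defaultdict
from statistics import mean

def flag_spikes(log_path, factor=5.0):
    per_user_day = defaultdict(int)
    with open(log_path) as f:
        for row in csv.DictReader(f):
            per_user_day[(row["user"], row["date"])] += int(row["bytes_out"])

    per_user = defaultdict(list)
    for (user, day), total in per_user_day.items():
        per_user[user].append((day, total))

    flagged = []
    for user, days in per_user.items():
        baseline = mean(total for _day, total in days)
        for day, total in days:
            if total > factor * baseline:
                flagged.append((user, day, total))
    return flagged

for user, day, total in flag_spikes("proxy_log.csv"):
    print(user, "downloaded", total, "bytes on", day, "(well above baseline)")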

6. elaborate case studies on web

Air India data breach highlights third-party risk


Date: May 2021

Impact: personal data of 4.5 million passengers worldwide


Details: A cyberattack on systems at airline data service provider
SITA resulted in the leaking of personal data of passengers of Air
India. The leaked data was collected between August 2011 and
February 2021, when SITA informed the airline. Passengers didn't
hear about it until March, and had to wait until May to learn full
details of what had happened. The cyber-attack on SITA’s
passenger service system also affected Singapore Airlines,
Lufthansa, Malaysia Airlines and Cathay Pacific.

CAT burglar strikes again: 190,000 applicants’ details leaked to dark web

Date: May 2021

Impact: 190,000 CAT applicants’ personal details

Details: The personally identifiable information (PII) and test results of 190,000 candidates for the 2020 Common Admission Test, used to select applicants to the Indian Institutes of Management (IIMs), were leaked and put up for sale on a cybercrime forum. Names, dates of birth, email IDs, mobile numbers, address information, candidates’ 10th and 12th grade results, details of their bachelor’s degrees, and their CAT percentile scores were all revealed in the leaked database.

The data came from the CAT examination conducted on 29 November 2020, but according to security intelligence firm CloudSEK, the same threat actor also leaked the 2019 CAT examination database.

Hacker delivers 180 million Domino’s India pizza orders to dark web

Date: April 2021

Impact: 1 million credit card records and 180 million pizza preferences

Details: 180 million Domino’s India pizza orders are up for sale on the dark web, according to Alon Gal, CTO of cyber intelligence firm Hudson Rock.

Gal found someone asking for 10 bitcoin (roughly $535,000 or ₹4 crore) for 13TB of data that they said included 1 million credit card records and details of 180 million Domino’s India pizza orders, topped with customers’ names, phone numbers, and email addresses. Gal shared a screenshot showing that the hacker also claimed to have details of Domino’s India’s 250 employees, including their Outlook mail archives dating back to 2015.

Jubilant FoodWorks, the parent company of Domino’s India, told IANS that it had experienced an information security incident, but denied that its customers’ financial information was compromised, as it does not store credit card details. The company website shows that it uses a third-party payment gateway, PayTM.

Trading platform Upstox resets passwords after breach report


Date: April 2021

Impact: All Upstox customers had their passwords reset

Details: Indian trading platform Upstox has openly acknowledged a breach of know-your-customer (KYC) data. Gathered by financial services companies to confirm the identity of their customers and prevent fraud or money laundering, KYC data can also be used by hackers to commit identity theft.

On April 11, Upstox told customers it would reset their passwords and take other precautions after it received emails warning that contact data and KYC details held in a third-party data warehouse may have been compromised.

Upstox apologised to customers for the inconvenience, and sought to reassure them it had reported the incident to the relevant authorities, enhanced security and boosted its bug bounty program to encourage ethical hackers to stress-test its systems.

Police exam database with information on 500,000 candidates goes up for sale

Date: February 2021

Impact: 500,000 Indian police personnel


Details: Personally identifiable information of 500,000 Indian police personnel was put up for sale on a database sharing forum. Threat intelligence firm CloudSEK traced the data back to a police exam conducted on 22 December 2019.

The seller shared with CloudSEK a sample of the data dump containing the information of 10,000 exam candidates. The sample shows that the leaked data contained full names, mobile numbers, email IDs, dates of birth, FIR records and the criminal history of the exam candidates.

Further analysis revealed that a majority of the leaked data belonged to candidates from Bihar. The threat-intel firm was also able to confirm the authenticity of the breach by matching mobile numbers with candidates’ names.

This is the second instance of army or police workforce data being leaked online this year. In February, hackers isolated the information of army personnel in Jammu and Kashmir and posted that database on a public website.

COVID-19 test results of Indian patients leaked online


Date: January 2021

Impact: At least 1,500 Indian citizens (real-time number estimated to be higher)

Details: COVID-19 lab test results of thousands of Indian patients have been leaked online by government websites.

What’s particularly worrisome is that the leaked data hasn’t been put up for sale in dark web forums, but is publicly accessible owing to Google indexing COVID-19 lab test reports.

First reported by BleepingComputer, the leaked PDF reports that showed up on Google were hosted on government agencies’ websites that typically use *.gov.in and *.nic.in domains. The agencies in question were found to be located in New Delhi.

The leaked information included patients’ full names, dates of birth, testing dates and the centers in which the tests were held. Furthermore, the URL structures indicated that the reports were hosted on the same CMS that government entities typically use for posting publicly accessible documents.

Niamh Muldoon, senior director of trust and security at OneLogin, said: “What we are seeing here is a failure to educate and enable employees to make informed decisions on how to design, build, test and access software and platforms that process and store sensitive information such as patient records.”

Muldoon added that the government ought to take quick measures to reduce the risk of a similar breach recurring and invest in a comprehensive information security program in partnership with trusted security platform providers.

User data from Juspay for sale on dark web


Date: January 2021

Impact: 35 million user accounts

Details: Details of close to 35 million customer accounts, including masked card data and card fingerprints, were taken from a server using an unrecycled access key, Juspay revealed in early January. The theft took place last August, it said.

The user data is up for sale on the dark web for around $5000,
according to independent cybersecurity researcher Rajshekhar
Rajaharia.

BigBasket user data for sale online


Date: October 2020

Impact: 20 million user accounts

Details: User data from online grocery platform BigBasket is for sale
in an online cybercrime market, according to Atlanta-based cyber
intelligence firm Cyble.

Part of a database containing the personal information of close to 20 million users was available with a price tag of 3 million rupees ($40,000), Cyble said on November 7.

The data comprised names, email IDs, password hashes, PINs, mobile numbers, addresses, dates of birth, locations, and IP addresses. Cyble said it found the data on October 30, and after comparing it with BigBasket users’ information to validate it, reported the apparent breach to BigBasket on November 1.

Unacademy learns lesson about security


Date: May 2020

Impact: 22 million user accounts

Details: Edutech startup Unacademy disclosed a data breach that compromised the accounts of 22 million users. Cybersecurity firm Cyble revealed that usernames, email addresses and passwords were put up for sale on the dark web.

Founded in 2015, Unacademy is backed by investors including Facebook, Sequoia India and Blume Ventures.

Hackers steal healthcare records of 6.8 million Indian citizens


Date: August 2019

Impact: 68 lakh patient and doctor records

Details: Enterprise security firm FireEye revealed that hackers have stolen information about 68 lakh patients and doctors from a health care website based in India. FireEye said the hack was perpetrated by a Chinese hacker group called Fallensky519.

Furthermore, it was revealed that healthcare records were being sold on the dark web – several being available for under USD 2000.

Local search provider JustDial exposes data of 10 crore users


Date: April 2019

Impact: personal data of 10 crore users released

Details: Local search service JustDial faced a data breach on Wednesday, with data of more than 100 million users made publicly available, including their names, email IDs, mobile numbers, gender, date of birth and addresses, an independent security researcher said in a Facebook post.
SBI data breach leaks account details of millions of customers

Date: January 2019

Impact: three million text messages sent to customers divulged

Details: An anonymous security researcher revealed that the country’s largest bank, State Bank of India, left a server unprotected by failing to secure it with a password.

The vulnerability originated in ‘SBI Quick’ – a free service that provided customers with their account balance and recent transactions over SMS. Close to three million text messages were sent out to customers.

7. elaborate case studies on mobile

Police raids around world after investigators crack An0m cryptophone app in major hacking operation

In June, police in 16 countries launched multiple raids after intercepting the communications of organised criminal groups. The gangs had been sending messages on an encrypted communications network, unaware that it was being run by the FBI. This was only one of several similar raids in 2021, which, while successful at disrupting organised and cyber crime, have at the same time surfaced legitimate concerns over the ability of law enforcement to conduct surveillance, and the admissibility of the evidence they collected.
