
COMPUTER SYSTEM SECURITY (KNC301)

UNIT-2
Anubhav Sharma
Asstt.Prof. (CSE/IT)
Confidentiality
Confidentiality refers to protecting information from being accessed by unauthorized
parties. In other words, only the people who are authorized to do so can gain access to
sensitive data.
So, in summary, a breach of confidentiality means that someone gains access to
information who shouldn't have access to it.

Software-based Fault Isolation (SFI):-

A good way to get programs to behave in a manner consistent with a given security
policy is by "brainwashing": that is, modifying the programs so that they behave only in
safe ways. This idea is embodied in an approach to security known as software-based
fault isolation (SFI).

So far, the environment has been responsible for policy enforcement, where the
environment is either the OS/kernel or the hardware. Hardware methods include
addressing mechanisms (e.g. virtual memory); OS methods include having two modes
(where the supervisor mode has access to everything). The new approach we discuss
today is to construct a piece of software that transforms a given program p into a program
p', where p' is guaranteed to satisfy a security policy of interest.
When would the SFI approach be useful?

 If we are trying to run programs on a machine that lacks suitable protection hardware, then with this method we can achieve the functionality that the hardware doesn't support.
 Another application is when hardware doesn't support the security policy of interest.
 The original motivation for this approach was performance. We can use SFI to achieve memory protection at a low cost.
 Another use might be for programs that support plug-ins. We would like to protect the main program (e.g. a browser) from buggy plug-ins.
 Finally, this approach is useful for programs comprising many "objects." Such programs would profit from having sub-address space level protection. If one object "goes bad," it would be nice if that behavior didn't trash everything else.

SFI (Software-based Fault Isolation) for Logical Fault Domains:-


We take a single address space and partition it into regions called logical fault
domains (LFDs). We would like to enforce, without resorting to help from the operating
system, that programs in an LFD are isolated from reading or writing memory outside
their LFD. We also need a cheap way to communicate between LFDs. The policy we
want is as follows (a small sketch of the address-masking idea appears after this list):

 Programs can read and write memory within their LFD. This ensures secrecy and integrity for that memory.
 Jumps (jmps, calls and returns) are to be kept within their own LFD.
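
The core trick in an SFI implementation is address sandboxing: the transformation rewrites every store (and unsafe jump) so that the high-order bits of the target address are overwritten with the LFD's segment identifier, guaranteeing the access lands inside the LFD. The following Python sketch only illustrates the bit arithmetic under an assumed segment size; real SFI rewrites machine code and keeps the sandboxed address in dedicated registers.

# Illustrative sketch of SFI address sandboxing (not real machine-code rewriting).
SEGMENT_BITS = 20                      # assume each LFD owns a 2^20-byte segment
SEGMENT_MASK = (1 << SEGMENT_BITS) - 1

def sandbox_address(addr: int, segment_id: int) -> int:
    """Force 'addr' into the segment owned by 'segment_id'.

    Keep the low offset bits and overwrite the high bits with the LFD's
    segment identifier, so even a corrupted address can only touch memory
    inside the caller's own fault domain.
    """
    offset = addr & SEGMENT_MASK
    return (segment_id << SEGMENT_BITS) | offset

if __name__ == "__main__":
    lfd = 0x5                          # this program's fault domain
    wild_pointer = 0x7FFF1234          # an address far outside the domain
    print(hex(sandbox_address(wild_pointer, lfd)))   # 0x5f1234 -> forced into LFD 5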

There are two systems-level issues that should be mentioned. Multiple LFDs in a single
address space might correspond to multiple programs. Then:

 As with address spaces, it is necessary to support communication between LFDs. Local procedure calls don't work because we need to preserve the sanctity of the dedicated 'tmp' and 'ctmp' registers. So, we need to support some sort of inter-LFD call; it will save and restore the 'tmp' and 'ctmp' registers as necessary.
 Programs running in an LFD will likely require OS/kernel services. However, we
cannot allow direct access to the kernel, because one LFD could affect resources
'owned' by the entire address-space. One solution is to modify kernel services to
support individual LFDs and their associated resources.

There are some problems with using SFI. Clearly, we would like the SFI transformation
process to preserve some properties of the original program. The problem is in defining
precisely what those properties are, since the transformed program obviously doesn't
preserve all properties. For instance, the transformed program will not have the same
number of instructions as the original. Library routines can also be used to circumvent
SFI. In particular, we must prevent programs from invoking library routines that have not
been SFI'd. Finally, denial of service attacks remain possible. For example, one LFD
could acquire a shared lock and then never relinquish it.

Rootkit:-
A Rootkit is a collection of computer software, typically malicious, designed to enable
access to a computer or an area of its software that is not otherwise allowed (for example,
to an unauthorized user) and often masks its existence or the existence of other
software. The term rootkit is a concatenation of "root" (the traditional name of the
privileged account on Unix-like operating systems) and the word "kit" (which refers to
the software components that implement the tool). The term "rootkit" has negative
connotations through its association with malware.[1]
Rootkit installation can be automated, or an attacker can install it after having obtained
root or Administrator access. Obtaining this access is a result of direct attack on a system,
i.e. exploiting a known vulnerability (such as privilege escalation) or
a password (obtained by cracking or social engineering tactics like "phishing"). Once
installed, it becomes possible to hide the intrusion as well as to maintain privileged
access. The key is the root or administrator access. Full control over a system means that
existing software can be modified, including software that might otherwise be used to
detect or circumvent it.
Rootkit detection is difficult because a rootkit may be able to subvert the software that is
intended to find it. Detection methods include using an alternative and trusted operating
system, behavioral-based methods, signature scanning, difference scanning, and memory
dump analysis. Removal can be complicated or practically impossible, especially in cases
where the rootkit resides in the kernel; reinstallation of the operating system may be the
only available solution to the problem.[2] When dealing with firmware rootkits, removal
may require hardware replacement, or specialized equipment.

Confinement:- Confinement ensures that an application does not deviate from pre-approved behavior.

It can be implemented at many levels:
> Hardware: isolated hardware ("air gap"); difficult to manage, and is it sufficient?
> Virtual machines: isolate operating systems on a single piece of hardware
> System call interposition: isolates a process in a single operating system
> Isolating threads sharing the same address space: Software Fault Isolation (SFI), e.g., Google Native Client
Confinement using Virtual Machines:-

Why so popular now?

> VMs in the 1960s: few computers, lots of users. VMs allow many users to share a single computer.
> VMs 1970s – 2000: largely non-existent.
> VMs since 2000: too many computers, too few users. Print server, mail server, web server, file server, database server, … It is wasteful to run each service on a different computer; VMs save hardware while isolating services.
> More generally: VMs are heavily used in cloud computing.

VMM security assumption

> Malware can infect the guest OS and guest apps, but malware cannot escape from the infected VM: it cannot infect the host OS.
> It also cannot infect other VMs on the same hardware.
> This requires that the VMM protect itself and not be buggy; a VMM is much simpler than a full OS … but device drivers run in the host OS.

VMM introspection: protecting the anti-virus system

Example: intrusion detection / anti-virus.

> An IDS or anti-virus tool runs as part of the OS kernel and as a user-space process, so a kernel rootkit can shut down the protection system; this is common practice for modern malware.
> Standard solution: run the IDS in the network. Problem: insufficient visibility into the user's machine.
> Better: run the IDS as part of the VMM (protected from malware); the VMM can monitor the virtual hardware for anomalies.
> VMI (Virtual Machine Introspection) allows the VMM to check guest OS internals.

VMM Detection

Can an OS detect it is running on top of a VMM? Applications:

> A virus detector can detect a VMBR (virtual-machine-based rootkit).
> A normal (non-VMBR) virus can detect the VMM and refuse to run, to avoid reverse engineering.
> Software that binds to hardware (e.g. MS Windows) can refuse to run on top of a VMM.
> DRM systems may refuse to run on top of a VMM.
COMPUTER SYSTEM SECURITY (KNC301)
UNIT-3
Anubhav Sharma
Asstt.Prof. (CSE/IT)

Cross-site scripting (XSS):-


A cross-site scripting (XSS) attack rewrites the structure of a Web page or executes
arbitrary JavaScript within the victim's Web browser. It occurs when a Web site takes
some piece of information from the user such as an e-mail address, a user ID, a comment
to a blog post, or a zip code and displays the information in a Web page. All forms of the
XSS attack rely on the ability of a user-supplied bit of information to be rendered in the
site's Web page.
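
Since every XSS variant depends on user-supplied data being rendered into the page unescaped, the standard defense is to encode that data before it reaches HTML. Here is a minimal sketch in Python, using a hypothetical comment-rendering helper:

import html

def render_comment(user_comment: str) -> str:
    """Return an HTML fragment in which the user's text cannot become markup.

    html.escape turns <, >, &, and quotes into entities, so a payload like
    <script>alert(1)</script> is displayed as text instead of executing.
    """
    return '<p class="comment">' + html.escape(user_comment, quote=True) + "</p>"

if __name__ == "__main__":
    print(render_comment('<script>alert("xss")</script>'))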

A cross-site request forgery (CSRF) attack forces the victim's browser to make a request
without the victim's knowledge or agency. Browsers make requests all the time without
the knowledge or approval of the user: images, frames, and script tags. A CSRF attack
focuses on finding a link that, when requested, performs some action beneficial to the
attacker. The Web browser's same origin policy (SOP) prohibits the interaction between
content pulled from different domains, but it doesn't block a Web page from pulling that
content together. The attacker only needs to forge a request. The content of the site's
response, which is protected by the same origin policy (SOP), is immaterial to the success
of the attack. A CSRF attack would use an iframe or img element to force the user's
browser to accomplish the same query, but to do so without the user's intervention or
knowledge. The page might be hosted on a server controlled by the attacker. One of the
most effective CSRF countermeasures assigns a temporary pseudo-random token to the
sensitive forms or links that may be submitted by an authenticated user. The value of the
token is known only to the Web application and the user's Web browser.
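
The token countermeasure described above can be sketched as follows: the server issues an unpredictable per-session token, embeds it in each sensitive form, and rejects any state-changing request whose token does not match. This is only a minimal illustration, assuming a hypothetical in-memory session store; production frameworks ship their own CSRF middleware.

import secrets
import hmac

_sessions = {}  # hypothetical in-memory session store: session_id -> CSRF token

def issue_csrf_token(session_id: str) -> str:
    """Create an unpredictable token for this session and remember it."""
    token = secrets.token_urlsafe(32)
    _sessions[session_id] = token
    return token            # embed this in a hidden <input> in the form

def verify_csrf_token(session_id: str, submitted_token: str) -> bool:
    """Accept a state-changing request only if the token matches."""
    expected = _sessions.get(session_id, "")
    # compare_digest avoids leaking information through timing differences
    return bool(expected) and hmac.compare_digest(expected, submitted_token)

if __name__ == "__main__":
    t = issue_csrf_token("session-123")
    print(verify_csrf_token("session-123", t))          # True: legitimate form post
    print(verify_csrf_token("session-123", "forged"))   # False: forged cross-site request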

Access Control Concepts:-


Access Control Systems are an important part of an overall Security Program that is
designed to deter and reduce both criminal behavior and violations of an organization's
security policies. It is important to understand that Access Control is not an element of
security; it is a concession that security programs make to daily operational necessities.
Perfect security involves perfect Access Control. Access Control Systems are an
automated method of allowing "presumed" friendly people to enter controlled, restricted, and
secured areas of a facility with only minimal vetting at the Access Control Portal. Indeed,
Access Control Portals are doorways through a security perimeter whose entrants
are "assumed" to be friendly, due to their status as an employee, contractor, or softly
vetted visitor.
Access Control in UNIX and Windows NT

We have been discussing access control policies and have been concerned with defining
what accesses subjects can make to objects. We model this behavior with a protection
matrix and commands. In the last lecture, we pointed out that there are at least two ways
to implement the matrix:

 access control lists (ACLs)-- the column for each object stored as a list associated
with that object.
 capabilities -- the row for each subject stored as a list associated with that subject.

We noted that there are two generic ways to implement ACLs. One is to employ a
mechanism for name interpretation, to put a layer between the names a subject utters and
what those names mean. Another is operation interpretation, to put the protection
mechanism into the execution of an operation.
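
The two implementations of the protection matrix can be pictured directly: an ACL stores each column with its object, while a capability list stores each row with its subject. A small Python sketch (the subjects, objects, and rights are made up for illustration):

# One protection matrix, two storage strategies.
matrix = {
    ("alice", "report.txt"): {"read", "write"},
    ("bob",   "report.txt"): {"read"},
    ("alice", "payroll.db"): {"read"},
}

acls = {}  # ACLs: for each object, the (subject -> rights) map -- the matrix columns.
caps = {}  # Capabilities: for each subject, the (object -> rights) map -- the matrix rows.

for (subject, obj), rights in matrix.items():
    acls.setdefault(obj, {})[subject] = rights
    caps.setdefault(subject, {})[obj] = rights

print(acls["report.txt"])   # e.g. {'alice': {'read', 'write'}, 'bob': {'read'}}
print(caps["alice"])        # e.g. {'report.txt': {'read', 'write'}, 'payroll.db': {'read'}}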

UNIX -- Access Control


UNIX uses access control lists. A user logs into UNIX and has a right to start processes
that make requests. A process is "bigger" than a subject: many domains may correspond
to a single process. Each process has an identity (uid). This uid is obtained from the file
that stores user passwords: /etc/passwd.

Every process inherits its uid based on which user starts the process. Every process also
has an effective uid, also a number, which may be different from the uid.

Finally, each UNIX process is a member of some groups. In the original UNIX every
user was a member of one group. Currently, users can be members of more than one
group. Group information can be obtained from /etc/passwd or from the file /etc/group.
System administrators control the latter file.

When a process is created, associated with it is the list of all the groups it is in.

Recall that groups are a way to shorten access control lists. They are useful in other ways
as well.

Here is a high-level overview of the UNIX file system. A directory is a list of pairs:
(filename, i-node number). Running the command 'ls' will produce a list of filenames
from this list of pairs for the current working directory. An i-node contains a lot of
information, including:

 where the file is stored -- necessary since the directory entry is used to access the
file,
 the length of the file -- necessary to avoid reading past the end of the file,
 the last time the file was read,
 the last time the file was written,
 the last time the i-node was read,
 the last time the i-node was written,
 the owner -- a uid, generally the uid of the process that created the file,
 a group -- gid of the process that created the file is a member of,
 12 mode bits to encode protection privileges -- equivalent to encoding a set of access rights (see the short decoding sketch after this list).
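
The 9 low-order mode bits are three rwx triples (owner, group, other), and the remaining three are the setuid, setgid, and sticky bits. A quick way to see them, using Python's standard library (the file path is just an example):

import os
import stat

def describe_mode(path: str) -> str:
    """Decode the 12 permission bits of a file's i-node into rwx form."""
    mode = os.stat(path).st_mode
    special = []
    if mode & stat.S_ISUID: special.append("setuid")
    if mode & stat.S_ISGID: special.append("setgid")
    if mode & stat.S_ISVTX: special.append("sticky")
    # stat.filemode renders the classic 'ls -l' string, e.g. '-rwxr-x---'
    return f"{stat.filemode(mode)} ({' '.join(special) or 'no special bits'})"

if __name__ == "__main__":
    print(describe_mode("/etc/passwd"))   # typically '-rw-r--r-- (no special bits)'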

There is a version of UNIX that uses an additional access control list, and not just mode
bits, to handle access control. In this case, each file has mode bits as we have been
discussing, and also extended permissions. The extended permissions provide exceptions
to the mode bits as follows:

 Specify: for example, "r-- u:harry" means that user harry has read only access.
 Deny: for example "-w- g:acsu" means remove write access from the group acsu.
 Permit: for example "rw- u:bill, g:swe" means give read and write access to bill if bill is also a member of the group swe. The comma denotes conjunction (both conditions must hold).

With extended permissions it's possible to force a user to enter a particular group before
being allowed access to a file.

Windows NT -- Access Control:-


Windows NT supports multiple file systems, but the protection issues we will consider
are only associated with one: NTFS. In NT there is the notion of an item, which can be a
file or a directory. Each item has an owner. The owner is usually the account that created the
item. The owner can change the access control list, allow other accounts to change the access
control list, and allow other accounts to become the owner. Entries in the ACL are individuals
and groups. Note that NT was designed for groups of machines on a network; thus, a
distinction is made between local groups (defined on a particular workstation) and global
groups (domain wide). A single name can therefore mean multiple things.

NTFS is structured so that a file is a set of properties, the contents of the file being just
one of those properties. An ACL is a property of an item. The ACL itself is a list of
entries: (user or group, permissions). NTFS permissions are closer to extended
permissions in UNIX than to the 9 mode bits. The permissions offer a rich set of
possibilities:

 R -- read
 W -- write
 X -- execute
 D -- delete
 P -- modify the ACL
 O -- make current account the new owner ("take ownership")

The owner is allowed to change the ACL. A user with permission P can also change the
ACL. A user with permission O can take ownership. There is also a packaging of
privileges known as permission sets:

 no access
 read -- RX
 change -- RWXD
 full control -- RWXDPO

NT access control is richer than UNIX, but not fundamentally different.

Access Control Models:-


Now that we have reviewed the cornerstone access control concepts, we can discuss the
different access control models: the primary models are Discretionary Access
Control (DAC), Mandatory Access Control (MAC), and Non-Discretionary Access
Control.

1. Discretionary Access Controls (DAC)


Discretionary Access Control (DAC) gives subjects full control of objects they have
created or been given access to, including sharing the objects with other subjects.
Subjects are empowered and control their data. Standard UNIX and Windows operating
systems use DAC for file systems: subjects can grant other subjects access to their files,
change their attributes, alter them, or delete them.

2. Mandatory Access Controls (MAC)


Mandatory Access Control (MAC) is system-enforced access control based on a subject’s
clearance and an object’s labels. Subjects and Objects have clearances and labels,
respectively, such as confidential, secret, and top secret. A subject may access an object
only if the subject’s clearance is equal to or greater than the object’s label. Subjects
cannot share objects with other subjects who lack the proper clearance, or “write down”
objects to a lower classification level (such as from top secret to secret). MAC systems
are usually focused on preserving the confidentiality of data.
Mandatory Access Control is expensive and difficult to implement, especially when
attempting to separate differing confidentiality levels (security domains) within the same
interconnected IT system.
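
The rule in the paragraph above (a subject may access an object only if its clearance dominates the object's label) can be stated in a few lines. The sketch below uses a simple ordered list of levels and only illustrates the "no read up" check, not a full MAC implementation:

# Clearances/labels ordered from lowest to highest.
LEVELS = ["unclassified", "confidential", "secret", "top secret"]

def dominates(clearance: str, label: str) -> bool:
    """True if the subject's clearance is equal to or greater than the object's label."""
    return LEVELS.index(clearance) >= LEVELS.index(label)

def may_read(subject_clearance: str, object_label: str) -> bool:
    # MAC "no read up": reading is allowed only when clearance dominates the label.
    return dominates(subject_clearance, object_label)

print(may_read("secret", "confidential"))   # True
print(may_read("secret", "top secret"))     # False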

3. Non-Discretionary Access Control


Role-Based Access Control (RBAC) defines how information is accessed on a system
based on the role of the subject. A role could be a nurse, a backup administrator, a help
desk technician, etc. Subjects are grouped into roles and each defined role has access
permissions based upon the role, not the individual.
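
Because permissions attach to roles rather than to individuals, adding or removing a user never touches the permission definitions. A minimal sketch with made-up roles and permissions:

# Permissions are granted to roles, not to individual users.
role_permissions = {
    "nurse":        {"read_chart", "update_vitals"},
    "backup_admin": {"run_backup", "restore_backup"},
    "help_desk":    {"reset_password"},
}

user_roles = {
    "priya": {"nurse"},
    "sam":   {"help_desk", "backup_admin"},
}

def is_allowed(user: str, permission: str) -> bool:
    """A user holds a permission iff one of their roles grants it."""
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

print(is_allowed("priya", "update_vitals"))   # True
print(is_allowed("priya", "run_backup"))      # False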

Browser isolation:-
Browser isolation is a cyber security model for web browsing that can be used to
physically separate an internet user’s browsing activity from their local machine, network
and infrastructure. With this model, individual browser sessions are abstracted away from
hardware and direct internet access, trapping harmful activity inside the disposable
environment. Browser isolation may also be referred to as remote browser isolation, web
isolation or remote browsing.

A major weakness in popular security tools is protection from web or browser-based
attacks, malware and ransomware. The development of browser isolation technology was
meant to combat that weakness. By separating browsing activity from endpoint hardware,
the device's attack surface is reduced, sensitive data is protected and malware or other
known and unknown security threats are minimized. This is an evolution of the
cybersecurity concepts of security through physical isolation and air-gapping.

How it works

Browser isolation works by providing users with a disposable, non-persistent
environment for browsing. This can be executed through a variety of methods but
typically involves virtualization, containerization or cloud browsing. When a user closes
the browsing session or the session is timed out, the isolated environment is reset or
discarded. Additionally, any malicious code or harmful traffic is discarded as well,
preventing it from ever reaching the endpoint device or network.

The browser isolation method treats all websites, files and content equally by labeling
them as untrusted or blacklisted unless otherwise specified. Within the isolated
environment, files can be rendered remotely or sanitized without the need to download
them. This is different from other security methods, which do not treat information equally
and instead filter content based on potentially threatening signs.

Advantages and disadvantages of browser isolation:-

The primary benefit of browser isolation is reducing the spread of malware through web
browsers. This has proven to be more effective than other anti-virus methods
since it does not need to be programmed to find specific threats or risks. However, the
installation of browser isolation technology can be complex or expensive. This usually
means an organization has to hire IT professionals with the right expertise or contracted
service providers to oversee and troubleshoot isolation efforts. Additionally, browser
isolation may cause users to experience slight delay or lag times when browsing.

The Web Security Landscape

A computer is secure if you can depend on it and its software to behave as you expect.

Using this definition, web security is a set of procedures, practices, and technologies for
protecting web servers, web users, and their surrounding organizations. Security protects
you against unexpected behavior.

Why should web security require special attention apart from the general subject of
computer and Internet security? Because the Web is changing many of the assumptions
that people have historically made about computer security and publishing:
o The Internet is a two-way network. As the Internet makes it possible for web
servers to publish information to millions of users, it also makes it possible for computer
hackers, crackers, criminals, vandals, and other “bad guys” to break into the very
computers on which the web servers are running. Those risks don’t exist in most other
publishing environments, such as newspapers, magazines, or even “electronic” publishing
systems involving teletext, voice-response, and fax-back.
o The World Wide Web is increasingly being used by corporations and governments
to distribute important information and conduct business transactions. Reputations can be
damaged and money can be lost if web servers are subverted.
o Although the Web is easy to use, web servers and browsers are exceedingly
complicated pieces of software, with many potential security flaws. Many times in the
past, new features have been added without proper attention being paid to their security
impact. Thus, properly installed software may still pose security threats.
o Once subverted, web browsers and servers can be used by attackers as a launching
point for conducting further attacks against users and organizations.
o It is considerably more expensive and more time-consuming to recover from a
security incident than to take preventative measures ahead of time.

Why Worry about Web Security?

The World Wide Web is the fastest growing part of the Internet. Increasingly, it is also
the part of the Internet that is most vulnerable to attack.

Web servers make an attractive target for attackers for many reasons:

Publicity

Web servers are an organization’s public face to the Internet and the electronic
world. A successful attack on a web server is a public event that may be seen by
hundreds of thousands of people within a matter of hours. Attacks can be mounted
for ideological or financial reasons; alternatively, they can simply be random acts
of vandalism.
Commerce (business)

Many web servers are involved with commerce and money. Indeed, the
cryptographic protocols built into Netscape Navigator and other browsers were
originally placed there to allow users to send credit card numbers over the Internet
without fear of compromise. Web servers have thus become a repository for
sensitive financial information, making them an attractive target for attackers. Of
course, the commercial services on these servers also make them targets of
interest.

Proprietary information

Organizations are using web technology as an easy way to distribute information
both internally, to their own members, and externally, to partners around the
world. This proprietary information is a target for competitors and enemies.

Network access

Because they are used by people both inside and outside an organization, web
servers effectively bridge an organization’s internal and external networks. Their
position of privileged network connectivity makes web servers an ideal target for
attack, as a compromised web server may be used to further attack computers
within an organization.

Unfortunately, the power of web technology makes web servers and browsers especially
vulnerable to attack as well:

Server extensibility

By their very nature, web servers are designed to be extensible. This extensibility
makes it possible to connect web servers with databases, legacy systems, and other
programs running on an organization’s network. If not properly implemented,
modules that are added to a web server can compromise the security of the entire
system.
Browser extensibility

In the same manner that servers can be extended, so can web clients. Today,
technologies such as ActiveX, Java, JavaScript, VBScript, and helper applications
can enrich the web experience with many new features that are not possible with
the HTML language alone. Unfortunately, these technologies can also be
subverted and employed against the browser’s user—often without the user’s
knowledge.

Disruption of service

Because web technology is based on the TCP/IP family of protocols, it is subject
to disruption of service: either accidentally or intentionally through denial-of-service
attacks. People who use this technology must be aware of its failings and
prepare for significant service disruptions.

Complicated support

Web browsers require external services such as DNS (Domain Name Service) and
IP (Internet Protocol) routing to function properly. The robustness and
dependability of those services may not be known and can be vulnerable to bugs,
accidents, and subversion. Subverting a lower-level service can result in problems
for the browsers as well.

The solution to these problems is not to forsake web technology but to embrace both the
limitations and the appropriate security measures. However, it is also important to
understand the limits of any system and to plan accordingly for failure and accident.

Computer networks

A computer network is a collection of computers that are physically and logically
connected together to exchange information. A Local Area Network, or LAN, is a
network in which all of the computers are physically connected to short (up to a few
hundred meters) segments of Ethernet, or token ring, or are connected to the same
network hub. A Wide Area Network, or WAN, is a network in which the computers are
separated by considerable distance, usually miles, sometimes thousands of miles.
An internetwork is a network of computer networks. The largest internetwork in the
world today is the Internet, which has existed in some form since the early 1970s and is
based on the IP (Internet Protocol) suite.

Information that travels over the Internet is divided into compact pieces called packets.
The way that data is divided up and reassembled is specified by the Internet Protocol.
User information can be sent in streams using the Transmission Control Protocol
(TCP) or as a series of packets using the User Datagram Protocol (UDP). Other
protocols are used for sending control information.

Computers can be connected to one or more networks. Computers that are connected to at
least one network are called hosts. A computer that is connected to more than one
network is called a multi-homed host. If the computer can automatically transmit packets
from one network to another, it is called a gateway. A gateway that examines packets
and determines which network to send them to next is functioning as a router. A
computer can also act as a repeater, by forwarding every packet appearing on one
network to another, or as a bridge, in which case the only packets forwarded are those that
need to be. Firewalls are special kinds of computers that are connected to two networks
but selectively forward information. There are fundamentally two kinds of firewalls.
A packet-filtering firewall decides packet-by-packet whether a packet should be copied
from one network to another. Firewalls can also be built from application-level proxies,
which operate at a higher level. Because they can exercise precise control over what
information is passed between two networks, firewalls are thought to improve computer
security.

Most Internet services are based on the client/server model. Under this model, one
program requests service from another program. Both programs can be running on the
same computer or, as is more often the case, on different computers. The program
making the request is called the client; the program that responds to the request is called
the server. Often, the words “client” and “server” are used to describe the computers as
well, although this terminology is technically incorrect. Most client software tends to be
run on personal computers, such as machines running the Windows 95 or MacOS
operating system. Most server software tends to run on computers running the UNIX or
Windows NT operating system. But these operating system distinctions are not too useful
because both network clients and servers are available for all kinds of operating systems.

The World Wide Web was invented in 1990 by Tim Berners-Lee while at the Swiss-based
European Laboratory for Particle Physics (CERN). A few years later, Jim Clark's company
created a commercial web browser (code-named Mozilla). Soon Clark's company was renamed
Netscape Communications and the web browser was renamed Netscape Navigator.

Information is displayed on the World Wide Web as a series of pages . Web pages are
written in the HyperText Markup Language (HTML). The pages themselves are usually
stored on dedicated computers called web servers. The term web server is used
interchangeably to describe the computer on which the web pages reside and the program
on that computer that receives network requests and transmits HTML files in response.
Web pages are requested and received using messages formatted according to
the HyperText Transfer Protocol (HTTP).

Besides transmitting a file, a web server can run a program in response to an incoming
web request. Originally, these programs were invoked using the Common Gateway
Interface (CGI). Although CGI makes it simple to have a web server perform a
complicated operation, such as performing a database lookup, it is not efficient because it
requires that a separate program be started for each incoming web request. A more
efficient technique is to have the web server itself perform the external operation. A
variety of Application Programmer Interfaces (APIs), such as the Netscape API (NSAPI),
are now available to support this function.

A virus is a malicious computer program that makes copies of itself and attaches
those copies to other programs. A worm is similar to a virus, except that it sends copies
of itself to other computers, where they run as standalone programs. A Trojan horse is a
program that appears to have one innocuous function, but actually has a hidden malicious
function. For instance, a program that claims to be an address book, but actually
reformats your hard drive when you run it, is a kind of Trojan horse.
What’s a “Secure Web Server” Anyway?

In recent years, the phrase “secure web server” has come to mean different things to
different people:

o For the software vendors that sell them, a secure web server is a program that
implements certain cryptographic protocols, so that information transferred between a
web server and a web browser cannot be eavesdropped upon.
o For users, a secure web server is one that will safeguard any personal information
that is received or collected. It’s one that supports their privacy and won’t subvert their
browser to download viruses or other rogue programs onto their computer.
o For a company that runs one, a secure web server is resistant to a determined
attack over the Internet or from corporate insiders.

A secure web server is all of these things, and more. It’s a server that is reliable. It’s a
server that is mirrored or backed up, so in the event of a hardware or software failure it
can be reconstituted quickly. It’s a server that is expandable, so that it can adequately
service large amounts of traffic.

HTTP is a protocol which allows the fetching of resources, such as HTML
documents. It is the foundation of any data exchange on the Web and it is a client-server
protocol, which means requests are initiated by the recipient, usually the Web browser. A
complete document is reconstructed from the different sub-documents fetched, for
instance text, layout description, images, videos, scripts, and more.
Clients and servers communicate by exchanging individual messages (as opposed to a
stream of data). The messages sent by the client, usually a Web browser, are
called requests and the messages sent by the server as an answer are called responses.
Designed in the early 1990s, HTTP is an extensible protocol which has evolved over time.
It is an application layer protocol that is sent over TCP, or over a TLS-encrypted TCP
connection, though any reliable transport protocol could theoretically be used. Due to its
extensibility, it is used to not only fetch hypertext documents, but also images and videos
or to post content to servers, like with HTML form results. HTTP can also be used to fetch
parts of documents to update Web pages on demand.
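
As a concrete illustration of the request/response exchange described above, here is a tiny client using Python's standard library (the host is just an example):

from http.client import HTTPSConnection

# Open a TLS-encrypted TCP connection and send one HTTP request.
conn = HTTPSConnection("example.com", timeout=10)
conn.request("GET", "/", headers={"User-Agent": "demo-client/0.1"})

resp = conn.getresponse()            # the server's answer to our request
print(resp.status, resp.reason)      # e.g. "200 OK"
print(resp.getheader("Content-Type"))
body = resp.read()                   # the HTML document itself
print(len(body), "bytes of HTML")
conn.close()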

Components of HTTP-based systems

HTTP is a client-server protocol: requests are sent by one entity, the user-agent (or a
proxy on behalf of it). Most of the time the user-agent is a Web browser, but it can be
anything, for example a robot that crawls the Web to populate and maintain a search
engine index.

Each individual request is sent to a server, which handles it and provides an answer, called
the response. Between the client and the server there are numerous entities, collectively
called proxies, which perform different operations and act as gateways or caches, for
example.

In reality, there are more computers between a browser and the server handling the
request: there are routers, modems, and more. Thanks to the layered design of the Web,
these are hidden in the network and transport layers. HTTP is on top, at the application
layer. Although important to diagnose network problems, the underlying layers are mostly
irrelevant to the description of HTTP.

A Web page is a hypertext document. This means some parts of displayed text are links
which can be activated (usually by a click of the mouse) to fetch a new Web page,
allowing the user to direct their user-agent and navigate through the Web. The browser
translates these directions into HTTP requests, and further interprets the HTTP responses to
present the user with a clear response.

Browser isolation is a cybersecurity model which aims to physically isolate
an internet user's browsing activity (and the associated cyber risks) away from their local
networks and infrastructure. Browser isolation technologies approach this model in
different ways, but they all seek to achieve the same goal, effective isolation of the web
browser and a user's browsing activity as a method of securing web browsers from
browser-based security exploits, as well as web-borne threats such as ransomware and
other malware.[1] When a browser isolation technology is delivered to its customers as
a cloud hosted service, this is known as remote browser isolation (RBI), a model which
enables organizations to deploy a browser isolation solution to their users without
managing the associated server infrastructure.

Security Interface
Provides user interface elements for security features such as authorization, access to
digital certificates, and access to items in keychains.
Frame busting
Web framing attacks such as clickjacking use iframes to hijack a user's web session. The
most common defense, called frame busting, prevents a site from functioning when
loaded inside a frame. We study frame busting practices for the Alexa Top-500 sites and
show that all can be circumvented in one way or another. Some circumventions are
browser-specific while others work across browsers. We conclude with recommendations
for proper frame busting.

A framekiller (or framebuster or framebreaker) is a
technique used by web applications to prevent their web pages from being displayed
within a frame. A frame is a subdivision of a Web browser window and can act like a
smaller window. A framekiller is usually deployed to prevent a frame from an external Web site
being loaded from within a frameset without permission, often as part of a clickjacking attack.
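
Client-side frame-busting scripts can be circumvented, so the usual complement is to have the server declare that its pages must not be framed. Here is a minimal sketch using Python's built-in HTTP server (only illustrative; real deployments set these headers in their web server or framework configuration):

from http.server import BaseHTTPRequestHandler, HTTPServer

class NoFramingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>This page refuses to be framed.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Tell the browser never to render this page inside a frame/iframe.
        self.send_header("X-Frame-Options", "DENY")
        # Modern equivalent via Content Security Policy.
        self.send_header("Content-Security-Policy", "frame-ancestors 'none'")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), NoFramingHandler).serve_forever()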


What are Cookies?

Cookies are small text files that a website places in your browser to track usage of its site;
by themselves they do not tell the site who you are and are generally harmless.

Can I delete or control my Cookies?

Cookies that are already on your computer or device can be deleted or controlled through
your browser's settings, or by locating the file or directory in which the browser stores
cookies and removing it.
Cookies

Cookies are text files. They are stored on a user’s computer by a web browser, at the
request of the web server.

A cookie is limited to a small amount of data and can only be read by the website that
created it.

To avoid the size limitations of cookies, some websites will store a unique identification
code in a cookie, and the remainder of the data in their own databases.

Cookies are generally used to:

 Store and maintain user preferences on a website
 Track user behaviour (analytics)
 Store items in shopping baskets
 Help advertisers show relevant website adverts

Cookies are not programs. They cannot perform any operations; they are
not viruses or malware.

Cookies can be disabled in your browser settings; however, this could make some
websites unusable (e.g. e-commerce sites).
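
On the wire, a cookie is just a Set-Cookie response header that the browser sends back on later requests. A small illustration of building such a header with Python's standard library (the name, value, and attributes are examples; real session handling is left to a web framework):

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # small identifier; bulk data stays server-side
cookie["session_id"]["path"] = "/"
cookie["session_id"]["max-age"] = 3600   # expire after one hour
cookie["session_id"]["secure"] = True    # only send over HTTPS
cookie["session_id"]["httponly"] = True  # not readable by page scripts (limits XSS theft)
cookie["session_id"]["samesite"] = "Lax" # not sent on most cross-site requests (limits CSRF)

# This is the header value the server adds to its HTTP response, e.g.
# session_id=abc123; HttpOnly; Max-Age=3600; Path=/; SameSite=Lax; Secure
print(cookie["session_id"].OutputString())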

Major web server threats:-


Web servers store web pages and provide them to clients upon request, via HTTP, the
basic protocol for giving out information on the World Wide Web. The actual role of a
web server depends on the way it is implemented; however, generic web servers store
HTML or server-side scripting files such as PHP, ASP, etc. that generate HTML files on
the fly. Web servers may also interact with databases, in case they are implemented in a
way that fetches information from databases and provides it in a specific HTML format.
The Vulnerabilities Of Web Servers That Make Them Prone To Attacks

 Vulnerabilities in the implementation of the TCP/IP protocol suite are the most exploited of them all.
 Exploitation of authentication loopholes and session identifiers.
 Manual modification of URL parameters.
 Issues in verification of input data.

Types of Web Server Attacks and their Preventions

>URL INTERPRETATION ATTACK

This attack is also called URL poisoning, as the attacker manipulates the URL by
changing its semantics but keeping the syntax intact. The parameters of the URL are
adjusted so that information beyond what is intended can be retrieved from the web
server. This type of attack is quite common with CGI-based websites.

>SQL INJECTION ATTACK

As the name suggests, a SQL injection attack aims to modify a database or extract
information from it. An SQL query built with parameters taken from the URL is fed to the
database and can alter the data. Stored procedures in the database can also be executed
through SQL injection, and the database can be made to do things it is intended to do
only when requested by authorized personnel.
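
The standard defense is to keep user input out of the query text entirely by using parameterized (prepared) statements. A small sketch with Python's built-in sqlite3 module (table and column names are made up for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(name_from_url: str):
    # Vulnerable pattern (never do this): the input becomes part of the SQL text.
    #   conn.execute(f"SELECT email FROM users WHERE name = '{name_from_url}'")
    # Safe pattern: the driver passes the value separately from the query,
    # so "' OR '1'='1" is treated as a literal name, not as SQL.
    cur = conn.execute("SELECT email FROM users WHERE name = ?", (name_from_url,))
    return cur.fetchall()

print(find_user("alice"))          # [('alice@example.com',)]
print(find_user("' OR '1'='1"))    # [] -- the injection attempt finds nothing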

>INPUT VALIDATION ATTACK

An input validation attack is an attack on the web server in which the server executes code
injected by an attacker into the web server or the database server. There are many input
properties that need to be validated before execution, including data type, data ranges, and
others. By executing code with inputs that have not been validated, information can be
retrieved or modified by the attacker.
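
A simple allow-list check, performed before the input ever reaches a query or a command, illustrates the idea (the field names and rules are examples only):

import re

ZIP_RE = re.compile(r"^\d{5}$")          # exactly five digits

def validate_order_form(zip_code: str, quantity: str) -> dict:
    """Reject anything that is not of the expected type, format, and range."""
    if not ZIP_RE.fullmatch(zip_code):
        raise ValueError("zip_code must be exactly five digits")
    if not quantity.isdigit():
        raise ValueError("quantity must be a whole number")
    qty = int(quantity)
    if not 1 <= qty <= 100:
        raise ValueError("quantity out of range (1-100)")
    return {"zip_code": zip_code, "quantity": qty}

print(validate_order_form("14850", "3"))            # accepted
# validate_order_form("14850; DROP TABLE", "3")     # raises ValueError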

>BUFFER OVERFLOW ATTACKS

A buffer overflow attack implies the deliberate overflowing of the buffer memory that is
reserved for users' input. When an application awaits a user's input, it allocates a
stack with a memory location where the data entered by the user is placed. The attacker floods
this space by writing arbitrary data so that the memory stack overflows and users are denied
service. This is one of the ways to perform a denial of service attack, which is dealt with in
more detail further on.

>IMPERSONATION ATTACKS

An impersonation attack is also called IP spoofing: the attacker accesses the web server
using a forged source address that impersonates an IP that has access to the web server.
There are special programs that attackers make use of to create an IP packet that appears
to originate from the intranet, and hence gain entry to the section of the web server that is
intended to be accessed only by authorized personnel.

>PASSWORD-BASED ATTACKS

The authentication system of a web server is often based on the password that identifies a
valid user and grants access to the web server. If the hacker can, by any means, get your
username and password, he or she can access the information that only you are supposed
to access. Older applications often lack a strong authentication system, and this makes
it easy for eavesdroppers to get through the authentication process.
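
One basic mitigation on the server side is to never store or compare passwords in the clear: store a salted, deliberately slow hash and compare in constant time. A minimal sketch with Python's standard library (parameters are illustrative; real systems use a vetted password-hashing library and add rate limiting):

import hashlib
import hmac
import os

def hash_password(password: str):
    """Return (salt, digest) using a deliberately slow key-derivation function."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess123", salt, digest))                      # False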
>DENIAL OF SERVICE ATTACKS

A denial of service (DoS) attack is an attack in which the server is prevented from serving
users with a response to their requests. This attack can be performed by several means, and
buffer overflow is one of them. It is an effective and, naturally, one of the most popular ways
of attacking a web server. After gaining access to the network, the attackers divert the
attention of the security experts so that they do not become aware of the attack immediately,
and the attackers can then exploit the web server in other ways.

>BRUTE FORCE

Brute force, as the name suggests, implies cracking the username and password combination
by trying all possible iterations. This is a basic form of web server attack and is
employed when the attackers have a clue that weak passwords have been used in the
authentication. The chance of brute force working is greatest when no other security
measures are in place besides password authentication.

>SOURCE CODE DISCLOSURE

Through Source code disclosure attack, the attackers are able to retrieve the application
files without using any parsing. The source code of the application is recovered and then
it is analyzed to find loopholes that can be used to attack the web servers. It is often
caused when the application is designed poorly or there are errors in the configuration.

>SESSION HIJACKING

HTTP is a stateless protocol, while web applications have state. When the tracking
of this state is based on a poor mechanism, session hijacking becomes easy for
attackers. It is also called cookie hijacking because a web server identifies the session
with a user based on a cookie. The cookie stored on the user's computer is stolen by
the hijacker, either by intercepting it through access to the network or from a
previously saved cookie. Sniffing programs are used to perform this attack in an
automated manner.

Cross-Site Request Forgery (CSRF):-

Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute
unwanted actions on a web application in which they're currently authenticated. CSRF
attacks specifically target state-changing requests, not theft of data, since the attacker has
no way to see the response to the forged request. With a little help of social engineering
(such as sending a link via email or chat), an attacker may trick the users of a web
application into executing actions of the attacker's choosing. If the victim is a normal
user, a successful CSRF attack can force the user to perform state changing requests like
transferring funds, changing their email address, and so forth. If the victim is an
administrative account, CSRF can compromise the entire web application.

Cross-site scripting:-


Cross-site scripting (XSS) is a type of computer security vulnerability typically found
in web applications. XSS enables attackers to inject client-side scripts into web
pages viewed by other users. A cross-site scripting vulnerability may be used by attackers
to bypass access controls such as the same-origin policy. Cross-site scripting carried out
on websites accounted for roughly 84% of all security vulnerabilities documented
by Symantec up until 2007.[1] In 2017, XSS was still considered a major threat
vector.[2] XSS effects vary in range from petty nuisance to significant security risk,
depending on the sensitivity of the data handled by the vulnerable site and the nature of
any security mitigation implemented by the site's owner network.

Related vulnerabilities
In a Universal Cross-Site Scripting (UXSS, or Universal XSS) attack, vulnerabilities in
the browser itself or in the browser plugins are exploited (rather than vulnerabilities in
other websites, as is the case with XSS attacks); such attacks are commonly used
by Anonymous, along with DDoS, to compromise control of a network.
Several classes of vulnerabilities or attack techniques are related to XSS: cross-zone
scripting exploits "zone" concepts in certain browsers and usually executes code with a
greater privilege. HTTP header injection can be used to create cross-site scripting
conditions due to escaping problems on HTTP protocol level (in addition to enabling
attacks such as HTTP response splitting).
Cross-site request forgery (CSRF/XSRF) is almost the opposite of XSS, in that rather
than exploiting the user's trust in a site, the attacker (and his malicious page) exploits the
site's trust in the client software, submitting requests that the site believes represent
conscious and intentional actions of authenticated users.[57] XSS vulnerabilities (even in
other applications running on the same domain) allow attackers to bypass CSRF
prevention efforts.

SECURE SOFTWARE DEVELOPMENT PRACTICES

The Importance of Secure Development

With the vast amount of threats that constantly pressure companies and governments, it is
important to ensure that the software applications these organizations utilize are
completely secure. Secure development is a practice to ensure that the code and processes
that go into developing applications are as secure as possible. Secure development entails
the utilization of several processes, including the implementation of a Security
Development Lifecycle (SDL) and secure coding itself.

Secure Development Lifecycle

Integrating security practices into the software development lifecycle and verifying the
security of internally developed applications before they are deployed can help mitigate
risk from internal and external sources. Using an application security testing platform
(such as Veracode) to test the security of applications helps organizations implement a
secure development program in a simple and cost-effective way.
The Security Development Lifecycle (SDL) is a software development security assurance
process consisting of security practices grouped by six phases: training, requirements &
design, construction, testing, release, and response.
