CSS KNC301 Notes Unit 2 and Unit 3
UNIT-2
Anubhav Sharma
Assistant Professor (CSE/IT)
Confidentiality
Confidentiality refers to protecting information from being accessed by unauthorized
parties. In other words, only the people who are authorized to do so can gain access to
sensitive data.
So, in summary, a breach of confidentiality means that someone gains access to
information who shouldn't have access to it.
A good way to get programs to behave in a manner consistent with a given security
policy is by "brainwashing." That is, modify the programs so that they behave only in
safe ways. This is embodied by a recent approach to security known as software-based
fault isolation (SFI).
So far, the environment has been responsible for policy enforcement, where the
environment is either the OS/kernel or the hardware. Hardware methods include
addressing mechanisms (e.g. virtual memory); OS methods include having two modes
(where the supervisor mode has access to everything). The new approach we discuss
today is to construct a piece of software that transforms a given program p into a program
p', where p' is guaranteed to satisfy a security policy of interest.
When would the SFI approach be useful?
Programs can read and write memory only within their own LFD (logical fault domain).
This ensures secrecy and integrity for that memory.
Jumps (jmps, calls and returns) are to be kept within their own LFD.
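The sandboxing rule above can be sketched in a few lines of Python. This is a deliberately simplified model, not real machine-code rewriting: real SFI inserts masking instructions into the program itself, while here a hypothetical `sandbox` helper just shows the address arithmetic that forces every load, store, or jump target back into its fault domain's segment.

```python
# Illustrative sketch of SFI-style address sandboxing (simplified model).
# Each fault domain is a segment identified by the upper bits of an address;
# before any load, store, or jump, the address is forced into the domain's
# segment by masking its low bits and overwriting its segment bits.

SEGMENT_BITS = 8                        # low 8 bits address within a segment
SEGMENT_MASK = (1 << SEGMENT_BITS) - 1

def sandbox(addr, segment_id):
    """Force addr into the given segment by replacing its upper bits."""
    return (segment_id << SEGMENT_BITS) | (addr & SEGMENT_MASK)

# A store aimed outside the domain is silently redirected back inside it.
assert sandbox(0x0305, 0x01) == 0x0105
# An address already inside the domain is left unchanged.
assert sandbox(0x0142, 0x01) == 0x0142
```

Note the design choice: masking never faults, it merely guarantees the access cannot leave the domain, which is why the transformed program p' satisfies the memory-safety policy by construction.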
There are two systems-level issues that should be mentioned. Multiple LFDs in a single
address space might correspond to multiple programs.
There are some problems with using SFI. Clearly, we would like the SFI transformation
process to preserve some properties of the original program. The problem is in defining
precisely what those properties are, since the transformed program obviously doesn't
preserve all properties. For instance, the transformed program will not have the same
number of instructions as the original. Library routines can also be used to circumvent
SFI. In particular, we must prevent programs from invoking library routines that have not
been SFI'd. Finally, denial of service attacks remain possible. For example, one LFD
could acquire a shared lock and then never relinquish it.
Rootkit:-
A Rootkit is a collection of computer software, typically malicious, designed to enable
access to a computer or an area of its software that is not otherwise allowed (for example,
to an unauthorized user) and often masks its existence or the existence of other
software. The term rootkit is a concatenation of "root" (the traditional name of the
privileged account on Unix-like operating systems) and the word "kit" (which refers to
the software components that implement the tool). The term "rootkit" has negative
connotations through its association with malware.[1]
Rootkit installation can be automated, or an attacker can install it after having obtained
root or Administrator access. Obtaining this access is a result of direct attack on a system,
i.e. exploiting a known vulnerability (such as privilege escalation) or
a password (obtained by cracking or social engineering tactics like "phishing"). Once
installed, it becomes possible to hide the intrusion as well as to maintain privileged
access. The key is the root or administrator access. Full control over a system means that
existing software can be modified, including software that might otherwise be used to
detect or circumvent it.
Rootkit detection is difficult because a rootkit may be able to subvert the software that is
intended to find it. Detection methods include using an alternative and trusted operating
system, behavioral-based methods, signature scanning, difference scanning, and memory
dump analysis. Removal can be complicated or practically impossible, especially in cases
where the rootkit resides in the kernel; reinstallation of the operating system may be the
only available solution to the problem.[2] When dealing with firmware rootkits, removal
may require hardware replacement, or specialized equipment.
VMM Detection
Can an OS detect that it is running on top of a VMM?
Applications: a virus detector can detect a VMBR (virtual-machine-based rootkit);
conversely, a normal (non-VMBR) virus can detect a VMM and refuse to run, to
avoid reverse engineering.
A cross-site request forgery (CSRF) attack forces the victim's browser to make a request
without the victim's knowledge or agency. Browsers make requests all the time without
the knowledge or approval of the user: images, frames, and script tags. The CSRF
focuses on finding a link, that when requested performs some action beneficial to the
attacker. The Web browser's same origin policy (SOP) prohibits the interaction between
content pulled from different domains, but it doesn't block a Web page from pulling that
content together. The attacker only needs to forge a request. The content of the site's
response, which is protected by the same origin policy (SOP), is immaterial to the success
of the attack. A CSRF attack would use an iframe or img element to force the user's
browser to accomplish the same query, but to do so without the user's intervention or
knowledge. The page might be hosted on a server controlled by the attacker. One of the
most effective CSRF countermeasures assigns a temporary pseudo-random token to the
sensitive forms or links that may be submitted by an authenticated user. The value of the
token is known only to the Web application and the user's Web browser.
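The token countermeasure described above can be sketched with the standard library. The helper names (`issue_token`, `verify_token`) are hypothetical; the point is that the token is unpredictable, bound to the session, and compared in constant time, so a forged cross-site request cannot supply it.

```python
# Sketch of the anti-CSRF token scheme: the server binds a random token to
# the session and embeds it in each sensitive form; a forged request made
# by another site cannot know or guess the token's value.

import hmac
import secrets

def issue_token(session):
    """Generate an unpredictable token and remember it for this session."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def verify_token(session, submitted):
    """Constant-time comparison of the submitted token with the stored one."""
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted)

session = {}
token = issue_token(session)
assert verify_token(session, token)          # legitimate form submission
assert not verify_token(session, "forged")   # attacker cannot guess the token
```

`hmac.compare_digest` is used instead of `==` so that the comparison time leaks nothing about how many leading characters of a guess were correct.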
We have been discussing access control policies and have been concerned with defining
what accesses subjects can make to objects. We model this behavior with a protection
matrix and commands. In the last lecture, we pointed out that there are at least two ways
to implement the matrix:
access control lists (ACLs)-- the column for each object stored as a list associated
with that object.
capabilities -- the row for each subject stored as a list associated with that subject.
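The two implementations are just the column view and the row view of the same protection matrix, which a toy Python model makes concrete (the subjects, objects, and rights below are made up for illustration):

```python
# One protection matrix, stored two ways. An ACL keeps a column per object
# (object -> who may do what); a capability list keeps a row per subject
# (subject -> what it may do to each object).

matrix = {
    ("alice", "file1"): {"r", "w"},
    ("bob",   "file1"): {"r"},
    ("alice", "file2"): {"x"},
}

def to_acls(m):
    """Column view: object -> {subject: rights}."""
    acls = {}
    for (subj, obj), rights in m.items():
        acls.setdefault(obj, {})[subj] = rights
    return acls

def to_capabilities(m):
    """Row view: subject -> {object: rights}."""
    caps = {}
    for (subj, obj), rights in m.items():
        caps.setdefault(subj, {})[obj] = rights
    return caps

assert to_acls(matrix)["file1"] == {"alice": {"r", "w"}, "bob": {"r"}}
assert to_capabilities(matrix)["alice"] == {"file1": {"r", "w"}, "file2": {"x"}}
```

The same information is present in both views; what differs is which question is cheap to answer ("who can access this object?" for ACLs, "what can this subject access?" for capabilities).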
We noted that there are two generic ways to implement ACLs. One is to employ a
mechanism for name interpretation, to put a layer between the names a subject utters and
what those names mean. Another is operation interpretation, to put the protection
mechanism into the execution of an operation.
Every process inherits its uid based on which user starts the process. Every process also
has an effective uid, also a number, which may be different from the uid.
Finally, each UNIX process is a member of some groups. In the original UNIX every
user was a member of one group. Currently, users can be members of more than one
group. Group information can be obtained from /etc/passwd or from the file /etc/group.
System administrators control the latter file.
When a process is created, associated with it is the list of all the groups it is in.
Recall that groups are a way to shorten access control lists. They are useful in other ways
as well.
Here is a high-level overview of the UNIX file system. A directory is a list of pairs:
(filename, i-node number). Running the command 'ls' will produce a list of filenames
from this list of pairs for the current working directory. An i-node contains a lot of
information, including:
where the file is stored -- necessary since the directory entry is used to access the
file,
the length of the file -- necessary to avoid reading past the end of the file,
the last time the file was read,
the last time the file was written,
the last time the i-node was read,
the last time the i-node was written,
the owner -- a uid, generally the uid of the process that created the file,
a group -- gid of the process that created the file is a member of,
12 mode bits to encode protection privileges -- equivalent to encoding a set of
access rights.
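Several of the i-node fields above are visible from a program through the standard `os.stat` call (UNIX assumed; the temporary file below is created just for illustration):

```python
# Reading i-node fields via os.stat on a UNIX system. The 12 protection
# bits live in the low bits of st_mode; stat also exposes the owner uid,
# group gid, and the access/modification timestamps discussed above.

import os
import stat
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o640)                  # rw- r-- --- : owner rw, group r
st = os.stat(path)

mode_bits = stat.S_IMODE(st.st_mode)   # just the permission bits
assert mode_bits == 0o640
assert st.st_uid == os.getuid()        # owner is the creating process's uid

os.unlink(path)
```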
Some UNIX variants (AIX, for example) use an additional access control list, and not just
mode bits, to handle access control. In this case, each file has mode bits as we have been
discussing and also extended permissions. The extended permissions provide exceptions
to the mode bits as follows:
Specify: for example, "r-- u:harry" means that user harry has read only access.
Deny: for example "-w- g:acsu" means remove write access from the group acsu.
Permit: for example "rw- u:bill, g:swe" means give read and write access to bill if
bill is also a member of the group swe. The comma is conjunction.
With extended permissions it's possible to force a user to enter a particular group before
being allowed access to a file.
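A small evaluator makes the specify/deny/permit semantics concrete. The exact evaluation order is an assumption here (specify sets rights outright, permit adds, deny removes, and all conditions on an entry must hold, since the comma is conjunction):

```python
# Toy evaluator for extended-permission entries (assumed semantics).
# Each entry is (kind, rights, conditions); a condition ("u", name)
# matches the user, ("g", name) matches one of the user's groups, and
# every condition on an entry must match for the entry to apply.

def check(user, groups, rights_wanted, extended):
    def matches(conds):
        return all(
            (kind == "u" and name == user) or
            (kind == "g" and name in groups)
            for kind, name in conds
        )

    allowed = set()
    for kind, rights, conds in extended:
        if not matches(conds):
            continue
        if kind == "specify":
            allowed = set(rights)       # rights become exactly these
        elif kind == "permit":
            allowed |= set(rights)      # add these rights
        elif kind == "deny":
            allowed -= set(rights)      # remove these rights
    return set(rights_wanted) <= allowed

ext = [
    ("permit", "rw", [("u", "bill"), ("g", "swe")]),  # "rw- u:bill, g:swe"
    ("deny",   "w",  [("g", "acsu")]),                # "-w- g:acsu"
]
assert check("bill", {"swe"}, "rw", ext)             # bill in swe: read+write
assert not check("bill", {"swe", "acsu"}, "w", ext)  # acsu membership denies w
```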
NTFS is structured so that a file is a set of properties, the contents of the file being just
one of those properties. An ACL is a property of an item. The ACL itself is a list of
entries: (user or group, permissions). NTFS permissions are closer to extended
permissions in UNIX than to the 9 rwx mode bits. The permissions offer a rich set of
possibilities:
R -- read
W -- write
X -- execute
D -- delete
P -- modify the ACL
O -- make current account the new owner ("take ownership")
The owner is allowed to change the ACL. A user with permission P can also change the
ACL. A user with permission O can take ownership. There is also a packaging of
privileges known as permissions sets:
no access
read -- RX
change -- RWXD
full control -- RWXDPO
Browser isolation:-
Browser isolation is a cyber security model for web browsing that can be used to
physically separate an internet user’s browsing activity from their local machine, network
and infrastructure. With this model, individual browser sessions are abstracted away from
hardware and direct internet access, trapping harmful activity inside the disposable
environment. Browser isolation may also be referred to as remote browser isolation, web
isolation or remote browsing.
How it works
The browser isolation method treats all websites, files and content equally by labeling
them as untrusted or blacklisted unless otherwise specified. Within the isolated
environment, files can be rendered remotely or sanitized without the need to download
them. This is different from other security methods that do not treat information equally
and filter content based on potential threatening signs.
The primary benefit of browser isolation is that it reduces the spread of malware through
web browsers. It can be more effective than signature-based anti-virus methods, since it
does not need to be programmed to find specific threats or risks. However, the
installation of browser isolation technology can be complex or expensive. This usually
means an organization has to hire IT professionals with the right expertise or contracted
service providers to oversee and troubleshoot isolation efforts. Additionally, browser
isolation may cause users to experience slight delay or lag times when browsing.
A computer is secure if you can depend on it and its software to behave as you expect.
Using this definition, web security is a set of procedures, practices, and technologies for
protecting web servers, web users, and their surrounding organizations. Security protects
you against unexpected behavior.
Why should web security require special attention apart from the general subject of
computer and Internet security? Because the Web is changing many of the assumptions
that people have historically made about computer security and publishing:
o The Internet is a two-way network. As the Internet makes it possible for web
servers to publish information to millions of users, it also makes it possible for computer
hackers, crackers, criminals, vandals, and other “bad guys” to break into the very
computers on which the web servers are running. Those risks don’t exist in most other
publishing environments, such as newspapers, magazines, or even “electronic” publishing
systems involving teletext, voice-response, and fax-back.
o The World Wide Web is increasingly being used by corporations and governments
to distribute important information and conduct business transactions. Reputations can be
damaged and money can be lost if web servers are subverted.
o Although the Web is easy to use, web servers and browsers are exceedingly
complicated pieces of software, with many potential security flaws. Many times in the
past, new features have been added without proper attention being paid to their security
impact. Thus, properly installed software may still pose security threats.
o Once subverted, web browsers and servers can be used by attackers as a launching
point for conducting further attacks against users and organizations.
o It is considerably more expensive and more time-consuming to recover from a
security incident than to take preventative measures ahead of time.
The World Wide Web is the fastest growing part of the Internet. Increasingly, it is also
the part of the Internet that is most vulnerable to attack.
Web servers make an attractive target for attackers for many reasons:
Publicity
Web servers are an organization’s public face to the Internet and the electronic
world. A successful attack on a web server is a public event that may be seen by
hundreds of thousands of people within a matter of hours. Attacks can be mounted
for ideological or financial reasons; alternatively, they can simply be random acts
of vandalism.
Commerce(business)
Many web servers are involved with commerce and money. Indeed, the
cryptographic protocols built into Netscape Navigator and other browsers were
originally placed there to allow users to send credit card numbers over the Internet
without fear of compromise. Web servers have thus become a repository for
sensitive financial information, making them an attractive target for attackers. Of
course, the commercial services on these servers also make them targets of
interest.
Proprietary information
Network access
Because they are used by people both inside and outside an organization, web
servers effectively bridge an organization’s internal and external networks. Their
position of privileged network connectivity makes web servers an ideal target for
attack, as a compromised web server may be used to further attack computers
within an organization.
Unfortunately, the power of web technology makes web servers and browsers especially
vulnerable to attack as well:
Server extensibility
By their very nature, web servers are designed to be extensible. This extensibility
makes it possible to connect web servers with databases, legacy systems, and other
programs running on an organization’s network. If not properly implemented,
modules that are added to a web server can compromise the security of the entire
system.
Browser extensibility
In the same manner that servers can be extended, so can web clients. Today,
technologies such as ActiveX, Java, JavaScript, VBScript, and helper applications
can enrich the web experience with many new features that are not possible with
the HTML language alone. Unfortunately, these technologies can also be
subverted and employed against the browser’s user—often without the user’s
knowledge.
Disruption of service
Complicated support
Web browsers require external services such as DNS (Domain Name Service) and
IP (Internet Protocol) routing to function properly. The robustness and
dependability of those services may not be known and can be vulnerable to bugs,
accidents, and subversion. Subverting a lower-level service can result in problems
for the browsers as well.
The solution to these problems is not to forsake web technology but to embrace both the
limitations and the appropriate security measures. However, it is also important to
understand the limits of any system and to plan accordingly for failure and accident.
computer network
Information that travels over the Internet is divided into compact pieces called packets .
The way that data is divided up and reassembled is specified by the Internet Protocol.
User information can be sent in streams using the Transmission Control Protocol
(TCP) or as a series of packets using the User Datagram Protocol (UDP). Other
protocols are used for sending control information.
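The stream-versus-packet distinction shows up directly in the standard socket API. The sketch below runs both ends on the loopback interface, which is an artificial setup chosen only so the example is self-contained:

```python
# TCP delivers an ordered byte stream over a connection; UDP sends
# individual self-contained datagrams. Both ends run locally here.

import socket

# --- UDP: each sendto() is one datagram ---
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"hello", udp_recv.getsockname())
data, _addr = udp_recv.recvfrom(1024)
assert data == b"hello"
udp_send.close()
udp_recv.close()

# --- TCP: a connection carrying a byte stream ---
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())           # completes via the listen backlog
conn, _peer = srv.accept()
cli.sendall(b"stream of bytes")
received = conn.recv(1024)
assert received == b"stream of bytes"
for s in (cli, conn, srv):
    s.close()
```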
Computers can be connected to one or more networks. Computers that are connected to at
least one network are called hosts . A computer that is connected to more than one
network is called a multi-homed host . If the computer can automatically transmit packets
from one network to another, it is called a gateway . A gateway that examines packets
and determines which network to send them to next is functioning as a router . A
computer can also act as a repeater, by forwarding every packet appearing on one
network to another, or as a bridge , in which the only packets forwarded are those that
need to be. Firewalls are special kinds of computers that are connected to two networks
but selectively forward information. There are fundamentally two kinds of firewalls.
A packet-filtering firewall decides packet-by-packet whether a packet should be copied
from one network to another. Firewalls can also be built from application-level proxies,
which operate at a higher level. Because they can exercise precise control over what
information is passed between two networks, firewalls are thought to improve computer
security.
Most Internet services are based on the client/server model. Under this model, one
program requests service from another program. Both programs can be running on the
same computer or, as is more often the case, on different computers. The program
making the request is called the client; the program that responds to the request is called
the server. Often, the words “client” and “server” are used to describe the computers as
well, although this terminology is technically incorrect. Most client software tends to be
run on personal computers, such as machines running the Windows 95 or MacOS
operating system. Most server software tends to run on computers running the UNIX or
Windows NT operating system. But these operating system distinctions are not too useful
because both network clients and servers are available for all kinds of operating systems.
The World Wide Web was invented in 1990 by Tim Berners-Lee while at the Swiss-
based European Laboratory for Particle Physics (CERN). In 1994, Jim Clark and Marc
Andreessen founded a company, Mosaic Communications, which created a web browser
code-named Mozilla. Soon Clark's company was renamed Netscape Communications
and the web browser was renamed Netscape Navigator.
Information is displayed on the World Wide Web as a series of pages . Web pages are
written in the HyperText Markup Language (HTML). The pages themselves are usually
stored on dedicated computers called web servers. The term web server is used
interchangeably to describe the computer on which the web pages reside and the program
on that computer that receives network requests and transmits HTML files in response.
Web pages are requested and received using messages formatted according to
the HyperText Transport Protocol (HTTP).
Besides transmitting a file, a web server can run a program in response to an incoming
web request. Originally, these programs were invoked using the Common Gateway
Interface (CGI). Although CGI makes it simple to have a web server perform a
complicated operation, such as performing a database lookup, it is not efficient because it
requires that a separate program be started for each incoming web request. A more
efficient technique is to have the web server itself perform the external operation. A
variety of Application Programmer Interfaces (APIs), such as the Netscape API (NSAPI),
are now available to support this function.
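Under CGI, the server starts a new process per request, passes request data in environment variables such as QUERY_STRING, and the program writes an HTTP header block plus body to stdout. A minimal sketch (the `cgi_response` helper name and the `name` parameter are made up for illustration):

```python
# Minimal CGI-style request handler. A real CGI program would read
# os.environ and print to stdout; here the environment is passed in so
# the function can be exercised directly.

import html
from urllib.parse import parse_qs

def cgi_response(environ):
    """Build the header block and body a CGI program would emit."""
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = html.escape(params.get("name", ["world"])[0])  # escape to avoid XSS
    body = f"<html><body>Hello, {name}!</body></html>"
    # A blank line separates the CGI headers from the response body.
    return "Content-Type: text/html\r\n\r\n" + body

out = cgi_response({"QUERY_STRING": "name=web"})
assert out.startswith("Content-Type: text/html")
assert "Hello, web!" in out
```

The per-request process creation visible here is exactly the inefficiency that server APIs such as NSAPI were designed to avoid.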
A virus is a malicious computer program that makes copies of itself and attaches
those copies to other programs. A worm is similar to a virus, except that it sends copies
of itself to other computers, where they run as standalone programs. A Trojan horse is a
program that appears to perform a useful function, but actually has a hidden malicious
function. For instance, a program that claims to be an address book, but actually
reformats your hard drive when you run it, is a kind of Trojan horse.
What’s a “Secure Web Server” Anyway?
In recent years, the phrase “secure web server” has come to mean different things to
different people:
o For the software vendors that sell them, a secure web server is a program that
implements certain cryptographic protocols, so that information transferred between a
web server and a web browser cannot be eavesdropped upon.
o For users, a secure web server is one that will safeguard any personal information
that is received or collected. It’s one that supports their privacy and won’t subvert their
browser to download viruses or other rogue programs onto their computer.
o For a company that runs one, a secure web server is resistant to a determined
attack over the Internet or from corporate insiders.
A secure web server is all of these things, and more. It’s a server that is reliable. It’s a
server that is mirrored or backed up, so in the event of a hardware or software failure it
can be reconstituted quickly. It’s a server that is expandable, so that it can adequately
service large amounts of traffic.
HTTP is a client-server protocol: requests are sent by one entity, the user-agent (or a
proxy on behalf of it). Most of the time the user-agent is a Web browser, but it can be
anything, for example a robot that crawls the Web to populate and maintain a search
engine index.
Each individual request is sent to a server, which handles it and provides an answer, called
the response. Between the client and the server there are numerous entities, collectively
called proxies, which perform different operations and act as gateways or caches, for
example.
In reality, there are more computers between a browser and the server handling the
request: there are routers, modems, and more. Thanks to the layered design of the Web,
these are hidden in the network and transport layers. HTTP is on top, at the application
layer. Although important to diagnose network problems, the underlying layers are mostly
irrelevant to the description of HTTP.
A Web page is a hypertext document. This means some parts of displayed text are links
which can be activated (usually by a click of the mouse) to fetch a new Web page,
allowing the user to direct their user-agent and navigate through the Web. The browser
translates these directions into HTTP requests, and further interprets the HTTP responses to
present the user with a clear response.
Security Interface
Provide user interface elements for security features such as authorization, access to
digital certificates, and access to items in keychains.
frame busting
Web framing attacks such as clickjacking use iframes to hijack a user's web session. The
most common defense, called frame busting, prevents a site from functioning when
loaded inside a frame. We study frame busting practices for the Alexa Top-500 sites and
show that all can be circumvented in one way or another. Some circumventions are
browser-specific while others work across browsers. We conclude with recommendations
for proper frame busting.
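Because script-based frame busting can be circumvented, a common complementary defense is declarative: response headers that instruct the browser itself never to render the page in a frame. A sketch, assuming a hypothetical `add_frame_protection` helper applied to whatever header dictionary a web framework exposes:

```python
# Add the standard anti-framing response headers. X-Frame-Options is the
# older mechanism; the Content-Security-Policy frame-ancestors directive
# is its modern equivalent. Browsers enforce these, so page scripts
# cannot be tricked into disabling them the way frame-busting JS can.

def add_frame_protection(headers):
    """Return a copy of headers with anti-framing directives added."""
    protected = dict(headers)
    protected["X-Frame-Options"] = "DENY"
    protected["Content-Security-Policy"] = "frame-ancestors 'none'"
    return protected

headers = add_frame_protection({"Content-Type": "text/html"})
assert headers["X-Frame-Options"] == "DENY"
```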
Cookies:-
Cookies are text files. They are stored on a user’s computer by a web browser, at the
request of the web server.
A cookie is limited to a small amount of data and can only be read by the website that
created it.
To avoid the size limitations of cookies, some websites will store a unique identification
code in a cookie, and the remainder of the data in their own databases.
Cookies are not programs. They cannot perform any operations, they are
not viruses or malware.
Cookies can be disabled in your browser settings, however this could make some
websites unusable (e.g. e-commerce).
Vulnerabilities in the implementation of TCP/IP protocol suit are the most exploited
of them all.
>URL INTERPRETATION ATTACKS
This attack is also called URL poisoning, as the attacker manipulates the URL by
changing its semantics while keeping the syntax intact. The parameters of the URL are
adjusted so that information beyond what is intended can be retrieved from the web
server. This type of attack is quite common with CGI-based websites.
>SQL INJECTION ATTACKS
As the name suggests, an SQL injection attack aims to modify a database or extract
information from it. An SQL query with parameters taken from the URL is fed to the
database, giving the attacker the ability to alter the data. Stored procedures in the
database can also be executed through SQL injection, making the database do things
that should happen only when requested by authorized personnel.
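The standard defense is the parameterized query: user input travels as data, never as SQL text. A self-contained sketch with the standard-library sqlite3 module (the table and the attacker string are made up to show the contrast):

```python
# SQL injection, and its defense, in miniature.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "x' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
unsafe = "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
assert db.execute(unsafe).fetchall() == [("s3cret",)]   # secret leaked!

# Safe: the ? placeholder treats the whole input as one literal value.
safe = db.execute("SELECT secret FROM users WHERE name = ?",
                  (attacker_input,)).fetchall()
assert safe == []                                       # no match, no leak
```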
>INPUT VALIDATION ATTACKS
An input validation attack is an attack on the web server in which the server executes
code injected by a hacker into the web server or the database server. There are many
input properties that need to be validated before execution, including data type, data
ranges, and others. By executing code with inputs that are not validated, the attacker
can retrieve or modify information.
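The type-and-range checks just mentioned can be sketched as a small whitelist-style validator (the field and its bounds are invented for illustration):

```python
# Validate before use: accept only an integer in a sensible range, and
# reject everything else, including injection attempts, before the value
# ever reaches a query or command.

def validate_age(raw):
    """Return the value as an int if it is a whole number in 0..150,
    otherwise None."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return None
    if not 0 <= value <= 150:
        return None
    return value

assert validate_age("42") == 42
assert validate_age("42; DROP TABLE users") is None   # not an integer
assert validate_age("-7") is None                     # out of range
```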
>BUFFER OVERFLOW ATTACKS
A buffer overflow attack implies deliberately overflowing the buffer memory that is
reserved for the user's input. When an application awaits a user's input, it allocates a
stack with a memory location where the user's input data is stored. The attacker floods
this space by writing arbitrary data so that the memory stack overflows and users are
denied service. This is one of the ways to perform a denial of service attack, which is
dealt with in more detail further on.
>IMPERSONATION ATTACKS
An impersonation attack is also called IP spoofing: the hacker accesses the web server
while pretending to use an IP address that actually belongs to a host authorized to
access the server. There are special programs that hackers use to create an IP packet
that appears to originate from the intranet, and hence gain entry to the section of the
web server that is intended to be accessed only by authorized personnel.
>PASSWORD-BASED ATTACKS
The authentication system of a web server is often based on the password that identifies a
valid user and grants access to the web server. If the hacker can, by any means, get your
username and password, he or she can access the information that only you are supposed
to access. Older applications often do not have a strong authentication system, and this makes
it easy for the eavesdroppers to get through the authentication process.
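One standard mitigation on the server side is never to store or compare plaintext passwords: store a salted, deliberately slow hash instead, so a stolen password file cannot be read directly. A sketch with the standard-library PBKDF2 (the helper names are hypothetical):

```python
# Salted, slow password hashing: even if the stored digests leak, each
# guess costs the attacker many hash iterations, and identical passwords
# get different digests because of the per-user random salt.

import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    """Return (salt, digest) for storage; a fresh salt is drawn if needed."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, stored_digest, rounds=100_000):
    _salt, digest = hash_password(password, salt, rounds)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse")
assert verify_password("correct horse", salt, stored)
assert not verify_password("guess1234", salt, stored)
```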
>DENIAL OF SERVICE ATTACKS
A denial of service (DoS) attack is an attack in which the server is prevented from
serving users with a response to their requests. This attack is performed by several
means, and buffer overflow is one of them. It is an effective and, naturally, one of the
most popular ways of attacking a web server. After gaining access to the network, the
attackers divert the attention of the security experts so that they do not become aware
of the attack immediately, and the attackers can then exploit the web server in other ways.
>BRUTE FORCE
Brute Force, as the name suggests, implies cracking the username, password combination
by using all possible iterations. This is a basic form of web server attack and is
implemented when the hackers have a clue that weak passwords have been used in the
authentication. The chance of brute force working is maximum when no other security
measures are there besides password authentication.
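What brute force means concretely is enumerating every candidate in the key space. The toy search below covers 3-letter lowercase passwords only, which also shows why password length and alphabet size matter: the space grows as alphabet^length.

```python
# Exhaustive search over a tiny password space. The whole 3-letter
# lowercase space has 26**3 = 17,576 candidates; adding length or
# character classes multiplies that count.

import itertools
import string

def brute_force(target, alphabet=string.ascii_lowercase, length=3):
    """Try every candidate in order; return how many guesses it took,
    or None if the target lies outside the search space."""
    tried = 0
    for combo in itertools.product(alphabet, repeat=length):
        tried += 1
        if "".join(combo) == target:
            return tried
    return None

attempts = brute_force("cab")
assert attempts is not None
assert attempts <= 26 ** 3
```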
>SOURCE CODE DISCLOSURE
Through a source code disclosure attack, attackers are able to retrieve application files
without any parsing. The source code of the application is recovered and then analyzed
to find loopholes that can be used to attack the web server. It is often caused by a
poorly designed application or by errors in the configuration.
>SESSION HIJACKING
HTTP is a stateless protocol while all the web applications have states. When the tracking
of these states is based on poor mechanism, session hijacking becomes easy for the
hackers. It is also called cookie hijacking because a web server determines the session
with a user based on the cookie. The cookie stored on the user's computer is stolen by
the hijacker by either intercepting it through the access to the network or through a
previously saved cookie. Sniffing programs are used to perform this attack in an
automated manner.
Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute
unwanted actions on a web application in which they're currently authenticated. CSRF
attacks specifically target state-changing requests, not theft of data, since the attacker has
no way to see the response to the forged request. With a little help from social engineering
(such as sending a link via email or chat), an attacker may trick the users of a web
application into executing actions of the attacker's choosing. If the victim is a normal
user, a successful CSRF attack can force the user to perform state changing requests like
transferring funds, changing their email address, and so forth. If the victim is an
administrative account, CSRF can compromise the entire web application.
Related vulnerabilities
In a Universal Cross-Site Scripting (UXSS, or Universal XSS) attack, vulnerabilities in
the browser itself or in the browser plugins are exploited (rather than vulnerabilities in
other websites, as is the case with XSS attacks); such attacks are commonly used
by Anonymous, along with DDoS, to compromise control of a network.
Several classes of vulnerabilities or attack techniques are related to XSS: cross-zone
scripting exploits "zone" concepts in certain browsers and usually executes code with a
greater privilege. HTTP header injection can be used to create cross-site scripting
conditions due to escaping problems on HTTP protocol level (in addition to enabling
attacks such as HTTP response splitting).
Cross-site request forgery (CSRF/XSRF) is almost the opposite of XSS, in that rather
than exploiting the user's trust in a site, the attacker (and his malicious page) exploits the
site's trust in the client software, submitting requests that the site believes represent
conscious and intentional actions of authenticated users.[57] XSS vulnerabilities (even in
other applications running on the same domain) allow attackers to bypass CSRF
prevention efforts.
With the vast amount of threats that constantly pressure companies and governments, it is
important to ensure that the software applications these organizations utilize are
completely secure. Secure development is a practice to ensure that the code and processes
that go into developing applications are as secure as possible. Secure development entails
the utilization of several processes, including the implementation of a Security
Development Lifecycle (SDL) and secure coding itself.
Integrating security practices into the software development lifecycle and verifying the
security of internally developed applications before they are deployed can help mitigate
risk from internal and external sources. Using Veracode to test the security of
applications helps customers implement a secure development program in a simple and
cost-effective way.
The Security Development Lifecycle (SDL) is a software development security assurance
process consisting of security practices grouped by six phases: training, requirements &
design, construction, testing, release, and response.