Data Security


Data is the raw form of information stored as columns and rows in our databases, network
servers and personal computers. This may be a wide range of information, from personal files and
intellectual property to market analytics and details intended to be kept top secret. Data could be
anything of interest that can be read or otherwise interpreted in human form.

However, some of this information isn't intended to leave the system. Unauthorized access to
this data could lead to numerous problems for a large corporation or even the personal home
user. Having your bank account details stolen is just as damaging for you as a stolen client
database is for a system administrator.

There has been a huge emphasis on data security as of late, largely because of the internet. There
are a number of options for locking down your data from software solutions to hardware
mechanisms. Computer users are certainly more conscious these days, but is your data really
secure? If you're not following the essential guidelines, your sensitive information just may be at
risk.

Encryption
Encryption has become a critical security feature for thriving networks and active home users
alike. This security mechanism uses mathematical schemes and algorithms to scramble data into
unreadable text. It can only be decoded or decrypted by the party that possesses the associated
key.
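
As a rough illustration of the idea, the sketch below uses the third-party Python "cryptography" package (an assumption for the example, not something prescribed by the text) to scramble a short piece of data and then recover it with the same key:

    # Minimal symmetric-encryption sketch; the sample data and key handling
    # are illustrative only.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # secret key that must be kept safe
    cipher = Fernet(key)

    plaintext = b"account number: 12345678"
    token = cipher.encrypt(plaintext)  # unreadable without the key
    recovered = cipher.decrypt(token)  # only a holder of `key` can do this
    assert recovered == plaintext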

Full-disk encryption (FDE) offers some of the best protection available. This technology enables
you to encrypt every piece of data on a disk or hard disk drive. Full disk encryption is even more
powerful when hardware solutions are used in conjunction with software components. This
combination is often referred to as end-based or end-point full disk encryption.


   
Authentication

Authentication is another part of data security that we encounter in everyday computer usage.
Just think about when you log into your email or blog account. That single sign-on process is a
form of authentication that allows you to log into applications, files, folders and even an entire
computer system. Once logged in, you have various privileges until logging out. Some
systems will cancel a session if your machine has been idle for a certain amount of time,
requiring that you prove authentication once again to re-enter.

The single sign-on scheme is also implemented into strong user authentication systems.
However, these require individuals to log in using multiple factors of authentication. This may
include a password, a one-time password, a smart card or even a fingerprint.
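
To make the "one-time password" factor concrete, here is a minimal Python sketch of a time-based one-time password generator following the standard TOTP construction (RFC 6238); the shared secret and parameters are illustrative assumptions:

    # Time-based one-time password (TOTP) sketch using only the standard library.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // interval          # changes every 30 seconds
        msg = struct.pack(">Q", counter)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Example with a made-up shared secret (base32-encoded):
    print(totp("JBSWY3DPEHPK3PXP"))
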
Backup Solutions

Data security wouldn't be complete without a solution to back up your critical information.
Though it may appear secure while confined away in a machine, there is always a chance that
your data can be compromised. You could suddenly be hit with a malware infection where a
virus destroys all of your files. Someone could enter your computer and steal data by slipping
through a security hole in the operating system. Perhaps it was an inside job that caused your
business to lose those sensitive reports. If all else fails, a reliable backup solution will allow you
to restore your data instead of starting completely from scratch.

The term network monitoring describes the use of a system that constantly monitors a computer
network for slow or failing components and that notifies the network administrator (via email,
pager or other alarms) in case of outages. It is a subset of the functions involved in network
management.

While an intrusion detection system monitors a network for threats from the outside, a network
monitoring system monitors the network for problems caused by overloaded and/or crashed
servers, network connections or other devices.

For example, to determine the status of a webserver, monitoring software may periodically send
an HTTP request to fetch a page. For email servers, a test message might be sent through SMTP
and retrieved by IMAP or POP3.
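
For instance, a very small availability probe could look like the following Python sketch, which fetches a page once and records whether it succeeded and how long it took; the URL and timeout are placeholders:

    # Minimal HTTP availability/response-time probe using the standard library.
    import time
    import urllib.error
    import urllib.request

    def check_http(url: str, timeout: float = 10.0) -> dict:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                status = resp.status
                ok = 200 <= status < 400
        except (urllib.error.URLError, OSError):
            ok, status = False, None
        return {"url": url, "ok": ok, "status": status,
                "response_time_s": round(time.monotonic() - start, 3)}

    print(check_http("https://example.com/"))   # placeholder URL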

Commonly measured metrics are response time, availability and uptime, although both
consistency and reliability metrics are starting to gain popularity. The widespread addition of
WAN optimization devices is having an adverse effect on most network monitoring tools --
especially when it comes to measuring accurate end-to-end response time because they limit
round trip visibility.[1]

Status request failures - such as when a connection cannot be established, it times out, or the
document or message cannot be retrieved - usually produce an action from the monitoring
system. These actions vary -- an alarm may be sent (via SMS, email, etc.) to the resident
sysadmin, automatic failover systems may be activated to remove the troubled server from duty
until it can be repaired, and so on.

Monitoring the performance of a network uplink is also known as network traffic measurement.

 

Network tomography

Network tomography is an important area of network measurement which deals with monitoring
the health of various links in a network using end-to-end probes sent by agents located at vantage
points in the network/Internet.
Route analytics

Route analytics is another important area of network measurement. It includes the methods,
systems, algorithms and tools to monitor the routing posture of networks. Incorrect routing or
routing issues cause undesirable performance degradation or downtime.

   


Internet server monitoring

Website monitoring services can check HTTP pages, HTTPS, SNMP, FTP, SMTP, POP3, IMAP,
DNS, SSH, TELNET, SSL, TCP, ICMP, SIP, UDP, media streaming and a range of other ports
with a variety of check intervals ranging from every four hours to every one minute. Typically,
most network monitoring services test your server anywhere between once per hour and once per
minute.

    
 
Servers around the globe

Network monitoring services usually have a number of servers around the globe - for example in
America, Europe, Asia, Australia and other locations. By having multiple servers in different
geographic locations, a monitoring service can determine if a Web server is available across
different networks worldwide. The more locations that are used, the more complete the picture of
network availability.

        

Firewall


A firewall is a device or set of devices designed to permit or deny network transmissions based
upon a set of rules and is frequently used to protect networks from unauthorized access while
permitting legitimate communications to pass.

Many personal computer operating systems include software-based firewalls to protect against
threats from the public Internet. Many routers that pass data between networks contain firewall
components and, conversely, many firewalls can perform basic routing functions.

   

History
The term firewall originally referred to a wall intended to confine a fire or potential fire within a
building; cf. firewall (construction). Later uses refer to similar structures, such as the metal sheet
separating the engine compartment of a vehicle or aircraft from the passenger compartment.

The Morris Worm spread itself through multiple vulnerabilities in the machines of the time.
Although it was not malicious in intent, the Morris Worm was the first large scale attack on
Internet security; the online community was neither expecting an attack nor prepared to deal with
one.[1]

First generation: packet filters

The first paper published on firewall technology was in 1988, when engineers from Digital
Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This
fairly basic system was the first generation of what became a highly evolved and technical
internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin were continuing
their research in packet filtering and developed a working model for their own company based
on their original first generation architecture.

This type of packet filtering pays no attention to whether a packet is part of an existing stream of
traffic (i.e. it stores no information on connection "state"). Instead, it filters each packet based
only on information contained in the packet itself (most commonly using a combination of the
packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port
number).

TCP and UDP protocols constitute most communication over the Internet, and because TCP and
UDP traffic by convention uses well known ports for particular types of traffic, a "stateless"
packet filter can distinguish between, and thus control, those types of traffic (such as web
browsing, remote printing, email transmission, file transfer), unless the machines on each side of
the packet filter are both using the same non-standard ports.

Packet filtering firewalls work mainly on the first three layers of the OSI reference model, which
means most of the work is done between the network and physical layers, with a little bit of
peeking into the transport layer to figure out source and destination port numbers.[2] When a
packet originates from the sender and filters through a firewall, the device checks for matches to
any of the packet filtering rules that are configured in the firewall and drops or rejects the packet
accordingly. When the packet passes through the firewall, it filters the packet on a protocol/port
number basis. For example, if a rule in the firewall exists to block telnet access, then the
firewall will block TCP traffic to port number 23.
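
The rule-matching logic can be pictured with a toy Python sketch like the one below; the rule table, field names and default action are made up for illustration and do not reflect how any particular firewall is configured:

    # Toy stateless packet filter: each packet is judged on its own fields only.
    RULES = [
        # (protocol, destination port, action)
        ("tcp", 23, "drop"),     # e.g. block telnet
        ("tcp", 80, "accept"),
        ("tcp", 443, "accept"),
    ]
    DEFAULT_ACTION = "drop"

    def filter_packet(protocol: str, dst_port: int) -> str:
        for rule_proto, rule_port, action in RULES:
            if protocol == rule_proto and dst_port == rule_port:
                return action
        return DEFAULT_ACTION

    print(filter_packet("tcp", 23))    # drop
    print(filter_packet("tcp", 443))   # accept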

Second generation: application layer



The key benefit of application layer filtering is that it can "understand" certain applications and
protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect if an
unwanted protocol is sneaking through on a non-standard port or if a protocol is being abused in
any harmful way.

An application firewall is much more secure and reliable compared to packet filter firewalls
because it works on all seven layers of the OSI model, from the application layer down to the
physical layer. This is similar to a packet filter firewall, but here we can also filter information on
the basis of content. Good examples of application firewalls are MS-ISA (Internet Security and
Acceleration) Server, McAfee Firewall Enterprise and Palo Alto PS Series firewalls. An
application firewall can filter higher-layer protocols such as FTP, Telnet, DNS, DHCP, HTTP,
TCP, UDP and TFTP. For example, if an organization wants to block all information
related to "foo", then content filtering can be enabled on the firewall to block that particular word.
Software-based firewalls (MS-ISA) are much slower than hardware-based stateful firewalls, but
dedicated appliances (McAfee and Palo Alto) provide much higher performance levels for
application inspection.
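
The content-filtering idea from the example above can be sketched in a few lines of Python; the blocked-word list and payload handling are purely illustrative:

    # Toy application-layer content filter: inspect the payload, not just headers.
    BLOCKED_WORDS = {"foo"}

    def allow_payload(payload: bytes) -> bool:
        text = payload.decode("utf-8", errors="ignore").lower()
        return not any(word in text for word in BLOCKED_WORDS)

    print(allow_payload(b"GET /index.html HTTP/1.1"))   # True  (passes)
    print(allow_payload(b"search?q=foo+bar"))           # False (blocked)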

In 2009/2010 the focus of the most comprehensive firewall security vendors turned to expanding
the list of applications such firewalls are aware of, now covering hundreds and in some cases
thousands of applications which can be identified automatically. Many of these applications can
not only be blocked or allowed but also manipulated by the more advanced firewall products to
allow only certain functionality, enabling network security administrators to give users functionality
without enabling unnecessary vulnerabilities. As a consequence, these advanced versions of the
"Second Generation" firewalls are being referred to as "Next Generation" and surpass the "Third
Generation" firewall. It is expected that, due to the nature of malicious communications, this trend
will have to continue to enable organizations to be truly secure.

Third generation: "stateful" filters



From 1989-1990, three colleagues from AT&T Bell Laboratories, Dave Presetto, Janardan
Sharma, and Kshitij Nigam, developed the third generation of firewalls, calling them circuit level
firewalls.

Third-generation firewalls, in addition to what first- and second-generation look for, regard
placement of each individual packet within the packet series. This technology is generally
referred to as a stateful packet inspection as it maintains records of all connections passing
through the firewall and is able to determine whether a packet is the start of a new connection, a
part of an existing connection, or is an invalid packet. Though there is still a set of static rules in
such a firewall, the state of a connection can itself be one of the criteria which trigger specific
rules.

This type of firewall can actually be exploited by certain Denial-of-service attacks which can fill
the connection tables with illegitimate connections.

Subsequent developments


 

In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were
refining the concept of a firewall. The product known as "Visas" was the first system to have a
visual integration interface with colors and icons, which could be easily implemented and
accessed on a computer operating system such as Microsoft's Windows or Apple's MacOS. In
1994 an Israeli company called Check Point Software Technologies built this into readily
available software known as FireWall-1.

The existing deep packet inspection functionality of modern firewalls can be shared by Intrusion-
prevention systems (IPS).

Currently, the Middlebox Communication Working Group of the Internet Engineering Task
Force (IETF) is working on standardizing protocols for managing firewalls and other
middleboxes.

Another axis of development is the integration of user identity into firewall rules. Many
firewalls provide such features by binding user identities to IP or MAC addresses, which is only
approximate and can be easily circumvented. The NuFW firewall provides real identity-based
firewalling by requesting the user's signature for each connection. authpf on BSD systems loads
firewall rules dynamically per user, after authentication via SSH.

Types
There are several classifications of firewalls depending on where the communication is taking
place, where the communication is intercepted and the state that is being traced.

Network layer or packet filters

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP
protocol stack, not allowing packets to pass through the firewall unless they match the
established rule set. The firewall administrator may define the rules; or default rules may apply.
The term "packet filter" originated in the context of BSD operating systems.

Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful
firewalls maintain context about active sessions, and use that "state information" to speed packet
processing. Any existing network connection can be described by several properties, including
source and destination IP address, UDP or TCP ports, and the current stage of the connection's
lifetime (including session initiation, handshaking, data transfer, or completion connection). If a
packet does not match an existing connection, it will be evaluated according to the ruleset for
new connections. If a packet matches an existing connection based on comparison with the
firewall's state table, it will be allowed to pass without further processing.
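
That fast-path/slow-path behaviour can be illustrated with a small Python sketch; the 5-tuple key, the rule callback and the sample packet are assumptions made for the example:

    # Toy stateful lookup: known connections pass immediately, new ones go
    # through the full ruleset and, if accepted, are added to the state table.
    state_table = set()

    def evaluate(pkt: dict, new_connection_allowed) -> str:
        key = (pkt["src_ip"], pkt["src_port"],
               pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
        if key in state_table:
            return "accept"                    # existing connection: fast path
        if new_connection_allowed(pkt):        # new connection: consult ruleset
            state_table.add(key)
            return "accept"
        return "drop"

    allow_web = lambda p: p["proto"] == "tcp" and p["dst_port"] in (80, 443)
    pkt = {"src_ip": "10.0.0.5", "src_port": 50000,
           "dst_ip": "192.0.2.10", "dst_port": 443, "proto": "tcp"}
    print(evaluate(pkt, allow_web))   # accept (new, recorded in the table)
    print(evaluate(pkt, allow_web))   # accept (matched in the state table)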

Stateless firewalls require less memory, and can be faster for simple filters that require less time
to filter than to look up a session. They may also be necessary for filtering stateless network
protocols that have no concept of a session. However, they cannot make more complex decisions
based on what stage communications between hosts have reached.

Modern firewalls can filter traffic based on many packet attributes such as source IP address, source
port, destination IP address or port, and destination service like WWW or FTP. They can filter based
on protocols, TTL values, netblock of the originator or of the source, and many other attributes.

Commonly used packet filters on various versions of Unix are ipf (various), ipfw (FreeBSD/Mac
OS X), pf (OpenBSD, and all other BSDs), and iptables/ipchains (Linux).

Application-layer

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser
traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an
application. They block other packets (usually dropping them without acknowledgment to the
sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching
protected machines.

On inspecting all packets for improper content, firewalls can restrict or prevent outright the
spread of networked computer worms and trojans. The additional inspection criteria can add
extra latency to the forwarding of packets to their destination.

Proxies



A proxy device (running either on dedicated hardware or as software on a general-purpose
machine) may act as a firewall by responding to input packets (connection requests, for example)
in the manner of an application, whilst blocking other packets.

Proxies make tampering with an internal system from the external network more difficult and
misuse of one internal system would not necessarily cause a security breach exploitable from
outside the firewall (as long as the application proxy remains intact and properly configured).
Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own
purposes; the proxy then masquerades as that system to other internal machines. While use of
internal address spaces enhances security, crackers may still employ methods such as IP spoofing
to attempt to pass packets to a target network.

Network address translation


Firewalls often have network address translation (NAT) functionality, and the hosts protected
behind a firewall commonly have addresses in the "private address range", as defined in RFC
1918. Firewalls often have such functionality to hide the true address of protected hosts.
Originally, the NAT function was developed to address the limited number of IPv4 routable
addresses that could be used or assigned to companies or individuals, as well as to reduce both the
amount and therefore cost of obtaining enough public addresses for every computer in an
organization. Hiding the addresses of protected devices has become an increasingly important
defense against network reconnaissance.
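
A very small sketch of the address-hiding idea (not of any real NAT implementation) is shown below; the public address, port range and table layout are invented for illustration:

    # Toy NAT table: outbound connections are rewritten to the firewall's
    # public address; replies are mapped back via the translation table.
    PUBLIC_IP = "203.0.113.1"        # example public address
    nat_table = {}                   # public port -> (private ip, private port)
    next_port = 40000

    def translate_outbound(private_ip: str, private_port: int):
        global next_port
        public_port = next_port
        next_port += 1
        nat_table[public_port] = (private_ip, private_port)
        return PUBLIC_IP, public_port

    def translate_inbound(public_port: int):
        return nat_table.get(public_port)    # None if no mapping exists

    print(translate_outbound("192.168.1.23", 51512))   # ('203.0.113.1', 40000)
    print(translate_inbound(40000))                    # ('192.168.1.23', 51512)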


Intrusion Detection

This section will deal with how to get early warning, how to be alerted after the fact, and how to
clean up from intrusion attempts.

Intrusion Detection Systems (IDS)


Intrusion Detection Systems (IDS for short) are designed to catch what might have gotten past
the firewall. They can either be designed to catch an active break-in attempt in progress, or to
detect a successful break-in after the fact. In the latter case, it is too late to prevent any damage,
but at least we have early awareness of a problem. There are two basic types of IDS: those
protecting networks, and those protecting individual hosts.

For host based IDS, this is done with utilities that monitor the filesystem for changes. System
files that have changed in some way, but should not change -- unless we did it -- are a dead
giveaway that something is amiss. Anyone who gets in, and gets root, will presumably make changes
to the system somewhere. This is usually the very first thing done, either so he can get back in
through a backdoor, or to launch an attack against someone else. In which case, he has to change
or add files to the system.

This is where tools like tripwire (http://www.tripwire.org) play a role. Such tools monitor various
aspects of the filesystem, and compare them against a stored database. They can be configured to
send an alert if any changes are detected. Such tools should only be installed on a known "clean"
system.
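
A stripped-down version of that idea can be written in Python; the paths and baseline filename are placeholders, and a real tool like tripwire tracks far more than content hashes:

    # Minimal file-integrity sketch: hash files once into a baseline, then
    # compare later runs against it and report anything that changed.
    import hashlib, json, os

    def hash_file(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_baseline(paths, baseline_file="baseline.json"):
        baseline = {p: hash_file(p) for p in paths if os.path.isfile(p)}
        with open(baseline_file, "w") as f:
            json.dump(baseline, f, indent=2)

    def check_baseline(baseline_file="baseline.json"):
        with open(baseline_file) as f:
            baseline = json.load(f)
        for path, old_digest in baseline.items():
            if not os.path.isfile(path):
                print("MISSING", path)
            elif hash_file(path) != old_digest:
                print("CHANGED", path)

    # build_baseline(["/etc/passwd", "/etc/hosts"])   # run on a known clean system
    # check_baseline()                                 # run periodically afterwards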

For home desktops and home LANs, this is probably not an absolutely necessary component of
an overall security strategy. But it does give peace of mind, and certainly does have its place. As
to priorities, make sure the basic steps above are implemented and verified to be sound before
delving into this.

RPM users can get somewhat the same results with rpm -Va, which will verify all packages, but
without all the same functionality. For instance, it will not notice new files added to most
directories. Nor will it detect files that have had the extended attributes changed (e.g. chattr +i;
see man chattr and man lsattr). For this to be helpful, it needs to be done after a clean install, and
then each time any packages are upgraded or added.
http://tldp.org/HOWTO/Security-Quickstart-HOWTO/intrusion.html

In Information Technology, a backup, or the process of backing up, refers to making copies of
data so that these additional copies may be used to restore the original after a data loss event.
The verb form is back up, in two words, whereas the noun is backup (often used like an adjective
in compound nouns).[1]

Backups have two distinct purposes. The primary purpose is to recover data after a data loss
event, whether by data deletion or data corruption. Data loss is a very common experience of computer
users: 67% of internet users have suffered serious data loss.[2] The secondary purpose of backups
is to recover data from an earlier point in time, within the constraints of a user-defined data
retention policy, typically configured within a backup application that specifies how long copies of data are
required. Though backups popularly represent a simple form of disaster recovery, and should be
part of a disaster recovery plan, backups by themselves should not be considered disaster
recovery.[3] Not all backup systems and/or backup applications are able to reconstitute a
computer system, or other complex configurations such as a computer cluster, Active
Directory servers, or a database server, by restoring only data from a backup.

Since a backup system contains at least one copy of all data worth saving, the data storage
requirements are considerable. Organizing this storage space and managing the backup process is
a complicated undertaking. A data repository model can be used to provide structure to the
storage. In the modern era of computing there are many different types of data storage devices
that are useful for making backups. There are also many different ways in which these devices
can be arranged to provide geographic redundancy, data security, and portability.

Before data is sent to its storage location, it is selected, extracted, and manipulated. Many
different techniques have been developed to optimize the backup procedure. These include
optimizations for dealing with open files and live data sources as well as compression,
encryption, and de-duplication, among others. Many organizations and individuals try to have
confidence that the process is working as expected and work to define measurements and
validation techniques. It is also important to recognize the limitations and human factors
involved in any backup scheme.
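
As a concrete and deliberately simple illustration, the sketch below packs a directory into a timestamped compressed archive; the paths are placeholders, and it ignores the rotation, encryption, de-duplication and restore testing discussed above:

    # Minimal backup sketch: copy a directory tree into a timestamped .tar.gz.
    import os, tarfile, time

    def backup_directory(source_dir: str, dest_dir: str) -> str:
        os.makedirs(dest_dir, exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        archive = os.path.join(dest_dir, f"backup-{stamp}.tar.gz")
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(source_dir, arcname=os.path.basename(source_dir))
        return archive

    # print(backup_directory("/home/alice/documents", "/mnt/backups"))  # placeholder paths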


RAID, an acronym for Redundant Array of Independent Disks (changed from its original term
Redundant Array of Inexpensive Disks), is a technology that provides increased storage
functions and reliability through redundancy. This is achieved by combining multiple disk drive
components into a logical unit, where data is distributed across the drives in one of several ways
called "RAID levels". This concept was first defined by David A. Patterson, Garth A. Gibson,
and Randy Katz at the University of California, Berkeley in 1987 as Redundant Arrays of
Inexpensive Disks.[1] Marketers representing industry RAID manufacturers later attempted to
reinvent the term to describe a Redundant Array of Independent Disks as a means of dissociating a
low-cost expectation from RAID technology.[2]

RAID is now used as an umbrella term for computer data storage schemes that can divide and
replicate data among multiple disk drives. The schemes or architectures are named by the word
RAID followed by a number (e.g., RAID 0, RAID 1). The various designs of RAID systems
involve two key goals: increase data reliability and increase input/output performance. When
multiple physical disks are set up to use RAID technology, they are said to be in a RAID array.[3]
This array distributes data across multiple disks, but the array is addressed by the operating
system as one single disk. RAID can be set up to serve several different purposes.
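
One way to see how redundancy works in parity-based RAID levels (RAID 5, for example) is the XOR relationship between data blocks and the parity block; the toy Python sketch below rebuilds a "lost" block from the survivors (block contents are made up):

    # Toy parity example: parity = XOR of the data blocks, so any single lost
    # block can be reconstructed from the remaining blocks plus the parity.
    def xor_blocks(blocks):
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    data_blocks = [b"disk0 block.", b"disk1 block.", b"disk2 block."]
    parity = xor_blocks(data_blocks)

    # Simulate losing disk 1 and rebuilding its block from the others + parity.
    rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
    assert rebuilt == data_blocks[1]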



In computing, failover is the capability to switch over automatically to a redundant or standby
computer server, system, or network upon the failure or abnormal termination of the previously
active application,[1] server, system, or network. Failover happens without human intervention
and generally without warning, unlike switchover.

Systems designers usually provide failover capability in servers, systems or networks requiring
continuous availability and a high degree of reliability.

At server-level, failover automation takes place using a "heartbeat" cable that connects two
servers. As long as a regular "pulse" or "heartbeat" continues between the main server and the
second server, the second server will not initiate its systems. There may also be a third "spare
parts" server that has running spare components for "hot" switching to prevent down time.

The second server will immediately take over the work of the first as soon as it detects an
alteration in the "heartbeat" of the first machine. Some systems have the ability to page or send a
message to a pre-assigned technician or center.
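
A bare-bones sketch of that heartbeat logic is shown below; the timeout value and the promote() callback are illustrative placeholders rather than how any particular failover product works:

    # Toy heartbeat watcher: if no pulse arrives from the primary within the
    # timeout, promote the standby server exactly once.
    import time

    HEARTBEAT_TIMEOUT_S = 5.0

    class HeartbeatMonitor:
        def __init__(self, promote):
            self.promote = promote              # callback that activates the standby
            self.last_beat = time.monotonic()
            self.failed_over = False

        def beat(self):
            """Record a heartbeat received from the primary."""
            self.last_beat = time.monotonic()

        def poll(self):
            """Run periodically on the standby to check for silence."""
            if self.failed_over:
                return
            if time.monotonic() - self.last_beat > HEARTBEAT_TIMEOUT_S:
                self.failed_over = True
                self.promote()

    monitor = HeartbeatMonitor(promote=lambda: print("primary silent -- taking over"))
    monitor.poll()   # primary still considered alive at this point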

Some systems, intentionally, do not failover entirely automatically, but require human
intervention. This "automated with manual approval" configuration runs automatically once a
human has approved the failover.

Failback, conversely, involves the process of restoring a system/component/service in a state of
failover back to its original state (before failure).

The use of virtualization software has allowed failover practices to become less reliant on
physical hardware.

There are 2 types of failover:

1. Automatic failover - two servers are located in two different geographic locations. If a disaster
happens at the host site, the secondary server takes over automatically without user or support
intervention. In this case there is usually online data replication from the host site to the surviving
site (the recovery site), or clustering technology is used to fail over to the secondary server. There
are also other high-availability technologies, such as Hyper-V or VMware, which cause very little
interruption so that business can resume as normal. This solution is primarily used for
high-reliability/critical applications or systems.

2. Manual failover - in this case, user or support team intervention is necessary. For example, if
an abnormality occurs at a host site, the support team has to restore the database manually at the
surviving site, then switch users to the recovery site to resume business as usual. This is also
known as a backup and restore solution, which is usually used for non-critical applications or
systems.


Redundant (adj.): Used to describe a component of a computer or network system that is used to
guard the primary system from failure by acting as a backup system. Redundant components can
include both hardware elements of a system -- such as disk drives, peripherals, servers, switches,
routers -- and software elements -- such as operating systems, applications and databases.

Redundancy is the quality of systems or elements of a system that are backed up with secondary
resources. For example, "The network has redundancy."

