Unit 3


Lecture note # 27
Content: Network Management

What Is Network Management?

Network management means different things to different people.

In some cases, it involves a solitary network consultant monitoring network activity with an outdated protocol analyzer. In other cases, network management involves a distributed database, auto-polling of network devices, and high-end workstations generating real-time graphical views of network topology changes and traffic. In general, network management is a service that employs a variety of tools, applications, and devices to assist human network managers in monitoring and maintaining networks.

Network Management Architecture

Most network management architectures use the same basic structure and set of relationships. End stations (managed devices), such as computer systems and other network devices, run software that enables them to send alerts when they recognize problems (for example, when one or more user-determined thresholds are exceeded). Upon receiving these alerts, management entities are programmed to react by executing one, several, or a group of actions, including operator notification, event logging, system shutdown, and automatic attempts at system repair.

Management entities also can poll end stations to check the values of certain variables. Polling can be automatic or user-initiated, but agents in the managed devices respond to all polls. Agents are software modules that first compile information about the managed devices in which they reside, then store this information in a management database, and finally provide it (proactively or reactively) to management entities within network management systems (NMSs) via a network management protocol. Well-known network management protocols include the Simple Network Management Protocol (SNMP) and the Common Management Information Protocol (CMIP). Management proxies are entities that provide management information on behalf of other entities. Figure 6-1 depicts a typical network management architecture.

ISO Network Management Model

The ISO has contributed a great deal to network standardization. Its network management model is the primary means for understanding the major functions of network management systems. This model consists of five conceptual areas, as discussed in the next sections.

a) Performance Management

The goal of performance management is to measure and make available various aspects of network performance so that internetwork performance can be maintained at an acceptable level. Examples of performance variables that might be provided include network throughput, user response times, and line utilization.

Performance management involves three main steps. First, performance data is gathered on variables of interest to network administrators. Second, the data is analyzed to determine normal (baseline) levels. Finally, appropriate performance thresholds are determined for each important variable, so that exceeding these thresholds indicates a network problem worthy of attention. Management entities continually monitor performance variables; when a performance threshold is exceeded, an alert is generated and sent to the network management system.

Each of the steps just described is part of the process of setting up a reactive system: when performance becomes unacceptable because a user-defined threshold is exceeded, the system reacts by sending a message. Performance management also permits proactive methods. For example, network simulation can be used to project how network growth will affect performance metrics. Such simulation can alert administrators to impending problems so that counteractive measures can be taken.

b) Configuration Management

The goal of configuration management is to monitor network and system configuration information so that the effects on network operation of various versions of hardware and software elements can be tracked and managed. Each network device has a variety of version information associated with it.
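The three-step reactive loop described above (gather data, establish a baseline, alert on threshold) can be sketched as follows. This is a minimal illustration, not part of the ISO model: the utilization samples and the mean-plus-three-standard-deviations threshold rule are assumptions made for the example.

```python
# Illustrative sketch of reactive performance management:
# gather samples, derive a baseline threshold, and alert when exceeded.
from statistics import mean, stdev

def baseline_threshold(samples, k=3):
    """Derive a simple alert threshold: mean + k standard deviations."""
    return mean(samples) + k * stdev(samples)

def check(samples, current):
    """Compare a current reading against the baseline-derived threshold."""
    threshold = baseline_threshold(samples)
    if current > threshold:
        return f"ALERT: value {current} exceeds threshold {threshold:.1f}"
    return "OK"

# Line-utilization samples (percent) collected during normal operation:
history = [40, 42, 38, 41, 39, 43, 40]
print(check(history, 95))   # far above baseline, so an alert is raised
print(check(history, 41))   # within normal range
```

A real management entity would run this check continually and send the alert to the NMS rather than printing it.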
An engineering workstation, for example, may be configured as follows:

Operating system, Version 3.2
Ethernet interface, Version 5.4
TCP/IP software, Version 2.0
NetWare software, Version 4.1
NFS software, Version 5.1
Serial communications controller, Version 1.1
X.25 software, Version 1.0
SNMP software, Version 3.1

Configuration management subsystems store this information in a database for easy access. When a problem occurs, this database can be searched for clues that may help solve the problem.

c) Accounting Management

The goal of accounting management is to measure network utilization parameters so that individual or group use of the network can be regulated appropriately. Such regulation minimizes network problems (because network resources can be apportioned based on resource capacities) and maximizes the fairness of network access across all users.

As with performance management, the first step toward appropriate accounting management is to measure the utilization of all important network resources. Analysis of the results provides insight into current usage patterns, and usage quotas can be set at this point. Some correction, of course, will be required to reach optimal access practices. From this point, ongoing measurement of resource use can yield billing information as well as information used to assess continued fair and optimal resource utilization.

d) Fault Management

The goal of fault management is to detect, log, notify users of, and (to the extent possible) automatically fix network problems in order to keep the network running effectively. Because faults can cause downtime or unacceptable network degradation, fault management is perhaps the most widely implemented of the ISO network management elements. Fault management involves first determining symptoms and isolating the problem. Then the problem is fixed and the solution is tested on all important subsystems. Finally, the detection and resolution of the problem is recorded.
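The configuration database described under (b) above can be sketched as a simple keyed store that is searched for clues when a problem occurs. The device names and the query helper here are assumptions invented for the example.

```python
# Hypothetical configuration-management database: one version record per
# device, searchable when troubleshooting a problem.
config_db = {
    "eng-ws-01": {
        "Operating system": "3.2",
        "Ethernet interface": "5.4",
        "TCP/IP software": "2.0",
        "SNMP software": "3.1",
    },
    "eng-ws-02": {
        "Operating system": "3.2",
        "Ethernet interface": "5.3",   # older driver -- a possible clue
        "TCP/IP software": "2.0",
        "SNMP software": "3.1",
    },
}

def devices_with(component, version):
    """Find all devices running a given version of a component."""
    return [host for host, cfg in config_db.items()
            if cfg.get(component) == version]

print(devices_with("Ethernet interface", "5.3"))   # ['eng-ws-02']
```

A query like this narrows a fault down to the devices whose configuration differs from the rest of the fleet.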
e) Security Management

The goal of security management is to control access to network resources according to local guidelines, so that the network cannot be sabotaged (intentionally or unintentionally) and sensitive information cannot be accessed by those without appropriate authorization. A security management subsystem, for example, can monitor users logging on to a network resource and can refuse access to those who enter inappropriate access codes.

Security management subsystems work by partitioning network resources into authorized and unauthorized areas. For some users, access to any network resource is inappropriate, usually because such users are company outsiders. For other (internal) network users, access to information originating from a particular department is inappropriate.

Access to Human Resources files, for example, is inappropriate for most users outside the Human Resources department. Security management subsystems perform several functions. They identify sensitive network resources (including systems, files, and other entities) and determine mappings between sensitive network resources and user sets. They also monitor access points to sensitive network resources and log inappropriate access to them.

Simple Network Management Protocol

SNMP is a framework that provides facilities for managing and monitoring network resources on the Internet. The components of SNMP are:

SNMP agents
SNMP managers
Management Information Bases (MIBs)
the SNMP protocol itself


An SNMP agent is software that runs on a piece of network equipment (a host, router, printer, or other device) and that maintains information about its configuration and current state in a database. The information in this database is described by Management Information Bases (MIBs). An SNMP manager is an application program that contacts an SNMP agent to query or modify the database at the agent. The SNMP protocol is the application-layer protocol used by SNMP agents and managers to send and receive data.

Interactions in SNMP

[Figure: Interactions in SNMP. A management station runs the SNMP manager process; a managed system runs the SNMP agent process, which accesses the managed objects described by its MIB. Queries flow from manager to agent and replies flow back, while traps are sent unsolicited from agent to manager. On both sides the SNMP messages are carried over UDP and IP across the IP network.]

MIBs

A MIB specifies the managed objects. It is a text file that describes managed objects using the syntax of ASN.1 (Abstract Syntax Notation One), a formal language for describing data and its properties. In Linux, MIB files are typically found in the directory /usr/share/snmp/mibs; there are multiple MIB files. MIB-II (defined in RFC 1213) defines the managed objects of TCP/IP networks.

Managed Objects

Each managed object is assigned an object identifier (OID), which is specified in a MIB file. An OID can be represented as a sequence of integers separated by decimal points or by a text string. Example:

1.3.6.1.2.1.4.6
iso.org.dod.internet.mgmt.mib-2.ip.ipForwDatagrams

When an SNMP manager requests an object, it sends the OID to the SNMP agent.

Organization of managed objects

Managed objects are organized in a tree-like hierarchy, and the OIDs reflect the structure of the hierarchy. Each OID represents a node in the tree. The OID 1.3.6.1.2.1 (iso.org.dod.internet.mgmt.mib-2) is at the top of the hierarchy for all managed objects of MIB-II.
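The correspondence between the dotted and textual forms of an OID can be sketched with a hand-built name table covering the example path from the text. Real tools derive this mapping from MIB files; the small lookup table here is an assumption made for illustration.

```python
# Minimal sketch: translate a dotted OID into its textual form using a
# hand-built table of node names (normally derived from MIB files).
OID_NAMES = {
    (1,): "iso", (1, 3): "org", (1, 3, 6): "dod",
    (1, 3, 6, 1): "internet", (1, 3, 6, 1, 2): "mgmt",
    (1, 3, 6, 1, 2, 1): "mib-2", (1, 3, 6, 1, 2, 1, 4): "ip",
    (1, 3, 6, 1, 2, 1, 4, 6): "ipForwDatagrams",
}

def oid_to_name(oid):
    """Translate e.g. '1.3.6.1.2.1.4.6' into its dotted-name form."""
    parts = tuple(int(x) for x in oid.split("."))
    # Each prefix of the integer sequence names one node on the path.
    return ".".join(OID_NAMES[parts[:i + 1]] for i in range(len(parts)))

print(oid_to_name("1.3.6.1.2.1.4.6"))
# iso.org.dod.internet.mgmt.mib-2.ip.ipForwDatagrams
```

Each prefix of the integer sequence identifies one ancestor node in the tree, which is why the translation walks the prefixes in order.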

Manufacturers of networking equipment can add product specific objects to the hierarchy

The tree below shows the part of the OID hierarchy relevant to MIB-II (reconstructed from the figure):

root
  iso(1)
    org(3)
      dod(6)
        internet(1)
          directory(1)
          mgmt(2)
            mib-2(1)
              system(1)
              interfaces(2)
              at(3)
              ip(4)
                ipForwDatagrams(6)
              icmp(5)
              tcp(6)
              udp(7)
              egp(8)
              transmission(10)
              snmp(11)
          experimental(3)
          private(4)

Definition of managed objects in a MIB

Specification of ipForwDatagrams in MIB-II:

ipForwDatagrams OBJECT-TYPE
    SYNTAX  Counter
    ACCESS  read-only
    STATUS  mandatory
    DESCRIPTION
        "The number of input datagrams for which this entity was not
        their final IP destination, as a result of which an attempt was
        made to find a route to forward them to that final destination.
        In entities which do not act as IP Gateways, this counter will
        include only those packets which were Source-Routed via this
        entity, and the Source-Route option processing was successful."
    ::= { ip 6 }

SNMP Protocol

An SNMP manager and an SNMP agent communicate using the SNMP protocol. Generally, the manager sends queries and the agent responds. The exception is traps, which are initiated by the agent.
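To make the structure of the OBJECT-TYPE macro concrete, here is a hedged sketch that pulls the single-line SYNTAX/ACCESS/STATUS clauses out of a MIB fragment with regular expressions. A real MIB compiler parses full ASN.1; this toy handles only the simple shape shown above, and the fragment is abbreviated.

```python
# Toy extraction of OBJECT-TYPE clauses from a MIB fragment (abbreviated,
# DESCRIPTION omitted). Not a real ASN.1 parser.
import re

MIB_TEXT = """
ipForwDatagrams OBJECT-TYPE
    SYNTAX  Counter
    ACCESS  read-only
    STATUS  mandatory
    ::= { ip 6 }
"""

def parse_object_type(text):
    """Return the object name and its single-word clause values."""
    name = re.search(r"(\w+)\s+OBJECT-TYPE", text).group(1)
    fields = dict(re.findall(r"(SYNTAX|ACCESS|STATUS)\s+(\S+)", text))
    return name, fields

name, fields = parse_object_type(MIB_TEXT)
print(name, fields["SYNTAX"], fields["ACCESS"])
```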

[Figure: SNMP protocol exchanges. The manager sends get-request, get-next-request, and set-request messages to UDP port 161 on the agent, and the agent answers each with a get-response. Traps travel in the opposite direction, from the agent to UDP port 162 on the manager.]

Get-request: requests the values of one or more objects.
Get-next-request: requests the value of the next object, according to a lexicographic ordering of OIDs.
Set-request: a request to modify the value of one or more objects.
Get-response: sent by an SNMP agent in response to a get-request, get-next-request, or set-request message.
Trap: a notification sent by an SNMP agent to an SNMP manager, triggered by certain events at the agent.
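The difference between get-request and get-next-request can be sketched against a small in-memory MIB. The stored objects and values here are invented for the example; the point is the lexicographic OID ordering that get-next-request relies on.

```python
# Sketch of an agent answering get and get-next against a tiny MIB.
# OIDs are stored as integer tuples, which Python sorts lexicographically,
# matching the ordering get-next-request is defined over.
MIB = {
    (1, 3, 6, 1, 2, 1, 1, 1): "Linux router",   # a sysDescr-like object
    (1, 3, 6, 1, 2, 1, 4, 6): 1042,             # an ipForwDatagrams-like counter
    (1, 3, 6, 1, 2, 1, 6, 9): 3,                # a tcpCurrEstab-like gauge
}

def get(oid):
    """get-request: return the value of exactly this OID (or None)."""
    return MIB.get(oid)

def get_next(oid):
    """get-next-request: first OID strictly after `oid`, with its value."""
    for candidate in sorted(MIB):
        if candidate > oid:
            return candidate, MIB[candidate]
    return None                      # end of the MIB view

print(get((1, 3, 6, 1, 2, 1, 4, 6)))        # 1042
print(get_next((1, 3, 6, 1, 2, 1, 4, 6)))   # the next object and its value
```

Repeatedly issuing get-next from the root is exactly how a manager "walks" an agent's whole MIB without knowing the OIDs in advance.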

Traps

Traps are messages sent asynchronously by an agent to a manager, triggered by an event. Defined traps include:

coldStart: an unexpected restart (i.e., a system crash).
warmStart: a soft reboot.
linkDown: an event indicating that an interface went down.
linkUp: the opposite of linkDown; an interface came up.
authenticationFailure: the agent received an improperly authenticated SNMP message.

SNMP Versions

Three versions are in use today:

SNMPv1 (1990)
SNMPv2c (1996): adds the GetBulk operation and some new types, and adds RMON (remote monitoring) capability.
SNMPv3 (2002): started from SNMPv1 (not SNMPv2c) and addresses security.

All versions are still used today.

Many SNMP agents and managers support all three versions of the protocol.

Format of SNMP Packets

An SNMPv1 get/set message has the following layout:

Version | Community | SNMP PDU

The PDU itself contains:

PDU Type | Request ID | Error Status | Error Index | Object 1, Value 1 | Object 2, Value 2 | ...

SNMP Security

SNMPv1 uses plain-text community strings for authentication, without encryption.

SNMPv2 was supposed to fix the security problems, but the effort derailed (the "c" in SNMPv2c stands for "community").

SNMPv3 has numerous security features. It ensures that a packet has not been tampered with (integrity), that a message is from a valid source (authentication), and that a message cannot be read by unauthorized parties (privacy).

The security model of SNMPv3 has two components:

1. Instead of granting access rights to a community, SNMPv3 grants access to users.
2. Access can be restricted to sections of the MIB through the View-based Access Control Model (VACM). Access rights can be limited by specifying a range of valid IP addresses for a user or community, or by specifying the part of the MIB tree that can be accessed.

SNMPv3 has three security levels:

noAuthNoPriv: authentication by matching a user name.
authNoPriv: authentication with MD5 or SHA message digests.
authPriv: authentication with MD5 or SHA message digests, plus encryption with DES.
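The authNoPriv idea of authenticating with message digests can be illustrated with the standard library: the sender attaches a keyed digest (HMAC) computed over the message with a shared secret, and the receiver recomputes and compares it. This shows only the principle; SNMPv3's actual key localization and message format (defined in RFC 3414) are more involved, and the secret below is an assumption for the example.

```python
# Illustrative authNoPriv-style authentication: HMAC-SHA1 over the message
# with a shared secret. Principle only, not the SNMPv3 wire format.
import hmac
import hashlib

SECRET = b"shared-authentication-key"   # assumed to be shared out of band

def sign(message: bytes) -> bytes:
    """Compute the authentication digest for a message."""
    return hmac.new(SECRET, message, hashlib.sha1).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)

msg = b"get-request 1.3.6.1.2.1.4.6"
tag = sign(msg)
print(verify(msg, tag))                   # a genuine message verifies
print(verify(b"tampered " + msg, tag))    # a modified message does not
```

Because the digest depends on the secret, an attacker who alters the message (integrity) or forges the sender (authentication) cannot produce a matching tag; authPriv would additionally encrypt the payload.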

Lecture note # 28
Content: Host and Network Security: Security Planning, Categories of Security: C1, C2, C3, C4

Principles of Security

Security management cannot be separated from network and system administration, because security requires a fully systemic approach. Security is about protecting things of value to an organization, in relation to the possible risks. This includes material and intellectual assets; it includes the very assumptions that are the foundation of an organization or human-computer system. Anything that can cause a failure of those assumptions can result in loss, and must therefore be considered a threat.

A system can be compromised by:

Physical threats: weather, natural disaster, bombs, power failures, etc.
Human threats: cracking, stealing, trickery, bribery, spying, sabotage, accidents.
Software threats: viruses, Trojan horses, logic bombs, denial of service.

Protecting against these issues requires both proactive (preventative) measures and damage control after breaches. Our task is roughly as follows:

Identify what we are trying to protect.
Evaluate the main sources of risk and where trust is placed.
Work out possible or cost-effective counter-measures to attacks.

Principle 1 (Security is a property of systems). Security is a property of entire systems, not an appendage that can be added in any one place, or be applied at any one time. It relies on the constant appraisal and re-appraisal (the integrity) of our assumptions about a system. There are usually many routes through a system that permit theft or destruction. If we try to add security in one place, an attacker or random chance will simply take a different route.

Principle 2 (Access and privilege). A fundamental prerequisite for security is the ability to restrict access to data. This leads directly to a notion of privilege for certain users. The word privilege does not apply to loss by accident or natural disaster, but the word access does.
If accidental actions or natural disasters do not have access to data, then they cannot cause it any harm. Any attempt to run a secure system where restriction of access is not possible is fundamentally flawed.

There are four basic elements in security:

Privacy or confidentiality: restriction of access.
Authentication: verification of presumed identity.
Integrity: protection against corruption or loss (redundancy).
Trust: underlies every assumption.

Some authors include the following as independent points:

Availability: preventing disruption of a service.
Non-repudiation: preventing deniability of actions.

Principle 3 (Security is about trust). Every security problem boils down to a question of whom or what we trust. Once we have understood this, the topic of security is reduced to a litany of examples of how trust may be exploited and how it may be improved using certain technological aids. Failure to understand this point can lead to embarrassing mistakes being made.

Usually, we introduce some kind of technology to move trust from a risky place to a safer place. For example, if we do not trust our neighbors not to steal our possessions, we might put a lock on our door. We no longer have to trust our neighbors, but we have to trust that the lock will do its job in the way we expect. This is easier to trust, because a simple mechanical device is more predictable than complicated human beings, but it can still fail. If we don't entirely trust the lock, we could install an alarm system which rings the police if someone breaks in. Now we are trusting the lock a little, the alarm system, and the police. After all, who says that the police will not be the ones to steal your possessions? In some parts of the world, this idea is not so absurd.

Trust is based on assumption. It can be bolstered with evidence but, just as science can never prove something is true, we can never trust something with absolute certainty. We only know when trust is broken. This is the real insight of security, not the technologies that help us to build trust.

Physical security

For a computer to be secure it must be physically secure. If we can get our hands on a host, then we are never more than a screwdriver away from all of its assets: disks can be removed; sophisticated users can tap network lines and listen to traffic; the radiation from monitor screens can be captured and recorded, showing an exact image of what a user is looking at on his or her screen.
Assuming that hosts are physically secure, we still have to deal with the issues of software security, which is a much more difficult topic. Software security is about access control and software reliability. No single tool can make computer systems secure. Major blunders have been made out of the belief that a single product (e.g., a firewall) would solve the security problem. The bottom line is that there is no such thing as a secure operating system. What is required is a persistent mixture of vigilance and adaptability.

Trust relationships


There are many implicit trust relationships in computer systems, and it is crucial to understand them. If we do not understand where we are placing our trust, that trust can be exploited by attackers who have thought more carefully than we have. For example, any host that shares users' home directories (such as an NFS server or DFS server) trusts the identities and personal integrity of the users on the hosts which mount those directories. Implicit trust relationships lie at the heart of so many software systems which grant access to services or resources that it would be impossible to list them all here. Trust relationships are important to grasp because they can lead to security holes.

Security policy and definition of security

Security only has meaning when we have defined a frame of reference. It is intrinsically connected to our own appreciation of risks, and must be based on a thorough risk analysis.

Principle (Risk). There is always a non-zero level of risk associated with any system.

Definition (Secure system). A secure system is one in which every possible threat has been analyzed and where all the risks have been assessed and accepted as policy.

Security must be balanced against convenience. How secure must we be:

From outside the organization?
From inside the organization (different host)?
From inside the organization (same host)?
Against the interruption of services?
From user error?

Finally, how much inconvenience are the users of the system willing to endure in order to uphold this level of security? This point should not be underestimated: if users consider security to be a nuisance, they try to circumvent it.

Suggestion (Work defensively). Expect the worst, do your best, preferably in advance of a problem.

Suggestion (Network security). Extremely sensitive data should not be placed on a computer which is attached in any way to a public network.

What resources are we trying to protect?

Secrets: Some sites have secrets they wish to protect. They might be government or trade secrets or the solutions to a college exam.

Personnel data: In your country there are probably rules about what you must do to safeguard sensitive personal information. This goes for any information about employees, patients, customers or anyone else we deal with. Information about people is private.


CPU usage/system downtime: We might not have any data that we are afraid will fall into the wrong hands. It might simply be that the system is so important to us that we cannot afford the loss of time incurred by having someone screw it up. If the system is down, everything stops.

Abuse of the system: It might simply be that we do not want anyone using our system to do something for which they are not authorized, like breaking into other systems.

Who are we trying to protect them from?

Competitors, who might gain an advantage by learning your secrets.
Malicious intruders. Note that people with malicious intent might come from inside or outside our organization. It is wrong to think that the enemy is simply everyone outside of our domain. Too many organizations think inside/outside instead of dealing with proper access control. If one always ensures that systems and data are protected on a need-to-know basis, then there is no reason to discriminate between the inside and outside of an organization.
Old employees with a grudge against the organization.

Next: what will happen if the system is compromised?

Loss of money.
Threat of legal action against you.
Missed deadlines.
Loss of reputation.

How much work will we need to put into protecting the system? Who are the people trying to break in?

Sophisticated spies.
Tourists, just poking around.
Braggers, trying to impress.

Finally: what risk is acceptable? If we have a secret which is worth 4 lira, would we be interested in spending 5 lira to secure it? Where does one draw the line? How much is security worth?

The social term in the security equation should never be forgotten. One can spend a hundred thousand dollars on a top-of-the-range firewall to protect data from network intrusion, but someone could walk into the building and look over an unsuspecting shoulder to obtain it instead, or use a receiver to collect the stray radiation from your monitors. Are employees leaving sensitive printouts lying around? Are we willing to place our entire building in a Faraday cage to avoid remote detection of the radiation emitted by monitors? In the final instance, someone could just point a gun at someone's head and ask nicely for their secrets.

An example of security policies can be found in RFC 1244 and British Standard/ISO 17799.


RFC 2196 and BS/ISO 17799

Security standards are attempts at capturing the essence of security management. Rather than focusing on the technologies that can be used to implement security, as most texts do, these standards attempt to capture the more important points of how these technologies and methods can be used in concert to address the actual risks. There are two main standards.

1) RFC 2196 is a guide to producing a site security policy that is addressed at system administrators. It emphasizes that a policy must be closely tied to administrative practice. The document correctly points out that:

It must be implementable through system administration procedures, publishing of acceptable use guidelines, or other appropriate methods.
It must be enforceable with security tools, where appropriate, and with sanctions, where actual prevention is not technically feasible.
It must clearly define the areas of responsibility for the users, administrators and management.

The standard goes on to describe the elements of such a policy, including purchasing guidelines, privacy policy, access policy, accountability policy, authentication policy, a statement of availability requirements, a maintenance policy and a reporting policy in case of security breach. It also iterates the importance of documentation and supporting information for users and staff.

2) ISO 17799 (Information Technology Code of Practice for Security Management) is a recently recognized security standard that is based upon the British Standard BS 7799, published in 1999. ISO 17799 was published in 2000 and accepted as a British Standard in 2001. It is an excellent starting place for formulating an organization's security policy. It is less technical than RFC 2196, but goes into greater detail from a logistic viewpoint. It has been constructed with great care and expertise. Compliance with ISO 17799 is far from trivial, even for the most security-conscious of organizations. The document describes ten points:

1. Security policy: The standard iterates the importance of a security policy that sets clear goals, and demonstrates a commitment by an organization to the seriousness of its security. The policy should ensure that legal and contractual commitments are met, and that the economic ramifications of a security breach are understood. The policy itself must be maintained and updated by a responsible party.

2. Organizational security: A security team should be assembled that sets policy and provides multi-disciplinary advice. The team should allocate responsibilities within the organization for maintaining security of assets at all levels. Levels of authorization to assets must be determined. Contact with law-enforcement organizations and regulatory bodies should be secured, and policies and procedures should be peer-reviewed for quality assurance. Any outsourcing, i.e. third-party contracts, must address the risks associated with opening the organization's security borders.

3. Asset classification and control: Organizations should have an inventory of their assets, which classifies each asset according to its appropriate level of security. Procedures for labelling and handling different levels of information, including electronic and paper transmission (post, fax, e-mail etc.) and speech (telephone or conversation over dinner), must be determined.

4. Personnel security: In order to reduce the risks of human error, or malicious attacks by theft, fraud or vandalism, staff should be screened and given security responsibilities. Confidentiality (non-disclosure (NDA)) agreements, as well as terms and conditions of employment, may be used as a binding agreement of responsibility. Most importantly, training of staff (users) in the possible threats and their combative procedures is needed to ensure compliance with policy. A response contingency plan should be drawn up and familiarized to the staff, so that security breaches and weaknesses are reported immediately.

5. Physical and environmental security: All systems must have physical security; this usually involves a security perimeter with physical constraints against theft and intrusion detection systems, as well as a controlled, safe environment. Secure areas can provide extra levels of security for special tasks. Equipment should be protected from physical threats (including fire, coffee, food etc.) and have uninterruptible power supplies where appropriate. Cables must be secured from damage and wire-tapping. Desks, screens and refuse/trash should be cleared when not in use, to avoid accidental disclosure of confidential information.

6. Communications and operations management: The processing and handling of information must be specified with appropriate procedures for the level of information.
Change management should include appropriate authorizations and significant documentation of changes, to allow analysis in the case of problems. Procedures must be documented for responding to all kinds of threat. Analysis (causality) and audit trails should be planned for breaches. Housekeeping, backups, safe disposal of information and materials, and other regular maintenance should be in place and be integrated into the scheme. System administration features heavily here: security and administration go hand in hand; they are not separate issues. This part of the standard covers many miscellaneous issues.

7. Access control: User management, password and key management, access rights, and securing unattended equipment. Enforced pathways to assets that prevent back-doors. Segregation of independent assets and authentication for access. Use of securable operating systems and restriction of privilege. Clock synchronization. Mobile computing issues.

8. Systems development and maintenance: Security should be built into systems from the start. Input/output validation. Policy on cryptographic controls, including signatures and certificates. Interestingly, the standard recommends against open source software. Covert channels (secret channels that bypass security controls) must be avoided.

9. Business continuity management: Each organization should estimate the impact of catastrophes and security breaches on its business. Will the organization be able to continue after such a breach? What will be the impact? The standard suggests the testing of this continuity plan.


10. Compliance: Laws and regulations must be obeyed, with regard to each country and contractual obligation. Regulation of personal information and cryptographic methods must be taken into account in different parts of the world. Compliance with the organization's own policy must be secured by auditing.

The ISO standard is a good introduction to human-computer security that can be recommended for any organization.

The OSI Security Architecture


To assess effectively the security needs of an organization and to evaluate and choose various security products and policies, the manager responsible for security needs some systematic way of defining the requirements for security and characterizing the approaches to satisfying those requirements. This is difficult enough in a centralized data processing environment; with the use of local and wide area networks, the problems are compounded. ITU-T[2] Recommendation X.800, Security Architecture for OSI, defines such a systematic approach.[3] The OSI security architecture is useful to managers as a way of organizing the task of providing security. The OSI security architecture focuses on security attacks, mechanisms, and services. These can be defined briefly as follows:

Security attack: Any action that compromises the security of information owned by an organization.

Security mechanism: A process (or a device incorporating such a process) that is designed to detect, prevent, or recover from a security attack.

Security service: A processing or communication service that enhances the security of the data processing systems and the information transfers of an organization. The services are intended to counter security attacks, and they make use of one or more security mechanisms to provide the service.

Security Attacks
a) Passive Attacks

Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of the opponent is to obtain information that is being transmitted. Two types of passive attacks are release of message contents and traffic analysis. The release of message contents is easily understood (Figure 1.3a). A telephone conversation, an electronic mail message, and a transferred file may contain sensitive or confidential information. We would like to prevent an opponent from learning the contents of these transmissions.


Figure 1.3. Passive Attacks

A second type of passive attack, traffic analysis, is subtler (Figure 1.3b). Suppose that we had a way of masking the contents of messages or other information traffic so that opponents, even if they captured a message, could not extract the information from it. The common technique for masking contents is encryption. Even with encryption protection in place, an opponent might still be able to observe the pattern of these messages: the opponent could determine the location and identity of communicating hosts and could observe the frequency and length of messages being exchanged. This information might be useful in guessing the nature of the communication that was taking place.

Passive attacks are very difficult to detect because they do not involve any alteration of the data. Typically, the message traffic is sent and received in an apparently normal fashion, and neither the sender nor the receiver is aware that a third party has read the messages or observed the traffic pattern. However, it is feasible to prevent the success of these attacks, usually by means of encryption. Thus, the emphasis in dealing with passive attacks is on prevention rather than detection.

b) Active Attacks

Active attacks involve some modification of the data stream or the creation of a false stream, and can be subdivided into four categories: masquerade, replay, modification of messages, and denial of service. A masquerade takes place when one entity pretends to be a different entity (Figure 1.4a). A masquerade attack usually includes one of the other forms of active attack. For example, authentication sequences can be captured and replayed after a valid authentication sequence has taken place, thus enabling an authorized entity with few privileges to obtain extra privileges by impersonating an entity that has those privileges.


A replay attack is one in which a service already authorized and completed is forged by another "duplicate request" in an attempt to repeat authorized commands. It involves the passive capture of a data unit and its subsequent retransmission to produce an unauthorized effect (Figure 1.4b).

Modification of messages simply means that some portion of a legitimate message is altered, or that messages are delayed or reordered, to produce an unauthorized effect (Figure 1.4c). For example, a message meaning "Allow John Smith to read confidential file accounts" is modified to mean "Allow Fred Brown to read confidential file accounts."

Denial of service prevents or inhibits the normal use or management of communications facilities (Figure 1.4d). This attack may have a specific target; for example, an entity may suppress all messages directed to a particular destination (e.g., the security audit service). Another form of service denial is the disruption of an entire network, either by disabling the network or by overloading it with messages so as to degrade performance.

Active attacks present the opposite characteristics of passive attacks. Whereas passive attacks are difficult to detect, measures are available to prevent their success. On the other hand, it is quite difficult to prevent active attacks absolutely, because of the wide variety of potential physical, software, and network vulnerabilities. Instead, the goal is to detect active attacks and to recover from any disruption or delays caused by them. If the detection has a deterrent effect, it may also contribute to prevention.
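A receiver can often detect the replay attacks described above by remembering an identifier for every request it has already honored. The sketch below is illustrative only: the message format with a "nonce" field and the in-memory cache are assumptions for this example, not part of any protocol discussed here.

```python
# Toy replay detector: the receiver records the nonce of every request it
# has already processed and rejects any duplicate. The message format
# (a dict with a "nonce" field) is a hypothetical example.

def make_receiver():
    seen = set()  # nonces of requests already honored

    def accept(message):
        nonce = message["nonce"]
        if nonce in seen:
            return False  # duplicate request: treat as a replay
        seen.add(nonce)
        return True       # first time this nonce is seen: process it

    return accept

accept = make_receiver()
first = accept({"nonce": "n-001", "cmd": "transfer $100"})
replayed = accept({"nonce": "n-001", "cmd": "transfer $100"})  # same capture resent
```

In a real protocol the nonce would be combined with timestamps or sequence numbers so the cache does not grow without bound.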

Security Services
X.800 defines a security service as a service provided by a protocol layer of communicating open systems, which ensures adequate security of the systems or of data transfers. Perhaps a clearer definition is found in RFC 2828, which provides the following definition: a processing or communication service that is provided by a system to give a specific kind of protection to system resources; security services implement security policies and are implemented by security mechanisms.

1) AUTHENTICATION: The assurance that the communicating entity is the one that it claims to be.
Peer Entity Authentication: Used in association with a logical connection to provide confidence in the identity of the entities connected.
Data Origin Authentication: In a connectionless transfer, provides assurance that the source of received data is as claimed.

2) ACCESS CONTROL: The prevention of unauthorized use of a resource (i.e., this service controls who can have access to a resource, under what conditions access can occur, and what those accessing the resource are allowed to do).

3) DATA CONFIDENTIALITY: The protection of data from unauthorized disclosure.
Connection Confidentiality: The protection of all user data on a connection.
Connectionless Confidentiality: The protection of all user data in a single data block.
Selective-Field Confidentiality: The confidentiality of selected fields within the user data on a connection or in a single data block.
Traffic Flow Confidentiality: The protection of the information that might be derived from observation of traffic flows.

4) DATA INTEGRITY: The assurance that data received are exactly as sent by an authorized entity (i.e., contain no modification, insertion, deletion, or replay).
Connection Integrity with Recovery: Provides for the integrity of all user data on a connection and detects any modification, insertion, deletion, or replay of any data within an entire data sequence, with recovery attempted.
Connection Integrity without Recovery: As above, but provides only detection without recovery.
Selective-Field Connection Integrity: Provides for the integrity of selected fields within the user data of a data block transferred over a connection and takes the form of determination of whether the selected fields have been modified, inserted, deleted, or replayed.
Connectionless Integrity: Provides for the integrity of a single connectionless data block and may take the form of detection of data modification. Additionally, a limited form of replay detection may be provided.
Selective-Field Connectionless Integrity: Provides for the integrity of selected fields within a single connectionless data block; takes the form of determination of whether the selected fields have been modified.

5) NONREPUDIATION: Provides protection against denial by one of the entities involved in a communication of having participated in all or part of the communication.
Nonrepudiation, Origin: Proof that the message was sent by the specified party.
Nonrepudiation, Destination: Proof that the message was received by the specified party.

Security Mechanisms

1) Specific security mechanisms: May be incorporated into the appropriate protocol layer in order to provide some of the OSI security services.

A) Encipherment:


The use of mathematical algorithms to transform data into a form that is not readily intelligible. The transformation and subsequent recovery of the data depend on an algorithm and zero or more encryption keys.

B) Digital Signature: Data appended to, or a cryptographic transformation of, a data unit that allows a recipient of the data unit to prove the source and integrity of the data unit and protect against forgery (e.g., by the recipient).

C) Access Control: A variety of mechanisms that enforce access rights to resources.

D) Data Integrity: A variety of mechanisms used to assure the integrity of a data unit or stream of data units.

E) Authentication Exchange: A mechanism intended to ensure the identity of an entity by means of information exchange.

F) Traffic Padding: The insertion of bits into gaps in a data stream to frustrate traffic analysis attempts.

G) Routing Control: Enables selection of particular physically secure routes for certain data and allows routing changes, especially when a breach of security is suspected.

H) Notarization: The use of a trusted third party to assure certain properties of a data exchange.

2) Pervasive security mechanisms: Mechanisms that are not specific to any particular OSI security service or protocol layer.

A) Trusted Functionality: That which is perceived to be correct with respect to some criteria (e.g., as established by a security policy).
B) Security Label: The marking bound to a resource (which may be a data unit) that names or designates the security attributes of that resource.
C) Event Detection: Detection of security-relevant events.
D) Security Audit Trail: Data collected and potentially used to facilitate a security audit, which is an independent review and examination of system records and activities.
E) Security Recovery: Deals with requests from mechanisms, such as event handling and management functions, and takes recovery actions.
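As a concrete illustration of the data-integrity and authentication-exchange mechanisms listed above, the following sketch uses Python's standard hmac module to attach a keyed checksum to a message. The shared key and messages are invented for the example; note that an HMAC depends on a shared secret, so, unlike a true digital signature, it does not protect against forgery by the recipient.

```python
import hashlib
import hmac

# Assumed to be known only to the two principals (invented for the example).
SHARED_KEY = b"example-shared-secret"

def protect(message):
    """Attach a keyed integrity check (HMAC-SHA256) to a message."""
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return message, tag

def verify(message, tag):
    """Recompute the HMAC and compare it in constant time."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = protect(b"Allow John Smith to read confidential file accounts")
ok = verify(msg, tag)
# An opponent who modifies the message cannot produce a matching tag
# without the shared key:
tampered = verify(b"Allow Fred Brown to read confidential file accounts", tag)
```

The tampered message from the modification-of-messages example fails verification, because the recomputed tag no longer matches.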
Contents of a Security Plan (Planning)

A security plan identifies and organizes the security activities for a computing system. The plan is both a description of the current situation and a plan for improvement. Every security plan must address seven issues.


1. policy, indicating the goals of a computer security effort and the willingness of the people involved to work to achieve those goals
2. current state, describing the status of security at the time of the plan
3. requirements, recommending ways to meet the security goals
4. recommended controls, mapping controls to the vulnerabilities identified in the policy and requirements
5. accountability, describing who is responsible for each security activity
6. timetable, identifying when different security functions are to be done
7. continuing attention, specifying a structure for periodically updating the security plan

1. Policy
A security plan must state the organization's policy on security. A security policy is a high-level statement of purpose and intent. The policy statement should specify the following:

The organization's goals on security. For example, should the system protect data from leakage to outsiders, protect against loss of data due to physical disaster, protect the data's integrity, or protect against loss of business when computing resources fail? What is the higher priority: serving customers or securing data?

Where the responsibility for security lies. For example, should the responsibility rest with a small computer security group, with each employee, or with relevant managers?

The organization's commitment to security. For example, who provides security support for staff, and where does security fit into the organization's structure?

2. Current Security Status


To be able to plan for security, an organization must understand the vulnerabilities to which it may be exposed. The organization can determine the vulnerabilities by performing a risk analysis: a careful investigation of the system, its environment, and the things that might go wrong. The risk analysis forms the basis for describing the current status of security. The status can be expressed as a listing of organizational assets, the security threats to the assets, and the controls in place to protect the assets.

3. Requirements
The heart of the security plan is its set of security requirements: functional or performance demands placed on a system to ensure a desired level of security. The requirements are usually derived from organizational needs. Sometimes these needs include the need to conform to specific security requirements imposed from outside, such as by a government agency or a commercial standard. Figure 8-1 illustrates how the different aspects of system analysis support the security planning process.

As with the general software development process, the security planning process must allow customers or users to specify desired functions, independent of the implementation. The requirements should address all aspects of security: confidentiality, integrity, and availability. They should also be reviewed to make sure that they are of appropriate quality. In particular, we should make sure that the requirements have these characteristics:

Correctness: Are the requirements understandable? Are they stated without error?
Consistency: Are there any conflicting or ambiguous requirements?
Completeness: Are all possible situations addressed by the requirements?
Realism: Is it possible to implement what the requirements mandate?
Need: Are the requirements unnecessarily restrictive?
Verifiability: Can tests be written to demonstrate conclusively and objectively that the requirements have been met? Can the system or its functionality be measured in some way that will assess the degree to which the requirements are met?
Traceability: Can each requirement be traced to the functions and data related to it so that changes in a requirement can lead to easy reevaluation?

4. Recommended Controls
The security requirements lay out the system's needs in terms of what should be protected. The security plan must also recommend what controls should be incorporated into the system to meet those requirements. Throughout this book you have seen many examples of controls, so we need not review them here. As we see later in this chapter, we can use risk analysis to create a map from vulnerabilities to controls. The mapping tells us how the system will meet the security requirements. That is, the recommended controls address implementation issues: how the system will be designed and developed to meet stated security requirements.


5. Responsibility for Implementation


A section of the security plan should identify which people are responsible for implementing the security requirements. This documentation assists those who must coordinate their individual responsibilities with those of other developers. At the same time, the plan makes explicit who is accountable should some requirement not be met or some vulnerability not be addressed. That is, the plan notes who is responsible for implementing controls when a new vulnerability is discovered or a new kind of asset is introduced. People building, using, and maintaining the system play many roles. Each role can take some responsibility for one or more aspects of security. Consider, for example, the groups listed here.

Personal computer users may be responsible for the security of their own machines. Alternatively, the security plan may designate one person or group to be coordinator of personal computer security.

Project leaders may be responsible for the security of data and computations.

Managers may be responsible for seeing that the people they supervise implement security measures.

Database administrators may be responsible for the access to and integrity of data in their databases.

Information officers may be responsible for overseeing the creation and use of data; these officers may also be responsible for retention and proper disposal of data.

Personnel staff members may be responsible for security involving employees, for example, screening potential employees for trustworthiness and arranging security training programs.

6. Timetable
A comprehensive security plan cannot be executed instantly. The security plan includes a timetable that shows how and when the elements of the plan will be performed. These dates also give milestones so that management can track the progress of implementation. If the implementation is to be a phased development (that is, the system will be implemented partially at first, and then changed functionality or performance will be added in later releases), the plan should also describe how the security requirements will be implemented over time. Even when overall development is not phased, it may be desirable to implement the security aspects of the system over time. For example, if the controls are expensive or complicated, they may be acquired and implemented gradually. Similarly, procedural controls may require staff training to ensure that everyone understands and accepts the reason for the control.


The plan should specify the order in which the controls are to be implemented so that the most serious exposures are covered as soon as possible. A timetable also gives milestones by which to judge the progress of the security program.

7. Continuing Attention
Good intentions are not enough when it comes to security. We must not only take care in defining requirements and controls, but we must also find ways for evaluating a system's security to be sure that the system is as secure as we intend it to be. Thus, the security plan must call for reviewing the security situation periodically. As users, data, and equipment change, new exposures may develop. In addition, the current means of control may become obsolete or ineffective (such as when faster processor times enable attackers to break an encryption algorithm). The inventory of objects and the list of controls should periodically be scrutinized and updated, and risk analysis performed anew. The security plan should set times for these periodic reviews, based either on calendar time (such as, review the plan every nine months) or on the nature of system changes (such as, review the plan after every major system release).

Security Planning Team Members

Who performs the security analysis, recommends a security program, and writes the security plan? As with any such comprehensive task, these activities are likely to be performed by a committee that represents all the interests involved. The size of the committee depends on the size and complexity of the computing organization and the degree of its commitment to security. Organizational behavior studies suggest that the optimum size for a working committee is between five and nine members. Sometimes a larger committee may serve as an oversight body to review and comment on the products of a smaller working committee. Alternatively, a large committee might designate subcommittees to address various sections of the plan. The membership of a computer security planning team must somehow relate to the different aspects of computer security. Security in operating systems and networks requires the cooperation of the systems administration staff.
Program security measures can be understood and recommended by applications programmers. Physical security controls are implemented by those responsible for general physical security, both against human attacks and natural disasters. Finally, because controls affect system users, the plan should incorporate users' views, especially with regard to usability and the general desirability of controls. Thus, no matter how it is organized, a security planning team should represent each of the following groups.

computer hardware group
system administrators
systems programmers
applications programmers


data entry personnel
physical security personnel
representative users

Assuring Commitment to a Security Plan

After the plan is written, it must be accepted and its recommendations carried out. Acceptance by the organization is key; a plan that has no organizational commitment is simply a plan that collects dust on the shelf. Commitment to the plan means that security functions will be implemented and security activities carried out. Three groups of people must contribute to making the plan a success.

The planning team must be sensitive to the needs of each group affected by the plan.

Those affected by the security recommendations must understand what the plan means for the way they will use the system and perform their business activities. In particular, they must see how what they do can affect other users and other systems.

Management must be committed to using and enforcing the security aspects of the system.

Writing a Security Policy

Security is largely a "people problem." People, not computers, are responsible for implementing security procedures, and people are responsible when security is breached. Therefore, network security is ineffective unless people know their responsibilities. It is important to write a security policy that clearly states what is expected and from whom. A network security policy should define:

The network user's security responsibilities. The policy may require users to change their passwords at certain intervals, to use passwords that meet certain guidelines, or to perform certain checks to see if their accounts have been accessed by someone else. Whatever is expected from users, it is important that it be clearly defined.

The system administrator's security responsibilities. The policy may require that every host use specific security measures, login banner messages, or monitoring and accounting procedures. It might list applications that should not be run on any host attached to the network.

The proper use of network resources. Define who can use network resources, what things they can do, and what things they should not do. If your organization takes the position that email, files, and histories of computer activity are subject to security monitoring, tell the users very clearly that this is the policy.

The actions taken when a security problem is detected. What should be done when a security problem is detected? Who should be notified? It is easy to overlook things during a crisis, so you should have a detailed list of the exact steps that a system administrator or user should take when a security breach is detected. This could be as simple as telling the users to "touch nothing, and call the network security officer." But even these simple actions should be in the written policy so that they are readily available.

Security planning (assessing the threat, assigning security responsibilities, and writing a security policy) is the basic building block of network security, but the plan must be implemented before it can have any effect.

A Model for Network Security


A model for much of what we will be discussing is captured, in very general terms, in Figure 1.5. A message is to be transferred from one party to another across some sort of internet. The two parties, who are the principals in this transaction, must cooperate for the exchange to take place. A logical information channel is established by defining a route through the internet from source to destination and by the cooperative use of communication protocols (e.g., TCP/IP) by the two principals.

Security aspects come into play when it is necessary or desirable to protect the information transmission from an opponent who may present a threat to confidentiality, authenticity, and so on. All the techniques for providing security have two components:

A security-related transformation on the information to be sent. Examples include the encryption of the message, which scrambles the message so that it is unreadable by the opponent, and the addition of a code based on the contents of the message, which can be used to verify the identity of the sender.

Some secret information shared by the two principals and, it is hoped, unknown to the opponent. An example is an encryption key used in conjunction with the transformation to scramble the message before transmission and unscramble it on reception.
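These two components can be shown together in a deliberately simplified sketch: the shared secret is the key, and the security-related transformation is an XOR with a keystream derived from that key. This toy construction is for illustration only and is not a secure cipher; the key and message are invented for the example.

```python
import hashlib

def keystream(key, n):
    """Derive n pseudo-random bytes from the shared secret (illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def transform(key, data):
    """XOR the data with the keystream; applying it twice recovers the input."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"shared-by-the-two-principals"  # the secret information, component two
plaintext = b"meet at dawn"
ciphertext = transform(secret, plaintext)  # the transformation, component one
recovered = transform(secret, ciphertext)  # reversed on reception with the same secret
```

Only a party holding the same secret can invert the transformation, which is exactly the relationship the model describes between the two principals.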


A trusted third party may be needed to achieve secure transmission. For example, a third party may be responsible for distributing the secret information to the two principals while keeping it from any opponent. Or a third party may be needed to arbitrate disputes between the two principals concerning the authenticity of a message transmission.

This general model shows that there are four basic tasks in designing a particular security service:

1. Design an algorithm for performing the security-related transformation. The algorithm should be such that an opponent cannot defeat its purpose.
2. Generate the secret information to be used with the algorithm.
3. Develop methods for the distribution and sharing of the secret information.
4. Specify a protocol to be used by the two principals that makes use of the security algorithm and the secret information to achieve a particular security service.

However, there are other security-related situations of interest that do not neatly fit this model. A general model of these other situations is illustrated by Figure 1.6, which reflects a concern for protecting an information system from unwanted access. Most readers are familiar with the concerns caused by the existence of hackers, who attempt to penetrate systems that can be accessed over a network. The hacker can be someone who, with no malign intent, simply gets satisfaction from breaking and entering a computer system. Or, the intruder can be a disgruntled employee who wishes to do damage, or a criminal who seeks to exploit computer assets for financial gain (e.g., obtaining credit card numbers or performing illegal money transfers).

Another type of unwanted access is the placement in a computer system of logic that exploits vulnerabilities in the system and that can affect application programs as well as utility programs, such as editors and compilers. Programs can present two kinds of threats:

Information access threats intercept or modify data on behalf of users who should not have access to that data.

Service threats exploit service flaws in computers to inhibit use by legitimate users.


The security mechanisms needed to cope with unwanted access fall into two broad categories (see Figure 1.6). The first category might be termed a gatekeeper function. It includes password-based login procedures that are designed to deny access to all but authorized users and screening logic that is designed to detect and reject worms, viruses, and other similar attacks. Once either an unwanted user or unwanted software gains access, the second line of defense consists of a variety of internal controls that monitor activity and analyze stored information in an attempt to detect the presence of unwanted intruders.
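The password-based gatekeeper function described above can be sketched in a few lines. This is a minimal illustration, not a production login system; the user names, passwords, and iteration count are assumptions, and Python's standard hashlib and hmac modules supply the salted hashing and constant-time comparison.

```python
import hashlib
import hmac
import os

# Toy "gatekeeper": store a salted password hash, never the password itself,
# and deny access to all but authorized users.
_users = {}

def register(name, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _users[name] = (salt, digest)

def login(name, password):
    if name not in _users:
        return False  # unknown user: deny access
    salt, digest = _users[name]
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, attempt)

register("alice", "correct horse battery staple")
ok = login("alice", "correct horse battery staple")
bad = login("alice", "wrong password")
```

A real gatekeeper would add the second line of defense the text mentions: logging failed attempts and monitoring activity after access is granted.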


Lecture note # 29
Content: Password Security; Access Control and Monitoring

Access Control and Monitoring

What is access control?

The ability to allow only authorized users, programs, or processes to access a system or resource.
The granting or denying, according to a particular security model, of certain permissions to access a resource.
An entire set of procedures performed by hardware, software, and administrators to monitor access, identify users requesting access, record access attempts, and grant or deny access based on pre-established rules.

Access control is the heart of security.

Access Control Models (based on how security policies are managed):

Discretionary Access Control (DAC)
Mandatory Access Control (MAC)
Role-Based Access Control (RBAC)

The subjects are the active entities (a.k.a. principals) that do things. The objects are the passive entities to which things are done. An example of a subject might be a person or a process on a computer; an example of an object might be a file or a subroutine. The sets of subjects and objects need not be disjoint. Our model of access control is illustrated as follows:

Here we are presuming complete mediation: every request any subject makes will be checked, and the decision is based on past actions (and perhaps some stored information that summarizes those past actions). In our example of the guard at the door, this assumption is equivalent to postulating that students are not allowed to come through the window, only through the door, where their ID is checked against a list (the stored information).

DAC: Discretionary Access Control

1. Definition: An individual user can set an access control mechanism to allow or deny access to an object.
2. Relies on the object owner to control access.
3. DAC is widely implemented in most operating systems, and we are quite familiar with it.

Strength of DAC:
Flexibility: a key reason why it is widely known and implemented in mainstream operating systems.

Limitations of DAC:
Global policy: DAC lets users decide the access control policies on their data, regardless of whether those policies are consistent with the global policies. Therefore, if there is a global policy, DAC has trouble ensuring consistency.
Information flow: information can be copied from one object to another, so access to a copy is possible even if the owner of the original does not provide access to the original. This has been a major concern for the military.
Malicious software: DAC policies can be easily changed by the owner, so a malicious program (e.g., a downloaded untrustworthy program) run by the owner can change DAC policies on behalf of the owner.
Flawed software: similarly, flawed software can be instructed by attackers to change its DAC policies.

MAC: Mandatory Access Control

1. Definition: A system-wide policy decrees who is allowed to have access; an individual user cannot alter that access.
2. Relies on the system to control access.
3. Example: The law allows a court to access driving records without the owners' permission.
4. Traditional MAC mechanisms have been tightly coupled to a few security models.
5. Recently, systems supporting flexible security models have started to appear (e.g., SELinux, Trusted Solaris, TrustedBSD, etc.).
Role-Based Access Control (RBAC)

We have not yet distinguished among kinds of users, but we want some users (such as administrators) to have significant privileges, and we want others (such as regular users or guests) to have lower privileges. In companies and educational institutions, this can get complicated when an ordinary user becomes an administrator or a baker moves to the candlestick makers' group. Role-based access control lets us associate privileges with groups, such as "all administrators can do this" or "candlestick makers are forbidden to do this." Administering security is easier if we can control access by job demands, not by person. Access control keeps up with a person who changes responsibilities, and the system administrator does not have to choose the appropriate access control settings for someone.

Access Control Methods

Access Control Matrix
Access Control List
Capability List

Access Control Matrix

An access control matrix is a table in which each row represents a subject, each column represents an object, and each entry is the set of access rights for that subject to that object. In general, the access control matrix is sparse (meaning that most cells are empty): most subjects do not have access rights to most objects. The access matrix can be represented as a list of triples, having the form <subject, object, rights>. Searching a large number of these triples is inefficient enough that this implementation is seldom used.

We can represent access rights enforced by complete mediation using an access control matrix. Let Subj be the set of subjects and Obj be the set of objects. Neither set is ordered, and we postulate that Subj is a subset of Obj. Subjects are active entities, and we want subjects to be able to do things to other subjects (e.g., processes can send kill signals to other processes), so it is sensible that Subj be a subset of Obj. The system state with respect to access control can be represented in a matrix, as follows:

Each entry of the matrix stores the access rights that the subject of that row has to the object of that column. We denote this: [S, O]. Systems are not static, and there will often be changes in access rights of subjects to objects. We therefore specify commands that will change state as follows:

name(O, O', O'', ..., S, S', S'')
    if (R in [S, O]), (R' in [S', O']), ...
    then
        Op1
        Op2
        Op3
        ...

where the arguments to the command are names of elements in Obj. Note that the conditions of the if statement are predicates, either true or false. Operations (Op1, Op2, etc.) change the protection matrix. They include:

create object O -- add a column
create subject S -- add a row
delete object O -- delete a column
delete subject S -- delete a row

as well as operations that grant and take back rights, such as:

grant R to [S, O], which is equivalent to [S, O] := [S, O] union {R}
delete R from [S, O], which is equivalent to [S, O] := [S, O] - {R}

We have described a simple programming language. For the purposes of our discussion, we disregard details such as how to handle deleting an object that does not exist. Using this language, we can postulate protection commands that model, for example, creating a file, conferring read rights, and revoking read rights, as follows:

create.file(p, f)
    create object f
    grant own rights to [p, f]
    grant read rights to [p, f]
    grant write rights to [p, f]

confer.read(p, q, f)
    if (own in [p, f]) then grant read to [q, f]

revoke.read(p, q, f)
    if (own in [p, f]) then delete read from [q, f]
In the proposed scheme, any right a subject S has to an object can be conferred to any other subject S' if S has "own" rights over the object.

A domain is a set of objects that a subject can access. In other words, a domain is the union of the elements of a row in the access matrix. A process that changes from one small protection domain to another as execution proceeds can obey the Principle of Least Privilege. We implement multiple small protection domains by associating multiple rows of the access control matrix with a process. Each row defines a domain. However, now we need to allow a domain (subject) to transfer control to another. An "enter" right for one domain to the other permits such a cross-domain transfer. The issue now is to identify criteria for a domain change, and to make sure that causing these domain changes does not end up adding work for the programmer. We solve this problem by overloading the procedure call with domain changes to get protected procedures. Each protected procedure executes in its own protection domain. Thus, execution in a protected procedure has certain inalienable rights. Some of these rights come from arguments in the call, others from information obtained statically.

We give an example of a protected domain. Imagine that subjects consist of a user and an editor, and that the objects are some files and a spelling-checker dictionary. The matrix may look as follows (without the dotted arrows):

Upon invoking the editor, the access matrix changes, and the r/w privileges move from the first column to the second (as indicated by the dotted arrows). Also, note that only the editor has access to the dictionary. Now, suppose there are two users that wish to invoke the editor:

In this case, it appears that the editor has access to all three files. In practice, there will be two copies of the editor running and we desire that each copy only has access to the files of one user. The solution is to invent a "template" domain. A procedure call (such as invoking the editor) causes a new domain to be constructed based on the template. This adds a new subject/row to the matrix. Upon return from the procedure call the instantiated domain is destroyed and the process continues execution in the user domain. Disadvantage:


In a large system, the matrix will be enormous in size and mostly sparse.

Access Control List
With access control lists there is one such list for each object; the list shows all subjects who should have access to the object and what their access is. An ACL corresponds to a column of the access control matrix.

An obvious variant of the access control matrix is to store each column with the object it represents. Thus, each object has associated with it a set of pairs, with each pair containing a subject and a set of rights. The named subject can access the associated object using any of those rights. More formally:

Consider the access control matrix in Figure 2-1. The set of subjects is process 1 and process 2, and the set of objects is file 1, file 2, process 1, and process 2. The corresponding access control lists are

acl(file 1) = { (process 1, { read, write, own }), (process 2, { append }) }
acl(file 2) = { (process 1, { read }), (process 2, { read, own }) }
acl(process 1) = { (process 1, { read, write, execute, own }), (process 2, { read }) }

acl(process 2) = { (process 1, { write }), (process 2, { read, write, execute, own }) }

Each subject and object has an associated ACL. Thus, process 1 owns file 1, and can read from or write to it; process 2 can only append to file 1. Similarly, both processes can read file 2, which process 2 owns. Both processes can read from process 1; both processes can write to process 2. The exact meanings of "read" and "write" depend on the instantiation of the rights.

Creation and Maintenance of Access Control Lists
Specific implementations of ACLs differ in details. Some of the issues are as follows.

1. Which subjects can modify an object's ACL?
2. If there is a privileged user (such as root in the UNIX system or administrator in Windows NT), do the ACLs apply to that user?
3. Does the ACL support groups or wildcards (that is, can users be grouped into sets based on a system notion of "group" or on pattern matching)?
4. How are contradictory access control permissions handled? If one entry grants read privileges only and another grants write privileges only, which right does the subject have over the object?
5. If a default setting is allowed, do the ACL permissions modify it, or is the default used only when the subject is not explicitly mentioned in the ACL?
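As a sketch, the ACLs above can be derived mechanically from the matrix by slicing out columns; the dict-of-sets layout below is an illustrative choice, not from the text:

```python
# The access control matrix of Figure 2-1, as a sparse dict:
# (subject, object) -> set of rights.
matrix = {
    ("process 1", "file 1"):    {"read", "write", "own"},
    ("process 2", "file 1"):    {"append"},
    ("process 1", "file 2"):    {"read"},
    ("process 2", "file 2"):    {"read", "own"},
    ("process 1", "process 1"): {"read", "write", "execute", "own"},
    ("process 2", "process 1"): {"read"},
    ("process 1", "process 2"): {"write"},
    ("process 2", "process 2"): {"read", "write", "execute", "own"},
}

def acl(obj):
    """The column for obj: each subject paired with its rights over obj."""
    return {s: r for (s, o), r in matrix.items() if o == obj}

assert acl("file 1") == {"process 1": {"read", "write", "own"},
                         "process 2": {"append"}}
```

Storing each column with its object is exactly the "obvious variant" the text describes: given the object, its ACL is immediately at hand.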

Which Subjects Can Modify an Object's ACL?


When an ACL is created, rights are instantiated. Chief among these rights is the one we will call own. Possessors of the own right can modify the ACL. Creating an object also creates its ACL, with some initial value (possibly empty, but more usually the creator is initially given all rights, including own, over the new object). By convention, the subject with own rights is allowed to modify the ACL. However, some systems allow anyone with access to manipulate the rights.

Do the ACLs Apply to a Privileged User?


Many systems have users with extra privileges. The two best known are the root superuser on UNIX systems and the administrator user on Windows NT and 2000 systems. Typically, ACLs (or their degenerate forms) are applied in only a limited fashion to such users.

Does the ACL Support Groups and Wildcards?


In their classic form, ACLs do not support groups or wildcards. In practice, systems support one or the other (or both) to limit the size of the ACL and to make manipulation of the lists easier. A group can either refine the characteristics of the processes to be allowed access or be a synonym for a set of users (the members of the group).


Conflicts
A conflict arises when two access control list entries in the same ACL give different permissions to the subject. The system can allow access if any entry would give access, deny access if any entry would deny access, or apply the first entry that matches the subject.

ACLs and Default Permissions


When ACLs and abbreviations of access control lists or default access rights coexist (as on many UNIX systems), there are two ways to determine access rights. The first is to apply the appropriate ACL entry, if one exists, and to apply the default permissions or abbreviations of access control lists otherwise. The second way is to augment the default permissions or abbreviations of access control lists with those in the appropriate ACL entry.

Revocation of Rights
Revocation, or the prevention of a subject's accessing an object, requires that the subject's rights be deleted from the object's ACL. Preventing a subject from accessing an object is simple: the entry for the subject is deleted from the object's ACL. If only specific rights are to be deleted, they are removed from the relevant subject's entry in the ACL. If ownership does not control the giving of rights, revocation is more complex.

Advantages: It is easy to determine who can access a given object, and easy to revoke all access to an object.
Disadvantages: It is difficult to know the access rights of a given subject, and difficult to revoke a user's rights on all objects.
ACLs are used by most mainstream operating systems.

Capability List
Conceptually, a capability list is like a row of the access control matrix. Each subject has associated with it a set of pairs, with each pair containing an object and a set of rights. The subject associated with this list can access the named object in any of the ways indicated by the named rights. More formally, a capability list corresponds to a row of the access control matrix.


We abbreviate "capability list" as C-List. Again, consider the access control matrix in Figure 2-1. The set of subjects is process 1 and process 2. The corresponding capability lists are

cap(process 1) = { (file 1, { read, write, own }), (file 2, { read }), (process 1, { read, write, execute, own }), (process 2, { write }) }
cap(process 2) = { (file 1, { append }), (file 2, { read, own }), (process 1, { read }), (process 2, { read, write, execute, own }) }

Each subject has an associated C-List. Thus, process 1 owns file 1, and can read from or write to it; process 1 can read file 2; process 1 can read, write to, or execute itself and owns itself; and process 1 can write to process 2. Similarly, process 2 can append to file 1; process 2 owns file 2 and can read it; process 2 can read process 1; and process 2 can read, write to, or execute itself and owns itself.

Capabilities encapsulate object identity. When a process presents a capability on behalf of a user, the operating system examines the capability to determine both the object and the access to which the process is entitled. This reflects how capabilities for memory management work; the location of the object in memory is encapsulated in the capability. Without a capability, the process cannot name the object in a way that will give it the desired access.

Implementation of Capabilities
Three mechanisms are used to protect capabilities: tags, protected memory, and cryptography.

1. A tagged architecture has a set of bits associated with each hardware word. The tag has two states: set and unset. If the tag is set, an ordinary process can read but not modify the word. If the tag is unset, an ordinary process can read and modify the word. Further, an ordinary process cannot change the state of the tag; the processor must be in a privileged mode to do so.

2. More common is to use the protection bits associated with paging or segmentation. All capabilities are stored in a page (segment) that the process can read but not alter.
This requires no special-purpose hardware other than that used by the memory management
scheme. But the process must reference capabilities indirectly, usually through pointers, rather than directly. The c-list is stored in the kernel's memory and is a table with rights and pointers to objects.
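The kernel table with process-held indices, as just described, might be sketched like this (the structures and function names are illustrative, not from any particular kernel):

```python
# A sketch of capability indirection: the C-List lives in kernel memory;
# a process holds only small integer indices into it, so it cannot forge
# or alter a capability directly.

kernel_clist = []  # (object, rights) pairs; conceptually kernel-only memory

def kernel_create_capability(obj, rights):
    """Kernel installs a capability and hands the process back an index."""
    kernel_clist.append((obj, set(rights)))
    return len(kernel_clist) - 1

def kernel_access(index, obj, right):
    """On each access the kernel dereferences the index and checks rights."""
    o, r = kernel_clist[index]
    return o == obj and right in r

i = kernel_create_capability("file 1", {"read"})
assert kernel_access(i, "file 1", "read")
assert not kernel_access(i, "file 1", "write")
```

Because the process can only name indices, it can speak only of capabilities it already holds, which is exactly the point of the indirection.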

Instead of giving capabilities directly to processes, we only provide processes with indices into the table. A process can name only these indices, and thus can refer only to capabilities that it holds, making it impossible to forge new capabilities. We are using indirection to prevent processes from forging capabilities. Associated with a c-list are meta-instructions that allow capabilities to be changed. These meta-instructions include: create a new c-list, copy a capability into the c-list, and delete a capability from a c-list. These operations allow processes to instruct the kernel to move capabilities around.

3. A third alternative is to use cryptography. The goal of tags and memory protection is to prevent the capabilities from being altered. This is akin to integrity checking. Cryptographic checksums are another mechanism for checking the integrity of information. Each capability has a cryptographic checksum associated with it, and the checksum is digitally enciphered using a cryptosystem whose key is known to the operating system. When the process presents a capability to the operating system, the system first recomputes the cryptographic checksum associated with the capability. It then either enciphers the checksum using the cryptographic key and compares it with the one stored in the capability, or deciphers the checksum provided with the capability and compares it with the computed checksum. If they match, the capability is unaltered. If not, the capability is rejected.

Comparison with Access Control Lists
Two questions underlie the use of access controls:


1. Given a subject, what objects can it access, and how?
2. Given an object, what subjects can access it, and how?

In theory, either access control lists or capabilities can answer these questions. For the first question, capabilities are the simplest: just list the elements of the subject's associated C-List. For the second question, ACLs are the simplest: just list the elements of the object's access control list. In an ACL-based system, answering the first question requires all objects to be scanned; the system extracts all ACL entries associated with the subject in question. In a capability-based system, answering the second question requires all subjects to be scanned; the system extracts all capabilities associated with the object in question.

Advantages: It is easy to know the access rights of a given subject, and easy to revoke a user's access rights on all objects.
Disadvantages: It is difficult to know who can access a given object, and difficult to revoke all access rights to an object.

A number of capability-based computer systems have been developed, but they have not proven to be commercially successful.

Packet Filtering using ACL
Each of these rules has some powerful implications when filtering IP and IPX packets with access lists. There are two types of access lists used with IP and IPX:

Standard access lists: These use only the source IP address in an IP packet to filter the network, which basically permits or denies an entire suite of protocols. IPX standard lists can filter on both source and destination IPX address. You create a standard IP access list by using the access-list numbers 1 to 99.

Extended access lists: These check for source and destination IP address, the protocol field in the Network layer header, and the port number in the Transport layer header. IPX extended access lists use source and destination IPX addresses, Network layer protocol fields, and socket numbers in the Transport layer header. You'll use the extended access-list range from 100 to 199.
Once you create an access list, you apply it to an interface as either an inbound or an outbound list:

Inbound access lists: Packets are processed through the access list before being routed to the outbound interface.
Outbound access lists: Packets are routed to the outbound interface and then processed through the access list.

This tells the list to deny any packets from host 172.16.30.2. The default command is host. In other words, if you type access-list 10 deny 172.16.30.2, the router assumes you mean host 172.16.30.2.

Wildcards
Wildcards are used with access lists to specify a host, a network, or part of a network. To understand wildcards, you need to understand block sizes. Block sizes are used to specify a range of addresses. Some of the different block sizes available are 64, 32, 16, 8, and 4. To specify that an octet can be any value, the value 255 is used. As an example, here is how a full subnet is specified with a wildcard: a wildcard such as 0.0.0.255 tells the router to match the first three octets exactly, but the fourth octet can be any value.

Configuration of standard access lists:

Acme#config t
Acme(config)#access-list 10 deny 172.16.40.0 0.0.0.255
Acme(config)#access-list 10 permit any
Acme(config)#int e0
Acme(config-if)#ip access-group 10 out

Configuration of extended access lists:

Acme#config t
Acme(config)#access-list 110 deny tcp any host 172.16.10.5 eq 21
Acme(config)#access-list 110 deny tcp any host 172.16.10.5 eq 23
Acme(config)#access-list 110 permit ip any any
Acme(config)#int e2
Acme(config-if)#ip access-group 110 out

Password security
Perhaps the most important issue for network security, beyond the realm of accidents, is the consistent use of strong passwords. Unix-like operating systems which allow remote logins from the network are particularly vulnerable to password attacks. The .rhosts and hosts.equiv files which allowed login without password challenge via rsh and rlogin were acceptable risks in bygone times, but these days one cannot afford to be lax about security. The problem with this mechanism is that .rhosts and hosts.equiv use hostnames as effective passwords. This mechanism trusts DNS name service lookups, which can be spoofed in elaborate attacks.


Moreover, if a cracker gets into one host, he/she will then be able to log in on every host listed in these files without a password. This greatly broadens the possibilities for effective attack. Typing a password is not such a hardship for users, and there are alternative ways of performing remote execution for administrators without giving up password protection (e.g. use of cfengine).

Password security is the first line of defence against intruders. Once a malicious user has gained access to an account, it is very much easier to exploit other weaknesses in security. Some sites use schemes such as password aging in order to force users to change passwords regularly. This helps to combat password familiarity gained over time by local peer users, but it has an unfortunate side-effect: users who tend to set poor passwords will not appreciate having to change their passwords repeatedly, and will tend to rebel by setting trivial passwords if they can. Once a user has a good password, it is often advantageous to leave it alone. The problems of password aging are insignificant compared with the problem of weak passwords. Finding the correct balance of changing and leaving alone is a challenge.

Passwords are not visible to ordinary users, but their encrypted form is often visible. Even on Windows systems, where a binary file format is used, a freely available program like PwDump can be used to decode the binary format into ASCII. There are many publicly available programs which can guess passwords and compare them with the encrypted forms, e.g. crack, which is available both for Unix and for Windows. No one with an easy password is safe. Passwords should never be any word in a dictionary or a simple variation of such a word or name. It takes just a few seconds to guess these. Modern operating systems have shadow password files or databases that are not readable by normal users.
For instance, the Unix password file contains an x instead of a password, and the encrypted password is kept in an unreadable file. This makes it much harder to scan the password file for weak passwords. Tools for password cracking (e.g. Alec Muffett's crack program) can help administrators find weak passwords before crackers do.
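The kind of guessing such a tool performs can be sketched as follows. Note that real password files use salted, deliberately slow hash functions, so the plain SHA-256 and the tiny variation list here are purely illustrative:

```python
# A sketch of dictionary-based password guessing: hash each candidate
# word (and a few simple variations) and compare against the stored hash.
import hashlib

def weak_password(stored_hash, dictionary):
    """Return the guessed password if the stored hash matches a simple
    variation of a dictionary word; otherwise return None."""
    for word in dictionary:
        for guess in (word, word.capitalize(), word + "1"):
            if hashlib.sha256(guess.encode()).hexdigest() == stored_hash:
                return guess
    return None

stored = hashlib.sha256(b"secret1").hexdigest()
assert weak_password(stored, ["password", "secret"]) == "secret1"
```

Even this trivial sketch finds "secret1" instantly, which is why no dictionary word or simple variation of one is safe.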


Lecture note # 30 Content: Firewall: Filtering Rules, Wrappers. What is a Firewall? A process that filters all traffic between a protected or "inside" network and a less trustworthy or "outside" network. Firewalls implement a security policy, which distinguishes good traffic from bad traffic. Part of the challenge of protecting a network with a firewall is determining the security policy that meets the needs of the installation. Why is a firewall required? When you're connected to the Internet, the Internet is connected to you. Your Internet connection is an open two-way channel; when you are connected to the Internet, any machine on the Internet can reach your machines. Therefore it can use any service running on your desktop PCs or servers (Microsoft Windows Networking, i.e. your shared files and printers, e-mail, telnet, NFS, your company database, etc.) unless you explicitly prevent it. Some services are protected by usernames and passwords, but operating systems and applications often have security holes that let a malign hacker bypass these basic checks. Or the hacker may be able to install a Trojan horse program or sniffer to capture your passwords, and then use your servers posing as a legitimate user. This is where a firewall comes in. It's a single, secured point of entry and exit for your network: everything passes through here and nothing comes in or out anywhere else. It blocks access to your network at the perimeter of your site (Figure 24.1). Because the packets are blocked before they even reach the machines that are running your services, firewalls protect you from (many of) the vulnerabilities in your applications and operating systems.


At their most basic, firewalls block particular TCP/IP source/destination address/port combinations. For example, your mail server (IP number 192.0.2.66) listens on port 25, and your Web server (192.0.2.77) listens on port 80. Your firewall might have rules like:
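For the two servers just mentioned, such rules might look like the following, written in the Cisco access-list style used later in these notes. This is an illustrative sketch only; the list number is arbitrary and the final deny restates the usual implicit catch-all:

```
access-list 110 permit tcp any host 192.0.2.66 eq 25
access-list 110 permit tcp any host 192.0.2.77 eq 80
access-list 110 deny ip any any
```

Anything that is not mail to the mail server or HTTP to the Web server is dropped at the perimeter.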

Features of a firewall are:

alerting you to hacker attacks, so you can take immediate evasive action if required
logging your Internet activity; because all access is through this one point, monitoring your firewall lets you track all your Internet traffic.

Design of Firewalls: By careful positioning of a firewall within a network, we can ensure that all network access that we want to control must pass through it. A firewall is typically well isolated, making it highly immune to modification. Usually a firewall is implemented on a separate computer, with direct connections generally just to the outside and inside networks.

Types of Firewalls: Firewalls have a wide range of capabilities. Types of firewalls include:

packet filtering gateways or screening routers
stateful inspection firewalls
application proxies
guards
personal firewalls

1. Packet filtering gateways or screening routers
The router performs the normal LAN-to-Internet routing function, but it's also configured to act as a packet filter, with a set of rules to allow or deny packets based on their source and destination. A packet is filtered only on the information in its header; the content or payload of the packet is not examined.

Figure 24.3 shows a set of packet filtering rules for a firewall of this type. Packet filtering is very fast, but because it only looks at each packet in isolation, it has significant limitations:

it doesn't maintain any state about established TCP connections, so there are certain types of attack it can't block
it has difficulty handling FTP, because of FTP's multiple-port operation; some firewalls can only handle passive-mode FTP because of this
it doesn't handle user authentication, which requires the firewall to keep state information about authenticated users.

Ordering of security permit/deny rules
The packet filtering rules in Figure 24.3 are applied top-down. When the firewall/router receives a packet, it compares the packet headers against the first rule; if the packet matches, the action specified in the rule is performed (permit in this case) on the packet, and that's the end of that. However, if the first rule doesn't match, the packet is compared with the second rule. If that matches, it's actioned; otherwise the third rule is checked, and so on, until no rules are left. Usually there is an automatic, deny-everything, catch-all rule inserted at the end.
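The top-down, first-match evaluation with a final catch-all deny can be sketched as follows (the rule fields and addresses are illustrative):

```python
# A sketch of first-match packet filtering: rules are tried in order,
# the first matching rule decides, and an implicit deny ends the list.

RULES = [
    ({"dst": "192.0.2.66", "dport": 25}, "permit"),   # mail server
    ({"dst": "192.0.2.77", "dport": 80}, "permit"),   # web server
]

def filter_packet(pkt):
    for match, action in RULES:
        if all(pkt.get(k) == v for k, v in match.items()):
            return action          # first matching rule wins; stop here
    return "deny"                  # automatic deny-everything catch-all

assert filter_packet({"dst": "192.0.2.66", "dport": 25}) == "permit"
assert filter_packet({"dst": "192.0.2.66", "dport": 23}) == "deny"
```

Because evaluation stops at the first match, the order of the rules matters: a broad permit placed above a narrow deny silently disables the deny.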

2. Stateful inspection firewalls
A stateful packet inspection (SPI) firewall maintains a table of current activity, e.g. information about each TCP connection, and uses this information in conjunction with the packet's source and destination addresses/ports to decide whether the packet should be allowed or denied (blocked). Figure 24.4 illustrates how incoming traffic, even for the same port, is distinguished on the basis of whether it is part of an existing connection. By contrast, a packet filter would either allow traffic to port 1234 or deny it; it
could not distinguish between the two cases shown, because it keeps no record (state information) about what happened in the past.
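The state-table idea can be sketched minimally: record each outbound connection tuple, and admit an inbound packet only if it is the reverse of a recorded tuple (the fields and addresses are illustrative):

```python
# A sketch of stateful inspection: outbound connections are recorded,
# and inbound packets are admitted only as replies to a recorded one.

state = set()  # tuples of (src, sport, dst, dport)

def outbound(src, sport, dst, dport):
    """An inside host opens a connection; the firewall notes the tuple."""
    state.add((src, sport, dst, dport))

def inbound_allowed(src, sport, dst, dport):
    """Allow an inbound packet only if it reverses a recorded connection."""
    return (dst, dport, src, sport) in state

outbound("10.0.0.5", 1234, "198.51.100.9", 80)    # inside web client
assert inbound_allowed("198.51.100.9", 80, "10.0.0.5", 1234)
assert not inbound_allowed("203.0.113.7", 80, "10.0.0.5", 1234)
```

Only the server the client actually contacted may send traffic back to that ephemeral port, which is exactly the reverse-traffic behaviour described next.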

SPI firewalls have many advantages over basic packet filtering:

They can allow reverse traffic automatically, without your having to enter specific rules. Consider an inside Web client connecting to a Web server outside on the Internet. The TCP connection will be from an ephemeral port (:1234, say) on the client, to port :80 on the server. With a packet filter you have a problem, because you won't know in advance what the ephemeral port is, and often just have to allow any incoming access to a whole range of ports that might be used as ephemeral ports by clients. By contrast, the SPI firewall records the ephemeral port number in its state table when the connection is first established through the firewall, and knows that only this Web server is allowed to reply to this client on this port.

Similarly, even though the firewall might allow incoming Web connections to port 80 on internal server 192.0.2.55, if an incoming packet for that destination isn't part of an existing connection, e.g. if the connection hasn't completed the normal three-way TCP handshake, the firewall can deny it. So even though port 80 has been opened on the firewall, incoming traffic to port 80 is subjected to many other checks before being permitted. By contrast, on a packet filter, an "allow traffic to port 80 on 192.0.2.55" rule is a blanket allow: anything to that port is permitted. This is an important distinction: there are hacker attacks that work by sending a packet that looks like it's in the middle of a TCP session. SPI firewalls can block these attacks but packet filters can't.

They can control UDP and ICMP traffic, even though these are connectionless, by creating pseudo sessions. For example, the firewall can recognize an outgoing ICMP ping packet and record the source and destination, ping identifier and
sequence number, etc. in its state table, and permit only incoming packets that are replies to this.

They can handle user authentication. An external user can connect to the firewall and authenticate (log on) to it, and the firewall records this fact. It can then let this user access internal resources that are blocked to non-authenticated users. This allows external users to connect to private internal mail and Web servers from external sites, for instance.

In short, SPI firewalls give much finer control over your traffic than do packet-filtering firewalls. You will usually have far fewer rules too, which makes configuring them much easier and much less error-prone. Let's look at a specific example of how an SPI firewall can block a particular type of attack, using its state information. A common denial of service (DoS) attack is the SYN flood. Here's how it works. When a TCP connection is being established, client and server go through the normal SYN, SYN/ACK, ACK three-way handshake. When the server receives the initial SYN from the client, it replies with SYN/ACK and creates an entry in an internal table saying it's awaiting an ACK from the client to complete the connection establishment (Figure 24.5).

When the client sends the answering ACK, the server removes the entry from the awaiting ACK table because the connection is now fully established (Figure 24.7).


In a SYN flood, the attacker sends a large number of SYNs in quick succession and never completes the connection establishment. The server's internal table fills up (Figure 24.7), so it's unable to accept connections from any other, real clients. In this way it has denied service to legitimate users.

SPI firewalls can protect servers from this type of attack. Instead of passing the initial SYN through to the server, the firewall itself sends the SYN/ACK, and awaits the establishing ACK. If the client is legitimate and sends the ACK, the firewall then performs its own three-way handshake with the server (Figure 24.8); when that's complete, it passes all traffic from the legitimate client to the server as normal. However, if the client is a hacker attempting a SYN flood, the real server is never contacted; the firewall detects the attack, recognizing the abnormal load of incomplete connection establishments and discarding them. The firewall may also raise an alert to inform the network manager of the attack.
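One simple way a firewall might spot the abnormal load of incomplete handshakes is to count half-open connections per source; the threshold and bookkeeping below are illustrative, not taken from any particular product:

```python
# A sketch of SYN-flood detection: count half-open (SYN seen, no ACK yet)
# handshakes per source, and flag a source that exceeds a threshold.

half_open = {}
THRESHOLD = 3   # illustrative; real devices use far larger, tuned limits

def on_syn(src):
    """SYN received: one more half-open handshake for this source.
    Returns False when the source should be flagged (drop / raise alert)."""
    half_open[src] = half_open.get(src, 0) + 1
    return half_open[src] <= THRESHOLD

def on_ack(src):
    """Handshake completed: release one half-open entry for this source."""
    half_open[src] = max(0, half_open.get(src, 0) - 1)

for _ in range(3):
    assert on_syn("attacker")      # first three half-open SYNs tolerated
assert not on_syn("attacker")      # fourth half-open SYN is flagged
```

A legitimate client completes the handshake, so its counter drains via on_ack and it never reaches the threshold; a flooder's counter only grows.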

Because the firewall must maintain state tables for each established TCP connection, for connections in the process of being established, and for UDP and ICMP packets, it can use a lot of memory. This is why you will often see a firewall advertised as handling 5000 simultaneous connections or similar. The
busier your site, the more connections will be needed, so the firewall must have the memory and processing power to handle the likely volume.

3. Application proxies or application-level gateways
An application-level gateway (ALG) is a firewall program that runs at the application level of the TCP/IP stack, not at the IP packet level. Figure 24.9 shows a client on the Internet connecting to one of our internal servers via an ALG firewall.

The client establishes a connection to the ALG, and the ALG connects to the server. The ALG acts as a proxy on behalf of the client. It accepts requests from the client, analyzes them, and, if they are acceptable according to the firewall's rules, re-issues them on the ALG's connection to the server. The ALG handles responses from the server in the same way. With an ALG there are two separate TCP connections, as illustrated in Figure 24.9. This is fundamentally different from packet filter and stateful packet inspection firewalls, where there is a single connection from the client to the server with the firewall acting almost as a router (Figure 24.10).
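The proxying behaviour just described can be sketched as a command-level relay; the protocol verbs and the allowed set are illustrative, not a complete SMTP implementation:

```python
# A sketch of an ALG relay: only protocol commands the proxy understands
# are re-issued toward the server; anything else is simply not forwarded.

ALLOWED = {"HELO", "MAIL", "RCPT", "DATA", "QUIT"}

def relay(command_line):
    """Return the command to re-issue on the server connection, or None."""
    verb = command_line.split()[0].upper()
    if verb not in ALLOWED:
        return None               # unknown or unwanted command: dropped
    return command_line           # re-issued on the ALG's own connection

assert relay("MAIL FROM:<a@example.com>") is not None
assert relay("WIZ") is None       # e.g. an old debugging backdoor command
```

Because only reconstructed commands cross the boundary, malformed client packets never reach the server at all.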

With the ALG, packets from the client are not forwarded to the server; only the semantic information (protocol commands, etc.) extracted from the packets is relayed. Malformed packets sent from the client, whether because of a faulty client implementation or because they are deliberately malformed as part of a hacker attack, can never reach the server.

Advantages of application-level gateway firewalls
ALGs understand the application-level protocols being used, so they can check the meaning of requests and replies. Let's take some examples:


1. An early version of the sendmail mail server had a backdoor built in for debugging: if you used the WIZ (wizard) command, you could get root user (administrator) access. Because packet filters and SPI firewalls don't understand the SMTP protocol at this high level, they can't block valid but unwanted commands like this. Because the ALG does operate at the application level, it can easily block this type of request.

2. ALGs can offer fine control over requests sent to a Web server. At the simplest level, the firewall could block certain URLs when requested by certain internal IP addresses. It could also block the Code Red worm's requests: they are valid according to the HTTP specs, but are recognizable because of the particular URL they request.

3. You could allow FTP GET requests but block PUT requests, so Internet users can retrieve files from your FTP server but can't deposit virus-laden or other troublesome files.

ALGs can also give very detailed, protocol-specific logging.

Disadvantages of application-level gateway firewalls
Because of the amount of state information and the number of open connections they maintain, ALGs need more memory and processing power than packet filters or SPI firewalls. They may also be slower because of the extra load of extracting and validating the application-level semantics. A separate proxy application must be provided within the firewall for each different protocol supported. When a new protocol or service is developed, there will usually be a delay before the firewall manufacturer releases a suitable proxy. For most people this delay won't matter, and anyway a generic proxy application can often be used as a stop gap. This will have no specific knowledge of the new protocol, but will allow it to operate through the firewall.
Proxy programs are often only a few hundred lines of C code, so they can be produced quickly. Some old ALGs and proxies only worked with special versions of client applications or libraries that were aware they were using the ALG, but nowadays most ALGs are transparent, giving the illusion that the client is connected directly to the server, and don't need any special client software.

4. Guards
A guard is a sophisticated firewall. Like a proxy firewall, it receives protocol data units, interprets them, and passes through the same or different protocol data units that achieve either the same result or a modified result. The guard decides what services to perform on the user's behalf in accordance with its available knowledge, such as whatever it can reliably know of the (outside) user's identity, previous interactions, and so forth. The degree of control a guard can provide is limited only by what is computable.


But guards and proxy firewalls are similar enough that the distinction between them is sometimes fuzzy. That is, we can add functionality to a proxy firewall until it starts to look a lot like a guard. Guard activities can be quite sophisticated, as illustrated in the following examples:

A university wants to allow its students to use e-mail up to a limit of so many messages or so many characters of e-mail in the last so many days. Although this result could be achieved by modifying e-mail handlers, it is more easily done by monitoring the common point through which all e-mail flows, the mail transfer protocol.

A school wants its students to be able to access the World Wide Web but, because of the slow speed of its connection to the web, it will allow only so many characters per downloaded image (that is, allowing text mode and simple graphics, but disallowing complex graphics, animation, music, or the like).

A library wants to make available certain documents but, to support fair use of copyrighted matter, it will allow a user to retrieve only the first so many characters of a document. After that amount, the library will require the user to pay a fee that will be forwarded to the author.

A company wants to allow its employees to fetch files via ftp. However, to prevent introduction of viruses, it will first pass all incoming files through a virus scanner. Even though many of these files will be non-executable text or graphics, the company administrator thinks that the expense of scanning them (which should pass) will be negligible.

Each of these scenarios can be implemented as a modified proxy. Because the proxy decision is based on some quality of the communication data, we call the proxy a guard. Since the security policy implemented by the guard is somewhat more complex than the action of a proxy, the guard's code is also more complex and therefore more exposed to error. Simpler firewalls have fewer possible ways to fail or be subverted. 5. Personal firewall Personal firewall is an application program that runs on a workstation to block unwanted traffic, usually from the network. A personal firewall can complement the work of a conventional firewall by screening the kind of data a single host will accept, or it can compensate for the lack of a regular firewall, as in a private DSL or cable modem connection. Just as a network firewall screens incoming and outgoing traffic for that network, a personal firewall screens traffic on a single workstation. A workstation could be vulnerable to malicious code or malicious active agents (ActiveX controls or Java applets), leakage of personal data stored on the workstation, and vulnerability scans to identify potential weaknesses. Commercial implementations of personal firewalls include Norton Personal Firewall from Symantec, McAfee Personal Firewall, and Zone Alarm from Zone Labs (now owned by Check Point).


The personal firewall is configured to enforce some policy. For example, the user may decide that certain sites, such as computers on the company network, are highly trustworthy, but most other sites are not. The user defines a policy permitting download of code, unrestricted data sharing, and management access from the corporate segment, but not from other sites. Personal firewalls can also generate logs of accesses, which can be useful to examine in case something harmful does slip through the firewall.

Combining a virus scanner with a personal firewall is both effective and efficient. Typically, users forget to run virus scanners daily, but they do remember to run them occasionally, such as sometime during the week. However, leaving the virus scanner execution to the user's memory means that the scanner detects a problem only after the fact, such as when a virus has been downloaded in an e-mail attachment. With the combination of a virus scanner and a personal firewall, the firewall directs all incoming e-mail to the virus scanner, which examines every attachment the moment it reaches the target host and before it is opened.

A personal firewall runs on the very computer it is trying to protect. Thus, a clever attacker is likely to attempt an undetected attack that would disable or reconfigure the firewall for the future. Still, especially for cable modem, DSL, and other "always on" connections, the static workstation is a visible and vulnerable target for an ever-present attack community. A personal firewall can provide reasonable protection to clients that are not behind a network firewall.

7. Hybrid firewalls

Hybrid firewalls are a mixture of stateful packet inspection for performance and ALGs (application-level gateways) for fine control of specific protocols. Many modern firewalls are hybrids of this type.

What Firewalls Can and Cannot Block

Firewalls can protect an environment only if they control the entire perimeter. That is, firewalls are effective only if no unmediated connections breach the perimeter. If even one inside host connects to an outside address, by a modem for example, the entire inside network is vulnerable through the modem and its host.

Firewalls do not protect data outside the perimeter; data that have properly passed (outbound) through the firewall are just as exposed as if there were no firewall.

Firewalls are the most visible part of an installation to the outside, so they are the most attractive target for attack. For this reason, several different layers of protection, called defense in depth, are better than relying on the strength of just a single firewall. Firewalls must be correctly configured, that configuration must be updated as the internal and external environment changes, and firewall activity reports must be reviewed periodically for evidence of attempted or successful intrusion.

Firewalls are targets for penetrators. While a firewall is designed to withstand attack, it is not impenetrable. Designers intentionally keep a firewall small and simple so that even if a penetrator breaks it, the firewall does not have further tools, such as compilers, linkers, loaders, and the like, to continue an attack.

Firewalls exercise only minor control over the content admitted to the inside, meaning that inaccurate data or malicious code must be controlled by other means inside the perimeter.

Table 7-8. Comparison of Firewall Types.

Packet filtering
o Complexity: Simplest
o Visibility: Sees only addresses and service protocol type
o Auditing: Auditing difficult
o Screening: Screens based on connection rules
o Configuration: Complex addressing rules can make configuration tricky

Stateful inspection
o Complexity: More complex
o Visibility: Can see either addresses or data
o Auditing: Auditing possible
o Screening: Screens based on information across packets, in either header or data field
o Configuration: Usually preconfigured to detect certain attack signatures

Application proxy
o Complexity: Even more complex
o Visibility: Sees full data portion of packet
o Auditing: Can audit activity
o Screening: Screens based on behavior of proxies
o Configuration: Simple proxies can substitute for complex addressing rules

Guard
o Complexity: Most complex
o Visibility: Sees full text of communication
o Auditing: Can audit activity
o Screening: Screens based on interpretation of message content
o Configuration: Complex guard functionality can limit assurance

Personal firewall
o Complexity: Similar to packet filtering firewall
o Visibility: Can see full data portion of packet
o Auditing: Can and usually does audit activity
o Screening: Typically screens based on information in a single packet, using header or data
o Configuration: Usually starts in "deny all inbound" mode, to which the user adds trusted addresses as they appear

Difference between Firewall and IDS (Intrusion Detection System)

Firewall:
o It is the first line of defense.
o Primary means of securing a private network against penetration from a public network.
o An access control device, performing perimeter security by deciding which packets are allowed or denied and which must be modified before passing.

o Core of an enterprise's comprehensive security policy.
o Can monitor all traffic entering and leaving the private network and alert the IT staff to any attempts to circumvent security or patterns of inappropriate use.

IDS:
o IDSs prepare for and deal with attacks by collecting information from a variety of system and network sources, then analyzing the symptoms of security problems.
o IDSs serve three essential security functions: monitor, detect, and respond to unauthorized activity.
o An IDS can also respond automatically (in real time) to a security breach event, for example by logging off a user, disabling a user account, or launching scripts.

Limitations of firewall:
o A firewall cannot detect security breaches associated with traffic that does not pass through it; only an IDS is aware of traffic in the internal network. Not all access to the Internet occurs through the firewall.
o A firewall does not inspect the content of the permitted traffic.
o A firewall is likely to be attacked more often than an IDS.
o A firewall is usually helpless against tunneling attacks, whereas an IDS is capable of monitoring messages from other pieces of the security infrastructure.

Difference between IDS and Firewalls

Though they both relate to network security, an IDS differs from a firewall in that a firewall looks out for intrusions in order to stop them from happening. The firewall limits the access between networks in order to prevent intrusion and does not signal an attack from inside the network. An IDS evaluates a suspected intrusion once it has taken place and signals an alarm. An IDS also watches for attacks that originate from within a system.

Personal Firewalls
1. Protect personal machines.
2. Software: TCP Wrappers, iptables.

TCP WRAPPERS

One of the problems with TCP/IP is that it is basically insecure. Many intrusions are the result of this insecurity.


TCP Wrappers addresses this by restricting the services that can be used. TCP Wrappers is a daemon that is run instead of inetd. It intercepts requests and either allows inetd to run and service the request or does not run inetd, thereby denying the request. TCP Wrappers handles telnet, finger, ftp, exec, rsh, rlogin, tftp, talk, comsat, and other services that have a one-to-one mapping onto executable files.

TCP Wrappers consists of:
o hosts.allow and hosts.deny files with the rules for allowed and denied services
o the tcpdchk program, which checks the configuration files for problems
o the tcpdmatch program, which reports how a service will be handled

Configuring the hosts.allow and hosts.deny files:
o First, decide on a strategy:
  - If you have a high level of trust, then allow everything that isn't specifically denied.
  - If you have a low level of trust, then deny everything that isn't specifically allowed.
o Then set up the hosts.allow and hosts.deny files. Remember that hosts.allow is examined first.
o For a mostly open system, leave hosts.allow empty and put denied services in hosts.deny.
o For a mostly closed system, put "ALL : ALL" in hosts.deny and then put allowed services in hosts.allow.

Example - mostly closed system:

hosts.allow:
    ALL : localhost
    in.telnetd : gilton.net
    in.fingerd : jeff1.gilton.net

This configuration allows any service for the local machine, telnet for any machine on the gilton network, and finger for jeff1 on the gilton network.

hosts.deny:
    ALL : ALL
    in.telnetd : ann.gilton.net

Anything else is denied. Note that the second line of hosts.deny does nothing, because the second line of hosts.allow already lets any machine on the gilton network use telnet.

TCP Wrapper configuration files: /etc/hosts.allow (and /etc/hosts.deny)
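The allow-first, deny-second evaluation order can be illustrated with a small simulation. The rule format and the matching logic below are deliberately simplified (real hosts_access matching supports more patterns); the host names are the ones from the example above:

```python
# Simplified model of TCP Wrappers rule evaluation:
#   hosts.allow is consulted first; on a match, access is granted.
#   hosts.deny is consulted next; on a match, access is denied.
#   If neither file matches, access is granted.
hosts_allow = [
    ("ALL", "localhost"),
    ("in.telnetd", "gilton.net"),
    ("in.fingerd", "jeff1.gilton.net"),
]
hosts_deny = [
    ("ALL", "ALL"),
    ("in.telnetd", "ann.gilton.net"),
]

def matches(rule_svc, rule_host, service, host):
    svc_ok = rule_svc in ("ALL", service)
    host_ok = (rule_host == "ALL" or host == rule_host
               or host.endswith("." + rule_host))   # domain suffix match
    return svc_ok and host_ok

def allowed(service, host):
    for svc, h in hosts_allow:
        if matches(svc, h, service, host):
            return True
    for svc, h in hosts_deny:
        if matches(svc, h, service, host):
            return False
    return True

# telnet from ann.gilton.net is allowed even though hosts.deny names her,
# because the hosts.allow entry for gilton.net matches first
print(allowed("in.telnetd", "ann.gilton.net"))   # True
print(allowed("in.ftpd", "somewhere.example"))   # False (ALL : ALL deny)
```

This makes concrete why the ann.gilton.net line in hosts.deny has no effect: evaluation stops at the first hosts.allow match.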


Inetd.conf

Iptables
1. Supports both stateless and stateful packet filtering.
2. You need a kernel that has the netfilter infrastructure in it: netfilter is a general framework inside the Linux kernel which other things (such as the iptables module) can plug into. This means you need kernel 2.3.15 or beyond, and answer `Y' to CONFIG_NETFILTER in the kernel configuration.
3. The iptables tool inserts and deletes rules from the kernel's packet filtering table.
4. How packets traverse the filters:

When a packet reaches a circle in the diagram, that chain is examined to decide the fate of the packet. If the chain says to DROP the packet, it is killed there; if the chain says to ACCEPT the packet, it continues traversing the diagram.

An example of firewall rules:
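The DROP/ACCEPT chain traversal just described can be modeled in a few lines. This is a conceptual sketch, not real iptables syntax; the rule set and packet fields are illustrative assumptions:

```python
# Conceptual model of an iptables-style chain: the first matching rule
# decides the packet's fate; an unmatched packet falls through to the
# chain's default policy.
def evaluate_chain(rules, default_policy, packet):
    for match, target in rules:
        if match(packet):      # does this rule match the packet?
            return target      # "ACCEPT" or "DROP" ends traversal
    return default_policy

# Illustrative rules: drop all ICMP, accept TCP to port 22 (ssh)
input_chain = [
    (lambda p: p["proto"] == "icmp", "DROP"),
    (lambda p: p["proto"] == "tcp" and p["dport"] == 22, "ACCEPT"),
]

print(evaluate_chain(input_chain, "DROP", {"proto": "icmp"}))              # DROP
print(evaluate_chain(input_chain, "DROP", {"proto": "tcp", "dport": 22}))  # ACCEPT
print(evaluate_chain(input_chain, "DROP", {"proto": "udp", "dport": 53}))  # DROP (default policy)
```

The "deny by default" policy recommended earlier corresponds to passing "DROP" as the chain's default, so only explicitly accepted traffic gets through.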


Lecture note # 31
Content: Detection and Prevention of Denial of Service (DoS) Attacks

Denial of service attack (DoS): Denial of service implies that an attacker disables or corrupts networks, systems, or services with the intent to deny services to intended users. DoS attacks involve either crashing the system or slowing it down to the point that it is unusable, but DoS can also be as simple as deleting or corrupting information. In most cases, performing the attack simply involves running a hack or script; the attacker does not need prior access to the target, because a way to reach it is all that is usually required. For these reasons, DoS attacks are among the most feared.

A basic DoS attack is one in which the attacker launches the attack from his or her own computer by sending packets of data to the remote computer; for each packet sent, the target machine receives one. This is a very uncommon form of denial of service because the attack is most of the time unsuccessful and can at times be easily traced. DoS attacks are usually carried out by amateur "script kiddies".

Fig: DoS

Types of DoS attacks:

1) Ping of death
A ping of death is a simple attack. Since ping requires the recipient to respond to the ping request, all the attacker needs to do is send a flood of pings to the intended victim. This attack modifies the IP portion of the header, indicating that there is more data in the packet than there actually is, causing the receiving system to crash, as shown in the figure.


Figure: ping of death

2) Smurf attack
An attack where a ping request is sent to a broadcast network address with the sending address spoofed, so that many ping replies come back to the victim and overload its ability to process them. This attack is made possible mostly by badly configured network devices that respond to ICMP echoes sent to broadcast addresses. The amount of traffic sent by the attacker is multiplied by a factor equal to the number of hosts behind the router that reply to the ICMP echo packets.
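The amplification factor can be made concrete with a little arithmetic; the bandwidth figure and host count below are illustrative assumptions:

```python
# Smurf amplification: one spoofed echo request to a broadcast address
# yields one echo reply per responding host behind the router, and all
# replies go to the spoofed (victim) address.
attacker_bandwidth_bps = 128_000   # illustrative: attacker's uplink rate
responding_hosts = 200             # hosts that answer the broadcast ping

victim_traffic_bps = attacker_bandwidth_bps * responding_hosts
print(victim_traffic_bps)   # 25600000, i.e. 25.6 Mbit/s at the victim
```

So a modest 128 kbit/s of attacker traffic becomes 25.6 Mbit/s at the victim, which is why the bounce network (the "amplifier") matters more than the attacker's own bandwidth.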

Fig: smurf attack

Besides the target system, the intermediate router is also a victim, and thus so are the hosts in the bounce site. A similar attack that uses UDP echo packets instead of ICMP echo packets is called a Fraggle attack.

3) Echo-chargen


This attack works between two hosts. Chargen is a protocol that generates a stream of packets; it is used to test the network's capacity. The attacker sets up a chargen process on host A that generates its packets as echo packets with a destination of host B. Host A then produces a stream of packets to which host B replies by echoing them back to host A. This series puts the network infrastructures of A and B into an endless loop. If the attacker makes B both the source and destination address of the first packet, B hangs in a loop, constantly creating and replying to its own messages.

4) Teardrop attack
A normal packet is sent, then a second packet is sent whose fragmentation offset claims to be inside the first fragment. This second fragment is too small to even extend outside the first fragment. This may cause an unexpected error condition on the victim host, which can cause a buffer overflow and a possible system crash on many operating systems. In the teardrop attack, the attacker sends a series of datagrams that cannot fit together properly. One datagram might say it is position 0 for length 60 bytes, another position 30 for 90 bytes, and another position 41 for 173 bytes. These three pieces overlap, so they cannot be reassembled properly. In an extreme case, the operating system locks up with these partial data units it cannot reassemble, thus leading to denial of service.

5) Flood attacks
Different types of flood attacks are:

a) UDP flood attack
UDP flooding is when the attacker sends garbage packets from UDP port(s) to UDP port(s) on the remote computer. Since UDP is a connectionless protocol (no handshake mechanism), UDP flooding can be very effective and easy to abuse for flood attacks. A common type of UDP flood attack, often referred to as a Pepsi attack, is one in which the attacker sends a large number of forged UDP packets to random diagnostic ports on a target host.
The CPU time, memory, and bandwidth required to process these packets may cause the target to become unavailable to legitimate users.

b) TCP SYN flood attack
A TCP session is established by using a three-way handshake mechanism, which allows the client and the host to synchronize the connection and agree upon the initial sequence numbers. When the client connects to the host, it sends a SYN request to establish and synchronize the connection. The host replies with a SYN/ACK, again to synchronize. Then the client acknowledges that it received the SYN/ACK packet by sending an ACK. When the host receives the ACK, the connection becomes OPEN, allowing traffic from both sides (full duplex). The connection remains open until the client or the host issues a FIN or RST packet, or the connection times out. If you flood a remote computer


with SYN packets, it is going to send back a SYN/ACK packet for each one, so bandwidth is wasted. In addition, in a TCP SYN flood attack the connection is never completed, so the target computer is left waiting for an ACK; it is therefore possible to max out the remote computer's connection queue, and connections from legitimate users will then be rejected. The amount of bandwidth this attack uses is very minimal, although if done on a very large scale it could affect the bandwidth of a web server.
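The effect on the connection queue can be sketched as a tiny simulation. The backlog size of 5 and the function name are illustrative assumptions (real backlogs are larger and configurable, and entries time out):

```python
# SYN flood effect on a server's half-open connection queue: each spoofed
# SYN occupies a backlog slot while the server waits for the final ACK
# (which never arrives), and a full backlog rejects legitimate SYNs.
BACKLOG = 5   # illustrative maximum number of half-open connections

half_open = []

def receive_syn(src):
    if len(half_open) >= BACKLOG:
        return "rejected"        # queue full: denial of service
    half_open.append(src)        # send SYN/ACK, wait for the final ACK
    return "syn/ack sent"

for i in range(BACKLOG):
    receive_syn(f"spoofed-{i}")  # attacker fills the queue with spoofed SYNs

print(receive_syn("legitimate-client"))   # rejected
```

This is why the defenses described later (limiting half-open connections and shortening the wait for the final acknowledgement) work: they free backlog slots faster than the attacker can consume them.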

Figure: TCP three-way handshake

c) ACK flood
A denial of service attack that sends a large number of TCP packets with the ACK flag set to a target. The goal of the attack is to use up all of the target system's network resources, causing the target's performance to degrade and possibly even causing a system crash. This attack can also be very effective against stateful firewalls and IDS/IPS products if they have a low amount of memory dedicated to state table management.

d) Reset (RST) attack
Whereas SYN flooding attacks are carried out at the beginning of the connection, RST attacks usually occur in the middle of it. The RST flag in the TCP packet is used to reset the connection. If two machines C and B are in the middle of a connection and an attacker A decides to attack machine C, then all he has to do is calculate or guess the correct sequence number using the methods described above. (There is no ACK in an RST packet.) After that, the attacker can disrupt the connection by sending a spoofed packet with the RST flag set to B. The attacker then assumes B's identity and starts attacking C.

e) FIN attack
A FIN attack is similar to the RST attack and is used to disconnect the client. However, it concentrates on the end state of a TCP connection. The attacker tries to establish a series of new connections and close them immediately without any data transfer. The idea is to keep the server busy and eventually crash it with a large number of open and close connection requests. This is more popular than the RST attack because the attacker can know immediately whether or not the attack was successful, as the client has to reply with an ACK after it receives a FIN flag.

6) Traffic Redirection
A router is a device that forwards traffic on its way through intermediate networks between a source host's network and a destination's network. So if an attacker can corrupt the routing, traffic can disappear. Routers use complex algorithms to decide how to route traffic. No matter the algorithm, they essentially seek the best path (where "best" is measured in some combination of distance, time, cost, quality, and the like). Routers are aware only of the routers with which they share a direct network connection, and they use gateway protocols to share information about their capabilities. Each router advises its neighbors about how well it can reach other network addresses. This characteristic allows an attacker to disrupt the network. To see how, keep in mind that, in spite of its sophistication, a router is simply a computer with two or more network interfaces. Suppose a router advertises to its neighbors that it has the best path to every other address in the whole network. Soon all routers will direct all traffic to that one router. The one router may become flooded, or it may simply drop much of its traffic. In either case, a lot of traffic never makes it to the intended destination.

7) DNS attacks / DNS poisoning
This is an attack in which DNS information is falsified. The attack can succeed under the right conditions but may not be very practical as an attack form. The attacker sends incorrect DNS information, which can cause traffic to be diverted.

8) Distributed Denial of Service (DDoS)
A distributed denial of service attack is one in which an attacker attacks from multiple source systems. A DDoS attack is generally more effective at bringing down large corporate sites than a DoS attack. The attacker can arrange for a large number of computers to connect to a website at the same time. The web server has a maximum allowed number of client connections; if this number is attained, the server will deny further connections, so there will be a denial of service.
Usually the attacker does not own all these computers, so he uses Trojan horses with back doors as malicious code to infect computers, which become zombies (also called secondary victims). The users of the infected computers are not aware that their computers are being used in a DDoS attack. DoS bots ("bot" being short for robot: a flooding program present on the secondary victim's computer) usually support standard flooding methods such as ICMP, UDP, TCP, and SYN flooding. The Internet services and resources under attack are the primary victims. A typical DDoS attack consists of a master, slaves, and a victim: the master being the attacker, the slaves being the compromised systems, and the victim being the attacker's target.

9) DRDoS
DRDoS (distributed reflection denial of service) is when an attacker sets his bots to flood different intermediate hosts with spoofed packets. For example, the attacker sets half his bots to flood yahoo.com with spoofed ICMP packets and half to flood ebay.com with spoofed ICMP packets. The spoofed packets appear to have microsoft.com as their source, so yahoo.com and ebay.com reply to the spoofed source and thereby flood microsoft.com. For each packet the attacker sends, yahoo.com or ebay.com may have thousands of machines behind the same address, each of which will reply to the spoofed ICMP packet, therefore amplifying the power of the attack greatly.


Fig: DRDoS. Red lines: connections from the attacker's computer to the zombie computers. Blue lines: zombies sending spoofed ICMP packets; the ICMP packets look like they come from the Internet core router the attacker wants to attack. Green lines: each of the computers connected to ebay.com, yahoo.com, cnn.com, and amazon.com replies to the spoofed ICMP packets, thereby flooding the Internet core router.

Attack Detection

Symptoms of attacks: There are several ways to detect a DoS attack while it is happening, generally by monitoring the router, whether by examining CPU utilization or by using access lists to detect attacks. Since all DoS traffic must come through the router on its way to the network, the router is the first place to detect an incoming or ongoing DoS attack. Symptoms of an attack at the router include:

1. The router is seeing an unusually high number of ARP requests.
2. The NAT/PAT address-translation tables have a large number of entries.
3. The router's IP Input, ARP Input, IP Cache Ager, and CEF processes are using abnormally high amounts of memory.
4. The router's ARP, IP Input, CEF, and IPC processes are running at a much higher CPU utilization rate.

1. Examining CPU utilization
One of the first and easiest ways of detecting a DoS attack is to monitor CPU usage. This can be done with some variation of the show processes cpu history command (depending on the manufacturer of the router). This command displays the total CPU usage of the router in one-minute, one-hour, and three-day formats, along with the maximum CPU usage (marked by *) and the average CPU usage measured in one-second periods (marked by #).

Example:

    Router# show processes cpu history
    <-- One minute output omitted -->
    6665776865756676676666667667677676766666766767767666566667
    6378016198993513709771991443732358689932740858269643922613
    100
     90
     80      **  *           **     ****
    CPU% per minute (last 60 minutes)
    * = maximum CPU%    # = average CPU%

2. Using ACLs to detect DoS attacks
One of the most useful tools for detecting DoS attacks on a router is access lists (ACLs). Access lists are a set of rules that govern what traffic is allowed

on the network and how the traffic moves once it is there. Thus ACLs can be used to filter out traffic that is commonly associated with DoS attacks. Although it is not strictly necessary for a network administrator to add the deny ip any any command, it helps the administrator keep track of all the traffic that is blocked on the router.

Example:

    Router(config)# remark Insert other ACL statements here
    Router(config)# access-list 100 deny ip any 192.1.1.0 0.0.0.0
    Router(config)# access-list 100 deny ip any 192.1.1.255 0.0.0.0
    Router(config)# access-list 100 deny ip any 192.1.2.0 0.0.0.0
    Router(config)# access-list 100 deny ip any 192.1.2.255 0.0.0.0
    Router(config)# access-list 100 permit icmp any host 192.1.2.9 echo-reply
    Router(config)# access-list 100 deny icmp any any echo
    Router(config)# access-list 100 deny icmp any any echo-reply

Prevention and Response

Any organization is susceptible to DoS attacks, and an attack will cost it money and time, so every organization is encouraged to use preventative measures. Possible preventative practices include the following:

1. Use access lists to block known methods of attack. This will also stop users on the network from inadvertently starting them by mistake.
2. Install available patches to protect workstations against attacks.
3. Disable network services that are not needed or used.
4. Use quota systems to limit the amount of resources a system can use, to prevent massive CPU usage by a DoS attack.
5. Establish normal levels of network usage and monitor the network for spikes.
6. Continually reconsider security with consideration to physical equipment.
7. Constantly monitor changes to system configuration files.
8. Regularly maintain backups, both of high-priority data and of machines that can be placed in service to replace an attacked machine.
9. Maintain rigorous password policies on the length, characters, and life span of all passwords on the network.

Countermeasures against Smurf attacks: It is difficult to prevent Smurf attacks entirely because they are made possible by incorrectly configured networks belonging to third parties. The Smurf Amplifier Registry (SAR), http://www.powertech.no/smurf/, and netscan.org are among several publicly available databases that can be used to configure routers and firewalls to block ICMP traffic from these networks. The SAR can be downloaded in Cisco ACL format. If you use Cisco routers, make sure all interfaces are configured with the no ip directed-broadcast command.


Countermeasure against ping of death: Modern operating systems and network devices safely disregard these oversized packets. Older systems can usually be updated with a patch.

Countermeasure against UDP flood attacks: To minimize the risk of a UDP flood attack, disable all unused UDP services on hosts and block the unused UDP ports if you use a firewall to protect your network.

Countermeasure against TCP SYN floods: Many routers and other network nodes today are able to detect SYN floods by monitoring the number of unacknowledged TCP sessions and killing them before the session queue is full. They can often be configured to set the maximum allowed number of half-open connections and to limit the amount of time the host waits for the final acknowledgement. Without these preventive measures, the server could eventually run out of memory, causing it to crash entirely.

Connection flooding (e.g., echo-chargen, ping of death, smurf, SYN flood, DNS attack, distributed denial of service): firewall, intrusion detection system, ACL on border router, honeypot.
Traffic redirection: encryption, audit.


Lecture note # 32
Content: Automatic Identification of Configuration Loopholes

Manually monitoring your system is time consuming and prone to errors and omissions. Fortunately, several automated monitoring tools are available. At this writing, the web site http://www.insecure.com lists the monitoring tools that are currently most popular.

Tripwire
Tripwire is a tool that detects when files have been altered by regularly recalculating hashes of them and storing the hashes in a secure location. The product triggers when changes to the files are detected. By using cryptographic hashes, Tripwire is often able to detect subtle changes.

Contrast: The simplistic alternative is to check file size and last modification time. However, programs that change files (like viruses) will often keep these the same. On the other hand, keeping complete backups would require too much space. Therefore, cryptographic hashes are used.

Contrast: The cryptographic hash calculated from the file is often known as a "fingerprint" or "signature". However, these terms have completely different meanings in other areas of security, so some people just say "hash" or "checksum".

History: The original tool was published in 1992 for Unix. The company Tripwire Inc. was formed in 1998.

Point: Reasons why files change:
o Common system programs are replaced with duplicates containing backdoors.
o Configuration files are changed to allow the intruder back into the system.
o System log files are altered in order to cover tracks.
o Data files (such as financial records or school grades) are altered.

Tripwire is a file integrity checker for UNIX/Linux based operating systems and works as an excellent intrusion detection system. Tripwire comes with four binary files:
o tripwire - The main program; used for initialising the database, checking the integrity of the file system, updating the database, and updating the policy.
o twadmin - Tripwire's administrative and utility tool; used for creating and printing configuration files, replacing and printing a policy file, generating site and local keys, and other encryption-related functions.
o twprint - Used to print the reports and database in human-readable format.
o siggen - Generates the various hashes that Tripwire supports for checking the integrity of files.

Integrity checking
Now that Tripwire is installed and configured and the baseline database has been created, we can get on with the business of checking the integrity of the file system:

    [root@home /etc/tripwire]# tripwire --check
    Parsing policy file: /etc/tripwire/tw.pol


    *** Processing Unix File System ***
    Performing integrity check...
    Wrote report file: /var/lib/tripwire/report/$HOSTNAME-20040823-210750.twr
    ...
    Total objects scanned: 52387
    Total violations found: 0

Each violation (an addition, removal, or change) is reported to stdout and written to the report file as indicated. On this occasion I have assumed the default locations of the configuration and policy files; I could have specified these explicitly on the command line with switches such as --cfgfile.

Your goal should be to set this up to run on a daily basis. This can be done as a cron or an Anacron job; Anacron is the better choice when the computer is not on 24/7. Using either cron or Anacron, the output should be e-mailed to the root user on each run of Tripwire. In the case of Anacron, create a file in /etc/cron.daily/ called (for example) tripwire-check containing:

    #!/bin/bash
    /usr/sbin/tripwire --check

and ensure that it is executable (chmod u+x /etc/cron.daily/tripwire-check). If you want to use a cron job, then add the following line to root's crontab to perform the check every day at 3am (crontab -e):

    00 03 * * * /usr/sbin/tripwire --check

Nessus
Nessus is a network-based security scanner that uses a client/server architecture. Nessus scans target systems for a wide range of known security problems.

SATAN
Security Auditing Tool for Analyzing Networks is the first network-based security scanner to become widely distributed. Somewhat outdated, it is still popular and can detect a wide range of known security problems. SATAN has spawned some children, SAINT and SARA, that are also popular.

SAINT
System Administrator's Integrated Network Tool scans systems for a wide range of known security problems. SAINT is based on SATAN.

SARA
Security Auditor's Research Assistant is a third-generation security scanner based on SATAN and SAINT. SARA detects a wide range of known security problems.

Whisker


Whisker is a security scanner that is particularly effective at detecting certain CGI script problems that threaten web site security.

ISS
Internet Security Scanner is a commercial security scanner for those who prefer a commercial product.

Cybercop
Cybercop is another commercial security scanner for those who prefer commercial products.

Snort
Snort provides a rule-based system for logging packets. Snort attempts to detect intrusions and report them to the administrator in real time.

Port Sentry
Port Sentry detects port scans and can, in real time, block the system initiating the scan. Port scans often precede a full-blown security attack.
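Tripwire's core idea described earlier (record a cryptographic hash per file, recompute later, and compare) can be sketched in a few lines. This is a conceptual illustration only, not Tripwire's actual database or policy format; the file names and contents are invented for the example:

```python
# Sketch of Tripwire-style integrity checking: record a SHA-256 hash per
# file, then recompute and compare to detect any change in content
# (size and modification time alone are easy for an intruder to preserve).
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# "Initialise the database" over the current file contents
files = {"/bin/login": b"original binary", "/etc/passwd": b"root:x:0:0"}
baseline = {path: fingerprint(data) for path, data in files.items()}

# An intruder replaces a system program with a backdoored duplicate
files["/bin/login"] = b"backdoored binary"

# "Integrity check": recompute every hash and compare against the baseline
violations = [path for path, data in files.items()
              if fingerprint(data) != baseline[path]]
print(violations)   # ['/bin/login']
```

As with Tripwire, the scheme is only as strong as the protection of the baseline database: the hashes must be stored where an intruder cannot rewrite them to match the tampered files.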


Lecture note # 33
Content: Security Information Resources: CERT; Installing and Upgrading System Software

CERT: Securing Indian Cyber Space - the role of the Indian Computer Emergency Response Team (CERT-In)
o Established in 2004.
o Mission: Alert, Advice and Assurance.
o Ensure the security of cyber space in the country by enhancing the security of communications and information infrastructure, through proactive action and effective collaboration aimed at security incident prevention, prediction and protection, and security assurance.

CERT-In - Cyber Security Focus

It has four enabling actions:

o Enabling Govt. as a key stakeholder in creating appropriate environment/conditions by way of policies and a legal/regulatory framework to address important aspects of data security and privacy protection concerns. Specific actions include the National Cyber Security policy, amendments to the Indian IT Act, a security and privacy assurance framework, a crisis management plan (CMP), etc.
o Enabling user agencies in Govt. and critical sectors to improve the security posture of their IT systems and networks and enhance their ability to resist cyber attacks and recover within reasonable time if attacks do occur. Specific actions include security standards/guidelines, empanelment of IT security auditors, creating a network and database of points-of-contact and CISOs of Govt. and critical sector organisations for smooth and efficient communication to deal with security incidents and emergencies, CISO training programmes on security-related topics and CERT-In initiatives, cyber security drills, and a security conformity assessment infrastructure covering products, processes and people.
o Enabling CERT-In to enhance its capacity and outreach and to achieve force-multiplier effects to serve its constituency in an effective manner as a 'trusted referral agency'. Specific actions include the National cyber security strategy (11th Five Year Plan), the National Cyber Alert system, MoUs with vendors, MoUs with CERTs across the world, a network of sectoral CERTs in India, membership of international/regional CERT forums for exchange of information and expertise and rapid response, and targeted projects and training programmes for use of and compliance with international best practices in security and incident response.
o Public communication and contact programmes to increase cyber security awareness and to communicate Govt. policies on cyber security.

Cyber Security Strategic Objectives
o Prevent cyber attacks against the country's critical information infrastructures.
o Reduce national vulnerability to cyber attacks.
o Minimise damage and recovery time from cyber attacks.

Security Assurance Actions at Country Level
o Policy directives on data security and privacy protection - compliance, liabilities and enforcement (e.g. Information Technology Act 2000).
o Standards and guidelines for compliance (e.g. ISO 27001, ISO 20000 and CERT-In guidelines).
o Conformity assessment infrastructure (enabling and endorsement actions concerning security products - ISO 15408, security processes - ISO 27001, and security manpower - CISA, CISSP, ISMS-LA, DISA, etc.).
o Security incident early warning and response (National cyber alert system and crisis management).
o Information sharing and cooperation (MoUs with vendors, overseas CERTs and security forums).
o Pro-active actions to deal with and contain malicious activities on the net by way of net traffic monitoring, routing and gateway controls.
o Lawful interception and law enforcement.
o Nation-wide security awareness campaign.
o Security research and development focusing on tools, technology, products and services.

Security Assurance Actions at Network Level (ISP)
o Compliance with security best practices (e.g. ISO 27001), service quality (ISO 20000) and service level agreements (SLAs), and demonstration thereof.
o Pro-active actions to deal with and contain malicious activities, ensuring quality of service and protecting average end users by way of net traffic monitoring, routing and gateway controls.
o Keeping pace with changes in security technology and processes to remain current (configuration, patch and vulnerability management).
o Conforming to legal obligations and cooperating with law enforcement activities, including prompt action on alerts/advisories issued by CERT-In.
o Use of secure products and services and skilled manpower.
o Crisis management and emergency response.

Security Assurance Actions at Corporate Level
o Compliance with security best practices (e.g. ISO 27001), and demonstration thereof.
o Pro-active actions to deal with and contain malicious activities, and protecting average end users by way of net traffic monitoring, routing and gateway controls.
o Keeping pace with changes in security technology and processes to remain current (configuration, patch and vulnerability management).
- Pro-active actions to deal with and contain malicious activities, ensuring quality of service and protecting average end users by way of network traffic monitoring, routing and gateway controls.
- Keeping pace with changes in security technology and processes to remain current (configuration, patch and vulnerability management).
- Conforming to legal obligations and cooperating with law-enforcement activities, including prompt action on alerts/advisories issued by CERT-In.
- Use of secure products and services and skilled manpower.
- Crisis management and emergency response.

Security Assurance Actions at Corporate Level
- Compliance with security best practices (e.g. ISO 27001), and demonstration of that compliance.
- Pro-active actions to deal with and contain malicious activities, and protecting average end users by way of network traffic monitoring, routing and gateway controls.
- Keeping pace with changes in security technology and processes to remain current (configuration, patch and vulnerability management).


- Conforming to legal obligations and cooperating with law-enforcement activities, including prompt action on alerts/advisories issued by CERT-In.
- Use of secure products and services and skilled manpower.
- Crisis management and emergency response.
- Periodic training and upgrading of skills for personnel engaged in security-related activities.
- Promoting acceptable user behaviour in the interest of safe computing, both within and outside the organisation.

Security Assurance Actions at Small Users/Home Users Level
- Maintain a level of awareness necessary for self-protection.
- Use legal software and update it at regular intervals.
- Beware of security pitfalls while on the net and adhere to security advisories as necessary.
- Maintain reasonable and trustworthy access control to prevent abuse of computer resources.

Security Assurance Ladder
Security control emphasis depends on the kind of environment:
- Low risk: Awareness. Know your security concerns and follow best practices.
- Medium risk: Awareness and Action. Proactive strategies leave you better prepared to handle security threats and incidents.
- High risk: Awareness, Action and Assurance. Since security failures could be disastrous and may lead to unaffordable consequences, assurance (the basis of trust and confidence) that the security controls work when needed most is essential.

Cyber Security - Final Message
Failure is not when we fall down, but when we fail to get up.

Lecture note # 34 Content: Use of scripting tools: Shell Scripting


Shell Scripting: Features of the Shell
- Command interpreter
- Input/output redirection
- Filters and pipes
- Wildcards
- Background processing
- Shell as a programming language

The Shell is the command interpreter of any UNIX system. It interprets the commands that the user gives at the prompt and sends them to the kernel for execution. The Shell is essential for interactive computing, where the user wants the output of his/her commands immediately. The Shell can redirect the standard input, output and error files to devices other than the standard devices. Using the pipe feature, different commands can be combined to solve a problem that is not solvable with a single command; the Shell creates temporary files to hold the intermediate results and erases them once the command execution is over. The Shell supports file name expansion using metacharacters, or wildcards, discussed earlier. More than one command can be given on the same line using the command terminator ";". The multitasking feature of UNIX is supported by the shell through background processing, where more than one process can be started in the background. Besides the features already discussed, the shell has many more facilities, such as defining and manipulating variables, command substitution and error handling. All of these together can be used as a programming language. The different types of shells available on a UNIX system are the Bourne shell, the C shell and the Korn shell.

Shell as a Programming Language
- Manipulation of variables
- Decision making
- Looping
- Parameter handling
- Handling interrupts

The main features of the shell programming language are:
- Structured language constructs, which implement programming-language features such as looping and decision making.


- I/O interaction, in the form of accepting values from the user and displaying the results.
- Subroutine constructs, to facilitate a modular approach to programming.
- Variables.
- Arguments, to control execution on different values or files that are passed as arguments.
- Interrupt handling, to receive signals and carry out alternate courses of action.

Creating and executing a shell script
1. Open a file in the vi editor.
2. Write any UNIX commands in it.
3. Save the file under a given name.
4. At the shell prompt, give the command sh followed by the file name. The commands written in that file will be executed.
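The redirection, pipe and multiple-command features described earlier can be tried in a short, self-contained session (the file name /tmp/fruits.txt is made up for the example):

```shell
# Output redirection (>): create a small sample file
printf "apple\nbanana\ncherry\n" > /tmp/fruits.txt

# Input redirection (<): wc reads the file from standard input
wc -l < /tmp/fruits.txt           # prints the line count: 3

# A pipe (|) combines two commands: cat feeds the file to a reverse sort
cat /tmp/fruits.txt | sort -r

# Two commands on one line, separated by the ";" terminator
echo first ; echo second
```

The pipe avoids any temporary file of our own: the shell connects the output of cat directly to the input of sort.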

Now you are ready to write your first shell script, which will print "Knowledge is Power" on the screen. (See the common vi command list if you are new to vi.)

$ vi first
# My first shell script
clear
echo "Knowledge is Power"

Output: first the screen is cleared, then "Knowledge is Power" is printed on the screen.

Shell Variables
A variable is a name associated with a data value; it offers a symbolic way to represent and manipulate data. The most important function of shell variables is to customize the operation of the shell. For example, using variables the user can establish a different shell prompt, specify a new home directory, assign different search paths for commands, or create shorthand notations for long command lines. The variables in the Bourne Shell are classified as:
1. User-defined variables: defined by the user for his/her own use.
2. Environmental variables: defined by the shell for its own operation.
3. Pre-defined variables: reserved variables used by the shell and UNIX commands for specifying the exit status of commands, arguments to shell scripts, the formal parameters, etc.

User-defined variables are created by specifying the name of the variable, followed by the assignment operator and the value of the variable, at the prompt. No white space is allowed before or after the assignment operator:
$ variable=value
Example: creating user-defined variables.
$ Name=mano
creates a variable Name containing the value 'mano'.
$ Age=56
creates a variable Age and stores the characters "5" and "6". The shell, by default, treats all values as strings of characters only; computations on numeric variables are done in a different manner.

Environmental variables
PATH: contains the search path string
HOME: specifies the full path name of the user's login directory
TERM: holds the terminal specification information
LOGNAME: holds the user's login name
PS1: stores the primary prompt string
PS2: specifies the secondary prompt string
SHELL: stores the name of the shell (Bourne, Korn or C)

Using variables
Displaying the contents of variables:
$ echo mesg
$ echo $variable
Escape sequences with the echo command: \b, \f, \n, \r, \c
Reading values into variables:
$ read var

The echo command simply echoes its arguments back to the terminal screen. For example:
$ echo welcome to RU
will produce the output
welcome to RU
When the echo command writes its arguments, all metacharacters are expanded by the shell.
$ echo *
will produce all the file names present in the current directory as output; this is a crude version of the ls command. The echo command can also be used to display the contents of shell variables.
$ echo HOME
will display the string 'HOME'. If an argument to the echo command is prefixed by a dollar sign '$', echo treats the argument as a variable and displays the contents of that variable.


If a variable by that name is not found, a blank line is echoed.
Example: the echo command.
$ echo $HOME
will display the value of the variable HOME, say, /usr/mano.
$ Name=David
assigns the value David to the variable Name.
$ echo $Name
David
is the output produced by echo.
$ echo Hello $Name!, welcome to $HOME
The output may be:
Hello David!, welcome to /usr/mano

The escape sequences used with the echo command to control the output are:
\b  Backspace
\f  Form feed
\n  New line
\r  Carriage return
\c  Suppress the trailing new line
The echo command will exit immediately, without printing a new line, if it encounters the '\c' escape sequence. The echo command is used as the 'print' statement in a shell script.

Reading values into variables
We can execute the echo command given in the previous example through a shell script, but doing so will display the same output each time we execute the script. If we want to write an interactive script which takes input from the user and displays output, that version of the script will not be helpful. The command read is used to read a value into a variable at run time:
$ read <var> <Enter>
This reads a value into the variable from the standard input device (keyboard).
Example: write a shell script which accepts the name and age from the user and displays them on the terminal screen.
$ vi script_2 <Enter>
opens a file script_2 in vi. Enter the following lines and save it:
echo "Enter your name: \c"
read name
echo "Enter your age: \c"
read age
echo "hello $name, nice to meet you. You are $age years old"
$ sh script_2 <Enter>
executes the script.

The '\c' escape sequence places the cursor at the end of the echo output, so that the read command waits for its input on the same line.
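As a side note (not from the original lecture), printf is a more portable way to get the same same-line effect, since not every echo implementation honours \c; printf prints no trailing newline unless \n is given:

```shell
# printf as a portable alternative to echo "...\c": the prompt and
# the (simulated) user input end up on the same line.
printf "Enter your name: " > /tmp/prompt_demo
printf "David\n" >> /tmp/prompt_demo
cat /tmp/prompt_demo    # prints: Enter your name: David
```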


Example: accept two file names from the user and copy the first file onto the second. Write a shell script kopy in vi and execute it with the sh kopy command.

# This script takes two file names and copies the first-named file into the second one
echo "Enter source file name: \c"
read source
echo "Enter the target file name: \c"
read target
cp $source $target
echo file $source is copied into file $target

The hash symbol (#) marks a comment line in a shell script; the statements following the '#' sign are ignored.
NOTE: A shell script can also be executed by assigning it execute permission with the chmod command. Once execute permission is assigned, the shell script can be executed like any other UNIX command by giving its name at the prompt. To convert the scripts described above, we can use:
$ chmod +x kopy <Enter>
(and similarly for the other scripts)
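The "arguments to shell scripts" feature mentioned earlier can be sketched with a non-interactive variant of kopy that takes the file names as the positional parameters $1 and $2 instead of reading them. (This variant is not from the lecture; it is wrapped in a function here, with made-up file names, so the sketch is self-contained.)

```shell
# kopy2: the kopy logic, but driven by positional parameters
# ($1 = source file, $2 = target file) rather than read.
kopy2() {
    cp "$1" "$2"
    echo "file $1 is copied into file $2"
}

echo "hello" > /tmp/kopy_demo_src        # create a sample source file
kopy2 /tmp/kopy_demo_src /tmp/kopy_demo_dst
```

Saved as a standalone script and made executable with chmod +x, the same body would run as: kopy2 sourcefile targetfile.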

Lecture note # 35 Content: Perl/Python Scripting, Use of Make Option


2. Perl/Python Scripting

What is Perl?
Depending on whom you ask, Perl stands for Practical Extraction and Report Language or Pathologically Eclectic Rubbish Lister. It is a powerful "glue" language, useful for tying together the loose ends of computing life.

Uses of Perl
1. Tool for general system administration
2. Processing textual or numerical data
3. Database interconnectivity
4. Common Gateway Interface (CGI/Web) programming
5. Driving other programs (FTP, Mail, WWW, OLE)

Philosophy and Idioms
Perl is a language designed to cater to the three chief virtues of a programmer:
- Laziness: develop reusable and general solutions to problems.
- Impatience: develop programs that anticipate your needs and solve problems for you.
- Hubris: write programs that you want other people to see (and be able to maintain).
There are many means to the same end; Perl provides you with more than enough rope to hang yourself. Depending on the problem, there may be several "official" solutions. Generally, those approached using Perl idioms will be more efficient.

Perl Basics
Script names: while, generally speaking, you can name your script/program anything you want, there are a number of conventional extensions:
.pm - Perl modules
.pl - Perl libraries (and scripts on UNIX)
.plx - Perl scripts

Language properties:
- Perl is an interpreted language: program code is interpreted at run time. Perl is unique among interpreted languages, though, in that code is compiled by the interpreter before it is actually executed.
- Many Perl idioms read like English.
- Free-format language: whitespace between tokens is optional.
- Comments are single-line, beginning with #.
- Statements end with a semicolon (;).


- Only subroutines and functions need to be explicitly declared.
- Blocks of statements are enclosed in curly braces {}.
- A script has no main().

Invocation
On platforms such as UNIX, the first line of a Perl program should begin with
#!/usr/bin/perl
and the file should have executable permissions. Then typing the name of the script will cause it to be executed. Unfortunately, Windows does not have a real equivalent of the UNIX "shebang" line. On Windows 95/98, you have to call the Perl interpreter with the script as an argument:
perl myscript.plx
On Windows NT, you can associate the .plx extension with the Perl interpreter:
> assoc .plx=Perl
> ftype Perl=c:\myperl\bin\perl.exe %1 %*
> set PATHEXT=%PATHEXT%;.plx
After taking these steps, you can execute your script from the command line as follows:
> myscript

Data Types and Variables
Basic types: the basic data types known to Perl are scalars, lists, and hashes.
- Scalar $foo: a simple variable that can be a number, a string, or a reference (a scalar is a "thingy").
- List @foo: an ordered array of scalars accessed using a numeric subscript, e.g. $foo[0].
- Hash %foo: an unordered set of key/value pairs accessed using the keys as subscripts, e.g. $foo{key}.

Variable contexts: Perl data types can be treated in different ways depending on the context in which they are accessed.

Special Variables (defaults)


Some variables have a predefined and special meaning to Perl. A few of the most commonly used ones are:
$_  the default input and pattern-matching variable
@ARGV  the command-line arguments passed to the script
%ENV  the environment variables
$0  the name of the program being executed
$/  the input record separator (a newline by default)

Scalars
Scalars are simple variables that hold either numbers or strings of characters. Scalar variable names begin with a dollar sign followed by a letter, then possibly more letters, digits, or underscores. Variable names are case-sensitive.

Numbers: numbers are represented internally as either signed integers or double-precision floating-point numbers. Floating-point literals are the same as those used in C. Integer literals include decimal (255), octal (0377), and hexadecimal (0xff) values.

Strings: strings are simply sequences of characters. String literals are delimited by quotes:
- Single quotes 'string': enclose a literal sequence of characters.
- Double quotes "string": subject to backslash and variable interpolation.
- Back quotes `command`: evaluate to the output of the enclosed command.

Basic I/O
The easiest way to get operator input into your program is the diamond operator:
$input = <>;
The input from the diamond operator includes a newline (\n). To get rid of this pesky character, use either chop() or chomp(). chop() removes the last character of the string, while chomp() removes any line-ending characters (defined in the special variable $/). If no argument is given, these functions operate on the $_ variable. To write output, simply use Perl's print function:
print $output . "\n";

Basic Operators



Control Structures

Statement blocks
A statement block is simply a sequence of statements enclosed in curly braces:
{
  first_statement;
  second_statement;
  last_statement;
}

Conditional structures (if/elsif/else)
The basic construction for conditionally executing blocks of statements is the if statement, which permits execution of the associated statement block if the test expression evaluates as true. It is important to note that, unlike in many compiled languages, it is necessary to enclose the statement block in curly braces, even if only one statement is to be executed. The general form of an if/elsif/else control statement is as follows:
if (expression_one) {
  true_one_statement;
} elsif (expression_two) {
  true_two_statement;


} else {
  all_false_statement;
}

Loops
Perl provides several different means of repetitively executing blocks of statements.

While: the basic while loop tests an expression before executing a statement block.
while (expression) {
  statements;
}

Until: the until loop is the logical opposite of while; the test is made before each iteration, and the statements are executed until the expression evaluates as true.
until (expression) {
  statements;
}

Do while: the statement block is executed at least once, and then repeatedly for as long as the test expression remains true.
do {
  statements;
} while (expression);

Do until: the statement block is executed at least once, and then repeatedly until the test expression becomes true.
do {
  statements;
} until (expression);

For: the for loop has three semicolon-separated expressions within its parentheses, functioning respectively as the initialization, the condition, and the re-initialization expressions of the loop.
for (initial_exp; test_exp; reinit_exp) {
  statements;
}
This structure is typically used to iterate over a range of values. The loop runs until test_exp is false:
for ($i = 0; $i < 10; $i++) { print $i; }


Foreach: the foreach statement is much like the for statement, except that it loops over the elements of a list:
foreach $i (@some_list) {
  statements;
}
If the scalar loop variable is omitted, $_ is used.

Labels: any statement block can be given a label. Labels are identifiers that follow variable-naming rules; they are placed immediately before a statement block and end with a colon:
SOMELABEL: {
  statements;
}

Indexed Arrays (Lists)
A list is an ordered set of scalar data. List names follow the same basic rules as for scalars. A reference to a list has the form @foo.

List literals: list literals consist of comma-separated values enclosed in parentheses:
(1, 2, 3)
("foo", 4.5)
A range can be represented using the list constructor (..):
(1..9) is (1,2,3,4,5,6,7,8,9)
($a..$b) is ($a, $a+1, ..., $b-1, $b)
In the case of string values, it can be convenient to use the quote-word syntax:
@a = ("fred", "barney", "betty", "wilma");
@a = qw( fred barney betty wilma );

Accessing list elements: list elements are subscripted by sequential integers, beginning with 0; $foo[5] is the sixth element of @foo. The special variable $#foo holds the index value of the last element of @foo. A subset of elements from a list is called a slice:
@foo[0,1] is the same as ($foo[0], $foo[1])
You can also take slices of list literals:
@foo = (qw( fred barney betty wilma ))[2,3];

List operators and functions


Many list-processing functions operate on the paradigm in which the list is a stack: the highest-subscript end of the list is the top, and the lowest is the bottom.

3. Use of Make Option
Make is a declarative language designed for building software. Make is, in reality, a generalized hierarchical organizer for instructions which generate file objects. Make was originally written so that Unix programmers could manage huge source trees of code, occupying many directories and subdirectories, and assemble them efficiently and effortlessly.

Building programs. Typing lines like
cc -c file1.c file2.c ...
cc -o target file1.o ...
repeatedly to compile a complicated program can be a real nuisance. One possibility would therefore be to keep all the commands in a script. This could waste a lot of time, though. Suppose you are working on a big project which consists of many lines of source code, but are editing only one file. You really only want to recompile the file you are working on and then relink the resulting object file with all of the other object files; recompiling the files which hadn't changed would be a waste of time. But that would mean that you would have to change the script each time you change what you need to compile. A better solution is to use the make command, which was designed for precisely this purpose. To use make, we create a file called Makefile in the same directory as our program. Make is a quite general program for building software. It is not specifically tied to the C programming language; it can be used with any programming language. A make configuration file, called a Makefile, contains rules which describe how to compile or build all of the pieces of a program. For example, even without being told specifically, make knows that in order to go from prog.c to prog.o the command cc -c prog.c must be executed. A Makefile works by making such associations. The Makefile contains a list of all of the files which compose the program, and rules for how to get to the finished product from the source. The idea is that, to compile a program, we just have to type make.
The program make then reads the Makefile and compiles only the parts which need compiling; it does not recompile files which have not changed since the last compilation! How does it do this? Make works by comparing the timestamp on the file it needs to create with the timestamp on the file which is to be compiled. If the compiled version exists and is newer than its source, then the source does not need to be recompiled. To make this idea work in practice, make has to know how to go through the steps of compiling a program. Some default rules are defined in a global configuration file, e.g.

/usr/include/make/default.mk
Let's consider an example of what happens for three files a.c, b.c and c.c, without worrying yet about what the Makefile looks like. The first time we compile, only the .c files exist. When we type make, the program looks at its rules and finds that it has to make a file called myprog. To make this, it needs to execute the command
gcc -o myprog a.o b.o c.o
So it looks for a.o etc. and doesn't find them. It now goes to a kind of subroutine and looks to see if it has any rules for making files called .o, and it discovers that these are made by compiling with the gcc -c option. Since the files do not exist, it does this. Now the files a.o, b.o and c.o exist, and make jumps back to the original problem of trying to make myprog. All the files it needs now exist, so it executes the command and builds myprog. If we now edit a.c and type make once again, it goes through the same procedure as before, but now it finds all of the files. So it compares the dates on the files: if a source file is newer than its result, it recompiles. By using this recursive method, make compiles only those parts of a program which need compiling.

Makefiles. To write a Makefile, we have to tell make about dependencies. The dependencies of a file are all of those files which are required to build it. In a strong sense, dependencies are like subroutines which are carried out by make in the course of building the final target. The dependencies of myprog are a.o, b.o and c.o. The dependencies of a.o are simply a.c, the dependencies of b.o are b.c, and so on. A Makefile consists of rules of the form:
target : dependencies
[TAB] rule
The target is the thing we eventually want to build; the dependencies are like subroutines to be executed first if they do not exist. Finally, the rule is some code which is to be executed if all of the dependencies exist; it takes the dependencies and turns them into the current target. Notice how dependencies are like subroutines, so each sub-rule makes a sub-target.
In the end, the aim is to combine all of the sub-targets into one final target. There are two important things to remember. The target names must start in the first column of a line, and there must be a TAB character at the beginning of every rule or action. If there are spaces instead of tabs, or no tab at all, make will signal an error. This bizarre feature can cause a lot of confusion.
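A Makefile for the myprog example described above might look like the following sketch (the gcc compiler and the file names follow the text; remember that every command line must begin with a TAB, not spaces):

```make
# Sketch of a Makefile for myprog: the final target depends on
# three object files, and each object file depends on its source.
myprog: a.o b.o c.o
	gcc -o myprog a.o b.o c.o

a.o: a.c
	gcc -c a.c

b.o: b.c
	gcc -c b.c

c.o: c.c
	gcc -c c.c
```

Typing make then builds myprog; after editing only a.c, make re-runs just gcc -c a.c and the final link, leaving b.o and c.o untouched.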


